[Automated-testing] RFC Linter in meta-openembedded


Richard Purdie
 

Sharing a copy of this to openembedded-architecture.

Cheers,

Richard


Alexander Kanavin
 

Marius, I know you want everyone to move to github and abandon email (seeing the previous thread), but asking everyone to go there for the 'discussion' won't get you far.

Please write a proposal, send it here to oe-architecture list, and do include a plan for transparently adding the linter to the existing patch by email workflows that don't require anyone to open github links, ever.

Thanks,
Alex


On Sat, 29 Jan 2022 at 09:33, Richard Purdie <richard.purdie@...> wrote:
Sharing a copy of this to openembedded-architecture.

Cheers,

Richard



---------- Forwarded message ----------
From: Marius Kriegerowski <marius.kriegerowski@...>
To: automated-testing@...
Cc: 
Bcc: 
Date: Sat, 29 Jan 2022 02:39:27 +0100
Subject: [Automated-testing] RFC Linter in meta-openembedded
Good morning everybody,

We started a discussion about integrating a linter into meta-openembedded to ease the review process, provide direct feedback to contributors and increase consistency across recipes. I invite everybody to join the discussion: https://github.com/openembedded/meta-openembedded/pull/465

Cheers
Marius


PS I’m not sure this is the right email list to address this, but I felt linters and CI-related matters are part of automated testing. Thus, if you know a more suitable list, let me know or feel free to forward.




Alexander Kanavin
 

Yes, except you aren't subscribed to the list, so your replies won't be seen by everyone, and you won't get replies that aren't CCd directly to you. Please take the trouble, and do embrace that awful outdated email concept :)

Alex


On Sat, 29 Jan 2022 at 13:42, Marius Kriegerowski <marius.kriegerowski@...> wrote:
Dear Alex,

Richard was kind enough to forward my email. Thanks @Richard. I’m happy to continue the discussion here.

Best
Marius







Marius Kriegerowski
 

Done :)

A very short wrap-up for those who don’t have a GitHub account: I proposed adding oelint-adv to meta-openembedded as part of the GitHub CI pipeline. I know that many find oelint-adv too strict; I had the same experience when I first used it. But after diving deeper into it, I found that it’s highly customisable.

We found a configuration set which I believe is a good starting point. It’s much more permissive than the basic configuration and essentially only raises errors when important parts are missing, for example.
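
To make the idea concrete, a CI job could be little more than a script that runs oelint-adv over the changed recipes and fails the build on findings. This is only a rough sketch under the assumption that oelint-adv is installed, that individual rule ids are silenced via its --suppress option, and that it exits non-zero when errors remain; the script name and the way recipes are passed in are made up for illustration:

# check_recipes.py - hypothetical CI helper, not part of the actual proposal
import subprocess
import sys

def lint(recipes, suppressions=()):
    # Build the oelint-adv command line; each --suppress silences one rule id.
    cmd = ["oelint-adv"]
    for rule in suppressions:
        cmd += ["--suppress", rule]
    cmd += list(recipes)
    # Assumption: a non-zero exit code means error-level findings were left.
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # In CI the recipe paths would come from the files changed by the pull request.
    sys.exit(lint(sys.argv[1:]))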

I would be happy if more people gave it a try and shared their opinion on what should be an error and what should be a warning. Having a solid common ground on what a decent recipe should look like would ease the review process: it would require less manual labor, make development faster, and give contributors a clearer picture of what recipes should look like.

My personal take on this: I wish there had been a proper set of clear linting rules when I started writing my first recipes not too long ago. And I’m sure I’m not the only one.

Another more technical question to discuss once a set of rules is found: how could this be integrated into the established workflow?

Looking forward to hearing from the community.

Marius



On 29. Jan 2022, at 13:49, Alexander Kanavin <alex.kanavin@...> wrote:

Yes, except you aren't subscribed to the list, so your replies won't be seen by everyone, and you won't get replies that aren't CCd directly to you. Please take the trouble, and do embrace that awful outdated email concept :)

Alex



Alexander Kanavin
 

On Sat, 29 Jan 2022 at 15:31, Marius Kriegerowski <marius.kriegerowski@...> wrote:
Another more technical question to discuss once a set of rules is found: how could this be integrated into the established workflow?

This is the harder issue that needs to be resolved and implemented first, before we start arguing about what is a good set of rules to apply.

There is already a linter that is integrated into the established email workflow and has a useful set of rules specifically for the core layers,
but sadly it fell into disrepair due to the lack of a maintainer:


How about you step up as a maintainer and/or collaborate with Paul Barker, and get it fixed?

Alex


Marius Kriegerowski
 

Sounds fair. I’ll have a look. Thanks for pointing that out.

On 29. Jan 2022, at 17:39, Alexander Kanavin <alex.kanavin@...> wrote:

On Sat, 29 Jan 2022 at 15:31, Marius Kriegerowski <marius.kriegerowski@...> wrote:
Another more technical question to discuss once a set of rules is found: how could this be integrated into the established workflow?

This is the harder issue that needs to be resolved and implemented first, before we start arguing about what is a good set of rules to apply.

There is already a linter that is integrated into the established email workflow and has a useful set of rules specifically for the core layers,
but sadly it fell into disrepair due to the lack of a maintainer:


How about you step up as a maintainer and/or collaborate with Paul Barker, and get it fixed?

Alex


Marius Kriegerowski
 

Dear Alex,

Richard was kind enough to forward my email. Thanks @Richard. I’m happy to continue the discussion here.

Best
Marius


On 29. Jan 2022, at 13:40, Alexander Kanavin <alex.kanavin@...> wrote:

Marius, I know you want everyone to move to github and abandon email (seeing the previous thread), but asking everyone to go there for the 'discussion' won't get you far.

Please write a proposal, send it here to oe-architecture list, and do include a plan for transparently adding the linter to the existing patch by email workflows that don't require anyone to open github links, ever.

Thanks,
Alex






Trevor Woerner
 

I think Richard tried to allude to this in the github conversation, but I
think it might have got lost.

Currently, there is only one path for submitting a patch to oe-core: the
mailing list. But for submitting patches to meta-openembedded there are now
two acceptable paths: mailing list and github pull request.

We have to make sure all paths are treated equally.

If the linting is only applied to the github pull request path and not the
mailing list path, then people submitting patches via github will be subject
to more stringent rules than those who use the mailing list.

Inevitably patches will get through via the mailing list that would have
failed the linting process were they sent as github pull requests. The next
person to submit a patch via github will be forced to fix up the linting of not
only their work, but of the recent patches that came in through the mailing
list and were not linted (and had linting issues).

We should always try to make sure all proposed patches are treated the same.


Konrad Weihmann <kweihmann@...>
 

Yep, that is what the discussion was about - in my opinion every path should be treated equally, which is somehow blocked by the very unique way this project tends to validate patches.
So either the project opens up to contributions from different workflows, or we all have to fold our hands and pray that someone is eager to reactivate patchtest :-)

BTW I think patchtest might be the compromise we are all looking for, but as of now I see no incentive for anyone outside the OE scope to pick up the pieces that are there.
Concluding from that, the options are
- have patchtest picked up by someone willing to maintain it in the long run
- completely drop the idea of integrating any kind of linter for now

I don't want to offend anyone - but a clear yes/no decision would be helpful. Just begging for someone to pick up the pieces of patchtest isn't really going to help.

On 18.02.22 22:18, Trevor Woerner wrote:
I think Richard tried to allude to this in the github conversation, but I
think it might have got lost.
Currently, there is only one path for submitting a patch to oe-core: the
mailing list. But for submitting patches to meta-openembedded there are now
two acceptable paths: mailing list and github pull request.
We have to make sure all paths are treated equally.
If the linting is only applied to the github pull request path and not the
mailing list path, then people submitting patches via github will be subject
to more stringent rules than those who use the mailing list.
Inevitably patches will get through via the mailing list that would have
failed the linting process were they sent as github pull requests. The next
person to submit a patch via github will be forced to fix up the linting of not
only their work, but of the recent patches that came in through the mailing
list and were not linted (and had linting issues).
We should always try to make sure all proposed patches are treated the same.


Marius Kriegerowski
 

Hello everyone,

I finally found the time to take a deeper dive into patchtest. It’s not in the prettiest state and has a couple of design choices that make me wonder whether it wouldn’t be better to go for a complete rewrite. For example, I would rather opt for pytest, as it’s much more lightweight, easier to work with and more modern than unittest. But let me dive deeper first.

Patchtest is hardly documented, which makes it hard to follow. Or is there any more documentation out there outside of the repo? I would start writing documentation but want to make sure that I don’t repeat what others have done before.

There are selftests but they don’t work:
https://github.com/HerrMuellerluedenscheid/patchtest/tree/overhaul/selftest
Can these be removed?

I’m assuming that in the CI context the `--json` flag is used to produce json output which is then somehow parsed and returned to the user, right? I would like to work with the DevOps integration team to make sure I’m not deviating too far from what the integration team expects.
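
Just to illustrate what I have in mind for consuming that output, a tiny sketch; the field names used below ("results", "status", "test") are pure guesses on my part, not the actual schema:

# summarize_report.py - hypothetical consumer of a patchtest --json report
import json
import sys

def summarize(path):
    with open(path) as f:
        report = json.load(f)
    # Guessed schema: a list of result dicts with "status" and "test" keys.
    failures = [r for r in report.get("results", []) if r.get("status") == "FAIL"]
    for failure in failures:
        print("FAIL:", failure.get("test"))
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(summarize(sys.argv[1]))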

I managed to make it run on my machine but even if one test fails, it returns OK in the end. Was that implemented on purpose to make this work in the CI context?

So, just for me to know if I’m on the right track, does this look like a reasonable output indicating that it works fine?

❯ patchtest ../poky/0001-scriptutils-fix-style-to-be-more-PEP8-compliant.patch ../poky ../patchtest-oe/tests
Testing patch ../poky/0001-scriptutils-fix-style-to-be-more-PEP8-compliant.patch
run pre-merge tests
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_lic_files_chksum.LicFilesChkSum)
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_src_uri.SrcUri)
SKIP test_python_pylint.PyLint.pretest_pylint
----------------------------------------------------------------------
Ran 1 test in 0.548s

OK
run post-merge tests
PASS test_mbox_author.Author.test_author_valid
PASS test_mbox_author.Author.test_non_auh_upgrade
PASS test_mbox_bugzilla.Bugzilla.test_bugzilla_entry_format
SKIP test_mbox_description.CommitMessage.test_commit_message_presence
PASS test_mbox_format.MboxFormat.test_mbox_format
PASS test_mbox_mailinglist.MailingList.test_target_mailing_list
FAIL test_mbox_merge.Merge.test_series_merge_on_head
PASS test_mbox_shortlog.Shortlog.test_shortlog_format
PASS test_mbox_shortlog.Shortlog.test_shortlog_length
FAIL test_mbox_signed_off_by.SignedOffBy.test_signed_off_by_presence
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_lic_files_chksum.LicFilesChkSum)
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_license.License)
SKIP test_metadata_max_length.MaxLength.test_max_line_length
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_src_uri.SrcUri)
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_summary.Summary)
SKIP test_patch_cve.CVE.test_cve_tag_format
SKIP test_patch_signed_off_by.PatchSignedOffBy.test_signed_off_by_presence
SKIP test_patch_upstream_status.PatchUpstreamStatus.test_upstream_status_presence_format
SKIP test_python_pylint.PyLint.test_pylint
----------------------------------------------------------------------
Ran 15 tests in 0.260s

OK
 
Looking forward to bringing this up to speed again.

Marius





Marius Kriegerowski
 

One more thing: as I heard, patchtest is basically dead at the moment. If I pick it up, I’ll first add documentation, fix all PEP8 violations, and add some CI to run (at least) flake8 and pytest. I’ll also add type hints, as all of that avoids errors and lets the next person pick up the project with greater confidence. I’ll also add unit tests.

All that will lead to pretty large change sets. Do I need to send everything one by one as patches and wait for reviews or can I work on my clone https://github.com/HerrMuellerluedenscheid/patchtest and open a merge request somewhere?

Best
Marius





Marius Kriegerowski
 

Ok, one final thing for now...

Speaking of CI: is there a framework that integrates into the current architecture? Do you run a self-hosted Jenkins, drone-ci, or the like somewhere that I can make use of?

Happy Easter :)






Richard Purdie
 

On Sat, 2022-04-16 at 13:38 +0200, Marius Kriegerowski wrote:
One more thing: as I heard, patchtest is basically dead at the moment. If I
pick it up, I’ll first add documentation, fix all PEP8 violations, and add
some CI to run (at least) flake8 and pytest. I’ll also add type hints, as all
of that avoids errors and lets the next person pick up the project with
greater confidence. I’ll also add unit tests.
My personal thoughts are that whilst the pep8 changes are laudable, the real
issues lie elsewhere with patchtest and it may be better to look at getting it
going again first with improvements to the language standard being made along
the way. All too often I see these cleanups breaking something, or simply making
it hard to look up where/why a change was made as you have to try and navigate
through the more cosmetic changes. I appreciate that isn't the answer you wanted
though.

Unit tests on the other hand would be good to add.

All that will lead to pretty large change sets. Do I need to send everything
one by one as patches and wait for reviews or can I work on my
clone https://github.com/HerrMuellerluedenscheid/patchtest and open a merge
request somewhere?
Creating a branch and working on it seems fine. The challenge for anyone trying
to review things is that we don't have a working patchtest instance any more and
the people who knew that codebase no longer work on the project. Posting patches
as you go may help ensure any pointers anyone can give about direction are made
before you get too far along a given path though.

Cheers,

Richard


Richard Purdie
 

Hi,

Thanks for looping back to this.

On Sat, 2022-04-16 at 13:28 +0200, Marius Kriegerowski wrote:
I finally found the time to take a deeper dive into patchtest. It’s not in
the prettiest state and has a couple of design choices that make me wonder
whether it wouldn’t be better to go for a complete rewrite. For example, I
would rather opt for pytest, as it’s much more lightweight, easier to work
with and more modern than unittest. But let me dive deeper first.
I've wondered if we do need to re-invent it but for different reasons. It
currently integrates with patchwork but for example it may be better to have it
work off lore and it may be better using some of the more modern patchwork
tooling. One example I was pointed at was:

https://git.kernel.org/pub/scm/linux/kernel/git/mricon/korg-helpers.git/tree/git-patchwork-bot.py

With regard to unittest vs pytest, please do use unittest. The reason I say that
is that the rest of the project's testing runs within unittest, so I'd prefer to
have one standard rather than two. You can see it here:

https://git.yoctoproject.org/poky/tree/meta/lib/oeqa

(there are selftests, runtime tests, sdk tests, eSDK tests, performance tests
and so on).
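
To illustrate the kind of thing I mean, here is a self-contained unittest-style check in the spirit of the patchtest-oe tests. It does not use the real patchtest base classes, and the hard-coded mbox is purely for illustration:

# sketch only: a plain unittest.TestCase, not the actual patchtest-oe test code
import re
import unittest

class SignedOffBy(unittest.TestCase):
    # In the real tests the mbox under test is provided by the patchtest runner;
    # here it is hard-coded just to keep the example self-contained.
    mbox = "Subject: [PATCH] example\n\nSigned-off-by: Jane Doe <jane@example.com>\n"

    def test_signed_off_by_presence(self):
        # Fail if no Signed-off-by: line is present anywhere in the message.
        self.assertRegex(self.mbox, re.compile(r"^Signed-off-by:", re.MULTILINE))

if __name__ == "__main__":
    unittest.main()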

Patchtest is hardly documented, which makes it hard to follow. Or is there any
more documentation out there outside of the repo? I would start writing
documentation but want to make sure that I don’t repeat what others have done
before.
Sadly there isn't :(.

There are selftests but they don’t work:
https://github.com/HerrMuellerluedenscheid/patchtest/tree/overhaul/selftest
Can these be removed?
That directory appears to contain a load of training/test data. I don't care so
much about the actual test mechanism but surely the old training data is a good
place to start to test any new instance that is brought up?

I’m assuming that in the CI context the `--json` flag is used to produce json
output which is then somehow parsed and returned to the user, right? I would
like to work with the DevOps integration team to make sure I’m not deviating too
far from what the integration team expects.
It looks like you're working on "patchtest-oe" from
https://git.yoctoproject.org/patchtest-oe/ . There is a second piece to the
puzzle, a harder piece unfortunately which is
https://git.yoctoproject.org/patchtest/ .

The latter is the piece which is effectively the glue logic between patchwork
and patchtest-oe.

I'm not sure what you're aiming for as an end result. To recap, what used to
happen is that patchwork+patchtest would see new patches on the mailing list,
pick them up, run the tests in patchtest-oe against them, then reply on the
mailing list if there were any issues detected. We would like to get back to
this.

Unfortunately we were running a forked/hacked up version of patchwork which we
couldn't update. We ended up switching to a vanilla upstream version of it, so it
is now here: https://patchwork.yoctoproject.org/; however, that broke patchtest.

What we'd like to do is:

a) Get patchtest back up and running against new patches on the mailing list in
some form. As I hinted at above, this may mean using a different approach to
what patchtest used to do.

b) Move the actual tests in patchtest-oe to meta/lib/oeqa/ in OE-Core as a new
set of tests.

c) Add a script in OE-Core so that you can run the tests in patchtest-oe (now in
OE-Core) against a patch before you send it to the mailing list

d) Anyone else could then reuse these tests as part of CI, e.g. on
github/gitlab/whatever
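
For (c), the wrapper could be very thin; this is only a sketch reusing the "patchtest <patch> <repo> <testdir>" invocation shown earlier in this thread, and the final location of the tests inside OE-Core is still an open question, so treat the paths as placeholders:

#!/usr/bin/env python3
# hypothetical pre-submission wrapper, not an agreed design
import subprocess
import sys

def check(patch, repo, testdir):
    # Reuse the existing command-line form: patchtest <patch> <repo> <testdir>
    return subprocess.run(["patchtest", patch, repo, testdir]).returncode

if __name__ == "__main__":
    if len(sys.argv) != 4:
        sys.exit("usage: check-patch <patch> <repo> <testdir>")
    sys.exit(check(*sys.argv[1:4]))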

I managed to make it run on my machine but even if one test fails, it returns
OK in the end. Was that implemented on purpose to make this work in the CI
context?
I suspect patchtest would have code to read and handle that.


So, just for me to know if I’m on the right track, does this look like a
reasonable output indicating that it works fine?

❯ patchtest ../poky/0001-scriptutils-fix-style-to-be-more-PEP8-compliant.patch ../poky ../patchtest-oe/tests
Testing patch ../poky/0001-scriptutils-fix-style-to-be-more-PEP8-compliant.patch
run pre-merge tests
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_lic_files_chksum.LicFilesChkSum)
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_src_uri.SrcUri)
SKIP test_python_pylint.PyLint.pretest_pylint
----------------------------------------------------------------------
Ran 1 test in 0.548s

OK
run post-merge tests
PASS test_mbox_author.Author.test_author_valid
PASS test_mbox_author.Author.test_non_auh_upgrade
PASS test_mbox_bugzilla.Bugzilla.test_bugzilla_entry_format
SKIP test_mbox_description.CommitMessage.test_commit_message_presence
PASS test_mbox_format.MboxFormat.test_mbox_format
PASS test_mbox_mailinglist.MailingList.test_target_mailing_list
FAIL test_mbox_merge.Merge.test_series_merge_on_head
PASS test_mbox_shortlog.Shortlog.test_shortlog_format
PASS test_mbox_shortlog.Shortlog.test_shortlog_length
FAIL test_mbox_signed_off_by.SignedOffBy.test_signed_off_by_presence
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_lic_files_chksum.LicFilesChkSum)
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_license.License)
SKIP test_metadata_max_length.MaxLength.test_max_line_length
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_src_uri.SrcUri)
BUILDDIR is not defined. Cannot load bitbake
SKIP setUpClass (test_metadata_summary.Summary)
SKIP test_patch_cve.CVE.test_cve_tag_format
SKIP test_patch_signed_off_by.PatchSignedOffBy.test_signed_off_by_presence
SKIP test_patch_upstream_status.PatchUpstreamStatus.test_upstream_status_presence_format
SKIP test_python_pylint.PyLint.test_pylint
----------------------------------------------------------------------
Ran 15 tests in 0.260s

OK
I believe that patchtest had some minimal bitbake environment which would have
allowed some of the skipped tests to run. I'm not an expert on this and I don't
remember how the handoff happened with it though.

Hope this offers at least some guidance on where things were and where we need
to be, at least in my view! :)

Cheers,

Richard


Richard Purdie
 

On Sat, 2022-04-16 at 13:40 +0200, Marius Kriegerowski wrote:
Ok, one final thing for now...

Speaking of CI: is there a framework that integrates into the current
architecture? Do you run a self-hosted Jenkins, drone-ci, or the like somewhere
that I can make use of?
We have the project autobuilder:

https://autobuilder.yoctoproject.org/typhoon/#/console

which runs many of the other tests I mentioned in my email. We also used to
have a VM which ran patchtest, interfacing patchwork to patchtest-oe.

As I mention in my mail, getting the tests from patchtest-oe into OE-Core and
making them able to run standalone against a patch before submission may be a worthy goal. The
tests are meant to be lightweight so hopefully running them locally should be
ok. Whilst we do have CI, it is really focused on something quite different.
Ultimately I would love to see the VM replying to people on the list with
feedback about patches though!

Cheers,

Richard


Marius Kriegerowski
 

Dear Richard, 

Thanks for the comprehensive answer! That helped me to get a better overview of what is there and how to proceed.

I'll leave cosmetics for a later phase, stick to unittest and fix the integration so that we have a baseline from where to decide how to proceed.

Best regards and happy Easter holidays 
Marius



Richard Purdie
 

On Sat, 2022-04-16 at 17:15 +0200, Marius Kriegerowski wrote:
Dear Richard, 

Thanks for the comprehensive answer! That helped me to get a better overview of
what is there and how to proceed.

I'll leave cosmetics for a later phase, stick to unittest and fix the
integration so that we have a baseline from where to decide how to proceed.
FYI I did notice some docs here:

https://git.yoctoproject.org/patchtest/tree/usage.adoc

Cheers,

Richard


Marius Kriegerowski
 

Hi Richard,

I hope you had an enjoyable summer!

Picking up on this thread now, almost half a year later :) Sorry for the late reply, but it turned out (a little surprisingly) that I’m going to have a daughter soon, which shifted my priorities a little…

I looked into patchtest a few months ago and found a couple of issues that made me consider a rewrite. So I gave it a shot last weekend, but instead of Python I used Rust. You can find the demo here: https://github.com/HerrMuellerluedenscheid/patchtest-rs

The functionality is currently limited to loading a patch from the commandline, checking that the summary is in place and applying the patch to a repository given by a url. The repo will be cloned on the fly.

The next step would be an SMTP client that checks for incoming messages every N seconds, runs patchtest against the patches, and sends back a short report of what worked and what didn’t.

I just wanted to check that revitalising patchtest is still an open issue and would like to ask for quick feedback.

Best regards

Marius

On 16. Apr 2022, at 18:35, Richard Purdie <richard.purdie@...> wrote:

On Sat, 2022-04-16 at 17:15 +0200, Marius Kriegerowski wrote:
Dear Richard,

Thanks for the comprehensive answer! That helped me to get a better overview of
what is there and how to proceed.

I'll leave cosmetics for a later phase, stick to unittest and fix the
integration so that we have a baseline from where to decide how to proceed.
FYI I did notice some docs here:

https://git.yoctoproject.org/patchtest/tree/usage.adoc

Cheers,

Richard


Ross Burton
 

On 5 Sep 2022, at 12:25, Marius Kriegerowski via lists.openembedded.org <marius.kriegerowski=gmail.com@...> wrote:

I looked into patchtest a few months ago and found a couple of issues that made me consider a rewrite. So I gave it a shot last weekend, but instead of Python I used Rust. You can find the demo here: https://github.com/HerrMuellerluedenscheid/patchtest-rs
There’s definitely still interest in patchtest, but my personal opinion is that, considering 99% of our code is Python, keeping patchtest in Python means that there are more people who can work on it.

Ross


Richard Purdie
 

Hi Marius,

On Mon, 2022-09-05 at 13:25 +0200, Marius Kriegerowski wrote:
Picking up on this thread now, almost half a year later :) Sorry for
the late reply, but it turned out (a little surprisingly) that I’m
going to have a daughter soon, which shifted my priorities a little…
Understandable, congratulations! :)

I looked into patchtest a few months ago and found a couple of
issues that made me consider a rewrite. So I gave it a shot last
weekend, but instead of Python I used Rust. You can find the demo
here: https://github.com/HerrMuellerluedenscheid/patchtest-rs
This doesn't seem like a good direction to me. I have nothing against
Rust; I've spent the past couple of months trying to sort out Rust
support in OE-Core. The problem is that most of our tools (bitbake,
autobuilder, buildbot and so on) are Python based and most of our
developers know Python. Most don't know Rust (myself included). Making
a core tool hard to understand for our core developer base doesn't seem
like a wise move.

The functionality is currently limited to loading a patch from the
commandline, checking that the summary is in place and applying the
patch to a repository given by a url. The repo will be cloned on the
fly.

The next step would be an SMTP client that checks for incoming
messages every N seconds, runs patchtest against the patches, and sends
back a short report of what worked and what didn’t.

I just wanted to check that revitalising patchtest is still an open
issue and would like to ask for quick feedback.
We definitely do want to revitalise it. One change is that we do now
have public-inbox archives available for OE-Core and bitbake:

https://lore.kernel.org/openembedded-core/
https://lore.kernel.org/bitbake-devel/

so rather than fighting SMTP, we should be able to read patches from
there via the git representation.
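
As a rough sketch of that, polling the archive's git mirror instead of SMTP could look like the following. The clone URL for the first epoch and the assumption that each commit adds the raw message as a single file named "m" (the usual public-inbox layout) would need checking against the actual archive:

# sketch: read new messages from a lore public-inbox git mirror
import subprocess

ARCHIVE = "https://lore.kernel.org/openembedded-core/0"  # assumed epoch 0 URL

def clone(clone_dir="oe-core-inbox"):
    # A mirror clone keeps all refs so later fetches pick up new messages.
    subprocess.run(["git", "clone", "--mirror", ARCHIVE, clone_dir], check=True)

def messages_since(rev, clone_dir="oe-core-inbox"):
    """Yield (commit, raw mail bytes) for messages added after rev, oldest first."""
    revs = subprocess.run(
        ["git", "-C", clone_dir, "rev-list", "--reverse", rev + "..master"],
        capture_output=True, text=True, check=True).stdout.split()
    for commit in revs:
        raw = subprocess.run(
            ["git", "-C", clone_dir, "show", commit + ":m"],
            capture_output=True, check=True).stdout
        yield commit, raw  # raw is the full mail; patches could be fed to patchtest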

Cheers,

Richard