Re: [OE-core] Patchwork & patch handling improvements
Trevor Woerner
On 11/26/15 16:00, Paul Eggleton wrote:
This all sounds like an improvement and is therefore a step in the right direction :-)

A while back I had the idea of "porting" the kernel's "checkpatch.pl" to The Yocto Project (it was around the same time that I was trying to float the whole "maintainers file" idea, since I was also trying to re-purpose "get_maintainer.pl" as well). About one minute into that effort I realized the existing *.bb files were all over the place in terms of the order of statements and the order of the blocks of statements. At that time I found one recipe style guide from OE, and another from The Yocto Project, each of which described a slightly different preference. So I asked on the mailing list and quickly discovered that the two groups prefer different styles.

I'm not saying this job isn't worth doing, but I am pointing out there's the potential for feathers to be ruffled on both sides if someone tries to produce a definitive style guide for recipe files and then enforces it in an automated way. Since it is the OpenEmbedded Project's job to provide the recipes for The Yocto Project, I'm guessing this question needs to be decided by them? If that sounds reasonable, then maybe The Yocto Project needs to acquiesce to OE's decision?

Instead of cross-posting, maybe this would be a good email for the new architecture list (CC'ed)?
A performance analysis chronicle
Richard Purdie
One thing that bothers me at the moment is how long we spend in
do_configure. I have some new build hardware but sadly, whilst it has quite a large parallel capacity, it spends a lot of time at idle and builds don't seem to do so well on it. In particular, gettext has caught my attention. Specifically, gettext on an idle system takes 128s to run do_configure, yet it takes more like 285s when run as part of the main build. Why? I've been trying to figure out what contributes to this. Why is it important? If we could understand this, we might be able to make "critical path" tasks faster, and if we can do that, overall build times will come down.

My first attempt to look at this was from a scheduling priority level:

BB_TASK_NICE_LEVEL = "5"
BB_TASK_NICE_LEVEL_pn-gettext = "0"
BB_TASK_IONICE_LEVEL = "2.7"
BB_TASK_IONICE_LEVEL_pn-gettext = "2.0"

This did offer some reduction, to say 275s, but certainly doesn't seem to be the main issue. Next, I tried:

WORKDIR_pn-gettext = "/media/build1/poky/build/tmpram/work/i586-poky-linux/gettext/0.19.4-r0"

with a tmpfs at tmpram. Sadly again, whilst this changed things like do_unpack from 15s to 3s, it made no real difference to the overall configure time. Since I don't believe the system to be IO bound (the first 1700 tasks of a 6000 task core-image-sato build execute within 2 minutes) this perhaps isn't surprising.

I also wondered about contention over shared paths like the metadata and sysroot being mixed in with WORKDIR. I tried a build with tmp/work and tmp/work-shared on a different physical drive, again with no real difference in performance.

I decided to get some information from perf, hacking buildstats.bbclass to do:

pn = d.getVar("PN", True)
if pn == "gettext" and e.task == "do_configure":
    parentpid = os.getpid()
    pid = os.fork()
    if pid == 0:
        os.setsid()
        pid2 = os.fork()
        if pid2 > 0:
            os._exit(os.EX_OK)
        output = subprocess.check_output("perf record -g -t %s --inherit -o /tmp/configureperf.%s" % (parentpid, parentpid), shell=True)
        os._exit(os.EX_OK)

and with this obtained perf records for the build under load and at idle. I've appended various output from perf report at the end of the mail. Other than seeing that fork overhead is a significant problem, it's hard to see where the extra time is being spent from that profile data. If anyone knows perf well and can get better information, I'm open to ideas.

I also started looking at what the gettext configure test is doing under strace, using simplistic measurements like:

strace -f -s 2048 bitbake gettext-native -c configure 2> gtn.strace

This generates a file around 1GB in size; there are some useful statistics you can pull from it though:

cat gtn.strace | cut -d ' ' -f 3 | grep '^[a-z]*(' | cut -d '(' -f 1 | sort | uniq -c > gtn.funcs
cat gtn.strace | grep execve | cut -d ' ' -f 3 | grep '^[a-z]*(' | sort | uniq -c > gtn.strace.execs

which show an approximate list of syscalls the task is making, along with counts of the number of calls made. It also shows the commands that are being execve()'d.

One thing which puzzled me was large numbers of calls to "/usr/bin/print", "/usr/bin/which" and "/usr/bin/file", the latter with some very odd looking arguments. The former is a perl script, part of mailcap. This turns out to be some rogue autoconf macro code which tries to see if "print X" works in the shell if you're not using bash (my /bin/sh was dash). It does this at the start of every configure script, and the configure script attempts to reexec itself a few times.
This in turn loads the perl interpreter, and the script then calls "which file", ignores anything other than the return value, and then searches PATH for "file". This is just a complete and utter waste of time, so there is a patch below which simply removes the macro and assumes the shell's "printf" is sane. With the above analysis approach, you can show that this is worth a couple of thousand execve() calls (22,000 -> 20,000) and 15,000 stat calls (333,000 -> 320,000), and it knocks around 10-15s off the loaded system do_configure time; the unloaded system time was worth more like 3s.

Of the 20,000 execs, 2,300 are "cat", 7,100 are "sed", 1,000 are "expr" and the others look more problematic, being the linker/compiler. I did try experimenting to see if the cat calls could be replaced with something else. In autoconf, they are mostly from heredoc usage, e.g.:

m4_define([_AC_DEFINE_UNQUOTED],
[m4_if(m4_bregexp([$1], [\\\|`\|\$(\|\${\|@]), [-1],
       [AS_ECHO(["AS_ESCAPE([$1], [""])"]) >>confdefs.h],
       [cat >>confdefs.h <<_ACEOF
[$1]
_ACEOF])])

which I found you could replace with:

m4_define([_AC_DEFINE_UNQUOTED],
[m4_if(m4_bregexp([$1], [\\\|`\|\$(\|\${\|@]), [-1],
       [AS_ECHO(["AS_ESCAPE([$1], [""])"]) >>confdefs.h],
       [while IFS= read -r line || [[ -n "$line" ]]; do
          printf '%s\n' "$line"
        done >>confdefs.h <<_ACEOF
[$1]
_ACEOF])])

which has the advantage of using shell builtins. I did try other constructs but couldn't find anything simpler, and even in this case it was complicated, for example needing the || [[ -n "$line" ]] to ensure that trailing data would be handled correctly, and the [[ having to be quoted correctly for m4 (it's not a bashism). Unfortunately, this is line buffered IO, so whilst I could cut 2,000 execve() calls, I added around 300,000 read calls instead. I did try using -d '' to avoid the EOL delimiter, however that internally turns into single byte reads and 900,000 read calls! So basically, this optimisation didn't really work. If we could make cat a shell builtin, it would help, but I haven't looked at what that would entail.

The other thing which did jump out at me from the strace was the number of PATH searches going on for common utilities. I did wonder about whether our diverse PATH spanning the metadata (the scripts dir), our sysroot and the main system was somehow causing IO bottleneck issues. I tested that above with the separate build drive for workdir, however I also tried a different trick: adding symlinks in the scripts directory for the most common utils such as cat, chmod, cp, gcc, grep, rm and sed, pointing at the host utils in /bin or /usr/bin (the scripts directory is the first one listed in PATH; a rough sketch of this appears after the patch at the end of this mail). I did also briefly experiment with changing "cat" to "/bin/cat" but preferred the symlink approach. The profile shows this saves around 1,200 execve() calls and around 80,000 stat calls (320,000 -> 240,000) and took the do_configure time under load to 240s, so there was a measurable effect; sadly the overall build time didn't seem much changed however. The non-load case is still around the 125s mark, so this perhaps does give a strong hint that cross directory IO is contributing to the inflation of the build time under load.

There are obviously further things which can be looked into from here, however I thought people may find this story so far interesting and it might inspire others to start looking more closely at this too!

Cheers,

Richard

Output from "perf report A":

Children   Self   Command   Shared Object      Symbol
-  5.41%  0.00%  aclocal   [unknown]          [.] 0x43d6258d4c544155
      0x43d6258d4c544155
-  5.08%  0.01%  bash      [kernel.kallsyms]  [k] entry_SYSCALL_64_fastpath
   - entry_SYSCALL_64_fastpath
      + 84.05% __libc_fork
      + 3.77% __waitpid
      + 3.05% __GI___libc_write
      + 1.93% __GI___libc_read
      + 1.45% __xstat64
      + 0.91% __open_nocancel
      + 0.68% __brk
      + 0.66% __sigprocmask
      + 0.53% __pipe
+  3.79%  0.02%  bash      bash               [.] make_child
+  3.61%  0.01%  bash      libc-2.21.so       [.] __libc_fork
+  3.49%  3.49%  aclocal   perl               [.] Perl_pp_match
+  3.49%  0.00%  bash      [kernel.kallsyms]  [k] sys_clone
+  3.49%  0.00%  bash      [kernel.kallsyms]  [k] _do_fork
+  3.30%  0.08%  bash      [kernel.kallsyms]  [k] copy_process.part.29
+  2.85%  0.52%  bash      [kernel.kallsyms]  [k] page_fault
+  2.80%  0.00%  cc1       [unknown]          [.] 0000000000000000
+  2.41%  2.41%  aclocal   perl               [.] Perl_fbm_instr
+  2.39%  2.38%  aclocal   perl               [.] Perl_re_intuit_start
+  2.33%  0.01%  bash      [kernel.kallsyms]  [k] do_page_fault
+  2.31%  0.04%  bash      [kernel.kallsyms]  [k] __do_page_fault
+  2.23%  0.00%  autom4te  [unknown]          [.] 0000000000000000
+  2.15%  0.10%  bash      [kernel.kallsyms]  [k] handle_mm_fault
+  2.06%  0.01%  bash      [kernel.kallsyms]  [k] perf_event_init_task
+  2.04%  0.01%  bash      [kernel.kallsyms]  [k] inherit_task_group.isra.95.part.96
+  1.94%  0.01%  cc1       [kernel.kallsyms]  [k] entry_SYSCALL_64_fastpath
+  1.93%  0.14%  bash      [kernel.kallsyms]  [k] inherit_event.isra.93
+  1.71%  0.35%  cc1       [kernel.kallsyms]  [k] page_fault

Output from "perf diff A B -c ratio":

# Baseline    Ratio      Shared Object        Symbol
# ........  ..........  ...................  ......................................
   16.62%    1.275408   m4                   [.] 0x0000000000002090
    3.58%    0.934606   perl                 [.] Perl_pp_match
    3.46%    1.039378   perl                 [.] 0x000000000001ac0e
    3.12%    0.740720   cc1                  [.] 0x000000000016d946
    2.77%    0.949347   perl                 [.] Perl_fbm_instr
    2.74%    0.908970   bash                 [.] 0x000000000001ed44
    2.53%    0.889904   perl                 [.] Perl_re_intuit_start
    1.91%    0.706060   i586-poky-linux-ld   [.] 0x0000000000002430
    1.89%    0.868389   perl                 [.] Perl_pp_nextstate
    1.82%    1.053153   libc-2.21.so         [.] _int_malloc

To see the list sorted by ratio: "perf diff A B -c ratio -t : | sort -t: -k2,2 -nr | cut -c -100"

# Baseline : Ratio : Shared Object : Symbol
0.00: 107.335521:libpcre.so.3.13.1 :[.] 0x00000000000017e0
0.00: 58.518921:sed :[.] open@plt
0.00: 39.706804:[kernel.kallsyms] :[k] set_next_buddy
0.00: 32.437345:[kernel.kallsyms] :[k] find_busiest_group
0.00: 26.460722:[kernel.kallsyms] :[k] task_numa_fault
0.00: 25.791301:file :[.] 0x0000000000001ed0
0.00: 22.245486:libselinux.so.1 :[.] __fsetlocking@plt
0.00: 20.580930:cc1 :[.] _Z15make_pass_loop2PN3gcc7contextE
0.00: 17.729036:[kernel.kallsyms] :[k] update_sd_lb_stats
0.00: 16.301473:[kernel.kallsyms] :[k] rwsem_down_write_failed
0.00: 16.067699:[kernel.kallsyms] :[k] kernfs_refresh_inode
0.00: 15.090138:[kernel.kallsyms] :[k] seq_printf
0.00: 12.787537:ls :[.] 0x00000000000081d8
0.00: 11.657115:[kernel.kallsyms] :[k] ext4_da_invalidatepage
0.00: 11.656298:bash :[.] unary_test
0.00: 10.775472:[kernel.kallsyms] :[k] security_bprm_secureexec
0.00: 10.573946:[kernel.kallsyms] :[k] __queue_work
0.00: 9.638718:ld-2.21.so :[.] _xstat
0.00: 9.178533:cc1 :[.] _Z10add_paramsPK10param_infom
0.00: 8.937500:[kernel.kallsyms] :[k] cdev_get
0.00: 8.628314:sed :[.] getenv@plt
0.00: 8.618151:perl :[.] Perl_do_trans
0.00: 8.535535:[kernel.kallsyms] :[k] mangle_path
0.00: 8.356976:[kernel.kallsyms] :[k] __delete_from_page_cache
0.00: 8.124628:ld-2.21.so :[.] malloc@plt
0.00: 8.117871:[kernel.kallsyms] :[k] osq_lock
0.00: 7.991028:cc1 :[.] _Z20output_quoted_stringP8_IO_FILEPKc
0.00: 7.814020:[kernel.kallsyms] :[k] ext4_ext_remove_space
0.00: 7.549076:[kernel.kallsyms] :[k] load_balance
0.00: 7.441893:sed :[.] _init

or to see the biggest differences in period: "perf diff A B -p -c wdiff:1,1 -t : | sort -t: -k3,3 -nr | cut -c -100"

# Baseline : Base period : Weighted diff : Period : Shared Object : Symbol
16.62:30953898486: 8524961890:39478860376:m4 :[.] 0x0000000000002090
1.73:3220644640: 538932468:3759577108:libc-2.21.so :[.] __strcmp_sse2_unaligned
1.49:2768327998: 436920702:3205248700:libc-2.21.so :[.] __memcpy_avx_unaligned
0.80:1499099032: 408223464:1907322496:[kernel.kallsyms] :[k] clear_page_c_e
0.97:1803966092: 323339402:2127305494:libc-2.21.so :[.] strlen
0.54:1006908653: 288332008:1295240661:libc-2.21.so :[.] __strchr_sse2
0.66:1228217319: 266221456:1494438775:[kernel.kallsyms] :[k] copy_page
3.46:6446114974: 253835378:6699950352:perl :[.] 0x000000000001ac0e
1.82:3388135954: 180089205:3568225159:libc-2.21.so :[.] _int_malloc
0.17:325395321: 166763174:492158495:libc-2.21.so :[.] __ctype_b_loc
0.57:1066020098: 162627432:1228647530:[kernel.kallsyms] :[k] filemap_map_pages
0.06:120013915: 140088632:260102547:[kernel.kallsyms] :[k] vma_interval_tree_insert
0.39:730051202: 132370215:862421417:[kernel.kallsyms] :[k] perf_event_alloc
0.55:1015490988: 131035917:1146526905:[kernel.kallsyms] :[k] perf_event_aux_ctx
0.43:793617868: 125666408:919284276:[kernel.kallsyms] :[k] memset_erms
0.74:1383682896: 122935416:1506618312:perl :[.] Perl_regexec_flags
0.83:1546248993: 120309599:1666558592:libc-2.21.so :[.] _IO_getc
0.25:469980070: 118797566:588777636:libc-2.21.so :[.] _dl_addr
0.26:490314628: 110317867:600632495:ld-2.21.so :[.] do_lookup_x
0.45:836227328: 106471933:942699261:perl :[.] Perl_yylex
0.85:1580066629: 103820430:1683887059:libc-2.21.so :[.] _int_free
0.62:1160141971: 101707344:1261849315:[kernel.kallsyms] :[k] unmap_page_range
0.01:19809999: 89831404:109641403:[kernel.kallsyms] :[k] native_queued_spin_lock_slowpath
0.60:1113390363: 86975498:1200365861:perl :[.] Perl_yyparse

Patch for autoconf to avoid /usr/bin/print:

The check for solaris 'print' causes significant problems on a linux machine with dash as /bin/sh, since it triggers the execution of "print", which on some linux systems is a perl script which is part of mailcap. Worse, this perl script calls "which file" and, if successful, ignores the path file was found in and just runs "file" without a path. Each execution causes PATH to be searched. Simply assuming the shell's printf function works cuts out all the fork overhead, and when parallel tasks are running, this overhead appears to be significant.

RP 2015/11/28

Index: autoconf-2.69/lib/m4sugar/m4sh.m4
===================================================================
--- autoconf-2.69.orig/lib/m4sugar/m4sh.m4
+++ autoconf-2.69/lib/m4sugar/m4sh.m4
@@ -1045,40 +1045,8 @@ m4_defun([_AS_ECHO_PREPARE],
 [[as_nl='
 '
 export as_nl
-# Printing a long string crashes Solaris 7 /usr/bin/printf.
-as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
-as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo
-as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo
-# Prefer a ksh shell builtin over an external printf program on Solaris,
-# but without wasting forks for bash or zsh.
-if test -z "$BASH_VERSION$ZSH_VERSION" \
-   && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then
-  as_echo='print -r --'
-  as_echo_n='print -rn --'
-elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then
-  as_echo='printf %s\n'
-  as_echo_n='printf %s'
-else
-  if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then
-    as_echo_body='eval /usr/ucb/echo -n "$][1$as_nl"'
-    as_echo_n='/usr/ucb/echo -n'
-  else
-    as_echo_body='eval expr "X$][1" : "X\\(.*\\)"'
-    as_echo_n_body='eval
-      arg=$][1;
-      case $arg in @%:@(
-      *"$as_nl"*)
-        expr "X$arg" : "X\\(.*\\)$as_nl";
-        arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;;
-      esac;
-      expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl"
-    '
-    export as_echo_n_body
-    as_echo_n='sh -c $as_echo_n_body as_echo'
-  fi
-  export as_echo_body
-  as_echo='sh -c $as_echo_body as_echo'
-fi
+as_echo='printf %s\n'
+as_echo_n='printf %s'
 ]])# _AS_ECHO_PREPARE
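For completeness, the symlink experiment mentioned above was along these lines - a rough sketch only, assuming a poky checkout in ~/poky and that the host utilities live in the usual /bin or /usr/bin locations:

# rough sketch of the symlink experiment - paths are illustrative only
# the scripts directory is first in PATH, so PATH searches for these
# utilities stop there instead of walking the sysroot and metadata dirs
cd ~/poky/scripts
for util in cat chmod cp gcc grep rm sed; do
    ln -sf "$(command -v $util)" "$util"
done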
Re: [OE-core] Patchwork & patch handling improvements
Paul Eggleton <paul.eggleton@...>
Hi Trevor,
On Mon, 30 Nov 2015 10:19:35 Trevor Woerner wrote:
> This all sounds like an improvement and is therefore a step in the right direction :-) [...]

I don't think there's that much of a division. I don't recall if it was you that raised it at the time, but the issue of having two style guides did get rectified - I changed the one on the Yocto Project wiki to simply be a link to the OE style guide in June last year. It certainly didn't come about through a conscious decision to have a different style.

However, there is a minor disagreement over indentation for shell functions between OE-Core and other layers - this persists because of the backporting pain a blanket replacement would potentially lead to. As I recall this did get discussed at the OE TSC level. I think that's one thing we could just not evaluate (or make an option) until such time as we resolve the difference - and I do mean to see it resolved at some point in the future.

> Instead of cross-posting, maybe this would be a good email for the new architecture list (CC'ed)?

Perhaps yes; I'm a bit concerned that list still doesn't have that many subscribers though (currently 28, two of which are the same person).

Cheers,
Paul

--
Paul Eggleton
Intel Open Source Technology Centre
Re: [OE-core] Patchwork & patch handling improvements
Paul Eggleton <paul.eggleton@...>
On Tue, 01 Dec 2015 11:47:20 Martin Jansa wrote:
> Using consistent indentation (4 spaces) at least for new metadata would [...]

I agree it's not ideal. As I said above, I do want to see it resolved.

Leaving indentation aside for a moment, do you have any comments on my proposal?

Thanks,
Paul

--
Paul Eggleton
Intel Open Source Technology Centre
Re: [OE-core] Patchwork & patch handling improvements
Martin Jansa
On Wed, Dec 02, 2015 at 04:01:40PM +1300, Paul Eggleton wrote:
On Tue, 01 Dec 2015 11:47:20 Martin Jansa wrote:I'm not familiar with FDO fork, so I don't know how it looks andOn Tue, Dec 01, 2015 at 07:49:50AM +1300, Paul Eggleton wrote:I agree it's not ideal. I said above, I do want to see it resolved.Hi Trevor,Using consistent indentation (4 spaces) at least for new metadata would behaves, but any improvement on patchwork side is definitely welcome and I appreciate it. Does it support e.g. moving the patches to given bundle based on some substring in subject? To sort e.g. meta-networking, meta-java, meta-browser, .. patches automatically? I don't expect FDO fork to provide other features I'm used to from gerrit like cherry-picking to selected branch from the UI or doing the review there. But still if we're stuck with patchwork forever (because some people hate gerrit), then any improvement is really appreciated, thanks for looking into it. My only concern is about migrating current database, do you know if the migration will keep the database including bundles as they are or do you plan to set FDO version in parallel at least for some transition period? Currently I have many patches in flight, because jenkins is running full test-dependencies job for last 11 and based on progress it will take 14-21 more days to finish. Regards, -- Martin 'JaMa' Jansa jabber: Martin.Jansa@... |
Re: [OE-core] Patchwork & patch handling improvements
Richard Purdie
On Mon, 2015-11-30 at 10:19 -0500, Trevor Woerner wrote:
> This all sounds like an improvement and is therefore a step in the right direction :-) [...]

I think the areas where there are disagreements are comparatively small, really just on shell whitespace. Where they do exist, they are problematic, not least as some layers effectively ignored an agreement made by the TSC simply because they didn't agree with it. It basically means the OE TSC only applies to OE-Core as far as I can tell, which is sad, but it is the decision that was made. This also means the TSC has no real influence over any proposed coding style being used outside OE-Core.

I do still believe shell whitespace changes would cause significant patch compatibility issues; I know I disagree with Martin over that. I still don't like the idea of people blindly running a formatting script, since we'll then start seeing patches every time there is a single space in the wrong place. We simply don't want that amount of churn on the metadata. I can imagine several people replying and saying that patch churn is not an issue, but having seen the things people send patches for, I believe it will be. I don't want to encourage such things as I believe there are better things to do with our time (mine included, as I'd have to review them, even to just say 'no').

The maintainers file is a different problem and it's one of maintenance, and more specifically what being listed means, who can be listed, how that listing can be changed and so on. The Yocto Project has some notion of maintainer and there it's easy: Ross and I can make decisions on who is listed and what those people are expected to do, and we can make it work (it's how we ensure things get upgraded with some regularity). For OE, who would do this and what would the maintainer file mean? If someone patches something, are they required to cc the maintainer on patches, for example? (That would imply workflow overhead.) What if they don't cc a maintainer? Should we be forced to revert such a patch?

In many ways it's like the "what is a stable branch?" question. Some people want to use a maintainers file as a way of having a veto on certain patches. Others want to use it as a way of finding people to fix bugs. Others again want it to help review patches. The uses vary and you need a clear definition about what it's being used for to make it work. If someone wants to put together a proposal about which problem this solves, with clear definitions/charter about how it would all work, great, but I've seen a lot of problems with such files and I worry it creates more problems than it would solve.

Cheers,

Richard
Re: [OE-core] Patchwork & patch handling improvements
Barros Pena, Belen <belen.barros.pena@...>
On 02/12/2015 08:17, "openembedded-core-bounces@... on
behalf of Martin Jansa" <openembedded-core-bounces@... on behalf of martin.jansa@...> wrote: On Wed, Dec 02, 2015 at 04:01:40PM +1300, Paul Eggleton wrote:This is how it looks like currentlyOn Tue, 01 Dec 2015 11:47:20 Martin Jansa wrote:I'm not familiar with FDO fork, so I don't know how it looks andOn Tue, Dec 01, 2015 at 07:49:50AM +1300, Paul Eggleton wrote:it canHi Trevor, http://patchwork.freedesktop.org/project/intel-gfx/patches/ but any improvement on patchwork side is definitely welcome andMmm, you might not like this, but we are thinking of getting rid of bundles. Basically, we assumed bundles were a manual way of creating patch series. The new Patchwork can identify series, so we thought: great! Bundles no longer needed. There are other features been considered that might be an alternative to bundles, like tagging
Re: [OE-core] Patchwork & patch handling improvements
Barros Pena, Belen <belen.barros.pena@...>
On 02/12/2015 10:54, "openembedded-core-bounces@... on
behalf of Barros Pena, Belen" <openembedded-core-bounces@... on behalf of belen.barros.pena@...> wrote: sorry: pressed 'send' too soon :/ As I was saying, tagging https://github.com/dlespiau/patchwork/issues/36 Or supporting several projects within the same mailing list (in your case, one per layer) https://github.com/dlespiau/patchwork/issues/77 I don't know, but I can find out. If we remove the bundles, then I guess the migration might not keep theincluding bundles bundles. Belén as they are or do you-- |
Re: [OE-core] Patchwork & patch handling improvements
Philip Balister
On 12/02/2015 03:44 AM, Richard Purdie wrote:
> [...]

Can you remind us what the whitespace argument is again? I forget, and I think it is important that everyone understand the details.

Philip

PS: I trimmed the distribution to only the architecture list.
Re: [OE-core] Patchwork & patch handling improvements
Richard Purdie
On Wed, 2015-12-02 at 10:20 -0500, Philip Balister wrote:
> Can you remind us what the whitespace argument is again? I forget, and I think it is important that everyone understand the details.

At the time we fixed python functions to use four space indentation and no tabs. This matches python recommendations and avoided issues with some version of python which became stricter about whitespace, if I remember rightly - something like that anyway. Regardless, there was a pressing reason to fix things to one format.

There was a proposal we should standardise shell functions too. One of our style guides said tabs, one said spaces. Most of OE-Core was tabs. Rather than change everything, the TSC discussed and agreed to leave them as tabs rather than have patch churn, although it was a tough decision and a lot of discussion. meta-oe standardised on four space indentation for shell since they believed the TSC to be wrong.

There are clearly arguments either way, not least that editor settings are easier if it's all spaces rather than having to be context aware.

Cheers,

Richard
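To make the difference concrete, here is a minimal, hypothetical recipe fragment (illustration only, not from any real layer) showing the two conventions described above: python functions indented with four spaces and no tabs, and a shell function indented with a tab as most of OE-Core does (meta-oe would use four spaces there as well):

# hypothetical-example.bb fragment - sketch only, names are made up
python do_example_python_task () {
    # python functions: four spaces per indent level, no tabs
    bb.note("hello from a python task")
}

do_example_shell_task () {
	# shell functions in OE-Core: one tab per indent level
	echo "hello from a shell task"
}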
Re: [OE-core] Patchwork & patch handling improvements
Martin Jansa
On Wed, Dec 02, 2015 at 10:59:17AM +0000, Barros Pena, Belen wrote:
OK, then can we please postpone this upgrade (or run both patchworks in parallel) until these 2 features are implemented and working?

I'm depending on bundles heavily, to "mark" the patches for layers with a dedicated maintainer and also for extra "status" like merged in the "master-next" branch for a jenkins build, because the standard status values aren't fine-grained enough.

Regards,

--
Martin 'JaMa' Jansa     jabber: Martin.Jansa@...
Re: [OE-core] Patchwork & patch handling improvements
Burton, Ross <ross.burton@...>
On 2 December 2015 at 18:04, Martin Jansa <martin.jansa@...> wrote:
Sounds like instead of bundles the new patchwork needs the ability for a single list to represent multiple projects (so there'd be a meta-oe project, a meta-python project, etc), and more status values.

Ross
Re: [OE-core] Patchwork & patch handling improvements
On Wed, Dec 2, 2015 at 10:43 AM, Burton, Ross <ross.burton@...> wrote:
I agree with Ross that if we can get the functionality we truly need into Patchwork--and JaMa can live with the result or help guide the functionality--then we have a better path into the future.

Also, for what it's worth, I vote for spaces only. We can provide editor templates/settings to make it easy on the end users. There are languages that require tabs, and there are languages that require spaces. Deal with the required tabs when the necessary evil arises.

--Tim
Re: [OE-core] Patchwork & patch handling improvements
Philip Balister
On 12/02/2015 11:22 AM, Richard Purdie wrote:
> > Can you remind us what the whitespace argument is again? I forget [...]
> At the time we fixed python functions to use four space indentation and no tabs. [...]

Guys, it is 2015 and we are having a discussion about white space. Take a deep breath.

The underlying issue here is this was brought up with the TSC, since we need clear guidance for people working on meta-data. The TSC decided to follow OE-Core. I don't think there is a right or wrong answer here, just that we need to get our act together and move toward one way of indenting, so that we provide consistent guidance and people have a chance of automating processes.

If you are unhappy with TSC decisions, please run for a position.

Philip

PS: Another shitty day in America has depressed me enough to make a silly statement. Gah, no, possibly too offensive to some people.
Re: Patchwork & patch handling improvements
Trevor Woerner
Hi Richard,
On 12/02/15 03:44, Richard Purdie wrote:
> The maintainers file is a different problem and it's one of maintenance, [...]

I didn't mean to bring up old discussions :-) All I was trying to say is that one day I had the idea that it would be nice if we had something similar to the kernel's checkpatch.pl and get_maintainer.pl. In my mind all I was trying to do was to make it easier for people to: a) submit good patches (checkpatch.pl) and b) get people to review patches (get_maintainer.pl).

Sending good patches: I went looking for a concise description of the accepted recipe file format, poked around on the mailing list and on the web, then found the two camps who had their own styles. In the end I gave up on this. I'm excited about Paul's proposal and hope it goes well. Maybe we can convince people to put this script in their ~/.gitconfig's so it gets called automatically!

On reviewing patches: the kernel's get_maintainer script uses a bunch of sources to try to put together a list of people who might be interested in your patch (which can be used as a CC list for your email submission). First, it looks for a MAINTAINERS file, parses it, and figures out the people who are signed up for whatever things your patch affects. Second, it looks at the exact lines of whatever files your patch touches and looks through the revision history for those lines to see who last touched them. It then concatenates these two groups together and suggests these people as possible CCs when you email your patch.

Personally I have my ~/.gitconfig configured such that this happens automatically every time I invoke "git send-email". All I do is "git send-email --identity=<repository> <patches...>" and my own "get_oe_maintainers.pl" script gets invoked via the "cccmd" option. Then my patch gets sent to the correct mailing list and also to the list of people who are most likely to be interested in reviewing it.

I had never thought of, heard of, or considered the possibility that a MAINTAINERS file could be meant for anything other than as input to the get-list-of-cc-people script :-) And frankly, my get_oe_maintainers.pl script runs perfectly fine without a MAINTAINERS file. It's just that sometimes the maintainer isn't actually the person who last touched whatever part of whatever file you're patching, and the script isn't psychic (yet) ;-)
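For anyone wanting to copy that setup, a rough sketch of what I mean is below; the identity name, list address and script path are only placeholders, and get_oe_maintainers.pl is my own local tool rather than anything shipped with the project:

# sketch only - identity name, list address and script location are
# placeholders; adjust to your own setup
git config --global sendemail.oe-core.to openembedded-core@lists.openembedded.org
git config --global sendemail.oe-core.cccmd "$HOME/bin/get_oe_maintainers.pl"

# sending patches then picks up the CC list automatically:
git send-email --identity=oe-core <patches...>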
OEDAM at ELC?
Trevor Woerner
Hi,
Now that registration is open for ELC, it would be a good time to find out: are there any plans for any post-ELC activities?

Best regards,
Trevor
Re: [OE-core] Patchwork & patch handling improvements
Barros Pena, Belen <belen.barros.pena@...>
On 02/12/2015 18:43, "Burton, Ross" <ross.burton@...> wrote:
> Sounds like instead of bundles the new patchwork needs the ability for a single list to represent multiple projects (so there'd be a meta-oe project, a meta-python project, etc),

That's already in place:

https://github.com/dlespiau/patchwork/issues/15

> and more status values.

You can add status values (to the db directly or via the Django admin interface), but they will apply to all projects in the Patchwork instance. Ideally you should be able to set a list of status values per project, I guess.

Cheers

Belén
Re: [OE-core] Patchwork & patch handling improvements
Burton, Ross <ross.burton@...>
On 3 December 2015 at 11:43, Barros Pena, Belen <belen.barros.pena@...> wrote:
Well it depends on what the extra values are. Martin, what values do you use? A "merged in staging" value would be useful for everyone.

Ross
Re: [OE-core] Patchwork & patch handling improvements
Martin Jansa
On Thu, Dec 03, 2015 at 12:51:22PM +0000, Burton, Ross wrote:
> Well it depends on what the extra values are. Martin, what values do you use? A "merged in staging" value would be useful for everyone.

The list of bundles is in:

http://www.openembedded.org/wiki/Patchwork#Multiple_layers_sharing_the_same_oe_project_on_patchwork

Once the patches are added to the correct bundles I mark them as archived, to know which ones were sorted (the main page gets empty).

The advantage of bundles (something similar can probably work with tags) is that the same patch can be in multiple bundles, e.g. I can include some change in the master-next branch and bundle while also adding it to the meta-networking bundle for Joe to merge when ready.

--
Martin 'JaMa' Jansa     jabber: Martin.Jansa@...
Re: [OE-core] Patchwork & patch handling improvements
Barros Pena, Belen <belen.barros.pena@...>
On 03/12/2015 12:51, "Burton, Ross" <ross.burton@...> wrote:
> Well it depends on what the extra values are. Martin, what values do you use? A "merged in staging" value would be useful for everyone.

Heh, I meant that's how Patchwork works at the moment, independently of what we do for OE.