Hi Igniters,
Some of us started to use the Bot to get approval of PRs. It helps to protect master from new failures, but it requires running the RunAll test set for each commit, and this puts noticeable pressure on the TC infrastructure.

I would like to ask you to share your ideas on how to make RunAll faster. Please also share any measurements you have and any other information about possible bottlenecks.

Sincerely,
Dmitriy Pavlov
Dmitriy,
You brought up a really important topic that has a great impact on our project. Faster RunAlls mean quicker feedback and faster progress on issues and features.

We have a pretty big code base of tests: about 50 thousand tests. Do we have an idea of how these tests overlap with each other? To my mind, it is possible that we have a good number of tests that cover the same code and could be replaced with a single test.

In an ideal world we would even determine the minimal set of tests that covers our codebase and remove the excess ones.

--
Best regards,
Sergey Chugunov.
Dmitry, Sergey,
There is a common pattern in the test codebase where each test starts its own set of nodes and stops them after the test finishes. Node startup takes quite a lot of time, and this time could be reduced if tests shared the same set of nodes. I mean, if there is a test class with a lot of methods, then in many cases it's enough to start nodes in *beforeTestsStarted* and stop them in *afterTestsStopped* instead of doing it in every test case. We should encourage contributors to use this pattern. It's not applicable to all tests and would weaken test isolation, but I think it's a good trade-off, because the running time of tests is a real problem.

Denis
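A minimal sketch of this pattern, assuming Ignite's GridCommonAbstractTest base class with its beforeTestsStarted/afterTestsStopped hooks (the class name, cache name, and node count here are hypothetical):

    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

    public class SharedTopologySelfTest extends GridCommonAbstractTest {
        /** Starts the topology once for the whole class instead of per test method. */
        @Override protected void beforeTestsStarted() throws Exception {
            startGrids(3);
        }

        /** Stops the nodes only after the last test method has finished. */
        @Override protected void afterTestsStopped() throws Exception {
            stopAllGrids();
        }

        /** Each test method reuses the already running nodes. */
        public void testPutGet() throws Exception {
            IgniteCache<Integer, Integer> cache = grid(0).getOrCreateCache("shared");

            cache.put(1, 42);

            assertEquals((Integer)42, cache.get(1));
        }
    }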
Denis, you can go even further. E.g., you can start the topology once for the full set of single-threaded Full API cache tests. Each test should start a cache dynamically and run its logic.

As for me, I would think of splitting RunAll into 2 steps: one containing basic tests and another with more complex tests. The 2nd step should not start (except manually) if the 1st step results in any build failure.

--Yakov
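A hedged sketch of that variation: the nodes stay up for the whole suite, while each test method creates and destroys its own cache dynamically (the class name and the convention of naming the cache after the test method are assumptions):

    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

    public class DynamicCacheFullApiSelfTest extends GridCommonAbstractTest {
        /** Shared topology for all test methods in the class. */
        @Override protected void beforeTestsStarted() throws Exception {
            startGrids(2);
        }

        public void testPutRemove() throws Exception {
            // Start a fresh cache dynamically on the shared topology.
            IgniteCache<Integer, Integer> cache =
                grid(0).createCache(new CacheConfiguration<Integer, Integer>(getName()));

            try {
                cache.put(1, 1);
                cache.remove(1);

                assertNull(cache.get(1));
            }
            finally {
                // Destroy only the cache; the nodes keep running for the next test.
                grid(0).destroyCache(getName());
            }
        }
    }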
At one of the meetups, Vladimir Ozerov said that we could identify tests that are less likely to break and move them into a separate build plan that would be executed daily or weekly. For example, tests of compatibility with different JDK versions or of compatibility between Ignite releases.

Also, I agree with Denis: we should find and remove tests with duplicate checks.

By the way, if someone in the community donates TC agents, can this help to reduce the time?

--
Best Regards, Vyacheslav D.
Sure, any additional compute power should help.
Extracting nightly builds has already started (at least as a prototype), as far as I know. And the TC Bot triggers the full (nightly) test set:
https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_RunAllNightly&branch_IgniteTests24Java8=%3Cdefault%3E&tab=buildTypeStatusDiv
Concerning current TeamCity compute capacity: I think we should invest in its stability first. There are lots of problems associated with:
- test hangs (which can now render an agent useless for up to 3 days);
- checkout issues (a speedup proxy is installed, but sometimes even that is not enough when 100 agents check out simultaneously; server-side checkout needs research);
- test architecture (the current one relies heavily on moving large files over the network, which can also become a bottleneck when 100 agents start downloading simultaneously);
and so on.

Additional donations can be discussed after that.
Hi,
I would like to understand the following. We are going to make TC green. We are going to make TC fast. Are we going to do these in parallel?

--
Best regards,
Ivan Pavlukhin
Hi,
I have some thoughts about test speed. First of all, I must say that I do not see any simple and lightweight solution.

I did some measurements a while ago, and it looks like simply optimizing the number of Ignite node start/stop calls will not give us a great speedup. I naively measured [1] the time spent in node start/stop methods, and for the Cache 1 suite it turns out that only 2.5 minutes is spent there, while the whole suite run takes about 1 hour. But I experimented only with the Cache 1 suite and might not have been accurate enough. Moreover, I thought about refactoring the whole test base to employ faster patterns, and it looks unfeasible to me to accomplish that in a relatively short amount of time.

So, I can imagine the following approach. There are a lot of load/concurrency tests whose execution is bounded either by a big number of iterations (1000+) or by time (30 sec, 60 sec). In an ideal world, I think there should be no such tests in Run All. If such tests regularly catch bugs unnoticed by regular tests, then there are areas not covered by regular (functional?) tests, and the missing tests could be added. If that assumption holds, then load/concurrency tests are unlikely to fail when functional tests pass, and therefore they could be extracted out of Run All. The real world is not ideal, so we can use the mentioned idea of significantly limiting load/concurrency test execution time to 5-10 seconds in Run All. Of course, we would need to analyze which tests are load/concurrency tests and measure their execution times. I think we could use some ML/data analysis tools for that, could we not?

Some additional figures:
1. Number of tests in Run All ~ 50K.
2. Number of test methods in the core module ~ 7500.

[1] https://github.com/apache/ignite/pull/5419/files

--
Best regards,
Ivan Pavlukhin
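For reference, the kind of naive measurement Ivan describes could look roughly like the following; the actual instrumentation lives in the linked PR, so this accumulator-based helper is only an assumed reconstruction:

    import java.util.concurrent.Callable;
    import java.util.concurrent.atomic.AtomicLong;

    /** Accumulates total wall-clock time spent starting and stopping nodes. */
    public final class NodeLifecycleTimer {
        private static final AtomicLong TOTAL_NANOS = new AtomicLong();

        /** Wraps a node start or stop call and adds its duration to the total. */
        public static <T> T timed(Callable<T> op) throws Exception {
            long start = System.nanoTime();

            try {
                return op.call();
            }
            finally {
                TOTAL_NANOS.addAndGet(System.nanoTime() - start);
            }
        }

        /**
         * @return Accumulated start/stop time in milliseconds, for comparison
         * with the overall suite duration.
         */
        public static long totalMillis() {
            return TOTAL_NANOS.get() / 1_000_000;
        }
    }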
Hi Dmitriy!
We have over 50,000 tests in our test base, and this number will noticeably increase soon due to the MVCC test coverage activity. This means it is very difficult to rework and rewrite these tests manually to make them run faster. But we can choose another way. Do we have the ability to perform a statistical analysis over a considerable number of recent test runs? If we do, let's consider two points:

1. After careful consideration in terms of statistics, it may turn out that a significant number of these tests are "evergreen" tests. It means that these tests check cases which are very difficult to break. If so, why should we run these tests each time? They are great candidates for nightly runs.

2. After dropping "evergreen" tests, there may be a number of tests with correlated results. There could be a lot of test groups, each containing some number of tests, where either all tests are red or all tests are green. In this case, in "fast" runs we can launch only one test from each group instead of the entire group. The other tests in the group can be launched in the nightly build.

Having a list of "good" tests (good tests = all tests - evergreen tests - groups (except a chosen representative from each group)), we can mark these tests with an annotation: @Category (or @Tag in JUnit 5). For fast test runs we run only the annotated tests; for nightly runs, all tests as usual.

When a new test is added, developers could decide whether or not to add this annotation.

The annotated test list should be reviewed monthly or weekly or, if possible, this procedure should be automated.

--
Kind Regards
Roman Kondakov
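A sketch of this marking scheme with JUnit 4 categories; the FastSuite marker interface and the suite wiring below are hypothetical names, not existing Ignite code:

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    public class FastRunExample {
        /** Marker interface for tests included in fast (per-commit) runs. */
        public interface FastSuite {}

        public static class TxIsolationTest {
            /** Representative case: runs in both fast and nightly builds. */
            @Category(FastSuite.class)
            @Test public void testTxIsolationPartitioned() { /* ... */ }

            /** Correlated case: runs in nightly builds only. */
            @Test public void testTxIsolationReplicated() { /* ... */ }
        }

        /** Fast run: executes only the tests annotated with FastSuite. */
        @RunWith(Categories.class)
        @IncludeCategory(FastSuite.class)
        @SuiteClasses(TxIsolationTest.class)
        public static class FastRunAll {}
    }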
Hi, Roman.
What's the point of having all these tests if they behave as one and will never fail individually? Maybe such tests need optimization and shrinking into one?
Hi, Petr!
Actually, these tests do not behave exactly as one test, but they do behave mostly as one. Often it looks like this: we have a feature to test (e.g., transaction isolation) and different sets of parameters to test it with: cache mode, number of backups, cluster size, persistence enabled/disabled, etc. And we have a list of tests with different combinations of these parameters:

testTxIsolation(Replicated, 0 backups, 1 server 0 clients, persistence disabled)
testTxIsolation(Replicated, 0 backups, 2 servers 1 client, persistence disabled)
testTxIsolation(Replicated, 0 backups, 4 servers 2 clients, persistence disabled)
testTxIsolation(Replicated, 0 backups, 1 server 0 clients, persistence enabled)
testTxIsolation(Replicated, 0 backups, 2 servers 1 client, persistence enabled)
testTxIsolation(Replicated, 0 backups, 4 servers 2 clients, persistence enabled)
testTxIsolation(Partitioned, 0 backups, 1 server 0 clients, persistence disabled)
testTxIsolation(Partitioned, 1 backup, 2 servers 1 client, persistence disabled)
testTxIsolation(Partitioned, 2 backups, 4 servers 2 clients, persistence disabled)
testTxIsolation(Partitioned, 0 backups, 1 server 0 clients, persistence enabled)
testTxIsolation(Partitioned, 1 backup, 2 servers 1 client, persistence enabled)
testTxIsolation(Partitioned, 2 backups, 4 servers 2 clients, persistence enabled)

Each test in this list represents a special case which should be tested. If a developer breaks the key functionality of tx isolation, all of these tests will fail; it is a very rare case when only a subset of them fails while the others stay green. In my opinion, for "fast" runs we should trigger only one or two tests from such a group, which should be enough to detect most bugs. But for nightly runs, of course, all cases should be checked.

--
Kind Regards
Roman Kondakov
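For illustration, such a parameter matrix is often written with JUnit 4's Parameterized runner, which makes it easy to shrink the combination list to one or two representatives in fast runs; all names below are hypothetical:

    import java.util.Arrays;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class TxIsolationParameterizedTest {
        /** In a fast run this list could be cut down to one or two representatives. */
        @Parameters(name = "{0}, {1} backups, persistence={2}")
        public static Iterable<Object[]> combos() {
            return Arrays.asList(
                new Object[] {"REPLICATED", 0, false},
                new Object[] {"PARTITIONED", 1, false},
                new Object[] {"PARTITIONED", 2, true});
        }

        private final String cacheMode;
        private final int backups;
        private final boolean persistence;

        public TxIsolationParameterizedTest(String cacheMode, int backups, boolean persistence) {
            this.cacheMode = cacheMode;
            this.backups = backups;
            this.persistence = persistence;
        }

        /** The same isolation check executed once per parameter combination. */
        @Test public void testTxIsolation() {
            // Start the topology according to cacheMode/backups/persistence
            // and verify transaction isolation.
        }
    }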
I see, thanks!
It should be noted that an additional parameter, TEST_SCALE_FACTOR, was added. This parameter, together with the ScaleFactorUtil methods, can be used to scale test size for different runs (such as ordinary and nightly RunAlls). If someone wants to distinguish these builds, they can apply the scaling methods from ScaleFactorUtil in their own tests. For nightly runs TEST_SCALE_FACTOR=1.0; for non-nightly builds TEST_SCALE_FACTOR<1.0. For example, in the GridAbstractCacheInterceptorRebalanceTest test, ScaleFactorUtil was used to scale the iteration count. I guess TEST_SCALE_FACTOR support will be added to the runs at the same time as the RunAll (nightly) runs.
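Based on that description, usage presumably looks something like the sketch below; the exact ScaleFactorUtil method name and package are assumptions, so check the class for the real API:

    import org.apache.ignite.testframework.ScaleFactorUtil;
    import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

    public class RebalanceLoadSelfTest extends GridCommonAbstractTest {
        /** Full count on nightly runs (TEST_SCALE_FACTOR = 1.0), scaled down otherwise. */
        private static final int ITERATION_CNT = ScaleFactorUtil.apply(1000);

        public void testRebalanceUnderLoad() throws Exception {
            for (int i = 0; i < ITERATION_CNT; i++) {
                // The same check, just with fewer repetitions in regular
                // (non-nightly) RunAll builds.
            }
        }
    }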
Roman,
Do you have any estimate of how much faster eliminating "correlated" tests will make Run All? Also, do you have a vision of how we can identify such "correlated" tests, and can we do it relatively quickly?

All in all, I am not sure that reducing a group of correlated tests to a single test will show good stability.

--
Best regards,
Ivan Pavlukhin
Hi Igniters,
At the moment we have several separate test suites:
* Build Apache Ignite: ~10-20 min
* [Javadocs]: ~10 min
* [Licenses Headers]: ~1 min
* [Check Code Style]: ~7 min

Most of the time of each build (except Licenses Headers) is taken by dependency resolution. The main goal of these suites is to check that the project builds properly. Also, the [Javadocs] and [Licenses Headers] profiles are used at the release preparation step (see DEVNOTES.txt), which means they are important.

I'd suggest uniting these builds; this should reduce test time by ~15 minutes and free up agents.

What do you think?

--
Best Regards, Vyacheslav D.
Hi Vyacheslav,
What do you mean by uniting?

To me it looks like [Javadocs] and [Check Code Style] are not that time-consuming compared to the tests, are they? Do you suggest combining the 4 mentioned jobs into one? How long would it run in such a case?

--
Best regards,
Ivan Pavlukhin
Ivan, you are right, I meant to combine them into one.
Here is a build [1] with the check-licenses and checkstyle profiles enabled, plus a javadoc check, to show the idea.

It seems to take ~15 minutes.

[1] https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_ExperimentalBuildApacheIgniteJavadocLicensesHeaderCheckstyle&branch_IgniteTests24Java8=<default>

--
Best Regards, Vyacheslav D.
Folks,
+1 for merging all these suites into a single one. All these suites (Build Apache Ignite, Javadocs, Licenses Headers, Checkstyle) are required to be green all the time, so we can consider making them part of the Build Apache Ignite procedure.

Also, I'd suggest going deeper: we can try to merge the Licenses Headers check into the code style checker [1]. This would simplify the code checking process.

[1] http://checkstyle.sourceforge.net/config_header.html
Vyacheslav, Maxim,
Can we once again outline what benefits an aggregated "Build Apache Ignite" job performing various checks has compared to a modularized approach in which separate builds perform separate tasks?

For example, the modularized approach looks nice because it resembles good practice in software development, where we separate responsibilities between different classes instead of aggregating them into a single class. And, as usual, multiple classes work together, coordinated by a class at an upper level, so in fact it is a hierarchical structure. Returning to "Build Apache Ignite", it seems to me that ideally it could be hierarchical: a top-level compilation (assembly?) job under which it is always clear which tasks are performed (check style, check licenses, and other subjobs).

--
Best regards,
Ivan Pavlukhin