Tests for ML using binary builds


Tests for ML using binary builds

aplatonov
Hello, Igniters!
I would like to create several tests for ML algorithms using binary builds.
These tests should work in this way:
1) Get the latest master (or a user-defined branch) from the git repository;
2) Build Ignite with a release profile and create a binary build;
3) Run several Ignite instances from the binary build;
4) Run examples or synthetic tests that train ML algorithms and perform
inference;
5) Accumulate failure statistics on some board.
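The pipeline above could be sketched as a CI config roughly like this (a hypothetical sketch: the Maven profile, compose file, and script paths are placeholders for illustration, not an existing setup):

```yaml
# .travis.yml -- hypothetical sketch; profile, file, and script names are
# placeholders, not an existing configuration.
language: java
services:
  - docker
script:
  # 1) Travis itself clones the requested branch of the repository.
  # 2) Build Ignite with the release profile and assemble the binary build.
  - mvn clean package -Prelease -DskipTests
  # 3) Start several Ignite server nodes from the binary build in Docker.
  - docker-compose -f docker/ml-cluster.yml up -d
  # 4) Run ML examples / synthetic train-and-infer tests against the cluster.
  - ./scripts/run-ml-examples.sh
  # 5) The Travis build history and badge serve as the failure-statistics board.
```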

Currently, as a prototype, I'm working in my own open git repository, which
contains scripts for Docker and Travis. I want to complete these tests and
contribute them to Ignite.

Should I adapt such tests for TC after the prototype is complete, or can
Travis be reused? Maybe such a process has already been created for other
Ignite modules and I can use it for ML. What do you think?

Best regards
Alexey Platonov.
Re: Tests for ML using binary builds

Ivan Pavlukhin
Hi Alexey,

Could you please share some background? What problem are you solving
by running tests against binary builds? Perhaps we need something
similar for other Ignite sub-projects as well.




--
Best regards,
Ivan Pavlukhin
Re: Tests for ML using binary builds

aplatonov
Yes, sure.
Ignite ML algorithms actively send data between nodes and in several
cases use the peer class loading mechanism.
I want to catch failures that occur when algorithms use non-serializable
data or try to send lambdas with a large captured context, etc.
From this point of view, we can simply run the ML examples on a small
cluster where the servers are started from a binary build.
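The failure mode described here can be reproduced in plain Java without any cluster. A minimal sketch (`ModelContext` is a made-up stand-in for a heavy training context; nothing here is Ignite API) showing how a lambda that captures too much context fails only at serialization time, i.e. only when it would actually be sent to a remote node:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Function;

public class LambdaSerializationDemo {
    /** A heavy, non-serializable context that a lambda might capture by accident. */
    static class ModelContext {
        double[] weights = new double[1024];
    }

    /** Returns true if the object survives Java serialization. */
    static boolean isSerializable(Object obj) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(obj);
            return true;
        }
        catch (IOException e) {
            return false; // e.g. NotSerializableException for the captured context.
        }
    }

    /** Captures the whole non-serializable context: fails only when serialized. */
    static boolean badLambdaIsSerializable() {
        ModelContext ctx = new ModelContext();
        Function<Double, Double> f =
            (Function<Double, Double> & Serializable) x -> x * ctx.weights[0];
        return isSerializable(f);
    }

    /** Captures only the primitive it needs: serializes fine. */
    static boolean goodLambdaIsSerializable() {
        double w = 0.5;
        Function<Double, Double> f =
            (Function<Double, Double> & Serializable) x -> x * w;
        return isSerializable(f);
    }

    public static void main(String[] args) {
        System.out.println("bad:  " + badLambdaIsSerializable());
        System.out.println("good: " + goodLambdaIsSerializable());
    }
}
```

Both lambdas compile and run fine locally; the difference only shows up when serialization is attempted, which is why such bugs slip past single-JVM unit tests.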

Re: Tests for ML using binary builds

dmitrievanthony
In reply to this post by aplatonov
Hi Alexey,

I think it's a great idea. Travis + Docker is a very good and cheap
solution, so we could start with it. Regarding the statistics, Travis
allows checking the last build status using a badge, so that shouldn't
be a problem either.

Best regards,
Anton Dmitriev.



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
Re: Tests for ML using binary builds

Ivan Pavlukhin
Alexey,

If problems arise in environments different from the one where the usual
Ignite tests run, then it is definitely a good idea to cover them. And
testing other build kinds and other environments is a good idea as
well. But the particular problem with serialization and peer class
loading is not clear to me. Why are binary builds and Docker needed
there? Why can't multi-JVM tests from the Ignite testing framework
reveal the mentioned problems?

Ideally, I think we should aggregate all failure reporting in a common
place, and for me the TC bot is the best choice. Consequently, it should
most likely be TeamCity.

But all in all, I think we can give it a try according to your proposal
and see how things go.




--
Best regards,
Ivan Pavlukhin
Re: Tests for ML using binary builds

Alexey Platonov
Ivan,
Thanks for your answer. I want to use binary builds explicitly because
they don't share the jars of client code. If we can use multi-JVM tests
with different classpaths, I will use them; such an approach is more
convenient from the TC point of view.

P.S. I use Docker in my prototype just because it is easy for me and for
test cluster management: I can create a Docker image with all configs and
scripts and run an Ignite cluster in a separate network.

Re: Tests for ML using binary builds

daradurvs
Hi, Alexey!

>>  If we can use multi JVM test with
>> different classpaths I will use them - such approach is more convenient
>> from TC point of view.

There is no such ability at the moment; you are only able to specify
additional JVM arguments in
'GridAbstractTest#additionalRemoteJvmArgs'. But it is not very hard
to implement if needed, see 'IgniteNodeRunner'.

We use such an approach in our Compatibility Framework.
BTW, it is possible to use the framework for your goals: prepare and
install the artifacts into the local Maven repository (mvn install),
then call 'startGrid(name, ver)' with your prepared version, e.g.
"2.8-SNAPSHOT".
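To illustrate the multi-JVM idea outside the Ignite test framework, here is a minimal standalone sketch of the underlying mechanism that 'IgniteNodeRunner'-style runners build on: launching a second JVM with an explicitly chosen classpath. Everything here is plain JDK; `SeparateJvmDemo` is a made-up class for the example, not part of Ignite:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SeparateJvmDemo {
    /** Launches mainClass in a fresh JVM with the given classpath; returns its stdout. */
    static String runInSeparateJvm(String classpath, String mainClass, String... args)
        throws Exception {
        // Use the same JVM binary that is running us.
        String javaBin = Paths.get(System.getProperty("java.home"), "bin", "java").toString();

        List<String> cmd = new ArrayList<>();
        cmd.add(javaBin);
        cmd.add("-cp");
        cmd.add(classpath); // The key point: each process can get a different classpath.
        cmd.add(mainClass);
        cmd.addAll(Arrays.asList(args));

        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();

        StringBuilder sb = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null)
                sb.append(line).append('\n');
        }
        p.waitFor();
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0 && "child".equals(args[0])) {
            // This branch runs inside the child JVM.
            System.out.println("hello from child JVM");
            return;
        }
        // Reuse our own classpath here for the demo; a binary-build test would
        // point this at the libs/ directory of the unpacked distribution instead.
        String out = runInSeparateJvm(System.getProperty("java.class.path"),
            "SeparateJvmDemo", "child");
        System.out.print(out);
    }
}
```

Pointing the `-cp` argument at a binary distribution's jars instead of the test classpath is what would make such a test catch the classpath-dependent failures discussed in this thread.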




--
Best Regards, Vyacheslav D.