Exploiting GP GPUs in Ignite ML


Exploiting GP GPUs in Ignite ML

Andrey Kuznetsov
Hi, Igniters!

Machine learning algorithms consume a lot of computational resources, so I'm
curious whether it's possible to exploit general-purpose GPUs in Ignite.
Has anybody thought about this?

--
Best regards,
  Andrey Kuznetsov.

Re: Exploiting GP GPUs in Ignite ML

irudyak
You mean utilizing JCuda for CPU-bound ML tasks?

Igor


Re: Exploiting GP GPUs in Ignite ML

Andrey Kuznetsov
Yes, as one of the possible options.


Re: Exploiting GP GPUs in Ignite ML

Yuriy Babak
Hi Andrey,

This is a very good question, and a bit of a tricky one.

To start with, one can argue that you can already use a GPU in Ignite ML: we
support BLAS via netlib (IGNITE-5278), and netlib, in turn, can be configured
to use NVBLAS, as explained in the library's documentation:
https://github.com/fommil/netlib-java/wiki/NVBLAS
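
For illustration, here is a minimal sketch (not Ignite ML code) of a
netlib-java BLAS call; whether it actually runs on the GPU depends entirely on
the native BLAS that netlib-java binds to at runtime, e.g. NVBLAS preloaded
and configured through nvblas.conf as the wiki above describes:

import com.github.fommil.netlib.BLAS;

public class NetlibGemmSketch {
    public static void main(String[] args) {
        int n = 2;
        // 2x2 matrices in column-major order, as expected by BLAS.
        double[] a = {1, 2, 3, 4};
        double[] b = {5, 6, 7, 8};
        double[] c = new double[n * n];

        // C = 1.0 * A * B + 0.0 * C. The Java code is identical whether the
        // backing native library is a CPU BLAS or NVBLAS; GPU offload (when
        // configured) happens below this call, inside the native library.
        BLAS.getInstance().dgemm("N", "N", n, n, n, 1.0, a, n, b, n, 0.0, c, n);

        for (double v : c)
            System.out.println(v);
    }
}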

However, once you dig deeper you'll see that efficient GPU support is more
complicated, mainly because of the overhead of data transfers between CPU and
GPU memory. Avoiding redundant transfers requires careful design and is likely
to involve changes in a lot of ML Grid code. Tentatively, we plan to integrate
the JCuda library, which exposes a fairly straightforward API for controlling
CPU-GPU data transfers (via JCuda.cudaMalloc and JCuda.cudaFree).
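
To make the data-transfer point concrete, here is a minimal, untested sketch
of the JCuda runtime calls mentioned above (the actual kernel or cuBLAS work
in the middle is omitted); the idea is to allocate once, reuse the device
buffer across many operations, and copy back only when results are needed:

import jcuda.Pointer;
import jcuda.Sizeof;
import jcuda.runtime.JCuda;
import jcuda.runtime.cudaMemcpyKind;

public class JCudaTransferSketch {
    public static void main(String[] args) {
        JCuda.setExceptionsEnabled(true);

        double[] hostData = new double[1_000_000];
        long bytes = (long) hostData.length * Sizeof.DOUBLE;

        // Allocate device memory once and keep the pointer, so the buffer can
        // be reused by several computations instead of re-transferring data.
        Pointer deviceData = new Pointer();
        JCuda.cudaMalloc(deviceData, bytes);

        // Single explicit host -> device transfer.
        JCuda.cudaMemcpy(deviceData, Pointer.to(hostData), bytes,
            cudaMemcpyKind.cudaMemcpyHostToDevice);

        // ... kernels / cuBLAS calls operating on deviceData would go here ...

        // Single explicit device -> host transfer when results are needed.
        JCuda.cudaMemcpy(Pointer.to(hostData), deviceData, bytes,
            cudaMemcpyKind.cudaMemcpyDeviceToHost);

        JCuda.cudaFree(deviceData);
    }
}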

This is something of a design challenge, though, because we would like it to
stay transparent for users who don't have CUDA. We're currently working on a
transition to the Dataset API, which may make this easier (IGNITE-8059 and
other tickets). Once that is completed, I'd like to revisit the question of
GPU integration.
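
Just to illustrate the kind of transparency I mean (the names below are purely
hypothetical, not existing Ignite ML API): the backend could be chosen at
runtime, falling back to the CPU path whenever CUDA is not available on the
node:

public final class BackendSelector {
    /** Hypothetical backend abstraction. */
    public interface Backend {
        String name();
    }

    public static Backend pick() {
        // Users without CUDA silently get the CPU path; nothing in their
        // code or configuration has to change.
        if (cudaPresent())
            return () -> "gpu (JCuda / cuBLAS)";
        return () -> "cpu (netlib-java BLAS)";
    }

    private static boolean cudaPresent() {
        try {
            // JCuda's static initializer loads the native CUDA library and
            // fails on machines where it is not installed.
            Class.forName("jcuda.runtime.JCuda");
            return true;
        }
        catch (Throwable t) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("Selected backend: " + pick().name());
    }
}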

Best regards,
Yury


