Hi, Andrey
This is a very good question, and a bit of a tricky one.
To start with, one can argue that you can already use a GPU in Ignite ML. This
is because we support BLAS via netlib (IGNITE-5278), and netlib, in turn, can
be configured to use NVBLAS, as explained in that library's documentation:
https://github.com/fommil/netlib-java/wiki/NVBLAS
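
For instance, here is a rough sketch (arbitrary matrix sizes, not Ignite code)
of a dense multiply going through netlib-java; as I understand the wiki page
above, if libnvblas is pre-loaded the same dgemm call gets served by the GPU
without touching this code:

    import com.github.fommil.netlib.BLAS;
    import java.util.Arrays;

    public class NvblasCheck {
        public static void main(String[] args) {
            int n = 1024;
            double[] a = new double[n * n];
            double[] b = new double[n * n];
            double[] c = new double[n * n];
            Arrays.fill(a, 1.0);
            Arrays.fill(b, 2.0);

            // Shows which implementation netlib-java picked up
            // (pure-Java F2J fallback or a native system BLAS).
            System.out.println(BLAS.getInstance().getClass().getName());

            // The kind of dgemm call Ignite ML issues via netlib; with NVBLAS
            // pre-loaded over the native BLAS, the GEMM should be offloaded
            // to the GPU transparently.
            BLAS.getInstance().dgemm("N", "N", n, n, n, 1.0, a, n, b, n, 0.0, c, n);
        }
    }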
However, once you dig deeper you'll see that efficient GPU support is more
complicated, mainly because of the overhead involved in data transfers between
CPU and GPU memory. Handling that well takes much more effort, because we need
to design things to avoid redundant data transfers, which is likely to involve
changes in a lot of ML Grid code.
Tentatively, we plan to integrate the JCuda library, which exposes a fairly
straightforward API for controlling CPU-GPU data transfers (via JCuda.cudaMalloc
and JCuda.cudaFree).
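
To give an idea of what that looks like, here is a minimal JCuda sketch of one
explicit transfer cycle (the buffer size and the kernel step are placeholders;
the actual design would keep the device buffer cached across operations to
avoid exactly the redundant copies mentioned above):

    import jcuda.Pointer;
    import jcuda.Sizeof;
    import jcuda.runtime.JCuda;
    import jcuda.runtime.cudaMemcpyKind;

    public class JCudaTransferSketch {
        public static void main(String[] args) {
            float[] host = new float[1_000_000];

            // Allocate device memory and keep the pointer around, so repeated
            // operations do not pay for the host-to-device copy every time.
            Pointer device = new Pointer();
            JCuda.cudaMalloc(device, (long) host.length * Sizeof.FLOAT);
            JCuda.cudaMemcpy(device, Pointer.to(host),
                (long) host.length * Sizeof.FLOAT,
                cudaMemcpyKind.cudaMemcpyHostToDevice);

            // ... launch kernels against `device` here ...

            // Copy the result back and release the device buffer.
            JCuda.cudaMemcpy(Pointer.to(host), device,
                (long) host.length * Sizeof.FLOAT,
                cudaMemcpyKind.cudaMemcpyDeviceToHost);
            JCuda.cudaFree(device);
        }
    }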
This is something of a design challenge, though, because we would like to keep
it transparent for users who don't have CUDA. We are currently working on the
transition to the Dataset API, which may make this easier (IGNITE-8059 and
other tickets). Once that is completed, I would like to revisit GPU
integration.
Best regards,
Yury