Re: 10X decrease in performance with Ignite 2.0.0


yzhdanov
Cross-posting to devlist.

--Yakov

Sergi
According to our benchmarks Ignite 2.0 is not slower for get operation. I
think we need some minimal reproducer that shows the performance
degradation before making any conclusions.

Sergi

2017-05-12 1:10 GMT+03:00 Yakov Zhdanov <[hidden email]>:

> Cross-posting to devlist.
>
> --Yakov
>

Yakov Zhdanov-2
Absolutely agree here. I think we can add a getAll() benchmark and run it
with batch sizes of 5 and 10.
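Something like the harness below could drive such a benchmark. This is a self-contained sketch: the map-backed `fetch` helper stands in for `IgniteCache#getAll` on a warmed-up cache, and the class/variable names are made up for illustration; when wiring up the real benchmark, `fetch` would be replaced by the actual cache call.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

public class GetAllBenchmark {
    // Stand-in for IgniteCache#getAll(Set<K>); swap in cache::getAll when
    // running against a real Ignite cache.
    static Map<UUID, byte[]> fetch(Set<UUID> keys, Map<UUID, byte[]> store) {
        Map<UUID, byte[]> result = new HashMap<>(keys.size());
        for (UUID k : keys) {
            byte[] v = store.get(k);
            if (v != null)
                result.put(k, v);
        }
        return result;
    }

    public static void main(String[] args) {
        // Populate a stand-in store with 10,000 entries.
        Map<UUID, byte[]> store = new HashMap<>();
        List<UUID> keys = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            UUID k = UUID.randomUUID();
            store.put(k, new byte[256]);
            keys.add(k);
        }

        // Time getAll-style lookups with batch sizes 5 and 10.
        for (int batch : new int[] {5, 10}) {
            long start = System.nanoTime();
            int fetched = 0;
            for (int i = 0; i + batch <= keys.size(); i += batch) {
                Set<UUID> slice = new HashSet<>(keys.subList(i, i + batch));
                fetched += fetch(slice, store).size();
            }
            long elapsedUs = (System.nanoTime() - start) / 1_000;
            System.out.println("batch=" + batch + ": " + fetched
                + " entries in " + elapsedUs + " us");
        }
    }
}
```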

Thanks!
--
Yakov Zhdanov, Director R&D
*GridGain Systems*
www.gridgain.com

2017-05-12 10:48 GMT+03:00 Sergi Vladykin <[hidden email]>:

> According to our benchmarks Ignite 2.0 is not slower for get operation. I
> think we need some minimal reproducer that shows the performance
> degradation before making any conclusions.
>
> Sergi
>
> 2017-05-12 1:10 GMT+03:00 Yakov Zhdanov <[hidden email]>:
>
>> Cross-posting to devlist.
>>
>> --Yakov
>>
>
>

dsetrakyan
In reply to this post by yzhdanov
Chris,

After looking at your code, the only slowdown that may have occurred
between 1.9 and 2.0 is in the actual cache "get(...)" operation. As you may
already know, Ignite 2.0 has moved data completely off-heap, so by default
we no longer cache data in deserialized form. However, you can still enable
the on-heap cache, in which case the data will be cached the same way as in
1.9.

What is the average size of the object you store in cache? If it is large,
you have two options:

1. Do not deserialize your objects into classes and work directly with
BinaryObject interface.
2. Turn on on-heap cache.
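For illustration, a sketch of what each option maps to in the Ignite 2.0 API (the cache name, key, and field names here are hypothetical; `setOnheapCacheEnabled` and `withKeepBinary` are the relevant calls):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.configuration.CacheConfiguration;

// Option 2: turn the on-heap cache back on for this cache.
CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("ACache");
ccfg.setOnheapCacheEnabled(true);

// Option 1: read values as BinaryObject and access individual fields
// without deserializing the whole object.
IgniteCache<Object, BinaryObject> binCache =
    ignite.getOrCreateCache(ccfg).<Object, BinaryObject>withKeepBinary();
BinaryObject value = binCache.get(someKey);
Object field = value.field("someField");
```

This is a configuration/API fragment rather than a runnable program: `ignite` and `someKey` would come from your existing setup.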

Will this work for you?

D.

On Fri, May 12, 2017 at 6:53 AM, Chris Berry <[hidden email]> wrote:

> Hi,
>
> I hope this helps.
>
> This is the flow. It is very simple.
> Although the code inside the ComputeJob (executor.compute(request, algType,
> correlationId);) is relatively complex application code.
>
> I hope this code makes sense.
> I had to take the actual code and expunge all of the Domain bits from
> it…
>
> But as far as Ignite is concerned, it is mostly boilerplate.
>
> Thanks,
> -- Chris
>
> =====================================
> Invoke:
>
>     private List<AResponse> executeTaskOnGrid(AComputeTask<ARequest,
> AResponse> computeTask,  List<UUID> uuids) {
>              return
> managedIgnite.getCompute().withTimeout(timeout).execute(computeTask,
> uuids);
>     }
>
> =======================================
> ComputeTask:
>
> public class AComputeTask<TRequest extends ARequest, TResponse>
>         extends ComputeTaskAdapter<Collection<UUID>, List<TResponse>> {
>
>     private final AExecutorType type;
>     private final TRequest rootARequest;
>     private final AlgorithmType algType;
>     private final String correlationId;
>     private IgniteCacheName cacheName;
>
>     @IgniteInstanceResource
>     private Ignite ignite;
>
>     public AComputeTask(AExecutorType type, TRequest request,
> AlgorithmType
> algType,  String correlationId) {
>         this.cacheName = IgniteCacheName.ACache;
>         this.type = type;
>         this.rootARequest = request;
>         this.algType = algType;
>         this.correlationId = correlationId;
>     }
>
>     @Nullable
>     @Override
>     public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode>
> subgrid, @Nullable Collection<UUID> cacheKeys)
>             throws IgniteException {
>         Map<ClusterNode, Collection<UUID>> nodeToKeysMap =
> ignite.<UUID>affinity(cacheName.name()).mapKeysToNodes(cacheKeys);
>         Map<ComputeJob, ClusterNode> jobMap = new HashMap<>();
>         for (Map.Entry<ClusterNode, Collection<UUID>> mapping :
> nodeToKeysMap.entrySet()) {
>             ClusterNode node = mapping.getKey();
>             final Collection<UUID> mappedKeys = mapping.getValue();
>
>             if (node != null) {
>                 ComputeBatchContext context = new
> ComputeBatchContext(node.id(), node.consistentId(), correlationId);
>                 Map<AlgorithmType, UUID[]> nodeRequestUUIDMap =
> Collections.singletonMap(algType, convertToArray(mapping.getValue()));
>                 ARequest nodeARequest = new ARequest(rootARequest,
> nodeRequestUUIDMap);
>                 AComputeJob job = new AComputeJob(type, nodeARequest,
> algType, context);
>                 jobMap.put(job, node);
>             }
>         }
>         return jobMap;
>     }
>
>     private UUID[] convertToArray(Collection<UUID> cacheKeys) {
>         return cacheKeys.toArray(new UUID[cacheKeys.size()]);
>     }
>
>     @Nullable
>     @Override
>     public List<TResponse> reduce(List<ComputeJobResult> results) throws
> IgniteException {
>         List<TResponse> responses = new ArrayList<>();
>         for (ComputeJobResult res : results) {
>             if (res.getException() != null) {
>                 ARequest  request = ((AComputeJob)
> res.getJob()).getARequest();
>
>                 // The entire result failed. So return all as errors
>                 AExecutor<TRequest, TResponse> executor =
> AExecutorFactory.getAExecutor(type);
>                 List<UUID> unitUuids =
> Lists.newArrayList(request.getMappedUUIDs().get(algType));
>                 List<TResponse> errorResponses =
> executor.createErrorResponses(unitUuids.stream(),
> ErrorCode.UnhandledException);
>                 responses.addAll(errorResponses);
>             } else {
>                 List<TResponse> perNode = res.getData();
>                 responses.addAll(perNode);
>             }
>         }
>         return responses;
>     }
> }
>
> ==================================
> ComputeJob
>
> public class AComputeJob<TRequest extends ARequest, TResponse> extends
> ComputeJobAdapter {
>     @Getter
>     private final ExecutorType executorType;
>     @Getter
>     private final TRequest request;
>     @Getter
>     private final AlgorithmType algType;
>     @Getter
>     private final String correlationId;
>     @Getter
>     private final ComputeBatchContext context;
>
>     @IgniteInstanceResource
>     private Ignite ignite;
>     @JobContextResource
>     private ComputeJobContext jobContext;
>
>     public AComputeJob(ExecutorType executorType, TRequest request,
> AlgorithmType algType, ComputeBatchContext context) {
>         this.executorType = executorType;
>         this.request = request;
>         this.algType = algType;
>         this.correlationId = context.getCorrelationId();
>         this.context = context;
>     }
>
>     @Override
>     public Object execute() throws IgniteException {
>         Executor<TRequest, TResponse> executor =
> ExecutorFactory.getExecutor(executorType);
>         return executor.compute(request, algType, correlationId);
>     }
>
>     @Override
>     public void cancel() {
>         //Indicates that the cluster wants us to cooperatively cancel the
> job
>         //Since we expect these to run quickly, not going to actually do
> anything with this right now
>     }
> }
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/10X-decrease-in-performance-with-Ignite-2-0-0-tp12637p12664.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>