Hi community,
I'd like to share my investigation of the subject.

Even if a cache is off-heap and contains no data, it still consumes JVM
heap memory. I'm calling this effect "empty cache memory overhead"
("overhead" for short).

The amount of memory consumed depends on many factors and varies from 1
to 50 MB per cache on every node in the cluster.

There are real systems that use more than 1000 caches within a cluster,
so the heap memory consumed on each node can reach 50 GB or more.

I've found that the overhead mainly depends on these factors:

1) the number of local partitions assigned to the node by the affinity
function;

1.a) the total number of partitions of the affinity function;

1.b) the number of backups;

2) IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE;

3) IGNITE_AFFINITY_HISTORY_SIZE.

After analyzing heap dumps and the sources, I've found the following
countable objects on which the overhead depends:

1) First group.

GridDhtPartitionTopologyImpl = cache count

GridDhtLocalPartition = cache count * local partition count

GridCircularBuffer$Item = cache count * local partition count * item
factor (default 32)

Local partition count = affinity function total partitions / node count *
(1 + number of backups)

Item factor = the map capacity needed to store
IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE / affinity function partition
count entries, with a minimum of 20.

Real values:

GridDhtPartitionTopologyImpl = 1000
Affinity function total partitions = 1024
Node count = 16
Number of backups = 3
Local partition count = 256
GridDhtLocalPartition = 256_000
GridCircularBuffer$Item = 8_192_000

2) Second group.

GridAffinityAssignmentCache = cache count * node count

GridAffinityAssignment = cache count * node count * assignment factor

The assignment factor depends on the topology version and
IGNITE_AFFINITY_HISTORY_SIZE; the default is 6-7.

Real values:

GridAffinityAssignmentCache = 16_000
GridAffinityAssignment = 112_000

I think the implementation should be changed so that these object counts
depend on the cache data size, and small (or empty) caches should be made
as lightweight as possible.

--
Thanks,
Alexandr Kuramshin
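To make the arithmetic above easy to check, here is a minimal,
self-contained Java sketch of the object-count formulas from the message.
The class name is hypothetical and the hard-coded inputs are simply the
"real values" listed above; it estimates object counts only, not byte
sizes.

    public class EmptyCacheOverheadEstimator {
        public static void main(String[] args) {
            // Parameters taken from the "real values" in the message above.
            int cacheCnt = 1000;     // number of caches in the cluster
            int totalParts = 1024;   // affinity function total partitions
            int nodeCnt = 16;        // nodes in the cluster
            int backups = 3;         // number of backups
            int itemFactor = 32;     // GridCircularBuffer capacity per partition (default)
            int assignFactor = 7;    // driven by IGNITE_AFFINITY_HISTORY_SIZE (default 6-7)

            // Local partitions per node = total partitions / node count * (1 + backups).
            int locParts = totalParts / nodeCnt * (1 + backups);

            // First group: partition topology objects held on-heap per node.
            long topologies = cacheCnt;                       // GridDhtPartitionTopologyImpl
            long dhtLocParts = (long) cacheCnt * locParts;    // GridDhtLocalPartition
            long bufItems = dhtLocParts * itemFactor;         // GridCircularBuffer$Item

            // Second group: affinity assignment history.
            long assignCaches = (long) cacheCnt * nodeCnt;    // GridAffinityAssignmentCache
            long assignments = assignCaches * assignFactor;   // GridAffinityAssignment

            System.out.println("Local partitions per node:    " + locParts);      // 256
            System.out.println("GridDhtPartitionTopologyImpl: " + topologies);    // 1000
            System.out.println("GridDhtLocalPartition:        " + dhtLocParts);   // 256000
            System.out.println("GridCircularBuffer$Item:      " + bufItems);      // 8192000
            System.out.println("GridAffinityAssignmentCache:  " + assignCaches);  // 16000
            System.out.println("GridAffinityAssignment:       " + assignments);   // 112000
        }
    }

With these inputs the sketch reproduces exactly the figures quoted in the
message, which shows how a cluster with ~1000 empty caches can end up
holding millions of small bookkeeping objects on every node's heap.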
Alex G.,
Will this still be relevant in 2.0, when we’re expecting to release the
page memory?
https://issues.apache.org/jira/browse/IGNITE-3477

—
Denis
Yes, it is relevant. None of these structures are replaced with page
memory.

--
Alexey Goncharuk
Lead Architect
GridGain Systems, Inc.
www.gridgain.com
Alexandr,
Please feel free to create a JIRA ticket. Hope that someone will start
working on it soon.

—
Denis