Same Affinity For Same Key On All Caches


Same Affinity For Same Key On All Caches

Alper Tekinalp
Hi all.

Is it possible to configure affinities in a way that the partition for the same
key will be on the same node? So calling
ignite.affinity(CACHE).mapKeyToNode(KEY).id() with the same key for any cache
will return the same node id. Is that possible with a configuration etc.?
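
To make it concrete, a sketch like the following is what I have in mind (the
cache names and the key are just placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class SameNodeCheck {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Placeholder key and cache names, only to compare the mappings.
            Object key = 42;

            ignite.getOrCreateCache("cacheA").put(key, "a");
            ignite.getOrCreateCache("cacheB").put(key, "b");

            // What I expect: both calls print the same node id for the same key.
            System.out.println(ignite.affinity("cacheA").mapKeyToNode(key).id());
            System.out.println(ignite.affinity("cacheB").mapKeyToNode(key).id());
        }
    }
}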

--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir
34758 İSTANBUL

Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr

Re: Same Affinity For Same Key On All Caches

Andrew Mashenkov
Hi Alper,

You can implement your own AffinityFunction to achieve this.
In the AF you need to implement 2 mappings: key to partition and partition to
node.

The first mapping is trivial, but the second is not.
Even if you manage to do it, there is no way to choose which node will be
primary and which will be backup for a partition,
and that can be an issue.
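
Just to show the shape of such an AF, here is a rough sketch (not production
code: the partition-to-node step is deliberately naive, ignores backups and
does not try to keep primaries stable):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.UUID;

import org.apache.ignite.cache.affinity.AffinityFunction;
import org.apache.ignite.cache.affinity.AffinityFunctionContext;
import org.apache.ignite.cluster.ClusterNode;

/** Sketch of a custom AF: same key -> same partition -> same node. */
public class NaiveAffinityFunction implements AffinityFunction {
    private static final int PARTS = 128;

    @Override public void reset() {
        // No-op: there is no internal state to reset.
    }

    @Override public int partitions() {
        return PARTS;
    }

    /** Key to partition: trivial, based only on the key hash. */
    @Override public int partition(Object key) {
        return Math.abs(key.hashCode() % PARTS);
    }

    /** Partition to node: naive round-robin over the sorted topology, no backups. */
    @Override public List<List<ClusterNode>> assignPartitions(AffinityFunctionContext affCtx) {
        List<ClusterNode> nodes = new ArrayList<>(affCtx.currentTopologySnapshot());

        // Sort by node id so that every node computes the same assignment.
        nodes.sort((n1, n2) -> n1.id().compareTo(n2.id()));

        List<List<ClusterNode>> res = new ArrayList<>(PARTS);

        for (int p = 0; p < PARTS; p++)
            res.add(Collections.singletonList(nodes.get(p % nodes.size())));

        return res;
    }

    @Override public void removeNode(UUID nodeId) {
        // No-op: the assignment is recomputed from the topology snapshot on each change.
    }
}

Note that this sketch does not address the primary/backup problem mentioned above.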


--
Best regards,
Andrey V. Mashenkov

Re: Same Affinity For Same Key On All Caches

Valentin Kulichenko
Actually, this should work out of the box, as long as the same
affinity function is configured for all caches (which is true with the default
settings).
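
For example (cache names are arbitrary; I set the rendezvous function explicitly
only to stress that both caches use the same one):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class SameAffinityConfig {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Same affinity function (same type and partition count) on both caches.
            CacheConfiguration<Object, Object> cfg1 = new CacheConfiguration<>("cache1");
            cfg1.setAffinity(new RendezvousAffinityFunction(false, 128));

            CacheConfiguration<Object, Object> cfg2 = new CacheConfiguration<>("cache2");
            cfg2.setAffinity(new RendezvousAffinityFunction(false, 128));

            ignite.getOrCreateCache(cfg1);
            ignite.getOrCreateCache(cfg2);

            // With identical affinity settings the same key should map to the same node.
            Object key = "someKey";
            System.out.println(ignite.affinity("cache1").mapKeyToNode(key).id());
            System.out.println(ignite.affinity("cache2").mapKeyToNode(key).id());
        }
    }
}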

Andrey, am I missing something?

-Val


Re: Same Affinity For Same Key On All Caches

Andrew Mashenkov
Val,

Yes, with the same affinity function, entries with the same key should be stored
on the same nodes.
As far as I know, the primary node is assigned automatically by Ignite, and I'm
not sure there is a guarantee that two entries from different caches with the
same key will have the same primary and backup nodes.
So a get() for the first key can be local while a get() for the second key will
be remote.
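
I.e. a check along these lines (cache names are arbitrary) could show whether the
primary/backup order is the same for both caches:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class PrimaryBackupCheck {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("cache1");
            ignite.getOrCreateCache("cache2");

            Object key = "someKey";

            // The returned collection is ordered: the first node is the primary,
            // the rest are backups. The question is whether this order matches
            // across the two caches, not only the node set.
            System.out.println(ignite.affinity("cache1").mapKeyToPrimaryAndBackups(key));
            System.out.println(ignite.affinity("cache2").mapKeyToPrimaryAndBackups(key));
        }
    }
}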





--
Best regards,
Andrey V. Mashenkov

Re: Same Affinity For Same Key On All Caches

dsetrakyan
If you use the same (or default) configuration for the affinity, then the
same key in different caches will always end up on the same node. This is
guaranteed.

D.


Re: Same Affinity For Same Key On All Caches

Alper Tekinalp
Hi.

Thanks for your comments. Let me investigate the issue deeper.

Regards.



Re: Same Affinity For Same Key On All Caches

Alper Tekinalp
Hi.

As I investigated, the issue occurs when different nodes create the caches.

Say I have 2 nodes, node1 and node2, and 2 caches, cache1 and cache2. If I create cache1 on node1 and create cache2 on node2 with the same FairAffinityFunction with the same partition count, keys can map to different nodes in different caches.

You can find my test code and results as attachments.

So is that a bug? Is there a way to force the same mappings although the caches were created on different nodes?
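
Roughly, the scenario in my test is like the following sketch (simplified; the
actual test and its output are in the attachments, and I am assuming
FairAffinityFunction from org.apache.ignite.cache.affinity.fair and the 1.x-era
setGridName() here):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.fair.FairAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class FairAffinityRepro {
    public static void main(String[] args) {
        // Two server nodes in one JVM, just for the sketch.
        Ignite node1 = Ignition.start(new IgniteConfiguration().setGridName("node1"));
        Ignite node2 = Ignition.start(new IgniteConfiguration().setGridName("node2"));

        // cache1 is created from node1, cache2 from node2, same FairAffinityFunction(128).
        node1.getOrCreateCache(new CacheConfiguration<>("cache1")
            .setAffinity(new FairAffinityFunction(128)));
        node2.getOrCreateCache(new CacheConfiguration<>("cache2")
            .setAffinity(new FairAffinityFunction(128)));

        // Compare the mapping of the same keys in both caches.
        for (int key = 0; key < 100; key++) {
            Object n1 = node1.affinity("cache1").mapKeyToNode(key).id();
            Object n2 = node1.affinity("cache2").mapKeyToNode(key).id();

            if (!n1.equals(n2))
                System.out.println("Key " + key + " maps to different nodes: " + n1 + " vs " + n2);
        }

        Ignition.stopAll(true);
    }
}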



Attachments: node1_output.txt (4K), node0_output.txt (4K)

Re: Same Affinity For Same Key On All Caches

Andrew Mashenkov
Crossposting to the dev list.

I've made a test.
It looks OK for Rendezvous AF: the partition distribution for caches with similar settings and the same Rendezvous AF stays the same.
But the FairAF partition distribution can differ for two caches when one was created before rebalancing and the other after it.

So, collocation is not guaranteed for the same key in similar caches with the same Fair AF.

PFA repro.

Is it a bug?

On Tue, Feb 28, 2017 at 3:38 PM, Alper Tekinalp <[hidden email]> wrote:
Hi.

I guess I was wrong about the problem. The issue does not occur when different nodes create the caches but when partitions are reassigned.

Say I created cache1 on node1, then added node2. Partitions for cache1 will be reassigned. Then I create cache2 (regardless of node). Partition assignments for cache1 and cache2 are not the same.

When partitions are reassigned, ctx.previousAssignment(part) refers to the node that created the cache:

previousAssignment:
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]

assignment:
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]

backups: 1

tiers: 2
partition set for tier:0
PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=16, parts=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]]
PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
partition set for tier:1
PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]

Full mapping for partitions:
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]

There are no pendings for tier 0, then it tries to rebalance partitions and the mapping becomes:

Full mapping for partitions:
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42]

After going through the tier 1 pendings, which is all of them, the mapping becomes:


Full mapping for partitions:
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]

But if I destroy and recreate the cache, the previous assignments are all null:

previousAssignment:
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null

assignment:
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]

backups: 1

tiers: 2
partition set for tier:0
PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]
partition set for tier:1
PartSet [nodeId=5dff841e-c578-476d-8996-39618d39790b, size=0, parts=[]]
PartSet [nodeId=192f1ddb-89ed-417f-91ae-4cd16b5b1b69, size=0, parts=[]]

Full mapping for partitions:
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]


And after that it assigns partitions in a round-robin fashion:

Full mapping for partitions:
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.28]

And after tier 1 assignments:

Full mapping for partitions:
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]
[127.0.0.1, 192.168.1.42] => [127.0.0.1, 192.168.1.28]
[127.0.0.1, 192.168.1.28] => [127.0.0.1, 192.168.1.42]


That is what I found while debugging. Sorry for the verbose mail.


On Tue, Feb 28, 2017 at 9:56 AM, Alper Tekinalp <[hidden email]> wrote:
Hi Val,

We are using the fair affinity function because we want to keep data more balanced among nodes. When I replaced "new FairAffinityFunction(128)" with "new RendezvousAffinityFunction(false, 128)" I could not reproduce the problem.
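
In other words, the only change on our side is the affinity function in the
cache configuration, roughly:

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class AffinitySwap {
    /** Same cache configuration as before; only the affinity function is swapped. */
    static CacheConfiguration<Object, Object> cacheConfig(String name) {
        return new CacheConfiguration<Object, Object>(name)
            // was: .setAffinity(new FairAffinityFunction(128))
            .setAffinity(new RendezvousAffinityFunction(false, 128));
    }
}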
 

On Tue, Feb 28, 2017 at 7:15 AM, vkulichenko <[hidden email]> wrote:
Andrey,

Is there an explanation for this? If this all is true, it sounds like a bug
to me, and a pretty serious one.

Alper, what is the reason for using the fair affinity function? Do you have the
same behavior with rendezvous (the default one)?

-Val












--
Best regards,
Andrey V. Mashenkov

Re: Same Affinity For Same Key On All Caches

Valentin Kulichenko
Adding back the dev list.

Folks,

Are there any opinions on the problem discussed here? Do we really need
FairAffinityFunction if it can't guarantee cross-cache collocation?

-Val

On Thu, Mar 2, 2017 at 2:41 PM, vkulichenko <[hidden email]>
wrote:

> Hi Alex,
>
> I see your point. Can you please outline its advantages vs the rendezvous
> function?
>
> In my view, the issue discussed here makes it pretty much useless in the vast
> majority of use cases, and very error-prone in all others.
>
> -Val
>

Re: Same Affinity For Same Key On All Caches

dmagda
What??? Unbelievable. It sounds like a design flaw to me. Any ideas how to fix?


Denis
