Dmitriy, you have agreed with me in the old thread, and now you don't?
Binarylizable (current) is longer than Binarizable (proposed).

On Wed, Jul 20, 2016 at 10:24 AM, Dmitriy Setrakyan <[hidden email]> wrote:

>> On Jul 20, 2016, at 9:17 AM, Pavel Tupitsyn <[hidden email]> wrote:
>>
>> How about renaming the Binarylizable interface?
>> http://apache-ignite-developers.2346864.n4.nabble.com/Naming-Binarylizable-td4592.html
>
> Pavel, I would not rename. The name you are suggesting is very hard to
> pronounce.
>
>>> On Sat, Jul 16, 2016 at 10:25 AM, Sergi Vladykin <[hidden email]> wrote:
>>>
>>> Alexey K.,
>>>
>>> No problem, here it is: https://issues.apache.org/jira/browse/IGNITE-3488
>>>
>>> Sergi
>>>
>>>> On Sat, Jul 16, 2016 at 2:00 AM, Valentin Kulichenko <[hidden email]> wrote:
>>>>
>>>> Folks,
>>>>
>>>> I created one more ticket related to SQL:
>>>> https://issues.apache.org/jira/browse/IGNITE-3487. It's a usability
>>>> issue that pops up on the user forum every now and then. Since it's a
>>>> compatibility-breaking change, it looks like a good candidate for 2.0.
>>>>
>>>> -Val
>>>>
>>>>> On Fri, Jul 15, 2016 at 11:56 AM, Alexey Kuznetsov <[hidden email]> wrote:
>>>>>
>>>>> Sergi, that was my idea to drop nulls, but I have limited access to
>>>>> the internet (I'm on vacation). Could you create the issue in JIRA?
>>>>>
>>>>> Thanks.
>>>>>
>>>>> Alexey Kuznetsov
>>>>>
>>>>>> On Jul 15, 2016 at 15:17, "Sergi Vladykin" <[hidden email]> wrote:
>>>>>>
>>>>>> Huge +1 for dropping support for null in all names, not only for
>>>>>> cache names. Do we have a ticket for this one?
>>>>>>
>>>>>> Sergi
>>>>>>
>>>>>>> On Fri, Jul 15, 2016 at 2:00 PM, Andrey Velichko <[hidden email]> wrote:
>>>>>>>
>>>>>>>> On 15.07.2016 0:31, Dmitriy Setrakyan wrote:
>>>>>>>>
>>>>>>>>> On Fri, Jul 15, 2016 at 12:26 AM, AndreyVel <[hidden email]> wrote:
>>>>>>>>>
>>>>>>>>> A good feature may be an aggregated cache, an analog of a
>>>>>>>>> materialized view in a DBMS. An aggregated cache is great for
>>>>>>>>> performance (KPIs, analytical reports).
>>>>>>>>
>>>>>>>> Do you mean a copy of the aggregated data in another cache? What
>>>>>>>> happens when the data in the original caches is updated?
>>>>>>>
>>>>>>> Yes, aggregated data can be stored in another cache, and an embedded
>>>>>>> aggregating cache can be updated sync/async. Aggregating out of the
>>>>>>> box has better performance than creating custom event listeners.
>>>>>>>
>>>>>>> If a cache entry is updated/deleted, the aggregate listener can get
>>>>>>> two values: old and new.
I am allowed to flip-flop on my opinions every now and then :)
> On Jul 20, 2016, at 11:06 AM, Pavel Tupitsyn <[hidden email]> wrote:
>
> Dmitriy, you have agreed with me in the old thread, and now you don't?
> Binarylizable (current) is longer than Binarizable (proposed).
In reply to this post by Alexey Goncharuk
Guys, I think we can also split event notification for user listeners and
internal system listeners. I have been seeing a lot of issues caused by
heavy or blocking operations in user-defined listeners. These may block
internal component notification (e.g. on a discovery event), causing the
topology to hang.

--Yakov

2016-06-25 2:42 GMT+03:00 Alexey Goncharuk <[hidden email]>:

> Folks,
>
> Recently I have seen a couple of emails suggesting tasks/improvements
> that we cannot do in 1.x releases due to API compatibility reasons, so
> they are postponed to 2.0. I would like to keep track of these tasks in
> some way in our Jira to make sure we do not have anything obsolete when
> it comes to the next major version release.
>
> My question for now is: how should we track such tasks? Should it be a
> label, a parent task with subtasks, something else?
>
> I would go with a label + release version.
>
> --AG
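Yakov's proposal can be sketched in plain Java. The class below is purely illustrative (it is not Ignite's actual event machinery, and all names here are made up): internal system listeners run synchronously on the notification thread, while user-defined listeners are handed off to a separate pool, so a slow or blocking user callback cannot stall internal component notification.

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.function.Consumer;

// Hypothetical sketch, not Ignite internals: system listeners are notified
// synchronously, user listeners are offloaded to a dedicated thread pool.
public class EventBus {
    private final List<Consumer<String>> sysListeners = new CopyOnWriteArrayList<>();
    private final List<Consumer<String>> userListeners = new CopyOnWriteArrayList<>();
    private final ExecutorService userPool = Executors.newFixedThreadPool(2);

    public void addSystemListener(Consumer<String> l) { sysListeners.add(l); }
    public void addUserListener(Consumer<String> l) { userListeners.add(l); }

    /** Fires an event; returns as soon as all system listeners have run. */
    public void fire(String evt) {
        // Internal components (e.g. discovery event processing) are trusted
        // not to block, so they run on the calling thread.
        for (Consumer<String> l : sysListeners)
            l.accept(evt);

        // User-defined listeners may be heavy or blocking, so they cannot
        // delay internal notification: they run on the separate pool.
        for (Consumer<String> l : userListeners)
            userPool.submit(() -> l.accept(evt));
    }

    public void shutdown() { userPool.shutdown(); }
}
```

With this split, a user listener that blocks indefinitely would tie up only the user pool; system listeners on the notification thread still complete.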
On Wed, Jul 27, 2016 at 11:36 AM, Yakov Zhdanov <[hidden email]> wrote:

> Guys, I think we can also split event notification for user listeners and
> internal system listeners. I have been seeing a lot of issues caused by
> some heavy or blocking operations in user-defined listeners.

Sure. There are a lot of features being added. Would be nice to assign a
release manager for Ignite 2.0 and document all the discussed features on
the wiki.
One more point.

I insist that we stop using the marshaller and meta caches and switch to
spreading this info via custom discovery events.

--Yakov
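For context, a minimal sketch of the alternative Yakov describes (hypothetical names, not Ignite's actual implementation): instead of storing marshaller/metadata mappings in a replicated system cache, each node keeps a local registry that is updated from messages piggybacked on custom discovery events, so lookups are purely local and never trigger a distributed cache operation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: metadata spread via discovery messages instead of a
// system cache. Updates arrive on the discovery thread; reads are local and
// never block on the network.
public class LocalMetadataRegistry {
    private final Map<Integer, String> typeIdToName = new ConcurrentHashMap<>();

    /** Called when a custom discovery message carrying metadata arrives. */
    public void onDiscoveryMetadataMessage(int typeId, String typeName) {
        typeIdToName.put(typeId, typeName);
    }

    /** Local lookup; no distributed cache.get(), hence no deadlock risk. */
    public String typeName(int typeId) {
        return typeIdToName.get(typeId);
    }
}
```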
On Mon, Aug 1, 2016 at 3:46 AM, Yakov Zhdanov <[hidden email]> wrote:

> One more point.
>
> I insist that we stop using the marshaller and meta caches and switch to
> spreading this info via custom discovery events.

Do we have a ticket explaining why this needs to be done?
Not yet. The thing is that our recent experience showed that it was not a
very good idea to go with caches. E.g. when you try to deserialize inside a
continuous query listener on the client side, it implicitly calls
cache.get(), which in turn may cause a deadlock under some circumstances.

--Yakov
Let’s add this [1] issue to the list, because it requires significant work
on the side of the metrics SPI.

[1] https://issues.apache.org/jira/browse/IGNITE-3495

--
Denis
I remember a couple more things for 2.0.

How about dropping the **ignite-scalar** module in Ignite 2.0?

And maybe drop the **ignite-spark-2.10** module, since **Spark** 2 is on
**Scala 2.11**.

--
Alexey Kuznetsov
GridGain Systems
www.gridgain.com
On Fri, Aug 5, 2016 at 2:46 AM, Alexey Kuznetsov <[hidden email]> wrote:

> I remember a couple more things for 2.0.
>
> How about dropping the **ignite-scalar** module in Ignite 2.0?

Why?

> And maybe drop the **ignite-spark-2.10** module, since **Spark** 2 is on
> **Scala 2.11**.

I would drop it only if it is difficult to support. Do we know what kind of
impact it will have on the community? Is anyone still using 2.10?
Hi,

With the approach of allowing no more than one type per cache, cache usage
becomes similar to table usage in databases. So I suppose it would be good
to introduce a database/schema-like concept for caches. It could be
implemented as non-default behavior for backward compatibility.

--
Sergey Kozlov
GridGain Systems
www.gridgain.com
I'm aware of this issue. My plan was to allow setting the same schema name
on different caches using CacheConfiguration.setSqlSchema(...). This way we
will have separate caches, but from the SQL point of view the respective
tables will reside in the same schema. No need to introduce new concepts.

Sergi

2016-08-11 17:24 GMT+03:00 Sergey Kozlov <[hidden email]>:

> So I suppose it would be good to introduce a database/schema-like concept
> for caches. It may be implemented as non-default behavior for backward
> compatibility.
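A sketch of what Sergi's proposal might look like in Ignite's Spring XML configuration, assuming a `sqlSchema` property backing `CacheConfiguration.setSqlSchema(...)` (cache names and the schema name here are illustrative):

```xml
<!-- Illustrative only: two separate caches whose SQL tables would both
     reside in the "MY_DB" schema, per the proposed sqlSchema property. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="PersonCache"/>
    <property name="sqlSchema" value="MY_DB"/>
</bean>

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="OrganizationCache"/>
    <property name="sqlSchema" value="MY_DB"/>
</bean>
```

Under this scheme, SQL queries could address both tables through a single schema (e.g. MY_DB) even though the data lives in two distinct caches.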
I mean not only SQL features for caches. Single type per cache definitely
reduces number of caches for regular user and grouping caches will help to
manage caches in grid.

On Thu, Aug 11, 2016 at 5:41 PM, Sergi Vladykin <[hidden email]> wrote:

> I'm aware of this issue. My plan was to allow setting the same schema name
> to different caches using CacheConfiguration.setSqlSchema(...). This way we
> will have separate caches, but from the SQL point of view the respective
> tables will reside in the same schema. No need to introduce new concepts.
>
> Sergi
>
> 2016-08-11 17:24 GMT+03:00 Sergey Kozlov <[hidden email]>:
>
> > Hi,
> >
> > Due to the approach of disallowing more than one type per cache, cache
> > use becomes similar to table use in databases. So I suppose it would be
> > good to introduce a database/schema-similar concept for caches. It may
> > be implemented as a non-default behavior for backward compatibility.
> >
> > On Sat, Aug 6, 2016 at 1:04 AM, Dmitriy Setrakyan <[hidden email]> wrote:
> >
> > > On Fri, Aug 5, 2016 at 2:46 AM, Alexey Kuznetsov <[hidden email]> wrote:
> > >
> > > > I remember a couple more things for 2.0.
> > > >
> > > > How about dropping the **ignite-scalar** module in Ignite 2.0?
> > >
> > > Why?
> > >
> > > > And maybe drop **ignite-spark-2.10** module support, as **Spark** 2
> > > > is on **scala 2.11**.
> > >
> > > I would drop it only if it is difficult to support. Do we know what
> > > kind of impact it will have on the community? Anyone still using 2.10?
> > >
> > > > On Tue, Aug 2, 2016 at 11:09 PM, Denis Magda <[hidden email]> wrote:
> > > >
> > > > > Let's add this [1] issue to the list because it requires
> > > > > significant work on the side of the metrics SPI.
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/IGNITE-3495
> > > > >
> > > > > — Denis
> > > > >
> > > > > > On Aug 2, 2016, at 12:45 AM, Yakov Zhdanov <[hidden email]> wrote:
> > > > > >
> > > > > > Not yet. The thing is that our recent experience showed that it
> > > > > > was not a very good idea to go with caches. E.g. when you try to
> > > > > > deserialize inside a continuous query listener on the client
> > > > > > side, it implicitly calls cache.get(), which in turn may cause a
> > > > > > deadlock under some circumstances.
> > > > > >
> > > > > > --Yakov
> > > > > >
> > > > > > 2016-08-02 2:41 GMT+03:00 Dmitriy Setrakyan <[hidden email]>:
> > > > > >
> > > > > > > On Mon, Aug 1, 2016 at 3:46 AM, Yakov Zhdanov <[hidden email]> wrote:
> > > > > > >
> > > > > > > > One more point.
> > > > > > > >
> > > > > > > > I insist on stopping the use of the marshaller and meta
> > > > > > > > caches and switching to spreading this info via custom
> > > > > > > > discovery events.
> > > > > > >
> > > > > > > Do we have a ticket explaining why this needs to be done?
> > > > > > >
> > > > > > > > 2016-07-27 19:57 GMT+03:00 Dmitriy Setrakyan <[hidden email]>:
> > > > > > > >
> > > > > > > > > On Wed, Jul 27, 2016 at 11:36 AM, Yakov Zhdanov <[hidden email]> wrote:
> > > > > > > > >
> > > > > > > > > > Guys, I think we can also split event notification for
> > > > > > > > > > user listeners and internal system listeners. I have been
> > > > > > > > > > seeing a lot of issues caused by heavy or blocking
> > > > > > > > > > operations in user-defined listeners. This may block
> > > > > > > > > > internal component notification (e.g. on a discovery
> > > > > > > > > > event), causing topology hangs.
> > > > > > > > >
> > > > > > > > > Sure. There are a lot of features being added. Would be
> > > > > > > > > nice to assign a release manager for Ignite 2.0 and
> > > > > > > > > document all the discussed features on the Wiki.
> > > > > > > > >
> > > > > > > > > > 2016-06-25 2:42 GMT+03:00 Alexey Goncharuk <[hidden email]>:
> > > > > > > > > >
> > > > > > > > > > > Folks,
> > > > > > > > > > >
> > > > > > > > > > > Recently I have seen a couple of emails suggesting
> > > > > > > > > > > tasks/improvements that we cannot do in 1.x releases
> > > > > > > > > > > due to API compatibility reasons, so they are postponed
> > > > > > > > > > > to 2.0. I would like to keep track of these tasks in
> > > > > > > > > > > some way in our Jira to make sure we do not have
> > > > > > > > > > > anything obsolete when it comes to the next major
> > > > > > > > > > > version release.
> > > > > > > > > > >
> > > > > > > > > > > My question for now is how should we track such tasks?
> > > > > > > > > > > Should it be a label, a parent task with subtasks,
> > > > > > > > > > > something else?
> > > > > > > > > > >
> > > > > > > > > > > I would go with a label + release version.
> > > > > > > > > > >
> > > > > > > > > > > --AG
> > > >
> > > > --
> > > > Alexey Kuznetsov
> > > > GridGain Systems
> > > > www.gridgain.com

--
Sergey Kozlov
GridGain Systems
www.gridgain.com
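Sergi's proposal above can be sketched as a configuration fragment. A minimal sketch, assuming `CacheConfiguration.setSqlSchema(...)` as he describes it; the cache and schema names are illustrative, not from the thread:

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class SameSchemaSketch {
    // Two independent caches whose SQL tables would land in one schema, "APP".
    // Cache lifecycle and configuration stay per-cache; only SQL naming is shared.
    static CacheConfiguration<?, ?>[] caches() {
        CacheConfiguration<Long, Object> accounts = new CacheConfiguration<>("accounts");
        accounts.setSqlSchema("APP");

        CacheConfiguration<Long, Object> orders = new CacheConfiguration<>("orders");
        orders.setSqlSchema("APP");

        // SQL can then address both tables as APP.ACCOUNTS and APP.ORDERS
        // without introducing any new grouping concept.
        return new CacheConfiguration<?, ?>[] {accounts, orders};
    }
}
```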
Sergey, I believe you mean "increase" instead of "reduce"?

How grouping will help? To do some operation, for example, clear a group of
caches at once?

On Aug 11, 2016, at 22:06, Sergey Kozlov <[hidden email]> wrote:

> I mean not only SQL features for caches. Single type per cache definitely
> reduces number of caches for regular user and grouping caches will help to
> manage caches in grid.
Alexey,

You're right. Of course I meant growth of the number of caches.

I see a few points here:

1. If a grid is used by various applications, the cache names may overlap
(like "accounts") and the applications need to use prefixed names, etc.
2. To clear or destroy caches I need to know their names (or iterate over
the caches, but I'm not sure that is supported by the API). For caches
belonging to the same group, we could destroy/clear them with a single
operation, without knowledge of the cache names.
3. A cache group can have cache attributes which will be inherited by a
cache created in that group (like eviction policy or cache mode).
4. No reason to add a specific feature like SqlSchema if it is applicable
for regular caches as well.

On Thu, Aug 11, 2016 at 6:58 PM, Alexey Kuznetsov <[hidden email]> wrote:

> Sergey, I believe you mean "increase" instead of "reduce"?
>
> How grouping will help? To do some operation, for example, clear a group
> of caches at once?

--
Sergey Kozlov
GridGain Systems
www.gridgain.com
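The group semantics Sergey lists (points 2 and 3) can be modeled in plain Java. This is a hypothetical sketch of the proposal, not an Ignite API (Ignite later shipped a related, narrower cache-group feature); all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a "cache group": a template whose attributes are inherited by
// caches created in the group, and which can destroy all member caches at
// once without knowing their names. Hypothetical, for illustration only.
class CacheGroupSketch {
    final String name;
    final String cacheMode;      // e.g. "PARTITIONED"
    final int evictionMaxSize;   // stand-in for an eviction policy setting
    final Map<String, Map<String, Object>> caches = new HashMap<>();

    CacheGroupSketch(String name, String cacheMode, int evictionMaxSize) {
        this.name = name;
        this.cacheMode = cacheMode;
        this.evictionMaxSize = evictionMaxSize;
    }

    // Point 3: a new cache inherits the group's attributes.
    Map<String, Object> createCache(String cacheName) {
        Map<String, Object> cfg = new HashMap<>();
        cfg.put("cacheMode", cacheMode);
        cfg.put("evictionMaxSize", evictionMaxSize);
        caches.put(cacheName, cfg);
        return cfg;
    }

    // Point 2: destroy every cache in the group in one operation,
    // without knowledge of the individual cache names.
    int destroyAll() {
        int n = caches.size();
        caches.clear();
        return n;
    }
}
```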
On Thu, Aug 11, 2016 at 7:25 AM, Sergey Kozlov <[hidden email]> wrote:

> I see a few points here:
>
> 1. If a grid is used by various applications, the cache names may overlap
> (like "accounts") and the applications need to use prefixed names, etc.
> 2. To clear or destroy caches I need to know their names (or iterate over
> the caches, but I'm not sure that is supported by the API). For caches
> belonging to the same group, we could destroy/clear them with a single
> operation, without knowledge of the cache names.
> 3. A cache group can have cache attributes which will be inherited by a
> cache created in that group (like eviction policy or cache mode).
> 4. No reason to add a specific feature like SqlSchema if it is applicable
> for regular caches as well.

Sergey K, setting the same SQL schema for multiple caches, as proposed by
Sergi, solves a different problem of having too many SQL schemas due to too
many different caches. I think Sergi proposed a good solution for it.
There is one more use case where several types per cache can be useful (I
know that it's utilized by some of our users). The use case is the following: transactional updates with write-behind and foreign key constraints on the database. In case data within transaction is collocated and all types are in the same cache, it works, because there is only one write-behind queue. Once we split different types into different caches, there is no guarantee that objects will be written in the proper order and that the constraints will not be violated. However, I think this is not a very clean way to achieve the result. Probably we should just provide better support for this use case in 2.0. Basically, we somehow need to allow to share a single write-behind queue between different caches. Any thoughts? -Val On Thu, Aug 11, 2016 at 10:40 AM, Dmitriy Setrakyan <[hidden email]> wrote: > On Thu, Aug 11, 2016 at 7:25 AM, Sergey Kozlov <[hidden email]> > wrote: > > > Alexey > > > > You're right. Of course I meant growth of caches number. > > > > I see a few points here: > > > > 1. If a grid used by various applications the cache names may overlap > (like > > "accounts") and the application needs to use prefixed names and etc. > > 2. For clear or destory caches I need to know their names (or iterate > over > > caches but I'm not sure that it is supported by API). For destroy/clear > > caches belonging to same group we will do it by single operation without > > knowledge of cache names. > > 3. Cache group can have cache attributes which will be inherited by a > cache > > created in that group (like eviction policy or cache mode). > > 4. No reason to add specific feature like SqlShema if it applicable for > > regular caches as well. > > > > Sergey K, setting the same SQL schema for multiple caches, as proposed by > Sergi, solves a different problem of having too many SQL schemas due to too > many different caches. I think Sergi proposed a good solution for it. 
> > > > > > On Thu, Aug 11, 2016 at 6:58 PM, Alexey Kuznetsov < > [hidden email] > > > > > wrote: > > > > > Sergey, I belive you mean "increase" instead of "reduce"? > > > > > > How grouping will help? > > > To do some operation, for example, clear on group of cashes at once? > > > > > > 11 Авг 2016 г. 22:06 пользователь "Sergey Kozlov" < > [hidden email]> > > > написал: > > > > > > > I mean not only SQL features for caches. Single type per cache > > definitely > > > > reduces number of caches for regular user and grouping caches will > help > > > to > > > > manage caches in grid. > > > > > > > > On Thu, Aug 11, 2016 at 5:41 PM, Sergi Vladykin < > > > [hidden email]> > > > > wrote: > > > > > > > > > I'm aware of this issue. My plan was to allow setting the same > schema > > > > name > > > > > to different caches using CacheConfiguration.setSqlSchema(...). > This > > > way > > > > > we > > > > > will have separate caches but from SQL point of view respective > > tables > > > > will > > > > > reside in the same schema. No need to introduce new concepts. > > > > > > > > > > Sergi > > > > > > > > > > > > > > > 2016-08-11 17:24 GMT+03:00 Sergey Kozlov <[hidden email]>: > > > > > > > > > > > HI > > > > > > > > > > > > Due to approach to disable to store more than one type per cache > > the > > > > > cache > > > > > > use becomes similar the table use for databases. > > > > > > So I suppose would be good to introduce a database/schema-similar > > > > concept > > > > > > for caches. It may be implemented as a non-default behavior for > > > > backward > > > > > > compatibility. 
> On Sat, Aug 6, 2016 at 1:04 AM, Dmitriy Setrakyan <[hidden email]> wrote:
>
>> On Fri, Aug 5, 2016 at 2:46 AM, Alexey Kuznetsov <[hidden email]> wrote:
>>
>>> I remember a couple more things for 2.0.
>>>
>>> How about dropping the **ignite-scalar** module in Ignite 2.0?
>>
>> Why?
>>
>>> And maybe drop **ignite-spark-2.10** module support, since **Spark** 2 is on **Scala 2.11**.
>>
>> I would drop it only if it is difficult to support. Do we know what kind of impact it will have on the community? Is anyone still using 2.10?
>>
>>> On Tue, Aug 2, 2016 at 11:09 PM, Denis Magda <[hidden email]> wrote:
>>>
>>>> Let's add this [1] issue to the list, because it requires significant work on the side of the metrics SPI.
>>>>
>>>> [1] https://issues.apache.org/jira/browse/IGNITE-3495
>>>>
>>>> —
>>>> Denis
>>>>
>>>>> On Aug 2, 2016, at 12:45 AM, Yakov Zhdanov <[hidden email]> wrote:
>>>>>
>>>>> Not yet. The thing is that our recent experience showed that it was not a very good idea to go with caches. E.g. when you try to deserialize inside a continuous query listener on the client side, it implicitly calls cache.get(), which in turn may cause a deadlock under some circumstances.
>>>>>
>>>>> --Yakov
>>>>>
>>>>> 2016-08-02 2:41 GMT+03:00 Dmitriy Setrakyan <[hidden email]>:
>>>>>
>>>>>> On Mon, Aug 1, 2016 at 3:46 AM, Yakov Zhdanov <[hidden email]> wrote:
>>>>>>
>>>>>>> One more point.
>>>>>>>
>>>>>>> I insist that we stop using the marshaller and meta caches and switch to spreading this info via custom discovery events.
>>>>>>
>>>>>> Do we have a ticket explaining why this needs to be done?
>>>>>>
>>>>>>> --Yakov
>>>>>>>
>>>>>>> 2016-07-27 19:57 GMT+03:00 Dmitriy Setrakyan <[hidden email]>:
>>>>>>>
>>>>>>>> On Wed, Jul 27, 2016 at 11:36 AM, Yakov Zhdanov <[hidden email]> wrote:
>>>>>>>>
>>>>>>>>> Guys, I think we can also split event notification for user listeners and internal system listeners. I have been seeing a lot of issues caused by heavy or blocking operations in user-defined listeners. This may block internal component notification (e.g. on a discovery event), causing topology hangs.
>>>>>>>>
>>>>>>>> Sure. There are a lot of features being added. It would be nice to assign a release manager for Ignite 2.0 and document all the discussed features on the Wiki.
>>>>>>>>
>>>>>>>>> --Yakov
>>>>>>>>>
>>>>>>>>> 2016-06-25 2:42 GMT+03:00 Alexey Goncharuk <[hidden email]>:
>>>>>>>>>
>>>>>>>>>> Folks,
>>>>>>>>>>
>>>>>>>>>> Recently I have seen a couple of emails suggesting tasks/improvements that we cannot do in 1.x releases due to API compatibility reasons, so they are postponed to 2.0. I would like to keep track of these tasks in some way in our Jira to make sure we do not have anything obsolete when it comes to the next major version release.
>>>>>>>>>>
>>>>>>>>>> My question for now is how should we track such tasks? Should it be a label, a parent task with subtasks, something else?
>>>>>>>>>>
>>>>>>>>>> I would go with a label + release version.
>>>>>>>>>>
>>>>>>>>>> --AG
>>
>> --
>> Alexey Kuznetsov
>> GridGain Systems
>> www.gridgain.com

--
Sergey Kozlov
GridGain Systems
www.gridgain.com
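Yakov's proposal above — separating user-defined listeners from internal system listeners — can be sketched in plain Java. This is an illustration, not Ignite internals; all class and method names here are hypothetical. Internal listeners stay synchronous on the notifying thread, while user listeners are pushed onto a dedicated executor, so a slow or blocking user listener cannot stall internal component notification.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical sketch: internal listeners are notified synchronously,
// user listeners run on a separate executor.
public class SplitNotifier {
    private final List<Consumer<String>> internal = new CopyOnWriteArrayList<>();
    private final List<Consumer<String>> user = new CopyOnWriteArrayList<>();
    private final ExecutorService userPool = Executors.newSingleThreadExecutor();

    void addInternal(Consumer<String> l) { internal.add(l); }
    void addUser(Consumer<String> l) { user.add(l); }

    /** Called on the discovery thread; must never block on user code. */
    void fire(String evt) {
        for (Consumer<String> l : internal)
            l.accept(evt);                        // synchronous: trusted system code
        for (Consumer<String> l : user)
            userPool.submit(() -> l.accept(evt)); // asynchronous: user code may block
    }

    public static void main(String[] args) throws Exception {
        SplitNotifier n = new SplitNotifier();
        CountDownLatch userDone = new CountDownLatch(1);

        n.addInternal(e -> System.out.println("internal listener handled " + e));
        n.addUser(e -> {
            // Simulates a heavy/blocking user listener.
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            userDone.countDown();
        });

        n.fire("NODE_JOINED");
        // fire() has already returned even though the user listener
        // is still sleeping on the executor thread.
        System.out.println("discovery thread is free");

        userDone.await();
        System.out.println("user listener finished");
        n.userPool.shutdown();
    }
}
```

With per-listener isolation like this, a user listener that implicitly calls something blocking (Yakov's cache.get() example) delays only the user pool, not discovery processing.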
Community,
Let me take the role of release manager for Apache Ignite 2.0 and coordinate the process.

My personal view is that Apache Ignite 2.0 should be released by the end of the year. Making a major release once a year sounds like good practice, and I would suggest we follow that rule.

In fact, we have more than three months for development, and I've prepared a wiki page that lists the tickets that are required for 2.0 and those that are optional: https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0

The proposed release date is December 23rd, 2016.

The tickets that are required for the release either break compatibility, so fixing them cannot wait for a minor release, or bring significant improvements to the product. Please review the page, provide your comments, and assign the tickets to yourself if you're ready to work on them.

—
Denis

> On Aug 11, 2016, at 4:06 PM, Valentin Kulichenko <[hidden email]> wrote:
>
> There is one more use case where several types per cache can be useful (I know that it's utilized by some of our users). The use case is the following: transactional updates with write-behind and foreign key constraints on the database. In case the data within a transaction is collocated and all types are in the same cache, it works, because there is only one write-behind queue. Once we split different types into different caches, there is no guarantee that objects will be written in the proper order and that the constraints will not be violated.
>
> However, I think this is not a very clean way to achieve the result. Probably we should just provide better support for this use case in 2.0. Basically, we somehow need to allow sharing a single write-behind queue between different caches.
>
> Any thoughts?
>
> -Val
>
> On Thu, Aug 11, 2016 at 10:40 AM, Dmitriy Setrakyan <[hidden email]> wrote:
>
>> On Thu, Aug 11, 2016 at 7:25 AM, Sergey Kozlov <[hidden email]> wrote:
>>
>>> Alexey
>>>
>>> You're right. Of course I meant growth of the number of caches.
>>>
>>> I see a few points here:
>>>
>>> 1. If a grid is used by various applications, the cache names may overlap (like "accounts"), and the applications need to use prefixed names, etc.
>>> 2. To clear or destroy caches I need to know their names (or iterate over the caches, but I'm not sure that is supported by the API). With groups, destroying/clearing all caches belonging to the same group could be a single operation that does not require knowing the cache names.
>>> 3. A cache group can have cache attributes which will be inherited by a cache created in that group (like eviction policy or cache mode).
>>> 4. There is no reason to add a specific feature like SqlSchema if it is applicable to regular caches as well.
>>
>> Sergey K, setting the same SQL schema for multiple caches, as proposed by Sergi, solves a different problem: having too many SQL schemas due to too many different caches. I think Sergi proposed a good solution for it.
>>
>>> On Thu, Aug 11, 2016 at 6:58 PM, Alexey Kuznetsov <[hidden email]> wrote:
>>>
>>>> Sergey, I believe you mean "increase" instead of "reduce"?
>>>>
>>>> How will grouping help? To do some operation, for example, clear, on a group of caches at once?
>>>>
>>>> On Aug 11, 2016, at 22:06, "Sergey Kozlov" <[hidden email]> wrote:
>>>>
>>>>> I mean not only SQL features for caches. Single type per cache definitely reduces number of caches for regular user, and grouping caches will help to manage caches in a grid.
>>>>>
>>>>> On Thu, Aug 11, 2016 at 5:41 PM, Sergi Vladykin <[hidden email]> wrote:
>>>>>
>>>>>> I'm aware of this issue. My plan was to allow setting the same schema name to different caches using CacheConfiguration.setSqlSchema(...). This way we will have separate caches, but from the SQL point of view the respective tables will reside in the same schema.
>>>>>> No need to introduce new concepts.
>>>>>>
>>>>>> Sergi
>>>>>>
>>>>>> 2016-08-11 17:24 GMT+03:00 Sergey Kozlov <[hidden email]>:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> With the approach of disallowing more than one type per cache, cache usage becomes similar to table usage in databases. So I suppose it would be good to introduce a database/schema-like concept for caches. It could be implemented as non-default behavior for backward compatibility.
>>>>>>>
>>>>>>> --
>>>>>>> Sergey Kozlov
>>>>>>> GridGain Systems
>>>>>>> www.gridgain.com
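Valentin's write-behind point above can be made concrete with a small plain-Java illustration (this is not Ignite API; the queue and update types are invented for the example). A transaction writes a parent row and then a child row that references it via a foreign key. A single shared write-behind queue flushes updates in transaction order, so the parent always reaches the database first; with one queue per cache, the flusher may drain the "children" queue before the "parents" queue and violate the constraint.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Hypothetical model of write-behind flushing, not Ignite code.
public class WriteBehindOrdering {
    record Update(String cache, String key) {}

    static String flush(Update u) { return u.cache + "/" + u.key; }

    public static void main(String[] args) {
        // One transaction: insert parent p1, then child c1 (FK: c1 -> p1).
        List<Update> tx = List.of(
            new Update("parents", "p1"),
            new Update("children", "c1"));

        // Shared queue: enqueued in transaction order, flushed in order.
        Queue<Update> shared = new ArrayDeque<>(tx);
        List<String> flushedShared = new ArrayList<>();
        while (!shared.isEmpty())
            flushedShared.add(flush(shared.poll()));
        System.out.println("shared: " + flushedShared);

        // Per-cache queues: each cache flushes independently, so the
        // relative order across caches is arbitrary. Here the
        // "children" queue happens to be drained first.
        Map<String, Queue<Update>> perCache = Map.of(
            "parents", new ArrayDeque<>(List.of(tx.get(0))),
            "children", new ArrayDeque<>(List.of(tx.get(1))));
        List<String> flushedPerCache = new ArrayList<>();
        for (String cache : List.of("children", "parents")) // arbitrary flush order
            flushedPerCache.add(flush(perCache.get(cache).poll()));
        System.out.println("per-cache: " + flushedPerCache);
    }
}
```

This is why splitting types into separate caches breaks the FK-constraint use case unless, as Valentin suggests, several caches can share one write-behind queue.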
Denis, thanks for taking the role of release manager for 2.0. It is definitely an important release for us, and good supervision is going to be very helpful.

I have looked at the tickets and the list seems nice. I would also add a ticket about migrating the JTA dependency to Geronimo, IGNITE-3793 [1]; however, I am not sure if we will be able to do it prior to 2.0.

[1] https://issues.apache.org/jira/browse/IGNITE-3793

D.

On Sat, Sep 3, 2016 at 3:17 AM, Denis Magda <[hidden email]> wrote:

> Community,
>
> Let me take the role of release manager for Apache Ignite 2.0 and coordinate the process. [...]
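For reference, Sergi's schema-sharing proposal quoted earlier in the thread would look roughly like this in configuration. The setSqlSchema(...) method name is taken from his message; at the time of this thread it was a proposal, not a released API, so treat this as a sketch of the intended usage (requires ignite-core on the classpath).

```java
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch of Sergi's proposal: two separate caches whose SQL tables
// share one schema. setSqlSchema(...) is the method named in the
// proposal, not a released API at the time of this thread.
public class SharedSchemaConfig {
    public static CacheConfiguration<?, ?>[] configs() {
        CacheConfiguration<Long, Object> persons = new CacheConfiguration<>("persons");
        persons.setSqlSchema("PUBLIC");   // tables from this cache go to PUBLIC

        CacheConfiguration<Long, Object> orders = new CacheConfiguration<>("orders");
        orders.setSqlSchema("PUBLIC");    // same schema, different cache

        // SQL can now address tables from both caches within one schema,
        // without merging the caches themselves.
        return new CacheConfiguration<?, ?>[] { persons, orders };
    }
}
```

The resulting configurations would be passed to IgniteConfiguration.setCacheConfiguration(...) as usual; only the SQL-level grouping changes.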
Hi
I suppose we should redesign HTTP REST API. We've a dozen issues around the REST implementation and the provided functionality is very limited and doesn't follow last changes for Ignite. The most suitable ticket is IGNITE-1774 REST API should be implemented using Jersey <https://issues.apache.org/jira/browse/IGNITE-1774> but probably we need to have a root ticket for that. On Sat, Sep 3, 2016 at 9:28 PM, Dmitriy Setrakyan <[hidden email]> wrote: > Denis, thanks for taking a role of a release manger for 2.0. It is > definitely an important release for us and good supervision is going to be > very helpful. > > I have looked at the tickets and the list seems nice. I would also add a > ticket about migration of the JTA dependency to Geronimo as well, > IGNITE-3793 [1], however I am not sure if we might be able to do it prior > to 2.0. > > [1] https://issues.apache.org/jira/browse/IGNITE-3793 > > D. > > On Sat, Sep 3, 2016 at 3:17 AM, Denis Magda <[hidden email]> wrote: > > > Community, > > > > Let me take a role of the release manager for Apache Ignite 2.0 and > > coordinate the process. > > > > So, my personal view is that Apache Ignite 2.0 should be released by the > > end of the year. This sounds like a good practice to make a major release > > once a year. I would suggest us following the same rule. > > > > Actual we have more than 3 month for development and I’ve prepare the > wiki > > page that contains tickets that are required to be released in 2.0 and > that > > are optional > > https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0 > > > > Proposed release date is December 23rd, 2016. > > > > The tickets that are required for the release basically break the > > compatibility and we postpone fixing them in minor release or they bring > > significant improvements into the product. Please review the page, > provide > > your comments and assign the tickets on yourself if you’re ready to work > on > > them. 
> > > > — > > Denis > > > > > On Aug 11, 2016, at 4:06 PM, Valentin Kulichenko < > > [hidden email]> wrote: > > > > > > There is one more use case where several types per cache can be useful > (I > > > know that it's utilized by some of our users). The use case is the > > > following: transactional updates with write-behind and foreign key > > > constraints on the database. In case data within transaction is > > collocated > > > and all types are in the same cache, it works, because there is only > one > > > write-behind queue. Once we split different types into different > caches, > > > there is no guarantee that objects will be written in the proper order > > and > > > that the constraints will not be violated. > > > > > > However, I think this is not a very clean way to achieve the result. > > > Probably we should just provide better support for this use case in > 2.0. > > > Basically, we somehow need to allow to share a single write-behind > queue > > > between different caches. > > > > > > Any thoughts? > > > > > > -Val > > > > > > On Thu, Aug 11, 2016 at 10:40 AM, Dmitriy Setrakyan < > > [hidden email]> > > > wrote: > > > > > >> On Thu, Aug 11, 2016 at 7:25 AM, Sergey Kozlov <[hidden email]> > > >> wrote: > > >> > > >>> Alexey > > >>> > > >>> You're right. Of course I meant growth of caches number. > > >>> > > >>> I see a few points here: > > >>> > > >>> 1. If a grid used by various applications the cache names may overlap > > >> (like > > >>> "accounts") and the application needs to use prefixed names and etc. > > >>> 2. For clear or destory caches I need to know their names (or iterate > > >> over > > >>> caches but I'm not sure that it is supported by API). For > destroy/clear > > >>> caches belonging to same group we will do it by single operation > > without > > >>> knowledge of cache names. > > >>> 3. Cache group can have cache attributes which will be inherited by a > > >> cache > > >>> created in that group (like eviction policy or cache mode). 
> > >>> 4. No reason to add specific feature like SqlShema if it applicable > for > > >>> regular caches as well. > > >>> > > >> > > >> Sergey K, setting the same SQL schema for multiple caches, as proposed > > by > > >> Sergi, solves a different problem of having too many SQL schemas due > to > > too > > >> many different caches. I think Sergi proposed a good solution for it. > > >> > > >> > > >>> > > >>> On Thu, Aug 11, 2016 at 6:58 PM, Alexey Kuznetsov < > > >> [hidden email] > > >>>> > > >>> wrote: > > >>> > > >>>> Sergey, I belive you mean "increase" instead of "reduce"? > > >>>> > > >>>> How grouping will help? > > >>>> To do some operation, for example, clear on group of cashes at once? > > >>>> > > >>>> 11 Авг 2016 г. 22:06 пользователь "Sergey Kozlov" < > > >> [hidden email]> > > >>>> написал: > > >>>> > > >>>>> I mean not only SQL features for caches. Single type per cache > > >>> definitely > > >>>>> reduces number of caches for regular user and grouping caches will > > >> help > > >>>> to > > >>>>> manage caches in grid. > > >>>>> > > >>>>> On Thu, Aug 11, 2016 at 5:41 PM, Sergi Vladykin < > > >>>> [hidden email]> > > >>>>> wrote: > > >>>>> > > >>>>>> I'm aware of this issue. My plan was to allow setting the same > > >> schema > > >>>>> name > > >>>>>> to different caches using CacheConfiguration.setSqlSchema(...). > > >> This > > >>>> way > > >>>>>> we > > >>>>>> will have separate caches but from SQL point of view respective > > >>> tables > > >>>>> will > > >>>>>> reside in the same schema. No need to introduce new concepts. > > >>>>>> > > >>>>>> Sergi > > >>>>>> > > >>>>>> > > >>>>>> 2016-08-11 17:24 GMT+03:00 Sergey Kozlov <[hidden email]>: > > >>>>>> > > >>>>>>> HI > > >>>>>>> > > >>>>>>> Due to approach to disable to store more than one type per cache > > >>> the > > >>>>>> cache > > >>>>>>> use becomes similar the table use for databases. 
Sergey Kozlov wrote:
> So I suppose it would be good to introduce a database/schema-like concept for caches. It may be implemented as non-default behavior for backward compatibility.
>
> --
> Sergey Kozlov
> GridGain Systems
> www.gridgain.com

On Sat, Aug 6, 2016 at 1:04 AM, Dmitriy Setrakyan wrote, replying to Alexey Kuznetsov (Fri, Aug 5, 2016 at 2:46 AM):
>> I remember a couple more things for 2.0.
>>
>> How about dropping the **ignite-scalar** module in Ignite 2.0?
>
> Why?
>
>> And maybe drop **ignite-spark-2.10** module support, as **Spark** 2 is on **Scala 2.11**.
>>
>> --
>> Alexey Kuznetsov
>> GridGain Systems
>> www.gridgain.com
>
> I would drop it only if it is difficult to support. Do we know what kind of impact it will have on the community? Is anyone still using 2.10?

On Tue, Aug 2, 2016 at 11:09 PM, Denis Magda wrote:
> Let’s add this [1] issue to the list because it requires significant work on the side of the metrics SPI.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-3495
>
> — Denis

On Aug 2, 2016, at 12:45 AM, Yakov Zhdanov wrote:
> Not yet. The thing is that our recent experience showed that going with caches was not a very good idea. E.g., when you try to deserialize inside a continuous query listener on the client side, it implicitly calls cache.get(), which in turn may cause a deadlock under some circumstances.
>
> --Yakov

2016-08-02 2:41, Dmitriy Setrakyan wrote:
> Do we have a ticket explaining why this needs to be done?

On Mon, Aug 1, 2016 at 3:46 AM, Yakov Zhdanov wrote:
> One more point.
>
> I insist on stopping the use of the marshaller and metadata caches and switching to spreading this info via custom discovery events.
>
> --Yakov

2016-07-27 19:57, Dmitriy Setrakyan wrote:
> Sure. There are a lot of features being added. It would be nice to assign a release manager for Ignite 2.0 and document all the discussed features on the Wiki.

On Wed, Jul 27, 2016 at 11:36 AM, Yakov Zhdanov wrote:
> Guys, I think we can also split event notification for user listeners and internal system listeners. I have been seeing a lot of issues caused by heavy or blocking operations in user-defined listeners. This may block internal component notification (e.g. on a discovery event), causing topology hangs.
>
> --Yakov

2016-06-25 2:42, Alexey Goncharuk wrote:
> Folks,
>
> Recently I have seen a couple of emails suggesting tasks/improvements that we cannot do in 1.x releases due to API compatibility reasons, so they are postponed to 2.0. I would like to keep track of these tasks in some way in our Jira to make sure we do not have anything obsolete when it comes to the next major version release.
>
> My question for now is: how should we track such tasks? Should it be a label, a parent task with subtasks, or something else?
>
> I would go with a label + release version.
>
> --AG
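[Editor's note] Yakov's proposal to split user-listener notification from internal system-listener notification can be sketched outside of Ignite. The following is a minimal, hypothetical illustration using plain `java.util.concurrent` (class and method names are invented for this sketch; this is not Ignite's actual event SPI): internal listeners run synchronously on the event thread, while user listeners are offloaded to a dedicated pool, so a heavy or blocking user callback cannot stall internal component notification.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of the notification split discussed in the thread (not Ignite code):
 * internal system listeners are invoked synchronously on the event thread,
 * while user listeners run on a dedicated pool.
 */
public class EventNotifier {
    private final List<Runnable> internalListeners = new CopyOnWriteArrayList<>();
    private final List<Runnable> userListeners = new CopyOnWriteArrayList<>();

    // Dedicated pool isolates user callbacks from the event (discovery) thread.
    private final ExecutorService userPool = Executors.newFixedThreadPool(2);

    public void addInternalListener(Runnable l) { internalListeners.add(l); }
    public void addUserListener(Runnable l) { userListeners.add(l); }

    /** Called from the event thread (e.g. on a discovery event). */
    public void onEvent() {
        // Internal listeners run first and cannot be delayed by user code.
        for (Runnable l : internalListeners)
            l.run();
        // User listeners are offloaded; a blocking one only ties up the pool.
        for (Runnable l : userListeners)
            userPool.submit(l);
    }

    public void shutdown() throws InterruptedException {
        userPool.shutdown();
        userPool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        EventNotifier n = new EventNotifier();
        AtomicInteger internalCalls = new AtomicInteger();
        AtomicInteger userCalls = new AtomicInteger();

        n.addInternalListener(internalCalls::incrementAndGet);

        // A deliberately slow user listener: it no longer blocks onEvent().
        n.addUserListener(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            userCalls.incrementAndGet();
        });

        n.onEvent();   // returns immediately; only internal listeners ran inline
        n.shutdown();  // waits for the offloaded user listener to finish

        System.out.println("internal=" + internalCalls.get() + " user=" + userCalls.get());
    }
}
```

With this split, a user listener that blocks (as in the topology hangs Yakov describes) only occupies a thread in the user pool, while discovery-event processing on the event thread proceeds unimpeded.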