Retries in write-behind store

Retries in write-behind store

Valentin Kulichenko
Folks,

Is there a way to limit or disable retries of failed updates in the
write-behind store? I can't find one; it looks like if an update fails, it
is moved to the end of the queue and eventually retried. If it fails
again, the process is repeated.

Such behavior *might* be OK if failures are caused by the database being
temporarily unavailable. But what if an update fails deterministically, for
example due to a constraint violation? There is absolutely no reason to
retry it, and at the same time it can cause stability and performance
issues when the buffer is full of such "broken" updates.

Does it make sense to add an option that would allow limiting the number of
retries (or even disabling them)?
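
For reference, this is roughly how write-behind is enabled today (the store
class and values below are illustrative, not from any real config); the
CacheConfiguration knobs cover buffering and flushing, but nothing bounds
retries of failed updates:

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfig {
    static CacheConfiguration<Long, String> cacheConfig() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("txCache");

        // Delegate persistence to a user-provided store (MyJdbcStore is illustrative).
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyJdbcStore.class));
        ccfg.setWriteThrough(true);

        // Enable write-behind: updates are buffered and flushed to the store asynchronously.
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushSize(10_240);     // flush once the buffer holds this many entries
        ccfg.setWriteBehindFlushFrequency(5_000); // ...or at least every 5 seconds
        ccfg.setWriteBehindBatchSize(512);        // entries per store call

        // No retry-related setter exists; a failed update is simply re-queued.
        return ccfg;
    }
}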

-Val

Re: Retries in write-behind store

dmagda
Val,

Sounds like a handy configuration option. I would allow setting the number
of retries. If the number is set to 0, a failed update is discarded right
away.
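
Purely to illustrate the proposal, it could look something like this (the
writeBehindMaxRetries property is made up and does not exist in the current
API):

// Hypothetical sketch of the proposed option -- not part of the current API.
CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("txCache");

ccfg.setWriteBehindEnabled(true);

// 0 = discard a failed update right away;
// N = re-queue it at most N times before dropping it.
ccfg.setWriteBehindMaxRetries(0);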

--
Denis

Re: Retries in write-behind store

gauravhb
In addition to that, how about generating an event when an update fails?
It could be listened to, and custom logic could be added to handle the
failure.
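
Roughly along these lines, perhaps (the interface name and shape below are
just a sketch, nothing like this exists today):

// Hypothetical callback sketch -- not an existing Ignite interface.
public interface WriteBehindFailureListener<K, V> {
    /**
     * Invoked when a write-behind update could not be applied to the underlying store.
     *
     * @param key Key of the failed entry.
     * @param val Value of the failed entry ({@code null} for removals).
     * @param err Error thrown by the underlying store.
     * @return {@code true} to re-queue the update for another attempt,
     *         {@code false} to drop it.
     */
    boolean onFailure(K key, V val, Throwable err);
}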

Re: Retries in write-behind store

Alexey Goncharuk
In reply to this post by Valentin Kulichenko
Since the write-behind store wraps the store provided by a user, I think
the correct solution would be to catch the exception and not propagate it
further in the store, because only the end-user knows which errors can be
retried, and which errors cannot.

In this case, no changes to the write-behind logic are needed.
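
A rough sketch of what I mean, assuming a JDBC-backed store (the class and
the persistence call are illustrative): the user's store swallows errors it
knows are not retriable and rethrows the rest, so only transient failures
are re-queued by the write-behind layer.

import java.sql.SQLIntegrityConstraintViolationException;
import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class MyJdbcStore extends CacheStoreAdapter<Long, String> {
    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        try {
            persist(entry.getKey(), entry.getValue()); // illustrative JDBC call
        }
        catch (SQLIntegrityConstraintViolationException e) {
            // Deterministic failure: retrying will never succeed, so log it and move on.
            System.err.println("Dropping non-retriable update for key " + entry.getKey() + ": " + e.getMessage());
        }
        catch (Exception e) {
            // Possibly transient (e.g. connection lost): rethrow so write-behind re-queues it.
            throw new CacheWriterException(e);
        }
    }

    @Override public String load(Long key) {
        return null; // read-through not shown in this sketch
    }

    @Override public void delete(Object key) {
        // removal handling omitted for brevity
    }

    private void persist(Long key, String val) throws Exception {
        // Actual JDBC insert/update would go here.
    }
}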

Re: Retries in write-behind store

Valentin Kulichenko
Alex,

I see your point, but I still think it should be incorporated into the
product.

First of all, your solution implies changing the CacheStore implementation
every time you switch between write-through and write-behind. This
contradicts the overall design.

Second of all, one of the most commonly used implementations is the POJO
store, which is provided out of the box. Moreover, users usually take
advantage of the automatic integration feature and generate the config using
Web Console, so they often don't even know what "CacheJdbcPojoStore" is.

That said, your suggestion works as a workaround, but it doesn't seem very
user-friendly. I actually like Gaurav's idea: instead of blindly limiting
the number of retries, we can have a callback to handle errors.

-Val

Re: Retries in write-behind store

Alexey Goncharuk
Val,

There is no need to have two implementations of the store. The exception
may be handled based on the cache configuration; the store only needs to
check whether write-behind is enabled. The configuration change will be
transparently handled by the store. This change can easily be incorporated
into our POJO store.

I am ok with the callback idea, but we need to discuss the API changes
first.
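
To make it concrete, something along these lines inside the store (again a
sketch: the resource injection and the configuration lookup are the relevant
parts, and the isRetriable() check is a placeholder that only the end user
can really fill in):

import java.sql.SQLIntegrityConstraintViolationException;
import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.resources.CacheNameResource;
import org.apache.ignite.resources.IgniteInstanceResource;

public class ConfigAwareJdbcStore extends CacheStoreAdapter<Long, String> {
    /** Injected by Ignite into the store instance. */
    @IgniteInstanceResource
    private Ignite ignite;

    /** Name of the cache this store belongs to, also injected. */
    @CacheNameResource
    private String cacheName;

    /** Reads the owning cache's configuration at runtime. */
    private boolean writeBehindEnabled() {
        IgniteCache<Object, Object> cache = ignite.cache(cacheName);

        return cache.getConfiguration(CacheConfiguration.class).isWriteBehindEnabled();
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        try {
            persist(entry.getKey(), entry.getValue()); // illustrative JDBC call
        }
        catch (Exception e) {
            // In write-behind mode, swallow non-retriable errors so they are not re-queued forever.
            if (writeBehindEnabled() && !isRetriable(e))
                return;

            // Otherwise propagate: write-through callers should still see the failure.
            throw new CacheWriterException(e);
        }
    }

    @Override public String load(Long key) { return null; }

    @Override public void delete(Object key) { /* omitted */ }

    private void persist(Long key, String val) throws Exception { /* JDBC code here */ }

    private boolean isRetriable(Exception e) {
        // Placeholder: only the end user knows which errors are transient.
        return !(e instanceof SQLIntegrityConstraintViolationException);
    }
}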

--AG
