Client got stuck on get operation

Client got stuck on get operation

Alper Tekinalp
Hi all.


We have 3 Ignite servers. Server 1 runs standalone. Servers 2 and 3 connect
to each other as server nodes, but join server 1's cluster as client nodes
(a minimal configuration sketch is included below the trace). At some point
server 3 got stuck at:

  at sun.misc.Unsafe.park(Native Method)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
  at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
  at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
  at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488)
  at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4665)
  at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1388)
  at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1121)
  at sun.reflect.GeneratedMethodAccessor634.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at com.evam.cache.client.CachePassthroughInvocationHandler.invokeInternal(CachePassthroughInvocationHandler.java:99)
  at com.evam.cache.client.CachePassthroughInvocationHandler.invoke(CachePassthroughInvocationHandler.java:78)
  at com.sun.proxy.$Proxy56.get(Unknown Source)

while getting a record from server 1. Long after that, server 2 also got
stuck at the same trace, and servers 2 and 3 disconnected from each other.
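
To make the topology concrete, here is roughly how servers 2 and 3 are set
up: each runs an embedded server node for the 2-node cluster, plus a
separate client node that joins server 1. The host names, ports, cache name
and key below are placeholders, not our real configuration.

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class TwoClusterTopologySketch {

    /** Server node for the small cluster formed by servers 2 and 3 (addresses are placeholders). */
    static Ignite startLocalServerNode() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("server2:47500..47509", "server3:47500..47509"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setGridName("local-cluster");   // Ignite 1.x naming API
        cfg.setClientMode(false);           // full server node
        cfg.setDiscoverySpi(discovery);

        return Ignition.start(cfg);
    }

    /** Client node that joins server 1's standalone cluster (address is a placeholder). */
    static Ignite startClientToServer1() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("server1:47500..47509"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setGridName("remote-client");
        cfg.setClientMode(true);            // joins server 1 only as a client node
        cfg.setDiscoverySpi(discovery);

        return Ignition.start(cfg);
    }

    public static void main(String[] args) {
        Ignite localServer = startLocalServerNode();
        Ignite remoteClient = startClientToServer1();

        // The get() that blocks in our case goes through the client connection to server 1.
        Object value = remoteClient.getOrCreateCache("someCache").get("someKey");
        System.out.println("value = " + value);
    }
}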

We investigated the GC logs and there is no unusual behaviour. One thing we
noticed is that server 1 logs errors as follows when servers 2 and 3 disconnect:

[ERROR] 2017-03-18 22:01:21.881 [sys-stripe-82-#83%cache-server%] msg - Received message without registered handler (will ignore)
[msg=GridNearSingleGetRequest [futId=1490866022968, key=BinaryObjectImpl [arr= true, ctx=false, start=0], partId=199, flags=1,
 topVer=AffinityTopologyVersion [topVer=33, minorTopVer=455], subjId=53293ebb-f01b-40b6-a060-bec4209e9c8a, taskNameHash=0,
 createTtl=0, accessTtl=-1], node=53293ebb-f01b-40b6-a060-bec4209e9c8a,
 locTopVer=AffinityTopologyVersion [topVer=33, minorTopVer=2937],
 msgTopVer=AffinityTopologyVersion [topVer=33, minorTopVer=455], cacheDesc=null]
Registered listeners:


Where should we look for the root cause of the problem? What can cause such
behaviour? There seems to be nothing wrong in server 1's logs.
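
One thing we are considering as a diagnostic aid (not something we run
today) is bounding each get with the async API so that a hang surfaces as a
timeout rather than a permanently parked thread. A rough sketch using
Ignite 1.x's withAsync(); the cache name, key and the 10-second timeout are
placeholders:

import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteFuture;

public class BoundedGetSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();   // default configuration; placeholder
        IgniteCache<String, Object> cache = ignite.getOrCreateCache("someCache");

        // Ignite 1.x async facade: the operation returns immediately and exposes a future.
        IgniteCache<String, Object> asyncCache = cache.withAsync();
        asyncCache.get("someKey");
        IgniteFuture<Object> fut = asyncCache.future();

        try {
            // Fail fast instead of parking forever inside GridFutureAdapter.get().
            Object value = fut.get(10, TimeUnit.SECONDS);
            System.out.println("value = " + value);
        }
        catch (IgniteException e) {
            // Timed-out or failed get: a good moment to capture thread dumps on both sides.
            System.err.println("get did not complete within 10s: " + e.getMessage());
        }
    }
}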

We use Ignite 1.8.3.

--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir
34758 İSTANBUL

Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr

Re: Client got stuck on get operation

Alper Tekinalp
Hi.

Can someone point me in a direction as to why a thread can get stuck at the
trace above? How can I investigate the issue? Where should I look?
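
In the meantime, I plan to capture a programmatic thread dump the next time
a get parks like this. A rough sketch using only the standard JMX
ThreadMXBean (nothing Ignite-specific); the dump is simply printed to stderr:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSketch {

    /** Prints a thread dump of the current JVM to stderr. */
    static void dumpThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // lockedMonitors/lockedSynchronizers = true so we can see what the parked thread waits on.
        for (ThreadInfo info : threads.dumpAllThreads(true, true))
            System.err.print(info);
    }

    public static void main(String[] args) {
        dumpThreads();
    }
}

Note that ThreadInfo.toString() truncates deep stacks, so running jstack
against the stuck process (on both the client side and on server 1) still
gives the most complete picture.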

--
Alper Tekinalp

Software Developer
Evam Streaming Analytics

Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir
34758 İSTANBUL

Tel:  +90 216 455 01 53 Fax: +90 216 455 01 54
www.evam.com.tr