Hello everyone,
As advised by @dsetrakyan, I would like to post a simple suggestion about Ignite and deployment, in particular container deployments such as Docker. Docker Swarm makes it easy to scale a service by setting the "replicas" parameter to a positive integer. However, many clustered applications do not support this well.

For example, the discussion at https://stackoverflow.com/questions/48261239/running-ignite-on-swarm-cluster shows that multicast is not allowed in a Docker overlay network, which is confirmed here: https://github.com/docker/libnetwork/issues/552.

In the end, this looks more like a limitation of Docker than a limitation of Ignite. Still, I'm posting it in case you have ideas on how to work around it, or even how to implement auto-discovery between Ignite nodes despite this limitation. The only solution I have found so far is to define a single "ignite-master" service plus a set of replicas in an "ignite" service that point to this ignite-master, but that leaves a single point of failure in the deployment.

Any ideas, suggestions, or, even better, support for Docker services?

Thank you,

Jonathan
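For reference, a minimal sketch of the "ignite-master" workaround described above, written as an Ignite Java configuration. The service name "ignite-master" and the discovery port range are assumptions; every replica of the "ignite" service would start with the same static address list, so the master remains a single point of failure for joining the cluster:

import java.util.Collections;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IgniteMasterNodeStartup {
    public static void main(String[] args) {
        // "ignite-master" is the assumed Swarm service name of the single seed
        // node; Swarm's internal DNS resolves it inside the overlay network.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("ignite-master:47500..47509"));

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        Ignition.start(cfg);
    }
}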
Hi Jonathan,
Does Docker Swarm come with any internal services concept that could be used to share the IPs of Ignite nodes? Or any other built-in facility that can store and share the IPs? For instance, the Ignite Kubernetes IP finder relies on Kubernetes services to exchange the IPs on node startup: https://apacheignite-mix.readme.io/docs/kubernetes-discovery

--
Denis
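For comparison, here is a minimal sketch of how the Kubernetes IP finder mentioned above is typically wired up in Java. The service name "ignite" and namespace "default" are placeholders, and the ignite-kubernetes module must be on the classpath:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class KubernetesDiscoveryStartup {
    public static void main(String[] args) {
        // The IP finder queries the Kubernetes API for the endpoints of a named
        // service and uses them as the initial address list for discovery.
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setServiceName("ignite");   // placeholder service name
        ipFinder.setNamespace("default");    // placeholder namespace

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        Ignition.start(cfg);
    }
}

A Swarm equivalent would need something playing the same role as the Kubernetes service endpoint list.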
Hi Denis,
It seems like one workaround would be to configure the URL of one of the manager nodes; the Docker API then allows looping over the network addresses of all the nodes: https://github.com/bitsofinfo/docker-discovery-swarm-service#status. The problem is that you need to know which node is a manager and which is a worker. However, that is still a limitation of Docker.

After some searching on "Docker Swarm peer discovery", I found an implementation of a similar solution for RabbitMQ, which also relies on a third-party service discovery tool, Consul:
https://github.com/rabbitmq/rabbitmq-peer-discovery-consul
An Ignite integration with Consul can be found here:
https://github.com/andrea-zanetti/ignite-consul/blob/master/src/main/java/org/apache/ignite/spi/discovery/tcp/ipfinder/consul/TcpDiscoveryConsulIpFinder.java

Another interesting possibility would be a DNS lookup based on the response of "dig <servicename> +short". This has been implemented for ZooKeeper, for example, here:
https://github.com/itsaur/zookeeper-docker-swarm/blob/master/docker-swarm-entrypoint.sh
Docker Swarm keeps its internal DNS updated with all the containers load-balanced behind a service, so we could cycle through that information and get the IPs of all the nodes.

I've personally tested it with scaling, and the new IP addresses are added dynamically:

bash-4.4# dig tasks.zookeeper1 +short
10.0.0.12

docker service scale stack_zookeeper1=2
stack_zookeeper1 scaled to 2
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged

bash-4.4# dig tasks.zookeeper1 +short
10.0.0.12
10.0.0.17

Any thoughts? Isn't it a bit too low level?

Jonathan
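To make the DNS idea concrete, here is a rough sketch of what such an IP finder could look like in Java, resolving "tasks.<servicename>" the same way "dig tasks.<servicename> +short" does. This is not an existing Ignite SPI; the class name, service name and discovery port are assumptions:

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Collection;
import org.apache.ignite.spi.IgniteSpiException;
import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAdapter;

public class TcpDiscoverySwarmDnsIpFinder extends TcpDiscoveryIpFinderAdapter {
    private String serviceName = "ignite"; // assumed Swarm service name
    private int discoveryPort = 47500;     // assumed discovery port

    @Override
    public Collection<InetSocketAddress> getRegisteredAddresses() throws IgniteSpiException {
        Collection<InetSocketAddress> addrs = new ArrayList<>();
        try {
            // Swarm registers every task of a service under "tasks.<service>",
            // so this returns one address per running replica.
            for (InetAddress addr : InetAddress.getAllByName("tasks." + serviceName))
                addrs.add(new InetSocketAddress(addr, discoveryPort));
        }
        catch (UnknownHostException e) {
            throw new IgniteSpiException("Failed to resolve tasks." + serviceName, e);
        }
        return addrs;
    }

    @Override
    public void registerAddresses(Collection<InetSocketAddress> addrs) {
        // No-op: the address list is fully managed by Swarm's DNS.
    }

    @Override
    public void unregisterAddresses(Collection<InetSocketAddress> addrs) {
        // No-op: Swarm removes stopped tasks from DNS automatically.
    }

    public void setServiceName(String serviceName) {
        this.serviceName = serviceName;
    }

    public void setDiscoveryPort(int discoveryPort) {
        this.discoveryPort = discoveryPort;
    }
}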
Jonathan,
Thanks for the extensive summary. Personally, I like the manager node approach because it's built into Docker Swarm and doesn't require third-party dependencies. Moreover, it reminds me of the way we supported Kubernetes, where Ignite pods request IP addresses from the Kubernetes master. However, you mentioned that "it's still a limitation of docker". What is your concern about that approach?

--
Denis
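Purely to illustrate the manager node approach (this is a sketch, not an existing IP finder): the Docker API on a manager node exposes the tasks of a service together with their overlay-network addresses, so in principle an IP finder could collect them as below. The manager URL, the service name, and the crude regex-based JSON handling are all assumptions for illustration; a real implementation would use a proper JSON parser and TLS/socket configuration:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SwarmTaskAddresses {
    public static void main(String[] args) throws Exception {
        // Hypothetical manager endpoint; the /tasks endpoint only works when the
        // Docker API is reached on a manager node (e.g. tcp://swarm-manager:2375).
        String managerApi = "http://swarm-manager:2375";
        String serviceName = "ignite"; // assumed Swarm service name

        // Ask the manager for the running tasks of the service.
        String filters = URLEncoder.encode(
            "{\"service\":[\"" + serviceName + "\"],\"desired-state\":[\"running\"]}", "UTF-8");
        URL url = new URL(managerApi + "/tasks?filters=" + filters);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null)
                body.append(line);
        }

        // Crude extraction of overlay-network addresses ("10.0.0.12/24", ...) from
        // the NetworksAttachments section of each task: takes the first address of
        // each "Addresses" array.
        List<String> addrs = new ArrayList<>();
        Matcher m = Pattern.compile("\"Addresses\":\\[\"([^\"/]+)").matcher(body);
        while (m.find())
            addrs.add(m.group(1));

        System.out.println("Ignite task addresses: " + addrs);
    }
}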