We see some annoying behavior with S3 discovery: Ignite pushes to the discovery S3 bucket the IP address of the local Docker bridge network (172.17.0.1 in our case). Each node, when coming online, tries that address first and has to go through a network timeout to recover.

To address this, we have prototyped a simple extension to TcpCommunicationSpi that allows configuring a list of IP addresses that should be completely ignored, and we will create a ticket and generate a pull request for it. If there is a better approach, please let us know.

Thanks
Dave Harvey
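The core of an ignore-list extension like the one described could look roughly like the sketch below. This is only an illustration of the idea, not the actual patch: the class and method names are hypothetical, and a real version would plug into TcpCommunicationSpi's address handling rather than stand alone.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

/** Hypothetical helper: drops configured "ignored" IPs from a node's candidate address list. */
public class IgnoredAddressFilter {
    private final Set<String> ignored;

    public IgnoredAddressFilter(String... ignoredAddrs) {
        this.ignored = new LinkedHashSet<>(Arrays.asList(ignoredAddrs));
    }

    /** Returns the candidates with every ignored address removed, preserving order. */
    public List<String> filter(List<String> candidates) {
        return candidates.stream()
            .filter(a -> !ignored.contains(a))
            .collect(Collectors.toList());
    }
}
```

With the bridge address on the ignore list, a candidate set of `["172.17.0.1", "10.0.1.5"]` would be reduced to `["10.0.1.5"]` before any connection attempt, avoiding the timeout.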
Hi David,

This is something we have also encountered recently, and I was wondering how it can be mitigated in the general case. Do you know if an application can detect that it is being run in a Docker container and add the corresponding list of bridge IPs automatically on start? If so, I think we can add this to Ignite so that it works out of the box.

--AG
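For what it's worth, the common heuristics for detecting a Docker container from inside a JVM are checking for the `/.dockerenv` marker file and looking for `docker` in the cgroup paths of PID 1. A sketch (heuristics only; neither check is guaranteed by Docker across all versions, and cgroup v2 layouts differ):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class DockerDetector {
    /** Docker creates a /.dockerenv marker file in every container. */
    public static boolean hasDockerEnvFile() {
        return Files.exists(Paths.get("/.dockerenv"));
    }

    /** Inside a container, PID 1's cgroup paths typically mention "/docker/". */
    static boolean cgroupMentionsDocker(List<String> cgroupLines) {
        return cgroupLines.stream().anyMatch(l -> l.contains("/docker/"));
    }

    public static boolean insideDocker() {
        try {
            return hasDockerEnvFile()
                || cgroupMentionsDocker(Files.readAllLines(Paths.get("/proc/1/cgroup")));
        } catch (IOException e) {
            return false; // can't read /proc: assume not in a container
        }
    }
}
```

Even with detection in place, the bridge IP itself would still have to be discovered (e.g. from the interface list), so this only solves half of the problem.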
What we prototyped was configuring via Spring the list of IPs to ignore, because a given installation seemed to have a constant address for the bridge network, and this approach was reliable once you know the bridge IPs.

When the container starts, you get a list of IP addresses from the kernel. At that point it is impossible to know, from inside the container, which of those addresses can be used by other Ignite nodes, at least without external information. Similarly, if I have an AWS instance I am wondering
What we prototyped was configuring via Spring the list of IPs to ignore, because a given installation seemed to have a constant address for the bridge network, and this approach was reliable once you know the bridge IPs. It is also a more general solution.

When the container starts, you get a list of IP addresses from the kernel. At that point it is impossible to know, from inside the container, which of those addresses can be used by other Ignite nodes, at least without external information. For example, if I have Ignite running on an AWS instance that has an internal and an external address, it is impossible to know which address will be able to reach the other nodes unless you are told. So perhaps we should have used a list of ranges rather than a list in our prototype.

For the Docker sub-case, where all the nodes seem to get the same useless address, I would think we can ignore IP address/port pairs that the current node is also advertising. That does not generalize to other cases where the kernel provides unusable addresses. I didn't quite understand why, if we try to connect to a port we are advertising, this would need to time out rather than being immediately rejected, unless Ignite has explicit code to detect and ignore a self-message. But if there is an IP:port pair that the current node is claiming as an endpoint, it should not try to use that IP:port to connect to other nodes.
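The self-endpoint idea above could be sketched as follows. This is not what Ignite does today; the class is hypothetical and only illustrates the proposed check: before connecting to a remote candidate, compare it against the IP:port pairs this node itself advertises.

```java
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Hypothetical check: skip remote candidates that match an endpoint this node advertises. */
public class SelfEndpointCheck {
    private final Set<InetSocketAddress> selfEndpoints;

    public SelfEndpointCheck(InetSocketAddress... advertised) {
        this.selfEndpoints = new HashSet<>(Arrays.asList(advertised));
    }

    /** True if the candidate is one of our own advertised IP:port pairs. */
    public boolean shouldSkip(InetSocketAddress candidate) {
        return selfEndpoints.contains(candidate);
    }
}
```

In the Docker bridge case, every node advertises 172.17.0.1 on the same communication port, so each node's own advertised pair would match the bogus pair published by its peers and the doomed connection attempt would be skipped.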
David,

This is a problem that many people encounter, and I think it's about time to deal with it.

We have the following method that collects all local addresses to be sent with node attributes: *IgniteUtils#resolveLocalAddresses()*. It iterates over all local interfaces that the machine has. Maybe we don't need to do that? How about including only those interfaces that are specified in the configuration of *CommunicationSpi* and *DiscoverySpi*, plus the ones that are returned from a configured *AddressResolver*? To keep compatibility we could make a default address resolver return addresses from all local interfaces.

What do you think?

Denis
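The proposal above could be sketched as the following resolution rule (illustrative only; method and parameter names are hypothetical and do not match *IgniteUtils#resolveLocalAddresses()* internals): prefer explicitly configured addresses, fall back to all interface addresses only when nothing is configured.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;

/** Hypothetical sketch of the proposed address-resolution rule. */
public class ConfiguredAddressResolver {
    /**
     * Union of addresses configured on the SPIs and those supplied by an
     * AddressResolver; falls back to all interface addresses (the current
     * behavior) when nothing is explicitly configured.
     */
    public static List<String> resolve(Collection<String> spiConfigured,
                                       Collection<String> resolverSupplied,
                                       Collection<String> allInterfaceAddrs) {
        LinkedHashSet<String> result = new LinkedHashSet<>();
        result.addAll(spiConfigured);
        result.addAll(resolverSupplied);
        if (result.isEmpty())
            result.addAll(allInterfaceAddrs); // compatibility default
        return new ArrayList<>(result);
    }
}
```

Under this rule a node configured with only its routable address would never advertise the Docker bridge IP, while unconfigured nodes would keep today's behavior.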
David,

Skipping connection to an address that is declared as local by the local node is an absolutely valid idea. Though, will it always be the case that nodes have the same IP for this bogus interface?

We have a similar check for loopback addresses; I see nothing wrong with adding the same check for non-loopback addresses (we need to be careful with the address resolver, though).