I have run two applications in embedded mode with the following config:
public IgniteConfigurer config() {
    return cfg -> {
        // The node will be started as a server node (client mode disabled).
        cfg.setClientMode(false);
        // Peer class loading is disabled: classes of custom Java logic will not be transferred over the wire from this app.
        cfg.setPeerClassLoadingEnabled(false);
        // Set up an IP finder so this node can locate the other cluster members.
        final TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
        ipFinder.setAddresses(Collections.singletonList(cacheServerIp));
        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
        // Cache metrics log frequency; if 0, metrics logging is disabled.
        cfg.setMetricsLogFrequency(Integer.parseInt(cacheMetricsLogFrequency));
        // Set up the storage configuration.
        final DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        storageCfg.setStoragePath(cacheStorage);
        // Set up the data region for storage.
        final DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
        defaultRegion.setName(cacheDefaultRegionName);
        // Initial memory region size; when the used memory exceeds this value, new chunks of memory will be allocated.
        defaultRegion.setInitialSize(Long.parseLong(cacheRegionInitSize));
        storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
        cfg.setDataStorageConfiguration(storageCfg);
        cfg.setWorkDirectory(cacheStorage);
        final TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
        // Message queue limit for incoming and outgoing messages.
        communicationSpi.setMessageQueueLimit(Integer.parseInt(cacheTcpCommunicationSpiMessageQueueLimit));
        cfg.setCommunicationSpi(communicationSpi);
        final CacheCheckpointSpi cpSpi = new CacheCheckpointSpi();
        cfg.setCheckpointSpi(cpSpi);
        final FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
        // Execute all jobs sequentially by setting the parallel job number to 1.
        colSpi.setParallelJobsNumber(Integer.parseInt(cacheParallelJobs));
        cfg.setCollisionSpi(colSpi);
        // Failure handler: StopNodeFailureHandler stops the local node on critical failures (e.g. when the Ignite server stops/restarts).
        cfg.setFailureHandler(new StopNodeFailureHandler());
    };
}
App1 puts data into the cache, whereas App2 reads data from the cache. I have set the local IP, i.e. ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
So locally both apps, i.e. app1 and app2, connect to the cluster. But when I put the same config on the servers with the IP changed, i.e. ipFinder.setAddresses(Collections.singletonList("server1.com:47500..47509"));
the two apps, i.e. app1 and app2, do not connect to the cluster.
Does embedded mode work only when all apps, i.e. app1 and app2, are on the same machine?
Try using a static TcpDiscoveryVmIpFinder instead to isolate the issue. By default, TcpDiscoveryMulticastIpFinder tries to scan all available hosts to discover Ignite nodes, and depending on timeouts this can take a while.
Assuming both of your nodes are still running on the same machine, you can keep the localhost configuration: "127.0.0.1:47500..47509". "server1.com:47500..47509" should also work if the DNS name "server1.com" resolves to the correct IP address; the best way to check is to run a ping command and see how localhost and server1.com are resolved.
If you are running on different machines, then you need a list of addresses rather than a singleton: "server1.com:47500..47509", "server2.com:47500..47509", etc.
It's also recommended to check whether the ports are open, and to configure localHost explicitly if there are many different interfaces available.
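A minimal sketch of the static discovery wiring suggested above, inside the same config() lambda as in the question (the second hostname and the local address are placeholders, not from the original config):
final TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("server1.com:47500..47509", "server2.com:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
// If the machine has several network interfaces, pin the one to use:
cfg.setLocalHost("10.0.0.5"); // placeholder address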
Related
I just implemented a distributed lock using Apache Curator and ZooKeeper in standalone mode.
I initialized the CuratorFramework as follows:
CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2182", retryPolicy);
Everything worked fine, so I tried to use ZooKeeper in cluster mode. I started three instances and initialized the CuratorFramework as follows:
CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2182,localhost:2183,localhost:2184", retryPolicy);
As you can see, I just added the addresses of the two new nodes.
So far so good. But how do I initialize the client when I don't know the addresses of each node or the size of the cluster, because I want to scale it dynamically?
I could initialize it by specifying only the address of the first node, which will always be started. But if that node goes down, Curator loses the connection to the whole cluster (I just tried it).
CuratorFrameworkFactory has a builder that allows you to specify an EnsembleProvider instead of a connectionString and to include an EnsembleTracker. This will keep your connectionString up to date, but you will need to persist the data somehow to ensure your application can find the ensemble when it restarts. I recommend implementing a decorating EnsembleProvider that encapsulates a FixedEnsembleProvider and writes the config to a properties file.
Example:
EnsembleProvider ensemble = new MyDecoratingEnsembleProvider(new FixedEnsembleProvider("localhost:2182", true));
CuratorFramework client = CuratorFrameworkFactory.builder()
        .ensembleProvider(ensemble)
        .retryPolicy(retryPolicy)
        .ensembleTracker(true)
        .build();
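For illustration, a minimal sketch of such a decorating provider (MyDecoratingEnsembleProvider is a hypothetical name from the answer, and the properties-file persistence shown here is an assumption, not part of Curator's API):
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;
import org.apache.curator.ensemble.EnsembleProvider;

public class MyDecoratingEnsembleProvider implements EnsembleProvider {
    private final EnsembleProvider delegate;
    private final Path file = Paths.get("ensemble.properties");

    public MyDecoratingEnsembleProvider(EnsembleProvider delegate) {
        this.delegate = delegate;
    }

    @Override
    public void start() throws Exception {
        delegate.start();
    }

    @Override
    public String getConnectionString() {
        return delegate.getConnectionString();
    }

    @Override
    public void setConnectionString(String connectionString) {
        delegate.setConnectionString(connectionString);
        // Persist the latest ensemble so the app can find it after a restart.
        Properties props = new Properties();
        props.setProperty("connectionString", connectionString);
        try (OutputStream out = Files.newOutputStream(file)) {
            props.store(out, "last known ZooKeeper ensemble");
        } catch (IOException e) {
            // Best effort: the in-memory connection string is still updated.
        }
    }

    @Override
    public boolean updateServerListEnabled() {
        return delegate.updateServerListEnabled();
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}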
You should always know where your Zookeeper instances are. There's no way to connect to something when you don't know where it is - how could you?
If you can connect to any instance, you can get the configuration details and poll it regularly to keep your connection details up-to-date, perhaps?
Maybe take a look at https://zookeeper.apache.org/doc/r3.5.5/zookeeperReconfig.html#ch_reconfig_rebalancing
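If you go that route, a minimal sketch using Curator's dynamic-reconfiguration API (this assumes Curator 3+ against ZooKeeper 3.5+, where CuratorFramework exposes getConfig(); client is the CuratorFramework from the question):
// Fetch the live ensemble configuration from whichever instance we are connected to.
byte[] raw = client.getConfig().forEnsemble();
// The payload lists the ensemble members, e.g. "server.1=host:2888:3888:participant;2181".
String ensembleConfig = new String(raw, java.nio.charset.StandardCharsets.UTF_8);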
I am trying to connect to IBM MQ using JMS and a client channel definition table (CCDT). I was able to connect successfully to the queue manager when I specified the MQ properties individually.
But when I try to use the CCDT file, I get the below exception.
As the client channel definition table (CCDT) is used to determine the channel definitions used by client applications to connect to the queue manager, I didn't set the queue manager name.
ERROR> com.ssc.ach.mq.JMSMQReceiver[main]: errorMQJMS2005: failed to create MQQueueManager for ''
javax.jms.JMSException: MQJMS2005: failed to create MQQueueManager for ''
at com.ibm.mq.jms.services.ConfigEnvironment.newException(ConfigEnvironment.java:586)
at com.ibm.mq.jms.MQConnection.createQM(MQConnection.java:2110)
at com.ibm.mq.jms.MQConnection.createQMNonXA(MQConnection.java:1532)
at com.ibm.mq.jms.MQQueueConnection.<init>(MQQueueConnection.java:150)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:174)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:1066)
I am using the setCCDTURL(ccdt) method to set the CCDT URL.
private MQQueueConnectionFactory mqQueueConnectionFactory = new MQQueueConnectionFactory();
mqQueueConnectionFactory.setCCDTURL(ccdt);
queueConnection = mqQueueConnectionFactory.createConnection(username, pwd);
When I try to connect using the below configuration instead of the CCDT file, it connects to MQ.
mqQueueConnectionFactory.setHostName(host);
mqQueueConnectionFactory.setChannel(channel);
mqQueueConnectionFactory.setPort(port);
mqQueueConnectionFactory.setQueueManager(qManager);
mqQueueConnectionFactory.setTransportType(1);
Do I need to call setQueueManager as well along with the CCDT file, as the exception says failed to create MQQueueManager for ''?
The CCDT is not meant to be read in a text editor; it is a binary formatted file. One of the parameters in the CCDT for each CLNTCONN channel is QMNAME. Knowing what QMNAME is set to, how many CLNTCONN channels you have defined in the CCDT, and what you want to accomplish will help figure out what value, if any, should be specified with setQueueManager.
If there is only one CLNTCONN channel, then you could specify the following and it will connect using that single channel no matter what the QMNAME property is set to:
setQueueManager("*");
If there is more than one CLNTCONN channel in the file, each with a different QMNAME specified, then assuming the name matches the actual queue manager listening on the host and port associated with the channel, you would pass the queue manager name:
setQueueManager("QMGRNAME");
If there is more than one CLNTCONN channel in the file, each with the same QMNAME specified, where this name is not meant to reflect an actual queue manager listening on the host and port associated with each channel, this is known as a queue manager group. It is intended for when you want the client to connect to any number of different hosts and ports without needing to know which queue manager you are connecting to. In this case you would pass the queue manager group name prefixed with a *:
setQueueManager("*QMGRGROUPNAME");
Another variation of the above: if there is more than one CLNTCONN channel in the file, each with an all-blank (spaces) or NULL QMNAME specified, this is also known as a queue manager group. Again it is intended for when you want the client to connect to any number of different hosts and ports without needing to know which queue manager you are connecting to. In this case you would pass the queue manager name as either a single space or nothing at all:
setQueueManager(" ");
//or
setQueueManager("");
The last use case above would likely also work if you did not call setQueueManager at all.
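Tying this back to the code in the question, a minimal sketch of the single-channel wildcard case (ccdt, username, and pwd as in the question; no host, port, or transport type is set because those come from the CCDT):
MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
cf.setCCDTURL(ccdt);     // java.net.URL pointing at the binary CCDT file
cf.setQueueManager("*"); // match whatever QMNAME the selected CLNTCONN channel carries
Connection conn = cf.createConnection(username, pwd);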
If you want to view the contents of the CCDT, you can use the runmqsc command that comes as part of the MQ v8 and higher client or server install.
For Unix ksh/bash shells use the following:
export MQCHLLIB=PATH/OF/CCDT
export MQCHLTAB=NAME_OF_CCDT
runmqsc -n
For Windows use the following:
set MQCHLLIB=PATH/OF/CCDT
set MQCHLTAB=NAME_OF_CCDT
runmqsc -n
Once the runmqsc program has started and displayed "Starting local MQSC for 'NAME_OF_CCDT'.", you can run the following command to see the channel details:
DIS CHL(*)
Below is a more specific command to narrow the number of fields returned:
DIS CHL(*) QMNAME CONNAME
I haven't looked at it in a while, but I thought the correct format is:
MQQueueConnectionFactory qcf = new MQQueueConnectionFactory();
qcf.setQueueManager(qManager);
qcf.setCCDTURL(ccdt);
conn = qcf.createConnection(username, pwd);
In my current setup, I'm using the default multicast option of the Hazelcast cluster manager. When I link the instances of my containerized Vert.x modules (via Docker networking links), I can see that they successfully create a Hazelcast cluster. However, when I publish events on the event bus from one module, the other module doesn't react to them. I'm not sure how the network settings of the Hazelcast cluster relate to the network settings of the event bus.
At the moment, I have the following programmatic configuration for each of my Vert.x modules, each deployed inside a Docker container.
ClusterManager clusterManager = new HazelcastClusterManager();
VertxOptions vertxOptions = new VertxOptions()
        .setClustered(true)
        .setClusterManager(clusterManager);
vertxOptions.setEventBusOptions(new EventBusOptions()
        .setClustered(true)
        .setClusterPublicHost("application"));
The Vert.x Core manual states that I may have to configure clusterPublicHost, and clusterPublicPort for the event bus, but I'm not sure how those relate to the general network topology.
One answer is here: https://groups.google.com/d/msg/vertx/_2MzDDowMBM/nFoI_k6GAgAJ
I see this question come up a lot, and what a lot of people miss in the documentation (myself included) is that the event bus does not use the cluster manager to send event bus messages. I.e. in your example with Hazelcast as the cluster manager, you have the Hazelcast cluster up and communicating properly (so your cluster manager is fine); however, the event bus is failing to communicate with your other Docker instances due to one or more of the following:
It is attempting to use an incorrect IP address for the other node (i.e. the IP of the private interface on the Docker instance, not the publicly mapped one).
It is attempting to communicate on a port Docker is not configured to forward (the event bus picks a dynamic port if you don't specify one).
What you need to do is:
Tell Vert.x the IP address that the other nodes should use to talk to each instance (using the -cluster-host [command line], setClusterPublicHost [VertxOptions], or "vertx.cluster.public.host" [system property] options).
Tell Vert.x explicitly the port to use for event bus communication, and ensure Docker is forwarding traffic for those ports (using the "vertx.cluster.public.port" [system property], setClusterPublicPort [VertxOptions], or -cluster-port [command line] options). In the past, I have used 15701 because it is easy to remember (just a '1' in front of the Hazelcast ports).
The event bus only uses the cluster manager to manage the IP/port information of the other Vert.x instances and the registration of the consumers/producers. The communications are done independently of the cluster manager, which is why you can have the cluster manager configured properly and communicating, but still have no event bus communications.
You may not need to do both of the steps above if both your containers are running on the same host, but you definitely will once you start running them on separate hosts.
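A minimal sketch of what the quoted answer describes, assuming the Vert.x 3.x API where these setters live on VertxOptions (the host value is a placeholder; 15701 is the port suggested in the answer):
ClusterManager clusterManager = new HazelcastClusterManager();
VertxOptions options = new VertxOptions()
        .setClustered(true)
        .setClusterPublicHost("203.0.113.10") // the publicly mapped address other nodes should dial
        .setClusterPublicPort(15701)          // a fixed port, so Docker can be told to forward it
        .setClusterManager(clusterManager);
Vertx.clusteredVertx(options, res -> { /* deploy verticles here */ });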
Something else that can happen is that Vert.x uses the loopback interface when you don't specify which IP Vert.x (not Hazelcast) should use to communicate over the event bus. The problem is that you don't know which interface is used for the communication (loopback, an interface with an IP; you could even have multiple interfaces with IPs).
To overcome this problem, I once wrote a method: https://github.com/swisspush/vertx-cluster-watchdog/blob/master/src/main/java/org/swisspush/vertx/cluster/ClusterWatchdogRunner.java#L101
The cluster manager works fine. The cluster manager configuration has to be the same on each node (machine/Docker container) in your cluster, or you can make no configuration changes at all (use your cluster manager's default configuration).
You also have to make the event bus configuration consistent on each node: set the cluster host on each node to the IP address of that node itself, plus an arbitrary port number (unless you try to run more than one Vert.x instance on the same node, in which case you have to choose a different port number for each Vert.x instance).
For example, if a node's IP address is 192.168.1.12, then you would do the following:
VertxOptions options = new VertxOptions()
        .setClustered(true)
        .setClusterHost("192.168.1.12") // this node's IP
        .setClusterPort(17001) // any arbitrary port, but make sure no other Vert.x instance uses the same port on the same node
        .setClusterManager(clusterManager);
On another node whose IP address is 192.168.1.56, you would do the following:
VertxOptions options = new VertxOptions()
        .setClustered(true)
        .setClusterHost("192.168.1.56") // the other node's IP
        .setClusterPort(17001) // OK, because this is a different node
        .setClusterManager(clusterManager);
I found this solution, which worked perfectly for me; below is my code snippet (the important part is options.setClusterHost()).
public class Runner {
    public static void run(Class clazz) {
        VertxOptions options = new VertxOptions();
        try {
            // For Docker binding: use this host's own address as the cluster host.
            String local = InetAddress.getLocalHost().getHostAddress();
            options.setClusterHost(local);
        } catch (UnknownHostException e) {
            // Fall back to Vert.x's default interface selection.
        }
        options.setClustered(true);
        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                res.result().deployVerticle(clazz.getName());
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
public class Publisher extends AbstractVerticle {
    public static void main(String[] args) {
        Runner.run(Publisher.class);
    }
    ...
}
No need to define anything else...
I'm trying to use ElastiCache as a memcached service with AWS's ElastiCache client library for Java.
The following code works for connecting to the cluster:
_client = new MemcachedClient(_serverList);
But any attempt I've made to use consistent hashing results in the memcached client failing to initialize:
_client = new MemcachedClient(new KetamaConnectionFactory(), _serverList);
or
ConnectionFactoryBuilder connectionFactoryBuilder = new ConnectionFactoryBuilder();
connectionFactoryBuilder.setLocatorType(Locator.CONSISTENT);
connectionFactoryBuilder.setHashAlg(DefaultHashAlgorithm.KETAMA_HASH);
connectionFactoryBuilder.setClientMode(ClientMode.Dynamic);
ConnectionFactory connectionFactory = connectionFactoryBuilder.build();
_client = new MemcachedClient(connectionFactory, _serverList);
Any attempt I've made to use anything but a vanilla MemcachedClient results in errors such as:
2015-04-07 07:00:32.914 WARN net.spy.memcached.ConfigurationPoller: The configuration is null in the server localhost
2015-04-07 07:00:32.914 WARN net.spy.memcached.ConfigurationPoller: Number of consecutive poller errors is 7. Number of minutes since the last successful polling is 0
Also, I've verified with telnet, the spymemcached libs, and the vanilla MemcachedClient constructor that the security groups are permissive.
When using the AWS client library, KetamaConnectionFactory defaults to the "dynamic" client mode, which tries to poll the list of available memcached nodes from the configuration endpoint. For this to work, your _serverList should only contain the configuration endpoint.
Your error message indicates the host was a "plain" memcached node, which doesn't understand the ElastiCache extensions. If this is what you intend (specifying the nodes yourself rather than using the autodiscovery feature), then you need to use the multiple-arg KetamaConnectionFactory constructor and pass in ClientMode.Static as the first argument, as sketched after the examples below.
You will need to use the AddrUtil.getAddresses() method.
_client = new MemcachedClient(new KetamaConnectionFactory(), AddrUtil.getAddresses("configEndpoint:port"));
or
ConnectionFactoryBuilder connectionFactoryBuilder = new ConnectionFactoryBuilder(new KetamaConnectionFactory());
// set any other properties you want on the builder
ConnectionFactory connectionFactory = connectionFactoryBuilder.build();
_client = new MemcachedClient(connectionFactory, AddrUtil.getAddresses("configEndpoint:port"));
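For the static case described above, a sketch reusing the builder calls from the question but with ClientMode.Static, so the listed nodes are treated as plain memcached servers rather than a configuration endpoint (the node addresses are placeholders):
ConnectionFactoryBuilder builder = new ConnectionFactoryBuilder();
builder.setClientMode(ClientMode.Static);
builder.setLocatorType(Locator.CONSISTENT);
builder.setHashAlg(DefaultHashAlgorithm.KETAMA_HASH);
_client = new MemcachedClient(builder.build(), AddrUtil.getAddresses("node1.example.com:11211,node2.example.com:11211"));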
I am trying to learn Cassandra and have set up a 4-node Cassandra cluster. I have written a client in Java using Hector, which currently connects to a hard-coded single node in the cluster. Ideally, I would like my client to connect to the "cluster" rather than a specific node... so if any of the 4 nodes are down, the client will still connect to something. From the client application's perspective, how does this work exactly? I can't seem to find a good explanation.
My Hector connection currently looks like this; I need to specify a specific node here:
Cluster c = getOrCreateCluster("Test Cluster", "cassandraNode1:9160");
My Cassandra nodes are all configured with rpc_address: 0.0.0.0.
If you pass a CassandraHostConfigurator to getOrCreateCluster(), you can specify multiple nodes as a comma-separated string:
public CassandraHostConfigurator(String hosts) {
    this.hosts = hosts;
}
...
String[] hostVals = hosts.split(",");
CassandraHost[] cassandraHosts = new CassandraHost[hostVals.length];
...
You can also toggle CassandraHostConfigurator#setAutoDiscoverHosts and #setUseAutoDiscoverAtStartup to use your initial host(s) to automatically add all hosts found via the Thrift API method describe_keyspaces. This makes configuration a bit easier in that you only need to reference a single host.
Keeping autoDiscover enabled (it is off by default) makes it a bit easier to scale out, as new nodes will be added as they are discovered. Adding nodes is also possible via JMX, so it can be done manually at any time, though you would have to do it once per Hector instance.
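For example, a sketch combining seed hosts with autodiscovery (the hostnames are placeholders; HFactory.getOrCreateCluster has an overload that accepts a CassandraHostConfigurator):
CassandraHostConfigurator hostConfig = new CassandraHostConfigurator("cassandraNode1:9160,cassandraNode2:9160");
hostConfig.setAutoDiscoverHosts(true);        // add ring members as they are discovered
hostConfig.setUseAutoDiscoverAtStartup(true); // run discovery once at startup as well
Cluster c = HFactory.getOrCreateCluster("Test Cluster", hostConfig);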