How do you use consistent hashing with the Java ElastiCache libs?

I'm trying to use ElastiCache as a memcached service with AWS's ElastiCache client library for Java.
The following code works for connecting to the cluster:
_client = new MemcachedClient(_serverList);
But any attempt I've made to use consistent hashing results in memcache client failing to initialize:
_client = new MemcachedClient(new KetamaConnectionFactory(), _serverList);
or
ConnectionFactoryBuilder connectionFactoryBuilder = new ConnectionFactoryBuilder();
connectionFactoryBuilder.setLocatorType(Locator.CONSISTENT);
connectionFactoryBuilder.setHashAlg(DefaultHashAlgorithm.KETAMA_HASH);
connectionFactoryBuilder.setClientMode(ClientMode.Dynamic);
ConnectionFactory connectionFactory = connectionFactoryBuilder.build();
_client = new MemcachedClient(connectionFactory, _serverList);
Any attempt I've made to use anything but a vanilla MemcachedClient results in errors such as:
2015-04-07 07:00:32.914 WARN net.spy.memcached.ConfigurationPoller: The configuration is null in the server localhost
2015-04-07 07:00:32.914 WARN net.spy.memcached.ConfigurationPoller: Number of consecutive poller errors is 7. Number of minutes since the last successful polling is 0
Also, I've verified with telnet, the spymemcached libs, and the vanilla MemcachedClient constructor that the security groups are permissive.

When using the AWS client library, KetamaConnectionFactory defaults to "dynamic" client mode, which tries to poll the list of available memcached nodes from the configuration endpoint. For this to work, your _serverList should contain only the configuration endpoint.
Your error message indicates the host was a "plain" memcached node which doesn't understand the ElastiCache extensions. If this is what you intend (specifying the nodes yourself rather than using the auto-discovery feature), then you need to use the multiple-arg KetamaConnectionFactory constructor and pass ClientMode.Static as the first argument, as in the sketch below.
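A minimal sketch of static mode, assuming the remaining constructor arguments mirror the spymemcached defaults exposed on DefaultConnectionFactory (the node addresses are placeholders):
// Static client mode: the node list is maintained by hand, no auto discovery.
// Assumption: constructor signature per the AWS fork of spymemcached;
// adjust the queue/buffer/block-time arguments to match your version.
_client = new MemcachedClient(
        new KetamaConnectionFactory(ClientMode.Static,
                DefaultConnectionFactory.DEFAULT_OP_QUEUE_LEN,
                DefaultConnectionFactory.DEFAULT_READ_BUFFER_SIZE,
                DefaultConnectionFactory.DEFAULT_OP_QUEUE_MAX_BLOCK_TIME),
        AddrUtil.getAddresses("node1.example.com:11211 node2.example.com:11211"));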

You will need to use the AddrUtil.getAddresses() method.
_client = new MemcachedClient(new KetamaConnectionFactory(), AddrUtil.getAddresses("configEndpoint:port"));
or
ConnectionFactoryBuilder connectionFactoryBuilder = new ConnectionFactoryBuilder(new KetamaConnectionFactory());
// set any other properties you want on the builder
ConnectionFactory connectionFactory = connectionFactoryBuilder.build();
_client = new MemcachedClient(connectionFactory, AddrUtil.getAddresses("configEndpoint:port"));

Related

Apache Ignite embedded cluster mode

I have run two applications in embedded mode with the following config:
public IgniteConfigurer config() {
    return cfg -> {
        // Start as a server node (client mode disabled).
        cfg.setClientMode(false);
        // Peer class loading is disabled: classes must be deployed on every node.
        cfg.setPeerClassLoadingEnabled(false);
        // Set up an IP finder so this node can locate the others.
        final TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
        ipFinder.setAddresses(Collections.singletonList(cacheServerIp));
        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
        // Cache metrics log frequency; if 0, metrics logging is disabled.
        cfg.setMetricsLogFrequency(Integer.parseInt(cacheMetricsLogFrequency));
        // Set up the storage configuration.
        final DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        storageCfg.setStoragePath(cacheStorage);
        // Set up the data region for storage.
        final DataRegionConfiguration defaultRegion = new DataRegionConfiguration();
        defaultRegion.setName(cacheDefaultRegionName);
        // Initial memory region size; when used memory exceeds it, new chunks are allocated.
        defaultRegion.setInitialSize(Long.parseLong(cacheRegionInitSize));
        storageCfg.setDefaultDataRegionConfiguration(defaultRegion);
        cfg.setDataStorageConfiguration(storageCfg);
        cfg.setWorkDirectory(cacheStorage);
        final TcpCommunicationSpi communicationSpi = new TcpCommunicationSpi();
        // Message queue limit for incoming and outgoing messages.
        communicationSpi.setMessageQueueLimit(Integer.parseInt(cacheTcpCommunicationSpiMessageQueueLimit));
        cfg.setCommunicationSpi(communicationSpi);
        final CacheCheckpointSpi cpSpi = new CacheCheckpointSpi();
        cfg.setCheckpointSpi(cpSpi);
        final FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
        // Execute all jobs sequentially by setting the parallel job number to 1.
        colSpi.setParallelJobsNumber(Integer.parseInt(cacheParallelJobs));
        cfg.setCollisionSpi(colSpi);
        // Stop the node on critical failures so it can be restarted cleanly.
        cfg.setFailureHandler(new StopNodeFailureHandler());
    };
}
App1 puts data in the cache whereas App2 reads data from it. Locally I set ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
and both apps (app1 and app2) joined the same cluster. When I put the same config on the servers with the IP changed, i.e. ipFinder.setAddresses(Collections.singletonList("server1.com:47500..47509"));
the two apps did not join the cluster.
Does embedded mode work only when all apps (app1 and app2) are on the same machine?
Try using a static TcpDiscoveryVmIpFinder instead to isolate the issue. By default, TcpDiscoveryMulticastIpFinder tries to scan all available hosts to discover Ignite nodes, and depending on timeouts this might take a while.
Assuming both of your nodes are still running on the same machine, you can keep the localhost configuration "127.0.0.1:47500..47509". "server1.com:47500..47509" should also work if the DNS name server1.com resolves to the correct IP address; the easiest check is to run ping and compare how localhost and server1.com resolve.
If you are running on different machines, then you need a list of addresses rather than a singleton: "server1.com:47500..47509", "server2.com:47500..47509", etc.
It's also recommended to check that the ports are open, and to configure localHost explicitly if there are many different interfaces available; see the sketch below.
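For example, a minimal sketch replacing the multicast finder with a static one (a java.util.Arrays import is assumed; the hostnames and the localHost address are placeholders):
// Static IP finder listing every host that runs an embedded node.
final TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("server1.com:47500..47509", "server2.com:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
// With several network interfaces, bind the node explicitly.
cfg.setLocalHost("192.168.1.10"); // placeholder: this node's reachable address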

How do I initialize a CuratorFramework for a ZooKeeper cluster with dynamic size?

I just implemented a distributed lock using Apache Curator and ZooKeeper in standalone mode.
I initialized the CuratorFramework as follows:
CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2182", retryPolicy);
Everything worked fine, so I tried to use ZooKeeper in cluster mode. I started three instances and initialized the CuratorFramework as follows:
CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2181,localhost:2182,localhost:2183", retryPolicy);
As you can see, I just added the addresses of the two new nodes.
So far, so good. But how do I initialize the client when I don't know the address of each node, or the size of the cluster, because I want to scale it dynamically?
I could initialize it by only specifying the address of the first node, which will always be started. But if that node goes down, Curator loses the connection to the whole cluster (I just tried it).
CuratorFrameworkFactory has a builder that allows you to specify an EnsembleProvider instead of a connection string and to include an EnsembleTracker. This keeps your connection string up to date, but you will need to persist the data somehow so that your application can find the ensemble when it restarts. I recommend implementing a decorating EnsembleProvider that encapsulates a FixedEnsembleProvider and writes the config to a properties file; a sketch of such a decorator follows the example below.
Example:
EnsembleProvider ensemble = new MyDecoratingEnsembleProvider(new FixedEnsembleProvider("localhost:2182", true));
CuratorFramework client = CuratorFrameworkFactory.builder()
        .ensembleProvider(ensemble)
        .retryPolicy(retryPolicy)
        .ensembleTracker(true)
        .build();
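For illustration, a minimal sketch of such a decorating provider (the class name and the file location are illustrative; the methods shown are the EnsembleProvider contract in Curator 4+, so verify against your version):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.apache.curator.ensemble.EnsembleProvider;

public class MyDecoratingEnsembleProvider implements EnsembleProvider {
    private final EnsembleProvider delegate;
    private final Path stateFile = Paths.get("ensemble.properties"); // illustrative location

    public MyDecoratingEnsembleProvider(EnsembleProvider delegate) {
        this.delegate = delegate;
    }

    @Override
    public void start() throws Exception {
        delegate.start();
    }

    @Override
    public String getConnectionString() {
        return delegate.getConnectionString();
    }

    @Override
    public void setConnectionString(String connectionString) {
        // Called when the ensemble tracker observes a config change;
        // persist it so a restarted client can find the ensemble again.
        delegate.setConnectionString(connectionString);
        try {
            Files.write(stateFile, connectionString.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            // Best effort: keep running with the in-memory value.
        }
    }

    @Override
    public boolean updateServerListEnabled() {
        return delegate.updateServerListEnabled();
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}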
You should always know where your ZooKeeper instances are. There's no way to connect to something when you don't know where it is; how could you?
If you can connect to any instance, you can fetch the current configuration and poll it regularly to keep your connection details up to date, perhaps?
Maybe take a look at https://zookeeper.apache.org/doc/r3.5.5/zookeeperReconfig.html#ch_reconfig_rebalancing

Qpid receiver on Azure EventHub

I already have a working application based on Azure Event Hub. Now I need to write a Java receiver that connects to the existing infrastructure.
Existing configuration:
Event Hub > SomeName > Consumer Group > SomeGroupName
In the administrative console I cannot see any QUEUE or TOPIC definitions. Analyzing the working C# code, I can see that hub name + group name is enough to connect.
I have reconstructed a URL that allows me to connect from Java (and the connection works so far).
amqps://SomeName.servicebus.windows.net
So my questions:
1) When I specify the group name instead of a queue/topic, I get the exception The messaging entity 'sb://SomeName.servicebus.windows.net/SomeGroupName' could not be found. What model is used there instead of queues/topics?
2) How do I work with such an infrastructure from Apache Qpid?
Are you using the Event Hub created in the old portal or one created using the new portal?
EventHub is not a message bus, so there are no queues or topics; that is correct.
The consumer group is not part of the address. The address is built from the namespace and the name of the Event Hub in that namespace.
So the address becomes:
sb://SomeNameSpaceName.servicebus.windows.net/SomeEventHubName
Can you post the C# code you've analyzed? Since you have an already-working application, maybe we can work out the differences that prevent it from working now.
The greatest hint for resolving the question came from the following link: http://theitjourney.blogspot.com/2015/12/sendreceive-messages-using-amqp-in-java.html
So there is neither a queue nor a topic in this model. You need to connect to the specific provider and specify the correct Event Hub as follows:
application.properties:
connectionfactory.SBCF=amqps://<PolicyName>:<PolicyKey>@<DomainName>.servicebus.windows.net
queue.EventHub=<EventHubName>/ConsumerGroups/$Default/Partitions/0
Where <PolicyName> and <PolicyKey> are the SAS policy name and key, <DomainName> is the Service Bus namespace, and <EventHubName> is the name of the Event Hub.
After that, the following code allowed me to create a MessageConsumer:
Hashtable<String, String> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.apache.qpid.amqp_1_0.jms.jndi.PropertiesFileInitialContextFactory");
env.put(Context.PROVIDER_URL,
        getClass().getResource("/application.properties").toString());
Context context = new InitialContext(env);
// Look up the ConnectionFactory and Destination defined in application.properties
ConnectionFactory cf = (ConnectionFactory) context.lookup("SBCF");
Destination queue = (Destination) context.lookup("EventHub");
// Create the Connection
Connection connection = cf.createConnection();
// Create the receiver-side Session and MessageConsumer
Session receiveSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer receiver = receiveSession.createConsumer(queue);
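From there, standard JMS calls work; a minimal usage sketch:
// Start the connection and poll for a single message.
connection.start();
Message message = receiver.receive(5000); // wait up to five seconds
if (message instanceof TextMessage) {
    System.out.println(((TextMessage) message).getText());
}
connection.close();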

org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed

I am using Apache ActiveMQ for queueing. We have started to see the following exception very often when writing things to the queue:
Caused by: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed:
at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:282)
at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:271)
at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:85)
at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:104)
at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
at org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
at org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1366)
I can't figure out what could be causing this, or even, frankly, where to start debugging it.
Here is the queue set up code:
camelContext = new DefaultCamelContext();
camelContext.setErrorHandlerBuilder(new LoggingErrorHandlerBuilder());
camelContext.getShutdownStrategy().setTimeout(SHUTDOWN_TIMEOUT_SECONDS);
routePolicy = new RoutePolicy();
routePolicy.setCamelContext(camelContext);
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
connectionFactory.setBrokerURL(queueUri);
// use a pooled connection factory between the module and the queue
pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);
// how many connections should there be in the session pool?
pooledConnectionFactory.setMaxConnections(this.maxConnections);
pooledConnectionFactory.setMaximumActiveSessionPerConnection(this.maxActiveSessionPerConnection);
pooledConnectionFactory.setCreateConnectionOnStartup(true);
pooledConnectionFactory.setBlockIfSessionPoolIsFull(false);
JmsConfiguration jmsConfiguration = new JmsConfiguration(pooledConnectionFactory);
jmsConfiguration.setDeliveryPersistent(false); // do not store a copy of the messages on the queue
ActiveMQComponent activeMQComponent = ActiveMQComponent.activeMQComponent(queueUri);
activeMQComponent.setConfiguration(jmsConfiguration);
camelContext.addComponent("activemq", activeMQComponent);
Component activemq = camelContext.getComponent("activemq");
// register endpoints for queues and topics
Endpoint queueEndpoint = activemq.createEndpoint("activemq:queue:polaris.*");
Endpoint topicEndpoint = activemq.createEndpoint("activemq:topic:polaris.*");
producerTemplate = camelContext.createProducerTemplate();
camelContext.start();
queueEndpoint.start();
topicEndpoint.start();
Like I said, the error doesn't suggest any direction for debugging, and it doesn't happen in 100% of cases, so I can't simply conclude that my configuration is set up incorrectly.
Recently I ran into the same problem and found https://issues.apache.org/jira/browse/AMQ-6600
The Apache ActiveMQ client throws InactivityIOException when one of the jars it needs is missing from the classpath. In my case it was hawtbuf-1.11.jar; when I added this jar to the classpath, it started to work without errors.
The broker's transport (OpenWire) port is 61616 by default; the web console (management) port is 8161. Change your broker URL to use port 61616 and run again, as in the sketch below.
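A sketch of the client side pointed at the transport port (the host is a placeholder; the wireFormat option is included only because it governs the inactivity monitor from the stack trace, and 30000 ms is illustrative):
// Connect to the OpenWire transport port, not the 8161 web console port.
// maxInactivityDuration tunes the inactivity monitor; 0 disables it.
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
        "tcp://localhost:61616?wireFormat.maxInactivityDuration=30000");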
Check whether a non-JMS client is pinging your JMS broker. This may be an external monitoring tool, a load-balancing tool such as keepalived, or something similar.

How do you connect to a Multi-Instance Queue Manager using MQQueueConnectionFactory

We have an application which needs to communicate with a multi-instance queue manager. Both instances run on the default port and have unique addresses.
serverA.internal.company.address
serverB.internal.company.address
We use the following code to establish the ConnectionFactory:
MQQueueConnectionFactory connectionFactory = new MQQueueConnectionFactory();
connectionFactory.setTransportType(1); // 1 = TCP/IP client transport
connectionFactory.setPort(1414);
connectionFactory.setChannel("CLIENTCONNECTION");
connectionFactory.setQueueManager("queue.manager.name.here");
connectionFactory.setHostName("serverA.internal.company.address");
How can we specify both addresses so that failover is achieved without writing our own retry logic?
Using the following:
connectionFactory.setConnectionNameList("serverA.internal.company.address(1414),"
+ "serverB.internal.company.address(1414)")
instead of
connectionFactory.setHostName("serverA.internal.company.address");
connectionFactory.setPort(1414);
did the trick for us.
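For reference, a sketch combining the connection name list with automatic client reconnection, which lets the client fail over without custom retry logic (constants come from com.ibm.msg.client.wmq.WMQConstants; the timeout value is illustrative):
// Try serverA first, then serverB, reconnecting automatically on failure.
connectionFactory.setConnectionNameList(
        "serverA.internal.company.address(1414),"
        + "serverB.internal.company.address(1414)");
connectionFactory.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
connectionFactory.setClientReconnectTimeout(600); // seconds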
You are on exactly the right track, but please do review this technote for more information.
http://www-01.ibm.com/support/docview.wss?uid=swg21508357
