ehcache auto-discovery (via multicast) between 2 instances on the same host

I run 2 Tomcat instances on the same host. Each instance runs the same web application, which tries to replicate some ehcache caches via RMI. I use the auto-discovery configuration in ehcache so I don't have to explicitly define the hosts and the caches I want to replicate. The ehcache instances do not manage to find each other and communicate:
DEBUG (RMIBootstrapCacheLoader.java:211) - cache peers: []
DEBUG (RMIBootstrapCacheLoader.java:133) - Empty list of cache peers for cache org.hibernate.cache.UpdateTimestampsCache. No cache peer to bootstrap from.
If I try the same thing but this time run each Tomcat instance on a separate host (box), then everything works like a charm.
Am I doing something wrong, or isn't autodiscovery via multicast possible when the instances are on the same host?
My configuration uses the defaults presented in the RMI Distributed Caching documentation:
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
multicastGroupPort=4446, timeToLive=32"/>
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="port=40001, socketTimeoutMillis=2000"/>
And inside each cache region I want to replicate I have:
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="asynchronousReplicationIntervalMillis=500 " />
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory" />
Thanks.

Am I doing something wrong, or isn't autodiscovery via multicast possible when the instances are on the same host?
While I'm not really familiar with ehcache, I'd expect this to be possible; in fact, the RMI Distributed Caching documentation you mentioned provides an example of something similar at least (multiple nodes per host, though only one Tomcat instance): see the section Full Example.
Usually, though, you cannot open the same TCP port (40001 here) more than once per host: it is bound to the first application/service that allocates it. (There are exceptions, like TCP Port Sharing on Windows, but you'd have to specifically account for that.)
Consequently, if you are really using identical default configurations, the second Tomcat instance will fail to allocate TCP port 40001. This should manifest itself somewhere earlier in the Tomcat logs; have you had a thorough look already?
Simply using another free port for one Tomcat instance should solve the issue. You can see this in action in the ehcache.xml files of the Full Example mentioned above, where the port number is increased one by one, from 40001 up to 40006, one per node.
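For instance, the second instance could use an ehcache.xml identical to the first except for the listener port (a minimal sketch; 40002 is simply an assumed free port on the host):
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="port=40002, socketTimeoutMillis=2000"/>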


Hazelcast Client Only Configuration

Is it possible to create a client-only Hazelcast node? We have Hazelcast embedded in our Java applications and they use a common hazelcast.xml. This works fine; however, when one of our JVMs is distressed, it causes the other clustered JVMs to slow down and have issues. I want to run a Hazelcast cluster outside of our application stack and update the common hazelcast.xml to point to the external cluster. I have tried various config options, but the application JVMs always want to start a listener and become members. I realize I may be asking for something that defeats the purpose of Hazelcast; however, I thought it may be possible to configure an instance to be client-only.
Thanks.
You can change your application to use Hazelcast client instances, but it requires a code change.
Instead of
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
you'll need to initialize your instance by requesting a client one:
HazelcastInstance hz = HazelcastClient.newHazelcastClient();
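If you'd rather not hard-code addresses, the client can also be configured declaratively: HazelcastClient.newHazelcastClient() picks up a hazelcast-client.xml from the classpath. A minimal sketch (assuming Hazelcast 4.x; the member address is a placeholder for your external cluster):
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://www.hazelcast.com/schema/client-config
                  http://www.hazelcast.com/schema/client-config/hazelcast-client-config-4.0.xsd">
    <network>
        <cluster-members>
            <!-- placeholder: one or more members of the external cluster -->
            <address>10.0.0.1:5701</address>
        </cluster-members>
    </network>
</hazelcast-client>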
Another option is to keep the code unchanged and configure your embedded members to be "lite" ones, so they don't own any partitions (i.e., they don't store cluster data).
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.hazelcast.com/schema/config
http://www.hazelcast.com/schema/config/hazelcast-config-4.0.xsd">
<!--
===== HAZELCAST LITE MEMBER CONFIGURATION =====
Configuration element's name is <lite-member>. When you want to use a Hazelcast member as a lite member,
set this element's "enabled" attribute to true in that member's XML configuration. Lite members do not store
data and are used mainly to execute tasks and register listeners. They do not have partitions.
-->
<lite-member enabled="true"/>
</hazelcast>

Tomcat max connections on a per context basis

I have multiple web applications running under a single Tomcat container. Since they all run under a single Tomcat connector (as defined in the server.xml file), attributes such as maxConnections and maxThreads govern the container as a whole. As a result it is possible for a single application to consume all available Tomcat threads, starving the other applications of threads and making them unresponsive. I would like to be able to define the maximum http threads on a per context basis so that this is no longer possible.
Here's what I've tried so far:
Create a custom filter in the application that keeps track of the current thread count and limits additional connections. (I got the filter here: How to set limit to the number of concurrent request in servlet?). I'm not sure I like this solution, as it isn't as full-featured as what Tomcat provides to the container by default (attributes such as acceptCount, maxConnections, maxThreads, and minSpareThreads); adding those features in feels like rebuilding what already exists in Tomcat.
Create a separate Tomcat connector in the server.xml file for each context. This has a few issues. For one, each connector requires a separate port, which I'll have to account for in my Apache config. Secondly, I plan to add more webapps regularly; each addition would mean a config change followed by a Tomcat restart, which is disruptive to clients.
Has anyone else encountered something like this? I feel like there should be a "Tomcat supported" workflow to accomplish what I'm after.
I'm going to post an answer that was provided to me by the Tomcat user group: http://tomcat.apache.org/tomcat-9.0-doc/config/valve.html#Semaphore_Valve (the Semaphore Valve is not Tomcat 9 specific; it was actually introduced in Tomcat 6). I experimented with this concept and found the following practical applications:
(Untested) The Semaphore Valve should be able to be nested within the Host element in the server.xml file.
(Tested) A [context-name].xml file can be placed inside [tomcat-home]/conf/Catalina/localhost with the valve nested within the Context element.
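For the second, tested variant, such a per-context file might look like the following sketch (here "myapp" stands in for the actual context name):
<!-- [tomcat-home]/conf/Catalina/localhost/myapp.xml -->
<Context>
    <Valve className="org.apache.catalina.valves.SemaphoreValve"
           concurrency="10"
           fairness="true"/>
</Context>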
This is not necessarily the solution that I am going with, as more testing will need to be performed. However, I thought I'd add this as it is a potential answer to the problem.
Update:
As a recap, the SemaphoreValve was an option recommended to me on the Tomcat user mailing list as a solution to the issue described above. It turned out to be easier to implement than I anticipated. Adding the following to context.xml in the Tomcat/conf directory did the trick:
<Valve className="org.apache.catalina.valves.SemaphoreValve"
concurrency="10"
fairness="true" />
Thanks to Mark Thomas from the Apache group for supplying the solution.

Vertx Hazelcast: Cluster Problems

I'm working with Vert.x and Hazelcast to distribute my verticles across the network.
Now I have the problem that my co-worker also uses clustered verticles with the Hazelcast cluster manager.
Is there a way to prevent our verticles from seeing each other, to avoid side effects?
You can define Hazelcast Cluster Groups in your cluster.xml file.
Here's the manual section related to it:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#creating-cluster-groups
If you use multicast (the default config) for discovery, you can redefine the group name and password. Apart from that, you can choose any other discovery option supported by the Hazelcast version embedded in vert.x:
http://docs.hazelcast.org/docs/3.6/manual/html-single/index.html#discovering-cluster-members
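For example, each developer could select a distinct group in cluster.xml (a minimal sketch for Hazelcast 3.6; the name and password values are placeholders you'd choose yourself):
<hazelcast>
    <group>
        <name>dev-alice</name>
        <password>dev-alice-pass</password>
    </group>
</hazelcast>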
Old question, but still valid; here's a simple answer:
If you want to limit your vert.x system to a single server, i.e. not have the event bus leak out across your local network, the simplest thing to do is create a local copy of Hazelcast's cluster.xml on your classpath, i.e. copy and edit the vert.x source (see Git):
vertx-hazelcast/src/main/resources/default-cluster.xml
into a new file in your vert.x project:
src/main/resources/cluster.xml
The required change is in the <multicast> stanza, disabling that feature, while <tcp-ip> discovery is enabled and restricted to the loopback interface:
<hazelcast ...>
    ...
    <network>
        ...
        <join>
            ...
            <multicast enabled="false">
                ...
            </multicast>
            <tcp-ip enabled="true">
                <interface>127.0.0.1</interface>
            </tcp-ip>
        </join>
    </network>
</hazelcast>

Spring Integration sftp:inbound-channel-adapter with delete-remote-files=false

We are using the Spring Integration sftp:inbound-channel-adapter to transfer data from a remote host. We would like to keep the files on the remote host, so we tried the delete-remote-files=false option.
<int-sftp:inbound-channel-adapter
id="sftpInboundChannelAdapter"
channel="filesToParse"
session-factory="..."
remote-directory="..."
filename-pattern="..."
local-directory="..."
temporary-file-suffix=".tmp"
delete-remote-files="false"
auto-create-local-directory="true"
local-filter="localFileFilter"/>
Unfortunately these files are then processed multiple times. Is there a way of keeping the remote files and not processing them multiple times?
EDIT: this is because a subsequent process deletes the files on the local side. The local filter is currently:
<bean id="localFileFilter" class="org.springframework.integration.file.filters.AcceptAllFileListFilter"/>
Note that the AcceptOnceFileListFilter (which is in fact the default) will only prevent duplicates for the current execution; it keeps its state in memory.
To avoid duplicates across executions, you should use a FileSystemPersistentAcceptOnceFileListFilter configured with an appropriate metadata store.
Note that the PropertiesPersistingMetadataStore only persists its state to disk on a normal application context shutdown (close), so the most robust solution is Redis or MongoDB (or your own implementation of ConcurrentMetadataStore).
You can also call flush() on the PropertiesPersistingMetadataStore from time to time (or within the flow).
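As a sketch (the bean names, key prefix, and directory below are assumptions, not part of the original configuration), the local filter could be swapped for the persistent variant backed by a properties-file metadata store:
<bean id="localFileFilter"
      class="org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter">
    <constructor-arg ref="metadataStore"/>
    <!-- prefix used for the keys recorded in the metadata store -->
    <constructor-arg value="sftpFiles-"/>
</bean>

<bean id="metadataStore"
      class="org.springframework.integration.metadata.PropertiesPersistingMetadataStore">
    <!-- placeholder directory for the persisted properties file -->
    <property name="baseDirectory" value="/tmp/sftp-metadata"/>
</bean>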
I changed the filter: it now only retrieves them once.
<bean id="localFileFilter" class="org.springframework.integration.file.filters.AcceptOnceFileListFilter"/>

How are corbaloc: URLs load balanced?

I'm using JBoss EAP 5.1 and am connecting to remote EJBs; the java.naming.provider.url is set to:
corbaloc::server1:port,server2:port,server3:port,server4:port
How is this getting load balanced? It's not always going in first-to-last order, is it? Is it randomized somehow?
That depends entirely on who supplies the corbaloc: JNDI URL provider (there isn't one in the JDK, at least up to 1.6), but describing it as 'load balancing' begs the question. It would be more accurate to describe it as 'failover'.
In a clustered WebSphere environment you can have multiple name servers to talk to in the form you describe.
About your question, the documentation mentions that:
You can specify the bootstrap addresses for all servers in the cluster in the URL. The operation succeeds if at least one of the servers is running, eliminating a single point of failure. There is no guarantee of any particular order in which the address list will be processed. For example, the second bootstrap address may be used to obtain the initial context even though the server at the first bootstrap address in the list is available.
