Java Memcached not replicating entries across instances

I am using the xmemcached-1.3.5 client jar to write entries to Memcached from my Java application. I have two Memcached server instances running on different machines.
When both servers are up, each entry is made on only one of the Memcached instances.
Is there any configuration that needs to be made to get the entries duplicated onto both instances?

Memcached does not provide replication. You will need to investigate repcached if you want your cached data replicated.
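To see why, here is a minimal sketch of a two-server setup with the xmemcached client the question mentions (host names are placeholders). Each key is hashed to exactly one server in the pool, so an entry is written to, and read back from, that one instance only:

    import net.rubyeye.xmemcached.MemcachedClient;
    import net.rubyeye.xmemcached.XMemcachedClientBuilder;
    import net.rubyeye.xmemcached.utils.AddrUtil;

    public class TwoServerDemo {
        public static void main(String[] args) throws Exception {
            // Two servers form a pool; keys are partitioned across it, not replicated.
            MemcachedClient client = new XMemcachedClientBuilder(
                    AddrUtil.getAddresses("host1:11211 host2:11211")).build();

            // This entry lands on exactly ONE of the two servers,
            // chosen by the client's hashing of the key.
            client.set("user:42", 3600, "some value");
            System.out.println((String) client.get("user:42"));

            client.shutdown();
        }
    }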

Related

Two Tomcats sharing the cache and supporting each other in failure situations

Please advise, as I am stuck with the following scenario: a client sends a request to an Apache server.
Behind it, two Tomcat servers run in parallel on different ports in a cluster, and the same war file is deployed on both.
Now suppose the client sends a request to the Apache server, Apache forwards it to the first Tomcat, a cache entry is written on that first Tomcat, and then the first Tomcat goes down after writing the cache.
Then a second request comes from the client, which again goes to the Apache server; Apache redirects it to the second Tomcat since the first one is down. How can the cache written on the first Tomcat also be made available to the second?
The purpose is that the application should not suffer even if the first Tomcat goes down. I am also looking for the best clustering setup for Tomcat, as I am expecting heavy traffic from end clients.
Have a look at distributed caches. They solve exactly this kind of challenge.
Options are:
Ehcache/Terracotta
Redis
Memcached
Hazelcast
Bozho's tech blog has a good and current overview. You might want to read it.
Update
The challenge you are facing is that the cached data needs to be available to all Tomcat instances and that the cached data needs to stay consistent if it is modified and queried on several Tomcat instances.
That's exactly what these distributed caches provide:
They write modified data to a shared storage or send it to other instances so it will be available to other instances if needed.
They keep the distributed data consistent so that no Tomcat instance uses outdated data or modifies it in a way that would break consistency.
Distributed caches usually use a two-stage concept where some of the data is kept in memory and everything is persisted on disk.
So by using a distributed cache, your second Tomcat instance can serve the clients of the first instance in case that instance goes down.
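As one possible illustration, here is a minimal sketch using the Jedis client for Redis (one of the options listed above; host, port, and key names are placeholders). Because the cache lives outside the Tomcat JVMs, whichever instance handles a request sees the same data:

    import redis.clients.jedis.Jedis;

    public class SharedCacheSketch {
        public static void main(String[] args) {
            // Both Tomcat instances would point at the same Redis server, so a
            // value cached by one instance is immediately visible to the other.
            try (Jedis jedis = new Jedis("redis-host", 6379)) {
                jedis.set("order:123", "cached order state"); // written by Tomcat 1
                System.out.println(jedis.get("order:123"));   // readable from Tomcat 2
            }
        }
    }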

Tomcat In-Memory Caching - replication on Cluster

I have a question on Tomcat clustering. I have a Java application in which we have implemented in-memory caching. Basically, when Tomcat starts, it loads a few objects from the database. These objects are stored in Tomcat's memory as static objects, so whenever we update something from the application, it writes to the database and also updates the object in memory.
My question is: if we implement clustering in Tomcat with 2 or more nodes, will those cached objects also be shared? Is that possible? I don't think it is. HttpSession objects can be shared using the session replication provided by Tomcat's DeltaManager or BackupManager, but can the in-memory data also be shared?
Additionally, what happens to batch jobs that are running? Will they run multiple times, since there will be multiple Tomcat instances in the cluster and each would trigger the job? That would be a problem as well.
Any thoughts / ideas?
If you save something in memory, it will not be replicated unless you implement something specifically to send it to the other machines. Each JVM keeps its memory independent of the others.
In general, if you want to have replicated caching, a good solution is to use ehcache (http://www.ehcache.org/).
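A rough sketch of basic Ehcache usage follows, assuming a cache named "appCache" is declared in an ehcache.xml on the classpath; the replication itself (e.g. via RMI or Terracotta) would be configured in that file, not in code:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class EhcacheSketch {
        public static void main(String[] args) {
            // Builds a manager from the default ehcache.xml on the classpath.
            CacheManager manager = CacheManager.newInstance();
            Cache cache = manager.getCache("appCache"); // assumed cache name

            cache.put(new Element("config:tax-rate", 0.21)); // cache a value
            Element hit = cache.get("config:tax-rate");
            System.out.println(hit == null ? "miss" : hit.getObjectValue());

            manager.shutdown();
        }
    }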
With regard to batch jobs, it depends on the library you use, but generally, if you use an established library (like http://www.quartz-scheduler.org/), it should be capable of making sure that only one instance runs the job. You may need to configure it for that, as sketched below.
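As a hedged sketch (the data source name and extra data-source properties are assumptions), this is roughly how Quartz's clustered JDBC job store is enabled; the scheduler instances then coordinate through a shared database so each job fires on only one node:

    import java.util.Properties;
    import org.quartz.Scheduler;
    import org.quartz.impl.StdSchedulerFactory;

    public class ClusteredSchedulerSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("org.quartz.scheduler.instanceName", "AppScheduler");
            props.setProperty("org.quartz.scheduler.instanceId", "AUTO"); // unique id per node
            props.setProperty("org.quartz.threadPool.threadCount", "3");
            props.setProperty("org.quartz.jobStore.class",
                    "org.quartz.impl.jdbcjobstore.JobStoreTX");           // DB-backed job store
            props.setProperty("org.quartz.jobStore.isClustered", "true"); // coordinate via the DB
            props.setProperty("org.quartz.jobStore.dataSource", "appDS"); // assumed name; the
            // org.quartz.dataSource.appDS.* properties (driver, URL, credentials) and the
            // Quartz tables must also be set up for this to run.

            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
            scheduler.start(); // jobs scheduled here fire on only one node of the cluster
        }
    }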
The important thing is to test to make sure that any solution you put in place actually does what you expect it to do.
Good luck!
Whenever you move to a cluster or a cluster-like topology, you need to revise your application's design/architecture to make sure it supports multi-instance execution.
Data cached in memory by a given Tomcat instance WILL NOT be shared across instances in the cluster. You will need to move such data outside the Tomcat instance to a shared cache instance; Redis seems to be a popular option these days.
Job execution probably needs to be revised and customized to be driven by configuration. Create a Boolean flag your app can read and kick off batch processing only if it is set. Select the node within the cluster you need the job to run on and set the flag to true there; set it to false on all other nodes (see the sketch below). Note that Quartz WILL NOT ensure/control/manage multiple instances of a job running in a cluster unless you configure its clustered JDBC job store.
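A minimal sketch of the flag-driven approach just described (the property name is made up for illustration):

    public class BatchLauncher {
        public static void main(String[] args) {
            // Hypothetical flag, e.g. passed as -Dapp.batch.enabled=true
            // only on the node elected to run the batch jobs.
            boolean batchEnabled = Boolean.getBoolean("app.batch.enabled");
            if (batchEnabled) {
                System.out.println("This node runs the batch jobs.");
                // kick off batch processing here
            } else {
                System.out.println("Batch jobs are disabled on this node.");
            }
        }
    }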

get servlet context variable data in tomcat cluster environment

Currently I am developing a module to display the list of online users in my application, using Comet streaming technology. When a user logs in, I put data in a map and then send the data to a message queue. The message queue is stored in the servlet context.
The problem I am facing is that this works in my local environment but not in production, because in production I have set up a Tomcat cluster, so data set in the servlet context of Tomcat 1 is not accessible in Tomcat 2.
I have already developed the module but have not found a way to solve the issue above. I googled and found that Tomcat doesn't support servlet context replication.
I also have one doubt: how many JVM instances will be created for a web application in a Tomcat cluster, e.g. a cluster of two Tomcats?
I would not use servlet context to store data for a cluster. The common pattern is to use a database for data that must be shared across different servers.
For your use case, there is no need to persist the values between runs, so a database is not necessarily a nice solution, even if it is easy to set up. IMHO what you need is just a shared data cache, or better, an in-memory data grid. Hazelcast should be easy to use for your requirements. If I understand them correctly, what you need is a distributed map, with a concatenation of node_id and session_id as key (or maybe simply session_id), and a user object as value.
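A minimal sketch of that distributed map with Hazelcast (map, key, and user names are placeholders): every node that starts such an instance on the same network joins the cluster and sees the same map contents.

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    public class OnlineUsersSketch {
        public static void main(String[] args) {
            // Each Tomcat node starts (or connects to) a Hazelcast instance;
            // instances discover each other and form a cluster.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // Distributed map: visible to every node in the cluster.
            Map<String, String> onlineUsers = hz.getMap("online-users");
            onlineUsers.put("session-abc123", "alice"); // session id -> user

            System.out.println("Online users: " + onlineUsers.values());
        }
    }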
In Tomcat 7 this requires writing a custom valve to force replication; the same is true in Tomcat 6. Refer to "Is there a useDirtyFlag option for Tomcat 6 cluster configuration?" to see how to do this.

Do clients need to worry about multiple memcache servers?

Question:-
Does the Java client need to worry about multiple servers?
Meaning:-
I have configured two servers in the memcached client, but when I set or get a key from the cache, do I need to provide any server-related info, or does memcached itself take care of it?
My knowledge:-
Memcached itself takes care of it, due to consistent hashing.
But does spymemcached 2.8.0 provide consistent hashing?
Memcached servers are pooling servers, meaning that you define a pool (a list) of servers, and when the Java client attempts a write, it writes towards the pool.
It's the client's job to decide which server from the pool will receive and store the value and how it will retrieve the value from that pool.
Basically this allows you to start with one Memcached server (possibly on the same machine) and if push comes to shove you can add a few dozen more servers to the pool without touching the application code.
Since the client is responsible for distributing data across the pool of servers (the client has to choose the right memcached server to store/fetch data), there are a few distribution algorithms.
One of the simplest is modulo. This algorithm distributes keys depending on the number of memcached servers in the pool. If the number of servers in the pool changes, the client won't be able to find the stored data and there will be cache misses. In such a case it's better to use consistent hashing.
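A toy sketch of why modulo distribution breaks when the pool size changes (server lists are placeholders): the same key can map to a different server once the pool grows, so a previously stored value is no longer found.

    import java.util.List;

    public class ModuloSketch {
        // Simple modulo distribution: key hash mapped onto the pool size.
        static int serverFor(String key, List<String> servers) {
            return Math.abs(key.hashCode() % servers.size());
        }

        public static void main(String[] args) {
            List<String> twoServers   = List.of("cache1:11211", "cache2:11211");
            List<String> threeServers = List.of("cache1:11211", "cache2:11211", "cache3:11211");

            String key = "user:42";
            // After adding a third server, the chosen index can change,
            // which means a cache miss for data stored under the old layout.
            System.out.println("2 servers -> " + twoServers.get(serverFor(key, twoServers)));
            System.out.println("3 servers -> " + threeServers.get(serverFor(key, threeServers)));
        }
    }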
The most popular Java memcached clients, spymemcached and xmemcached, both support consistent hashing.
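For example, a minimal spymemcached sketch with its Ketama (consistent hashing) connection factory (host names are placeholders); with consistent hashing, adding or removing a server remaps only a small fraction of the keys:

    import java.net.InetSocketAddress;
    import java.util.List;
    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.KetamaConnectionFactory;
    import net.spy.memcached.MemcachedClient;

    public class KetamaSketch {
        public static void main(String[] args) throws Exception {
            List<InetSocketAddress> servers =
                    AddrUtil.getAddresses("host1:11211 host2:11211"); // placeholder hosts
            // Ketama = consistent hashing across the server pool.
            MemcachedClient client =
                    new MemcachedClient(new KetamaConnectionFactory(), servers);

            client.set("user:42", 3600, "some value"); // client picks the server
            System.out.println(client.get("user:42"));
            client.shutdown();
        }
    }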
In some use cases instead of directly using the memcached client, caching can be added to a spring application through AOP (interceptors) using simple-spring-memcached or Spring 3.1 Cache Abstraction. Spring Cache currently doesn't support memcached but simple-spring-memcached provides such integration in snapshot build and upcoming 3.0.0 release.
The memcached client manages storing and retrieving keys/values by itself.
While storing, it hashes the key and stores the value on the server that the hash points to.
While retrieving, it hashes the given key again to find which server the value was stored on, and then fetches it; this lookup takes some time.
Instead, there is one approach which can be used for storing and retrieving:
create a HashMap and store each key with the server address as its value. The next time the same key is needed, instead of hashing again, you get the server address directly from the HashMap and fetch the value from that server only. This way you can save the server-lookup time.
Hope you understand what I mean.
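A literal sketch of the lookup-table idea this answer describes (class and method names are made up for illustration); note that the memcached client's own hash function already provides this key-to-server mapping very cheaply:

    import java.util.HashMap;
    import java.util.Map;

    public class KeyLocationCache {
        // Remembers which server each key was stored on, so a later read
        // can look up the address instead of re-hashing the key.
        private final Map<String, String> keyToServer = new HashMap<>();

        public void remember(String key, String serverAddress) {
            keyToServer.put(key, serverAddress);
        }

        public String lookup(String key) {
            return keyToServer.get(key); // null if this key was never stored
        }
    }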

Sharing properties file from a single location

We have Java Enterprise applications deployed on multiple servers. There are replicated servers running the same application for load balancing (let's call them J2EE servers). Note that this is not clustered.
There is a common server (let's call it the props server) which hosts all properties files relevant to all applications. The folder containing the properties files is NFS-shared with all the other J2EE servers.
The issue is that, as you can see, the props server is a single point of failure. If it does not come up, or if the NFS share gets corrupted, the other servers won't be able to load their properties.
What are the options to avoid this hard dependency?
Given the constraint that we do not want to duplicate property files to all servers.
If you are having this problem, the more scalable solution would be to look into using this:
http://java.sun.com/j2se/1.4.2/docs/guide/lang/preferences.html
This abstracts away things like where they are located. You can then have these settings stored in an LDAP server, cloned properties, or whatever is best - you can even use different mechanisms for different environments.
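A minimal sketch of the java.util.prefs Preferences API (node path, keys, and values are placeholders); the backing store (OS registry, files, or LDAP via a custom provider) is chosen by the PreferencesFactory implementation, not by this code:

    import java.util.prefs.Preferences;

    public class PrefsSketch {
        public static void main(String[] args) {
            // Per-user preference node for a hypothetical application.
            Preferences prefs = Preferences.userRoot().node("com/example/myapp");

            prefs.put("db.url", "jdbc:postgresql://db-host/app");           // write a setting
            String url = prefs.get("db.url", "jdbc:postgresql://localhost/app"); // read with default
            System.out.println(url);
        }
    }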
One approach would be for every J2EE server to have a cloned set of configuration files. This implies the constraint that every time a config is changed for one server, it must be rsync-ed to all the others (after the change is known to be OK).
The positive aspect is clear: you really have N independently configurable servers, and a bad config change kills (if it kills anything) only one server.
The negative aspect is that sometimes someone will forget to do 'rsync' & 'bounce' after a config change on a single box.
Given the constraint that we do not want to duplicate property files to all servers.
If you are OK with copying properties to some servers, electing a leader, and making sure any modification is propagated to the backups, then Paxos is your friend. If the leader fails, a new leader can be elected. I have updated the Wikipedia page; it contained errors in the description of the algorithm.
Take a look at the Paxos algorithm. It is designed to bring multiple servers to consensus.
http://en.wikipedia.org/wiki/Paxos_algorithm
