Sharing a HashMap between different JVMs running on the same machine - java

I have some maps that contain data cached from the database. Currently five instances of the same server are running on the same machine, each in its own JVM. How can I share the maps between the JVMs? The cache is write-once, read-many. The problem right now is that, because of this cache, the JVM footprint is very big, so storing the map in every JVM consumes a lot of memory. I need a solution that does not consume much CPU time. Is there a way to do this similar to how class sharing is done between JVMs?
Thanks
Nikesh PL

Basically, you can't: those are two different address spaces.
You could serialize one and read it from the other, but that wouldn't be like sharing them.
How about a process to manage the cache, and a quick, low-bandwidth interface that your application programs can use to access the data?
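For illustration, here is a minimal sketch of what such a cache-manager process might look like, using a plain TCP socket on localhost as the quick, low-bandwidth interface. The CacheServer name, port 9999, and the one-key-per-line protocol are all invented for this example (and it uses Java 8 syntax for brevity):

```java
// Hypothetical sketch: one process owns the map; the five server JVMs
// query it over a local socket instead of each holding its own copy.
import java.io.*;
import java.net.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheServer {
    // Write-once, read-many cache; loaded from the DB at startup.
    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    public static void main(String[] args) throws IOException {
        CACHE.put("some-key", "some-value"); // stand-in for the real DB load
        try (ServerSocket server = new ServerSocket(9999, 50,
                InetAddress.getLoopbackAddress())) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    // Invented protocol: the client sends one key per line; the server
    // answers with the value, or an empty line if the key is absent.
    private static void handle(Socket socket) {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String key;
            while ((key = in.readLine()) != null) {
                out.println(CACHE.getOrDefault(key, ""));
            }
        } catch (IOException ignored) {
        }
    }
}
```

Each of the five server JVMs would then open a socket to localhost:9999 and look keys up on demand, so the map lives in only one address space.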

Why don't you look at Coherence, a project from Oracle? It's not free, but you can download and test it for free on a development system. It does precisely what you are looking for: it is used as a cache for storing database data, but is ultimately a map of keys and values. It's pretty simple to set up and use. Here's a link to get you started:
http://download.oracle.com/docs/cd/E13924_01/coh.340/e14135.pdf
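To give a feel for the API, here is a rough sketch of basic Coherence (3.x) usage. It assumes the Coherence jar is on the classpath, and the cache name "db-cache" is just an example:

```java
// Hedged sketch of the Coherence 3.x API; CacheFactory and NamedCache are
// real Coherence classes, the cache name is made up for illustration.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("db-cache");
        cache.put("user:42", "Nikesh");           // visible to every JVM in the cluster
        System.out.println(cache.get("user:42")); // any of the 5 JVMs can read it
        CacheFactory.shutdown();
    }
}
```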

Related

Big Redis setup

I need to store 32M records in Redis 3.0.1, each record needing around 422 KB, making a total of around 13 GB of information.
The information is stored on disk in a zipped hash list and serialized with Jackson Smile. I'm using Java 6, Jedis, and AIX.
I have a few questions:
Does that mean that the Redis process needs 13 GB of RAM?
Is this a manageable size for a single instance, or would you go for a cluster setup? I think we can have up to 4 servers. A cluster would mean revisiting the whole project and its dates, so please take that management impact into account as well.
Is there a better way of storing this amount of data?
Thanks
Carlos
Even if you use a Redis Cluster, all of your data still has to fit in memory (spread across the nodes).
With 13 TB of data (32M records x 422 KB each, as Alex pointed out, rather than 13 GB), and limited to 4 servers as you said, each server would need more than 3 TB of RAM...
Moreover, Redis stores data in memory in a format that is optimised for speed, and so it does not try very hard to reduce its size. It may take even more than 13 TB in practice.
That's why I would not recommend Redis in this case, or at least not Redis alone.
Maybe you should consider an alternative NoSQL database that offers fast response times while storing data on disk, like Couchbase (which keeps a built-in memory cache in front of its disk storage).
Or, if your use case allows it, an easier solution would be to add a Redis cache to your current architecture, without changing the database you already use. It will dramatically improve access speed for data that is in the cache (but won't speed up the first access). Whether it helps depends on whether the same data is likely to be requested more than once within a short period of time.
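To make the cache option concrete, here is a rough read-through sketch using the Jedis 2.x-era API mentioned in the question: check Redis first and fall back to the existing database on a miss. The class, the TTL, and the loadFromDatabase() helper are hypothetical stand-ins:

```java
// Hedged read-through cache sketch with Jedis; loadFromDatabase() stands in
// for whatever DAO the application already has.
import redis.clients.jedis.Jedis;

public class ReadThroughCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String get(String key) {
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                 // cache hit: no DB round trip
        }
        String value = loadFromDatabase(key);
        jedis.setex(key, 3600, value);     // keep it for an hour, for example
        return value;
    }

    private String loadFromDatabase(String key) {
        return "...";                      // the existing DB access goes here
    }
}
```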

Caching approach for a cluster of servers

I have a Java application deployed on a cluster of JBoss AS 5.1 which requires a lot (> 3 GB) of data to be cached.
Right now the server cluster has just 2 nodes (separate machines).
Here are specific requirements:
The data should not have to be loaded into the cache separately on each node (i.e., there should either be replication, or the cache should reside on a separate server).
The data should never expire.
Both of the above requirements are REALLY important for the application. I'd be thankful if suggestions were made with both of these in mind.
I should also add a third requirement:
ease of use
The application was initially using a HashMap. I tried replacing the HashMap with JBoss Cache 3.2.1 for its replication and thread-safety features, but I'm not really happy with JBoss Cache's performance. Also, when I load the data into the cache, the 8 GB of RAM is almost entirely used up (most of it by the cache entries).
I'd like to hear the experience of people who have handled such kind of caching scenario themselves. Thanks for your time in advance.
You could try GigaSpaces XAP: its data grid is a replicated cache, and it is highly performant.
http://www.gigaspaces.com/datagrid
If you want a cache that provides a Java HashMap interface and can easily support gigabytes of cache data, with no expiry, then check out Oracle Coherence. This would use the Coherence "distributed cache" option (which is the default configuration). For more info, see: http://coherence.oracle.com/
Elastic. Just add nodes. Auto-discovery. Auto-load-balancing. No data loss. No interruption. Every time you add a node, you get more data capacity and more throughput.
Use both RAM and flash. Transparently. Easily handle 10s or even 100s of gigabytes per Coherence node (e.g. up to a TB or more per physical server).
Automatic high availability (HA). Kill a process, no data loss. Kill a server, no data loss.
Datacenter continuous availability (CA). Kill a data center, no data loss.
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.
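Since NamedCache implements java.util.Map, swapping the application's HashMap for a Coherence cache can be as small a change as this sketch suggests (the cache name "app-cache" is illustrative):

```java
// Hedged sketch: only the construction changes; code holding Map-typed
// references keeps working. The 3.x NamedCache is a raw Map, hence the
// unchecked suppression.
import com.tangosol.net.CacheFactory;
import java.util.Map;

public class CacheSwap {
    @SuppressWarnings("unchecked")
    public static Map<String, Object> createCache() {
        // was: return new HashMap<String, Object>();
        return CacheFactory.getCache("app-cache"); // distributed across the cluster
    }
}
```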
I have 'solved' this problem before (work code, can't show you)... but, I can tell you this much:
with large volumes, a large amount of memory is used in overhead in HashMaps.
you can save a lot of memory by replacing java.util.* classes with smart uses of arrays.
every time you have memory allocations you also have to scan/collect that memory in the GC, so saving memory also improves performance.
Wherever you can, use arrays....
Edit: Apparently the concept of hash maps has been forgotten.... Has the Java implementation of HashMap made people believe it is the only way? A structured set of arrays, with a hash function and a binary search.... all basic structures... http://en.wikipedia.org/wiki/Hash_table
One array to add keys to, a parallel array to store the values in, and an int-based hash table to make fast lookups into the key array...
Computer Science - maybe second year ;-)
Edit again: I used the core concepts I have described in the JDOM project, here: https://github.com/hunterhacker/jdom/blob/master/core/src/java/org/jdom2/StringBin.java
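For readers who want the idea without digging through StringBin, here is a toy illustration of the structure described above: keys and values in parallel growing arrays, plus an int-based open-addressed table mapping hashes to array indexes. It is a sketch, not production code:

```java
// Toy version of the parallel-array idea: two growing arrays hold the data,
// an int[] table holds (index + 1) per slot, with 0 meaning "empty".
import java.util.Arrays;

public class ArrayBackedMap {
    private String[] keys = new String[16];
    private String[] values = new String[16];
    private int[] table = new int[32];   // entries are (index + 1); 0 = empty
    private int size;

    public void put(String key, String value) {
        int slot = findSlot(key);
        if (table[slot] != 0) {               // existing key: overwrite value
            values[table[slot] - 1] = value;
            return;
        }
        if (size == keys.length) {            // grow the parallel arrays
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        keys[size] = key;
        values[size] = value;
        table[slot] = ++size;
        if (size * 2 > table.length) rehash(); // keep load factor around 0.5
    }

    public String get(String key) {
        int slot = findSlot(key);
        return table[slot] == 0 ? null : values[table[slot] - 1];
    }

    // Linear probing: walk the table until we hit the key or an empty slot.
    private int findSlot(String key) {
        int slot = (key.hashCode() & 0x7fffffff) % table.length;
        while (table[slot] != 0 && !keys[table[slot] - 1].equals(key)) {
            slot = (slot + 1) % table.length;
        }
        return slot;
    }

    private void rehash() {
        table = new int[table.length * 2];
        for (int i = 0; i < size; i++) {
            table[findSlot(keys[i])] = i + 1;
        }
    }
}
```

Compared with java.util.HashMap, there are no per-entry Entry objects at all, which is where much of the overhead goes at large volumes.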

Memory-mapped files: pros and cons?

I need to share data between two Java applications running on the same machine (two different JVMs). To be clear, the data to be shared is large (about 7 GB). The applications must access the data very fast because they have to answer incoming queries at a very high rate. I don't want each application to hold its own copy of the data.
I've seen that one option is to use memory-mapped files. Application A gets the data from somewhere (let's say a database) and stores it in files. Then application B may access these files using java.nio. I don't know exactly how memory-mapped files work; I only know that the data is stored in a file and that this file (or part of it) is mapped to a region of memory (virtual memory?). So the two applications can read/write the data in memory, and the changes are automatically (I guess?) committed to the file. I also don't know whether there is a maximum size for a file to be entirely mapped in memory.
My first question is: what are the different possibilities for two applications to share data in this scenario (taking into account that the amount of data is very large and that access to it must be very fast)? To be clear, this question is not restricted to memory-mapped I/O; I just want to know what other ways there are to solve the same problem.
My second question is what are the pros and cons of using memory-mapped files?
Thanks
My first question is what are the different possibilities for two applications to share data?
As S.Lott points out, there are a lot of mechanisms:
OS-level message queues
OS-level POSIX shared memory segments (persist after process death)
OS-level memory mappings (could be anonymous or file-backed)
OS-level anonymous pipes (unidirectional)
OS-level named pipes (unidirectional)
OS-level sockets (bidirectional) -- whether AF_UNIX or AF_INET or AF_INET6
OS-level shared global memory -- suitable for multi-threaded programs
Storing data in files
Application-level message queues
Application-level blackboard-style tuplespaces
Application-level key/value stores
Application-level remote procedure call frameworks -- many are available
Application-level web-based frameworks
My second question is what are the pros and cons of using memory-mapped files?
Pros:
very fast -- depending upon how you access the data, potentially zero-copy mechanisms can be used to operate directly on the data with no speed penalties. Care must be taken to update objects in a consistent manner.
should be very portable -- available on Unix systems for probably 25 years (give or take), and apparently Windows has mechanisms too.
Cons:
Single-system sharing. If you want to distribute your application over multiple machines, shared memory isn't a great option. Distributed shared memory systems are available, but they feel very much like the wrong interface to my way of thinking.
Even on a single system, if the memory is located on a single NUMA node but needed to be accessed by processors from multiple nodes, the inter-node requests may significantly slow processing compared to giving each node their own segment of the memory.
You can't just store pointers -- everything must be stored as offsets to base addresses, because the memory may be mapped at different locations in different processes. I have no idea what this means for Java objects, though presumably someone smart did their best to make it transparent to Java programmers. If you're not using their provided mechanisms, then you probably must do the work yourself. (Without actual pointers in Java, perhaps this is not very onerous.)
Updating objects consistently has proven to be very difficult. Passing immutable objects in message-passing systems instead generally results in programs with fewer concurrency bugs. (Concurrent programming in Erlang feels very natural and straightforward; concurrent programming in more imperative languages tends to introduce a huge pile of new concurrency controls: semaphores, mutexes, spinlocks, monitors.)
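To make the memory-mapped option concrete, here is a minimal java.nio sketch: run it once with the argument "write" in one JVM, then without arguments in another, and the second JVM reads what the first wrote through the OS page cache. The file path and the fixed-offset layout (offsets instead of pointers, as noted above) are invented for the example:

```java
// Hedged sketch of file-backed shared memory between two JVMs.
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedBuffer {
    private static final int SIZE = 1024;          // toy size; can be gigabytes

    public static void main(String[] args) throws Exception {
        // Unix-style path for the example; pick any writable location.
        try (RandomAccessFile file = new RandomAccessFile("/tmp/shared.dat", "rw");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer buf =
                channel.map(FileChannel.MapMode.READ_WRITE, 0, SIZE);
            if (args.length > 0 && args[0].equals("write")) {
                buf.putLong(0, 42L);               // record count at offset 0
                buf.putLong(8, System.nanoTime()); // timestamp at offset 8
                buf.force();                       // flush dirty pages to the file
            } else {
                System.out.println("count = " + buf.getLong(0));
                System.out.println("stamp = " + buf.getLong(8));
            }
        }
    }
}
```

Note that a single map() call is limited to 2 GB (Integer.MAX_VALUE bytes) in Java, so a 7 GB data set would have to be split across several mapped regions.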
Memory-mapped files sound like a headache. A simpler and less error-prone option would be to use a shared database with a cluster-aware cache. That way only writes go down to the database and reads can be served from the cache.
As an example of how to do this in Hibernate, see http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-cache
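For a flavour of what that looks like, here is a hedged sketch of a Hibernate 3.x entity mapped into the second-level cache; it also assumes hibernate.cache.use_second_level_cache=true and a provider such as EhCache are set in the Hibernate configuration, and the Product entity is invented:

```java
// Hedged sketch: mark an entity as second-level cacheable.
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY) // write-once data: READ_ONLY is cheapest
public class Product {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}
```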

How to create a simple distributed Map for a Java application?

I am looking for a Map to share information between two instances of a Java web application running on separate machines. Reads and writes to this map need to be very fast and don't have to be transactional, i.e. it's OK if one instance sees stale data for a while.
Any recommendations?
I need to keep track of the last time a user did something in the application, so it's not terribly bad if this information is out of date. Speed and ease of use are important; I don't want writes to the Map to impact response times.
I would try Hazelcast, JGroups or Ehcache. All support a distributed map.
EDIT: Another option is to use RMI to a service running in one or the other JVM. This avoids the need for an additional library.
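As a taste of how little code the Hazelcast option needs: each web-app instance starts an embedded node, the nodes discover each other (multicast by default), and getMap() returns the shared map. The map name "last-seen" is just an example fitting the question's use case:

```java
// Hedged Hazelcast sketch: run the same code on both machines and they
// form a cluster sharing the "last-seen" map.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class LastActivityTracker {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        Map<String, Long> lastSeen = hz.getMap("last-seen"); // same name on both nodes

        lastSeen.put("user42", System.currentTimeMillis()); // visible cluster-wide
        System.out.println("user42 last seen at " + lastSeen.get("user42"));
    }
}
```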
Additionally, there is Memcached, which is very robust and has been proven over time.

memcached tomcat mysql on 1GB RAM

I am new to memcached and caching in general. I have a Java web application running on Ubuntu + Tomcat + MySQL on a VPS server with 1 GB of memory.
Does it make sense to add a memcached layer with about 256 MB for caching? Will this be too much load on the server? Which is more appropriate to cache: rendered HTML pages or database objects?
Please advise.
If you're going to cache pages, don't use memcached, use Varnish. However, there's a good chance that's not a great use of memory. Caching pages trades memory for computation and database work, but it costs quite a lot of memory per page, so it's best for cases where the computation and database work needed to produce a single page amount to a lot (or the pages are very small!). Also, consider that page caching won't be effective, or even possible, if you want per-user customisation on your pages (e.g. showing the number of items in a shopping cart). At least not without getting into some truly hairy shenanigans (edge-side includes, anyone?).
If you're not going to cache pages, and your app is on a single machine, then there's no point using memcached or similar. The point of cache servers like that is to make the memory on one machine work as a cache for another - like how a file server shares a disk, they're essentially memory servers. On a single machine, you might as well give all the memory to Java and cache objects on the heap.
Are you using an object-relational mapper? If so, see if it has any support for a second-level cache. The big three implementations (Hibernate, OpenJPA, and EclipseLink) all support in-memory caches. They're likely to do a much better job than you would if you did the caching yourself.
But if you're not using a mapper, you have no choice but to do the caching yourself. There are extension points in LinkedHashMap for building LRU caches, and then of course there's the people's favourite, SoftReference, in combination with a HashMap. Plus, there are probably cache implementations out there you could download and use; I'd be shocked if there wasn't something in the Apache Commons libraries.
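For reference, the LinkedHashMap extension point mentioned above looks like this: construct the map in access order and override removeEldestEntry to cap the size (class name and capacity are illustrative):

```java
// Classic LRU cache via LinkedHashMap's removeEldestEntry hook.
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);          // true = order by access, not insertion
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;      // evict the least-recently-used entry
    }
}
```

Wrap it with Collections.synchronizedMap if several request threads will share it.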
memcached won't add any noticeable load on your server, but it will be memory your app can't use. If you only plan to have a single app server for a while, you're better off using an in-JVM cache.
As for what to cache, the answer falls somewhere between the two extremes above. You don't want to cache exactly what's in your database, and you certainly don't want to cache the final output. You have a data-model representation in your application that isn't exactly what's in the DB (e.g. a User object might be assembled from multiple queries against a few different tables). Cache that kind of thing, as it's the most reusable.
There's lots of info in the memcached site that should help you understand and get going with caching in general and memcached specifically.
It might make sense to do that; why don't you try a smaller size, like 64 MB, and see how that goes? The more resources you give to memcached, the less there is for everything else. Try it and see what gives you the best performance.
