Scalability of a Java EE application. How would you approach it?

I've been working on a solution for the financial industry. The main functionality of the application is to load massive input files, digest them, update state in a persistent store, and generate extracts from that store on request. Pretty straightforward.
The input files are large, industry-standard XML messages (hundreds of megabytes or more) containing many repeated entries. The persistent store is a relational database. The engine is implemented as a POJO-based Java application (with the Spring Framework as its backbone) deployable on a J2EE application server.
The question is about the scalability and performance of the solution. If the application processes entries from the XML in sequence, scalability is rather poor: there is no way to engage more than one instance of the application in processing a single file. This is why I introduced parallel processing of entries from the input XML file. Basically, the idea is to dispatch the processing of individual entries to workers from a pool, and I decided to use JMS for dispatching. The component that loads the file reads the stream, extracts individual entries, and feeds them to the dispatching queue. A number of concurrent consumers sit on the other end of the queue; each picks one message off the queue, processes the entry, and is immediately available to process the next one. This is quite similar to servlets within a web container.

What I find particularly powerful about this approach is that the workers can reside in separate instances of the application deployed on remote servers, as long as the queue is shared. Unfortunately, all workers connect to the same database that holds the persistent store, and this may become a bottleneck if the database server is not powerful enough to handle the load from concurrent workers.
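For illustration, here is a minimal sketch of such a dispatcher, assuming each record is an <entry> element and that sending one JMS text message per entry is acceptable; the queue, element name, and wiring are assumptions rather than details from the original system:

    import java.io.InputStream;
    import java.io.StringWriter;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stax.StAXSource;
    import javax.xml.transform.stream.StreamResult;

    /** Streams a large XML file and dispatches each <entry> fragment to a JMS queue. */
    public class EntryDispatcher {

        private final ConnectionFactory connectionFactory; // injected via Spring or looked up in JNDI
        private final Queue entryQueue;                    // hypothetical "entries" queue

        public EntryDispatcher(ConnectionFactory connectionFactory, Queue entryQueue) {
            this.connectionFactory = connectionFactory;
            this.entryQueue = entryQueue;
        }

        public void dispatch(InputStream xml) throws Exception {
            XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(xml);
            Transformer fragmentWriter = TransformerFactory.newInstance().newTransformer();

            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(entryQueue);

                int event = reader.getEventType();
                while (reader.hasNext()) {
                    if (event == XMLStreamConstants.START_ELEMENT
                            && "entry".equals(reader.getLocalName())) {
                        // Serialize the current <entry> subtree and send it as one message.
                        StringWriter fragment = new StringWriter();
                        fragmentWriter.transform(new StAXSource(reader), new StreamResult(fragment));
                        producer.send(session.createTextMessage(fragment.toString()));
                        event = reader.getEventType(); // reader is now positioned just past the fragment
                    } else {
                        event = reader.next();
                    }
                }
            } finally {
                connection.close();
            }
        }
    }

On the consuming side each worker is simply a message listener (for example a Spring DefaultMessageListenerContainer with a concurrency setting) that unmarshals the fragment and updates the database.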
What is your opinion of this architecture? Have you had to design a similar application? What were your design choices?

You can also have a look at Hadoop, a very handy platform for Map/Reduce jobs. The huge advantage is that all the infrastructure is provided by Hadoop, so you only need to add hardware nodes to scale. The Map and Reduce jobs have to be implemented only once; after that, you can feed your cluster a massive load.
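To make the idea concrete, a hedged sketch of a Map and Reduce pair in Java might look like this; the line-oriented input format and field layout are assumptions for illustration only:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    /** Hypothetical job: sum transaction amounts per account from line-oriented input. */
    public class AccountTotals {

        public static class AmountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                // Assumes each input line looks like "accountId,amountInCents".
                String[] fields = line.toString().split(",");
                context.write(new Text(fields[0]), new LongWritable(Long.parseLong(fields[1])));
            }
        }

        public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text account, Iterable<LongWritable> amounts, Context context)
                    throws IOException, InterruptedException {
                long total = 0;
                for (LongWritable amount : amounts) {
                    total += amount.get();
                }
                context.write(account, new LongWritable(total));
            }
        }
    }

Once the two functions are written, scaling out is mostly a matter of adding nodes to the cluster; the job setup boilerplate is omitted here.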

I think the architecture is generally sound. If the database is struggling with a high number of concurrent updates from the workers, you could introduce a second queue on the other "side" of the app: as each worker completes its task, it adds the result to that queue. A single worker process then periodically grabs the result objects from the second queue and updates the database in one large batch operation. That would reduce database concurrency and might increase the efficiency of the updates.
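A hedged sketch of such a drain worker, assuming the workers publish their results as simple text messages; the queue, message format, table, and column names are all assumptions:

    import java.sql.PreparedStatement;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.sql.DataSource;

    /** Drains the results queue and applies the accumulated updates as one JDBC batch. */
    public class ResultDrainer {

        private static final int BATCH_SIZE = 500;          // illustrative batch size
        private static final long RECEIVE_TIMEOUT_MS = 1000;

        private final ConnectionFactory connectionFactory;
        private final Queue resultQueue;                     // hypothetical "results" queue
        private final DataSource dataSource;

        public ResultDrainer(ConnectionFactory cf, Queue resultQueue, DataSource dataSource) {
            this.connectionFactory = cf;
            this.resultQueue = resultQueue;
            this.dataSource = dataSource;
        }

        public void drainOnce() throws Exception {
            javax.jms.Connection jms = connectionFactory.createConnection();
            jms.start();
            try (java.sql.Connection db = dataSource.getConnection()) {
                Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(resultQueue);
                db.setAutoCommit(false);

                // Table and columns are assumptions; adapt to the real schema.
                PreparedStatement update =
                        db.prepareStatement("UPDATE positions SET balance = ? WHERE account_id = ?");

                int pending = 0;
                TextMessage message;
                while (pending < BATCH_SIZE
                        && (message = (TextMessage) consumer.receive(RECEIVE_TIMEOUT_MS)) != null) {
                    // Assumes each worker serialized its result as "accountId:balance".
                    String[] result = message.getText().split(":");
                    update.setLong(1, Long.parseLong(result[1]));
                    update.setString(2, result[0]);
                    update.addBatch();
                    pending++;
                }
                if (pending > 0) {
                    update.executeBatch();
                    db.commit();
                }
            } finally {
                jms.close();
            }
        }
    }

In production you would probably use a transacted JMS session so that messages are only acknowledged once the database batch has committed.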

Also, take a look at the Terracotta clustering solution.

For parallel processing, as Mork0075 said, Hadoop is a great solution. Many companies use it for very large-scale log analysis, and an interesting project, Hive, has been built on top of Hadoop for data warehousing.
Anyway, I think your current design is quite scalable. As for your concern about all of the workers hitting the database, you can put another messaging queue between the workers and the database: workers put processing results on the queue, and you build another program that subscribes to the queue and updates the database. The drawback is that two queues might make the system too complicated; of course, you can just add another topic to the existing MQ system instead, which keeps things simpler. Another approach is a shared file system, such as NFS: each worker machine mounts the same directory on the shared file server and writes its processing results into a separate file there. Then you build a program that checks for new files and updates the database. In this approach you introduce another piece of infrastructure: the shared file server. You can judge which option is simpler in your case.

I recently spent some of my spare time investigating Spring Batch 2.0, the new version of the Java batch-processing engine built on the Spring Framework. For this release the Spring Batch team concentrated on concurrency and parallel execution. I must say it looks promising!

In answer to your questions:
What is your opinion of this architecture? Have you had to design a similar application? What were your design choices?
I think it's a good architecture, and you're right that the DB is your bottleneck. However, the design is flexible enough that you can control the amount of input to the database.
I have, and multi-threading across nodes works. I'm not entirely sure that Hadoop or another distributed processing system will give you much more than what you already have, since you're simply doing I/O against a database.
I've implemented something similar using JMS queues for centralized logging, and it worked quite well with less impact on the code than writing the logs to disk. I think it will work well for your application.

If you are already using Spring/Java EE, it is only natural to apply Spring Batch as the solution for your "concurrency architecture".
Two benefits right off the bat:
Spring Batch (starting from 2.0) implements partitioning: the framework takes care of partitioning the data into separate partition steps (StepExecution) and delegating the actual execution of those steps to multiple threads or other distributed systems (PartitionHandlers, e.g. TaskExecutorPartitionHandler or, for a more distributed setup, MessageChannelPartitionHandler, etc.).
Spring has a nice OXM package for dealing with XML, and Spring Batch has a StaxEventItemReader that extracts fragments from the input XML document, each corresponding to a record for processing (see the reader sketch below).
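A minimal sketch of wiring up such a reader in code; the Entry class, the "entry" fragment name, and the file-path parameter are assumptions about the document format:

    import javax.xml.bind.annotation.XmlRootElement;
    import org.springframework.batch.item.xml.StaxEventItemReader;
    import org.springframework.core.io.FileSystemResource;
    import org.springframework.oxm.jaxb.Jaxb2Marshaller;

    public class EntryReaderFactory {

        /** Hypothetical record class; the real one would mirror your XML entry structure. */
        @XmlRootElement(name = "entry")
        public static class Entry {
            public String id;
            public long amount;
        }

        /** Builds a reader that streams one <entry> fragment at a time from a huge XML file. */
        public static StaxEventItemReader<Entry> entryReader(String path) {
            Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
            marshaller.setClassesToBeBound(Entry.class);

            StaxEventItemReader<Entry> reader = new StaxEventItemReader<>();
            reader.setResource(new FileSystemResource(path));
            reader.setFragmentRootElementName("entry"); // assumed record element name
            reader.setUnmarshaller(marshaller);
            return reader;
        }
    }

The same reader can then be driven by a partitioned step, so that multiple threads, or remote workers behind a MessageChannelPartitionHandler, each process their own slice of the input.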
Give Spring Batch a try. Let me know if you have any questions, I'll be glad to help out.
EDIT:
Also look at Scala/Akka actors and/or Scala parallel collections. If your task can be sharded/partitioned/distributed, that is what the actor model is for.
If you'd like to consider a non-JVM solution, take a look at Erlang/OTP: simple and elegant.

Related

Run multiple spark queries in parallel in a multi-user environment on a static dataset

Is there a way to process different Spark SQL queries (read queries with different filters and group-bys) on a static dataset, received from the front end, in parallel rather than in FIFO order, so that users will not have to wait in a queue?
One way is to submit the queries from different threads of a thread pool, but then wouldn't the concurrent threads compete for the same resources, i.e. the RDDs?
Is there a more efficient way to achieve this using Spark or any other big data framework?
Currently, I'm using Spark SQL and the data is stored in Parquet format (200 GB).
I assume you mean different users submitting their own programs or spark-shell activities and not parallelism within the same application per se.
That being so, Fair Scheduler Pools or Spark Dynamic Resource Allocation would be the best bets. All to be found here https://spark.apache.org/docs/latest/job-scheduling.html
This area is somewhat hard to follow, because the documentation also notes the following:
"Note that none of the modes currently provide memory sharing across applications. If you would like to share data this way, we recommend running a single server application that can serve multiple requests by querying the same RDDs."
One can find opposing statements on Stack Overflow regarding this point. Apache Ignite is the kind of server application meant here, and it may well serve you too.
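If you do go the single-server-application route, a hedged sketch of serving concurrent user queries against a shared dataset with fair scheduler pools might look like this; the paths, pool names, and query are illustrative rather than taken from the question:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ConcurrentQueries {

        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("concurrent-queries")
                    .config("spark.scheduler.mode", "FAIR")   // fair scheduling between jobs
                    .getOrCreate();

            // The Parquet path is an assumption; point it at the real 200 GB dataset.
            Dataset<Row> data = spark.read().parquet("/data/events.parquet");
            data.createOrReplaceTempView("events");

            Runnable userQuery = () -> {
                // Each user thread runs its jobs in its own scheduler pool, so queries
                // are not forced into a single FIFO queue.
                spark.sparkContext().setLocalProperty("spark.scheduler.pool",
                        "pool-" + Thread.currentThread().getName());
                Dataset<Row> result = spark.sql("SELECT type, count(*) FROM events GROUP BY type");
                result.show();
            };

            new Thread(userQuery, "user-1").start();
            new Thread(userQuery, "user-2").start();
        }
    }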

General architecture for a long-running data-processing system in Java?

I've been asked to port a legacy data processing application over to Java.
The current version of the system is composed of a number of (badly written) Excel sheets. The sheets implement a big loop: a number of data sources are polled. These sources are a mixture of CSV files and XML-based web services.
The process is conceptually simple:
It's stateless: the calculations that run depend purely on the inputs. The results of the calculations are published (currently by writing a number of CSV files to some standard locations on the network).
Having published the results the polling cycle begins again.
The process will not need an admin GUI; however, it would be neat if I could implement some kind of web-based control panel. It would be nothing pretty and purely for internal use. The control panel would do little more than display stats about the source feeds and possibly force a refresh of the input feeds in the event of a problem. This component is purely optional in the first delivery round.
A critical feature of this system will be fault tolerance. Some of the input feeds are notoriously buggy; I'd like my system to be able to recover when some of the inputs are broken. In that case it would not be possible to update the output, so I'd like it to keep polling until the problem is resolved, possibly generating some XMPP messages to indicate the status of the system. Overall, the system should work without intervention for long periods of time.
Users currently have a custom client which polls the CSV files and which (hopefully) will not need to be rewritten. If I do this job properly, they will not notice that the engine that runs this system has been re-implemented.
I'm not a Java developer (I mainly do Python), but the JVM is a requirement in this case. My manager has given me generous time to learn.
What I want to know is how to begin architecting this kind of project. I'd like to make use of frameworks and good patterns where possible. Are there any big building blocks that might help me get a good-quality system running faster?
UPDATE0: Nobody has mentioned Spring yet - does this framework have a role to play in this kind of application?
You can use lots of big complex frameworks to "help" you do this. Learning these can be CV++.
In your case I would suggest you try making the system as simple as possible. It will perform better and be easier to maintain (it's also more likely to work).
So I would take each of the requirements and ask yourself: how simple can I make this? This is not about being lazy (you have to think harder), but it is good practice IMHO.
1) Write the code that processes the files; keep it simple, one class per task. You might find Apache Commons CSV and the rest of Apache Commons useful.
2) Then look at Java thread pools to create a runner that executes those classes as separate tasks; if a task errors, the runner can restart it (see the sketch after this list).
3) The best approach to start-up depends on the platform, but I'll assume your mention of Excel indicates a Windows PC. The simplest solution would therefore be to run the process runner from a Windows -> Startup menu item. A slightly better solution would be to use a Windows service wrapper. Alternatively, you could run this under something like Apache ACD.
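A minimal sketch of point 2: a scheduled runner that polls each feed and survives individual failures. The Feed interface and the five-minute interval are assumptions:

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class FeedPoller {

        /** One polling task per feed; an exception in one cycle must not kill the schedule. */
        public interface Feed {
            String name();
            void pollAndPublish() throws Exception; // fetch CSV/XML, run calculations, write CSVs
        }

        public static void start(List<Feed> feeds) {
            ScheduledExecutorService scheduler =
                    Executors.newScheduledThreadPool(Math.max(1, feeds.size()));
            for (Feed feed : feeds) {
                scheduler.scheduleWithFixedDelay(() -> {
                    try {
                        feed.pollAndPublish();
                    } catch (Exception e) {
                        // A broken feed is logged (or reported over XMPP) and retried next cycle;
                        // catching here also stops the scheduler from cancelling the task.
                        System.err.println("Feed " + feed.name() + " failed: " + e.getMessage());
                    }
                }, 0, 5, TimeUnit.MINUTES); // illustrative 5-minute polling interval
            }
        }
    }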
There is a tool in the Java ecosystem which solves (almost) all integration problems.
It is called Apache Camel (http://camel.apache.org/). It relies on the concepts of Consumers and Producers with Enterprise Integration Patterns in between. It provides fault tolerance and configurable concurrent processing, and there is support for periodic polling. It has components for XML, CSV and XMPP. It is easy to define time-triggered background jobs and to integrate with any messaging system you like for job queuing.
If you were writing such a system from scratch, it would take weeks and weeks, and you would still probably miss some of the error conditions.
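A hedged sketch of what such a Camel route could look like; the endpoints, file paths, and processing step are assumptions:

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class PollingRoutes extends RouteBuilder {

        @Override
        public void configure() {
            // Retry a few times before giving up on a bad input, then keep the loop alive.
            errorHandler(defaultErrorHandler().maximumRedeliveries(3));

            // Poll a directory of CSV feeds every 60 seconds (paths are illustrative).
            from("file:/data/feeds/in?delay=60000")
                .unmarshal().csv()
                .process(exchange -> {
                    // Body is a list of CSV rows here; the ported calculation logic would run on it.
                    Object rows = exchange.getIn().getBody();
                    exchange.getIn().setBody(rows); // placeholder: pass through unchanged
                })
                .marshal().csv()
                .to("file:/data/feeds/out");
        }

        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new PollingRoutes());
            context.start();
            Thread.sleep(Long.MAX_VALUE); // keep the long-running process alive
        }
    }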
Have a look at the Pentaho ETL tool or Talend Open Studio.
These tools provide access to files, databases and so on. You can write your own plugin or adapter if you need to. Talend generates Java code which you can compile and run.

Hadoop, Mahout real-time processing alternative

I intended to use Hadoop as a "computation cluster" in my project. However, I then read that Hadoop is not intended for real-time systems because of the overhead of starting a job. I'm looking for a solution that can be used this way: jobs that can easily be scaled across multiple machines but that do not require much input data. What's more, I want to run machine learning jobs, e.g. using a previously trained neural network, in real time.
What libraries/technologies can I use for these purposes?
You are right, Hadoop is designed for batch-type processing.
Reading the question, I thought about the Storm framework, very recently open-sourced by Twitter, which can be considered "Hadoop for real-time processing".
Storm makes it easy to write and scale complex realtime computations on a cluster of computers, doing for realtime processing what Hadoop did for batch processing. Storm guarantees that every message will be processed. And it's fast — you can process millions of messages per second with a small cluster. Best of all, you can write Storm topologies using any programming language.
(from: InfoQ post)
However, I have not worked with it yet, so I really cannot say much about it in practice.
Twitter Engineering Blog Post: http://engineering.twitter.com/2011/08/storm-is-coming-more-details-and-plans.html
Github: https://github.com/nathanmarz/storm
Given that you want a real-time response in the "seconds" range, I recommend something like this:
Set up a batch processing model for pre-computing as much as possible. Essentially, try to do everything that does not depend on the "last second" data. Here you can use a regular Hadoop/Mahout setup and run these batches daily or (if needed) every hour or even every 15 minutes.
Use a real-time system to do the last few things that cannot be precomputed.
For this you should look at either the already-mentioned S4 or the recently announced Twitter Storm.
Sometimes it pays to go really simple and store the precomputed values all in memory and simply do the last aggregation/filter/sorting/... steps in memory. If you can do that you can really scale because each node can run completely independently of all others.
Perhaps having a NoSQL backend for your realtime component helps.
There are lots of those available: MongoDB, Redis, Riak, Cassandra, HBase, CouchDB, ...
It all depends on your real application.
Also try S4, initially released by Yahoo! and now an Apache Incubator project. It has been around for a while, and I found it to be good for some basic stuff when I did a proof of concept. I haven't used it extensively, though.
What you're trying to do may be a better fit for HPCC, as it has both a back-end data processing engine (equivalent to Hadoop) and a front-end real-time data delivery engine, eliminating the need to add complexity through third-party components. A nice thing about HPCC is that both components are programmed using the exact same language and programming paradigms.
Check them out at: http://hpccsystems.com

Choosing a distributed shared memory solution

I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on.
The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second.
The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.
I was looking into solutions like memcached; however, that particular one doesn't fulfil all of our requirements, which are listed below:
The solution must at least have Java client API which is reasonably well maintained as the rest of app is written in Java and we are seasoned Java developers;
The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster;
The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es) like in memcached clients when a node goes down;
Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster);
There is no need to permanently store all the data but there might be a need of post-processing of some of the data (e.g. serialization to the DB).
Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24/7 support contract is a must.
The whole thing has to be hosted in our data centers so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options would be available.
Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.
Thanks in advance for any ideas.
Have a look at Hazelcast. It is a pure Java, open-source (Apache license), highly scalable in-memory data grid product, and it offers 24/7 support. It addresses all of your requirements; I have tried to cover each of them below (a minimal usage sketch follows the list).
It has a native Java Client.
It is 100% dynamic. Add and remove nodes dynamically. No need to change anything.
Again everything is dynamic.
You can configure the number of backup copies of the data.
Hazelcast supports persistence.
Everything that Hazelcast offers is free (open source), and enterprise-level support is available.
Hazelcast is a single jar file and super easy to use: just add the jar to your classpath. Have a look at the screencast on the main page.
Hazelcast is strictly consistent. You can never read stale data.
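A minimal usage sketch, assuming a default multicast-discovered cluster; the map name and entries are illustrative:

    import java.util.Map;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class HazelcastExample {
        public static void main(String[] args) {
            // Each node that runs this joins the same cluster (multicast discovery by default).
            HazelcastInstance node = Hazelcast.newHazelcastInstance();

            // A distributed map, partitioned and backed up across the cluster.
            Map<String, String> prices = node.getMap("prices");
            prices.put("ACME", "42.10");
            System.out.println(prices.get("ACME"));

            node.shutdown();
        }
    }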
I suggest you use Redisson, a Redis-based in-memory data grid for Java. It implements BitSet, BloomFilter, Set, SortedSet, Map, ConcurrentMap, List, Queue, Deque, BlockingQueue, BlockingDeque, ReadWriteLock, Semaphore, Lock, AtomicLong, CountDownLatch, Publish/Subscribe, RemoteService, ExecutorService, LiveObjectService and SchedulerService on top of the Redis server. It supports master/slave, sentinel and cluster server modes, with automatic cluster/sentinel topology discovery. The library is free and open source.
It also works well in the cloud thanks to AWS ElastiCache support.
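A minimal Redisson sketch, this time using a distributed lock and counter; the Redis address and names are assumptions:

    import org.redisson.Redisson;
    import org.redisson.api.RAtomicLong;
    import org.redisson.api.RLock;
    import org.redisson.api.RedissonClient;
    import org.redisson.config.Config;

    public class RedissonExample {
        public static void main(String[] args) {
            // Single-node Redis for the sketch; sentinel/cluster modes are configured similarly.
            Config config = new Config();
            config.useSingleServer().setAddress("redis://127.0.0.1:6379");
            RedissonClient client = Redisson.create(config);

            // Distributed counter and lock shared by every node connected to the same Redis.
            RAtomicLong updates = client.getAtomicLong("updateCount");
            RLock lock = client.getLock("refreshLock");

            lock.lock();
            try {
                updates.incrementAndGet();
            } finally {
                lock.unlock();
            }

            client.shutdown();
        }
    }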
Depending on what you prefer, I would follow the others in suggesting Hazelcast if you lean towards AP in the CAP theorem, but if you need CP, I would choose Redis.
Have a look at Terracotta's JVM clustering; it's open source ;)
It has no API as such: it works at the JVM level, so when you store a value in a replicated object it is sent to all other nodes.
Even locking and all those things work transparently, without adding any new code.
You may want to checkout Java-specific solutions like Coherence: http://www.oracle.com/global/ru/products/middleware/coherence/index.html
However, I consider such solutions to be too complex and prefer to use solutions like memcached. The big disadvantage of memcached for your purpose is the apparent lack of record locking, and there is no built-in way to replicate data for failover. That is why I would look into key-value data stores; many of them would satisfy your needs completely.
Here is a list of key-value data stores that may help you with your task:
http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores
Just pick one that you feel comfortable with.
I am doing a similar project, but targeting the .NET platform instead. Apart from the already mentioned solutions, I think you should take a look at ScaleOut StateServer and Alachisoft NCache. I am afraid neither of these alternatives is cheap, but in my judgement they are a safer bet than open source for commercial use.
Both provide Java client APIs, even though I have only played around with the .NET APIs.
StateServer features self-discovery of new cache nodes, and NCache has a management console where new cache nodes can be added.
Both should be able to handle failovers seamlessly.
StateServer can have 1 or 2 passive copies of the data. NCache features more caching topologies to choose between.
If you mean write-through/write-behind to a database, that is available in both.
I have no idea how many cache servers you plan to use, but here are the full price specs:
ScaleOut StateServer
Alachisoft NCache
Both are installed and configured locally on your server and they both have GUI Management.
I am not sure exactly what "strictly consistent" involves, so I'll leave that for you to investigate.
Overall, StateServer is the best option if you want to skip configuring every little detail in the cache cluster, while NCache features very many features and caching topologies to choose from.
Depending on the behaviour of data towards the clients (if the data is read many times from the same client) it might be a good idea to mix local caching on the clients with the distributed caching in the cluster (available for both NCache and StateServer), just a thought.
The specified use case seems to fit into Netflix's Hollow. This is a read-only replicated cache with a single producer and multiple consumers.
Have you thought about using a standard messaging solution like RabbitMQ?
RabbitMQ is an open source implementation of the AMQP protocol.
Your application seems more or less like a Publish/subscribe system.
The Publisher node is the one that does the processing and puts messages (processed data) in a queue in the servers.
Subscribers can get messages from the server in various ways. AMQP decouples the producer and the consumer of messages and is very flexible in how you can combine the two sides.

Persistence strategy for low latency reads and writes

I am building an application that includes a feature to bulk tag millions of records, more or less interactively. The user interaction is very similar to Gmail, where users can tag individual emails or bulk tag large numbers of emails. I also need quick read access to these tag memberships, and the read pattern is more or less random.
Right now we're using MySQL and inserting one row for every tag-document pair. Writing millions of rows to MySQL takes a while (high I/O), even with bulk insertions and heavy optimization. We need this to be an interactive process, not a batch process.
For the data that we're storing and reading, consistency and availability of the data are not as important as performance and scalability. So in the event of system failure while the writes are occurring, I can deal with some data loss. However, the data definitely needs to be persisted to secondary storage at some point.
So, to sum up, here are the requirements:
Low latency bulk writes of potentially tens of millions of records
Data needs to be persisted in some way
Low latency random reads
Durable writes not required
Eventual consistency is okay
Here are some solutions I've looked at:
Write behind caches (Terracotta, Gigaspaces, Coherence) where records are written to memory and drained to the database asynchronously. These scare me a little because they appear to add a certain amount of complexity to the app that I'd want to avoid.
Highly scalable key-value stores, like MongoDB, HBase, Tokyo Tyrant
If you have the budget to use Coherence for this, I highly recommend doing so. There is direct support for write-behind, eventually consistent behavior in Coherence, and it survives both a database outage and Coherence cluster node outages very well (if you use >= 3 Coherence nodes on separate JVMs, preferably on separate hosts). I have implemented this for doing high-volume CRM for a Fortune 100 company's e-commerce site, and it works fantastically.
One of the best aspects of this architecture is that you write your Java application code as if none of the write-behind behavior were taking place, and then plug in the Coherence topology and configuration that makes it happen. If you need to change the behavior or topology of Coherence later, no change in your application is required. I know there are probably a handful of reasonable ways to do this, but this behavior is directly supported in Coherence rather than having to invent or hand-roll a way of doing it.
To make a really fine point - your worry about adding application complexity is a good one. With Coherence, you simply write updates to the cache (or if you're using Hibernate it can be the L2 cache provider). Depending upon your Coherence configuration and topology, you have the option to deploy your application to use write-behind, distributed, caches. So, your application is no more complex (and, frankly unaware) due to the features of the cache.
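To illustrate that last point, the application-side code can stay as simple as a cache put; the write-behind to the database lives entirely in the Coherence cache configuration. The cache name and keys below are illustrative, not from the original setup:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class TagWriter {
        public static void main(String[] args) {
            // The cache name is an assumption; write-behind to the database is defined
            // in the Coherence cache configuration, not in this code.
            NamedCache tags = CacheFactory.getCache("tag-memberships");

            tags.put("doc:42", "urgent");           // queued and drained to the database asynchronously
            System.out.println(tags.get("doc:42")); // reads are served from the cache

            CacheFactory.shutdown();
        }
    }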
Finally, I implemented the solution mentioned above from 2005-2007 when Coherence was made by Tangosol and they had the best possible support. I'm not sure how things are now under Oracle - hopefully still good.
I've worked on a large project that used asynchronous writes, although in that case it was just hand-written using background threads. You could also implement something like that by offloading the DB write process to a JMS queue.
One thing that will certainly speed up DB writes is to do them in batches. JDBC batch updates can be orders of magnitude faster than individual writes, and if you're doing them asynchronously you can just write them 500 at a time.
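A minimal sketch of such a batched write for the tag-document pairs; the table and column names are assumptions:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.List;

    public class TagBatchWriter {

        /** Inserts all (tag, documentId) pairs in batches of 500 inside one transaction. */
        public static void writeTags(String jdbcUrl, String tag, List<Long> documentIds) throws Exception {
            try (Connection db = DriverManager.getConnection(jdbcUrl)) {
                db.setAutoCommit(false);
                try (PreparedStatement insert = db.prepareStatement(
                        "INSERT INTO document_tags (tag, document_id) VALUES (?, ?)")) {
                    int pending = 0;
                    for (Long documentId : documentIds) {
                        insert.setString(1, tag);
                        insert.setLong(2, documentId);
                        insert.addBatch();
                        if (++pending % 500 == 0) {
                            insert.executeBatch();  // flush every 500 rows
                        }
                    }
                    insert.executeBatch();          // flush the remainder
                }
                db.commit();
            }
        }
    }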
Depending on how your data is organized, perhaps you would be able to use sharding.
If the read latency isn't low enough, you can also try adding caching; memcached is one popular solution.
Berkeley DB has a very high-performance disk-based hash table that supports transactions and integrates with a Java EE environment if you need that. If you're able to model the data as key/value pairs, this can be a very scalable solution.
http://www.oracle.com/technology/products/berkeley-db/je/index.html
(Note: Oracle bought Berkeley DB about 5-10 years ago; the original product has been around for 15-20 years.)
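A minimal sketch of storing a tag-document pair in Berkeley DB Java Edition; the environment directory, database name, and key layout are assumptions:

    import java.io.File;
    import java.nio.charset.StandardCharsets;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class BdbTagStore {
        public static void main(String[] args) {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            Environment env = new Environment(new File("/var/data/tags-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            Database tags = env.openDatabase(null, "tags", dbConfig);

            // Key layout "documentId:tag" is illustrative only.
            DatabaseEntry key = new DatabaseEntry("42:urgent".getBytes(StandardCharsets.UTF_8));
            DatabaseEntry value = new DatabaseEntry("1".getBytes(StandardCharsets.UTF_8));
            tags.put(null, key, value);

            tags.close();
            env.close();
        }
    }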
