Pluggable cluster management system - java

Problem:
We have a cluster of servers running Java (Tomcat). Each server exposes a lot of information via JMX. It takes a lot of time to go through JMX to make sure the cluster is in a valid state.
In some cases it's enough to check a certain status in every node. In other cases the data is spread across the nodes, so some logical analysis is required. Sometimes too much data needs to be analyzed and it cannot be done manually.
The question:
Is there a cluster management system that would provide a platform for automating such tests?
Requirements:
It has to be extensible via plugins (preferably written in Java), so that for a complex customized test all we need to do is develop a plugin containing the business logic.
It has to provide some JMX client platform
GUI/JMX interface for running tests and seeing results
Scheduling
SNMP monitoring

This does not satisfy all of your requirements, but you should take a look at JGroups
(A Toolkit for Reliable Multicast Communication)
Essentially, it is a Java library for creating clustered nodes that can be kept in sync with each other via various protocols using multicast or unicast. It includes a rich set of Building Blocks to help you build the functional stack you need. Your customization of the stack can be implemented using the Building Blocks and/or your own custom state management or cluster-invokable "business methods". By that I mean, you could define a business method called int getOpenPortCount() which you could then invoke not on a single node, but on the cluster. Each attached node would then invoke the method locally and return the result of the invocation, so the cluster invocation effectively returns an int[] whose length is the number of nodes in your JGroups cluster.
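To make this concrete, here is a minimal sketch of such a cluster-invoked business method using JGroups' RpcDispatcher (the class, cluster name and method body are made up for illustration; exact API details vary slightly between JGroups versions):

    import org.jgroups.JChannel;
    import org.jgroups.blocks.RequestOptions;
    import org.jgroups.blocks.ResponseMode;
    import org.jgroups.blocks.RpcDispatcher;
    import org.jgroups.util.RspList;

    public class ClusterProbe {

        // The "business method" every node executes locally when it is
        // invoked on the cluster. The body is a placeholder.
        public int getOpenPortCount() {
            return 0; // e.g. inspect local JMX / OS state here
        }

        public static void main(String[] args) throws Exception {
            JChannel channel = new JChannel();   // default protocol stack
            RpcDispatcher dispatcher = new RpcDispatcher(channel, new ClusterProbe());
            channel.connect("mgmt-cluster");     // hypothetical cluster name

            // Invoke the method on all members; the RspList carries one
            // response per node - effectively the int[] described above.
            RspList<Integer> responses = dispatcher.callRemoteMethods(
                    null,                        // null destination = all members
                    "getOpenPortCount",
                    null, null,                  // no arguments
                    new RequestOptions(ResponseMode.GET_ALL, 5000));
            System.out.println("Per-node results: " + responses.getResults());
            channel.close();
        }
    }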
It has to provide some JMX client platform
I am not completely sure what you mean by this, but there is no built-in JMX connector as such. However, you may not need it, since you can communicate with individual nodes directly, or with all nodes through the cluster, using the JGroups API.
GUI/JMX interface for running tests and seeing results
I don't think you'll find anything like this, but since you would essentially be using pure Java, you could use a combination of the JGroups API, the JMX API, JUnit (or TestNG) and an Eclipse-based test runner, which provides a fairly decent testing harness and visualizing UI.
Scheduling
You can schedule events to be executed across the cluster using the JGroups TimeScheduler, Quartz, or simply a ScheduledThreadPoolExecutor.
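For the ScheduledThreadPoolExecutor route, a minimal sketch (runClusterCheck is a hypothetical placeholder for your test logic):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class CheckScheduler {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
            // Run the cluster check every 5 minutes, starting 1 minute from now.
            scheduler.scheduleAtFixedRate(
                    CheckScheduler::runClusterCheck, 1, 5, TimeUnit.MINUTES);
        }

        static void runClusterCheck() {
            System.out.println("checking cluster state...");
        }
    }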
SNMP monitoring
JGroups supports JMX for monitoring, which in turn can be bridged through the JVM's SNMP agent. This JBoss-specific link will give you an idea of how this can be implemented.

How to effectively manage a bunch of jar files and their plumbing?

This is a rather high-level question so apologies if it's off-topic. I'm new to the enterprise Java world.
Suppose I have written some individual Java packages that do things like parse data feeds and store the parsed information to a queue. Another package might read from that queue and ingest those entries into a rules engine package. Tripped alerts get fed into another queue, which is polled by an alerting service (assume it's written in Python) that reads from the queue and issues emails.
As it stands I have to manually run each jar file and stick it in the background. While I could probably daemonize some or all of these services for resiliency or write some kind of service manager to do the same, this strikes me as being very amateur. Especially since I'd have to start a dozen services for this single workflow at boot.
I feel like I'm missing something, but I don't know what I don't know. Short of writing one giant, monolithic application, what should I be looking into to help me manage all these discrete components and be able to (conceptually) deliver a holistic application? I'd like to end up with some sort of hypervisor where I can click one button, it starts/stops all the above services, provides me some visibility into their status and makes sure the services are running when they should.
Is this where frameworks come into play? I see a number of them but don't know if that's just overkill, especially if I'm not actively developing a solution for that framework.
It seems you architected a system with a lot of components, and then after some time you decided to aggregate some of them because they happen to share the same programming language: Java. So, first a warning: this is not the best way to wire components together.
Also, it seems you don't know Java very well, because you mix up terms like package, jar and executable, which are distinct and unrelated concepts.
However, let's assume that the current state of the art is the best possible and is immutable. Your current requirement is building a graphical interface (I guess HTTP/HTML based) to manage all the distinct components of the system written in Java. I suggest you use a single JVM, write your components as EJBs (essentially a start() method, a stop() method and a method to query the component state that returns a custom object), and finally wire everything up with the Spring framework, which has nice annotation-driven configuration via @Bean.
Spring Boot also has an actuator package that simplifies exposing objects. You may also find it useful to register your beans as managed beans (MBeans) and use the Hawtio console to administer them (via a Jolokia agent).
I am not sure if you're actually using J2EE (i.e. Java Enterprise Edition). It is possible to write enterprise software in J2SE as well. J2SE does not have much available off the shelf for this, but there are plenty of micro-frameworks such as Ninja, and full-stack frameworks such as the Play framework, which work quite well, are much easier to program against, and perform much better than J2EE.
If you're not using J2EE, then you can go as simple as:
make one new Java project
add all the jars as dependency to that project (see the comment on Maven above by NimChimpsky)
start the classes in the jars by simply calling their constructors
This is quite a naive approach, but can serve you at this point. Of course, if you're aiming for a scalable platform, there is a lot more you need to learn first. For scalability, I suggest the Play! framework as a good start. Alternatively you can use Vert.x which has its own message queue implementation as well as support for high performance distributed caches.
The standard J2EE approach is doable (and considered "de facto" in many old-school enterprises) but has fundamental flaws, or "differences", which make for a very steep learning curve and a decidedly non-scalable application.
It seems like you're writing your application in a microservice architecture.
You need an orchestrator.
If you are running everything on a single machine, a simple orchestrator that you are probably already running is systemd. You write a systemd service description, and systemd will maintain your services according to it. You can specify the order in which services should be brought up based on dependencies between them, the restart policy if a service goes down unexpectedly, logging for stdout/stderr, and so on. Note that this is the same systemd that runs the startup sequence of most modern Linux distros.
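As an illustration, a minimal unit file for one of your components might look like this (the paths, names and the dependency on a queue service are all made up):

    # /etc/systemd/system/feed-parser.service
    [Unit]
    Description=Feed parser component
    After=network.target rabbitmq-server.service

    [Service]
    ExecStart=/usr/bin/java -jar /opt/myapp/feed-parser.jar
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

You would then enable it with systemctl enable feed-parser and start/stop/inspect it with systemctl start/stop/status like any other system service.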
If you're running multiple machines, you can still keep using a single-machine orchestrator like systemd, but usually the requirements for the orchestrator become more complex. With multiple machines, you now have to take into account things like moving services between machines, phased rollouts, etc. For these setups there is software that adapts systemd for multi-machine orchestration, like CoreOS's fleet, and there are also standalone multi-machine orchestrators like Kubernetes. Both use Docker as the application container mechanism.
None of what I've described here is Java specific, which means you can use the same orchestration for Java as you used for Python or other languages or architecture.
You have to choose. As Raffaele suggested, you can write all your requirements into one app/service. That seems a feasible mission, using Java EJBs or Spring integration with AmqpTemplate (you can write to a queue with AmqpTemplate and receive the message with a dedicated listener (example)).
Or choose a microservices architecture: write one service that pushes to the queue, another that contains the listener, and so on; a task that can be done easily with Spring Boot.
"One button to control them all" - in the case of a monolithic app - it's easy.
If you choose a microservices architecture, it depends on your needs. If it's just start/stop operations, I guess the start and stop of your Tomcat/other server will do. For other metrics there is a variety of solutions; again, it depends on your needs.

Software used in grid computing to discover clients

In grid computing, what is the de facto software practice used by a server to discover clients and get information about them? For example, the name of the client, how much memory is available, is the client currently performing a task (and how much has it completed), etc. Or is it the other way around? Do the clients occasionally report that information to the server?
Would this be done via RPC? Or a messaging protocol (AMQP, STOMP)?
I'm also wondering if the same method is used to send clients various jobs/tasks to complete.
I'm looking to find a Java friendly solution, if possible.
Thanks!
There is no actual de facto standard for server/node/client discovery in grid computing, at least none that is universally used. Many implementations use ad-hoc discovery based on UDP multicasting; others use registry-based discovery as in SOA architectures. There are plenty of solutions but no universal standard.
Some Java-friendly implementations you might want to look at: Unicore, JPPF, HTCondor, GridGain, Hadoop, Globus, Hazelcast.
ZooKeeper is something to consider, perhaps combined with JMS messaging if your resources are distributed far and wide. I use ZooKeeper with a SystemInfo service running on each node. The service registers the system's information (memory, number of CPUs, disk space and such) as a znode under /Resources in ZooKeeper.
Then whatever service needs a resource can query /Resources when looking for a resource to do something and check its specifications before allocating work.
The Java API for ZooKeeper is pretty good. I find it easy to work with.
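A minimal sketch of that registration, assuming a ZooKeeper server at localhost:2181 and an existing /Resources parent node:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class SystemInfoRegistrar {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> {});

            // In practice you'd gather real memory/CPU/disk figures here.
            String info = "cpus=" + Runtime.getRuntime().availableProcessors()
                        + ",freeMemory=" + Runtime.getRuntime().freeMemory();

            // An ephemeral node disappears automatically when this JVM's
            // session ends, so /Resources always reflects the live members.
            zk.create("/Resources/node-", info.getBytes(),
                      ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.EPHEMERAL_SEQUENTIAL);

            Thread.currentThread().join(); // keep the session (and znode) alive
        }
    }

A consumer can then call zk.getChildren("/Resources", false) and read each child's data before allocating work.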

Invoking a service on other java application running on the same machine

I created a command line interface for a small Java application I wrote for personal use.
For the moment the CLI resides in the same project as the original application, but I'm planning to extract it into its own project, effectively building two separate executable jars, enabling me to start the CLI as needed and query the other running program for information.
I'm trying to figure out the easiest and most lightweight solution to call a remote service, on the same machine.
I looked at Spring remoting, but many of the provided solutions such as HttpInvoker, Hessian/Burlap and JAX-RPC web services are based on HTTP or SOAP and therefore not suited for the job.
JMS also seems like overkill.
This leaves me with RMI, which looks rather heavyweight, and possibly JMX?
Suggestions?
JMX uses RMI underneath for remote access. JMX is meant for exposing admin APIs (monitoring/management); it is not intended as a general-purpose remoting API.
RMI with Spring remoting support is fairly lightweight from a development point of view. Even at runtime it is the option that adds the least overhead compared to the others you have listed.
Also, with Spring remoting support you can easily switch over to a different option later if required.
Take a look at this article, which compares/benchmarks the performance of the above options.
I'd say it depends very much on where the project/functionality is heading. JMX is easy enough to set up, and you can make use of existing clients/GUIs to query and set parameters; this may save you a lot of work. It may also allow your system to integrate with the monitoring tools out there.
If, on the other hand, the functionality has little to do with management/monitoring and is more along the lines of pumping data in and out, one option may be Apache MINA. I've used it in the past with great results. But you'll effectively be creating your own protocol! I doubt that MINA will end up being "less heavyweight" than simple RMI, though.
In an app for personal use, I'd go with JMX because it should be the path of least resistance. I've had great experiences with this in the past. You'll be able to get it up and running in minutes, and you won't have to think about what message format to move data in (as long as your beans are Serializable, that is).
Put an interface in front of the remote call, so that you can drop in another implementation later if JMX turns out to be inadequate.
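A sketch of that shape using a plain standard MBean (all names here are invented):

    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    // The interface the CLI codes against; the *MBean suffix is required
    // by the standard-MBean naming convention.
    public interface AppStatusMBean {
        int getQueueDepth();
        String getState();
    }

    class AppStatus implements AppStatusMBean {
        public int getQueueDepth() { return 0; }        // placeholder values
        public String getState()   { return "RUNNING"; }
    }

    class App {
        public static void main(String[] args) throws Exception {
            ManagementFactory.getPlatformMBeanServer().registerMBean(
                    new AppStatus(), new ObjectName("myapp:type=AppStatus"));
            Thread.currentThread().join(); // keep the application running
        }
    }

The CLI can then read those attributes through a JMXConnector (or you can browse them with jconsole while developing), and swapping JMX out later only means re-implementing the interface.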

Communication between local JVMs

My question: What approach could/should I take to communicate between two or more JVM instances that are running locally?
Some description of the problem:
I am developing a system for a project that requires separate JVM instances to isolate certain tasks from each other entirely.
As it runs, the 'parent' JVM will create 'child' JVMs that it will expect to execute and then return results to it (in the format of relatively simple POJO classes, or perhaps structured XML data). These results should not be transferred over the stderr/stdout/stdin pipes, as the child may already use those as part of its running.
If a child JVM does not respond with results within a certain time, the parent JVM should be able to signal to the child to cease processing, or to kill the child process. Otherwise, the child JVM should exit normally at the end of completing its task.
Research so far:
I am aware there are a number of technologies that may be of use, e.g.:
Using Java's RMI library
Using sockets to transfer objects
Using distribution libraries such as Cajo, Hessian
...but am interested in hearing what approaches others may consider before pursuing one of these options, or any others.
Thanks for any help or advice on this!
Edits:
Quantity of data to transfer: relatively small, it will mostly be just a handful of POJOs containing strings representing the result of the child's execution. If any solution would be inefficient for larger amounts of information, this is unlikely to be a problem in my system. The amount being transferred should be pretty static and so this does not have to be scalable.
Latency of transfer: not a critical concern in this case, although if any 'polling' of results is needed it should be possible to poll fairly frequently without significant overhead, so I can maintain a responsive GUI on top of this at a later time (e.g. a progress bar).
Not directly an answer to your question, but a suggestion of an alternative.
Have you considered OSGI?
It lets you run Java projects in complete isolation from each other, within the SAME JVM.
The beauty of it is that communication between projects is very easy with services (see the Core Specifications PDF, page 123). This way there is no "serialization" of any sort being done, as the data and calls all stay in the same JVM.
Furthermore, all your quality-of-service requirements (response time etc.) go away; you only have to worry about whether the service is UP or DOWN at the time you want to use it. And for that there is a really nice specification that handles it for you, called Declarative Services (see the Enterprise Spec PDF, page 141).
Sorry for the off-topic answer, but I thought some other people might consider this as an alternative.
Update
To answer your question about security, I have never considered such a scenario. I don't believe there is a way to enforce "memory" usage within OSGI.
However, there is a way of communicating across JVMs between different OSGi runtimes. It is called Remote Services (see the Enterprise Spec PDF, page 7). There is also a nice discussion there of the factors to take into consideration when doing something like that (see 13.1 Fallacies).
I think the folks at Apache Felix (an OSGi implementation) have an implementation of this with iPOJO, called Distributed Services with iPOJO (their wrapper to make using services easier). I've never used it, so ignore me if I am wrong.
I'd use KryoNet with local sockets since it specialises heavily in serialisation and is quite lightweight (you also get Remote Method Invocation! I'm using it right now), but disable the socket disconnection timeout.
RMI basically works on the principle that you have a remote type and that the remote type implements an interface. This interface is shared. On your local machine, you bind the interface via the RMI library to code 'injected' in-memory from the RMI library, the result being that you have something that satisfies the interface but is able to communicate with the remote object.
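A bare-bones version of that arrangement (names invented for illustration):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // The shared interface, on the classpath of both JVMs.
    public interface ResultService extends Remote {
        String fetchResult() throws RemoteException;
    }

    class ResultServiceImpl implements ResultService {
        public String fetchResult() { return "done"; } // placeholder result
    }

    // Parent JVM: export the implementation and register it by name.
    class Server {
        public static void main(String[] args) throws Exception {
            ResultService stub = (ResultService)
                    UnicastRemoteObject.exportObject(new ResultServiceImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("results", stub);
        }
    }

    // Child JVM: look up the stub and call it like a local object.
    class Client {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.getRegistry("localhost", 1099);
            ResultService service = (ResultService) registry.lookup("results");
            System.out.println(service.fetchResult());
        }
    }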
Akka is another option, as are other Java actor frameworks; it provides communication and other goodies derived from the actor model.
If you can't use stdin/stdout, then I'd go with sockets. You need some sort of serialization layer on top of the sockets (as you would with stdin/stdout), and RMI is a very easy-to-use and pretty effective layer of that kind.
If you used RMI and found the performance wasn't good enough, I'd switch to a more efficient serializer; there are plenty of options.
I wouldn't go anywhere near web services or XML. That seems like a complete waste of time, likely take more effort and deliver less performance than RMI.
Not many people seem to like RMI any longer.
Options:
Web Services. e.g. http://cxf.apache.org
JMX. Now, this is really a means of using RMI under the table, but it would work.
Other IPC protocols; you cited Hessian
Roll-your-own using sockets, or even shared memory (open a mapped file in the parent, open it again in the child; you'd still need something for synchronization). A sketch follows below.
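For the shared-memory flavor, the mechanics look roughly like this (path and size invented; synchronization deliberately omitted, as noted):

    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class SharedBuffer {
        public static void main(String[] args) throws Exception {
            // Parent and child both run this; writes by one process become
            // visible to the other through the shared mapping.
            FileChannel channel = FileChannel.open(
                    Paths.get("/tmp/ipc-buffer"),
                    StandardOpenOption.CREATE,
                    StandardOpenOption.READ,
                    StandardOpenOption.WRITE);
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            buffer.putInt(0, 42);         // one side writes at offset 0...
            int value = buffer.getInt(0); // ...the other reads the same offset
            System.out.println(value);
        }
    }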
Examples of note are Apache Ant (which forks all sorts of JVMs for one purpose or another), Apache Maven, and the open source variant of the Tanukisoft daemonization kit.
Personally, I'm very facile with web services, so that's the hammer with which I tend to turn things into nails. A typical JAX-WS+JAX-B or JAX-RS+JAX-B service is very little code with CXF, and manages all the data serialization and deserialization for me.
It was mentioned above, but I wanted to expand a bit on the JMX suggestion. We are actually doing pretty much exactly what you are planning to do (from what I can glean from your various comments). We landed on using JMX for a variety of reasons, a few of which I'll mention here. For one thing, JMX is all about management, so in general it is a perfect fit for what you want to do (especially if you already plan on having JMX services for other management tasks). Any effort you put into JMX interfaces will do double duty as APIs you can call using Java management tools like jvisualvm. This leads to my next point, which is the most relevant to what you want: the new Attach API in JDK 6 and above is very sweet. It enables you to dynamically discover and communicate with running JVMs. This allows, for example, your "controller" process to crash, restart and re-find all the existing worker processes; those are the makings of a very robust system. It was mentioned above that JMX is basically RMI under the hood; however, unlike using RMI directly, you don't need to manage all the connection details (e.g. dealing with unique ports, discoverability, etc.). The Attach API is a bit of a hidden gem in the JDK, as it isn't very well documented. When I was poking into this stuff initially, I didn't know the name of the API, so figuring out how the "magic" in jvisualvm and jconsole worked was very difficult. Finally, I came across an article like this one, which shows how to actually use the Attach API dynamically in your own program.
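A sketch of that discovery loop with the Attach API (requires tools.jar on JDK 6/7 or the jdk.attach module on JDK 9+; the display-name filter is a made-up convention):

    import com.sun.tools.attach.VirtualMachine;
    import com.sun.tools.attach.VirtualMachineDescriptor;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class WorkerFinder {
        public static void main(String[] args) throws Exception {
            // Enumerate every JVM running as this user on this machine.
            for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
                if (!vmd.displayName().contains("Worker")) continue;

                VirtualMachine vm = VirtualMachine.attach(vmd.id());
                try {
                    // Start (or reuse) the target's local JMX agent, then connect.
                    String address = vm.startLocalManagementAgent(); // JDK 8+
                    JMXConnector c =
                            JMXConnectorFactory.connect(new JMXServiceURL(address));
                    MBeanServerConnection mbsc = c.getMBeanServerConnection();
                    System.out.println(vmd.id() + ": " + mbsc.getMBeanCount() + " MBeans");
                    c.close();
                } finally {
                    vm.detach();
                }
            }
        }
    }

(On JDK 6/7 there is no startLocalManagementAgent(); you load the management agent with vm.loadAgent(...) and read the connector address from vm.getAgentProperties(), which is what the linked article walks through.)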
Although it's designed for potentially remote communication between JVMs, I think you'll find that Netty works extremely well between local JVM instances as well.
It's probably the most performant / robust / widely supported library of its type for Java.
A lot is discussed above. But be it sockets, RMI or JMS, there is a lot of dirty work involved.
I would rather advise Akka. It is an actor-based model in which actors communicate with each other using messages.
The beauty is that the actors can be on the same JVM or another one (very little config) and Akka takes care of the rest for you. I haven't seen a cleaner way of doing this :)
Try out JGroups if the data to be communicated is not huge.
How about http://code.google.com/p/protobuf/
It is lightweight.
As you mentioned, you can obviously send the objects over the network, but that is a costly thing, not to mention starting up a separate JVM.
Another approach, if you just want to separate your different worlds inside one JVM, is to load the classes with different classloaders. ClassA#CL1 != ClassA#CL2 if they are loaded by CL1 and CL2 as sibling classloaders.
To enable communication between ClassA#CL1 and ClassA#CL2 you could have three classloaders.
CL1 that loads process1
CL2 that loads process2 (same classes as in CL1)
CL3 that loads communication classes (POJOs and Service).
Now you let CL3 be the parent classloader of CL1 and CL2.
In classes loaded by CL3 you can have lightweight send/receive functionality (send(Pojo)/receive(Pojo)) to pass the POJOs between classes in CL1 and classes in CL2.
In CL3 you expose a static service that lets implementations from CL1 and CL2 register to send and receive the POJOs.
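A sketch of that wiring (jar paths and the class name are invented):

    import java.net.URL;
    import java.net.URLClassLoader;

    public class IsolationDemo {
        public static void main(String[] args) throws Exception {
            // CL3: the shared communication classes (POJOs + Service).
            URLClassLoader cl3 = new URLClassLoader(
                    new URL[] { new URL("file:/opt/app/comm.jar") },
                    IsolationDemo.class.getClassLoader());

            // CL1 and CL2: siblings, loading the same application classes.
            URLClassLoader cl1 = new URLClassLoader(
                    new URL[] { new URL("file:/opt/app/app.jar") }, cl3);
            URLClassLoader cl2 = new URLClassLoader(
                    new URL[] { new URL("file:/opt/app/app.jar") }, cl3);

            Class<?> a1 = cl1.loadClass("com.example.ClassA");
            Class<?> a2 = cl2.loadClass("com.example.ClassA");
            System.out.println(a1 == a2); // false: same name, different class

            // Types defined in comm.jar resolve to the same Class in both
            // worlds, so POJOs can be passed through the shared Service.
        }
    }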

Java HA framework

I am writing a small proxy application which should be redundant, e.g. primary proxy will be running on one server and the redundant one will run on a separate server. Is there a simple high-availability framework which I can use to implement this redundancy? For example, this HA framework would send pings between instances and raise some sort of exception or notification on the other instance when the first one goes down.
Building such a system has been my routine job in recent years. I have found JGroups a very usable tool for receiving and handling this kind of group-membership event. That is the route if you want to build your own HA infrastructure. I don't know your setup, but maybe in your case a simple reverse proxy such as HAProxy is enough.
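For the build-it-yourself route, a minimal sketch of receiving those grouping events with the JGroups 3/4 API (the cluster name and the failover reaction are placeholders):

    import org.jgroups.JChannel;
    import org.jgroups.ReceiverAdapter;
    import org.jgroups.View;

    public class FailoverWatcher {
        public static void main(String[] args) throws Exception {
            JChannel channel = new JChannel();
            channel.setReceiver(new ReceiverAdapter() {
                @Override
                public void viewAccepted(View view) {
                    // Fires on every membership change: a member joining,
                    // leaving, or being removed after a suspected crash.
                    System.out.println("Members now: " + view.getMembers());
                    // e.g. if the primary is gone, promote this instance.
                }
            });
            channel.connect("proxy-ha");
            Thread.currentThread().join(); // keep watching
        }
    }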
If you want HA without hassle, just use a load balancer with HA capability, e.g. UltraMonkey, or LVS with keepalived.
In an HA configuration you'd typically want to use a virtual IP, so even if you had this ping/notify functionality as a framework, you'd still have work to do (start responding to requests to the virtual IP once the other instance has failed). So unless you are looking for a learning occasion, I'd advise using middleware instead of coding this yourself with frameworks.
There are a number of health checks that you can configure for these middlewares. A simple health check might, for example, fire a GET request at your app periodically and look for a specific string (e.g. "XXX running.") in the response to make sure your app is running fine.
You don't provide many details about the work your application does, so depending on how stateful it is, whether it can tolerate minor data loss, whether it is time-critical, and whether you value developer time over machine time, you have a varying spectrum of solutions.
There are some good suggestions above. I'd add: take a look at JMS and persistent messaging. Usually these make recovery quite trivial, but at the cost of a latency hit (unless you buy a commercial product and learn it well, or pay the vendor to tune your application). With JMS queues you can implement active-active processing and save yourself the headache of failure detection.
Another direction to look at is distributed state management/clustering frameworks like GigaSpaces, Coherence, GemStone, Infinispan, GridGain and Terracotta. These can replicate your data and guarantee varying quality-of-service levels. Most of them come with some type of failure detection and distributed management mechanism.
Hadoop is a good place to start.
