Using a PID controller to manage resources in programs - java

I was wondering if there is a precedent for using PID controller type mechanisms to manage computation resources (see http://en.wikipedia.org/wiki/PID_controller).
By computational resources I mean:
Spare Threads, Spare Processes, Queue Lengths, etc.
For example, in apache.conf you can specify the number of spare servers, minimum servers, etc.
The question I have is: how do you control the spawning of new servers, or the contraction of your resource pool?
The same could apply to spawning nodes on, say, an Amazon grid if your load increases beyond some level.
As a response to this question I am interested in:
If there is a yes, no, or maybe answer to this question
If there are accessible examples of where this is used in the open source world
If there are libraries that implement PID control in java, python, etc. for this purpose.
Thanks.

According to this research article, the thread pool in the .NET Framework seems to have one. I also found articles on load balancing Apache web servers using autonomous control, controlling the memory footprint in DB2, etc.
The code here is a Java implementation used in an open source project.
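For concreteness, here is a minimal sketch of what such a PID loop might look like in Java. The class name, the gains, and the mapping from controller output to pool-size changes are all illustrative, not taken from any of the projects mentioned above:

    /** Minimal PID controller sketch for resource-pool sizing (illustrative). */
    public class PoolSizePid {
        private final double kp, ki, kd; // proportional, integral, derivative gains
        private double integral;
        private double previousError;

        public PoolSizePid(double kp, double ki, double kd) {
            this.kp = kp;
            this.ki = ki;
            this.kd = kd;
        }

        /**
         * @param setPoint  desired value, e.g. target number of spare worker threads
         * @param measured  currently observed value
         * @param dtSeconds time elapsed since the previous update
         * @return a signed control signal, e.g. threads to add (+) or remove (-)
         */
        public double update(double setPoint, double measured, double dtSeconds) {
            double error = setPoint - measured;
            integral += error * dtSeconds;
            double derivative = (error - previousError) / dtSeconds;
            previousError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    }

A monitoring thread would call update() periodically, round the output to a pool-size delta, and clamp the result between the configured minimum and maximum spare servers.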


2 programs that send messages to each other in Java

I have the following situation:
I have 2 JVM processes (really 2 Java processes running separately, not 2 threads) running on a local machine. Let's call them ProcessA and ProcessB.
I want them to communicate (exchange data) with one another (e.g. ProcessA sends a message to ProcessB to do something).
Now, I work around this issue by writing a temporary file, and these processes periodically scan this file to get messages. I think this solution is not so good.
What would be a better alternative to achieve what I want?
Multiple options for IPC:
Socket-Based (Bare-Bones) Networking
not necessarily hard, but:
might be verbose for not much,
might offer more surface for bugs, as you write more code.
you could rely on existing frameworks, like Netty
RMI
Technically, that's also network communication, but that's transparent for you.
Fully-fledged Message Passing Architectures
usually built on either RMI or network communications as well, but with support for complicated conversations and workflows
might be too heavy-weight for something simple
frameworks like ActiveMQ or JBoss Messaging
Java Management Extensions (JMX)
more meant for JVM management and monitoring, but could help to implement what you want if you mostly want to have one process query another for data, or send it some request for an action, provided they aren't too complex
also works over RMI (amongst other possible protocols)
not so simple to wrap your head around at first, but actually rather simple to use
File-sharing / File-locking
that's what you're doing right now
it's doable, but comes with a lot of problems to handle
Signals
You can simply send signals to your other process
However, it's fairly limited and requires you to implement a translation layer (it is doable, though, but more of a crazy idea to toy with than anything serious).
Without more details, a bare-bones network-based IPC approach seems the best, as it's the:
most extensible (in terms of adding new features and workflows to your program)
most lightweight (in terms of memory footprint for your app)
most simple (in terms of design)
most educative (in terms of learning how to implement IPC). (As you mentioned "socket is hard" in a comment: it really is not, and it should be something you work on.)
That being said, based on your example (simply requesting the other process to do an action), JMX could also be good enough for you.
I've added a library on github called Mappedbus (http://github.com/caplogic/mappedbus) which enables two (or many more) Java processes/JVMs to communicate by exchanging messages. The library uses a memory-mapped file and makes use of fetch-and-add and volatile reads/writes to synchronize the different readers and writers. I've measured the throughput between two processes using this library at 40 million messages/s, with an average latency of 25 ns for reading/writing a single message.
What you are looking for is inter-process communication. Java provides a simple IPC framework in the form of Java RMI API. There are several other mechanisms for inter-process communication such as pipes, sockets, message queues (these are all concepts, obviously, so there are frameworks that implement these).
I think in your case Java RMI or a simple custom socket implementation should suffice.
Sockets with DataInput(Output)Stream, to send Java objects back and forth. This is easier than using a disk file, and much easier than Netty.
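A rough sketch of that approach (the port number and message strings are arbitrary, and each class would live in its own file and run in its own process):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // ProcessB (Receiver.java): listens and handles one message at a time.
    class Receiver {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9000)) { // arbitrary port
                while (true) {
                    try (Socket socket = server.accept();
                         DataInputStream in = new DataInputStream(socket.getInputStream());
                         DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
                        String message = in.readUTF();   // blocks until a message arrives
                        System.out.println("Received: " + message);
                        out.writeUTF("ACK: " + message); // simple acknowledgement
                    }
                }
            }
        }
    }

    // ProcessA (Sender.java): connects, sends a request, reads the reply.
    class Sender {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 9000);
                 DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                 DataInputStream in = new DataInputStream(socket.getInputStream())) {
                out.writeUTF("doSomething");
                System.out.println(in.readUTF());
            }
        }
    }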
I tend to use JGroups to form local clusters between processes. It works for nodes (aka processes) on the same machine, within the same JVM, or even across different servers.
Once you understand the basics, it is easy to work with, and having the option to actually run two or more processes in the same JVM makes it easy to test those processes.
The overhead and latency are minimal if both are on the same machine (usually only a TCP round trip of >100 ns per action).
A socket may be a better choice, I think.
Back in 2004 I implemented code which did the job with sockets. Since then, I have searched many times for a better solution, because the socket approach triggers firewalls and my clients worry. There is no better solution to this day. The client must serialize your data and send it, and the server must receive and deserialize it.
It is easy.

How to build a distributed Java application?

First of all, I have a conceptual question: does the word "distributed" only mean that the application is run on multiple machines? Or are there other ways an application can be considered distributed (for example, if there are many independent modules interacting together but on the same machine, is this distributed?).
Second, I want to build a system which executes four types of tasks. There will be multiple customers, and each one will have many tasks of each type to be run periodically. For example: customer1 will have task_type1 today, task_type2 after two days, and so on; there might be a customer2 who has task_type1 to be executed at the same time as customer1's task_type1, i.e. there is a need for concurrency. Configuration for executing the tasks will be stored in the DB, and the outcomes of these tasks are going to be stored in the DB as well. The customers will use the system from a web browser (HTML pages) to interact with it (basically, configure tasks and see the outcomes).
I thought about using a REST web service (using JAX-RS) which the HTML pages would communicate with, and on the backend using threads for concurrent execution.
Questions:
1. This sounds simple, but am I going in the right direction? Or should I be using other technologies or concepts like Java Beans, for example?
2. If my approach is fine, do I need to use a scripting language like JSP, or can I submit HTML forms directly to the REST URLs and get the result (using JSON, for example)?
3. If I want to make the application distributed, is it possible with my idea? If not, what would I need to use?
Sorry for having many questions , but I am really confused about this.
I just want to add one point to the already posted answers. Please take my remarks with a grain of salt, since all the web applications I have ever built have run on one server only (aside from applications deployed to Heroku, which may "distribute" your application for you).
If you feel that you may need to distribute your application for scalability, the first thing you should think about is not web services and multithreading and message queues and Enterprise JavaBeans and...
The first thing to think about is your application domain itself and what the application will be doing. Where will the CPU-intensive parts be? What dependencies are there between those parts? Do the parts of the system naturally break down into parallel processes? If not, can you redesign the system to make it so? IMPORTANT: what data needs to be shared between threads/processes (whether they are running on the same or different machines)?
The ideal situation is where each parallel thread/process/server can get its own chunk of data and work on it without any need for sharing. Even better is if certain parts of the system can be made stateless -- stateless code is infinitely parallelizable (easily and naturally). The more frequent and fine-grained data sharing between parallel processes is, the less scalable the application will be. In extreme cases, you may not even get any performance increase from distributing the application. (You can see this with multithreaded code -- if your threads constantly contend for the same lock(s), your program may even be slower with multiple threads+CPUs than with one thread+CPU.)
The conceptual breakdown of the work to be done is more important than what tools or techniques you actually use to distribute the application. If your conceptual breakdown is good, it will be much easier to distribute the application later if you start with just one server.
The term "distributed application" means that parts of the application system will execute on different computational nodes (which may be different CPU/cores on different machines or among multiple CPU/cores on the same machine).
There are many different technological solutions to the question of how the system could be constructed. Since you were asking about Java technologies, you could, for example, build the web application using Google's Web Toolkit, which will give you a rich browser based client user experience. For the server deployed parts of your system, you could start out using simple servlets running in a servlet container such as Tomcat. Your servlets will be called from the browser using HTTP based remote procedure calls.
Later, if you run into scalability problems, you can start to migrate parts of the business logic to EJB3 components, which can ultimately be deployed on many computational nodes within the context of an application server, like GlassFish, for example. I don't think you need to tackle this problem until you run into it. It is hard to say whether you will without knowing more about the nature of the tasks the customers will be performing.
To answer your first question: you could get the form to submit directly to the REST URLs. Obviously it depends exactly on your requirements.
As @AlexD mentioned in the comments above, you don't always need to distribute an application. However, if you wish to do so, you should probably consider looking at JMS, which is a messaging API that can allow you to run almost any number of worker machines, reading messages from the message queue and processing them.
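As a rough sketch of that worker-queue pattern using the JMS 1.1 API against ActiveMQ (the broker URL and queue name are made up, and in practice the producer and consumer would run in separate processes):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TaskQueueSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("tasks"); // hypothetical queue name

                // Web tier: enqueue a task description.
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("customer1:task_type1"));

                // Worker (normally a separate process): take the next task off the queue.
                MessageConsumer consumer = session.createConsumer(queue);
                TextMessage task = (TextMessage) consumer.receive(1000); // null on timeout
                if (task != null) {
                    System.out.println("Worker got: " + task.getText());
                }
            } finally {
                connection.close();
            }
        }
    }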
If you wanted to produce a dynamically distributed application, to run on, say, multiple low-resourced VMs (such as Amazon EC2 Micro instances) or physical hardware that can be added and removed at will to cope with demand, then you might wish to consider integrating it with Project Shoal, a Java framework that allows clustering of application nodes and having them appear/disappear at any time. Project Shoal uses JXTA and JGroups as the underlying communication protocols.
Another route could be to distribute your application using EJBs running on an application server.

Moving Java class bytecode from JVM to JVM

So I have a server JVM and a client JVM. The client communicates with the server by sending serialized Java objects over TCP. Now, normally the server would have the classes of the objects it was receiving on its classpath, in order to deserialize the objects properly.
But what I'm looking for is some way to avoid that; i.e., have the client "somehow" send the class bytecode over the wire, on demand. This would of course require recursing down the class tree (in case any members of the original class were themselves objects of other classes that the server didn't know about).
So I was wondering about any technologies out there that do this sort of thing.
Thanks.
RMI includes the notion of a "class server." Sounds like you're pretty much reinventing that, so consider looking into using all or part of RMI. Here's a tutorial.
RMI has the ability to dynamically download entire class file definitions over the wire on demand.
Even if you don't use (or want to use) RMI, the technologies underlying the classloading may be of interest, and they're standard Java.
You are asking about Code Mobility. Also the area of grid computing is somewhat relevant.
Take a look at Mobility-RPC, it's a library which does exactly what you ask at the same level of granularity (class-level).
Security is something to bear in mind. But I'd also remember that SQL sent to databases, bash commands executed over SSH, Business rules engines, Adobe Flash, Java Applets, RMI as described above, ActiveX, JavaScript, Hadoop/grid computing frameworks - all of these are examples of remote code execution in widespread use. Like everything, turning the security dial to the max is going to limit your options. But all of the above are used to good effect when properly firewalled or sandboxed.
In this instance, it sounds like you want something to eliminate a minor hassle, and you're not (say) designing a full-blown distributed application. So based on what you've said, despite myself being somewhat of a code mobility proponent, I'd say code mobility is probably overkill in this case. (But useful in others.)
Regarding grid computing, take a look at GridGain, and Hadoop. GridGain is a pure (CPU-centric) grid computing framework, whereas Hadoop is more a data mining/data warehouse platform with its own replicated distributed file system (HDFS).
Both GridGain and Hadoop transmit user-defined Java code implementing tasks/jobs to remote worker nodes. Last time I checked, they did this by transferring user-supplied jar files to the relevant nodes. I think the GridGain ClassLoader is more sophisticated than Hadoop's, however (but less sophisticated than Mobility-RPC's). Hadoop basically starts a new JVM for every job, which is not especially efficient (but not exactly the bottleneck given the IO load!).
Mobility-RPC is somewhat different because it doesn't expect the remote machine to be a worker node at all, it could be any application running the library. So it's more like RPC or ad-hoc task/object transfer.
This sounds like a very bad idea. Basically, it would mean you allow your client to send code to the server which is directly executed inside the current process. Something like this is generally considered a serious vulnerability, namely arbitrary code execution, which is one of the worst vulnerabilities you could ever have.
Building a system design based on that is, well, not so smart.
Create a classloader that loads classes from a stream. See the JarFileClassLoader example for details.
This, of course, will become hugely problematic, particularly if any of the classes use reflection and don't directly name an implementation in the bytecode, in addition to potential security issues; you'll need to look into secure classloaders.
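A minimal sketch of such a classloader, assuming the raw class bytes have already arrived over the wire (no caching, no package handling, no security checks):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    /** Defines classes from raw bytecode received over a stream. Sketch only. */
    public class StreamClassLoader extends ClassLoader {

        public StreamClassLoader(ClassLoader parent) {
            super(parent);
        }

        /** Reads one class's bytecode from the stream and defines it. */
        public Class<?> loadFrom(String className, InputStream in) throws IOException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
            byte[] bytecode = buffer.toByteArray();
            return defineClass(className, bytecode, 0, bytecode.length);
        }
    }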
If plain RMI does not work for your requirements, take a look at mobile agent frameworks in Java (e.g., Aglets).

Communication between local JVMs

My question: What approach could/should I take to communicate between two or more JVM instances that are running locally?
Some description of the problem:
I am developing a system for a project that requires separate JVM instances to isolate certain tasks from each other entirely.
As it runs, the 'parent' JVM will create 'child' JVMs that it will expect to execute and then return results to it (in the form of relatively simple POJO classes, or perhaps structured XML data). These results should not be transferred using the SysErr/SysOut/SysIn pipes, as the child may already use these as part of its running.
If a child JVM does not respond with results within a certain time, the parent JVM should be able to signal the child to cease processing, or to kill the child process. Otherwise, the child JVM should exit normally at the end of completing its task.
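For illustration, the parent-side lifecycle described above might be sketched roughly like this, assuming Java 8's Process.waitFor(timeout) overload (the jar and main-class names are made up):

    import java.util.concurrent.TimeUnit;

    // Parent side: spawn a child JVM, wait for it, kill it on timeout.
    public class ChildLauncher {
        public static void main(String[] args) throws Exception {
            Process child = new ProcessBuilder("java", "-cp", "child.jar", "ChildMain")
                    .inheritIO() // leave the child's standard streams alone, per the requirement
                    .start();
            boolean finished = child.waitFor(30, TimeUnit.SECONDS); // Java 8+
            if (!finished) {
                child.destroyForcibly(); // the child did not respond in time
            }
        }
    }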
Research so far:
I am aware there are a number of technologies that may be of use e.g....
Using Java's RMI library
Using sockets to transfer objects
Using distribution libraries such as Cajo, Hessian
...but am interested in hearing what approaches others may consider before pursuing one of these options, or any others.
Thanks for any help or advice on this!
Edits:
Quantity of data to transfer- relatively small, it will mostly be just a handful of POJOs containing strings that will represent the result of the child executing. If any solution would be inefficient on larger amounts of information, this is unlikely to be a problem in my system. The amount being transferred should be pretty static and so this does not have to be scalable.
Latency of transfer- not a critical concern in this case, although if any 'polling' of results is needed this should be able to be fairly frequent without significant overheads, so I can maintain a responsive GUI on top of this at a later time (e.g. progress bar)
Not directly an answer to your question, but a suggestion of an alternative.
Have you considered OSGi?
It lets you run Java projects in complete isolation from each other, within the SAME JVM.
The beauty of it is that communication between projects is very easy with services (see Core Specifications PDF, page 123). This way there is no "serialization" of any sort being done, as the data and calls are all in the same JVM.
Furthermore, all your requirements of quality of service (response time, etc.) go away; you only have to worry about whether the service is UP or DOWN at the time you want to use it. And for that you have a really nice specification that does it for you, called Declarative Services (see Enterprise Spec PDF, page 141).
Sorry for the off-topic answer, but I thought some other people might consider this as an alternative.
Update
To answer your question about security: I have never considered such a scenario. I don't believe there is a way to enforce "memory" usage within OSGi.
However, there is a way of communicating outside the JVM between different OSGi runtimes. It is called Remote Services (see Enterprise Spec PDF, page 7). They also have a nice discussion there of the factors to take into consideration when doing something like that (see 13.1 Fallacies).
The folks at Apache Felix (an implementation of OSGi) have, I think, an implementation of this with iPOJO, called Distributed Services with iPOJO (their wrapper to make using services easier). I've never used this, so ignore me if I am wrong.
I'd use KryoNet with local sockets since it specialises heavily in serialisation and is quite lightweight (you also get Remote Method Invocation! I'm using it right now), but disable the socket disconnection timeout.
RMI basically works on the principle that you have a remote type and that the remote type implements an interface. This interface is shared. On your local machine, you bind the interface via the RMI library to code 'injected' in-memory from the RMI library, the result being that you have something that satisfies the interface but is able to communicate with the remote object.
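A minimal sketch of that arrangement (the interface, the registry name, and the port are illustrative):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // The shared interface: both JVMs compile against this.
    interface Worker extends Remote {
        String process(String input) throws RemoteException;
    }

    // Server JVM: export an implementation and register it.
    class WorkerServer implements Worker {
        public String process(String input) { return "processed: " + input; }

        public static void main(String[] args) throws Exception {
            Worker stub = (Worker) UnicastRemoteObject.exportObject(new WorkerServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099); // default RMI port
            registry.rebind("worker", stub);
        }
    }

    // Client JVM: look up the stub and call it like a local object.
    class WorkerClient {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.getRegistry("localhost", 1099);
            Worker worker = (Worker) registry.lookup("worker");
            System.out.println(worker.process("hello"));
        }
    }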
Akka is another option, as are other Java actor frameworks; it provides communication and other goodies derived from the actor model.
If you can't use stdin/stdout, then I'd go with sockets. You need some sort of serialization layer on top of the sockets (as you would with stdin/stdout), and RMI is a very easy-to-use and pretty effective such layer.
If you used RMI and found the performance wasn't good enough, I'd switch to some more efficient serializer; there are plenty of options.
I wouldn't go anywhere near web services or XML. That seems like a complete waste of time, likely to take more effort and deliver less performance than RMI.
Not many people seem to like RMI any longer.
Options:
Web Services. e.g. http://cxf.apache.org
JMX. Now, this is really a means of using RMI under the table, but it would work.
Other IPC protocols; you cited Hessian
Roll-your-own using sockets, or even shared memory. (Open a mapped file in the parent, open it again in the child. You'd still need something for synchronization.)
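For the shared-memory option in the last bullet, a bare-bones sketch using a memory-mapped file (the file path and offset are arbitrary, and real code still needs a synchronization protocol on top):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Parent and child both map the same file; writes by one are visible to the other.
    public class SharedMemorySketch {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc.dat", "rw");
                 FileChannel channel = file.getChannel()) {
                MappedByteBuffer shared = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                shared.putInt(0, 42);                 // one process writes...
                System.out.println(shared.getInt(0)); // ...the other reads the same offset
            }
        }
    }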
Examples of note are Apache Ant (which forks all sorts of JVMs for one purpose or another), Apache Maven, and the open source variant of the Tanuki Software daemonization kit.
Personally, I'm very facile with web services, so that's the hammer with which I tend to turn things into nails. A typical JAX-WS+JAX-B or JAX-RS+JAX-B service is very little code with CXF, and manages all the data serialization and deserialization for me.
It was mentioned above, but I wanted to expand a bit on the JMX suggestion. We actually are doing pretty much exactly what you are planning to do (from what I can glean from your various comments). We landed on using JMX for a variety of reasons, a few of which I'll mention here.

For one thing, JMX is all about management, so in general it is a perfect fit for what you want to do (especially if you already plan on having JMX services for other management tasks). Any effort you put into JMX interfaces will do double duty as APIs you can call using Java management tools like jvisualvm.

This leads to my next point, which is the most relevant to what you want: the new Attach API in JDK 6 and above is very sweet. It enables you to dynamically discover and communicate with running JVMs. This allows, for example, your "controller" process to crash, restart, and re-find all the existing worker processes. This is the makings of a very robust system. It was mentioned above that JMX is basically RMI under the hood; however, unlike using RMI directly, you don't need to manage all the connection details (e.g. dealing with unique ports, discoverability, etc.).

The Attach API is a bit of a hidden gem in the JDK, as it isn't very well documented. When I was poking into this stuff initially, I didn't know the name of the API, so figuring out how the "magic" in jvisualvm and jconsole worked was very difficult. Finally, I came across an article like this one, which shows how to actually use the Attach API dynamically in your own program.
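As a rough illustration of that discovery step (the Attach API lives in com.sun.tools.attach; on JDK 6-8 it requires tools.jar on the classpath, and the agent-loading shown here is the classic pre-JDK 9 approach):

    import java.io.File;
    import com.sun.tools.attach.VirtualMachine;
    import com.sun.tools.attach.VirtualMachineDescriptor;

    // Enumerate local JVMs and obtain a local JMX connector address for each.
    public class AttachSketch {
        public static void main(String[] args) throws Exception {
            for (VirtualMachineDescriptor descriptor : VirtualMachine.list()) {
                VirtualMachine vm = VirtualMachine.attach(descriptor.id());
                String address = vm.getAgentProperties()
                        .getProperty("com.sun.management.jmxremote.localConnectorAddress");
                if (address == null) {
                    // Start the management agent in the target VM (JDK 6/7/8 style).
                    String agent = vm.getSystemProperties().getProperty("java.home")
                            + File.separator + "lib" + File.separator + "management-agent.jar";
                    vm.loadAgent(agent);
                    address = vm.getAgentProperties()
                            .getProperty("com.sun.management.jmxremote.localConnectorAddress");
                }
                System.out.println(descriptor.displayName() + " -> " + address);
                vm.detach();
            }
        }
    }

The returned address can then be passed to JMXConnectorFactory.connect(new JMXServiceURL(address)) to obtain an ordinary JMX connection to the discovered process.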
Although it's designed for potentially remote communication between JVMs, I think you'll find that Netty works extremely well between local JVM instances as well.
It's probably the most performant / robust / widely supported library of its type for Java.
A lot has been discussed above. But be it sockets, RMI, or JMS, there is a lot of dirty work involved.
I would rather advise Akka. It is an actor-based model in which actors communicate with each other using messages.
The beauty is that the actors can be on the same JVM or another one (very little config), and Akka takes care of the rest for you. I haven't seen a cleaner way of doing this :)
Try out JGroups if the data to be communicated is not huge.
How about http://code.google.com/p/protobuf/
It is lightweight.
As you mentioned, you can obviously send the objects over the network, but that is a costly thing, not to mention the cost of starting up a separate JVM.
Another approach, if you just want to separate your different worlds inside one JVM, is to load the classes with different classloaders. ClassA#CL1 != ClassA#CL2 if they are loaded by CL1 and CL2 as sibling classloaders.
To enable communication between ClassA#CL1 and ClassA#CL2 you could have three classloaders.
CL1 that loads process1
CL2 that loads process2 (same classes as in CL1)
CL3 that loads communication classes (POJOs and Service).
Now you let CL3 be the parent classloader of CL1 and CL2.
In classes loaded by CL3 you can have lightweight send/receive functionality (send(Pojo)/receive(Pojo)) to pass the POJOs between classes in CL1 and classes in CL2.
In CL3 you expose a static service that lets implementations from CL1 and CL2 register to send and receive the POJOs.
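A sketch of that wiring with URLClassLoaders (the jar paths and class name are made up):

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class IsolatedLoaders {
        public static void main(String[] args) throws Exception {
            URL commJar = new File("comm.jar").toURI().toURL();       // POJOs + service
            URL processJar = new File("process.jar").toURI().toURL(); // the isolated code

            // CL3: shared parent, sees only the communication classes.
            URLClassLoader cl3 = new URLClassLoader(new URL[] { commJar }, null);

            // CL1 and CL2: siblings loading the same process classes independently.
            URLClassLoader cl1 = new URLClassLoader(new URL[] { processJar }, cl3);
            URLClassLoader cl2 = new URLClassLoader(new URL[] { processJar }, cl3);

            Class<?> a1 = cl1.loadClass("ClassA");
            Class<?> a2 = cl2.loadClass("ClassA");
            System.out.println(a1 == a2); // false: same bytecode, distinct classes

            // Instances of types from comm.jar can be passed freely between the two,
            // because both siblings resolve those types to the single copy in CL3.
        }
    }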

Monitoring a Java web application - is JMX the right choice?

We have a Java web application and we'd like to set up some basic monitoring with a view to expanding this monitoring in future. Our plan is as follows:
(1) Collect generic information (e.g. memory and threads) about the virtual machine of the web container that application is running in.
(2) Monitor the "state" of the application. This is rather vague but at the least we'd like to see if the web application is still alive and can respond to requests.
(3) In the future we'd like to collect more information that is specific to our application. Again this is rather vague but you can assume that we might want to make certain statistics collected internally by the application available to the support staff.
Usually the web application will be deployed in a Tomcat 5.5 or 6 environment. A quick bit of searching on the web shows that JMX can be enabled for Tomcat and that JConsole can then be used to connect to the server. This gives us lots of basic information that solves point (1). Also, some information is available in the MBeans section for "Catalina" and drilling down on this I can at least, for example, see how many requests a particular servlet has received. This is not quite what we want for point (2) but at least gives us some information. There seems to be quite a lot of information there but it's rather difficult to interpret using JConsole. Perhaps there is a better tool for interpreting the MBeans exposed by Tomcat.
For point (3), it seems, at first glance, that we could write our own MBeans and then make these available to something like JConsole. Personally, this would involve me learning about JMX, which I'm quite happy to do, but I have a concern. Having looked around, I notice that most of the textbooks on the subject haven't been updated for several years and the open source tools seem to be languishing without recent updates. So my main question is a simple one. What are your opinions on JMX? Does it have a future, or has it been superseded by something else? Given we already have our web application but we're starting from scratch for the management console, should we choose JMX or is there something more appropriate with a better future ahead of it?
I ask this question with no personal axe to grind, I'm simply interested to hear your opinions and experiences. I'm sure there's no one correct answer but I think an informed discussion would be useful.
Thanks in advance,
Adam.
JMX is certainly a good solution here. I wouldn't worry about it languishing. Most enterprises I've worked for recently use (or have plans to use) JMX, and I'd have to hear a pretty convincing argument before choosing something else in the Java world. It's easy to write clients (monitoring solutions) for it and you can return complex data very easily indeed. Most 3rd party components support monitoring via JMX as well.
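To give a sense of how little code a custom MBean for point (3) involves, here is a minimal standard-MBean sketch (the names and the ObjectName domain are illustrative; the interface and the class would go in separate files):

    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // AppStatsMBean.java -- a standard MBean interface: its name must be
    // the implementation class name plus the "MBean" suffix.
    public interface AppStatsMBean {
        long getRequestCount();
    }

    // AppStats.java
    public class AppStats implements AppStatsMBean {
        private final AtomicLong requestCount = new AtomicLong();

        public long getRequestCount() { return requestCount.get(); }
        public void requestHandled() { requestCount.incrementAndGet(); } // called by the web app

        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new AppStats(),
                    new ObjectName("com.example:type=AppStats")); // arbitrary domain/name
            // The attribute now appears in JConsole/jvisualvm under the MBeans tab.
        }
    }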
Note that you may want to consider integration with any existing management solutions (e.g. Nagios, BMC Patrol, HP OpenView, etc.) as well. They may not be so Java-aware, but rather prefer tests like simple HTTP connectivity for testing if a web site is up (easy using Nagios), or integration using SNMP (which OpenView talks natively).
If applicable to your situation (Java 6 update 10 JDK or later, plus on the same machine) then consider using jvisualvm instead as it can dig even deeper than JConsole.
You may find that the easiest way to do what you need is a plugin for jvisualvm that knows your application.
