Understanding ESB [closed] - java

Though I understand what system integration is, I am a bit new to all the newest approaches. I am fairly familiar with web services and JMS, but I feel utterly confused by the concept of an ESB.
I have done some research but I still don't really get it. I work much better by example rather than theory.
So can someone please give a simple example demonstrating why one would use an Enterprise Service Bus versus just a queue, a web service, the file system, or something else?
I would like the example to highlight the capabilities of an ESB that could not be achieved by any other conventional integration method, or at least not with the same efficiency.
All replies are greatly appreciated.
Thanks,
Bob

This is going to sound a bit harsh, but basically if you needed an ESB, you'd know you needed an ESB.
For a majority of use cases, the ESB is a solution looking for a problem. It's a stack of software over-engineered for most scenarios. Most folks simply do not do enough variety of processing to warrant it. The "E" for "Enterprise" is notable here.
In a simple case:
tail -F server.log | grep SEVERE >> severe.log
THAT is a trivial instance of an ESB scenario.
"But that's just a UNIX command pipeline!"
Yes, exactly.
The "ESB" part is the "|" and the ">>"
The ESB is the run time within which you can link together modules, monitor traffic, design all sorts of whacky scenarios like fan outs and joins, etc. etc.
ESBs are notable for having a bunch of connectors to read a bunch of sources and write a bunch of destinations. They're notable for weaving more complicated graphs and workflows for processing using rather coarse logic blocks.
But what most folks typically do is:
input -> DO_STUFF -> output
With an ESB they get:
ESB[input -> DO_STUFF -> output]
In the wild, most pipelines simply are not that complicated. They tend to have one-off logic that's not reusable, and folks tend to glob it together into a single logic module.
Well, heck, you can do that with a Perl script.
Long pipelines in ESBs tend to be inefficient, with lots of marshaling of data into and out of generic modules (since you rarely use a binary payload).
So, say, CSV comes in and gets converted to XML; one step processes it and outputs XML as input to another step, which unmarshals it, works on it, and converts it back into XML for Yet Another Step. Rinse and repeat until the CPU hits 400% (multi-core FTW).
Then someone comes up with "Hey, if I drag and drop these modules together into a single routine, we skip all this XML junk!", and you end up with "input -> DO_STUFF -> output".
For large systems, with lots of web services that need to do casual, ad hoc integration, they can be fine. If you're in a business that does that a lot, they can work really well. When you have dozens of pipelines, they can help with the operational aspect of managing them.
But for complicated pipelines, if you have a lot of steps, maybe it's not such a good idea beyond prototyping, especially if there's any real volume involved. Mind, you may not have any choice; it depends on the systems you're integrating.
If not, if you have a single interface you need to stand up -- then just do it. Do it in Perl, in Java, in C#, in whatever. Don't run out and spool up some odd 100MBs of infrastructure and complexity that you now get to learn, master, and maintain.
So, again, if you needed an ESB, you'd know it. Really. You'd have whatever system you've built out of disparate stuff, you'd be fighting it, talking to colleagues about what a pain all this stuff is, and you'd stumble across some link to some vendor's site, read a white paper, and go "THAT'S IT!". But if you haven't done that yet, then you're not missing anything.

ESB is for the cases where you do have that web service and queue and file system all in the same system and need to integrate them.
An ESB product usually solves the following:
Security
Message routing
Orchestration (which is advanced message routing)
Protocol transformation
Message transformation
Monitoring
Eventing
You can do all of these with other tools as well, and if you just need one or two of these capabilities you can probably do without an ESB (as it introduces additional complexity), but when you need several of them, an integrated solution in the form of an ESB can be a better choice.
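To make that concrete, here is a hedged sketch using Apache Camel (a common open-source integration framework, often described as the routing core of an ESB; not the only choice). It combines several of the capabilities above: protocol bridging (a file drop to JMS queues), message transformation, and content-based routing. The endpoint names and the CSV-to-XML step are made up for illustration, and the jms: endpoints assume a JMS component configured with a connection factory.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class EsbStyleRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:data/inbox")                          // protocol: file drop
            .process(exchange -> {                       // message transformation
                String csv = exchange.getIn().getBody(String.class);
                exchange.getIn().setBody("<order>" + csv + "</order>");
            })
            .choice()                                    // content-based routing
                .when(body().contains("PRIORITY"))
                    .to("jms:queue:urgentOrders")
                .otherwise()
                    .to("jms:queue:normalOrders");
    }

    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new EsbStyleRoute());
        context.start();
        Thread.sleep(60_000);   // let the route run; a real deployment would
        context.stop();         // live inside a managed runtime instead
    }
}

The dozen lines are not the point; the point is that the runtime around them provides the connectors, monitoring, and redelivery that the list above describes.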

As @WillHartung concluded, ESBs tend to be properly used in large, complex situations. And that's why it's named Enterprise Service Bus.
Now, to actually answer your question, ESBs typically:
Communicate over several protocols (e.g. HTTP, Message Queue, etc.), for both input and output
Establish a common message format, and often translate from other formats into the 'canonical' format
Provide endpoint transparency (e.g. you send a message to the bus and get an answer back, but you don't explicitly know which service, also connected to the bus, handled your request)
Provide monitoring and management capabilities
Facilitate versioning of services and messages
Enforce security, when needed.
So, as you can see, it's for when doing a lot of point-to-point communication ("just do it") would be a huge, unmanageable pile of spaghetti. Indeed, most places that I've seen SOA implemented, it's replacing that huge pile of spaghetti that already exists.

An ESB is an enterprise service bus, an infrastructure backplane, if you like, for a service-oriented architecture. Imagine the chaos of hundreds of services happily reusing each other. How do you manage such an environment? How do you provide flexible, decoupled routing between your services? How do you avoid point-to-point spaghetti architecture? How do you manage transactions and security across a hybrid technology landscape? How do you track where messages are in complex flows across multiple systems?
You use an ESB.
ESBs typically allow you to design flows across multiple systems in an XML configuration language, offering you a host of EIS adaptors, transformation and mediation plugins etc. Some will offer an IDE to help you design flows. Some ESBs are very expensive, some are open source.
If you want to get a feel for ESB, check out either Mule or WSO2, both good open source products, or even Spring Integration which is a non-clustered solution but excellent for decoupling Java from the underlying external interface points.

Related

2 programs that send messages to each other in Java [duplicate]

I have the following situation:
I have 2 JVM processes (really 2 Java processes running separately, not 2 threads) running on a local machine. Let's call them ProcessA and ProcessB.
I want them to communicate (exchange data) with one another (e.g. ProcessA sends a message to ProcessB to do something).
Now, I work around this issue by writing a temporary file, and these processes periodically scan it for messages. I think this solution is not so good.
What would be a better alternative to achieve what I want?
Multiple options for IPC:
Socket-Based (Bare-Bones) Networking
not necessarily hard, but:
might be verbose for not much,
might offer more surface for bugs, as you write more code.
you could rely on existing frameworks, like Netty
RMI
Technically, that's also network communication, but that's transparent for you.
Fully-fledged Message Passing Architectures
usually built on either RMI or network communications as well, but with support for complicated conversations and workflows
might be too heavy-weight for something simple
frameworks like ActiveMQ or JBoss Messaging
Java Management Extensions (JMX)
more meant for JVM management and monitoring, but could help to implement what you want if you mostly want to have one process query another for data, or send it some request for an action, if they aren't too complex
also works over RMI (amongst other possible protocols)
not so simple to wrap your head around at first, but actually rather simple to use
File-sharing / File-locking
that's what you're doing right now
it's doable, but comes with a lot of problems to handle
Signals
You can simply send signals to your other process.
However, it's fairly limited and requires you to implement a translation layer (it is doable, though more of a crazy idea to toy with than anything serious).
Without more details, a bare-bone network-based IPC approach seems the best, as it's the:
most extensible (in terms of adding new features and workflows to your app)
most lightweight (in terms of memory footprint for your app)
most simple (in terms of design)
most educative (in terms of learning how to implement IPC). (As you mentioned "socket is hard" in a comment: it really is not, and it should be something you work on.)
That being said, based on your example (simply requesting the other process to do an action), JMX could also be good enough for you.
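To make the bare-bones option concrete, here is a minimal sketch of two local JVMs talking over a socket with a line-based protocol. The port number and message strings are arbitrary choices for illustration.

// File 1: the receiving process (start this one first).
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ProcessB {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090)) {      // arbitrary port
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();               // e.g. "DO_SOMETHING"
                    out.println("ACK:" + request);                // reply to the sender
                }
            }
        }
    }
}

// File 2: the sending process.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ProcessA {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9090);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("DO_SOMETHING");                          // send the message
            System.out.println(in.readLine());                    // prints ACK:DO_SOMETHING
        }
    }
}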
I've added a library on GitHub called Mappedbus (http://github.com/caplogic/mappedbus) which enables two (or many more) Java processes/JVMs to communicate by exchanging messages. The library uses a memory-mapped file and makes use of fetch-and-add and volatile reads/writes to synchronize the different readers and writers. I've measured the throughput between two processes using this library at 40 million messages/s, with an average latency of 25 ns for reading/writing a single message.
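For context, the mechanism underneath a library like that is a plain memory-mapped file. This stripped-down sketch (not Mappedbus's actual API, and omitting the fetch-and-add/volatile synchronization the library adds on top) shows one process writing through a mapping that another process can read by mapping the same file:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedWriter {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc.dat", "rw");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer buf =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, 8);
            buf.putInt(0, 42);   // visible to any other process mapping this file
        }
    }
}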
What you are looking for is inter-process communication. Java provides a simple IPC framework in the form of Java RMI API. There are several other mechanisms for inter-process communication such as pipes, sockets, message queues (these are all concepts, obviously, so there are frameworks that implement these).
I think in your case Java RMI or a simple custom socket implementation should suffice.
Sockets with DataInput(Output)Stream, to send Java objects back and forth. This is easier than using a disk file, and much easier than Netty.
I tend to use JGroups to form local clusters between processes. It works for nodes (aka processes) on the same machine, within the same JVM, or even across different servers.
Once you understand the basics, it is easy to work with, and having the option to actually run two or more processes in the same JVM makes it easy to test those processes.
The overhead and latency are minimal if both are on the same machine (usually only a TCP round trip of about >100 ns per action).
A socket may be a better choice, I think.
Back in 2004 I implemented code that did this job with sockets. Since then, I have searched many times for a better solution, because the socket approach trips firewalls and my clients worry. There is no better solution so far. The client must serialize your data and send it, and the server must receive and deserialize it.
It is easy.

How to effectively manage a bunch of jar files and their plumbing?

This is a rather high-level question so apologies if it's off-topic. I'm new to the enterprise Java world.
Suppose I have written some individual Java packages that do things like parse data feeds and store the parsed information to a queue. Another package might read from that queue and ingest those entries into a rules engine package. Tripped alerts get fed into another queue, which is polled by an alerting service (assume it's written in Python) that reads from the queue and issues emails.
As it stands I have to manually run each jar file and stick it in the background. While I could probably daemonize some or all of these services for resiliency or write some kind of service manager to do the same, this strikes me as being very amateur. Especially since I'd have to start a dozen services for this single workflow at boot.
I feel like I'm missing something, but I don't know what I don't know. Short of writing one giant, monolithic application, what should I be looking into to help me manage all these discrete components and be able to (conceptually) deliver a holistic application? I'd like to end up with some sort of hypervisor where I can click one button, it starts/stops all the above services, provides me some visibility into their status and makes sure the services are running when they should.
Is this where frameworks come into play? I see a number of them but don't know if that's just overkill, especially if I'm not actively developing a solution for that framework.
It seems you architected a system with a lot of components, and then after some time you decided to aggregate some of them because they happen to share the same programming language: Java. So, first a warning: this is not the best way to wire components together.
Also, it seems you don't know Java very well, because you mix up terms like package, jar, and executable, which are distinct and unrelated concepts.
However, let's assume that the current state of the art is the best possible and is immutable. Your current requirement is building a graphical interface (I guess HTTP/HTML based) to manage all the distinct components of the system written in Java. I suggest you use a single JVM, writing your components as EJBs (essentially a start(), stop() and a method to query the component state that returns a custom object), and finally wire everything up with the Spring framework, which has nice annotation-driven configuration via @Bean.
Spring Boot also has an actuator package that simplifies exposing objects. You may also find it useful to register your beans as managed beans, and to use the Hawtio framework to administer them (via a Jolokia agent).
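A minimal sketch of that suggestion, assuming Spring: give each component a common lifecycle interface and register it as a bean, so a single context start/stop drives everything. The ManagedComponent interface and FeedParser class are hypothetical names, not part of any framework.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

interface ManagedComponent {
    void start();
    void stop();
    String status();    // what a one-button dashboard would query
}

class FeedParser implements ManagedComponent {
    private volatile boolean running;
    public void start()    { running = true;   /* spin up the worker thread */ }
    public void stop()     { running = false;  /* signal the thread to exit */ }
    public String status() { return running ? "RUNNING" : "STOPPED"; }
}

@Configuration
class Components {
    // Spring calls start() when the context boots and stop() on shutdown.
    @Bean(initMethod = "start", destroyMethod = "stop")
    ManagedComponent feedParser() { return new FeedParser(); }
}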
I am not sure you're actually using J2EE (i.e. Java Enterprise Edition). It is possible to write enterprise software in J2SE as well. J2SE does not have much available off the shelf for this, but in contrast there are plenty of micro-frameworks such as Ninja, or full-stack frameworks such as the Play framework, which work quite well, are much easier to program with, and perform much better than J2EE.
If you're not using J2EE, then you can go as simple as:
make one new Java project
add all the jars as dependency to that project (see the comment on Maven above by NimChimpsky)
start the classes in the jars by simply calling their constructors
This is quite a naive approach, but can serve you at this point. Of course, if you're aiming for a scalable platform, there is a lot more you need to learn first. For scalability, I suggest the Play! framework as a good start. Alternatively you can use Vert.x which has its own message queue implementation as well as support for high performance distributed caches.
The standard J2EE approach is doable (and considered "de facto" in many old-school enterprises) but has fundamental flaws, or "differences", which make for a very steep learning curve and a very much non-scalable application.
It seems like you're writing your application in a microservice architecture.
You need an orchestrator.
If you are running everything on a single machine, a simple orchestrator that you are probably already running is systemd. You write a systemd service description, and systemd will maintain your services according to it. You can specify the order in which services should be brought up based on dependencies between them, the restart policy if a service goes down unexpectedly, logging for stdout/stderr, etc. Note that this is the same systemd that runs the startup sequence of most modern Linux distros.
If you're running multiple machines, you can still keep using a single-machine orchestrator like systemd, but usually the requirements for the orchestrator will become more complex as well. With multiple machines, you now have to take into account things like moving services between machines, phased roll-outs, etc. For these setups, there is software that adapts systemd for multi-machine orchestration, like CoreOS's fleetd, and there are also standalone multi-machine orchestrators like Kubernetes. Both use Docker as the application container mechanism.
None of what I've described here is Java specific, which means you can use the same orchestration for Java as you used for Python or other languages or architecture.
You have to choose. As Raffaele suggested, you can write all your requirements into one app/service. That seems like a feasible mission, using Java EJBs or Spring Integration's AmqpTemplate (you can write to a queue with AmqpTemplate and receive the message with a dedicated listener).
Or choose a microservices architecture: write one service that pushes to the queue, another that contains the listener, and so on, a task that can be done easily with Spring Boot.
"One button to control them all" - in the case of a monolithic app - it's easy.
If you choose a microservices architecture, it depends on what your needs are. If it's just the "start"/"stop" operations, I guess starting and stopping your Tomcat (or other server) will do. For other metrics, there is a variety of solutions; again, it depends on your needs.

IVR Development in java

I'm going to develop an on-line IVR application using Java (without PBX).
In the software requirements there are some mathematical calculations and database communication which I prefer to implement on Java side.
As you know, different technologies are ready to integrate with Java, such as JTAPI, Zanzibar OpenIVR, Moho, VoiceXML, CCXML, Jive, Prophecy, Voicent, Voxeo etc.
Now the question is: What is the best solution? Which one is easiest to get going with? Which one has the best efficiency? Do you recommend open-source frameworks? Is there any Windows API for handling IVR systems?
If you're going to do VoiceXML with Java, you should take a look at Rivr, an open-source VoiceXML dialogue engine.
Rivr lets you code your callflow naturally in the Java language. Thus you can reuse all the available Java tools (e.g. debugger, unit testing framework, coverage test tool) to develop the callflow. You also benefit from all your IDE features (refactorings, source navigation, version control, etc.).
The API is very simple. You can code a complete callflow with a single method. No need to define "states" or to manipulate templates or XML files.
Integration with server-side logic is trivial since you are only coding for the server side.
There is far too little information here to provide a direct answer, but I'll try to give you some basics.
The standards for IVR application development are VoiceXML for dialog (caller interaction) and CCXML for call control; the latter is not as commonly available. There are also numerous proprietary solutions. Your choice of an open standard versus a proprietary solution should be mostly about vendor/solution lock-in. Even with the open standards you'll likely use custom enhancements and have some amount of lock-in, but portability will be easier. You can code directly to the telephony boards (challenging, and usually poorly documented if you are new to telephony) or work with solutions that provide end-to-end capability. I find very few people porting IVR applications, so I would focus on supportability of your application, features, and ease of use in your decision.
Platform choices run the spectrum. You have premise (onsite) and hosted solutions. You mostly have high end enterprise solutions and low end solutions. There are very few middle ground solutions. Features (telephony and integration capabilities) vary dramatically.
From a telephony perspective, take nothing for granted. In particular, transfers. There are many ways to transfer a call. How it is done will be constrained by your connection. An analog line to the CO (phone company) can have multiple mechanisms and the one in place will typically be dictated to you. Not all telephony platforms will support what you need. Hangup detection, at least on analog lines, can also catch the novice out. Hosted solutions will typically allow you to avoid most of these problems. VoIP solutions are even more complicated due to compatibility between devices (yes there are standards, lots of them, with lots of optional parts and then there are custom flavors).
For windows specifically, you can use Lync, but it is complicated...though many of the solutions you will explore will be complicated.
In short, there is no best solution. Your knowledge of the technologies, requirements and budget are going to drive the decision. I've generally worked with enterprise IVRs in on premise and hosted configurations that are typically fronting large call centers. I have come in contact with many of the open source solutions. Anything on premise is likely to be complicated because of the system and telephony configuration. Hosted solutions have typically done most of that for you.
I know that those are "de jure standards". But you should also take Asterisk (with AGI/AMI) into consideration for your project. If you decide to try Asterisk and Java, take a look at astivetoolkit.org; it may be very helpful.
Ricky from Twilio here.
For me, picking the best tool for a particular problem is one of my favorite tasks as a developer. One technique for figuring this out is blocking off a day and spending an hour or two with each potential option. A few questions I'll typically explore:
Which tool is the easiest to get started with?
Which tool has the best documentation?
Which tool has an engaged community that I can learn from?
I'm sure there are a ton more questions you'd want to explore depending on your scenario (Does it fit within my budget? Can I use it with the technologies I already know and love?).
If you're looking at building an IVR, we have an API that could help. We just dropped some new tutorials, including a non-trivial, production-ready IVR application using Java.

Alternatives to RMI for IPC?

I have 2 processes that need to communicate on the same PC and across different PCs. In the local case the communication is between different processes, e.g. Process A and Process B.
In the remote case it will be among 2 instances of Process A running in different PCs.
I will create them from scratch and I am wondering what the best approach is. I am aware of RMI and sockets, but I was wondering, for my case as described, and also taking into account that the messages exchanged are small and the API really small, whether there is a standard approach/library for this.
Any suggestions are highly welcome.
Update after @EJP's comments:
My interest is: 1) to implement the communication in a lightweight manner, since the exposed API will be really small and so will the messages; 2) to use and learn a new popular framework if possible (I already know RMI and sockets).
If you are just looking for messaging frameworks, there's a bunch available out there, such as
RabbitMQ - http://www.rabbitmq.com/
ZeroC Ice - http://www.zeroc.com/ice.html
AMQP - http://www.amqp.org
OpenSplice DDS - http://www.prismtech.com/opensplice
But when you use a 3rd-party framework, you are adding an additional dependency to your application. If it is something very simple, like your case, perhaps writing a TCP client/server would be sufficient for a client/server paradigm; if you are looking for a publisher/subscriber paradigm, you can look into using UDP multicast. You just need your data class to implement Serializable if you want to be able to marshal and unmarshal your data to a buffer and send it over the network using the typical Java socket API.
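As a sketch of that last point (class names are made up; it assumes a server on port 9090 reading with an ObjectInputStream):

import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

class Message implements Serializable {
    private static final long serialVersionUID = 1L;
    final String command;
    Message(String command) { this.command = command; }
}

public class SerializingClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9090);
             ObjectOutputStream out =
                     new ObjectOutputStream(socket.getOutputStream())) {
            out.writeObject(new Message("DO_SOMETHING"));   // marshal and send
        }
        // The server mirrors this with an ObjectInputStream and a cast:
        // Message m = (Message) in.readObject();
    }
}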
I strongly suggest having a look at Thrift. From all the technologies I've used (web services, RMI, XML-RPC, Corba comes to mind) it is currently my favourite. Essentially the steps involved are:
Download the Thrift compiler.
Add the Maven dependency (make sure it is the same version as the compiler!) I currently use 0.8.0.
Write your Thrift IDL (incredibly easy, google for it as there are plenty of examples).
Compile it for Java.
Write your server/client.
In general, you can whip together a server and a client in about 30 lines of code. In terms of speed and reliability it has never failed me before.
You might have a look at Versile Java (full disclosure: I am one of the developers), it satisfies at least your criteria #1. From the API documentation, here are some examples of writing remote-enabled objects, running a service, and connecting to a service.
If you want to learn something new, then I'd look at OpenSplice. The reason is pretty simple: among the technologies suggested above, it is the only one that provides you with data-centric abstractions.
The cool thing about OpenSplice is that it gives you the abstraction of a Global Data Space, yet the implementation of this global data space is fully distributed and very high performance.
Take a look at some of the slides available at http://www.slideshare.net/angelo.corsaro and I am sure you'll fall in love with the technology.
Finally OpenSplice is Open Source.
Happy Hacking.
A+
JMX is a good alternative.
Examples:
http://www.javalobby.org/java/forums/t49130.html
http://alvinalexander.com/blog/post/java/source-code-java-jmx-hello-world-application
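For a feel of the approach, here is a minimal standard-MBean sketch (the Greeter bean is made up; to call it from a second JVM you would connect a JMXConnector, typically after starting this process with the com.sun.management.jmxremote.* flags):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// JMX requires the XxxMBean naming convention for standard MBeans.
interface GreeterMBean {
    String greet(String name);
}

class Greeter implements GreeterMBean {
    @Override
    public String greet(String name) { return "Hello, " + name; }
}

public class JmxServerDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new Greeter(), new ObjectName("demo:type=Greeter"));
        Thread.sleep(Long.MAX_VALUE);   // keep the JVM alive for clients
    }
}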

Communication between local JVMs

My question: What approach could/should I take to communicate between two or more JVM instances that are running locally?
Some description of the problem:
I am developing a system for a project that requires separate JVM instances to isolate certain tasks from each other entirely.
As it runs, the 'parent' JVM will create 'child' JVMs that it expects to execute and then return results to it (in the form of relatively simple POJO classes, or perhaps structured XML data). These results should not be transferred using the SysErr/SysOut/SysIn pipes, as the child may already use those as part of its running.
If a child JVM does not respond with results within a certain time, the parent JVM should be able to signal to the child to cease processing, or to kill the child process. Otherwise, the child JVM should exit normally at the end of completing its task.
Research so far:
I am aware there are a number of technologies that may be of use e.g....
Using Java's RMI library
Using sockets to transfer objects
Using distribution libraries such as Cajo, Hessian
...but am interested in hearing what approaches others may consider before pursuing one of these options, or any others.
Thanks for any help or advice on this!
Edits:
Quantity of data to transfer: relatively small; it will mostly be just a handful of POJOs containing strings that represent the result of the child's execution. If any solution would be inefficient on larger amounts of information, this is unlikely to be a problem in my system. The amount being transferred should be pretty static, so this does not have to be scalable.
Latency of transfer: not a critical concern in this case, although if any 'polling' of results is needed, it should be possible to do so fairly frequently without significant overhead, so I can maintain a responsive GUI on top of this at a later time (e.g. a progress bar).
Not directly an answer to your question, but a suggestion of an alternative.
Have you considered OSGI?
It lets you run Java projects in complete isolation from each other, within the SAME JVM.
The beauty of it is that communication between projects is very easy with services (see Core Specifications PDF page 123). This way there is no "serialization" of any sort being done, as the data and calls are all in the same JVM.
Furthermore, all your quality-of-service requirements (response time, etc.) go away; you only have to worry about whether the service is UP or DOWN at the time you want to use it. And for that you have a really nice specification that handles it for you, called Declarative Services (see Enterprise Spec PDF page 141).
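As a rough sketch of what a Declarative Services component looks like (the ResultSink service interface is hypothetical; the annotations are the standard ones from org.osgi.service.component.annotations):

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

interface ResultSink {
    void publish(String result);
}

@Component   // registered automatically when the bundle starts
public class Task {
    @Reference               // injected by the DS runtime while a provider is UP
    private ResultSink sink;

    @Activate
    void activate() {
        sink.publish("task started");   // a plain in-JVM call, no serialization
    }
}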
Sorry for the off-topic answer, but I thought some other people might consider this as an alternative.
Update
To answer your question about security, I have never considered such a scenario. I don't believe there is a way to enforce "memory" usage within OSGI.
However, there is a way of communicating between different OSGI runtimes outside of the JVM. It is called Remote Services (see Enterprise Spec PDF, page 7). They also have a nice discussion there of the factors to take into consideration when doing something like that (see 13.1 Fallacies).
Folks at Apache Felix (an implementation of OSGI) have, I think, an implementation of this with iPOJO (their wrapper that makes using services easier), called Distributed Services with iPOJO. I've never used this, so ignore me if I am wrong.
I'd use KryoNet with local sockets since it specialises heavily in serialisation and is quite lightweight (you also get Remote Method Invocation! I'm using it right now), but disable the socket disconnection timeout.
RMI basically works on the principle that you have a remote type and that the remote type implements an interface. This interface is shared. On your local machine, you bind the interface via the RMI library to code 'injected' in-memory from the RMI library, the result being that you have something that satisfies the interface but is able to communicate with the remote object.
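A compact sketch of that arrangement, with both sides collapsed into one file for brevity (the TaskService interface and the names are made up):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The shared interface: both JVMs compile against this.
interface TaskService extends Remote {
    String run(String taskName) throws RemoteException;
}

public class RmiDemo {
    static class TaskServiceImpl implements TaskService {
        public String run(String taskName) { return "done:" + taskName; }
    }

    public static void main(String[] args) throws Exception {
        // Server side: export the implementation and bind the stub by name.
        TaskService stub = (TaskService)
                UnicastRemoteObject.exportObject(new TaskServiceImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("tasks", stub);

        // Client side (normally a separate JVM): look up the stub and call it.
        TaskService proxy = (TaskService)
                LocateRegistry.getRegistry("localhost", 1099).lookup("tasks");
        System.out.println(proxy.run("report"));   // prints done:report
    }
}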
Akka is another option, as are other Java actor frameworks; it provides communication and other goodies derived from the actor model.
If you can't use stdin/stdout, then I'd go with sockets. You need some sort of serialization layer on top of the sockets (as you would with stdin/stdout), and RMI is a very easy-to-use and pretty effective such layer.
If you used RMI and found the performance wasn't good enough, I'd switch to some more efficient serializer; there are plenty of options.
I wouldn't go anywhere near web services or XML. That seems like a complete waste of time, likely take more effort and deliver less performance than RMI.
Not many people seem to like RMI any longer.
Options:
Web Services. e.g. http://cxf.apache.org
JMX. Now, this is really a means of using RMI under the table, but it would work.
Other IPC protocols; you cited Hessian
Roll-your-own using sockets, or even shared memory. (Open a mapped file in the parent, open it again in the child. You'd still need something for synchronization.)
Examples of note are Apache Ant (which forks all sorts of JVMs for one purpose or another), Apache Maven, and the open-source variant of the Tanukisoft daemonization kit.
Personally, I'm very facile with web services, so that's the hammer with which I tend to turn things into nails. A typical JAX-WS+JAX-B or JAX-RS+JAX-B service is very little code with CXF, and it manages all the data serialization and deserialization for me.
It was mentioned above, but I want to expand a bit on the JMX suggestion. We are actually doing pretty much exactly what you are planning to do (from what I can glean from your various comments). We landed on using JMX for a variety of reasons, a few of which I'll mention here. For one thing, JMX is all about management, so in general it is a perfect fit for what you want to do (especially if you already plan on having JMX services for other management tasks). Any effort you put into JMX interfaces will do double duty as APIs you can call using Java management tools like jvisualvm. This leads to my next point, which is the most relevant to what you want: the Attach API in JDK 6 and above is very sweet. It enables you to dynamically discover and communicate with running JVMs. This allows, for example, your "controller" process to crash and restart and re-find all the existing worker processes. This is the makings of a very robust system. It was mentioned above that JMX is basically RMI under the hood; however, unlike using RMI directly, you don't need to manage all the connection details (e.g. dealing with unique ports, discoverability, etc.). The Attach API is a bit of a hidden gem in the JDK, as it isn't very well documented. When I was poking into this stuff initially, I didn't know the name of the API, so figuring out how the "magic" in jvisualvm and jconsole worked was very difficult. Finally, I came across an article like this one, which shows how to actually use the Attach API dynamically in your own program.
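A hedged sketch of that discovery flow (JDK 8+ for startLocalManagementAgent(); on older JDKs the Attach API lives in tools.jar):

import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AttachDemo {
    public static void main(String[] args) throws Exception {
        // Discover every JVM running on this machine.
        for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
            System.out.println(vmd.id() + "  " + vmd.displayName());
        }
        // Attach to one by pid (passed as an argument) and talk JMX to it.
        VirtualMachine vm = VirtualMachine.attach(args[0]);
        try {
            String url = vm.startLocalManagementAgent();   // JDK 8+
            JMXConnector jmx =
                    JMXConnectorFactory.connect(new JMXServiceURL(url));
            System.out.println(jmx.getMBeanServerConnection().getMBeanCount()
                    + " MBeans visible");
            jmx.close();
        } finally {
            vm.detach();
        }
    }
}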
Although it's designed for potentially remote communication between JVMs, I think you'll find that Netty works extremely well between local JVM instances as well.
It's probably the most performant / robust / widely supported library of its type for Java.
A lot has been discussed above. But be it sockets, RMI, or JMS, there is a lot of dirty work involved.
I would rather advise Akka. It is an actor-based model in which actors communicate with each other using messages.
The beauty is that the actors can be on the same JVM or another (with very little config), and Akka takes care of the rest for you. I haven't seen a cleaner way of doing this :)
Try out JGroups if the data to be communicated is not huge.
How about http://code.google.com/p/protobuf/
It is lightweight.
As you mentioned, you can obviously send the objects over the network, but that is costly, not to mention the cost of starting up a separate JVM.
Another approach if you just want to separate your different worlds inside one JVM is to load the classes with different classloaders. ClassA#CL1!=ClassA#CL2 if they are loaded by CL1 and CL2 as sibling classloaders.
To enable communications between classA#CL1 and classA#CL2 you could have three classloaders.
CL1 that loads process1
CL2 that loads process2 (same classes as in CL1)
CL3 that loads communication classes (POJOs and Service).
Now you let CL3 be the parent classloader of CL1 and CL2.
In classes loaded by CL3 you can have lightweight send/receive functionality (send(Pojo)/receive(Pojo)) for passing the POJOs between classes in CL1 and classes in CL2.
In CL3 you expose a static service that enables implementations from CL1 and CL2 to register to send and receive the POJOs.
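A bare-bones sketch of that layout (the directory paths and the Worker class are hypothetical; the shared service and POJOs would live only on CL3's path):

import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderIsolationDemo {
    public static void main(String[] args) throws Exception {
        URLClassLoader cl3 = new URLClassLoader(
                new URL[] { new URL("file:shared/") },           // POJOs + service
                ClassLoaderIsolationDemo.class.getClassLoader());
        URLClassLoader cl1 = new URLClassLoader(
                new URL[] { new URL("file:process1/") }, cl3);   // process 1 classes
        URLClassLoader cl2 = new URLClassLoader(
                new URL[] { new URL("file:process2/") }, cl3);   // process 2 classes

        Class<?> a1 = cl1.loadClass("com.example.Worker");
        Class<?> a2 = cl2.loadClass("com.example.Worker");
        // Same class file, two sibling loaders: two distinct runtime classes.
        System.out.println(a1 == a2);   // false
        // Classes from CL3 (the shared service and POJOs) resolve identically
        // in both children, so CL1 and CL2 code can exchange those objects.
    }
}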
