Ok, I just need to understand this thing. I have a JVM installed on my machine. If I develop 2 programs (2 jars with their own main classes) and run them, will they both be running on the same JVM?
If they both run on the same JVM instance, how can I make them communicate?
The system I am currently working on has many components installed on one machine that communicate with each other using RMI. Isn't it impractical for these components to use RMI when they are all running on one machine?
If I develop 2 programs (2 jars with their own main classes) and run them, will they both be running on the same JVM?
Typically no; each will run in its own JVM process (java), unless you start one from the other in a separate thread or something.
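For instance, a minimal sketch of the "start one from the other" case (OtherApp is a hypothetical main class, assumed not to declare checked exceptions):

    // Launcher.java - a sketch: running a second program's main class
    // inside the current JVM, on its own thread. OtherApp is hypothetical.
    public class Launcher {
        public static void main(String[] args) {
            // Runs OtherApp inside this JVM's process, on a separate thread...
            new Thread(() -> OtherApp.main(new String[0]), "other-app").start();

            // ...while this program keeps running on the main thread.
            System.out.println("Both programs now share one JVM process.");
        }
    }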
The system I am currently working on has many components installed on one machine that communicate with each other using RMI. Isn't it impractical for these components to use RMI when they are all running on one machine?
It is sort of impractical, or at least inefficient, with all the (de)serialization that takes place. (RMI uses object serialization to marshal and unmarshal parameters.)
OSGi (Dynamic Module System for Java)
If you are running both locally, and just need the components to find each other, I suggest you make them into OSGi bundles. It has been engineered for this sort of use.
(As an example, this is how Eclipse IDE components and plugins interact with each other, while being loosely coupled, and without the unnecessary (de)serialization)
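Roughly, the idea looks like the sketch below (GreetingService, its implementation and the two activators are hypothetical names; each activator would live in its own bundle):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;

    // Bundle A: publishes a service in the OSGi service registry when it starts.
    public class ProviderActivator implements BundleActivator {
        public void start(BundleContext context) {
            context.registerService(GreetingService.class, new GreetingServiceImpl(), null);
        }
        public void stop(BundleContext context) {
            // The registration is cleaned up automatically when the bundle stops.
        }
    }

    // Bundle B: looks the service up and calls it - a plain in-process call,
    // no serialization involved.
    public class ConsumerActivator implements BundleActivator {
        public void start(BundleContext context) {
            ServiceReference<GreetingService> ref = context.getServiceReference(GreetingService.class);
            if (ref != null) {
                GreetingService greeter = context.getService(ref);
                System.out.println(greeter.greet("other component"));
            }
        }
        public void stop(BundleContext context) {
        }
    }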
The two applications will run in different virtual machines if they are started separately (even if they are running on the same physical machine). OSGi has already been mentioned as a way of bringing them together, but if you want to maintain them as separate applications it might be worth considering web services as a communication method. The benefit of this over RMI is interoperability with other applications and flexibility for future development.
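For example, with the JAX-WS API that used to ship with the JDK, exposing a component as a web service can look something like the sketch below (StatusService and the URL are placeholder names, two separate source files):

    // StatusService.java - a hypothetical service exposed over SOAP/HTTP.
    import javax.jws.WebService;

    @WebService
    public class StatusService {
        public String ping() {
            return "OK";
        }
    }

    // StatusServicePublisher.java - publishes the service on a local port.
    import javax.xml.ws.Endpoint;

    public class StatusServicePublisher {
        public static void main(String[] args) {
            Endpoint.publish("http://localhost:8081/status", new StatusService());
            System.out.println("WSDL available at http://localhost:8081/status?wsdl");
        }
    }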
You could also use the Observer pattern if both applications run on the same machine - just a thought.
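For what it's worth, a minimal sketch of that idea (all names hypothetical); note it only helps if both programs actually share one JVM process:

    import java.util.ArrayList;
    import java.util.List;

    interface StatusListener {
        void onStatus(String status);
    }

    class StatusPublisher {
        private final List<StatusListener> listeners = new ArrayList<>();

        void addListener(StatusListener l) { listeners.add(l); }

        void publish(String status) {
            for (StatusListener l : listeners) {
                l.onStatus(status); // direct in-process call, no RMI, no serialization
            }
        }
    }

    public class ObserverDemo {
        public static void main(String[] args) {
            StatusPublisher publisher = new StatusPublisher();
            publisher.addListener(s -> System.out.println("component B received: " + s));
            publisher.publish("hello from component A");
        }
    }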
I am about to create a Java server/client setup that uses bidirectional communication via TCP sockets. For every requesting client a new thread is created. At this moment it is running on a virtual machine from a hosting service. Now I'm thinking about using Docker. But does it really make sense to switch to Docker in this case? Is Docker really meant to run permanent applications like a Java server?
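To illustrate, the server is roughly the classic thread-per-client pattern (the port and the echo logic here are just placeholders):

    import java.io.*;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TcpServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(4000)) {
                while (true) {
                    Socket client = server.accept();          // wait for the next client
                    new Thread(() -> handle(client)).start(); // one thread per client
                }
            }
        }

        private static void handle(Socket client) {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("echo: " + line); // bidirectional: read request, write response
                }
            } catch (IOException e) {
                // client disconnected or I/O error; nothing else to do in this sketch
            }
        }
    }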
I'm on thin ice here, but I'm going to say it anyway... If you're running a process on Linux, to most intents and purposes, the process is running in a container.
Containers are "sugar" atop intrinsic Linux kernel features (namespaces, cgroups etc.). Solutions such as Docker Engine mostly made these somewhat arcane kernel capabilities more broadly available and easier to use.
Containers and VMs are very distinct technologies. Extending the above, you can run VMs in containers, and you are almost always running containers in VMs.
It's containers all the way down :-)
To answer your question directly: you are already running your Java server in a container and it's running on a VM. You may decide to do one of two things, but please read up more on each before deciding:
Add Docker (Engine) into your existing VM (if it's not already there) as a way to more easily manage your Java server as a Docker container. Benefits: unclear but see below.
Extract the Java server from the VM (!) and run the server instead as a Docker container. Benefits: unclear; consequences: may not be possible with your hosting company, potential security concerns, etc.
One benefit for you in using containers and deploying containers to your existing hosting provider (and continuing to use their VMs) is that you would be able to build and test in locations other than your hosting provider and be (mostly) guaranteed that a container image that worked during build and test will also work on your hosting service provider in production.
HTH!
Is it true to say that, to a high degree, what is now done in Docker containers could also be done in Java with the JVM if someone wanted to?
Besides letting you write an application in your own language and offering a lot of customisation flexibility, does Docker basically do what Java has been doing with its virtual machine for ages, i.e. provide an execution environment separate from the underlying OS?
Generally, Docker containers cannot be done "within Java", because Docker encapsulates the whole application, whereas "within Java" means code that is loaded after the JVM has already launched.
The JVM is already running by the time it parses the class and searches for the main method. So encapsulation at the process level can't be done, because the process (the JVM) is already running.
Java has encapsulation techniques which do provide protection between various Java elements (see class loader hierarchies in Tomcat, for an example); but those only isolate "application plugins" from each other. The main process running them all is Tomcat, which is itself just a program loaded into an already running JVM.
This doesn't mean you can't combine the two to achieve some objective; it just means that the types of isolation provided by the two products aren't interchangeable.
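To make the class-loader point concrete, here is a small sketch (the jar paths and class name are hypothetical): two "plugins" loaded through separate class loaders are isolated from each other inside one already-running JVM, which is the kind of isolation Java offers, as opposed to the process-level isolation Docker offers.

    import java.net.URL;
    import java.net.URLClassLoader;

    public class PluginHost {
        public static void main(String[] args) throws Exception {
            URLClassLoader loaderA = new URLClassLoader(new URL[] { new URL("file:plugins/a.jar") });
            URLClassLoader loaderB = new URLClassLoader(new URL[] { new URL("file:plugins/b.jar") });

            // Even if both jars contain a class with the same name, each loader
            // produces a distinct Class object, isolated from the other.
            Class<?> pluginA = Class.forName("com.example.Plugin", true, loaderA);
            Class<?> pluginB = Class.forName("com.example.Plugin", true, loaderB);

            System.out.println(pluginA == pluginB); // false: separate "worlds" in one process
        }
    }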
what is now done in Docker containers could also be done in Java with the JVM if someone wanted to
Short answer: no. You can wrap a Docker container around your JVM, but you cannot, in any non-trivial way, wrap a JVM around a Docker container.
does Docker basically do what Java has been doing with its virtual machine for ages, i.e. provide an execution environment separate from the underlying OS?
Docker containers provide isolation from other containers without introducing a virtualisation layer. Thus, they are different from, and more performant than, VMs.
Docker can do several things that the JVM cannot; however, programming in Java and running on the JVM will provide several of the advantages of running in a Docker container.
I work on a large Java project and this project is 20 years old. We have been evolving and fixing our application without any tools or compatibility issues for all these years. Also, as a bonus, the application is platform independent. Several components of it can run in both Windows and Linux. Because no effort was made initially to build a multiplatform application, there is one component that cannot run on Linux. But it would be relatively easy to make it work on that platform.
It would have been much more difficult to do the same thing using C or C++ and associated toolchain.
I'm working on a Ruby on Rails app, currently hosted on Heroku.
We have about 5 web dynos and about 2 worker processes running on average. But because we're using adeptscale these can change a lot, and the cost is increasing from month to month.
We're thinking about changing the process and the infrastructure (using our own, off of Amazon/Google etc.). Also, because of the performance, access to Java libraries and other gains, we're planning to go with jRuby.
I haven't got much experience with jRuby at all, but I do have Java experience. So I have a few questions:
Question intro: Rails' philosophy/approach differs from Java's, i.e. a Ruby web server uses far less memory but can only process one request at a time, so having multiple servers sort of compensates for the inability to process multiple requests concurrently.
If we go with jRuby (and have our Rails project packaged as a war file and deployed to a servlet container, e.g. Tomcat, or JBoss (which is more than just a container)), will we be able to process multiple requests then?
Question intro: Currently we have some application logic running in the workers (instead of blocking the web server and leaving it unable to serve other browser clients). E.g. when users submit some form and our app needs to contact a 3rd party service to build the response, we let a worker do the work of calling the 3rd party service and then update the UI (which shows a waiting status) via websockets once the 3rd party service returns x/y or whatever status.
If we switch to jRuby, how would we achieve similar logic? I mean, do we go with Java code that has some kind of thread pool of workers, where free workers do the work of contacting the 3rd party service etc.? How would we go about this if we decide to go with jRuby?
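To illustrate, something like this (all names hypothetical) is roughly what I have in mind on the Java side: the web request hands the slow 3rd-party call off to a pool and returns immediately, and the worker pushes the result out when it is done.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThirdPartyWorkerPool {
        private final ExecutorService workers = Executors.newFixedThreadPool(10);

        public void submitJob(String formData) {
            workers.submit(() -> {
                String result = callThirdPartyService(formData); // blocking call happens off the web thread
                notifyClient(result);                            // e.g. push the status over a websocket
            });
        }

        private String callThirdPartyService(String formData) {
            // placeholder for the real HTTP call to the 3rd party service
            return "done: " + formData;
        }

        private void notifyClient(String result) {
            // placeholder for the websocket / UI update
            System.out.println(result);
        }
    }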
1) You can serve multiple requests at a time in jruby with nearly any container, but you can also serve multiple requests at a time with mri-ruby. You only have to have a threadsafe app (config.threadsafe! is the default in rails4). Different rack servers have different approaches to serving multiple requests at a time. For example, unicorn uses multiple processes while passenger or puma go for a multi-threaded approach.
In my experience, jruby containers like jboss or tomcat are more complicated to configure properly. But there are things like TorqueBox and Trinidad that help you with this. You can even still go for some of the ruby servers (e.g. puma) that don't use C extensions.
2) If I understand you correctly, you are looking for some background-processing library? You can use sidekiq or resque with ruby or jruby (while jruby will be faster in general, and it's easier to debug memory leaks). You can even use ruby for your rack servers and jruby for your workers (they can even be run in parallel with things like rvm/rbenv).
In general I would only go for the jruby option if you know what you are doing and need better performance for your app servers, or if you want to speed up your worker servers. If I were you I would probably stay in the ruby world and use puma for your app and sidekiq as a background service. Both are very elegant and don't need much configuration.
Yes, JRuby uses Java threads and is really multithreaded. And I can say that it's really good at integration with Java, even using classes for JNI.
I can recommend the following servers (some have already been mentioned):
puma (https://github.com/puma/puma)
any servlet container (even IBM WebSphere Application Server!) - just use warbler (https://github.com/jruby/warbler)
The 'simplest' way to run an application on a servlet container is to make a .war with warbler. The resulting .war file usually includes all dependencies and the JRuby interpreter, so it typically comes to about 30 MB. But I think warbler is not so easy to set up, so I wouldn't recommend this route unless you really need to run Rails in an enterprise Java environment.
And I would just remind you that Rails opens a DB connection for every request, so the default DB connection pool size of 5 isn't enough - don't forget to increase it before load testing :) (e.g. the default thread pool for puma is 16, for IBM WAS 50, for Tomcat 200 threads).
I agree with smallbutton.com that puma is a good choice. Finally, with puma you can switch between JRuby and another interpreter almost seamlessly (in my experience there is one difference - gem names).
In our application, remote procedure calls are handled by our own Netty-based command dispatcher system. We have a lot of modules (about 20) and I want to run all modules in separate JVMs. My problem is that RMI spawns about 17 threads for each JVM. I do not need RMI at all (as far as I know).
Can I completely disable RMI for a JVM? Or at least configure it in a way that it does not use this many threads?
You are most likely looking at your JVMs with a monitoring application, right? Well, these monitoring applications use RMI. So you will always see RMI threads within your monitoring application, profiler, etc., and you will always see them using some amount of CPU time. Gathering profiling information simply does not work without transporting that information (via RMI) to your tool. You could implement your own transport protocol and direct the management beans to use it, but I doubt you would save enough resources to recoup your development costs.
If you don’t use any RMI, RMI won’t start any threads. But if you are using it, even if you aren’t aware of this, disabling RMI implies that your software will not work any more.
StuartMarks is correct. RMI doesn't start any threads until you use it.
Possibly you are using it in some way of which you are unaware, e.g. JMX?
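One quick (and admittedly rough) way to check is to dump the live threads and look for "RMI" in their names; thread naming is JVM-implementation specific, so treat this as a heuristic rather than a guarantee:

    public class RmiThreadCheck {
        public static void main(String[] args) {
            // Print the names of all live threads that look RMI-related.
            Thread.getAllStackTraces().keySet().stream()
                  .filter(t -> t.getName().contains("RMI"))
                  .forEach(t -> System.out.println(t.getName()));
        }
    }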
I'd like to run multiple Java processes on my web server, one for each web app. I'm using a web framework (Play) that has a lot of supporting classes and jar files, and the Java processes use a lot of memory. One Play process shows about 225MB of "resident private" memory. (I'm testing this on Mac OS X, with Java 1.7.0_05.) The app-specific code might only be a few MB. I know that typical Java web apps are jars added to one server process (Tomcat, etc), but it appears the standard way to run Play is as a standalone app/process. If these were C programs, most of that 200MB would be shared library and not duplicated in each app. Is there a way to make this happen in Java? I see some pages about class data sharing, but that appears to apply only to the core runtime classes.
At this time and with the Oracle VM, this isn't possible.
But I agree, it would be a nice feature, especially since Java has all the information it needs to do that automatically.
Off the top of my head, I think that the JIT is the only reason why this can't work: the JIT takes runtime behavior into account. So if app A uses some code in a different pattern than app B, that would result in different assembler code being generated at runtime.
But then, the usual "pattern" is "how often is this code used." So if app A called some method very often and B didn't, they could still share the code because A has already paid the price for optimizing/compiling it.
What you can try is deploy several applications as WAR files into a single VM. But from my experience, that often causes problems with code that doesn't correctly clean up thread locals or shutdown hooks.
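As a sketch of the kind of cleanup that matters in that setup: each webapp should release whatever it started (thread pools, shutdown hooks, thread locals) when it is undeployed, otherwise it pins its class loader in memory. Names below are placeholders.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class AppLifecycleListener implements ServletContextListener {
        private ExecutorService pool;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            // Resources started by this webapp...
            pool = Executors.newFixedThreadPool(4);
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            // ...must be stopped here so the webapp can be unloaded cleanly.
            pool.shutdownNow();
        }
    }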
The IBM JDK has a JVM parameter to achieve this. Check out http://www.ibm.com/developerworks/library/j-sharedclasses/
And this takes it to the next step: http://www.ibm.com/developerworks/library/j-multitenant-java/index.html
If you're using a servlet container with virtual host support (I believe Tomcat does), you would be able to use the play2-war-plugin. From Play 2.1 the requirement of always being the root app is going to be lifted, so you will probably be able to use any servlet container.
One thing to keep in mind is that you will probably have to tweak the war file to move stuff from WEB-INF/lib to your servlet container's lib directory to avoid loading all the classes again, and this could affect your app if it uses singletons or other forms of class-level shared data.
The problem of sharing memory between JVM instances is more pressing on mobile platforms, and as far as I know Android has a pretty clever solution for that in Zygote: the VM is initialized and then when running the app it is fork()ed. Linux uses copy-on-write on the RAM pages, so most of the data won't be duplicated.
Porting this solution might be possible if you're running on Linux and want to try using Dalvik as your VM (I saw claims that there is a working port of Tomcat on Dalvik). I would expect this to be a huge amount of work, eventually saving you a few dollars on memory upgrades.