I'm looking for a runtime solution for finding memory usage of web apps.
I'm providing a framework that includes Tomcat, on which different clients deploy several web apps. Sometimes one of them consumes a lot of memory, crashing the entire process.
I would like to have a manager web app (like Tomcat's manager) that will detect this and maybe undeploy/redeploy the problematic webapp.
Another option (though I don't think it's possible) is to allocate a slice of the heap to each web app separately.
Asking the clients to change their existing web apps is possible, but I'd rather not.
Any thoughts?
You can't intercept the allocations in each webapp, and there are no callbacks from the garbage collector, so you can't know how much memory each webapp uses. I think you're better off deploying several Tomcat instances, so that one "rogue" webapp does not kill all the others (up to one Tomcat per webapp, but you can also create groups to limit the number of instances, depending on the criticality of your different applications).
Tomcat runs as a single Java process, so it is hard to allocate memory per application. You can only raise the process-wide limits, such as -Xmx and -XX:MaxPermSize.
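Because those limits are process-wide, the most you can observe from inside the JVM is total usage across all webapps. A minimal sketch using the standard `MemoryMXBean` (these figures cover the whole Tomcat process, not any individual webapp):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class MemoryReport {
    // JVM-wide heap usage; -Xmx is what caps the "max" value here.
    public static MemoryUsage heap() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    // Non-heap usage; on older JVMs this includes PermGen (-XX:MaxPermSize).
    public static MemoryUsage nonHeap() {
        return ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
    }

    public static void main(String[] args) {
        System.out.println("Heap used/max: "
                + heap().getUsed() + " / " + heap().getMax());
        System.out.println("Non-heap used: " + nonHeap().getUsed());
    }
}
```

A manager webapp could poll these values over JMX and restart Tomcat when usage stays near the cap, but it still cannot attribute the usage to one deployed app.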
You can try a leak detector for Tomcat, but it will hardly help, since you cannot change the source code of the other apps.
I think what you want to look at is VisualVM; this will give you an overview of Tomcat's memory usage in the JVM.
http://techblog.zabuchy.net/2012/monitoring-of-tomcat-with-visualvm-and-visualgc/
Related
I am running a web-based Java application on JBoss and OFBiz. I suspect a memory leak, so I did some memory profiling of the JVM on which the application, along with JBoss and OFBiz, is running. I suspect garbage collection is not working as expected for the application.
I used VisualVM, JConsole, YourKit, etc., to do the memory profiling. I could see how much heap memory was being used, how many classes were loaded, how many threads were created, and so on. But I need to know how much memory is used only by the application, how much by JBoss, and how much by OFBiz, respectively. I want to find out who is using how much memory and what the usage pattern is. That will help me identify where the memory leak is happening and where tuning is needed.
But with the memory profilers I have run so far, I was unable to differentiate the usage of each application separately. Can you please tell me which tool can help me with that?
There is no way to do this with Java since the Java runtime has no clear way to say "this is application A and this is B".
When you run several applications in one JVM, you're really running just one: JBoss. JBoss then has a very complex classloader setup, but the app you're profiling is actually JBoss.
To do what you want, you have to apply filters, but this only works when the memory leak is in a class which isn't shared between applications (so when com.pany.app.a.Foo leaks, you can do this).
If you can't use filters, you have to look harder at the numbers to figure out what's going on. That means you'll probably have to let the app server run out of memory, create a heap dump and then look for what took most of the memory and work from there.
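To get that heap dump without babysitting the server, you can either start the JVM with -XX:+HeapDumpOnOutOfMemoryError or trigger a dump programmatically. A sketch using HotSpot's diagnostic MBean; note this is a com.sun.management API, so it is specific to Sun/Oracle-style JVMs:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes an .hprof heap dump to the given path. When "live" is true,
    // only reachable objects are dumped (a GC runs first). The call fails
    // if the target file already exists.
    public static void dump(String path, boolean live) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, live);
    }
}
```

The resulting .hprof file can then be opened in VisualVM or Eclipse MAT to see which classes retain the most memory.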
The only other alternative is to install a second server, deploy just one app there and watch it.
You can install Docker and create containers, which let you run processes in isolation. This allows you to use multiple containers with the same base image without having to install the JDK multiple times. The advantage is separation of concerns: every application can be deployed in a separate container. You can then profile any specific application running on the JVM, because each container gets a completely isolated view of the operating environment, including process trees, network, user IDs, and mounted file systems.
Here are a couple of resources for Docker:
Deploying Java applications with Docker
JVM plus Docker: Better together
Docker
Please let me know if you have any questions!
Another good tool for finding Java memory leaks is Plumbr. You can try it out for free; it will find the cause of the java.lang.OutOfMemoryError and even show you the exact location of the problem, along with solution guidelines.
I explored various Java memory profilers and found that YourKit gave me the closest result. In the YourKit dashboard you get links to the individual classes that are running, so if you are familiar with the codebase, you will know which class belongs to which app. Click on any class and you will see the CPU and memory usage related to it. Also, if you notice any issues, YourKit can help you trace back to the particular line of code in your source Java files!
If you add YourKit to Eclipse, clicking on the object name in the 'issue area' will highlight the code line in the particular source file that is the source of the problem.
Pretty cool!!
I want to host my web site on the cheapest plan. Java hosting plans are priced mainly by heap size (64 MB, 128 MB, 256 MB, etc.), depending on how much you want.
Before starting my application, I need to know: will there be any effect on memory if I build the site with Spring and Hibernate instead of plain JSP/servlets? If yes, how much?
Is it possible to deploy a medium-sized web application built with Spring and Hibernate in 64 MB?
I got confused when I ran my early-stage application in a local Tomcat: it took around 350 MB of memory, which makes me worried about the cost when I deploy my web site.
Please shed some light on this if anyone has good knowledge of it.
Every library or framework you add/use will add memory usage.
If you want to save money by getting the cheapest plan, I recommend profiling your app (using VisualVM or something like that) and checking how much memory your app consumes and, most importantly, where. Then you can try to optimize it, and configure your app server to work with only the memory you need.
Keep in mind that memory consumption will vary also depending on the load of your website.
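As a rough first check before committing to a plan, you can also log the JVM's own view of its memory from inside the app. A small sketch using plain `java.lang.Runtime` (coarser than a profiler, but free, where used = total - free):

```java
public class MemoryLogger {
    // Approximate heap in use right now, in megabytes.
    public static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    // Upper bound the heap may grow to (what -Xmx configures).
    public static long maxMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Used " + usedMb() + " MB of " + maxMb() + " MB max");
    }
}
```

Logging this periodically under realistic load gives you a baseline before you pay for a particular heap size.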
I'd like to run multiple Java processes on my web server, one for each web app. I'm using a web framework (Play) that has a lot of supporting classes and jar files, and the Java processes use a lot of memory. One Play process shows about 225MB of "resident private" memory. (I'm testing this on Mac OS X, with Java 1.7.0_05.) The app-specific code might only be a few MB. I know that typical Java web apps are jars added to one server process (Tomcat, etc), but it appears the standard way to run Play is as a standalone app/process. If these were C programs, most of that 200MB would be shared library and not duplicated in each app. Is there a way to make this happen in Java? I see some pages about class data sharing, but that appears to apply only to the core runtime classes.
At this time and with the Oracle VM, this isn't possible.
But I agree, it would be a nice feature, especially since Java has all the information it needs to do that automatically.
Off the top of my head, I think the JIT is the only reason why this can't work: the JIT takes runtime behavior into account. So if app A uses some code in a different pattern than app B, that would result in different assembly code being generated at runtime.
But then, the usual "pattern" is "how often is this code used." So if app A called some method very often and B didn't, they could still share the code because A has already paid the price for optimizing/compiling it.
What you can try is to deploy several applications as WAR files into a single VM. But from my experience, that often causes problems with code that doesn't correctly clean up thread locals or shutdown hooks.
The IBM JDK has a JVM parameter to achieve this. Check out http://www.ibm.com/developerworks/library/j-sharedclasses/
And this takes it to the next step: http://www.ibm.com/developerworks/library/j-multitenant-java/index.html
If you're using a servlet container with virtual hosts support (I believe Tomcat does it) you would be able to use the play2-war-plugin. From Play 2.1 the requirement of always being the root app is going to be lifted so you will probably be able to use any servlet container.
One thing to keep in mind is that you will probably have to tweak the WAR file to move stuff from WEB-INF/lib to your servlet container's lib directory, to avoid loading all the classes again; this could affect your app if it uses singletons or other forms of class-level shared data.
The problem of sharing memory between JVM instances is more pressing on mobile platforms, and as far as I know Android has a pretty clever solution for that in Zygote: the VM is initialized and then when running the app it is fork()ed. Linux uses copy-on-write on the RAM pages, so most of the data won't be duplicated.
Porting this solution might be possible if you're running on Linux and want to try using Dalvik as your VM (I have seen claims that there is a working port of Tomcat on Dalvik). I would expect this to be a huge amount of work that eventually saves you a few dollars on memory upgrades.
I'm writing a Java servlet that I'm planning to deploy on Amazon AWS using Elastic Beanstalk. My tests show that things run well using a Small EC2 instance using their stock Tomcat AMI that Beanstalk uses.
I'm trying to figure out how to properly allocate Java heap space for this configuration. A Small instance has 1.7 GB of memory, so I'm thinking that a 1024MB heap will work well. I realize that memory will be needed for other things even though the only "real" purpose of this instance is to run Tomcat. And I also know that there's some point with large heaps where the standard Sun/Oracle JVM doesn't really work.
Is this a reasonable way to allocate the memory? Should I be using more or less? What tools can I use to help determine the optimal configuration?
1024 MB seems OK, maybe a bit much.
I'm not exactly sure what your servlet is doing, but to give you an idea, I have an ecommerce application with about 1000 daily users running on 2 small ec2 instances. Tomcat load is distributed via mod_jk.
I did not tune the JVM at all and kept the default settings. I'm also using terracotta distributed object caching, and that process seems to be consuming most of the memory.
Are you deployed on a Linux-based OS or Windows? IMO, Linux does a better job of managing available memory.
As for tools, I would suggest deploying your application as-is to a small EC2 instance and using a tool like JMeter to stress-test it. During the stress test, you can have the top utility open (assuming your app is on Linux and top is installed).
Try to break your application by seeing how much load it can handle. That's the beauty of ec2, you can setup a test environment in minutes and discard it right after.
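Before stress testing, simple arithmetic gets you into the right range: start from the instance's RAM and subtract what the OS, the JVM's non-heap areas (PermGen, thread stacks, native buffers), and any other processes need. A hypothetical helper to illustrate; the 600 MB overhead figure is an assumption for a dedicated small instance, not a measured value:

```java
public class HeapSizer {
    // Assumed overhead for OS + JVM non-heap + other processes, in MB.
    static final long ASSUMED_OVERHEAD_MB = 600;

    // Suggests an -Xmx value (MB) given total instance memory in MB.
    public static long suggestedHeapMb(long instanceMb) {
        long heap = instanceMb - ASSUMED_OVERHEAD_MB;
        return Math.max(heap, 256); // never suggest below a minimal floor
    }

    public static void main(String[] args) {
        // A small EC2 instance has about 1700 MB of RAM.
        System.out.println("-Xmx" + suggestedHeapMb(1700) + "m");
    }
}
```

For a 1.7 GB instance this lands close to the 1024 MB figure; the stress test then tells you whether the overhead assumption was realistic.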
Check New Relic for Java to determine your heap usage pattern.
I have an eCommerce site and I wanted to implement search in it. After reading a lot about Lucene and Solr, I finally chose Solr, as it adds functionality like a JSON API, facets, and a lot more.
Solr comes with a built-in Jetty server that runs in the background, while my webapp runs on a Tomcat server. I'd like to know what would be better in the long run, performance-wise and for ease of customization and use: leaving Solr standalone on its own Jetty server, or integrating Solr into Tomcat (configured via JAVA_OPTS in catalina.bat)?
I personally feel that putting Solr in Tomcat will reduce performance, as it will take more time to load, and I don't want Solr to restart every time I redeploy my webapp; then again, having it all together in one place may be a plus (I'm not sure). I'm looking forward to opinions from people who have been using Solr about what would be best for me. The data set is huge and the site attracts thousands of users every day.
Having it all on a single Tomcat instance will make administration easier. You can easily redeploy your webapp independently of Solr; Tomcat is designed to host multiple applications. If you have to put both Solr and your web app on a single box, I'd avoid running two web servers unless you have a real, measurable, compelling reason.
Mauricio's answer is a good one.
The counter-point to the operational simplicity of sharing the same Tomcat is the potential safety from isolating Solr into its own entire JVM and heap. That way you don't have to worry about the case of some memory leak or query of death or gigantic GC pause in Solr taking down your main application.
At Websolr, we run Solr in Tomcat, because it's what we know. However, if we implemented a design where we were to run more than one Java service on the same instance type, we would give serious consideration to switching to multiple Jettys for the increased isolation.
Weigh the pros and cons: Operational simplicity versus total isolation. Your mileage may vary. If this kind of isolation is not compelling, stick to hosting within Tomcat. If anything about that makes you nervous, set up monit and munin to keep an eye on everything.
And don't restart all of Tomcat just to reload one of its web applications.
For security, you may want your Solr server to be isolated and accessed only from your webapp.
In that case, it would be better to host it on a different instance, not bound to a public address.