I'm writing a Java servlet that I plan to deploy on AWS using Elastic Beanstalk. My tests show that things run well on a Small EC2 instance with the stock Tomcat AMI that Beanstalk uses.
I'm trying to figure out how to properly allocate Java heap space for this configuration. A Small instance has 1.7 GB of memory, so I'm thinking that a 1024 MB heap will work well. I realize that memory will be needed for other things, even though the only "real" purpose of this instance is to run Tomcat. And I also know that there's a point with large heaps where the standard Sun/Oracle JVM doesn't really work well.
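For what it's worth, here's a quick way to confirm what the JVM actually reserves once the flags are set (a minimal sketch; the class name is mine):

public class HeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() roughly reflects -Xmx (minus some JVM overhead).
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("Max heap:   " + rt.maxMemory() / mb + " MB");
        System.out.println("Total heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("Free heap:  " + rt.freeMemory() / mb + " MB");
    }
}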
Is this a reasonable way to allocate the memory? Should I be using more or less? What tools can I use to help determine the optimal configuration?
1024 MB seems OK, maybe a bit much.
I'm not exactly sure what your servlet is doing, but to give you an idea: I have an ecommerce application with about 1000 daily users running on two Small EC2 instances, with Tomcat load distributed via mod_jk.
I did not tune the JVM at all and kept the default settings. I'm also using Terracotta distributed object caching, and that process seems to be consuming most of the memory.
Are you deploying on a Linux-based OS or Windows? IMO, Linux does a better job of managing available memory.
As for tools, I would suggest deploying your application as-is to a Small EC2 instance and using a tool like JMeter to stress test it. During the stress test, you can have the top utility open (assuming your app is on Linux and top is installed).
Try to break your application by seeing how much load it can handle. That's the beauty of EC2: you can set up a test environment in minutes and discard it right afterwards.
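If you want numbers you can line up with the JMeter run, a small sketch like this (the class name and interval are mine) logs heap usage from inside the JVM:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Logs heap usage every few seconds so you can correlate it with load phases.
public class HeapLogger implements Runnable {
    public void run() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (!Thread.currentThread().isInterrupted()) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

Start it from a ServletContextListener with new Thread(new HeapLogger(), "heap-logger").start() and watch the output alongside top.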
Check out New Relic for Java to determine your heap usage pattern.
Using Java JDK 1.8.0_60, OrientDB 2.1.3 (embedded/plocal)
I am wondering if there is any way to configure OrientDB's memory settings to "share" memory space better within a Java process (especially when OrientDB is not necessarily the "primary/only" module running in the process).
E.g.
The memory profile below shows OrientDB running by itself outside of our application (we used it for load testing). OrientDB does great!
-Xmx500m -Dstorage.diskCache.bufferSize=500
As you can see, OrientDB honors the 500 MB limit just fine. However, when we put it back into our application, any time another part of the application needs more memory for something else, you can see how this becomes a problem, especially if OrientDB is at one of the usage peaks circled above. This is where an OutOfMemoryError occurs: OrientDB ends up competing for memory with the rest of the application it is embedded in.
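In case it matters, we pass these settings on the command line, but as far as I can tell the same keys can also be set from code before the first database is opened. A sketch (path and credentials are placeholders):

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

public class EmbeddedOrient {
    public static void main(String[] args) {
        // Same key as -Dstorage.diskCache.bufferSize; must be set before
        // any OrientDB class loads, or the default cache size is used.
        System.setProperty("storage.diskCache.bufferSize", "256"); // MB

        // Opens an existing embedded (plocal) database.
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/data/ourdb");
        db.open("admin", "admin");
        try {
            // ... application work ...
        } finally {
            db.close();
        }
    }
}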
Any thoughts? Are there other memory settings that we should try adjusting?
Thanks!
After several experiments with OrientDB, we determined that it was never intended to be a second-class citizen in the process space. By that I mean OrientDB is designed to be "the process", not to be embedded the way SQLite is for light use inside an application.
As such, we moved OrientDB into a separate service/process so that it does not conflict with the memory space of our application. It seems to work fine in this manner.
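If it helps anyone making the same move: once OrientDB runs as its own server process, the application-side change can be as small as swapping the plocal URL for a remote one (host, database name and credentials below are placeholders from our setup):

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

public class RemoteOrient {
    public static void main(String[] args) {
        // Before: "plocal:/data/ourdb" inside our own process.
        // After: the database lives in a standalone OrientDB server, so its
        // disk cache no longer competes with our application's heap.
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("remote:localhost/ourdb");
        db.open("admin", "admin");
        try {
            // ... same application code as before ...
        } finally {
            db.close();
        }
    }
}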
I am running a web-based Java application on JBoss and OFBiz. I suspect a memory leak, so I did some memory profiling of the JVM on which the application, along with JBoss and OFBiz, is running. I suspect garbage collection is not working as expected for the application.
I used VisualVM, JConsole, YourKit, etc. to do the memory profiling. I could see how much heap memory is being used, how many classes are being loaded, how many threads are being created, and so on. But I need to know how much memory is used by the application alone, how much by JBoss, and how much by OFBiz. I want to find out who is using how much memory and what the usage pattern is. That will help me identify where the memory leak is happening and where tuning is needed.
But with the memory profilers I have run so far, I was unable to differentiate the usage of each application separately. Can you please tell me which tool can help me with that?
There is no way to do this with Java since the Java runtime has no clear way to say "this is application A and this is B".
When you run several applications in one Java VM, you're just running one: JBoss. JBoss then has a very complex classloader but the app you're profiling is actually JBoss.
To do what you want to do, you have to apply filters but this only works when you have a memory leak in a class which isn't shared between applications (so when com.pany.app.a.Foo leaks, you can do this).
If you can't use filters, you have to look harder at the numbers to figure out what's going on. That means you'll probably have to let the app server run out of memory, create a heap dump and then look for what took most of the memory and work from there.
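You don't have to wait for the crash either: you can trigger a dump yourself via the HotSpot diagnostic MXBean (or start the server with -XX:+HeapDumpOnOutOfMemoryError). A minimal sketch, with a made-up output path:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void dump(String file) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean hotspot = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live objects (forces a GC first).
        hotspot.dumpHeap(file, true);
    }

    public static void main(String[] args) throws Exception {
        dump("/tmp/jboss-heap.hprof");
    }
}

Open the resulting .hprof file in VisualVM or Eclipse MAT and sort by retained size.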
The only other alternative is to install a second server, deploy just one app there and watch it.
You can install and create Docker containers, allowing you to run processes in isolation. This lets you use multiple containers with the same base image without having to install the JDK multiple times. The advantage is separation of concerns: every application can be deployed in a separate container. You can then profile any specific application running on the JVM, because each container gets a completely isolated view of the operating environment, including process trees, network, user IDs and mounted file systems.
Here are a couple of resources for Docker:
Deploying Java applications with Docker
JVM plus Docker: Better together
Docker
Please let me know if you have any questions!
Another good tool for finding Java memory leaks is Plumbr. You can try it out for free; it will find the cause of the java.lang.OutOfMemoryError and even show you the exact location of the problem along with solution guidelines.
I explored various Java memory profilers and found that YourKit gave me the closest result. In the YourKit dashboard you get links to the individual classes that are running, so if you are familiar with the codebase, you will know which class belongs to which app. Click on any class and you will see the CPU and memory usage related to it. Also, if you notice any issues, YourKit can help you trace back to the particular line of code in your Java source files!
If you add YourKit to Eclipse, clicking on the object name in the 'issue area' will highlight the offending line in the particular source file.
Pretty cool!!
I'm looking for a runtime solution for finding memory usage of web apps.
I'm providing a framework that includes Tomcat, on which different clients deploy several web apps. Sometimes one of them consumes a lot of memory, crashing the entire process.
I would like to have a manager web app (like Tomcat's manager) that will detect this and maybe undeploy / re-deploy the problematic webapp.
Another solution (which I don't think is possible) would be to allocate a separate slice of the heap to each web app.
Asking the clients to change their existing web apps is possible, but I'd rather not.
Any thoughts?
You can't intercept the allocations in each webapp, and there are no callbacks from the garbage collector, so you can't know how much memory each webapp uses. I think you're better off deploying several Tomcat instances, so that one "rogue" webapp does not kill all the others (up to one Tomcat per webapp, but you can also create groups to limit the number of instances, depending on the criticality of your different applications).
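What a manager webapp can do is watch overall heap pressure and react before the whole process dies, even though it still can't attribute the memory to a particular webapp. A rough sketch using the standard memory pool notifications (the 80% threshold and the reaction are placeholders):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class HeapWatchdog {
    public static void install() {
        // Ask each heap pool to notify us when usage crosses 80% of its max.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.8));
                }
            }
        }
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener(new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                    // React here: log, alert, or call Tomcat's manager to
                    // undeploy the webapp you suspect.
                    System.err.println("Heap threshold exceeded: " + n.getMessage());
                }
            }
        }, null, null);
    }
}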
Tomcat runs as a single Java process, so it is hard to allocate memory per application. You can only increase the overall limits, e.g. -Xmx and -XX:MaxPermSize.
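For example (the values are just an illustration), in Tomcat's bin/setenv.sh:

JAVA_OPTS="-Xmx1024m -XX:MaxPermSize=256m"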
You can check out a leak detector for Tomcat, but it will hardly help, since you cannot change the source code of the other apps.
I think what you want to look at is VisualVM; it will give you an overview of Tomcat's memory usage in the JVM.
http://techblog.zabuchy.net/2012/monitoring-of-tomcat-with-visualvm-and-visualgc/
I created a Java application that is the only application active on a workstation (similar to a kiosk system).
The problem is that the application has to be up and running as fast as possible after starting the computer.
I'm wondering which one of the major operating systems can be configured to provide the shortest startup time?
I'm using 3rd party audio and graphics libraries so my choices are limited to Windows XP/Vista, Linux and Solaris.
Currently, on my dual-boot machine, Fedora takes a little longer than Vista, but on the other hand I don't have much experience with tuning Linux boot times. So if someone knows that Linux has a much better chance of a quick startup, I'll put my time in there.
I'd also appreciate general hints on tuning boot times and Java startup times.
I would look at BootChart to optimise your Fedora boot time. If you're running one app, then you can certainly remove a lot of the services that Fedora would normally come configured with.
I would also point out that, for the amount of time you're going to spend optimising this, you may be better off investing in the appropriate hardware (e.g. SSDs and similar, if boot time is governed by your disk). Optimising can be a major time sink.
If you're running your application inside of a kiosk like machine where you don't need any other applications running, and you know which drivers/modules you'll need to load ahead of time, I think your best boot time will come from Linux.
It will just take some time to fine tune your boot process to load all the proper software in the fastest time possible.
For such a task, a fine-tuned Linux is best suited. You could take a look at a more customizable distro, where you control which drivers and applications are included.
Debian is highly modularized and customizable, so you can get really good boot speed.
Another option is Gentoo, where you can choose exactly what to compile and include.
Linux with SSD drives.
I'd also suggest a Linux distro, e.g. Gentoo with initng (initng.org). Initng parallelizes the startup process. There are other init systems with which your system will be up in a few seconds.
And of course, fast HDDs and enough RAM for Java ;)
My guess would be Windows XP embedded. I've found that Java apps start up fairly quickly under Windows, particularly if you use a client VM.
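For example (assuming a 32-bit JRE, where the client VM is available), you can force it explicitly; the jar name here is a placeholder:

java -client -jar kiosk.jar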
It is extremely likely that your 3rd party vendors will support XP embedded (particularly if you are a big customer to them). It is very similar to normal XP, just cut down.
If you're making a kiosk type app, why do you care about boot time?
Fedora can be easily optimized if you only want to run a single Java application. Many services are pre-configured to start at boot time and can be omitted. You could also go for SSD drives to improve the system's boot time, and if you spend some time optimizing the boot sequence as well, that should solve your problem.
Has anyone used the new Java 1.6 JDK tool, VisualVM, to profile a production application and how does the application perform while being profiled?
The documentation says that it is designed for both production and development use, but based on previous experience with other profiling tools, I am hesitant.
While I haven't personally used VisualVM, I saw this blog post just today that might have some useful information for you. The author talks about profiling a production app with it.
I tried it on a dev box and found that when I turned off profiling it would shut Tomcat down unexpectedly. I'd be very cautious about rolling this out to production. Can you simulate load in a staging environment instead? It's not as good as the real thing, but it probably won't get you fired if it goes wrong...
I've used VisualVM before to profile something running locally. A big win was that I just start it up, and it can connect to the running JVM. It's easier to use than other profiling tools I've used before and didn't seem to have as much overhead.
I think it does sampling. The overhead on a CPU intensive application didn't seem significant. I didn't measure anything (I was interested in how my app performed, not how the tool performed), but it definitely didn't have the factor of 10 slowdown I'm used to seeing from profiling.
For just monitoring your application, running VisualVM remotely should not slow it down much. As long as the system is not on the edge of collapse, I haven't seen any problems. It's basically just reading out information from the coarse-grained built-in instrumentation of the JVM. If you start profiling, however, you'll have the same issues as with other profilers, basically because they all work in almost the same way, often using the same support in the JVM.
Many people have problems running VisualVM remotely due to firewall issues, but you can even run VisualVM remotely over ssh, with some system properties set.
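For reference, the stock system properties for remote JMX access look something like this (the port is arbitrary; authentication and SSL are disabled here on the assumption that access is restricted to the ssh tunnel):

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

Then forward the port with ssh (ssh -L 9010:localhost:9010 user@server) and point VisualVM at localhost:9010. On newer JVMs you may also need -Dcom.sun.management.jmxremote.rmi.port=9010 so both the JMX and RMI traffic go through the tunneled port.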
It is possible to connect to your server remotely from a different computer using VisualVM. You just need to right-click on the "Remote" node and choose "Add Remote Host."
This would at least eliminate the VisualVM overhead (if there is any) from impacting performance while it is running.
This may not eliminate all performance concerns, especially in Production environments, but it will help a little.
I've used the NetBeans profiler, which uses the same underpinnings as VisualVM.
I was working with an older version of WebLogic, which meant using the 1.5 JVM, so I couldn't do a dynamic attach. The application I was profiling had several thousand classes, and my workstation was pretty much unusable while the profiler instrumented them all. Once instrumentation was complete, the system was sluggish but not completely unusable. The amount of slowdown really depends on what you need to capture: the basic CPU metrics are pretty lightweight, while profiling memory allocation slows things down a lot.
I would not use it on a production system. Aside from the potential slowdown, I eventually ran out of PermGen space because the profiler re-instruments and reloads classes when you change settings. (This may be fixed in the 1.6 agent; I don't know.)
I've been using VisualVM a lot, since before it was included in the JDK, and it has a negligible impact on the performance of the system. I've never noticed it cause a performance problem, but then again, our Java server had enough headroom at the time to support a little extra load. If your server is running at a level so maxed out that it can't handle VisualVM, then it's more likely that you need to buy another server. Any production server should have some memory headroom; otherwise what you have is a disaster waiting to happen.
I have used VVM (VavaVoom?) quite extensively; it works like a charm in light mode, i.e. no profiling, just getting basic data from the VM. But once you start profiling and there are many classes, there is considerable slowdown. I wouldn't profile in a production environment even if you had a 128-core board with 2 TB of memory, purely because the reloading and redefining of classes is tricky. The server classloaders are another thing; they vary from one server implementation to another, and interfering with them in production is not a very good idea.