I need to deploy multiple instances of the same Spring application.
I'm currently using Docker as a container for my instances, with one container per instance.
However, I noticed that each container consumes up to 500 MB, which is too much for me, since I need my VPS to host as many instances as possible. Honestly, the containers are used only for the JVM and nothing else, so dedicating a whole virtual environment to a single JVM instance is a waste of memory I don't really need.
What I need is more of a tool to help me administer the instances (auto-restart if a process is killed for some reason, easy updates when a new version of the app has been developed, managing the instances from the command line for debugging and so on), just as I can do with Docker but without a whole virtualisation environment, which is too "greedy".
By the way, I have a VPS from OVH Cloud Services, which doesn't provide any kind of Spring-like deployment service. I'm working on a traditional Ubuntu 18.04.
Thanks in advance
You should approach this from the bottom to the top of the stack. I'll try to answer each topic, but I'd advise you to split this into several questions so people can give more detailed answers to each problem.
I'll give you just an overview so you can have a starting point.
JVM Memory Limit
First, control the memory allocation by setting coherent limits to the JVM. You can do so by setting the Max Heap Size, and the Max Metaspace size (Java 8).
You can set the max heap size with the -Xmx flag: -Xmx1G will limit the heap size to 1 gigabyte.
The metaspace can be capped with -XX:MaxMetaspaceSize. The metaspace was introduced in Java 8; if you are using Java 6 or 7, you should take a look at PermGen instead.
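For example, a start command combining both limits could look like this (the jar name and the sizes are only placeholders, tune them to your application):
java -Xmx256m -XX:MaxMetaspaceSize=128m -jar my-spring-app.jar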
Container Limit
You can also cap the memory of your container by appending the -m flag to the docker run command. You can take a look at Docker's docs here: https://docs.docker.com/config/containers/resource_constraints/
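For example (the image name and the limit are placeholders), this caps the container at 512 MB and disables extra swap:
docker run -d -m 512m --memory-swap 512m my-spring-app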
Container Management
This deserves its own detailed answer. First of all, the Spring Framework is not related to what you are trying to achieve; it won't solve automatic restarts or anything like that.
What you are looking for is a container management tool. You can take a look at Docker Swarm, Portainer, or Kubernetes.
They'll let you start multiple instances of the same service without changing anything at the code level.
For a quick and dirty implementation, you can use Docker Swarm, which is a no-brainer and integrates seamlessly with Docker containers.
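As a rough sketch (service name, image and limits are placeholders), a swarm running three replicas of the same image could be started like this:
docker swarm init
docker service create --name my-spring-app --replicas 3 -p 8080:8080 --limit-memory 512m my-spring-app:latest
Swarm then restarts containers that die and lets you scale with docker service scale my-spring-app=5.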
Related
Recently we migrated all our company's applications from WebSphere to the Tomcat application server. As part of this process we had performance testing done.
We found that a couple of applications show over 100% performance degradation on Tomcat. We increased the number of threads, tuned the datasource settings to accommodate our test, and also increased the read and write buffer sizes in the Tomcat server.
Application Background:
-> Spring Framework
-> Hibernate
-> Oracle 12c
-> JSPs
-> OpenJDK 8
We already checked the database and found no issues with performance in DB.
The CPU utilization while running the test is always less than 10%.
Heap settings are -Xms = 1.5 GB and -Xmx = 2 GB, and the heap never uses more than 1.2 GB.
We also have two nodes and HAProxy on top to balance the load. (We don't have a web server in place).
Despite our best efforts we couldn't pinpoint the issue causing the performance degradation. I am aware that this information isn't enough to provide a solution to our problem; however, any suggestion on how to proceed would be very helpful, as we have hit a dead end and are unable to move forward.
Appreciate it if you can share any points that will be helpful in finding the issue.
Thanks.
Take thread dumps, analyse which part of the application is having issues, and start troubleshooting from there.
Follow this article for detailed explanation about Thread Dumps analysis - https://dzone.com/articles/how-analyze-java-thread-dumps
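For example, assuming the process id of the JVM is known (the pid below is a placeholder), a few dumps taken some seconds apart can be compared:
jstack <pid> > thread-dump-1.txt
kill -3 <pid>   # alternative: writes the dump to the JVM's stdout, e.g. catalina.out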
There are plenty of possible reasons for the problem you've mentioned and there really isn't much data to work with. Regardless of that, as kann commented a good way to start would be gathering Thread Dumps of the java process.
I'd also question whether you're running on the same servers or on newly set up servers, and how they look resource-wise. Are there any CPU/memory/IO constraints during the test?
Regarding the Xmx, it sounds like you're not passing the -XX:+AlwaysPreTouch flag to the JVM, but I would advise you to look into it, as it makes the JVM zero the heap memory at start-up instead of doing it at runtime (which can mean a performance hit).
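As an illustration only (the sizes just mirror the ones mentioned above, adjust as needed), the flag is simply added next to the existing heap options, e.g. in CATALINA_OPTS:
export CATALINA_OPTS="-Xms1536m -Xmx2g -XX:+AlwaysPreTouch"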
We are running a set of Java applications in docker containers on OpenShift. On a regular basis we experience oom kills for our containers.
To analyse this issue we set up Instana and Grafana for monitoring. In Grafana we have graphs for each of our containers showing memory metrics, e.g. JVM heap, memory.usage and memory.total_rss. From these graphs we know that both the heap and the memory.total_rss of our containers stay pretty stable at a certain level over a week, so we assume that we do not have a memory leak in our Java application. However, memory.total is constantly increasing over time, and after a couple of days it goes beyond the configured memory limit of the Docker container. As far as we can see this doesn't cause OpenShift to kill the container immediately, but sooner or later it happens.
We increased the memory limit of all our containers and this seems to help, since OpenShift is not killing our containers that often anymore. However, we still see in Grafana that memory.total significantly exceeds the configured memory limit of our containers after a couple of days (the RSS memory is fine).
To better understand OpenShift's OOM killer, does anybody know which memory metric OpenShift takes into account when deciding whether a container has to be killed? Is the configured container memory limit related to memory.usage, to memory.total_rss, or to something completely different?
Thanks in advance for your help.
Can mounting the JRE directory from the host system reduce RAM usage by sharing heap space? Or will this cause problems?
I have a lot of containers running a Java service inside. The problem is that sometimes, when the services are under heavy load, they eventually need a lot of heap space. When I assign each container (for example) -Xmx2g, I quickly run out of RAM on my host system. Unfortunately, once Java has allocated heap, it is not freed again (neither for the container's RAM nor for the host's RAM). Restarting a container frees the memory allocated for the heap during the peak, but for a container running Solr it would probably take several hours to re-index all the data, which makes downtime possible only on the weekend.
The idea is to use a common JRE on the host system so the heap space is shared between the individual services. I could then assign -Xmx something like (only an example) 250m per service plus 3g for the workload peaks. This way I would use much less memory, because the services share the heap space.
Is there a flaw in my idea, or could it really be worthwhile?
Maybe someone has already faced such a problem and solved it in another way?
I don't think it is a good idea to share memory between containers. Docker is designed to isolate different environments and reduce the effect containers have on each other, so running each service with its own JVM is the intended way to use Docker and other container runtimes.
Also, if you shared memory, it would be hard to migrate the container.
I already found a solution here (shrinking the Java heap space): https://stackoverflow.com/a/4952645/2893873
I assumed that shrinking the Java heap space was not possible, but it is. I think it is a better solution than sharing the JVM between containers.
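For anyone interested, the idea from that answer boils down to giving the JVM a small initial heap and letting it return unused memory. A minimal sketch (sizes and jar name are placeholders, and the exact behaviour depends on the garbage collector in use):
java -Xms256m -Xmx2g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -jar my-service.jar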
Since moving to Tomcat8/Java8, now and then the Tomcat server is OOM-killed. OOM = Out-of-memory kill by the Linux kernel.
How can I prevent the Tomcat server be OOM-killed?
Can this be the result of a memory leak? I guess I would get a normal Out-of-memory message, but no OOM-kill. Correct?
Should I change settings in the HEAP size?
Should I change settings for the MetaSpace size?
Knowing which Tomcat process is killed, how can I retrieve information so that I can reconfigure the Tomcat server?
Firstly check that the oomkill isn't being triggered by another process in the system, or that the server isn't overloaded with other processes. It could be that Tomcat is being unfairly targeted by oomkill when some other greedy process is the culprit.
The maximum heap size (-Xmx) should be set smaller than the physical RAM on the server. If it is more than this, paging will cause desperately poor performance when garbage collecting.
If it's caused by the metaspace growing in an unbounded fashion, then you need to find out why that is happening. Simply setting the maximum size of metaspace will cause an outofmemory error once you reach the limit you've set. And raising the limit will be pointless, because eventually you'll hit any higher limit you set.
Run your application and, before it crashes (not easy of course, you'll need to judge it), send kill -3 to the Tomcat process and take a heap dump. Then analyse the dumps and try to find out why the metaspace is growing so big. It's usually caused by dynamically loading classes. Is this something your application is doing? More likely, it's some framework doing it. (NB: the OOM killer will kill -9 the Tomcat process, and you won't be able to gather any diagnostics after that, so you need to let the app run and intervene before this happens.)
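For example (the pid is a placeholder), the following can be collected before the OOM killer strikes:
kill -3 <tomcat-pid>                                        # thread dump, written to catalina.out
jmap -dump:live,format=b,file=/tmp/heap.hprof <tomcat-pid>  # heap dump for offline analysis
jmap -clstats <tomcat-pid>                                  # class loader statistics (JDK 8)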
Also check out this question, where there's an intriguing answer claiming that an obscure fix to an XML binding setting cleared the problem (highly questionable, but it may be worth a try): java8 "java.lang.OutOfMemoryError: Metaspace"
Another very good solution is to transform your application into a Spring Boot JAR (Docker) application. Normally such an application has a lot less memory consumption.
So, the steps to get huge improvements (if you can move to a Spring Boot application):
Migrate to Spring Boot application. In my case, this took 3 simple actions.
Use a light-weight base image. See below.
VERY IMPORTANT - use the Java memory balancing options. See the last line of the Dockerfile below. This reduced my running container's RAM usage from over 650MB to ONLY 240MB, running smoothly. So, a SAVING of over 400MB on 650MB!!
This is my Dockerfile:
FROM openjdk:8-jdk-alpine
# Name of the application jar to copy into the image
ENV JAVA_APP_JAR your.jar
ENV AB_OFF true
EXPOSE 8080
# Copy the built jar into the image
ADD target/$JAVA_APP_JAR /deployments/
# Size the heap from the container's cgroup memory limit
CMD ["java","-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-jar","/deployments/your.jar"]
It looks like
java.lang.OutOfMemoryError: PermGen space
is a common problem. You can increase the size of your perm space, but after 100 or 200 redeploys it will be full. Tracking down ClassLoader memory leaks is nearly impossible.
What are your methods for Tomcat (or another simple servlet container - Jetty?) on production server? Is server restart after each deploy a solution?
Do you use one Tomcat for many applications?
Maybe I should use many Jetty servers on different ports (or an embedded Jetty) and do undeploy/restart/deploy each time ?
I gave up on using the tomcat manager and now always shutdown tomcat to redeploy.
We run two tomcats on the same server and use apache webserver with mod_proxy_ajp so users can access both apps via the same port 80. This is nice also because the users see the apache Service Unavailable page when the tomcat is down.
You can try adding these Java options:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
This enables garbage collection in the PermGen space (off by default) and allows the GC to unload classes. In addition, you should use the -XX:PermSize=64m -XX:MaxPermSize=128m options mentioned elsewhere to increase the amount of PermGen available.
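With Tomcat, one way to apply these options (a sketch assuming a standard Tomcat layout; the sizes are examples, and the CMS flags only take effect together with the CMS collector) is a setenv.sh next to catalina.sh:
# $CATALINA_BASE/bin/setenv.sh
export CATALINA_OPTS="-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:PermSize=64m -XX:MaxPermSize=128m"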
Yes indeed, this is a problem. We're running three web apps on a Tomcat server: No. 1 uses a web application framework, Hibernate and many other JARs, no. 2 uses Hibernate and a few JARs and no. 3 is basically a very simple JSP application.
When we deploy no. 1, we always restart Tomcat. Otherwise a PermGen space error will soon bite us. No. 2 can sometimes be deployed without problem, but since it often changes when no. 1 does as well, a restart is scheduled anyway. No. 3 poses no problem at all and can be deployed as often as needed without problem.
So, yes, we usually restart Tomcat. But we're also looking forward to Tomcat 7, which is supposed to handle many memory / class loader problems that are buried in different third-party JARs and frameworks.
PermGen switches in HotSpot only delay the problem, and eventually you will get the OutOfMemoryError anyway.
We have had this problem a long time, and the only solution I've found so far is to use JRockit instead. It doesn't have a PermGen, so the problem just disappears. We are evaluating it on our test servers now, and we haven't had one PermGen issue since the switch. I also tried redeploying more than 20 times on my local machine with an app that gets this error on first redeploy, and everything chugged along beautifully.
JRockit is meant to be integrated into OpenJDK, so maybe this problem will go away for stock Java too in the future.
http://www.oracle.com/technetwork/middleware/jrockit/overview/index.html
And it's free, under the same license as HotSpot:
https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
You should enable PermGen garbage collection. By default Hotspot VM does NOT collect PermGen garbage, which means all loaded class files remain in memory forever. Every new deployment loads a new set of class files which means you eventually run out of PermGen space.
Which version of Tomcat are you using? Tomcat 7 and 6.0.30 have many features to avoid these leaks, or at least warn you about their cause.
This presentation by Mark Thomas of SpringSource (and longtime Tomcat committer) on this subject is very interesting.
Just for reference, there is a new version of the Plumbr tool that can monitor and detect Permanent Generation leaks as well.