I have logged into my machine over Remote Desktop and am trying to start the Tomcat server, but I get the following error.
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
Apparently there are some memory restrictions when logged in via Remote Desktop. Is there any way I can start Tomcat from RDP?
Thanks
Remote Desktop could be causing the problem if you are using a 32-bit system. Java requires a single contiguous block of memory for its heap. If you start Java with close to the maximum amount of memory, whether you get that memory as a single block depends on what is already running on that server.
Solutions include:
Start Java as a service on start-up.
Use a 64-bit version of the OS and Java.
Use less memory in the JVM; even 100 MB less could make a difference.
Increase the amount of main memory in the machine.
When starting up the Tomcat server, you can change the JVM parameters. You can set the -Xmx###m VM argument to a smaller number that may work on your machine. Also, check whether you are running other memory-intensive Java apps.
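For example, here is a minimal sketch, assuming a standard Tomcat installation on Windows where %CATALINA_HOME%\bin\setenv.bat is picked up by catalina.bat (the values are illustrative; if your Tomcat version does not read setenv.bat, put the same options in the JAVA_OPTS environment variable instead). Asking for a smaller, fixed-size heap makes it more likely the JVM can find a contiguous block:

set CATALINA_OPTS=-Xms256m -Xmx256m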
Related
I want to run a Java program on Cygwin. The code takes two very large files as input. When I attempted to run the program on Cygwin, I got the following message:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at code_name.main(code_name.java:52)
I tried to increase the heap size (java -Xms1024m -Xmx2048m javafile inputfile1 inputfile2), but it still didn't work.
Would running the code on a remote server with more memory solve the problem? And if so, how do I define a server directory in Cygwin?
Note that I'm using Windows, and my machine is 64-bit.
First of all, it is not Java's fault that the application is using too much memory. And it isn't Cygwin's fault either.
Most likely, it is one of the following:
The application is using memory inefficiently; e.g. loading the files into memory in their entirety.
The application has a memory leak.
The problem is inherently too big to solve with a 2 GB heap.
Would running the code on a remote server with more memory solve the problem?
Possibly yes, possibly no. It depends on the reason that the application ran out of memory, and on the problem size. It also depends on how much bigger the server is.
Hint: you should work out why the application is running out of memory with a 2 GB heap. There will be some clues in the application's source code and in the stack trace.
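For example, a sketch of how to get those clues, using the class name from your stack trace (code_name) and an illustrative dump file name: ask the JVM to write a heap dump when the OutOfMemoryError is thrown, then open it in a heap analyser such as Eclipse MAT or jhat.

java -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./dump.hprof code_name inputfile1 inputfile2

The resulting dump.hprof shows which objects are filling the heap and what is keeping them reachable, which usually points straight back to the offending code.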
And if so, how do I define a server directory in Cygwin?
I don't know what you mean by that.
I would suggest that you just log in to the remote server, install Cygwin, Java and the other tools that you need, and run the application from the remote server's command prompt.
My problem is that I have WebLogic 12c, and when I try to run an analysis of some data on my site, the data is so big that it gives me a PermGen space error. My question is what I can do to fix it, because it only happens when I analyse big data sets that take more than 8060576 of total memory, which is all the RAM I have on the server. If you can point me to a tutorial or video I would be very grateful, because I have searched for over a day on this problem and I am still without an answer.
Thank you very much,
Vlad.
A bit of insight into how things work when you run a Java-based application. Your application is compiled into a set of classes, which are run by the JVM, which stands for Java Virtual Machine.
Java Virtual Machine memory is composed of regions: the stack region, where local variables and method call frames are stored, and the heap space region, where everything else is stored. The Java heap space is in turn structured into different regions called generations, and where an object is stored depends on how long the object has lived.
Now you get an idea of where the PermGen space (Permanent Generation space) comes from.
In your case, you need to increase the size of the PermGen space in order to allow your application to process such a large amount of data. There are plenty of answers on Stack Overflow that explain how to do that, listed below (a minimal example of the relevant JVM flags follows the links):
Dealing with "java.lang.OutOfMemoryError: PermGen space" error
Weblogic increase memory
facing perm gen space error in weblogic
Oracle help centre - tuning the Java Virtual Machine
WebLogic PermGen Space Settings
More on PermGen and what causes it here
There are many more external links that can help you set up the PermGen space memory.
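As a minimal sketch, assuming a default WebLogic domain layout and a HotSpot JVM (the sizes are illustrative and must fit in your server's RAM), you can raise the heap and PermGen sizes through the USER_MEM_ARGS environment variable, which setDomainEnv.sh honours:

# set before running <DOMAIN_HOME>/bin/startWebLogic.sh
# -Xms/-Xmx control the heap, -XX:MaxPermSize controls the PermGen space
export USER_MEM_ARGS="-Xms1024m -Xmx2048m -XX:PermSize=256m -XX:MaxPermSize=512m"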
Before I conclude, I would like to mention another great tool that you can use to find out how your WebLogic instance behaves. Java VisualVM allows you to monitor how your application uses the Java heap space, processor, network, and other resources. Once installed, Java VisualVM detects running Java-based applications on the local machine (you can also set it up to monitor remote servers using RMI) and shows you details of the VM: the amount of memory allocated for the Java heap space, the amount the application actually uses, the maximum memory of the machine, and so on.
I had issues with an image management app when analysing image files 6 GB in size. The problem I found was basically that the image management app was not garbage collecting properly, and Java VisualVM helped me spot the issue.
So I recommend: download and install Java VisualVM, run your application, and then see how your memory and other resources are used. If the Java heap space is indeed running out of memory, then you can increase its size using the links above, re-run the application, and monitor its performance with Java VisualVM.
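If you also want to monitor the remote WebLogic JVM from Java VisualVM on your workstation, a sketch of the standard JMX-over-RMI options to add to the server's JVM arguments (the port is illustrative, and for anything beyond a quick test you should enable authentication and SSL):

-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

Then add a JMX connection to <host>:9010 in VisualVM.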
I am trying to run some different Eclipse RCP implementations simultaneously, and I receive the following error message: "Java was started but returned exit code=1".
I understand that this happens when an -Xmx or -Xms parameter greater than somewhere between 1.2 and 1.8 GB is set (the exact limit depends on the machine). But my problem is not just how big this parameter can be on a single machine: I don't always receive this error message when I try to execute more than one virtual machine at the same time, and I don't know which conditions the Java virtual machine evaluates before raising this error.
Do you know how I could see which conditions are evaluated by the Java VMs before this error message is raised? That way I could establish the right -Xmx and -Xms parameters.
Thank you for your time.
The JVM needs a contiguous memory space to allocate its object heap.
Trying to have more than one JVM running at a time makes it considerably harder to find such a memory block, even when several GB of memory are free.
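In practice that means giving each RCP instance a heap small enough that all of them can still find a contiguous block. A sketch, assuming each RCP product has its own launcher .ini file (e.g. eclipse.ini); the values are illustrative:

-vmargs
-Xms128m
-Xmx512m

Everything after -vmargs in the .ini file is passed to the JVM, so lowering -Xmx here is equivalent to lowering it on the command line.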
I found the answer here:
Tools to view/solve Windows XP memory fragmentation
I'm currently facing a very strange problem. I have written a simple servlet which runs inside a self-hosted Jetty container. This servlet is a logging endpoint for JS scripts, so it just runs very simple code to log to Graylog and to some files (managed by a Log4j file appender).
The admin complained to me that the servlet hogs up to 10.5 GB of virtual memory, which caused the whole machine to slow down. This had an impact on the performance of some other monitoring services.
Restarting the servlet fixed the problem temporarily, but the question is: how can I find and fix the spots in the code causing such memory hogging?
Edit:
I start the application with the -Xmx50m switch.
Edit:
The following things have been investigated: I started Eclipse Memory Analyzer and JConsole to have a look into the application while some Ruby scripts sent requests (40 to 70 requests per minute; that's more than the servlet gets in production at the moment).
With this setup I observed:
Heap size: 4 MB
Running threads average: 19 (peak at 23)
Virtual memory: 5 GB
Restarting the servlet sped up the server. The only suspicious figure for the servlet was the 10.5 GB of virtual memory.
Virtual memory doesn't use many resources; only resident memory matters. You can create a process which uses 8 TB of virtual memory and it will still have little impact on resources.
On Linux the "simplest" way to check virtual memory is to read /proc/{pid}/maps, though even this is pretty cryptic.
I would check the resident memory, as that is what really matters; I suspect it is close to your 10.5 GB if they are complaining (assuming they know what they are talking about, which I wouldn't assume).
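A quick way to compare the two on Linux (a sketch; the jetty pattern is just an illustrative way of finding the process id):

pid=$(pgrep -f jetty)
grep -E 'VmSize|VmRSS' /proc/$pid/status   # virtual vs resident size, in kB
ps -o pid,vsz,rss,cmd -p $pid              # the same numbers via ps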
how can I find and fix spots in the code causing such memory hogging
Start by searching this site. There are literally thousands of results.
For your specific case, I'd look for the following (a quick way to check each from the command line is sketched after this list):
An unreasonably large heap specification, using the -Xmx command-line argument when starting Java. For a simple servlet, you should need maybe 100-200 MB.
An excessive number of threads. Each thread requires space for its internal stack (on the order of 1-2 MB by default).
Large memory-mapped files. The way you describe your servlet, you shouldn't be using any of these.
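A quick way to check each of these against the running JVM (a sketch; <pid> is the Java process id and the commands assume the JDK tools are on your PATH):

jinfo -flag MaxHeapSize <pid>                     # effective -Xmx
jstack <pid> | grep -c 'java.lang.Thread.State'   # rough count of live threads
pmap -x <pid> | sort -k2 -n | tail                # largest mappings, including mapped files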
Tomcat 5.5.x and 6.0.x
Grails 1.6.x
Java 1.6.x
OS CentOS 5.x (64bit)
VPS server with 384 MB of memory
JAVA_OPTS: tried many combinations, including the following
export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'
export JAVA_OPTS='-server -Xms128M -Xmx128M -XX:MaxPermSize=256M'
(As advised by http://www.grails.org/Deployment)
I have created a blank Grails application, i.e. simply by running grails create-app, and then packaged it as a WAR.
I am running Tomcat on a VPS server.
When I simply start the Tomcat server, with no apps deployed, the free memory is about 236 MB and the used memory is about 156 MB.
When I deploy my "blank" application, the memory consumption spikes to 360 MB, and eventually the Tomcat instance is killed as soon as it takes up all the free memory.
As you have seen, my app is as light as it can be.
Not sure why the memory consumption is as high as it is.
I am actually troubleshooting a real application, but have narrowed it down to this scenario, which is easier to share and explain.
UPDATE
I tested the same "blank" application on my local Tomcat 5.5.x on Windows and it worked fine.
The memory consumption of the Java process shot from 32 MB to 107 MB, but it did not crash and it remained within acceptable limits.
So the hunt for an answer continues... I wonder if something is wrong with my Linux box. Not sure what, though...
UPDATE 2
Also see this http://www.grails.org/Grails+Test+On+Virtual+Server
It confirms my belief that my simple-blank app should work on my configuration.
It is a false economy to try to run a long-running Java-based application in the smallest possible amount of memory. The garbage collector, and hence the application, will run much more efficiently if it has plenty of regular heap memory. Give an application too little heap and it will spend too much time garbage collecting.
(This may seem a bit counter-intuitive, but trust me: the effect is predictable in theory and observable in practice.)
EDIT
In practical terms, I'd suggest the following approach:
Start by running Tomcat + Grails with as much memory as you can possibly give it so that you have something that runs. (Set the permgen size to the default ... unless you have clear evidence that Tomcat + Grails are exhausting permgen.)
Run the app for a bit to get it to a steady state and figure out what its average working set is. You should be able to figure that out with a memory profiler, or by examining the GC logging (see the sketch after these steps).
Then set the Java heap size to (say) twice the measured working set size, or more. (This is the point I was trying to make above.)
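For the measurement step, a sketch of the GC-logging options for a Java 1.6-era HotSpot JVM (the log path is illustrative); the heap occupancy remaining after full GCs in this log is a reasonable estimate of the working set:

export JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/tomcat-gc.log"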
Actually, there is another possible cause for your problems. Even though you are telling Java to use a heap of a given size, it may be unable to do so. When the JVM requests memory from the OS, there are a couple of situations in which the OS will refuse.
If the machine (real or virtual) that you are running the OS on does not have any more unallocated "real" memory, and the OS's swap space is fully allocated, it will have to refuse requests for more memory.
It is also possible (though unlikely) that per-process memory limits are in force. That would cause the OS to refuse requests beyond that limit.
Finally, note that Java uses more virtual memory than can be accounted for by simply adding the stack, heap and permgen numbers together. There is memory used by the executable and DLLs, memory used for I/O buffers, and possibly other things.
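On a CentOS VPS you can check both possibilities from the shell (a sketch):

free -m      # physical memory and swap actually available to the VPS
ulimit -a    # per-process limits; look at "max memory size" and "virtual memory"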
384 MB is pretty small. I'm running a small Grails app in a 512 MB VPS at enjoyvps.net (not affiliated in any way, just a happy customer) and it's been running for months at just under 200 MB. I'm running a 32-bit Linux and JDK, though; there's no sense wasting all that memory on 64-bit pointers if you don't have access to much memory anyway.
Can you try deploying a Tomcat monitoring webapp, e.g. psiprobe, and see where the memory is being used?
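Alternatively, for a quick look without deploying anything, the JDK's jmap tool can show how the running Tomcat JVM's heap is configured and used (a sketch; <pid> is the Tomcat process id):

jmap -heap <pid>            # heap configuration and current usage per generation
jmap -histo <pid> | head    # which classes account for most of the heap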