Hi
I am debugging a Java application that fails when certain operations are invoked after the JVM's memory has been swapped to disk. Since I have to wait about an hour for Windows to swap, I was wondering if there is a way of forcing Windows to swap.
You can create another application that allocates and accesses a large amount of memory. Assuming that you don't have enough memory for both to run, Windows will be forced to swap the inactive app to make room for the active app.
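For example, a throwaway memory hog along these lines should do it; the 64 MB block size, the page-touching stride, and running it with an oversized -Xmx (say java -Xmx12g MemoryHog on an 8 GB machine) are illustrative choices, not a tested recipe:

import java.util.ArrayList;
import java.util.List;

public class MemoryHog {
    public static void main(String[] args) throws InterruptedException {
        List<byte[]> blocks = new ArrayList<>();
        try {
            while (true) {
                byte[] block = new byte[64 * 1024 * 1024]; // grab 64 MB at a time
                for (int i = 0; i < block.length; i += 4096) {
                    block[i] = 1; // touch every page so it is really committed
                }
                blocks.add(block);
            }
        } catch (OutOfMemoryError full) {
            // heap is full; keep everything referenced so the memory pressure stays
            System.out.println("Holding " + blocks.size() * 64 + " MB, Ctrl-C to release");
            Thread.sleep(Long.MAX_VALUE);
        }
    }
}

Once it has filled its heap it just sits there holding the memory; kill it after the target application's pages have been pushed out to the page file.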
But before you do that, you might find help if you describe the exact problem that you're having with your app (including stack traces and sample code). The likelihood of swapping causing any problems other than delays is infinitesimally low.
My Java app is experiencing a memory leak which I am trying to tackle. However, depending on which mode I select during the profiling run, I get opposite results. Consequently I am not sure the memory leak I targeted has been solved, especially since an "OutOfMemoryError: Java heap space" recently appeared in production.
Visual Studio Code has a guide explaining the difference between the two modes (Sampling vs. Instrumentation, also called a Tracing Profiler), and there is also this Ninjas' guide about the same topic for VisualVM. I understand that with the instrumented mode the overhead is much higher. But this mode is required (at least in the NetBeans profiler) to get the list of objects that have the highest number of "surviving generations", which is the indicator to watch in order to judge the efficiency of the counter-measures against a memory leak.
Thanks to the OOM stack trace I have a clue which method caused the memory leak and so I was able to isolate it in a unit test.
My unit test starts with @RunWith(JfxTestRunner.class) to avoid "toolkit not initialized" errors when something is sent to the JavaFX Application Thread (i.e. when something is shown to the user in the GUI app).
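For reference, the test has roughly this shape (SolrIndexReader and IndexEntry are made-up placeholders for my actual Solr-reading code, and the URL is just an example):

import org.junit.Test;
import org.junit.runner.RunWith;
import java.util.ArrayList;
import java.util.List;

@RunWith(JfxTestRunner.class)
public class IndexReadingTest {

    @Test
    public void readIndexIntoList() throws Exception {
        List<IndexEntry> entries = new ArrayList<>();
        // SolrIndexReader / IndexEntry are hypothetical stand-ins for the real code
        try (SolrIndexReader reader = new SolrIndexReader("http://localhost:8983/solr/myCore")) {
            while (reader.hasNext()) {
                entries.add(reader.next()); // populate the List from the index content
            }
        }
        // once the loop ends, entries is the only strong reference to the loaded data;
        // if the leak were still there, older objects would keep surviving GC cycles
    }
}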
Now if I launch this unit test (a loop that reads the content of a Solr index and populates a List based on this content) and profile it in sampling mode, I get the following graph:
This makes me think the problem is solved since the heap used increases during the index reading and decreases afterwards.
But now if I run the very same test in instrumented mode with all classes (see "*" below), I get a very different memory shape:
This time the used heap keeps increasing over time as if there were still a memory leak.
I first used the instrumented mode in order to get the objects that kept surviving garbage collections, and then, without paying attention, I switched to the sampling mode and thought the problem had been solved. Now I am quite unsure.
Consequently I wonder which mode (sampled / instrumented) I should base my assessment on, and why such differences appear (the memory footprint is higher in instrumented mode, and surviving generations are roughly eightfold greater in this mode).
Any help appreciated,
Kind Regards
I play modded Minecraft a fair bit. One downside to that is it takes a lot of time for all the mods to compile whenever I launch Minecraft. It can take around 15 minutes or so, which is too much time in my opinion. When a computer is running applications, everything it does is based on inputs and data in RAM. I'm fairly certain that if one were to copy the RAM of their computer at a point in time and put that data back into RAM at another time, the computer would return to its former state. Though things might break down if the data in RAM doesn't actually agree with the data on the hard drive (like if Windows Explorer were open in the loaded RAM and showed files and folders which may not really be there on the hard drive).
I think it might be possible to copy the RAM data of an application (in my case a few GB of RAM after everything compiles and loads). I also think that if it were inserted back into RAM at a later time, the application would appear already loaded, without waiting for code to compile. How would I go about doing this? I think it's similar to save-state loading in emulators.
I'm fairly certain that if one were to copy the RAM of their computer at a point in time and put that data back into RAM at another time, the computer would return to its former state.
This is very perceptive of you; and this is precisely what happens when a computer "hibernates" [1]. You are also correct that unless the total state of RAM is saved and restored, or if the computer is allowed to operate in between the store and the restore, odd things are very likely to happen.
It is conceivable to store / restore the RAM state for a single application, but this would be a complex operation, and even with 25+ years of a career in IT, I have not heard of an application that can do this.
... Except for Phil Brubaker's comment, which mentions virtual machines. If you run Minecraft inside a virtual machine running on your physical machine, you can do just as Phil mentions: store and restore the running state of the VM at any point -- say, at the end of a Minecraft session. 'Snapshots' (again, as Phil mentions) are how this is done.
(VM applications might offer a 'suspend' feature, and while this might differ in some details from that VM application's 'snapshot' feature, the effect is the same, and it's just like hibernation for a physical machine: the running state (i.e., the contents of RAM and some details of what the CPU is doing at that exact moment) is saved to disk and can be restored later to bring the VM back to exactly where it was at the point it was snapshotted / suspended.)
So I'd recommend a web search for "virtual machine applications for [fill in your operating system here]". VMWare and VirtualBox will be top hits; there will be others, depending on your operating system and such.
[1] Note that "sleeping" is different: in sleep, only some components are shut down, such as the hard drive, which is normally always spinning, whether or not it's actually reading/writing data. So sleep is a partial shutdown to save energy. Hibernation is a longer-term, very low power mode.
I have a program and a strong suspicion that memory swapping is occurring. The reason I believe that is the case is that the program just hangs from time to time. When I started logging, things became even more confusing, as the program started hanging at different places, with different methods being executed.
At this time I'm using 32-bit IBM Java with 2 GB dedicated to the program, so I'm right on the edge with the memory. A change to x64 is possible, but before that:
Question 1: Can I programmatically detect memory swapping at runtime? Or how could I at least give myself some hints (via logging) that swapping is occurring?
And as of now, I don't have memory usage logs available to me; however, if Xmx is 2 GB, that's just RAM, and if memory swapping occurs, would it even appear that I don't have enough memory?
Question 2: As I think about it now, can I log the start of garbage collection? Can I detect it at runtime?
EDIT: said program exports very large amounts of data from database.
EDIT2: can I programmatically forbid memory swapping for a given JVM?
Can I programatically detect memory swapping at runtime?
You can monitor in the OS how much swap is being used or how much is being written to the swap partitions. How you do this depends on your OS.
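From inside the JVM you can also get a rough, machine-wide view through the com.sun.management extension of OperatingSystemMXBean. This is only a sketch: the extension exists on Oracle/OpenJDK VMs but may not be available on a 32-bit IBM JVM, and it reports swap usage for the whole machine, not whether this particular process's pages were paged out.

import java.lang.management.ManagementFactory;

public class SwapCheck {
    public static void logSwap() {
        Object bean = ManagementFactory.getOperatingSystemMXBean();
        if (bean instanceof com.sun.management.OperatingSystemMXBean) {
            com.sun.management.OperatingSystemMXBean os = (com.sun.management.OperatingSystemMXBean) bean;
            long totalMb = os.getTotalSwapSpaceSize() / (1024 * 1024);
            long freeMb = os.getFreeSwapSpaceSize() / (1024 * 1024);
            System.out.println("swap used: " + (totalMb - freeMb) + " MB of " + totalMb + " MB");
        } else {
            System.out.println("com.sun.management extension not available on this JVM");
        }
    }
}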
if memory swap occurs, would it even appear that i dont have enough memory?
If you had enough memory, no swapping would occur. Note: even if no swapping occurs, that doesn't mean more memory wouldn't help. Most likely you need more than just your application's memory; e.g. disk caching is also important.
can I log the start of garbage collection?
You can in another process; however, you can't run anything in the JVM itself while a stop-the-world action is occurring.
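Two simple options (neither specific to your setup): turn on the JVM's own GC logging with -verbose:gc, or poll the garbage collector beans from a background thread, which at least tells you after the fact that collections happened and how long they took. A minimal polling sketch; the 1-second interval is an arbitrary choice:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcWatcher implements Runnable {
    @Override
    public void run() {
        long lastCount = -1;
        while (!Thread.currentThread().isInterrupted()) {
            long count = 0, timeMs = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                count += gc.getCollectionCount();
                timeMs += gc.getCollectionTime();
            }
            if (count != lastCount) {
                System.out.println("GC count=" + count + ", total GC time=" + timeMs + " ms");
                lastCount = count;
            }
            try {
                Thread.sleep(1000); // polling interval is an arbitrary choice
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

Start it with new Thread(new GcWatcher(), "gc-watcher").start() early in your program.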
Change to x64 is possible but before that:
Java 5.0 was the last version where I would have said maybe 32-bit is best. That was ten years ago.
program exports very large amounts of data from database.
1 GB is less than the cost of a cup of coffee these days. Even tens of GB is not much to worry about. Hundreds of GB is getting big, and if you have a few TB you have an interesting problem. Some of my clients have 3 TB machines and they have very large amounts of data, e.g. hundreds of TB.
I gave an old computer which hadn't been used for years to my oldest daughter, and she gave it to my 8-year-old daughter. It has 24 GB of memory and she uses it mainly to watch YouTube videos.
can I programmatically forbid memory swapping for a given JVM?
You can lock it into memory using JNI, but when a machine is swapping your heap space, your machine is on the edge of dying anyway.
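If you want to experiment with that, here is a minimal sketch using JNA (rather than hand-written JNI). It assumes a Linux host, where mlockall(2) asks the kernel to keep all of the process's pages resident; the MCL_* values are the Linux constants, and the call usually needs root or a raised RLIMIT_MEMLOCK. On Windows or AIX the equivalent call would be different.

import com.sun.jna.Library;
import com.sun.jna.Native;

public class LockMemory {
    public interface CLib extends Library {
        CLib INSTANCE = Native.load("c", CLib.class); // requires JNA 5+
        int mlockall(int flags);
    }

    private static final int MCL_CURRENT = 1; // Linux constant values
    private static final int MCL_FUTURE  = 2;

    public static void lockAllPages() {
        int rc = CLib.INSTANCE.mlockall(MCL_CURRENT | MCL_FUTURE);
        if (rc != 0) {
            System.err.println("mlockall failed (insufficient privileges or RLIMIT_MEMLOCK?)");
        }
    }
}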
I have created one java-swing application.
The application runs perfectly on my PC.
But it doesn't run perfectly on the client's PC.
I had increased my virtual memory earlier on my PC.
So my question is..
Does changing the memory limit affect or change application behaviour?
Is there anything else that can change the behaviour of a Java application? Because the same application runs perfectly on my PC and does not run perfectly on the client's PC.
And there is no problem in the code; I have checked it three times.
EDIT:
This is one screenshot of my application. If you look at the screenshot, there is a table which contains many images.
Now on my PC I don't see any duplicate images in that table.
But on the client's PC I am seeing duplicate images in that table, although if you click on one of the images, then after clicking, the image which was showing earlier changes.
Then I took the database and all the images from the client's PC and tried to run the same application on my PC with the same data that is on the client's machine.
But then there were no duplicate images in that table on my PC.
By "increase my virtual memory" I mean changing the JVM heap size by adding...
java -Xmx512m
Obviously, if your app runs into an OutOfMemoryError, then increasing the maximum heap size can fix it.
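A quick way to check whether the two machines are really running with the same heap limits is to log what the JVM itself reports; this is just the standard Runtime API, nothing specific to your application:

public class HeapInfo {
    public static void main(String[] args) {
        long mb = 1024 * 1024;
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:  " + rt.maxMemory() / mb + " MB (controlled by -Xmx)");
        System.out.println("committed: " + rt.totalMemory() / mb + " MB");
        System.out.println("free:      " + rt.freeMemory() / mb + " MB");
    }
}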
Faulty memory (at the hardware level) can cause Java programs to crash. This can be diagnosed by using a memtest tool.
Sometimes, Swing gets at odds with hardware graphics acceleration, especially with onboard graphics adapters that share main memory. This can be fixed either by reducing hardware acceleration in Windows, or by getting new drivers.
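If you want to test that theory, Java2D's hardware pipelines on Windows can be switched off with the standard sun.java2d system properties (yourapp.jar is just a placeholder for your launch command):

java -Dsun.java2d.d3d=false -Dsun.java2d.noddraw=true -jar yourapp.jar

If the duplicate images disappear with these flags, the graphics driver or acceleration path is the likely culprit.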
It's possible that more memory will help. To optimize anything for better performance/scaling/size, you first need to understand the problem.
Examine how much memory your application uses - if you're on Windows you can start by looking at it with Task Manager. If the amount of memory used is a significant portion of the amount of memory your PC has, then adding more memory may well speed things up. Other things can affect performance, such as hard disk speed and the number of CPUs. You could add more memory, only to find it's the hard disk speed that is the problem.
What kind of things does your program do? Does it work with files or large data sets?
You haven't said how it doesn't run perfectly on the client's pc. Is it slow, or does it crash?
Changes in the amount of memory and thus possibly in the amount of swapping necessary to provide memory to all applications can change the timing of your program.
If your program is susceptible to timing changes (i.e. if it is timing-dependent or only works in "best-case" timing), then those timing changes can influence the behaviour of your application.
Additionally, tighter memory constraints can lead to more frequent garbage collection, which also changes the timing and can have an additional influence if you use any kind of weak or soft references in your application.
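As a concrete illustration of that last point (a generic sketch, not code from your application): a value cached behind a SoftReference may survive indefinitely on a machine with plenty of free heap, but on a memory-constrained machine the more aggressive collections can clear it, so the "same" code takes a different path.

import java.lang.ref.SoftReference;

public class ImageCache {
    private SoftReference<byte[]> cached; // e.g. decoded image bytes

    byte[] get() {
        byte[] value = (cached == null) ? null : cached.get();
        if (value == null) {
            value = loadExpensively();           // ran rarely on the dev PC...
            cached = new SoftReference<>(value); // ...but possibly on every call on the client PC
        }
        return value;
    }

    private byte[] loadExpensively() {
        return new byte[1024 * 1024]; // stand-in for reloading/decoding the image
    }
}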
You should check whether the problem is with memory. Use a process manager or another similar program to check how your program uses memory from the OS point of view. You can use jconsole to check how it looks from the Java perspective.
Run your application with the JDK and add the following to its invocation:
-Dcom.sun.management.jmxremote.port=9876 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
Then run jconsole and watch your app.
I have a Java program for doing a set of scientific calculations across multiple processors by breaking it into pieces and running each piece in a different thread. The problem is trivially partitionable so there's no contention or communication between the threads. The only common data they access are some shared static caches that don't need to have their access synchronized, and some data files on the hard drive. The threads are also continuously writing to the disk, but to separate files.
My problem is that sometimes when I run the program I get very good speed, and sometimes when I run the exact same thing it runs very slowly. If I see it running slowly and ctrl-C and restart it, it will usually start running fast again. It seems to set itself into either slow mode or fast mode early on in the run and never switches between modes.
I have hooked it up to jconsole and it doesn't seem to be a memory problem. When I have caught it running slowly, I've tried connecting a profiler to it but the profiler won't connect. I've tried running with -Xprof but the dumps between a slow run and fast run don't seem to be much different. I have tried using different garbage collectors and different sizings of the various parts of the memory space, also.
My machine is a Mac Pro with a striped RAID partition. The CPU usage never drops off whether it's running slowly or quickly; a drop is what you would expect if threads were spending too much time blocking on reads from the disk, so I don't think it can be a disk-read problem.
My question is: what types of problems with my code could cause this? Or could this be an OS problem? I haven't been able to duplicate it on a Windows machine, but I don't have a Windows machine with a similar RAID setup.
You might have threads that have gone into an endless loop.
Try connecting with VisualVM and use the Thread monitor.
https://visualvm.dev.java.net
You may have to connect before the problem occurs.
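If you cannot get a profiler to attach while it is slow, you can also take a thread dump from outside with jstack <pid>, or have the application dump its own threads on demand; a small sketch using the standard ThreadMXBean:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class ThreadDumper {
    public static void dump() {
        ThreadInfo[] infos = ManagementFactory.getThreadMXBean().dumpAllThreads(true, true);
        for (ThreadInfo info : infos) {
            System.out.print(info); // prints state, lock info and a (truncated) stack trace
        }
    }
}

A thread stuck in an endless loop will show up as RUNNABLE with the same stack frames in every dump.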
I second that you should be looking at it with a profiler's threads view - how many threads there are, what states they are in, etc. It might be an odd race condition happening every now and then. It could also be the case that instrumenting the classes with profiler hooks (which causes a slowdown) sorts the race condition out, and you will see no slowdown with the profiler attached :/
Please have a look at this post, or rather the answer, where a cache-contention problem is mentioned.
Are you spawning the same number of threads each time? Is that number less than or equal to the number of hardware threads available on your platform? That number can be checked, or guesstimated with fair accuracy.
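A quick way to see how many hardware threads the JVM thinks it has and to size the worker pool accordingly (the one-worker-per-hardware-thread policy here is just an illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        int hw = Runtime.getRuntime().availableProcessors();
        System.out.println("Hardware threads visible to the JVM: " + hw);
        ExecutorService pool = Executors.newFixedThreadPool(hw); // one worker per hardware thread
        pool.shutdown();
    }
}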
Please post any findings!
Do you have a tool to measure CPU temperature? The OS might be throttling the CPU to deal with temperature issues.
Is it possible that your program is being paged to disk sometimes? In this case, you will need to look at the memory usage of the operating system as whole, rather than just your program. I know from experience there is a huge difference in runtime performance when memory is being continually paged to the disk and back.
I don't know much about OSX, but in linux the "free" command is useful for this purpose.
Another issue that might cause this slowdown is log files. I've seen logging code that slowed the system down incrementally as the log files grew. It's possible that your threads are synchronizing on a log file which is growing in size; then, when you restart your program, another log file is used.