Grails app is terribly slow - java

The performance of my Grails app is terribly slow: it needs at least 5-7 seconds to load a page, and sometimes it throws an OutOfMemoryError followed by server error 500 on every page.
This seriously affects my work; I can't test and develop the project in an acceptable time, so I have to deal with this problem first.
I have tried to:
Configure the settings in idea64.exe.vmoptions and idea.exe.vmoptions to match the settings in the development handbook.
Configure the Java settings in the Java Control Panel, adding the runtime parameter -Xms-4096m.
Configure GRAILS_OPTS in %GRAILS_HOME%\bin\startGrails.bat.
However, the situation has not improved.
I am developing on Win7 64-bit with 8 GB RAM and IntelliJ IDEA 13.0.2.
Please help. Thank you very much!!

This is likely to be an issue with your database access.
Out-of-memory errors are probably caused by bringing back too much data (possibly filtering in the JVM instead of in the database query).
Slowness is likewise possibly caused by bringing back too much data, or by n+1 selects.
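To illustrate the n+1 pattern, here is a self-contained sketch (no real database; the "queries" are simulated by a counter) showing how one query for N parent rows silently turns into N further queries for their children:

```java
// Sketch: the n+1 selects anti-pattern. Fetching N parents with one query,
// then lazily loading each parent's children, issues 1 + N queries in total.
public class NPlusOneDemo {
    static int queries = 0;

    static int[] loadParents() { queries++; return new int[]{1, 2, 3}; } // 1 query
    static void loadChildren(int parentId) { queries++; }                // 1 query per parent

    public static void main(String[] args) {
        for (int id : loadParents()) {
            loadChildren(id); // lazy load per parent
        }
        System.out.println(queries); // 1 parent query + 3 child queries = 4
    }
}
```

With real ORM mappings the usual fix is an eager fetch (e.g. a join fetch) so the children arrive in the same query as the parents.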

Thanks to @Houcem Berrayana's answer:
Try to increase memory by using the GRAILS_OPTS env variable
I just added the environment variable GRAILS_OPTS with the parameter -Xms4096m, and now the Grails app runs fast!
Thank you again!

Related

Performance issues with applications deployed on Tomcat

Recently we migrated all our company's applications from WebSphere to the Tomcat application server. As part of this process we had performance testing done.
We found that a couple of applications show over 100% performance degradation on Tomcat. We increased the number of threads, configured the datasource settings to accommodate our test, and also increased the read and write buffer sizes in the Tomcat server.
Application Background:
-> Spring Framework
-> Hibernate
-> Oracle 12c
-> JSPs
-> OpenJDK 8
We already checked the database and found no issues with performance in DB.
The CPU utilization while running the test is always less than 10%.
Heap settings are -Xms 1.5G and -Xmx 2G, and the heap never uses more than 1.2G.
We also have two nodes and HAProxy on top to balance the load. (We don't have a web server in place).
Despite our best efforts we couldn't pinpoint the issue that is causing the performance degradation. I am aware that this information isn't enough to provide a solution for our problem, however, any suggestion on how to proceed will be very helpful as we hit a dead-end and are unable to proceed.
Appreciate it if you can share any points that will be helpful in finding the issue.
Thanks.
Take thread dumps, analyze which part of the application is having issues, and start troubleshooting from there.
Follow this article for a detailed explanation of thread dump analysis - https://dzone.com/articles/how-analyze-java-thread-dumps
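Besides `jstack <pid>`, a thread dump can be captured programmatically with the standard ThreadMXBean API, which can be handy when the JDK tools aren't on the server; a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch: dump all live threads (names, states, stack frames) to stdout,
// similar in spirit to what `jstack` produces.
public class ThreadDumpDemo {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.print(info.toString()); // one block per thread
        }
    }
}
```

Taking several dumps a few seconds apart and comparing where threads are stuck is the usual way to spot the slow spot.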
There are plenty of possible reasons for the problem you've mentioned, and there really isn't much data to work with. Regardless, as kann commented, a good way to start would be gathering thread dumps of the Java process.
I'd also ask whether you're running on the same servers or on newly set-up servers, and how they look resource-wise. Are there any CPU/memory/IO constraints during the test?
Regarding the Xmx, it sounds like you're not passing the -XX:+AlwaysPreTouch flag to the JVM. I would advise you to look into it, as it makes the JVM zero the heap memory at start-up instead of at runtime (which can otherwise mean a performance hit).

How to Profile Execution of RDF4J Server?

As I indicated in another post, I'm having trouble with some SPIN constructors taking an excessive amount of time to execute on quite limited data. I thought I'd take a different approach and see if I can profile the execution of the constructors to gain insight into where specifically they are spending so much time.
How do I go about profiling the execution of constructors under RDF4J Server? I'm instantiating via SPARQL update (INSERT DATA) queries. Here's the System Information on RDF4J Workbench:
I've attempted to profile the Tomcat server under which RDF4J Server runs using jvisualvm.exe, but I have not gained much insight. Ideally, I'd like to get down to the class/method level within RDF4J so that I can post a more detailed request for help on my slow-execution problem, or perhaps fix my queries to be more efficient.
So here's the version of Java Visual VM:
RDF4J is running under Apache Tomcat 8.5.5:
I can see overview information on Tomcat:
I can also see the monitor tab and threads:
HOWEVER, what I really want to see is the profiler, so that I can see where my slow queries are spending so much time. It hangs on calibration, since I don't have the profiler calibrated for Java 1.8.
The "attempting to connect" box persists indefinitely. Canceling it leads to a "Performing Calibration" message which doesn't actually do anything; it is a dead-end hang requiring Java VisualVM to be killed.
After killing the Java Visual VM and restarting and looking at Options-->Profiling-->Calibration Data, I see that only Java 7 has calibration data.
I have tried switching Tomcat over to running on Java 7, and that did work:
The profiler did come up with Tomcat:
However, when I tried to access the RDF4J workbench while Tomcat ran on Java 7, I could not get the workbench running:
So, I'm still stuck. It would appear that RDF4J requires Tomcat running under Java 1.8, not 1.7. I can't profile under Java 1.8.
I have seen other posts on this problem with Java VisualVM, but the one applicable solution seems to be to bring everything up in a development environment (e.g. Eclipse) and dynamically invoke the profiler at a debugger breakpoint once the target code is running under Java 1.8. I'm not set up to do that with Tomcat and RDF4J and would need pointers. My intention was not to become a Tomcat or RDF4J contributor (my tasking doesn't allow that; I wouldn't be paid for the time), but rather to get a specific handle on what's taking so long for my SPIN constructor(s) in terms of RDF4J Server classes, and then ask for help from the RDF4J developer community on GitHub.
Can Java VisualVM calibration be bypassed? Could I load a calibration file or directory somewhere for Java VisualVM to use instead of trying to measure calibration data which fails? I'm only interested in the relative CPU loading of classes, not absolute metrics, and I don't need to compare to measurements on other machines.
Thanks.

Eclipse and/or node.exe using 100% CPU

OK, this is driving me insane! It used to be infrequent, but now it's practically every character I type that causes Eclipse to go "Not Responding" while the CPU rockets toward 100% and stays there for a minute. Sometimes this is accompanied by node.exe taking half the CPU and a LOT of memory. I kill node.exe and sometimes it stays off, but mostly it comes back.
I've looked up node.exe and can't figure out what it has to do with my application. I'm writing a webapp using Tomcat, Struts, Java, JSP, and jQuery. I disabled every plugin under Preferences -> Startup and Shutdown with no effect.
Help! I can't develop when every keypress takes a minute!
Take a look at https://bugs.eclipse.org/bugs/show_bug.cgi?id=442049 or https://github.com/Nodeclipse/nodeclipse-1/issues/159
I got past it by removing <nature>org.eclipse.wst.jsdt.core.jsNature</nature> from the project.
But you may have another JSDT-related issue.
And you must know exactly which process is consuming CPU, and at what rate.
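For context, that nature entry lives in the project's `.project` file; a minimal sketch of the edit (the project name and the remaining nature are illustrative):

```xml
<!-- .project file with the JSDT nature removed; other entries are examples -->
<projectDescription>
  <name>mywebapp</name>
  <natures>
    <nature>org.eclipse.jdt.core.javanature</nature>
    <!-- removed: <nature>org.eclipse.wst.jsdt.core.jsNature</nature> -->
  </natures>
</projectDescription>
```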
I would really suggest not using Eclipse for Node. Try Notepad++ on Windows, or Sublime, Atom, or something similar...

OutOfMemoryError in LogService of Google AppEngine

I'm using Google App Engine v.1.9.1, Java edition, running inside Eclipse Kepler SR2. I've got JDK 1.7. My logging.properties is sending the logs to the java.util.logging.ConsoleHandler only.
My [edit] development server running in Eclipse [/edit] receives a lot of data from another server and just dumps it into a database. This generates a lot of logs. I get OutOfMemoryError after only a few hours.
I've run JProfiler and I figured out the object being kept around is com.google.apphosting.api.logservice.LogServicePb$LogLine. Somehow this isn't being discarded, ever, keeping millions of instances in memory.
Sure I can reduce the amount of data logged but that will only delay the problem.
I've looked everywhere to figure out how to flush out the log lines, but I can't find any setting for this. The only option available is for Python, not Java.
Any idea what's causing this and how to fix it?
As @Martin Berends said, the development server inside Eclipse is only for development. It seems that log statements are kept in memory in that environment. Once I moved my app to a test server, memory usage stayed flat.
So the bottom line is: when running in a development environment, reduce the amount of logging and restart the server once in a while to avoid OutOfMemoryErrors. Secondly, do your tests on a real test server.
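One way to cut the log volume in the dev server is to raise the console threshold in `logging.properties` (the question already routes everything to `java.util.logging.ConsoleHandler`; the `WARNING` level here is illustrative):

```properties
# Keep only WARNING and above on the console to reduce log pressure in dev
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = WARNING
.level = WARNING
```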

JVM/Java forces applications to run slower on first start, Windows 8?

I have tried three IDEs, all of which I'm fairly sure require Java to run, and all of them start up very, very slowly (30 seconds to 1 minute) on the first launch of the day. After that, they all start up lightning fast.
The three programs are: Aptana Studio 3, Eclipse, and PHP Webstorm.
Based on my web searches, I have modified AptanaStudio3.ini using some of the suggestions on how to speed it up, and they all work... for every start-up after the first launch, that is. The first launch of the day remains painfully and inexplicably slow.
I have searched SO and did not see any questions addressing this issue; if there is an answer here, I could not find it.
My only conclusion is that this issue is related to how Java runs on Windows 8, since all three programs are adversely affected. Is this a known bug in Java on Windows 8? I have no idea what to think, but I would greatly appreciate any help.
OBSERVATION: from my testing, it seems that if I start up my laptop and launch Eclipse or Aptana within, say, the first 10 minutes of booting, it launches more quickly (still slow, but not as bad) than if I wait about an hour and then launch the IDE. Not sure what this indicates.
Thanks
Though you can tune the Eclipse (or Aptana) .ini file and do things like disabling class verification and booting via the JVM DLL, this has more to do with OS and hardware disk caching than with the JVM. Boot each of the IDEs from a RAM disk and you'll see that they boot just as quickly from RAM the first time as they do from 'disk' the second time.
Source: I've spent a lot of time trying to solve this problem already. :)
It might be worth checking your antivirus scanner's behaviour - I have precisely this problem.
In spite of an SSD and a reasonably quick i5 on Win8 Ultimate, the first boot time for Eclipse is measured in many minutes (it can be over 10), with subsequent restarts taking tens of seconds. The whole PC can do a full restart in about half a minute, so it's unlikely to be a raw I/O issue.
From looking at the CPU hogs and digging from there, it appears that the A/V (McAfee) is doing an on-access scan of all the Eclipse components and plugins after every boot, and I suspect this is where much of the time is being taken.
I'll post an update when I've persuaded someone to exclude Eclipse and the JVM from the on-access scan...
Since Aptana Studio is based upon Eclipse, there is no big difference to be expected.
This is not a known bug for Java on Windows 8, since I experienced it at least as far back as Windows 7. AFAIK it has to do with starting the JVM for the first time.
Of course you could throw a lot of memory at it or tweak the IDE's .ini file; the JVM startup process wouldn't really be affected and it would still be slow. What is negligible for a server is a problem on the desktop. For details take a look at http://en.wikipedia.org/wiki/Java_performance#Startup%5Ftime
