I am new to Java web application development. I just wrote my first hello world application using the NetBeans 7.3 IDE. When the application is launched, it keeps loading for more than 30 minutes. I don't think this is usual. Is there a way around this problem? Your help would be appreciated.
Assuming that your app is a simple Hello World (without external dependencies), potential performance issues can be caused by:
Network connection. 30 s sounds like a timeout. Some third-party libraries used in web apps refer by default to Internet resources, such as XML schemas for validation.
JVM swapping due to insufficient free memory. This can hurt your JVM's performance.
JVM running out of permanent generation space. If your JVM is HotSpot, look for error messages in the log. In this case, just increase the -XX:MaxPermSize JVM argument (256m is a good starting point).
To help diagnosis:
- Capture a sequence of thread dumps (kill -3 PID on Linux/UNIX, Ctrl+Break on Windows), say 3 to 5 of them every 5 seconds, to see what is hanging the JVM (network access to schemas, file system access, ...). Tools like Samurai can help you detect threads stuck on the same line of code; see the sketch after this list for a programmatic alternative.
- Check your system free memory
- Check your logs (stdout, stderr) for memory errors
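If you can add a few lines to the application itself, you can also dump all thread stacks programmatically with the standard ThreadMXBean API. This is only a rough sketch (the class name ThreadDumper is made up); the output is similar to what kill -3 produces, which is usually enough to see where threads are stuck:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    // Print a stack trace for every live thread in this JVM (Java 6+).
    public static void dumpAllThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info); // ThreadInfo.toString() includes the state and a partial stack
        }
    }

    public static void main(String[] args) {
        dumpAllThreads();
    }
}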
I'm currently facing a very strange problem. I have written a simple servlet which runs within a self-hosted Jetty container. The servlet is a logging endpoint for JS scripts, so it just runs very simple code to log to Graylog and to some files (managed by a Log4j file appender).
The admin complained to me that the servlet hogs up to 10.5 GB of virtual memory, which caused the whole machine to slow down. This had an impact on the performance of some other monitoring services.
Restarting the servlet fixed the problem temporarily, but the question is: how can I find and fix the spots in the code that cause such memory hogging?
Edit:
I start the application with the -Xmx50m switch.
Edit:
The following things have been investigated: I started Eclipse Memory Analyzer and JConsole to look into the application while some Ruby scripts sent requests (40 to 70 requests per minute, which is more than the servlet gets in production at the moment).
With this setting:
Heap size: 4MB
Running threads average: 19 (peak at 23)
Virtual Memory: 5GB
Restarting the servlet sped up the server. The only suspicious figure for the servlet was the 10.5 GB of virtual memory.
Virtual memory doesn't use many resources; only resident memory matters. You can create a process which uses 8 TB of virtual memory and it will still have little impact on resources.
On Linux the "simplest" way to check virtual memory is to read /proc/{pid}/maps, though even this is pretty cryptic.
I would check the resident memory, as that is what really matters, but I suspect it is close to your 10.5 GB if they are complaining (assuming they know what they are talking about, which I wouldn't assume).
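If you want a quick sanity check from inside the JVM itself (on Linux), the VmSize and VmRSS lines in /proc/self/status give you the virtual and resident figures directly. A minimal sketch, assuming the process runs on Linux (the class name MemCheck is just for illustration):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class MemCheck {
    public static void main(String[] args) throws IOException {
        // VmSize = total virtual memory, VmRSS = resident (physical) memory
        BufferedReader in = new BufferedReader(new FileReader("/proc/self/status"));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.startsWith("VmSize") || line.startsWith("VmRSS")) {
                    System.out.println(line);
                }
            }
        } finally {
            in.close();
        }
    }
}

If VmRSS stays small while VmSize is huge, the admins are mostly looking at harmless address-space reservations.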
how can I find and fix spots in the code causing such memory hogging
Start by searching this site. There are literally thousands of results.
For your specific case, I'd look for the following:
An unreasonably large heap specification, set with the -Xmx command-line argument when starting Java. For a simple servlet, something like 100-200 MB should be plenty.
An excessive number of threads. Each thread requires space (2 MB by default) for its internal stack; the sketch after this list shows how to check the live thread count.
Large memory-mapped files. The way you describe your servlet, you shouldn't be using any of these.
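To check the first two points without attaching an external tool, the standard management beans already report live thread counts and heap/non-heap usage. A rough sketch (FootprintCheck is a made-up name; the same calls could be dropped into the servlet and logged periodically):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class FootprintCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        // Each live thread carries its own stack, so a runaway thread count
        // shows up as virtual memory well beyond the -Xmx setting.
        System.out.println("Live threads: " + threads.getThreadCount()
                + " (peak: " + threads.getPeakThreadCount() + ")");
        System.out.println("Heap: " + memory.getHeapMemoryUsage());
        System.out.println("Non-heap: " + memory.getNonHeapMemoryUsage());
    }
}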
We are currently using a modified version of Atlassian Confluence v.3.5
and have created a space containing a large number of pages (about 5000) and a large number of attachments (about 10000).
When navigating to the home page of this big space, it takes about 3 minutes to load completely (the Safari web browser shows a spinning wheel indicating page resources are still being loaded).
In these 3 minutes, we are unable to determine where the processing time is being spent.
We turned on Confluence's profiling feature, but it did not help because there was not much output in the log file.
The Confluence process (which is a Java process) uses about 8.2% CPU during these 3 minutes. How can I figure out what the process is doing?
You have all these options:
HeapDumpOnCtrlBreak
HeapDumpOnOutOfMemoryError
jmap
HotSpotDiagnosticMXBean
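The last option on that list can be driven from code: on a HotSpot JVM, com.sun.management.HotSpotDiagnosticMXBean lets you trigger a heap dump programmatically, e.g. from a small JMX client or any code running in the same JVM. A minimal sketch (the output file name is just an example; the bean is Sun/Oracle-specific, so it won't compile on other JVMs):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.IOException;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void dump(String file) throws IOException {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live (reachable) objects, which keeps the file smaller
        bean.dumpHeap(file, true);
    }

    public static void main(String[] args) throws IOException {
        dump("confluence-heap.hprof");
    }
}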
A Thread Dump may also be useful. You can use it to figure out what the threads are waiting on.
You can also use a profiler. The best one I've used is JProfiler, but there are others available that are free and open source. I think NetBeans comes with one, and Sun makes one called VisualVM.
In Java profiling, it seems like all (free) roads nowadays lead to the VisualVM profiler included with JDK6. It looks like a fine program, and everyone touts how you can "attach it to a running process" as a major feature. The problem is, that seems to be the only way to use it on a local process. I want to be able to start my program in the profiler, and track its entire execution.
I have tried using the -Xrunjdwp option described in how to profile application startup with visualvm, but between the two transport methods (shared memory and server), neither is useful for me. VisualVM doesn't seem to have any integration with the former, and VisualVM refuses to connect to localhost or 127.0.0.1, so the latter is no good either. I also tried inserting a simple read of System.in into my program to insert a pause in execution, but in that case VisualVM blocks until the read completes, and doesn't allow you to start profiling until after execution is under way. I have also tried looking into the Eclipse plugin but the website is full of dead links and the launcher just crashes with a NullPointerException when I try to use it (this may no longer be accurate).
Coming from C, this doesn't seem like a particularly difficult task to me. Am I just missing something or is this really an impossible request? I'm open to any kinds of suggestions, including using a different (also free) profiler, and I'm not averse to the command line.
Consider using HPROF and opening the data file with a tool like HPjmeter - or just reading the resulting text file in your favorite editor.
Command used: javac -J-agentlib:hprof=heap=sites Hello.java
SITES BEGIN (ordered by live bytes) Fri Oct 22 11:52:24 2004
          percent          live       alloc'ed  stack class
 rank   self  accum     bytes objs    bytes objs trace name
    1 44.73% 44.73%   1161280 14516  1161280 14516 302032 java.util.zip.ZipEntry
    2  8.95% 53.67%    232256 14516   232256 14516 302033 com.sun.tools.javac.util.List
    3  5.06% 58.74%    131504     2   131504     2 301029 com.sun.tools.javac.util.Name[]
    4  5.05% 63.79%    131088     1   131088     1 301030 byte[]
    5  5.05% 68.84%    131072     1   131072     1 301710 byte[]
HPROF is capable of presenting CPU usage, heap allocation statistics,
and monitor contention profiles. In addition, it can also report
complete heap dumps and states of all the monitors and threads in the
Java virtual machine.
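HPROF also covers the original goal here, profiling CPU use over an entire run, because the agent is attached at JVM startup rather than afterwards. For example, a sampling session can be started with: java -agentlib:hprof=cpu=samples,interval=10,depth=10 MyApp (MyApp is a placeholder for your main class; by default the report is written to java.hprof.txt when the JVM exits).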
The best way to solve this problem without modifying your application is not to use VisualVM at all. As far as other free options are concerned, you could use either Eclipse TPTP or the NetBeans profiler, or whatever comes with your IDE.
If you can modify your application to suspend its state while you set up the profiler in VisualVM, it is quite possible to do so using the VisualVM Eclipse plugin. I'm not sure why you are getting the NullPointerException, since it appears to work on my workstation. You'll need to configure the plugin by providing the path to the jvisualvm binary and the path of the JDK; this is done in the VisualVM configuration dialog at Window -> Preferences -> Run/Debug -> Launching -> VisualVM Configuration.
You'll also need to configure your application to start with the VisualVM launcher, instead of the default JDT launcher.
All application launches from Eclipse will now result in VisualVM tracking the new local JVM automatically, provided that VisualVM is already running. If you do not have VisualVM running, the plugin will launch VisualVM, but it will also continue running the application.
Inferring from the previous sentence, it is evident that having the application halt in the main() method before performing any processing is quite useful. But that is not the main reason for suspending the application. Apparently, VisualVM and its Eclipse plugin do not allow for automatically starting the CPU or memory profilers, which means these profilers have to be started manually, making it necessary to suspend the application.
Additionally, it is worth noting that adding the flags -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=y to the JVM startup will not help you, in the case of VisualVM, to suspend the application and set up the profilers. These flags are meant to help with profilers that can actually connect to the open port of the JVM using the JDWP protocol. VisualVM does not use this protocol, so you would have to connect to the application using JDB or a remote debugger; but that would not resolve the problem associated with profiler configuration, as VisualVM (at least as of Java 6 update 26) does not allow you to configure the profilers on a suspended process, since it simply does not display the Profiler tab.
This is now possible with the startup profiler plugin to VisualVM.
The advice about -Xrunjdwp is incorrect. It just enables the debugger, and with suspend=y it waits for a debugger to attach. Since VisualVM is not a debugger, it does not help you. However, inserting a read from System.in or a Thread.sleep() will pause the startup and allow VisualVM to attach to your application. Be sure to read Profiling with VisualVM 1 and Profiling with VisualVM 2 to better understand the profiler settings. Note also that instead of profiling, you can use the 'Sampler' tab in VisualVM, which is more suitable for profiling an entire Java program's execution. As others mentioned, you can also use the NetBeans Profiler, which directly supports profiling of application startup.
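For completeness, the pause mentioned above can be as simple as a sleep at the very top of main(), before any of the code you want to profile runs. A rough sketch (the class name and the 30-second delay are arbitrary):

public class Main {
    public static void main(String[] args) throws Exception {
        // Give yourself time to open the process in VisualVM and start the
        // Sampler or Profiler before the interesting work begins.
        System.out.println("Pausing 30s so VisualVM can attach...");
        Thread.sleep(30000);
        // ... real startup work goes here ...
    }
}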
I am using Rational Application Developer v7.0 that ships with an integrated test environment. When I get to debugging my webapp, the server startup time in debug mode is close to 5-6 minutes - enough time to take a coffee break!
At times, it so pisses me off that I start cursing IBM for building an operating system instead of an app server! It spawns 20+ processes and useless services, with no documented configuration for tuning it to start any faster.
I am sure there are many Java developers out there who would agree with me on this. I tried to disable the default apps and a set of services via my admin console; however, that hasn't helped much.
I have no web services, no enterprise beans, no queues, just a simple web app which requires a connection pool. Have you done something in the past to make your integrated test environment start faster in debug mode and thereby consume less RAM?
UPDATE:
I tried disabling a few services (internationalization, default apps, etc.) and the WebSphere server went from bad to worse. Not only does it still take a horrifying time to start up, it now keeps freezing every now and then for up to 2 minutes. :-( Sounds like optimization is not always such a good thing!
The best way to debug server code is to use remote debugging.
First you need to add the following to the JVM params in the server start script:
-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
This will cause the JVM to listen on the specified port, then from your IDE you can start a remote debug session against that port and debug as if the code was running in the same process.
Working this way prevents you from having to restart the server so frequently and hence side-steps your problem with WebSphere's start-up time.
You can get some odd results if the binaries on the server and the source in the IDE get out of sync but on the whole that's not a problem.
One of the main reasons is that you have a large application with many modules, classes, manifests, XML descriptors and so on, combined with the fact that the WebSphere Application Server startup process is essentially single-threaded (each application may be started in a separate thread if they have equal weight). Another reason is that the Eclipse EMF and JST frameworks are very I/O intensive during startup and publish/deploy.
One other reason for the tedious startup is the annotation scanning which occurs during publish/deploy. This annotation scanning can be controlled and modified in various ways. Look at this site:
http://wasdynacache.blogspot.se/2012/05/how-to-speed-up-annotation-processing.html
First of all, examine and evaluate your hardware: CPU, memory and disk. Are your processors running at 100% for a long time during startup? If so, the processor may be too weak. Does paging occur? Then you may have to add more RAM. The WebSphere/Eclipse JST and EMF frameworks are very I/O intensive, so you should consider investing in an SSD. You should also make sure that other processes on your machine (virus protection software, etc.) don't steal hardware resources from the WebSphere Java processes.
So for the hardware:
1. Processor - a pretty fast one; since publish and startup are mostly single-threaded, you do not need that many CPU cores
2. Memory - you will need at least 512 MB of physical RAM; this depends on the size of your application, of course
3. Storage - I would definitely go for a fast SSD, since the underlying Eclipse framework is I/O intensive.
Here are some tricks to reduce the footprint of the startup phase. Before applying these settings, please make sure that you record a baseline startup so that you can observe the difference, i.e. the reduced startup time.
JVM args: -Xverify:none -Xquickstart -Xnoclassgc -XX:+UseNUMA -XtlhPrefetch -Xgcthreads4 (I have 4 virtual processors on my machine)
Extend the heap size to match the demands of your application.
Disable the autostart of the application to reduce publish time.
Disable PMI and unnecessary tracing.
Profile your application during startup and fix any bottlenecks you find.
Other JVM arguments that may improve performance:
com.ibm.cacheLocalHost=true
com.ibm.ws.classloader.zipFileCacheSize=512
com.ibm.ws.classloader.resourceRequestCacheSize=1024
com.ibm.ws.management.event.pull_notification_timeout=20000
com.ibm.ws.amm.scan.context.filter.packages=true
org.eclipse.jst.j2ee.commonarchivecore.disableZip=true
JVM arguments that will make the WebSphere application server stop immediately:
com.ibm.ejs.sm.server.quiesceTimeout=0
com.ibm.ejs.sm.server.quiesceInactiveRequestTime=1000
Webcontainer properties:
com.ibm.wsspi.jsp.disableTldSearch=true
com.ibm.wsspi.jsp.disableResourceInjection=true
JVM arguments that may be specified in eclipse.ini (note that the heap parameters are configured according to the conditions of my environment):
-Dcom.ibm.ws.management.event.max_polling_interval=5000
-Xquickstart
-Xverify:none
-Xmxcl25000
-Xjit:dataTotal=65536
-Xcodecache64m
-Xscmx48m
-Xnolinenumbers
-Xmnx64m
-Xmx1446m
-XX:+UseCompressedOops
-XX:+UseNUMA
5 to 6 minutes is not normal. I use RAD and WAS every day and get decent startup times. Which version of WAS are you running, and how much RAM do you have?
If you share several workspaces and projects for the same WAS profile, consider creating a new WAS profile for your workspace.
You have probably tried this already, but here's a simple checklist of things to try first. Make sure that your server settings in RAD have the following options enabled:
Optimize server for testing and developing
Run server with resources on the workspace
Minimize application files copied to the server
Uncheck "Enable universal test client" if you don't need it.
In the admin console you can verify some server settings such as
Run in development mode
Parallel start
Start components as needed
You can also uninstall the ivt app that comes installed by default when creating a new WAS profile. Then the usual things such as a drive that is not too fragmented and a pagefile size that is properly set.
And one last thing that you probably know already, republish to your server instead of restarting it.
That's one reason why Spring was born.
You don't even have to give up all the niceties like JMS, remoting, etc. You'd be better off with Tomcat, ActiveMQ, and OpenEJB.
Anything but WebSphere.
There are some hints and tips for tuning RAD 6 on developerWorks that may help; many of these also apply to RAD 7.
I have seen a similar list for RAD 7, I'll post it if I can find it.
I did find some tuning tips for Portal on RAD 7.
I would say my experience with the test environment has been suboptimal. I now tend to use Tomcat/Pluto configured for remote debugging with an External launch configuration to manage it from within bare Eclipse and rely on having appropriate JNDI configurations to abstract the underlying server.
If you are coding to the relevant APIs, it shouldn't matter for development purposes that you're not on WebSphere. If you do have a WebSphere-specific issue, you can always crank up the beast to debug it.
If you have no EJBs, no JMS, etc., just deploy under a standalone servlet container such as Tomcat or Jetty; you'll be amazed how fast it is :-). I'm being ironic here, but it's true!
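As a rough illustration of how little is involved, here is a sketch of running a servlet in an embedded Jetty server (a Jetty 7/8-style API is assumed; EmbeddedApp, HelloServlet and the port are placeholders). Startup is typically a matter of seconds:

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class EmbeddedApp {
    public static class HelloServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
            resp.getWriter().println("hello");
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);                                  // listen on port 8080
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new HelloServlet()), "/hello");
        server.setHandler(context);
        server.start();                                                    // usually up in a few seconds
        server.join();
    }
}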
If the connection pool really is the only app server feature you use, then why don't you simply use Apache Commons DBCP (http://commons.apache.org/dbcp/), drop "webfear" altogether and use Jetty instead? That should reduce your startup time to about 5 seconds. You can then easily switch back to WebSphere for your production environment later if you really feel the need to.
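A connection pool outside the app server is only a few lines with Commons DBCP. A minimal sketch using the DBCP 1.x API from the link above (the driver class, URL and credentials are placeholders):

import java.sql.Connection;
import org.apache.commons.dbcp.BasicDataSource;

public class PoolSetup {
    public static BasicDataSource createPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");      // placeholder driver
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL");      // placeholder URL
        ds.setUsername("app");
        ds.setPassword("secret");
        ds.setMaxActive(10);                                    // pool size (DBCP 1.x setter)
        return ds;
    }

    public static void main(String[] args) throws Exception {
        Connection conn = createPool().getConnection();
        try {
            System.out.println("Got connection: " + conn);
        } finally {
            conn.close();                                       // returns the connection to the pool
        }
    }
}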
WAS V7 addresses some of these problems by allowing you to configure what starts up when the app server starts up.
So if and when you migrate to WAS V7, you might see some improvements in this area.
We have a curious problem with our java processes dying.
The application doesn't produce a stack trace or write anything to the logs; the process just randomly dies. It's a heavily used application, but the problem only appears about once a month.
We're currently looking into using Process Monitor but any other suggestions would be welcome.
Edit:
It's a distributed Java application, running on WebLogic with an in-house web framework (yes, this is a terrible idea, but it's been running for eight years), connecting to Oracle.
Out of Memory?
Our logs would catch java.lang.OutOfMemoryError, according to Brian Agnew.
Write crashes to a log? I don't think Java ever gets the chance; the death is happening at the process level, rather than Java exiting.
Can you wrap it in some shell script that captures the log files (stdout/stderr) and the exit code (which should give some indication as to how it died)? On JVM exit you can also capture machine-level stats using WMI.
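If a shell script is awkward in your environment, the same idea can be sketched in Java with ProcessBuilder: launch the real JVM, capture its console output and record the exit code (your-app.jar and the log file name are placeholders; redirectOutput needs Java 7+):

import java.io.File;
import java.io.IOException;

public class JvmWrapper {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("java", "-jar", "your-app.jar");
        pb.redirectErrorStream(true);                       // merge stderr into stdout
        pb.redirectOutput(new File("app-console.log"));     // Java 7+ API
        Process p = pb.start();
        int exitCode = p.waitFor();
        // A non-zero or signal-related exit code is the first clue to how the process died.
        System.out.println("JVM exited with code " + exitCode);
    }
}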
If the VM itself is crashing, it'll leave behind an hs_err_pid... file that contains stack traces and machine-level debug info. You can then use that to diagnose the VM issue. See this blog entry for further information.
If the problem is related to the app's behaviour, it may be worth looking at JConsole, although from your description of the issue, this sounds much more like a low level VM issue.
(I assume you're on the latest VM for your Java version number etc.)
You can use a Linux Nagios server to monitor the health of your Windows machines and services! Have a look at: nagios-monitoring-windows.
If you have such problems with your Java app, you should test it and debug it! Applications shouldn't die without a trace! Look for log files! Which vendor is the app from, or is it written in-house? Try setting a different Log4j/logger debug level. Monitor your system with Cacti etc. to narrow down the possible causes of such a crash. Talk to the software vendor.
Is enough memory available? Maybe the app runs out of memory? Is it a standalone Java process, or a Java process running inside a Tomcat/JBoss server?
Have you written the crash times down in a log? Do the crashes occur at varying intervals, or do they recur at nearly regular intervals?
VisualVM is a new tool which makes monitoring Java applications easier:
https://visualvm.dev.java.net/description.html
"VisualVM is a tool that provides detailed information about Java applications while they are running. It provides an intuitive graphical user interface that allows you to easily see information about multiple Java applications."