In my Tomcat app, I query records in MongoDB using DBCursor. When there are too many records, the cursor gets stuck at .next() and then a SocketTimeoutException is thrown.
However, if I do the same thing in a standalone Java process (started with java -jar XXX.jar MyClass), this doesn't happen.
Any suggestion why this happens?
Thanks
PS. I suspect it might be caused by a memory limit on each Tomcat thread or something similar.
Reduce your keepAliveTime to 5 minutes (300 seconds) for MongoDB. By default, keepAliveTime is set to 2 hours (7200 seconds).
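The 7200-second figure is the Linux default for net.ipv4.tcp_keepalive_time, so this is an OS-level setting. On the driver side you can also enable keepalive and set an explicit socket timeout. A minimal sketch for the legacy MongoDB Java driver (host, port, and timeout value are placeholders, not recommendations):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class KeepAliveExample {
    public static void main(String[] args) {
        // socketKeepAlive asks the OS to probe idle connections; how often
        // it probes is still governed by the OS keepalive setting.
        MongoClientOptions options = MongoClientOptions.builder()
                .socketKeepAlive(true)
                .socketTimeout(60000) // fail fast instead of hanging in cursor.next()
                .build();
        MongoClient client = new MongoClient(new ServerAddress("localhost", 27017), options);
        System.out.println("Connected to " + client.getAddress());
        client.close();
    }
}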
I have a Spring Batch app that runs on Tomcat 8.5.
This batch job works with lots of data, around ten million records, and it is too slow.
I want to find the most time-consuming parts, such as database queries (e.g. socket I/O), thread blocking or waiting, CPU consumption, or garbage collection, that may be slowing down the app.
I mostly suspect the JDBC queries, i.e. socket I/O.
I tried using local partitioning to scale it up, giving Tomcat more memory, and increasing the commit interval in the Spring Batch settings.
I had a look at the Socket I/O tab in JMC and logged the execution time of one of the methods it shows, but it only takes 15 to 30 milliseconds.
Another problem is that JMC only shows percentages, not exact times, so I could not figure out how long things take.
I'm a little confused.
Thanks very much in advance.
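For reference, the commit interval mentioned above is the chunk size of the step. A minimal sketch of what that looks like in Spring Batch Java config, inside a @Configuration class (the step name, the Record type, and the reader/writer beans are hypothetical):

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;

@Bean
public Step importStep(StepBuilderFactory steps,
                       ItemReader<Record> reader,
                       ItemWriter<Record> writer) {
    // chunk(5000) == commit-interval 5000: one transaction per 5000 items,
    // so fewer commits for the same ten million records.
    return steps.get("importStep")
            .<Record, Record>chunk(5000)
            .reader(reader)
            .writer(writer)
            .build();
}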
All the other files process fine, but this particular file seems to be special.
The workaround is to restart both the Cassandra database and the Java application and re-upload the file into the S3 bucket for processing. Then the same file is processed correctly.
Right now, we're restarting the Java application and Cassandra database every Friday morning. We suspect that an accumulation of something is the root cause of the problem, since the file is processed perfectly fine after a complete restart.
This is a screenshot of the error in Cassandra:
We're using Cassandra as a backend for Akka Persistence.
So a failure to ingest the file only happens when the cluster has been up for some time; ingestion doesn't fail if it's done soon after cluster start.
First, that's not an ERROR, it's an INFO. Secondly, it's telling you that you're writing into the cache faster than the cache can be recycled. If you're not seeing any negative effects (data loss, stale replicas, etc.), I wouldn't sweat this. Hence the INFO and not ERROR.
If you are seeing negative effects and you have some spare non-heap RAM on the nodes, you could try increasing file_cache_size_in_mb. It defaults to 512 MB, so you could try doubling that and see if it helps.
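In cassandra.yaml that is a one-line change, e.g. file_cache_size_in_mb: 1024 for the doubled value (1024 here is just the doubled-default example, not a tested recommendation).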
"we're restarting the Java application and Cassandra database every Friday morning"
Also, there's nothing to really gain by restarting Cassandra on a regular basis. Unless you're running it on a Windows machine (seriously hope you are not), you're really not helping anything by doing this. My team supports high write throughput nodes that run for months, and are only restarted for security patching.
I am using the TFS Java SDK (version 11.0) to create some wrapper functions for a website. I have code that queries Work Items to retrieve information about defects. When I run the code in Eclipse, it takes about 8-10 seconds to retrieve all 1000 Work Items. The same code, when run in a web container (Tomcat), takes twice as long. I cannot figure out why it runs slower under Tomcat than when run from Eclipse. Any ideas?
With this data I cannot figure out a reason, but you can try using javOSize, particularly http://www.javosize.com/gettingStarted/slow.html. Their tool is free and they are very collaborative in helping you find slowdown problems.
You can follow a similar procedure; according to your data, I would execute:
Let's imagine your wrapper function is called com.acme.WrapperClass, then do:
cd REPOSITORY
exec FIND_SLOW_METHODS_EXECUTING_CLASS com.acme.WrapperClass 100 20000
This will block for 20 s and will examine any method taking more than 100 ms. As soon as you execute the exec command, run your slow transaction and wait until javOSize returns the output; then repeat the same procedure for the process running in Eclipse.
Paste both outputs here and hopefully we will find the answer.
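If you want a second data point without any tooling, you can also bracket the query itself with timestamps in both environments; if the numbers match, the extra time is being spent in the container, not in the SDK call. A minimal sketch (the helper and the wrapper call it times are hypothetical):

// Hypothetical timing helper: run the same wrapper call in Eclipse and
// in Tomcat and compare the printed numbers.
static <T> T timed(String label, java.util.concurrent.Callable<T> work) throws Exception {
    long start = System.nanoTime();
    try {
        return work.call();
    } finally {
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(label + " took " + ms + " ms");
    }
}

// usage, assuming queryDefects() is your wrapper method:
// List<WorkItem> defects = timed("defect query", () -> queryDefects());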
We have a problem where our Java process hangs forever unless a kill -9 is issued against it.
The same process runs successfully in the other Solaris environments.
The Java process consists of a single thread that starts, does some processing on the data, and ends; from the logs and the data we can see that the code executes completely and all the data is processed.
But if we run jps, we always see that the process is still running.
We are using Ehcache with Spring for caching and UCP for the connection pool.
On the DB side we have an Oracle RAC setup.
We took several jstacks and can never see the process stuck in our own code, though from the thread dumps we can see a lot of UCP threads hanging there.
We also tried adding a shutdown hook and removing it at the end, but for some reason the shutdown hook is never called.
Due to project restrictions, we can't paste the code.
Can anyone please help?
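For what it's worth, the JVM only exits once every non-daemon thread has finished, so a non-daemon pool thread left running would produce exactly this symptom, and it would also explain why the shutdown hook never fires: the JVM never begins shutting down. A minimal sketch to print what is still alive at the point where your code believes it is done:

// Print every live thread with its daemon flag; any non-daemon thread
// listed here (other than main) is what keeps the JVM from exiting.
for (Thread t : Thread.getAllStackTraces().keySet()) {
    System.out.printf("%-45s daemon=%-5b state=%s%n",
            t.getName(), t.isDaemon(), t.getState());
}

If non-daemon UCP threads are listed, explicitly shutting the pool down at the end of processing (UCP's UniversalConnectionPoolManager exposes a destroyConnectionPool method for this, if I recall its admin API correctly) should let the JVM exit on its own.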
My customer is facing the same problem with our installer hanging on Solaris. When the installer was run in debug mode, we realized that the Java runtime embedded with the installer is hanging. Please post in case any of you has found an answer for it.
I am trying to execute a query against an Oracle DB.
When I try to run it through SQLTools, the query executes in 2 seconds,
but when I run the same query through Java, it takes more than a minute to execute.
I am using the hint /*+ ordered use_nl(a, b) */.
I am using ojdbc6.jar. Could it be because of any of the JARs?
Please help; what is causing this?
For Oracle 9i or later, the database engine can tune SQL automatically; few cases need explicit hints.
How do you run the query in Java? Do you run it repeatedly in a loop? Do you use bind parameters? Do you prepare the statement before executing it?
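For comparison, here is a minimal sketch of running such a query with a prepared, parameterized statement (the connection details, table, and column names are made up). The setFetchSize call is worth noting because the Oracle JDBC driver fetches only 10 rows per network round-trip by default, which is a classic reason a query feels fast in a SQL tool but slow from Java:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryCheck {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT /*+ ordered use_nl(a, b) */ a.id, b.val "
                   + "FROM t1 a JOIN t2 b ON a.id = b.id WHERE a.id = ?")) {
            ps.setFetchSize(500); // default is only 10 rows per round-trip
            ps.setInt(1, 42);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // consume the row; timing just this loop isolates fetch cost
                }
            }
        }
    }
}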
You need to profile your Java application to identify where in the Java code, before or after the SQL execution, that overhead time is being spent.
Some profiling tools you can use for that are YourKit, JProfiler, and HPROF (a command-line tool).
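For example, HPROF ships with the JDK (up to Java 8; it was removed in JDK 9) and is enabled with a single flag; the class name below is a placeholder:

java -agentlib:hprof=cpu=samples,interval=10,depth=8 com.acme.MyApp

This samples the call stacks while the app runs and writes a java.hprof.txt report on exit, with the most frequently sampled methods ranked at the top.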