I am trying to execute a query against an Oracle database.
When I run the query through SQLTools it executes in 2 seconds,
but when I run the same query through Java it takes more than a minute.
I am using the hint /*+ordered use_nl (a, b)*/.
I am using ojdbc6.jar. Could it be because of the JARs?
Please help me figure out what is causing this.
As of Oracle 9i, the database engine can tune SQL automatically, so in most cases there is no need to specify hints.
How do you run the query in Java? Do you run it repeatedly in a loop? Do you use bind parameters? Do you prepare the statement before executing it?
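One common cause of such a gap is the driver's row-fetch behaviour: the Oracle JDBC driver fetches only 10 rows per network round trip by default, while GUI tools typically display just the first screenful of rows. A minimal sketch of preparing the statement, binding a parameter, and raising the fetch size (the SQL, table, and column names here are made up; substitute your real hinted query):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeSketch {
    // Hypothetical query; replace with the real hinted SQL.
    private static final String SQL =
        "SELECT /*+ ordered use_nl(a b) */ a.id, b.value "
      + "FROM table_a a JOIN table_b b ON a.id = b.a_id "
      + "WHERE a.status = ?";

    static void runQuery(Connection conn) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setFetchSize(500);      // Oracle JDBC default is 10 rows per round trip
            ps.setString(1, "OPEN");   // bind variable rather than string concatenation
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process the row
                }
            }
        }
    }
}
```

Timing executeQuery() separately from the rs.next() loop will also tell you whether the time goes into executing the statement or into fetching the rows.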
You need to profile your Java application to identify where that overhead is being spent: in the Java code before the SQL execution, during it, or after it.
Some profiling tools you can use for that are YourKit, JProfiler, and HPROF (the last one is a command-line tool).
As I indicated in another post, I'm having trouble with some SPIN constructors taking an excessive amount of time to execute over quite limited data. I thought I'd take a different approach and see if I can profile the execution of the constructors to gain insight into where specifically they are spending that time.
How do I go about profiling the execution of constructors under RDF4J Server? I'm instantiating via SPARQL update (INSERT DATA) queries. Here's the System Information on RDF4J workbench:
I've attempted to profile the Tomcat server under which the RDF4J Server runs using jvisualvm.exe, but I have not gained much insight. Ideally, I'd like to get down to the class/method level within RDF4J so that I can post a more detailed request for help on my slow execution problem or perhaps fix my queries to be more efficient themselves.
So here's the version of Java Visual VM:
RDF4J is running under Apache Tomcat 8.5.5:
I can see overview information on Tomcat:
I can also see the monitor tab and threads:
HOWEVER, what I really want to see is the profiler so that I can see where my slow queries are spending so much time. That hangs on Calibration since I don't have the profiler calibrated for Java 1.8.
This "attempting to connect" box persists indefinitely. Cancelling it leads to the "Performing Calibration" message, which doesn't actually do anything and is a dead-end hang requiring Java VisualVM to be killed.
After killing the Java Visual VM and restarting and looking at Options-->Profiling-->Calibration Data, I see that only Java 7 has calibration data.
I have tried switching Tomcat over to running on Java 7, and that did work:
The profiler did come up with Tomcat:
However, when I tried to access the RDF4J workbench while Tomcat ran on Java 7, I could not get the workbench running:
So, I'm still stuck. It would appear that RDF4J requires Tomcat running under Java 1.8, not 1.7. I can't profile under Java 1.8.
I have seen other posts on this problem with Java VisualVM, but the one applicable solution seems to be to bring everything up in a development environment (e.g. Eclipse) and dynamically invoke the profiler at a debugger breakpoint once the target code is running under Java 1.8. I'm not set up to do that with Tomcat and RDF4J and would need pointers. My intention is not to become a Tomcat or RDF4J contributor (my tasking doesn't allow that; I wouldn't be paid for the time) but rather to get a specific handle on what's taking so long for my SPIN constructor(s) in terms of RDF4J server classes, and then ask the RDF4J developer community on GitHub for help.
Can Java VisualVM calibration be bypassed? Could I load a calibration file or directory somewhere for Java VisualVM to use instead of trying to measure calibration data which fails? I'm only interested in the relative CPU loading of classes, not absolute metrics, and I don't need to compare to measurements on other machines.
Thanks.
I am using the TFS Java SDK (version 11.0) to create some wrapper functions for a website. I have code that queries Work Items to retrieve information about defects. When I run the code in Eclipse it takes about 8-10 seconds to retrieve all 1000 Work Items. The same code, when run in a web container (Tomcat), takes twice as long. I cannot figure out why it runs slower in Tomcat than in Eclipse. Any ideas?
From this data alone I cannot determine a reason, but you can try javOSize, particularly http://www.javosize.com/gettingStarted/slow.html. The tool is free and its authors are very collaborative in helping you find slowdown problems.
You can follow a similar procedure. Based on your data, I would execute the following.
Let's imagine your wrapper function is called com.acme.WrapperClass; then do:
cd REPOSITORY
exec FIND_SLOW_METHODS_EXECUTING_CLASS com.acme.WrapperClass 100 20000
This will block for 20 seconds and will report any method taking more than 100 ms. As soon as you issue the exec command, execute your slow transaction and wait until javOSize returns the output; then repeat the same procedure for the process running in Eclipse.
Paste both outputs here and hopefully we will find the answer.
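Independently of javOSize, a quick sanity check is to time each phase (connect, authenticate, query, iterate) yourself and compare the numbers between the Eclipse and Tomcat runs. A minimal, self-contained timing helper (the class and the phase placeholders are illustrative, not part of the TFS SDK):

```java
public class PhaseTimer {
    // Runs an action and returns its elapsed wall-clock time in milliseconds.
    static long timeMillis(Runnable action) {
        long start = System.nanoTime();
        action.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long connectMs = timeMillis(() -> { /* e.g. open the TFS connection */ });
        long queryMs   = timeMillis(() -> { /* e.g. run the work-item query  */ });
        System.out.println("connect=" + connectMs + "ms query=" + queryMs + "ms");
    }
}
```

If the query phase itself is equally fast in both environments, the extra time is likely in connection setup or in the container's thread/classloading behaviour rather than in the SDK call.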
I am doing an installation in which mostly DB changes are run: insert statements, creation of tables, procedures, functions, and various other DDL and DML statements. These scripts are executed through Java/JDBC.
How can I track from the DB (by running some query) whether the SQL scripts are still executing or have stopped?
I do have logging in place, but for some scenarios I wish to know whether the script is still running in the DB or its processing has stopped. How can I check that?
You can use this tool to see what's going on in the JDBC driver.
Although it does not provide a query to see whether the scripts have finished (so it's not exactly what you're trying to achieve), it will help you understand what's going on.
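If the scripts run against Oracle and you have SELECT privilege on the v$session dynamic performance view (an assumption; privileges vary by installation), you can check from a second session whether the installer's database session is still actively executing SQL. A hedged sketch; the schema user name is a placeholder:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SessionCheck {
    // Lists currently active sessions for a given schema user.
    static void printActiveSessions(Connection conn, String dbUser) throws Exception {
        String sql = "SELECT sid, status, sql_id, last_call_et "
                   + "FROM v$session WHERE username = ? AND status = 'ACTIVE'";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, dbUser);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // last_call_et: seconds since the session last started a call
                    System.out.println("sid=" + rs.getInt("sid")
                        + " sql_id=" + rs.getString("sql_id")
                        + " active_for=" + rs.getInt("last_call_et") + "s");
                }
            }
        }
    }
}
```

If no ACTIVE row appears for the installer's user, the scripts have either finished or are idle between statements.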
In my Tomcat app, I query records in MongoDB using DBCursor. When there are too many records, the cursor gets stuck at .next(), and then a SocketTimeoutException is thrown.
However, if I do the same in a standalone Java process (started with java -jar XXX.jar MyClass), this doesn't happen.
Any suggestion as to why this happens?
Thanks
PS: I suspect it is caused by the memory limit of each Tomcat thread, or something similar.
Reduce your keepAliveTime to 5 minutes (300 seconds) for MongoDB. By default, keepAliveTime is generally set to 2 hours (7200 seconds).
For more info.
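On the client side you can also configure the driver's socket behaviour explicitly rather than relying on OS defaults. A hedged sketch for the legacy MongoClientOptions API of the 2.x/3.x Java driver (these option names were later deprecated in newer driver versions; host and port are placeholders):

```java
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;

public class MongoKeepAliveSketch {
    public static void main(String[] args) {
        MongoClientOptions options = MongoClientOptions.builder()
            .socketKeepAlive(true)   // ask the OS to probe idle connections
            .socketTimeout(60_000)   // fail a stuck read after 60 s instead of hanging
            .build();
        MongoClient client = new MongoClient(new ServerAddress("localhost", 27017), options);
        // ... run the DBCursor query as before ...
        client.close();
    }
}
```

A bounded socketTimeout turns a silently hung .next() into a fast, diagnosable failure, which makes it easier to tell a dropped connection apart from a genuinely slow query.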
Basically, I am trying to profile a web application that runs on Tomcat and uses HSQLDB (an insecure web application from OWASP). I am using a Java profiler (jp2-2.1, which is not widely used at all) to profile the Tomcat server. The profiler records the sequence of method calls, in the order in which they executed, in XML format. In short, it generates the calling context tree of the application run.
I noticed that the sequence in which HSQLDB methods are executed differs between two EXACTLY identical runs of the application, which I expected to be the same. To confirm this, I profiled a sample HSQLDB program, and the profiler again generated different output for the same program.
I am running the sample program from here: http://hsqldb.sourceforge.net/doc/guide/apb.html
So now I am sure that the sequence in which HSQLDB methods are executed differs between two identical runs of the program.
Could someone please tell me the reason behind this? I would be very curious to know.
I have never used HSQLDB, so I don't know in detail how it works.
Thanks.
The sequence in which HSQLDB methods are executed should generally be the same if the executed SQL statements are exactly the same, and each run starts with an empty database.
There will be minor differences between the first run and the runs that follow, because some static data is initialised in the first run.
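The one-time initialisation can be illustrated with a small self-contained sketch (the class and counter are hypothetical and have nothing to do with HSQLDB internals; the point is that a static block runs only on first use within a JVM):

```java
public class StaticInitDemo {
    // The static block runs exactly once, when the class is first used.
    static int initCount = 0;
    static {
        initCount++;
    }

    static int query() {
        // Every later call sees the already-initialised state.
        return initCount;
    }

    public static void main(String[] args) {
        System.out.println(query()); // prints 1: first use triggered initialisation
        System.out.println(query()); // prints 1: no re-initialisation on later calls
    }
}
```

If both profiled runs happen inside the same JVM, the first run pays this initialisation cost and its calling context tree shows the extra method calls; the second run does not.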