I am using the TFS Java SDK (version 11.0) to create some wrapper functions for a website. I have code that queries Work Items to retrieve information about defects. When I run the code in Eclipse, it takes about 8-10 seconds to retrieve all 1000 Work Items. The same code, when run in a web container (Tomcat), takes twice as long. I cannot figure out why it runs slower in Tomcat than when launched from Eclipse. Any ideas?
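For context, the query loop is a sketch of roughly this shape (assuming the standard SDK work item client; the collection URL, credentials, and WIQL string are placeholders, with timing added for illustration):

import java.net.URI;
import com.microsoft.tfs.core.TFSTeamProjectCollection;
import com.microsoft.tfs.core.clients.workitem.WorkItem;
import com.microsoft.tfs.core.clients.workitem.WorkItemClient;
import com.microsoft.tfs.core.clients.workitem.query.WorkItemCollection;
import com.microsoft.tfs.core.httpclient.Credentials;
import com.microsoft.tfs.core.httpclient.UsernamePasswordCredentials;

public class DefectQueryTimer {
    public static void main(String[] args) {
        Credentials credentials = new UsernamePasswordCredentials("user", "password"); // placeholder
        TFSTeamProjectCollection tpc = new TFSTeamProjectCollection(
                URI.create("http://tfs.example.com:8080/tfs/DefaultCollection"), credentials); // placeholder
        WorkItemClient client = tpc.getWorkItemClient();

        long start = System.currentTimeMillis();
        WorkItemCollection results = client.query(
                "SELECT [System.Id] FROM WorkItems WHERE [System.WorkItemType] = 'Bug'"); // placeholder WIQL
        for (int i = 0; i < results.size(); i++) {
            WorkItem wi = results.getWorkItem(i);
            wi.getFields().getField("System.Title").getValue(); // force the fields to be paged in
        }
        System.out.println("Fetched " + results.size() + " work items in "
                + (System.currentTimeMillis() - start) + " ms");
        tpc.close();
    }
}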
With only this data I cannot pinpoint a reason, but you can try javOSize, particularly http://www.javosize.com/gettingStarted/slow.html. The tool is free and they are very helpful in tracking down slowness problems.
You can follow a similar procedure. Based on your description, and imagining your wrapper function lives in a class called com.acme.WrapperClass, I would execute:
cd REPOSITORY
exec FIND_SLOW_METHODS_EXECUTING_CLASS com.acme.WrapperClass 100 20000
This will block for 20 s and examine any method taking more than 100 ms. As soon as you issue the exec command, run your slow transaction and wait until javOSize returns its output. Then repeat the same procedure for the process launched from Eclipse.
Paste both outputs here and hopefully we will find the answer.
There is a scenario where, at step 1, InvokeTakaraJar(parameter..) is called. It does the work of updating a table with records, but it is a plain Java jar, not Spark code.
Then, at step 2, there is var df = GetDBTable(parameter..), which should get the records from the table being updated in the step above.
The problem is that the first step is just an invocation of the main method of an external Java jar, so it runs on the driver, and the second step does not wait for step 1 to complete. Ideally, step 2 needs to wait for step 1 to finish.
How can this be achieved in Spark Scala code, where the requirement is to run a separate Java jar to completion first, and only then execute the Spark step?
Spark doesn't really do guaranteed ordering very well; it wants to complete several tasks in parallel. I would also be concerned about running a Java program here, because it may not scale well enough to finish once you are working with data at scale. (So let's assume, for the sake of argument, that the data the Java jar updates will always be small.)
That said, if you need to run this Java program and then run Spark, why not launch the Spark job from Java after you have completed your table update?
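For illustration, here is a minimal sketch of that approach (the jar name, arguments, and JDBC options are placeholders; it assumes the external jar can be launched as a child process before the Spark work starts):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class RunJarThenSpark {
    public static void main(String[] args) throws Exception {
        // Step 1: run the external jar and block until it exits.
        Process update = new ProcessBuilder("java", "-jar", "takara.jar", "someParameter") // placeholder jar
                .inheritIO()
                .start();
        if (update.waitFor() != 0) {
            throw new IllegalStateException("Table update failed; not starting the Spark step.");
        }

        // Step 2: runs only after waitFor() returns, so the table is current.
        SparkSession spark = SparkSession.builder().appName("after-table-update").getOrCreate();
        Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("url", "jdbc:oracle:thin:@//dbhost:1521/svc") // placeholder connection
                .option("dbtable", "UPDATED_TABLE")                   // placeholder table
                .option("user", "user")
                .option("password", "password")
                .load();
        df.show();
    }
}

Because waitFor() blocks the driver, the read cannot begin until the jar has finished.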
Alternatively, why not run a shell/Oozie/build script that runs your Java program first and then launches the Spark job?
If you are looking for performance, consider rewriting the Java job so it can be done using Spark tooling.
For the absolute best performance, see if you can rewrite the Java tooling so that it is triggered on data entry. Then you never need to run it as a batch job at all, and you can depend on the data already being up to date.
As I indicated in another post, I'm having trouble with some SPIN constructors taking an excessive amount of time to execute against quite limited data. I thought I'd take a different approach and see whether I can profile the execution of the constructors to gain insight into where specifically they spend that time.
How do I go about profiling the execution of constructors under RDF4J Server? I'm instantiating via SPARQL update (INSERT DATA) queries.
I've attempted to profile the Tomcat server under which RDF4J Server runs using jvisualvm.exe, but I have not gained much insight. Ideally, I'd like to get down to the class/method level within RDF4J so that I can either post a more detailed request for help on my slow-execution problem or fix my queries to be more efficient.
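(For reference, a remote VisualVM attach, as opposed to a local one, would need the standard JMX flags in Tomcat's setenv script; the port below is arbitrary:

set CATALINA_OPTS=%CATALINA_OPTS% -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

A local attach to a Tomcat running under the same user needs none of this.)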
RDF4J is running under Apache Tomcat 8.5.5, and in Java VisualVM I can see the Tomcat overview information as well as the monitor tab and threads.
However, what I really want to see is the profiler, so that I can find out where my slow queries are spending so much time. That hangs on calibration, since I don't have the profiler calibrated for Java 1.8.
The "attempting to connect" box persists indefinitely. Canceling it leads to a "Performing calibration" message that doesn't actually do anything; it is a dead-end hang that requires killing Java VisualVM.
After killing Java VisualVM, restarting it, and looking at Options-->Profiling-->Calibration Data, I see that only Java 7 has calibration data.
I tried switching Tomcat over to Java 7, and that did work: the profiler came up with Tomcat. However, when I tried to access the RDF4J Workbench while Tomcat ran on Java 7, I could not get the Workbench running.
So I'm still stuck. It would appear that RDF4J requires Tomcat to run under Java 1.8, not 1.7, and I can't profile under Java 1.8.
I have seen other posts on this Java VisualVM problem, but the one applicable solution seems to be to bring everything up in a development environment (e.g. Eclipse) and dynamically invoke the profiler at a debugger breakpoint once the target code is running under Java 1.8. I'm not set up to do that with Tomcat and RDF4J and would need pointers. My intention is not to become a Tomcat or RDF4J contributor (my tasking doesn't allow that; I wouldn't be paid for the time), but rather to get a specific handle on what is taking so long in my SPIN constructor(s) in terms of RDF4J Server classes, and then ask the RDF4J developer community on GitHub for help.
Can Java VisualVM calibration be bypassed? Could I load a calibration file or directory somewhere for Java VisualVM to use, instead of having it try to measure calibration data and fail? I'm only interested in the relative CPU loading of classes, not absolute metrics, and I don't need to compare measurements across machines.
Thanks.
I am using the iText Java library from a PHP script to fill out PDFs.
It worked perfectly on my computer, but when it went live on the server it started acting up. For example, the PDF generation time is totally unpredictable: sometimes it is almost instantaneous, like on my machine, and sometimes it takes up to 20 seconds.
I suspect it has something to do with the JVM starting up fresh on every request. Is it possible to somehow optimize for this situation?
The way I invoke it is simply:
exec('java -classpath ".;itextpdf-5.1.1.jar" StreamPdf blah.pdf blah.fdf target.pdf');
Your best bet: see if there is an interface or wrapper for this that you can use with an app server such as Apache Tomcat. It looks like a tutorial for this already exists: http://www.geek-tutorials.com/java/itext/servlet_jsp_output_pdf.php
This lets you keep one instance of the app running continuously, avoiding the overhead of starting a new JVM on every request. You would then issue HTTP requests from PHP to the running instance via curl etc.
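A minimal sketch of such a servlet, along the lines of that tutorial (the paths and request parameter are placeholders; it assumes the iText 5.x form-filling API implied by the question):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.itextpdf.text.DocumentException;
import com.itextpdf.text.pdf.FdfReader;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;

public class FillPdfServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("application/pdf");
        try {
            PdfReader template = new PdfReader("/data/forms/blah.pdf");    // placeholder template path
            FdfReader fields = new FdfReader(request.getParameter("fdf")); // placeholder FDF path
            PdfStamper stamper = new PdfStamper(template, response.getOutputStream());
            stamper.getAcroFields().setFields(fields); // merge the FDF values into the form
            stamper.close();
            template.close();
        } catch (DocumentException e) {
            throw new ServletException(e);
        }
    }
}

From PHP, the call then becomes a plain HTTP fetch of something like http://localhost:8080/app/fill?fdf=blah.fdf via curl, instead of exec().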
The Google Closure Compiler for JavaScript is quite speedy when I use it online; however, it takes up to 10 seconds to run from the command line (java -client -jar path/to/closure.jar options...).
Is there any way to reduce this to the same times as the web app (3 seconds max)? I cannot use the web app because my company requires all builds be able to work without an internet connection.
I suspect this is mostly JVM startup time (which is why I added the -client flag), but I don't know.
I would suggest looking into Plovr [1]. You need only start it once, after which it will monitor changes in your dependencies and recompile as needed. You can use the same config on your build server to create the output without starting it as a service; a minimal config sketch follows below.
[1] http://www.plovr.com/
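For reference, a minimal config sketch (the id, paths, and inputs values are placeholders):

// config.js
{
  "id": "my-app",
  "paths": "js",
  "inputs": "js/main.js",
  "mode": "ADVANCED"
}

During development, java -jar plovr.jar serve config.js keeps a warm JVM that recompiles on change; on the build server, java -jar plovr.jar build config.js performs a one-shot compile of the same config.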
I am trying to execute a query against an Oracle DB. When I run it through SQLTools, the query executes in 2 seconds, but when I run the same query through Java it takes more than a minute. I am using the hint /*+ordered use_nl (a, b)*/ and ojdbc6.jar. Is it because of any of the JARs? Please help me figure out what's causing this.
For Oracle 9i or later, the database engine can tune SQL automatically; few cases need explicit hints.
How do you run the query in Java? Do you run it repeatedly in a loop? Do you use bind parameters? Do you prepare the statement before executing it?
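For comparison, a minimal harness along these lines (connection details and the query are placeholders) can isolate whether preparation, binding, or fetch size is the issue; Oracle JDBC's default fetch size of 10 rows per round trip is a common cause of exactly this symptom:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryTimer {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/svc", "user", "password")) { // placeholders
            String sql = "SELECT /*+ ordered use_nl(a, b) */ * "
                    + "FROM a, b WHERE a.id = b.id AND a.kind = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setFetchSize(500);      // default is 10 rows per network round trip
                ps.setString(1, "DEFECT"); // bind variable instead of string concatenation
                long start = System.nanoTime();
                try (ResultSet rs = ps.executeQuery()) {
                    int rows = 0;
                    while (rs.next()) {
                        rows++;
                    }
                    System.out.printf("%d rows in %d ms%n", rows,
                            (System.nanoTime() - start) / 1_000_000);
                }
            }
        }
    }
}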
You need to profile your Java application to identify where in the Java code, before or after the SQL execution, that overhead time is being spent.
Some profiling tools you can use for that are YourKit, JProfiler, and HPROF (the last is a command-line tool).
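For instance, HPROF is enabled with a JVM flag rather than launched separately; a typical CPU-sampling run (the main class is a placeholder) writes its report to java.hprof.txt:

java -agentlib:hprof=cpu=samples,interval=10,depth=10 com.example.Main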