I am doing an installation that consists mostly of DB changes: insert statements, creation of tables, procedures, functions, and various other DDL and DML statements. These scripts are executed through Java JDBC.
How can I track from the DB (by running some query) whether the SQL scripts are still executing or have stopped?
I do have logging in place, but in some scenarios I want to know whether the script is still running in the DB or its processing has stopped. How can I check that?
You can use this tool to see what's going on in the JDBC driver.
It does not give you a query to check whether the scripts have finished executing (so it's not exactly what you're trying to achieve), but it will help you understand what's going on.
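If the backend happens to be Oracle (an assumption; the question doesn't name the engine), a second session can also poll v$session to see whether the script-running session is still actively executing SQL. A minimal sketch, with hypothetical connection details:

    import java.sql.*;

    public class SessionMonitor {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // hypothetical
            try (Connection con = DriverManager.getConnection(url, "monitor", "secret");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT sid, status, last_call_et, sql_id"
                         + " FROM v$session WHERE username = ?")) {
                ps.setString(1, "APP_USER"); // hypothetical schema running the scripts
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // STATUS = 'ACTIVE' means the session is executing SQL right now;
                        // LAST_CALL_ET is roughly how many seconds the current call has run.
                        System.out.printf("sid=%d status=%s busy_for=%ds sql_id=%s%n",
                                rs.getInt("sid"), rs.getString("status"),
                                rs.getInt("last_call_et"), rs.getString("sql_id"));
                    }
                }
            }
        }
    }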
I am using Hibernate with an Oracle database. I make regular calls to stored procedures using the EntityManager's createStoredProcedureQuery and execute them with procedure.execute().
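For reference, a minimal sketch of that call pattern (the procedure and parameter names are hypothetical placeholders):

    import javax.persistence.EntityManager;
    import javax.persistence.ParameterMode;
    import javax.persistence.StoredProcedureQuery;

    public class ProcedureCaller {
        private final EntityManager entityManager;

        public ProcedureCaller(EntityManager entityManager) {
            this.entityManager = entityManager;
        }

        public void callProcedure(long accountId) {
            StoredProcedureQuery procedure = entityManager
                    .createStoredProcedureQuery("update_account_status"); // hypothetical
            procedure.registerStoredProcedureParameter("p_account_id", Long.class, ParameterMode.IN);
            procedure.setParameter("p_account_id", accountId);
            procedure.execute(); // returns true only if the first result is a result set
        }
    }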
Everything seems to work fine while I'm debugging and the results are also committed to the database.
Only recently, we set up the Dynatrace troubleshooting framework, which detects errors across the network. That is how I discovered thousands of org.hibernate.result.NoMoreReturnsException occurrences in most of my methods that execute stored procedures.
This is the exception message:

    Results have been exhausted

This is the stack trace:

    OutputsImpl$CurrentReturnState.buildOutput
    Hibernate | org.hibernate.result.internal
    StoredProcedureQueryImpl.execute
    Hibernate | org.hibernate.jpa.internal
I have try-catch blocks all around the code, so any exception thrown during execution should be logged, but I'm seeing none.
Also, I'm not using the result set in most of them, so the problem seems more general, independent of whether the procedure has only input parameters or output as well.
Could anyone advise on where I should look for the problem, or what I can try in order to solve this?
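For comparison, a guarded variant of the call that only reads results when execute() reports a result set, using the standard JPA StoredProcedureQuery API; this is a sketch of a possible mitigation, not a confirmed fix:

    // Only touch the results when execute() says there is a result set.
    boolean returnsResultSet = procedure.execute();
    if (returnsResultSet) {
        java.util.List<?> rows = procedure.getResultList();
        // ... process rows ...
    }
    // Output parameters are read explicitly instead:
    // Object out = procedure.getOutputParameterValue("p_out"); // hypothetical name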
I'm working on a project that uses the i-net Opta2000 driver for database connections in a Java application. I'm stuck at a point where a complex query takes around 2 minutes to execute, but the same query, when executed from a query browser, takes 2-5 seconds. Also, when I hard-coded this query in the application and executed it, it again took just a few seconds.
I then searched online for driver logging and found the following statement, which I used in my application: DriverManager.setLogStream(System.out);
I have also written log statements before and after the execute call to check the timestamps. However, with the driver logging mentioned above enabled, I found that when control reaches the execute call, it prints "connection.close". Sometimes it is printed once, sometimes twice or three times, and sometimes I don't see it at all. This "connection.close" is beyond my understanding. Can anyone explain why I see this, and is there any other way to see the logging behind the execute call?
I also searched and tried enabling DB logs via log4j, but this didn't solve my problem.
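For what it's worth, a sketch of combining driver logging with your own timestamps; note that DriverManager.setLogStream is deprecated in favor of setLogWriter. The URL and query below are hypothetical placeholders:

    import java.io.PrintWriter;
    import java.sql.*;

    public class QueryTiming {
        public static void main(String[] args) throws SQLException {
            // Driver-level logging (current replacement for setLogStream).
            DriverManager.setLogWriter(new PrintWriter(System.out, true));
            String url = "jdbc:inetdae7:dbhost:1433"; // hypothetical Opta2000-style URL
            try (Connection con = DriverManager.getConnection(url, "user", "secret");
                 Statement st = con.createStatement()) {
                long start = System.nanoTime();
                // Substitute the real complex query for this placeholder.
                try (ResultSet rs = st.executeQuery("SELECT 1")) {
                    while (rs.next()) { /* drain results so fetch time is included */ }
                }
                System.out.printf("query took %d ms%n",
                        (System.nanoTime() - start) / 1_000_000);
            }
        }
    }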
I have a large Java EE system connecting to an Oracle database via JDBC on Windows.
I am debugging a piece of code that retrieves a field from the database.
The code retrieves the field, but when I run the exact same SELECT, copied from Eclipse, it does not yield any results.
Can any of you tell me why this would be the case?
I'm at a loss...
One possible reason is that the application sees uncommitted data that it has just created but not yet committed.
When you execute the same statement in a different session, that session doesn't see the data (depending on your transaction isolation level).
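A minimal sketch that demonstrates the effect, assuming the default READ COMMITTED isolation; the URL and table are hypothetical:

    import java.sql.*;

    public class IsolationDemo {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // hypothetical
            try (Connection writer = DriverManager.getConnection(url, "app", "secret");
                 Connection reader = DriverManager.getConnection(url, "app", "secret")) {
                writer.setAutoCommit(false); // hold the insert in an open transaction
                try (Statement st = writer.createStatement()) {
                    st.executeUpdate("INSERT INTO demo_table (id) VALUES (1)");
                }
                try (Statement st = reader.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM demo_table")) {
                    rs.next();
                    // Prints 0: the other session cannot see the uncommitted row.
                    System.out.println("other session sees: " + rs.getLong(1));
                }
                writer.commit(); // now the row becomes visible to other sessions
            }
        }
    }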
If I have a method in a Java application for inserting data into an RDBMS, does the method move forward once it has passed the query to the database? That is, will the Java application finish the method (connect, run query, close connection) before the RDBMS has processed the query? I want to run some tests, but I was not sure whether the application finishes before the RDBMS does, which would give very little insight into how quickly the database has processed the data.
I guess I could finish each test with a DROP TABLE and closing the connection, to ensure that the application has had to wait for the RDBMS to catch up.
Also, will using the Eclipse IDE to test the RDBMS on different operating systems (Windows 7, Solaris, Ubuntu) drastically affect the performance of the OS it is running on?
Thanks to anyone who makes an attempt to answer.
Using an IDE won't affect performance - it's just a regular JVM that is started.
As for the other question: unless you are using an asynchronous driver (I don't know if one exists) or starting new threads for each operation, the calls are synchronous, i.e. the program waits for the database to return a result (or to time out, which should be configurable).
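Because the calls block, timing code wrapped around executeUpdate does measure the database's processing time. A minimal sketch, with hypothetical connection details:

    import java.sql.*;

    public class InsertTiming {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:postgresql://dbhost/test"; // hypothetical
            try (Connection con = DriverManager.getConnection(url, "app", "secret");
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO demo_table (id, payload) VALUES (?, ?)")) {
                long start = System.nanoTime();
                for (int i = 0; i < 10_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.executeUpdate(); // does not return until the DB has processed the row
                }
                System.out.printf("10k single-row inserts took %d ms%n",
                        (System.nanoTime() - start) / 1_000_000);
            }
        }
    }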
Generally, all such calls are synchronous; they block until the operation completes, unless you are doing something very different.
You may want to look into connection pooling so that you can reuse the connections to avoid creation/destruction costs which can be substantial.
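A minimal pooling sketch using HikariCP (one popular pool among several; the URL and credentials are hypothetical):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;
    import java.sql.SQLException;

    public class PooledAccess {
        private static final HikariDataSource POOL;

        static {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:postgresql://dbhost/test"); // hypothetical
            cfg.setUsername("app");
            cfg.setPassword("secret");
            cfg.setMaximumPoolSize(10); // bound the number of concurrent connections
            POOL = new HikariDataSource(cfg);
        }

        static void doWork() throws SQLException {
            // getConnection() borrows from the pool; close() returns the
            // connection to the pool instead of destroying it.
            try (Connection con = POOL.getConnection()) {
                // ... run statements ...
            }
        }
    }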
Profiling a Java application running on WebSphere 7 and DB2, we can see that we spend most of our time in the com.ibm.ws.rsadapter.jdbc package, handling connections to and from the database.
How can we tune our jdbc performance?
What other strategies exist when database performance is a bottleneck?
Thanks
You should check your WebSphere manual for how to configure a connection pool.
Update 2021
Here is an introduction including code samples
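As a rough sketch, application code should borrow connections from the container-managed pool through a JNDI DataSource rather than from DriverManager; the JNDI name below is a hypothetical placeholder that must match the data source configured in the admin console:

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class PooledLookup {
        public Connection getConnection() throws NamingException, SQLException {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/appDataSource"); // hypothetical
            return ds.getConnection(); // borrowed from the WebSphere-managed pool
        }
    }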
One cause of slow connect times is a deactivated database, which does not open its files and allocate its memory buffers and heaps until the first application attempts to connect to it. Ask your DBA to confirm that the database is active before running your tests. The LIST ACTIVE DATABASES command (run from the local DB2 server or over a remote attachment) should show your database in its output. If the database is not activated, have your DBA activate it explicitly with ACTIVATE DATABASE yourDBname. That will ensure that the database files and memory structures remain available even when the last user disconnects from the database.
Use GET MONITOR SWITCHES to ensure all your monitor switches are enabled for your database, otherwise you'll miss out on some potentially revealing performance details. The additional overhead of tracking the data associated with those monitor switches is minimal, while the value of the performance data is significant.
If the database is always active and things still seem slow, there are detailed DB2 traces called event monitors that log everything they encounter to a file, pipe, or DB2 table. The statement event monitor is one I turn to fairly often to analyze SQL statement efficiency and UOW hygiene. I also prefer taking the extra hit to log the event monitor records to a table rather than a file, so I can use SQL to search the data for all sorts of patterns. The db2evtbl utility makes it fairly easy to define the event monitor you want and create the tables to store its output. The SET EVENT MONITOR STATE command is how you start and stop the event monitor you've created.
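Since CREATE EVENT MONITOR and SET EVENT MONITOR ... STATE are SQL statements in DB2, they can even be issued over JDBC. A sketch with hypothetical monitor, database, and credential names:

    import java.sql.*;

    public class StatementMonitor {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:db2://dbhost:50000/MYDB"; // hypothetical
            try (Connection con = DriverManager.getConnection(url, "dba", "secret");
                 Statement st = con.createStatement()) {
                // WRITE TO TABLE logs monitor records to tables you can query with SQL;
                // db2evtbl can generate more complete DDL for this.
                st.execute("CREATE EVENT MONITOR stmtmon FOR STATEMENTS WRITE TO TABLE");
                st.execute("SET EVENT MONITOR stmtmon STATE = 1"); // start capturing
                // ... run the slow workload, then stop capturing:
                st.execute("SET EVENT MONITOR stmtmon STATE = 0");
            }
        }
    }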
In my experience, what you are seeing is pretty common. The question to ask is what exactly the DB2 connection is doing.
The first thing to do is to try to isolate the performance issue to one section of the site: is there one part of the application that sees poor performance? Once you find it, you can increase the trace logging to see whether you can spot the query causing the issues.
Additionally, if you talk to your DBAs, they may be able to run some analysis on the database to tell you which queries are taking the longest to return values; this may also help your troubleshooting.
Good luck!
- Connection pooling
- Caching
- DBAs