Is there a way to see the logs behind the execute query? - java

I'm working on a project that uses the i-net Opta2000 driver for database connectivity in a Java application. I'm stuck at a point where a complex query takes around 2 minutes to execute, but the same query, when executed from a query browser, takes 2-5 seconds. Also, when I hard-coded this query in the application and executed it, it again took just a few seconds.
I then searched online for driver logging and found the following statement, which I used in my application: DriverManager.setLogStream(System.out);
I have also written log statements before and after the executeQuery call to check the timestamps. However, with the driver logging mentioned above enabled, I found that once control reaches the executeQuery call, it prints "connection.close". Sometimes it is printed just once, sometimes twice or three times, and sometimes I don't see it at all. This "connection.close" is beyond my understanding. Can anyone explain why I see this, and whether there is any other way to see the logging behind the executeQuery statement?
I also searched for and tried setting up DB logging in log4j, but this didn't solve my problem.
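For reference, a minimal sketch of the kind of timing and driver logging described above, using the non-deprecated DriverManager.setLogWriter in place of setLogStream. The connection URL format, credentials, and query are placeholders, not taken from the question; check the i-net Opta2000 documentation for the exact URL syntax.

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryTiming {
    public static void main(String[] args) throws Exception {
        // Route the JDBC driver's internal log output to stdout
        // (setLogWriter is the non-deprecated replacement for setLogStream).
        DriverManager.setLogWriter(new PrintWriter(System.out, true));

        String url = "jdbc:inetdae7:dbhost:1433?database=mydb"; // placeholder URL, verify against the driver docs
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT * FROM my_table WHERE id = 1")) { // replace with the complex query

            long start = System.nanoTime();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // consume the results
                }
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("executeQuery took " + elapsedMs + " ms");
        }
    }
}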

Related

Cannot detect where the NoMoreReturnsException is coming from

I am using Hibernate with an Oracle database, making regular calls to stored procedures using the EntityManager's createStoredProcedureQuery and invoking them with procedure.execute().
Everything seems to work fine while I'm debugging, and the results are also committed to the database.
Only recently, we set up the Dynatrace troubleshooting framework, which detects errors across the network. That is how I discovered that thousands of org.hibernate.result.NoMoreReturnsException errors are detected in most of my methods that execute the stored procedures.
This is the exception message:
Results have been exhausted
This is the stack trace:
OutputsImpl$CurrentReturnState.buildOutput
Hibernate | org.hibernate.result.internal
StoredProcedureQueryImpl.execute
Hibernate | org.hibernate.jpa.internal
I have try-catch blocks all around the code, so any exception thrown during execution should be logged, but I'm seeing none.
Also, I'm not using the result set in most of them, so the problem seems more general, independent of whether the procedure has only input parameters or output parameters as well.
Could anyone advise on where I should look for the problem, or what I can try to solve this?
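For context, here is a minimal sketch of the call pattern described above; the procedure name, parameter names, and types are hypothetical, not taken from the question.

import javax.persistence.EntityManager;
import javax.persistence.ParameterMode;
import javax.persistence.StoredProcedureQuery;

public class ProcedureCaller {

    // Hypothetical example of the createStoredProcedureQuery pattern described above.
    public void callUpdateStatus(EntityManager em, long orderId, String status) {
        StoredProcedureQuery procedure =
                em.createStoredProcedureQuery("UPDATE_ORDER_STATUS"); // hypothetical procedure name
        procedure.registerStoredProcedureParameter("p_order_id", Long.class, ParameterMode.IN);
        procedure.registerStoredProcedureParameter("p_status", String.class, ParameterMode.IN);
        procedure.setParameter("p_order_id", orderId);
        procedure.setParameter("p_status", status);

        // execute() returns true if the first result is a result set; for a
        // procedure with only IN parameters there is nothing to fetch afterwards.
        procedure.execute();
    }
}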

Track DB Changes

I am doing an installation where mostly DB changes are run: insert statements, creation of tables, procedures, functions, and various other DDL and DML statements get executed. These scripts are executed through Java/JDBC.
How can I track from the DB (by running some query) whether the SQL scripts are still executing or have stopped?
I do have logging in place, but for some scenarios I wish to know whether a script is still running in the DB or its processing has stopped. How can I check that?
You can use this tool to see what's going on in the JDBC driver.
Although it does not provide a query to see whether the scripts have executed (so it is not exactly what you're trying to achieve), it will help you understand what's going on.
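As a rough illustration of checking from the database side, a sketch that polls Oracle's V$SESSION view for active sessions belonging to the install user. This assumes an Oracle database, SELECT privileges on the view, and placeholder connection details; other databases have equivalent monitoring views.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ScriptMonitor {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder URL
        try (Connection con = DriverManager.getConnection(url, "monitor_user", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT sid, status, sql_id, last_call_et "
                   + "FROM v$session WHERE username = ? AND status = 'ACTIVE'")) {
            ps.setString(1, "INSTALL_USER"); // the user your install scripts connect as
            try (ResultSet rs = ps.executeQuery()) {
                boolean running = false;
                while (rs.next()) {
                    running = true;
                    System.out.printf("sid=%d sql_id=%s active for %ds%n",
                            rs.getInt("sid"), rs.getString("sql_id"), rs.getInt("last_call_et"));
                }
                System.out.println(running ? "Scripts still running" : "No active install sessions");
            }
        }
    }
}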

Hibernate deadlocking with 20 simulated users

We have a basic Java EE app that runs under Tomcat and maintains a connection pool to a SQL Server database. We were having some data issues showing up only in production, so I created a testing tool that would simulate different numbers of users going through the system on different paths.
I've worked on this a bit, and the problem has evolved as I chased it. Now the problem is this.
Ten user threads work perfectly. With twenty user threads, the log record that gets created when a user logs into the system never gets inserted for any of the 20 users. In fact, Hibernate 3.3 goes through the motions of inserting the record, but when I use the show_sql setting, the insert statement never shows up in the dump. Again, this works perfectly with 10 users. And, more puzzling, every once in a while it will work for one of the 20 users. :(
I'm using the jTDS driver, btw, to avoid the problems we kept finding with the MS one.
I am running SQL Server Express 2008 R2 on my local box with Tomcat, and running my test app in my Eclipse IDE. Has anyone seen anything like this? Any ideas as to why Hibernate might be locking up after 10 users?
I believe the problem is that you cannot open as many sessions as you need (because they are pooled).
How do you open the session?
What size is your connection pool?
Do you always close the sessions?
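As a rough sketch of what those questions are getting at, assuming a c3p0-backed Hibernate 3.x setup (the pool-size properties and the log-record object here are assumptions, not from the question): make sure the pool can serve as many concurrent sessions as you have user threads, and always close the session so its connection is returned to the pool.

// hibernate.cfg.xml (or equivalent properties), assuming c3p0 pooling:
//   hibernate.c3p0.min_size = 5
//   hibernate.c3p0.max_size = 25

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class LoginLogger {

    // Always commit/rollback and close the session; otherwise the pooled
    // connection is never returned and later threads block waiting for one.
    public void logLogin(SessionFactory sessionFactory, Object logRecord) {
        Session session = sessionFactory.openSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.save(logRecord); // the login log record mentioned in the question
            tx.commit();
        } catch (RuntimeException e) {
            if (tx != null) {
                tx.rollback();
            }
            throw e;
        } finally {
            session.close();
        }
    }
}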

Eclipse and database giving different results

I have a large Java EE system connecting to an Oracle database via JDBC on Windows.
I am debugging a piece of code that does a retrieval of a field from the database.
The code is retrieving the field but when I run the exact same SELECT, copied from Eclipse, it does not yield any results.
Can any of you tell me why this would be the case?
I'm at a loss..
One possible reason is that the application might see uncommitted data which it has just created but not yet committed.
When you execute the same statement in a different session, it doesn't see the data (depending on your transaction isolation level).
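A small sketch of that effect (the table name, URL, and credentials are placeholders): the inserting connection sees its own uncommitted row, while a second connection does not until the first one commits.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder URL

        try (Connection writer = DriverManager.getConnection(url, "app", "secret");
             Connection reader = DriverManager.getConnection(url, "app", "secret")) {

            writer.setAutoCommit(false);
            try (Statement st = writer.createStatement()) {
                st.executeUpdate("INSERT INTO demo_table (id, val) VALUES (1, 'x')"); // placeholder table
            }

            // The writing session sees its own uncommitted row...
            printCount(writer, "writer");
            // ...but a separate session does not under READ COMMITTED isolation.
            printCount(reader, "reader");

            writer.commit();
            printCount(reader, "reader after commit");
        }
    }

    private static void printCount(Connection con, String label) throws Exception {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM demo_table WHERE id = 1")) {
            rs.next();
            System.out.println(label + ": " + rs.getInt(1));
        }
    }
}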

DB2 jdbc performance

Doing profiling on a Java application running WebSphere 7 and DB2, we can see that we spend most of our time in the com.ibm.ws.rsadapter.jdbc package handling connections to and from the database.
How can we tune our jdbc performance?
What other strategies exist when database performance is a bottleneck?
Thanks
You should check your WebSphere manual for how to configure a connection pool.
Update 2021
Here is an introduction including code samples.
One cause of slow connect times is a deactivated database, which does not open its files and allocate its memory buffers and heaps until the first application attempts to connect to it. Ask your DBA to confirm that the database is active before running your tests. The LIST ACTIVE DATABASES command (run from the local DB2 server or over a remote attachment) should show your database in its output. If the database is not activated, have your DBA activate it explicitly with ACTIVATE DATABASE yourDBname. That will ensure that the database files and memory structures remain available even when the last user disconnects from the database.
Use GET MONITOR SWITCHES to ensure all your monitor switches are enabled for your database, otherwise you'll miss out on some potentially revealing performance details. The additional overhead of tracking the data associated with those monitor switches is minimal, while the value of the performance data is significant.
If the database is always active and things still seem slow, there are detailed DB2 traces called event monitors that log everything they encounter to a file, pipe, or DB2 table. The statement event monitor is one I turn to fairly often to analyze SQL statement efficiency and UOW hygiene. I also prefer taking the extra hit to log the event monitor records to a table rather than a file, so I can use SQL to search the data for all sorts of patterns. The db2evtbl utility makes it fairly easy to define the event monitor you want and create the tables to store its output. The SET EVENT MONITOR STATE command is how you start and stop the event monitor you've created.
In my experience what you are seeing is pretty common. The question to ask is what exactly is the DB2 connection doing...
The first thing to do is to try to isolate the performance issue down to a section of the website - i.e., is there one part of the application that sees poor performance? Once you find that, you can increase the trace logging to see if you can spot the query causing the issues.
Additionally, if you talk to your DBAs, they may be able to run some analysis on the database to tell you which queries are taking the most time to return values; this may also help in your troubleshooting.
Good luck!
Connection pooling (see the sketch below)
Caching
DBAs
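For the connection-pooling point above, a minimal sketch of using a container-managed DataSource in a WebSphere environment rather than raw DriverManager connections. The JNDI name and the query are assumptions; the name must match the data source configured in the WebSphere admin console.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledQuery {

    public int countOrders() throws Exception {
        // Look up the pooled DataSource configured in the WebSphere admin console.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/MyDB2DataSource"); // assumed JNDI name

        // Connections borrowed from the pool must be closed promptly so they
        // are returned to the pool instead of being held by the caller.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM orders"); // placeholder query
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}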
