I have a large Java EE system connecting to an Oracle database via JDBC on Windows.
I am debugging a piece of code that does a retrieval of a field from the database.
The code retrieves the field, but when I run the exact same SELECT, copied from Eclipse, outside the application it does not return any results.
Can any of you tell me why this would be the case?
I'm at a loss.
One possible reason is that the application might be seeing uncommitted data that it has just created but not yet committed.
When you execute the same statement in a different session, it doesn't see that data (depending on your transaction isolation level).
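A minimal sketch of that effect, assuming plain JDBC, placeholder connection details, and a hypothetical table MY_TABLE:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class UncommittedReadDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"; // placeholder
        try (Connection writer = DriverManager.getConnection(url, "app", "secret");
             Connection reader = DriverManager.getConnection(url, "app", "secret")) {

            writer.setAutoCommit(false);
            try (Statement stmt = writer.createStatement()) {
                stmt.executeUpdate("INSERT INTO my_table (id, name) VALUES (1, 'test')");
            }

            // The inserting session sees its own uncommitted row ...
            printCount(writer);   // prints 1
            // ... but a different session does not until the writer commits.
            printCount(reader);   // prints 0

            writer.commit();
            printCount(reader);   // prints 1
        }
    }

    private static void printCount(Connection connection) throws Exception {
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT COUNT(*) FROM my_table WHERE id = 1");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            System.out.println(rs.getInt(1));
        }
    }
}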
I am using Hibernate with an Oracle database. I make regular calls to stored procedures using the EntityManager's createStoredProcedureQuery and invoke them with procedure.execute().
Everything seems to work fine while I'm debugging and the results are also committed to the database.
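For reference, the calls look roughly like this (the procedure name and parameters below are placeholders, not the real ones):

import javax.persistence.EntityManager;
import javax.persistence.ParameterMode;
import javax.persistence.StoredProcedureQuery;

public class ProcedureCaller {

    private final EntityManager entityManager;

    public ProcedureCaller(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    public void callUpdateStatus(long id, String status) {
        // Hypothetical procedure with two IN parameters.
        StoredProcedureQuery procedure =
                entityManager.createStoredProcedureQuery("UPDATE_STATUS");
        procedure.registerStoredProcedureParameter("p_id", Long.class, ParameterMode.IN);
        procedure.registerStoredProcedureParameter("p_status", String.class, ParameterMode.IN);
        procedure.setParameter("p_id", id);
        procedure.setParameter("p_status", status);

        // execute() returns true if the first result is a result set,
        // false if it is an update count or there are no more results.
        boolean hasResultSet = procedure.execute();
    }
}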
Recently we set up the Dynatrace troubleshooting framework, which detects errors across the network. I discovered that there are thousands of org.hibernate.result.NoMoreReturnsException occurrences detected in most of my methods that execute the stored procedures.
This is the exception message:
Results have been exhausted
This is the stack trace:
org.hibernate.result.internal.OutputsImpl$CurrentReturnState.buildOutput (Hibernate)
org.hibernate.jpa.internal.StoredProcedureQueryImpl.execute (Hibernate)
I have try-catch blocks all around the code, so any exception thrown during execution should be logged, but I'm seeing none in the logs.
Also, I'm not using the result set in most of these calls, so the problem seems general and independent of whether the procedure has only input parameters or output parameters as well.
Could anyone advise where I should look for the problem or what I can try to solve this?
I am calling an Oracle database package in my application. If I recompile that package, my application needs to be restarted, otherwise it complains that the package code has been modified.
Can anyone explain to me why this happens?
Oracle packages can have state information. Within the body of the package you can have variables defined at the package level. These "globals" exist between calls to the database and are associated with a database session. When the package is recompiled (I'm guessing this is what you're seeing in the error as "modified") you could have added or removed variables from the package body so Oracle has to throw out the old state of the package and make a fresh one. It warns you that it did this by raising an ORA-04068: Existing state of packages has been discarded.
If you're using some sort of connection pooling (including Database Resident Connection Pooling), which is typical on a web server, you need to remember that the connection isn't really closed when you close it in your code. It's just returned to the pool when the app server is done with it (by calling Close(), Dispose(), etc.), but it stays open and Oracle Database doesn't notice that you consider it "closed". When a new connection is needed, the pool grabs the old connection and hands it back to the application. Since Oracle never closed the connection, the session is still active from the last time it was used. If the package was changed since the last time the connection was used, you could still get an ORA-04068 even though to your code it looks like you had just opened a brand new connection. Restarting your application server causes all connections in the pool to be closed at shutdown and recreated at start-up, which would seem to be how you're solving the problem right now.
The best option, if you can do it, is to use edition-based redefinition. That way you compile the new package but only new sessions use the new code; old sessions continue to use the old code. If you're doing things like bug fixes this may not be ideal, since you're at the mercy of the old sessions being replaced by new ones before the fixes are picked up.
A second option, if you know you don't care whether the internal state of that particular package is lost, is to simply run the package procedure/function call again. Oracle won't give you the ORA-04068 a second time (unless the package is recompiled again).
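A minimal sketch of that retry, assuming plain JDBC (the package and procedure names are placeholders):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class PackageCallRetry {

    private static final int ORA_PACKAGE_STATE_DISCARDED = 4068;

    public static void callWithRetry(Connection connection) throws SQLException {
        try {
            call(connection);
        } catch (SQLException e) {
            // ORA-04068: existing state of packages has been discarded.
            // The state has already been reset, so a single retry normally succeeds.
            if (e.getErrorCode() == ORA_PACKAGE_STATE_DISCARDED) {
                call(connection);
            } else {
                throw e;
            }
        }
    }

    private static void call(Connection connection) throws SQLException {
        try (CallableStatement stmt = connection.prepareCall("{call MY_PKG.DO_WORK}")) {
            stmt.execute();
        }
    }
}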
Hope this helps. If not, some more details about the exact error you're seeing and your environment would be helpful.
As far as I know, you don't need to restart your application, just recreate your connections to the database. This is because the driver holds in memory a link to the previously compiled version of your package, so a new connection will get the updated version. This is usually observed with PL/SQL/Oracle databases and is related to the driver, not to Java.
Have a look at this question/answer, Does Tomcat use cached versions of pl/sql modules?; it has some suggestions on how to overcome this situation.
Hope it helps!
Flushing the shared pool after recompiling the package can help, as it forces the connected sessions to re-parse whenever they try to access that package for the first time after the flush.
With DBA privilege:
alter system flush shared_pool;
Please note that other applications can experience some slowness for a short time after flushing the shared pool, as their connected sessions will also be forced to re-parse their SQL/PLSQL statements, so it's recommended to schedule the package recompile and the shared pool flush outside of peak hours.
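If the recompile is driven from the Java side, the flush can be issued over JDBC as well; a sketch, assuming a connection that has the necessary privilege:

import java.sql.Connection;
import java.sql.Statement;

public class SharedPoolFlush {

    // Requires a session with the ALTER SYSTEM privilege (typically a DBA account).
    public static void flush(Connection dbaConnection) throws Exception {
        try (Statement stmt = dbaConnection.createStatement()) {
            stmt.execute("ALTER SYSTEM FLUSH SHARED_POOL");
        }
    }
}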
I am doing an installation where mostly DB changes are running. I mean insert statements, creation of tables, procedures, functions and various other DDL and DML statements get executed. These scripts are executed through Java/JDBC.
How can I track from the DB (by running some query) whether the SQL scripts are still executing or have stopped?
I do have logging in place, but for some scenarios I wish to know whether the script is still running in the DB or its processing has stopped. How can I check that?
You can use this tool to see what's going on in the JDBC driver.
Although it does not provide a query to tell you whether the scripts are still executing (so not exactly what you're trying to achieve), it will help you to understand what's going on.
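Since the question asks for something that can be run against the database, one possibility is to check Oracle's v$session for the installer's sessions; a sketch, assuming the scripts run as a dedicated schema user (the username is a placeholder) and that the monitoring account has SELECT privilege on the view:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class InstallSessionCheck {

    // Lists the sessions of the user running the installation scripts, together with
    // their status, the seconds since their last call, and the id of the current SQL.
    public static void printInstallerSessions(Connection connection) throws Exception {
        String sql = "SELECT sid, status, last_call_et, sql_id "
                   + "FROM v$session WHERE username = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, "INSTALL_USER"); // placeholder schema user
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("sid=%d status=%s seconds_since_last_call=%d sql_id=%s%n",
                            rs.getInt("sid"), rs.getString("status"),
                            rs.getInt("last_call_et"), rs.getString("sql_id"));
                }
            }
        }
    }
}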
Doing profiling on a Java application running on WebSphere 7 and DB2, we can see that we spend most of our time in the com.ibm.ws.rsadapter.jdbc package handling connections to and from the database.
How can we tune our jdbc performance?
What other strategies exist when database performance is a bottleneck?
Thanks
You should check your WebSphere manual for how to configure a connection pool.
Update 2021
Here is an introduction including code samples.
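As a rough, generic sketch (not taken from that introduction; the JNDI name jdbc/MyDataSource, the table, and the query are placeholders), application code on WebSphere would look up the container-managed, pooled DataSource rather than using DriverManager, and close connections promptly so they go back to the pool:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CustomerDao {

    public String findName(long customerId) throws Exception {
        // Look up the pooled DataSource configured in the WebSphere admin console.
        DataSource dataSource = (DataSource) new InitialContext().lookup("jdbc/MyDataSource");

        // try-with-resources returns the connection to the pool as soon as we are done.
        try (Connection connection = dataSource.getConnection();
             PreparedStatement ps = connection.prepareStatement(
                     "SELECT name FROM customer WHERE id = ?")) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}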
One cause of slow connect times is a deactivated database, which does not open its files and allocate its memory buffers and heaps until the first application attempts to connect to it. Ask your DBA to confirm that the database is active before running your tests. The LIST ACTIVE DATABASES command (run from the local DB2 server or over a remote attachment) should show your database in its output. If the database is not activated, have your DBA activate it explicitly with ACTIVATE DATABASE yourDBname. That will ensure that the database files and memory structures remain available even when the last user disconnects from the database.
Use GET MONITOR SWITCHES to ensure all your monitor switches are enabled for your database, otherwise you'll miss out on some potentially revealing performance details. The additional overhead of tracking the data associated with those monitor switches is minimal, while the value of the performance data is significant.
If the database is always active and things still seem slow, there are detailed DB2 traces called event monitors that log everything they encounter to a file, pipe, or DB2 table. The statement event monitor is one I turn to fairly often to analyze SQL statement efficiency and UOW hygiene. I also prefer taking the extra hit to log the event monitor records to a table rather than a file, so I can use SQL to search the data for all sorts of patterns. The db2evtbl utility makes it fairly easy to define the event monitor you want and create the tables to store its output. The SET EVENT MONITOR STATE command is how you start and stop the event monitor you've created.
In my experience what you are seeing is pretty common. The question to ask is what exactly is the DB2 connection doing...
The first thing to do is to try to isolate the performance issue down to a section of the website - i.e. is there one part of the application that sees poor performance? When you find it, you can increase the trace logging to see if you can spot the query causing the issues.
Additionally, if you chat to your DBAs they may be able to run some analysis on the database to tell you which queries are taking the most time to return values; this may also help in your troubleshooting.
Good luck!
Connection pooling
Caching
DBAs
I did find other posts similar to this, but wanted a little extra information on these mysqldump options. I understand that --single-transaction and --lock-tables are mutually exclusive options. Following are my questions regarding these options.
a) Suppose I have chosen to use the --lock-tables option. In this case mysqldump acquires a read lock on all the tables, so any other process trying to write to the tables will go into a blocked (waiting) state. But if the mysqldump takes a really long time, would the waiting processes continue to wait indefinitely?
I tried this experiment, for example: I have a Java (JDBC) program writing to a MySQL database table called MY_TEST. I logged in to the mysql console and issued the "LOCK TABLES MY_TEST READ;" command manually, so the Java process got blocked waiting for the lock to be released. My question is: would there be a connection timeout or any such problem if the read lock does not get released for a long time? I waited for two minutes and did not notice any error, and the Java process continued normally once the lock was released with the "UNLOCK TABLES" command. Is this behavior specific to the Java MySQL driver, or can I expect the same thing from a C program using the MySQL driver?
b) My second question is on the --single-transaction option. Suppose I have 10 InnoDB tables, of which 3 are related to each other (via FKs) and the others are independent but still use the InnoDB engine. Does the single transaction apply only to the 3 tables that are inter-related via FKs, or can I expect the state of the 7 independent tables to be exactly as it was when the 3 inter-dependent tables were dumped?
a) I believe the answer is yes; at the MySQL level the connections will wait indefinitely for mysqldump to release the table locks. You can control this a bit at the application level by using a connection pool with a validation query that queries against the tables getting locked, and setting the timeout for retrieval to whatever you want. This would be pretty easy to do in c3p0, for example. However, in the absence of other information, I would not recommend this approach; it seems pretty kludgey. I've not used the MySQL C driver so I can't say for certain, but I would assume behavior similar to Java's. All of this is why mysqldump is not a good option for a live backup of systems with non-trivial amounts of data and activity.
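As a rough sketch of the c3p0 idea above (the URL, credentials, validation query, and timeout are assumptions):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfig {

    public static ComboPooledDataSource createPool() throws Exception {
        ComboPooledDataSource pool = new ComboPooledDataSource();
        pool.setDriverClass("com.mysql.cj.jdbc.Driver");   // assumes MySQL Connector/J 8.x
        pool.setJdbcUrl("jdbc:mysql://dbhost:3306/mydb");  // placeholder URL
        pool.setUser("app");
        pool.setPassword("secret");

        // Validate connections with a query against the table that gets locked,
        // as suggested above, so problems surface at checkout time.
        pool.setPreferredTestQuery("SELECT 1 FROM MY_TEST LIMIT 1");
        pool.setTestConnectionOnCheckout(true);

        // Give up on a checkout after 5 seconds instead of waiting indefinitely.
        pool.setCheckoutTimeout(5000);
        return pool;
    }
}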
b) All tables dumped will be dumped as part of a single transaction, thereby yielding a consistent snapshot of all the tables participating in the dump. Primary/foreign key relationships have no bearing on the transaction. Using --single-transaction is a viable option for hot backups.