Oracle data insertion failure without any error - Java

I encountered a data insertion failure, and Oracle does not report any errors or warnings.
To verify this, we ran a query after the data was inserted: the data can be queried immediately after the insertion, but after a period of time it can no longer be found. I suspect some kind of rollback may have occurred.
Since Oracle does not produce any log alerts, I have no more information to provide.
How can we solve this problem?
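For reference, a minimal sketch of the insert-then-verify flow with an explicit commit (the connection URL, table, and column names are placeholders). If commit() is never reached, the row stays visible only to the inserting session and is rolled back when the session ends, which matches the symptom above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class InsertCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app", "secret")) {
            conn.setAutoCommit(false); // take explicit control of the transaction

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO orders (id, status) VALUES (?, ?)")) {
                ps.setLong(1, 42L);
                ps.setString(2, "NEW");
                int rows = ps.executeUpdate();
                System.out.println("rows inserted: " + rows); // expect 1
            }

            conn.commit(); // without this the row eventually disappears

            // Verify after the commit
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT status FROM orders WHERE id = ?")) {
                ps.setLong(1, 42L);
                try (ResultSet rs = ps.executeQuery()) {
                    System.out.println("row found: " + rs.next());
                }
            }
        }
    }
}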

Related

Is there a way to see the logs behind the execute query?

I'm working on a project that uses the i-net Opta2000 driver for database connections in a Java application. I'm stuck at a point where a complex query takes around 2 minutes to execute, but the same query, when executed from a query browser, takes 2-5 seconds. Also, when I hard-coded this query in the application and executed it, it again took just a few seconds.
I then searched online for driver logging and found the following statement, which I used in my application: DriverManager.setLogStream(System.out);
I have also written logs before and after the execute call to check the timestamps. However, with this driver logging enabled, I found that after the control reaches the execute call, it prints "connection.close". Sometimes it is printed once, sometimes twice or three times, and sometimes I don't see it at all. This "connection.close" is beyond my understanding. Can anyone explain why I see this, and whether there is any other way to see the logging behind the executeQuery statement?
I searched and tried setting DB logs in log4j, but this couldn't solve my problem.
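For reference, a minimal sketch of driver-level logging plus timing around the execute call. DriverManager.setLogWriter is the non-deprecated replacement for setLogStream; the URL (an i-net-style placeholder) and the query are assumptions:

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryTiming {
    public static void main(String[] args) throws Exception {
        // Route DriverManager-level logging to stdout
        DriverManager.setLogWriter(new PrintWriter(System.out, true));

        String url = "jdbc:inetdae7:dbhost:1433"; // placeholder connection URL
        String sql = "SELECT ..."; // the complex query under investigation

        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            long start = System.currentTimeMillis();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* drain the results so fetch time is included */ }
            }
            System.out.println("executeQuery + fetch took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }
}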

Stale Lucene index when using multiple machines

I've got a Java/Hibernate/MySQL application up and running, and it works very nicely.
Recently I've been using Lucene (Hibernate Search) to speed up the searching and avoid round trips to the database by using projection. That works great too, except that the index gets stale when the application gets used on multiple machines. Lucene does a good job of updating the local index when changes are made locally, but it can't see changes from other machines.
Currently, I am:
- reindexing in full once a week
- updating a "last modified" time on all records, and updating the local index at startup based on anything modified since the last indexing
But this doesn't work for deletions. If something gets deleted on one machine, it still turns up in searches on other machines.
Is there a 'standard' way to deal with this? I can think of a few options, none of which excite me:
- reindex in full every night (still stale during the day, though)
- maintain a table of deleted records so that I can use it to update the local index (see the sketch below)
- perform a round trip to the db at startup to find all entries in the index but not in the db
- add some sort of trigger to the db to record something somewhere when something gets deleted (this would work for updates as well as deletions)
Hard to believe this is a new problem, but I couldn't find any convincing answers.
Any help much appreciated.
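For the deletion-log option, a rough sketch of what a startup sync could look like using Hibernate Search's FullTextSession. The deletion_log table, its columns, and the MyRecord entity are all hypothetical stand-ins for the real schema:

import java.util.Date;
import java.util.List;
import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

public class IndexSync {
    /** Placeholder for the application's indexed entity. */
    static class MyRecord { }

    // Remove index entries for rows deleted on other machines since the
    // last local index update. Assumes each delete also writes a row to
    // a deletion_log table recording (record_id, deleted_at).
    public static void purgeDeletions(Session session, Date lastIndexed) {
        FullTextSession fts = Search.getFullTextSession(session);
        List<?> deletedIds = session
                .createSQLQuery("SELECT record_id FROM deletion_log WHERE deleted_at > :since")
                .setParameter("since", lastIndexed)
                .list();
        for (Object raw : deletedIds) {
            Long id = ((Number) raw).longValue();
            fts.purge(MyRecord.class, id);
        }
        fts.flushToIndexes(); // apply the purges to the Lucene index
    }
}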

Increase JDBC performance for bulk inserts

Currently I have a throughput of about 350 MB/hour, which isn't a lot. The bottleneck is the insertions into the Sybase database, so I am looking for ways to increase the throughput.
I can only use free JDBC drivers - none of which support driver-level bulk inserts (as far as I know).
Currently I have autoCommit set to false (so it is transactional): preparing the statement, adding to the batch, and executing the batch every 2000 records (I have played with this number, but it doesn't help), then committing the transaction once all the inserts have been executed.
I'm currently using the jTDS driver.
So I am resorting to any hacks, tips and tricks anyone has to increase the throughput.
Additional details:
There are no triggers on the table.
Only constraint is a primary key consisting of 3 fields (with indices).
The statement is literally INSERT INTO table([col],[col1],[col2],[col3]) VALUES (?,?,?,?)
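For reference, the batching pattern described above looks roughly like this (the column names come from the statement in the question; everything else is a placeholder):

import java.sql.Connection;
import java.sql.PreparedStatement;

public class BulkInsert {
    private static final int BATCH_SIZE = 2000;

    public static void insertAll(Connection conn, Iterable<String[]> rows) throws Exception {
        conn.setAutoCommit(false); // one transaction around all batches
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO mytable (col, col1, col2, col3) VALUES (?, ?, ?, ?)")) {
            int count = 0;
            for (String[] row : rows) {
                for (int i = 0; i < 4; i++) {
                    ps.setString(i + 1, row[i]);
                }
                ps.addBatch();
                if (++count % BATCH_SIZE == 0) {
                    ps.executeBatch(); // send the batch in one round trip
                }
            }
            ps.executeBatch(); // flush the remainder
        }
        conn.commit();
    }
}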
I also ran into performance issues.
I came to know that issuing many separate queries over JDBC causes a lot of network overhead on the application and database servers, and the network round trips add delay.
Please consider the following; it might help:
- Multiple queries vs. a stored procedure (see the sketch below)
- Write the records to a file and insert them through the bcp utility; it would be much faster than what you are currently doing.
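For the stored-procedure option, one possible shape; the insert_batch procedure and its single delimited-string parameter are entirely hypothetical:

import java.sql.CallableStatement;
import java.sql.Connection;

public class ProcInsert {
    // Ship a chunk of records to a server-side procedure in one round trip;
    // the procedure parses the chunk and performs the inserts itself.
    public static void insertViaProc(Connection conn, String delimitedChunk) throws Exception {
        try (CallableStatement cs = conn.prepareCall("{call insert_batch(?)}")) {
            cs.setString(1, delimitedChunk);
            cs.execute();
        }
    }
}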

Eclipse and database giving different results

I have a large Java EE system connecting to an Oracle database via JDBC on Windows.
I am debugging a piece of code that does a retrieval of a field from the database.
The code retrieves the field, but when I run the exact same SELECT, copied from Eclipse, it does not yield any results.
Can any of you tell me why this would be the case?
I'm at a loss...
One possible reason is that the application sees uncommitted data which it has just created but not yet committed.
When you execute the same statement in a different session, that session doesn't see the data (depending on your transaction isolation level).
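A quick way to confirm this is a sketch with two separate connections (URL, credentials, and table are placeholders): the inserting session sees its own uncommitted row, while the second session does not until commit() is called.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VisibilityCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
        try (Connection writer = DriverManager.getConnection(url, "app", "secret");
             Connection reader = DriverManager.getConnection(url, "app", "secret")) {
            writer.setAutoCommit(false);
            try (Statement s = writer.createStatement()) {
                s.executeUpdate("INSERT INTO t (id) VALUES (1)");
            }
            System.out.println("writer sees row: " + exists(writer)); // true
            System.out.println("reader sees row: " + exists(reader)); // false, not committed yet
            writer.commit();
            System.out.println("reader sees row now: " + exists(reader)); // true
        }
    }

    private static boolean exists(Connection c) throws Exception {
        try (Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT 1 FROM t WHERE id = 1")) {
            return rs.next();
        }
    }
}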

DB2 jdbc performance

Doing profiling on a Java application running WebSphere 7 and DB2, we can see that we spend most of our time in the com.ibm.ws.rsadapter.jdbc package handling connections to and from the database.
How can we tune our JDBC performance?
What other strategies exist when database performance is a bottleneck?
Thanks
You should check your WebSphere manual for how to configure a connection pool.
Update 2021
Here is an introduction including code samples
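In outline, borrowing connections from the container's pooled DataSource instead of going through DriverManager looks like this; the JNDI name jdbc/myDS is a placeholder for whatever data source is configured in the WebSphere console:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledLookup {
    public Connection getConnection() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/myDS"); // placeholder JNDI name
        return ds.getConnection(); // borrowed from the pool; close() returns it
    }
}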
One cause of slow connect times is a deactivated database, which does not open its files and allocate its memory buffers and heaps until the first application attempts to connect to it. Ask your DBA to confirm that the database is active before running your tests. The LIST ACTIVE DATABASES command (run from the local DB2 server or over a remote attachment) should show your database in its output. If the database is not activated, have your DBA activate it explicitly with ACTIVATE DATABASE yourDBname. That will ensure that the database files and memory structures remain available even when the last user disconnects from the database.
Use GET MONITOR SWITCHES to ensure all your monitor switches are enabled for your database, otherwise you'll miss out on some potentially revealing performance details. The additional overhead of tracking the data associated with those monitor switches is minimal, while the value of the performance data is significant.
If the database is always active and things still seem slow, there are detailed DB2 traces called event monitors that log everything they encounter to a file, pipe, or DB2 table. The statement event monitor is one I turn to fairly often to analyze SQL statement efficiency and UOW hygiene. I also prefer taking the extra hit to log the event monitor records to a table rather than a file, so I can use SQL to search the data for all sorts of patterns. The db2evtbl utility makes it fairly easy to define the event monitor you want and create the tables to store its output. The SET EVENT MONITOR STATE command is how you start and stop the event monitor you've created.
In my experience, what you are seeing is pretty common. The question to ask is what exactly the DB2 connection is doing...
The first thing to do is to try to isolate the performance issue to one section of the site - i.e. is there one part of the application that sees poor performance? Once you find it, you can increase the trace logging to see whether a particular query is causing the trouble.
Additionally, if you chat to your DBAs, they may be able to run some analysis on the database to tell you which queries are taking the longest to return values; this may also help your troubleshooting.
Good luck!
- Connection pooling
- Caching
- DBAs
