Somewhere deep inside JBoss, in a Hibernate query, I'm catching an error that leaves me with a ResultSet. This code is a plugged-in custom data type.
It would be nice if I could simply do rs.getStatement().toString() and be done with it, but unfortunately that doesn't give away anything about the SQL statement that went into it.
I was thinking of doing something with ((PreparedStatement) rs.getStatement()).getMetaData().
I really wish Hibernate were a little more informative when it runs into errors.
Does anyone have a good solution for revealing which table and which column were in use when the exception occurred?
Simply enable SQL logging in the Hibernate configuration by setting the hibernate.show_sql property to true.
This is more reliable than examining the result set's metadata, since the metadata won't give you the WHERE clause.
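For example, a minimal snippet for hibernate.cfg.xml (the plain-properties equivalent is hibernate.show_sql=true):
<property name="hibernate.show_sql">true</property>
<!-- optional: pretty-prints the statements so they are easier to read in the log -->
<property name="hibernate.format_sql">true</property>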
One way you can debug Hibernate is by turning on its detailed logging.
For example, you can log all SQL statements as they are executed by turning on logging for org.hibernate.SQL. From here you should be able to narrow down the last statement executed prior to your exception.
Documentation can be found here.
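For example, with log4j the following enables statement logging; the org.hibernate.type category at TRACE additionally shows the bound parameter values, though it is very verbose:
log4j.logger.org.hibernate.SQL=DEBUG
log4j.logger.org.hibernate.type=TRACE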
Getting the MetaData for the ResultSet will not give you the information that was passed in. In Hibernate you can have the statements written out to a log file.
Most JDBC drivers allow you to set tracing so that you can debug.
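As a rough, driver-independent starting point (the vendor-specific trace flags vary, so treat this only as a sketch; it captures whatever the driver chooses to send through DriverManager):
// route DriverManager-level JDBC logging to stderr
java.sql.DriverManager.setLogWriter(new java.io.PrintWriter(System.err, true));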
I am working on a Java plugin interfacing with an H2 database. What I really want is an "INSERT IGNORE" statement; however, I'm aware that H2 doesn't support this. I am also aware of MERGE, but that is really not what I want: if the record exists, I don't want to change it.
What I am considering is to just run the insert and let the duplicate key exception happen. However, I don't want this to fill my log file. The DB call happens in an imported class that I can't change. So my questions are:
Is this a reasonable thing to do? I'm not one for letting errors happen, but this seems like the best way in this case (it should not happen all that much).
How can I keep this exception from hitting my log file? If there isn't a way to block the exception further down the stack, can I at least redirect where the stack trace gets written?
Thanks.
One solution is to use:
insert into test
select 1, 'Hello' from dual
where not exists(select * from test where id = 1)
This should work for all databases (except for the dual part; you may need to create your own dummy table with one row).
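If it helps, a minimal JDBC sketch of that pattern (assuming conn is an open java.sql.Connection and the table is test(id, name); as noted above, the dual part may need a dummy table of your own):
String sql = "insert into test "
           + "select ?, ? from dual "
           + "where not exists(select * from test where id = ?)";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setInt(1, 1);
ps.setString(2, "Hello");
ps.setInt(3, 1);
int rows = ps.executeUpdate(); // 0 means the row already existed; no exception is thrown
ps.close();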
To disable logging exceptions, append ;trace_level_file=0 to the database URL:
jdbc:h2:~/test;trace_level_file=0
or run the SQL statement:
set trace_level_file 0
I'm trying to find the root cause of a failure in an existing system. I don't know much about it, but it looks like the issue is in inserting a big row into PostgreSQL via Hibernate.
It fails to insert a record with a TEXT field that is about 50-100k in size.
That should not be an issue for PostgreSQL itself, but I guess there might be some settings/parameters in Hibernate which can affect it. Any suggestions for the search direction?
First I would look at the exception, whether it's on your local machine or in a server log, to get more clues. Since you say it happens when inserting a row, maybe you know where it's happening.
Try inserting a row where the text field has only a few bytes to see if that works. Maybe the connection is slow and inserting more than 50k causes a timeout followed by a rollback.
Also check whether that insertion belongs to a much larger transaction or is executed in a smaller one.
Try doing that insertion in plain jdbc (just temporarily) to see if that works and rule out connection issues.
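Something along these lines, for example (the table and column names here are made up, adjust them to your schema):
String url = "jdbc:postgresql://localhost:5432/mydb";
Connection conn = DriverManager.getConnection(url, "user", "password");
char[] big = new char[100000];
java.util.Arrays.fill(big, 'x');             // ~100k characters, the same size as the failing rows
PreparedStatement ps = conn.prepareStatement(
        "insert into some_table (id, big_text_column) values (?, ?)");
ps.setInt(1, 1);
ps.setString(2, new String(big));
ps.executeUpdate();                          // if this works, the problem is on the Hibernate side
ps.close();
conn.close();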
If the problem is not in the connection then you can start tweaking Hibernate parameters, for example disabling the second-level cache. The exception stack trace or a debugging session will be helpful to know which parameters to change.
I have an application which processes a very large file and sends the data to an Oracle database (using Java 6, Oracle 9).
In a loop, I use a PreparedStatement ps and add all the generated SQL statements with ps.addBatch().
I have a situation where a BatchUpdateException bue is thrown somewhere during ps.executeBatch(). At that point, the batch stops being executed.
I'd like the batch execution to continue, so that I can then check the failed updates in a method processUpdateCounts(bue.getUpdateCounts()).
The javadoc about class BatchUpdateException says:
After a command in a batch update fails to execute properly and a BatchUpdateException is thrown, the driver may or may not continue to process the remaining commands in the batch.
Is there a way to enforce continuation or do I need to alter my program so that it will execute the statement individually?
Just found this link:
JDBC Batch Update Problem
Apparently, it says there is no way with Oracle batch JDBC to proceed after the first failure, so I am resorting to sending the inserts one by one.
Thank you (and sorry for not searching harder to find the link above before asking).
There is a workaround that would allow you to keep using the batch feature. Instead of executing a plain INSERT statement, you can execute a PL/SQL block that deals with the error appropriately:
BEGIN
INSERT INTO your_table VALUES (?,?,...?);
EXCEPTION
WHEN OTHERS THEN
/* deal with the error. For example, log the error id and error msg
so that you can list them after the batch */
INSERT INTO error_table VALUES (?, sqlerrm);
END;
The performance should be on par with the batch insert (should be faster than individual execution of the statements). You could also call a stored procedure instead of a PL/SQL block.
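For illustration, a sketch of how that block could be driven with addBatch (your_table is assumed here to have just two columns; adjust the binds to the real schema, and MyRecord/records stand in for your parsed file data):
String block = "BEGIN "
             + "  INSERT INTO your_table VALUES (?, ?); "
             + "EXCEPTION WHEN OTHERS THEN "
             + "  INSERT INTO error_table VALUES (?, sqlerrm); "
             + "END;";
PreparedStatement ps = conn.prepareStatement(block);
for (MyRecord rec : records) {
    ps.setLong(1, rec.getId());
    ps.setString(2, rec.getValue());
    ps.setLong(3, rec.getId());           // the id written to error_table when this row fails
    ps.addBatch();
}
ps.executeBatch();                        // row-level errors are absorbed by the PL/SQL handler
ps.close();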
Oracle itself can, see here: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14250/oci04sql.htm#sthref616
However, it doesn't seem that this functionality is exposed to JDBC, not even in the Oracle-specific classes.
Because of the rather useless JDBC error handling ("the driver may or may not continue"), I always set a savepoint before the batch and perform a rollback to that point on error. That's the only JDBC-compliant way to establish a known state after an Oracle batch error, as far as I know.
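Roughly like this (assuming auto-commit is off on the connection):
Savepoint beforeBatch = connection.setSavepoint();
try {
    ps.executeBatch();
} catch (BatchUpdateException bue) {
    connection.rollback(beforeBatch);     // back to a known state, however far the driver got
    // from here: inspect bue.getUpdateCounts(), then re-run the statements individually if needed
}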
Since the specification doesn't seem to mandate it (as clearly shown by the Javadoc), any "forced" continuation would have to be done on a per-driver basis. A simple standard-compliant workaround would be to check the array returned by getUpdateCounts() and "re-run" the batch for those statements which failed. You can make this approach a bit more sophisticated by putting in logic for the number of retries.
Sure, this seems a bit messy (keeping track of the "batch" added and then checking the output) but would work across all databases and driver implementations. Just a thought...
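A rough sketch of that idea (batchRows is an assumed list holding the parameter sets in the same order they were added to the batch, and retrySingle is a hypothetical helper that re-executes one statement):
int[] counts = bue.getUpdateCounts();
for (int i = 0; i < counts.length; i++) {
    if (counts[i] == Statement.EXECUTE_FAILED) {
        retrySingle(batchRows.get(i));    // re-run just the failed statement
    }
}
// If the driver stopped at the first failure, counts.length can be shorter than the number of
// statements added, so everything after position counts.length still needs to be executed too.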
One thing that has always been a pain is logging SQL (JDBC) errors when you have a PreparedStatement instead of the query itself.
You always end up with messages like:
2008-10-20 09:19:48,114 ERROR LoggingQueueConsumer-52 [Logger.error:168] Error
executing SQL: [INSERT INTO private_rooms_bans (room_id, name, user_id, msisdn,
nickname) VALUES (?, ?, ?, ?, ?) ON DUPLICATE KEY UPDATE room_id = ?, name = ?,
user_id = ?, msisdn = ?, nickname = ?]
Of course I could write a helper method that retrieves the values and substitutes the question marks with the real values (and I'll probably go down that path if this question doesn't lead anywhere), but I just wanted to know whether this problem has been solved before and/or whether there is any generic logging helper that would do it automagically for me.
Edited after a few answers:
The libraries suggested so far seem to be suitable for logging the statements for debugging, which is no doubt useful. However, I am looking for a way of taking a PreparedStatement itself (not some subclass) and logging its SQL statement whenever an error occurs. I wouldn't want to deploy a production app with an alternate implementation of PreparedStatement.
I guess what I am looking for is a utility class, not a PreparedStatement specialization.
Thanks!
I tried log4jdbc and it did the job for me.
SECURITY NOTE: As of today August 2011, the logged results of a log4jdbc prepared statement are NOT SAFE to execute. They can be used for analysis, but should NEVER be fed back into a DBMS.
Example of a log generated by log4jdbc:
2010/08/12 16:30:56 jdbc.sqlonly
org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
8. INSERT INTO A_TABLE
(ID_FILE,CODE1,ID_G,ID_SEQUENCE,REF,NAME,BAR,DRINK_ID,AMOUNT,DESCRIPTION,STATUS,CODE2,REJECT_DESCR,ID_CUST_REJ)
VALUES
(2,'123',1,'2','aa','awe',null,'0123',4317.95,'Rccc','0',null,null,null)
The library is very easy to setup:
My configuration with HSQLDB:
jdbc.url=jdbc:log4jdbc:hsqldb:mem:sample
With Oracle:
jdbc.url=jdbc:log4jdbc:oracle:thin:@mybdd:1521:smt
jdbc.driverClass=net.sf.log4jdbc.DriverSpy
logback.xml :
<logger name="jdbc.sqlonly" level="DEBUG"/>
Too bad it wasn't on a maven repository, but still useful.
From what I tried, if you set the jdbc.sqlonly logger level to ERROR instead of DEBUG, you will only get the statements that are in error. However, I don't know whether this library has an impact on performance.
This is very database-dependent. For example, I understand that some JDBC drivers (e.g. Sybase, maybe MS SQL) handle prepared statements by creating a temporary stored procedure on the server and then invoking that procedure with the supplied arguments. So the complete SQL is never actually passed from the client.
As a result, the JDBC API does not expose the information you are after. You may be able to cast your statement objects to the internal driver implementation, but probably not: your appserver may well wrap the statements in its own implementation.
I think you may just have to bite the bullet and write your own class which interpolates the arguments into the placeholder SQL. This will be awkward, because you can't ask PreparedStatement for the parameters that have been set, so you'll have to remember them in a helper object, before passing them to the statement.
It seems to me that one of the utility libraries which wrap your driver's implementation objects is the most practical way of doing what you're trying to achieve, but it's going to be unpleasant either way.
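As a rough sketch of that helper-object idea (the interpolated string is for log output only and is not meant to be executed):
class LoggedParams {
    private final String sql;
    private final java.util.SortedMap<Integer, Object> params = new java.util.TreeMap<Integer, Object>();

    LoggedParams(String sql) { this.sql = sql; }

    // remember the value, then delegate to the real statement
    void set(java.sql.PreparedStatement ps, int index, Object value) throws java.sql.SQLException {
        params.put(index, value);
        ps.setObject(index, value);
    }

    // crude ?-substitution, good enough for an error message (breaks on '?' inside literals)
    String toLogString() {
        StringBuilder out = new StringBuilder();
        int param = 1;
        for (char c : sql.toCharArray()) {
            if (c == '?') {
                Object value = params.get(param++);
                out.append(value == null ? "NULL" : "'" + value + "'");
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}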
Use P6Spy: it's Oracle, MySQL, JNDI, JMX, Spring and Maven friendly. Highly configurable.
Simple and low level integration
Can print the stacktrace.
Can print only heavy calls, based on a time threshold.
If you are using MySQL, MySQL Connector's PreparedStatement.toString() does include the bound parameters, though third-party connection pools may break this.
Sub-class PreparedStatement to build up the query string as parameters are added. There's no way to extract the SQL from a PreparedStatement, as it uses a compiled binary form.
LoggedPreparedStatement looks promising, though I haven't tried it.
One advantage of these over a proxy driver that logs all queries is that you can modify the query string before logging it. For example in a PCI environment you might want to mask card numbers.