Using JdbcTemplate to insert large CLOBs into Oracle fails - java

I am interfacing with an Oracle database via Spring's JdbcTemplate utility class, and I have tried these two variants of code:
jdbcTemplate.update("INSERT INTO my_table (title, content) VALUES (?, ?)", title, content);
-- or --
jdbcTemplate.update(new PreparedStatementCreator() {
    @Override
    public PreparedStatement createPreparedStatement(Connection conn) throws SQLException {
        OraclePreparedStatement ps = (OraclePreparedStatement) conn.prepareStatement("INSERT INTO my_table (title, content) VALUES (?, ?)");
        ps.setString(1, title);
        ps.setStringForClob(2, content);
        return ps;
    }
});
Where title is a traditional VARCHAR2, and content is a CLOB.
Either of these alternatives works for smaller values of content. However, when content is larger, nothing gets inserted into the CLOB column.
Interestingly enough, in both cases, title gets updated. It's as if the query just ignores content if there's too much, but never throws an error.
Does anybody know how I should solve this?
Thanks.
EDIT:
Per the answer from @GreyBeardedGeek, I tried using OracleLobHandler and DefaultLobHandler, to the same effect. Things work until my CLOBs reach a certain size.
I also tried the following code, again to the same effect:
Connection conn = db.getDataSource().getConnection();
CLOB clob = CLOB.createTemporary(conn, false, CLOB.DURATION_SESSION);
clob.setString(1, myString);
OraclePreparedStatement ps = (OraclePreparedStatement)conn.prepareStatement("UPDATE my_table SET blob = ?");
ps.setCLOB(1, clob);
ps.execute();
I'm baffled as to why every one of these methods works for smaller CLOBs but suddenly breaks for large ones. Is there some kind of configuration in the DB that I'm missing? Or is the problem with the code?

Okay, I feel pretty silly. As it turns out, even this simple code was storing the CLOB correctly:
jdbcTemplate.update("UPDATE my_table SET title = ?, content = ? WHERE id = ?", getTitle(), getContentString(), getId());
The issue was my code that retrieved the CLOB back from the database. The following is my speculation based on my code (and the fix): it seems as though smaller CLOBs get cached in memory and can be read at a later time (namely, they can still be read after the connection is closed), whereas larger CLOBs must be read while the connection is still open.
For me, this meant the fix was as simple as reading the CLOB's contents as soon as they were available to my object. In my case, I'm not really worried about memory issues, because I don't expect my CLOBs to contain inordinately large contents, so reading the value into memory immediately is an acceptable approach.
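For anyone hitting the same thing, a minimal sketch of the fix (the Article class and column names here are illustrative): materialize the CLOB with getString() inside the RowMapper, while the connection is still open.
List<Article> articles = jdbcTemplate.query(
        "SELECT id, title, content FROM my_table",
        (rs, rowNum) -> {
            Article a = new Article();              // hypothetical entity class
            a.setId(rs.getLong("id"));
            a.setTitle(rs.getString("title"));
            a.setContent(rs.getString("content"));  // reads the whole CLOB now, not lazily
            return a;
        });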

Oracle has, for as long as I can remember, required special handling for BLOBs and CLOBs.
Spring JDBC has org.springframework.jdbc.support.lob.OracleLobHandler for setting the value of BLOBs and CLOBs.
There's a pretty good full example of how to use it at http://techdive.in/spring/spring-handling-blobclob but basically, instead of ps.setStringForClob, you would do
oracleLobHandler.getLobCreator().setClobAsString(ps, 2, content);
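A fuller sketch of that approach, assuming the jdbcTemplate, title and content from the question (DefaultLobHandler from the same package also works with reasonably recent drivers; the callback class is org.springframework.jdbc.core.support.AbstractLobCreatingPreparedStatementCallback):
LobHandler lobHandler = new DefaultLobHandler();
jdbcTemplate.execute(
        "INSERT INTO my_table (title, content) VALUES (?, ?)",
        new AbstractLobCreatingPreparedStatementCallback(lobHandler) {
            @Override
            protected void setValues(PreparedStatement ps, LobCreator lobCreator) throws SQLException {
                ps.setString(1, title);
                lobCreator.setClobAsString(ps, 2, content);  // writes the string as a CLOB
            }
        });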

SqlLobValue(String content) can be used for CLOBs.
Follow the link:
http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jdbc/core/support/SqlLobValue.html
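A sketch of how that looks with JdbcTemplate, assuming the table and variables from the question above:
jdbcTemplate.update(
        "INSERT INTO my_table (title, content) VALUES (?, ?)",
        new Object[] { title, new SqlLobValue(content, new DefaultLobHandler()) },
        new int[] { Types.VARCHAR, Types.CLOB });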

Related

Is it safe to use JDBC setString() to store CLOB data type with Oracle database?

I need to store a large string in an Oracle database; its length would be at most 10000 bytes. I understand that there is some configuration in Oracle 12c that can increase the 4000-byte limit of VARCHAR2, but I do not have the option to use that configuration.
So I am inclined to use the CLOB data type. I have no previous experience with CLOBs, so I have my concerns.
I saw the following on SO
Java: How to insert CLOB into oracle database
I did not want to use any Oracle package to handle the CLOB type. My question is: is the following safe enough for my purpose?
To store:
try {
String myclobstring = "xx ........";
String sql = "Insert into mytable (clobfield) values (?)";
PreparedStatement stmt = conn.prepareStatement(sql);
stmt.setString(1, myclobstring);
.
.
}
To retrieve:
try {
String sql = "select clobfield from mytable";
stmt = conn.createStatement();
ResultSet result = stmt.executeQuery(sql);
String s = result.getString("clobfield");
.
.
}
You can create the CLOB as a String, though using a stmt.setCharacterStream might be a bit better for something very large. Here is an example I found that shows this nicely:
Storing Clobs
You can also use java.sql.Clob if you want to avoid the Oracle-specific code.
From Oracle's documentation on CLOB
Use java.sql.Clob and create the CLOB with the connection's createClob method.
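A sketch of that standard-JDBC route, assuming conn is an open java.sql.Connection on a JDBC 4 driver:
Clob clob = conn.createClob();
clob.setString(1, myclobstring);                  // position is 1-based
PreparedStatement stmt = conn.prepareStatement(
        "INSERT INTO mytable (clobfield) VALUES (?)");
stmt.setClob(1, clob);
stmt.executeUpdate();
clob.free();                                      // release the temporary LOB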
I thought I would post an answer, as I did not see an explicit one. So far setString()/getString() is storing and retrieving up to 10K characters in the CLOB field. I am getting back what I am storing without a problem.
I did see the following related SO post that gives me a bit more confidence.
How to use setClob() in PreparedStatement in JDBC

Will ResultSet be updated with the underlying database?

Before I explain my problem, I would like to say that I know the basics of JDBC but am not really used to it.
I am using an updatable result set to hold data from 2 different tables, as in the following sample code:
searchQry = "SELECT ct.CustomerName, ct.Email, ct.PhoneNo, ot.ItemName
FROM CUSTOMER_TABLE ct JOIN ORDER_Table ot
ON ct.OrderID = ot.OrderID";
prestmt = dbcon.prepareStatement(searchQry, ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
uprs = prestmt.executeQuery();
uprs.updateLong("PhoneNo", 7240987456L);
uprs.updateString("ItemName", "GTA5");
uprs.updateRow();
I would like to know: if I update the database from somewhere else (not using the same ResultSet object) while the result set uprs is still connected to the database, will uprs pick up the change, will it throw an error, or will it carry on with the old data? Sorry if this is a newbie question, but I can't really test it on my DB without knowing the outcomes and safety measures.
Please suggest a better way to update the underlying DB along with the data in the ResultSet, without transaction issues when changes are made from different places.
Using:
Oracle Database for JDBC connection.
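Whether uprs sees outside changes automatically is driver-dependent, even with TYPE_SCROLL_SENSITIVE. One standard-JDBC way to be certain is to re-read the current row explicitly; a sketch:
uprs.absolute(1);     // position on the row of interest
uprs.refreshRow();    // re-fetch that row's current values from the database
long phone = uprs.getLong("PhoneNo");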

Using CLOB in Java throws an exception

I am inserting CLOB data into a MySQL database... here is my code:
Clob cl = dbCon.createClob();
cl.setString(1, userAbout);
dbCon.setAutoCommit(false);
PreparedStatement insertClob = dbCon.prepareStatement("UPDATE user_data SET user_about=? WHERE user_id=?");
insertClob.setClob(1, cl);
insertClob.setInt(2, userId);
int count = insertClob.executeUpdate();
if (count == 1) {
    dbCon.commit();
    dbCon.close();
    out.write("success");
} else {
    dbCon.rollback();
    dbCon.close();
    out.print("error");
}
This throws an exception:
java.lang.AbstractMethodError: org.apache.tomcat.dbcp.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.createClob()Ljava/sql/Clob;
What's the problem here, and how can I solve it?
You don't need createClob() anyway. I find using setCharacterStream() to be much more stable (and much better supported by all JDBC drivers).
StringReader reader = new StringReader(userAbout);
PreparedStatement insertClob = dbCon.prepareStatement("UPDATE user_data SET user_about=? WHERE user_id=?");
insertClob.setCharacterStream(1, reader, userAbout.length());
insertClob.setInt(2,userId);
int count= insertClob.executeUpdate();
This also works with an INSERT statement. No need to create any intermediate clob (or blob) objects.
Note that the parameter indexes must match the placeholders in the UPDATE statement: 1 for the character stream, 2 for the id.
Many modern drivers also handle a "simple" setString() just as well for CLOB columns. It's worth trying out - would reduce the code even more.
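With such a driver, the code would reduce to something like this (worth verifying against your specific driver and database):
PreparedStatement insertClob = dbCon.prepareStatement(
        "UPDATE user_data SET user_about=? WHERE user_id=?");
insertClob.setString(1, userAbout);   // the driver converts the string to a CLOB
insertClob.setInt(2, userId);
int count = insertClob.executeUpdate();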

Copying Java ResultSet

I have a java.sql.ResultSet object that I need to update. However the result set is not updatable. Unfortunately this is a constraint on the particular framework I'm using.
What I'm trying to achieve here is taking data from a database, manipulating a small amount of it, and finally writing the data to a CSV file.
At this stage I think my best option is to create a new result set object and copy the contents of the original result set into the new one, manipulating the data as I do so.
However, I've hunted high and low on Google and don't seem to be able to determine how to do this or whether it's even possible at all.
I'm new to everything Java so any assistance would be gratefully received.
Thanks for the responses. In the end I found CachedRowSet, which is exactly what I needed. With this I was able to disconnect the ResultSet data and update it.
What's more, because CachedRowSet implements the ResultSet interface, I was still able to pass it to my file generation method, which requires an object that implements ResultSet.
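For reference, a sketch of that approach (writeCsv stands in for the poster's file generation method):
CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
crs.populate(resultSet);    // copy the read-only ResultSet into memory
resultSet.close();          // crs is now disconnected and updatable
while (crs.next()) {
    crs.updateString("name", crs.getString("name").trim());   // example manipulation
    crs.updateRow();
}
writeCsv(crs);              // works because CachedRowSet implements ResultSet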
The normal practice would be to map the ResultSet to a List<Entity> where Entity is your own class which contains information about the data represented by a single database row. E.g. User, Person, Address, Product, Order, etcetera, depending on what the table actually contains.
List<Entity> entities = new ArrayList<Entity>();
// ...
while (resultSet.next()) {
    Entity entity = new Entity();
    entity.setId(resultSet.getLong("id"));
    entity.setName(resultSet.getString("name"));
    entity.setValue(resultSet.getInt("value"));
    // ...
    entities.add(entity);
}
// ...
return entities;
Then you can access, traverse, and modify the list the usual Java way. Finally, when persisting it back to the DB, use a PreparedStatement to update the rows in batches in a single go.
String sql = "UPDATE entity SET name = ?, value = ? WHERE id = ?";
// ...
statement = connection.prepareStatement(sql);
for (Entity entity : entities) {
statement.setString(1, entity.getName());
statement.setInt(2, entity.getValue());
statement.setLong(3, entity.getId());
// ...
statement.addBatch();
}
statement.executeBatch();
// ...
Note that some DBs have a limit on the batch size; Oracle's JDBC driver has a limit of around 1000 items. In that case you may want to call executeBatch() every 1000 items, which is simple to do with a counter inside the loop, as in the sketch below.
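For example, flushing every 1000 items with a counter might look like this (the 1000 limit is an assumption; check your driver):
int count = 0;
for (Entity entity : entities) {
    statement.setString(1, entity.getName());
    statement.setInt(2, entity.getValue());
    statement.setLong(3, entity.getId());
    statement.addBatch();
    if (++count % 1000 == 0) {
        statement.executeBatch();   // flush a full batch
    }
}
statement.executeBatch();           // flush the remaining items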
See also:
Collections tutorial
PreparedStatement tutorial

What does Statement.setFetchSize(nSize) method really do in SQL Server JDBC driver?

I have this really big table with some millions of records every day, and at the end of every day I extract all the records of the previous day. I am doing this like:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
Statement.executeQuery(SQL);
The problem is that this program takes about 2 GB of memory, because it loads all the results into memory before processing them.
I tried setting Statement.setFetchSize(10), but it uses exactly the same amount of OS memory; it makes no difference. I am using the Microsoft SQL Server 2005 JDBC driver for this.
Is there any way to read the results in small chunks, the way the Oracle driver does, where the query initially returns only a few rows and more are fetched as you scroll down?
In JDBC, the setFetchSize(int) method is very important to performance and memory-management within the JVM as it controls the number of network calls from the JVM to the database and correspondingly the amount of RAM used for ResultSet processing.
Inherently if setFetchSize(10) is being called and the driver is ignoring it, there are probably only two options:
Try a different JDBC driver that will honor the fetch-size hint.
Look at driver-specific properties on the Connection (URL and/or property map when creating the Connection instance).
The RESULT-SET is the number of rows marshalled on the DB in response to the query.
The ROW-SET is the chunk of rows that are fetched out of the RESULT-SET per call from the JVM to the DB.
The number of these calls and resulting RAM required for processing is dependent on the fetch-size setting.
So if the RESULT-SET has 100 rows and the fetch-size is 10, there will be 10 network calls to retrieve all of the data, using roughly 10 * {row-content-size} RAM at any given time.
The default fetch-size is 10, which is rather small.
In the case posted, it would appear the driver is ignoring the fetch-size setting, retrieving all data in one call (large RAM requirement, but the minimum number of network calls).
What happens underneath ResultSet.next() is that it doesn't actually fetch one row at a time from the RESULT-SET. It fetches that from the (local) ROW-SET and fetches the next ROW-SET (invisibly) from the server as it becomes exhausted on the local client.
All of this depends on the driver as the setting is just a 'hint' but in practice I have found this is how it works for many drivers and databases (verified in many versions of Oracle, DB2 and MySQL).
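As a concrete sketch of applying the hint (process() is a hypothetical per-row handler; the autocommit line matters for some drivers, as another answer below notes):
conn.setAutoCommit(false);   // some drivers ignore the fetch size in autocommit mode
Statement stmt = conn.createStatement();
stmt.setFetchSize(100);      // hint: fetch 100 rows per network round trip
ResultSet rs = stmt.executeQuery("select col1, col2, coln from mytable where timecol = yesterday");
while (rs.next()) {
    process(rs);             // handle one row at a time; nothing else is retained
}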
The fetchSize parameter is a hint to the JDBC driver as to how many rows to fetch in one go from the database. But the driver is free to ignore this and do what it sees fit. Some drivers, like the Oracle one, fetch rows in chunks, so you can read very large result sets without needing lots of memory. Other drivers just read in the whole result set in one go, and I'm guessing that's what your driver is doing.
You can try upgrading your driver to the SQL Server 2008 version (which might be better), or the open-source jTDS driver.
You need to ensure that auto-commit on the Connection is turned off, or setFetchSize will have no effect.
dbConnection.setAutoCommit(false);
Edit: Remembered that when I used this fix it was Postgres-specific, but hopefully it will still work for SQL Server.
From the Statement interface doc:
void setFetchSize(int rows)
Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.
Also read the ebook J2EE and Beyond by Art Taylor.
Sounds like the MSSQL JDBC driver is buffering the entire result set for you. You can add a connection-string parameter saying selectMethod=cursor or responseBuffering=adaptive. If you are on version 2.0+ of the 2005 MSSQL JDBC driver, then response buffering should default to adaptive.
http://msdn.microsoft.com/en-us/library/bb879937.aspx
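An illustrative connection URL with those properties (server, port and database name are placeholders):
jdbc:sqlserver://localhost:1433;databaseName=mydb;selectMethod=cursor;responseBuffering=adaptive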
It sounds to me that you really want to limit the rows being returned in your query and page through the results. If so, you can do something like:
select * from (select rownum myrow, a.* from TEST1 a )
where myrow between 5 and 10 ;
You just have to determine your boundaries.
Try this:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
connection.setAutoCommit(false);
PreparedStatement stmt = connection.prepareStatement(SQL, SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY, SQLServerResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(2000);
stmt.set....
stmt.execute();
ResultSet rset = stmt.getResultSet();
while (rset.next()) {
    // ......
}
I had the exact same problem in a project. The issue is that even though the fetch size might be small enough, the JdbcTemplate reads all the results of your query and maps them out in a huge list, which might blow your memory. I ended up extending NamedParameterJdbcTemplate to create a function which returns a Stream of objects. That Stream is based on the ResultSet normally returned by JDBC, but will pull data from the ResultSet only as the Stream requires it. This will work if you don't keep a reference to all the objects this Stream emits. I drew a lot of inspiration from the implementation of org.springframework.jdbc.core.JdbcTemplate#execute(org.springframework.jdbc.core.ConnectionCallback). The only real difference has to do with what to do with the ResultSet. I ended up writing this function to wrap up the ResultSet:
private <T> Stream<T> wrapIntoStream(ResultSet rs, RowMapper<T> mapper) {
    CustomSpliterator<T> spliterator = new CustomSpliterator<T>(rs, mapper, Long.MAX_VALUE,
            Spliterator.NONNULL | Spliterator.IMMUTABLE | Spliterator.ORDERED);
    return StreamSupport.stream(spliterator, false);
}
private static class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T> {
    // won't put code for constructor or properties here
    // the idea is to pull from the ResultSet and push into the Stream
    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            // you can add some logic to close the stream/ResultSet automatically
            if (rs.next()) {
                T mapped = mapper.mapRow(rs, rowNumber++);
                action.accept(mapped);
                return true;
            } else {
                return false;
            }
        } catch (SQLException e) {
            // do something with this exception, e.g. wrap it
            throw new RuntimeException(e);
        }
    }
}
You can add some logic to make that Stream auto-closable; otherwise, don't forget to close it when you are done.
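For instance, the wrapping function could register the cleanup via Stream.onClose (a sketch; exception handling kept minimal):
Stream<T> stream = StreamSupport.stream(spliterator, false)
        .onClose(() -> {
            try {
                rs.close();   // also close the Statement/Connection if you own them
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        });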
