In my DB2 database, I have a table with a Blob:
CREATE TABLE FILE_STORAGE (
FILE_STORAGE_ID integer,
DATA blob(2147483647),
CONSTRAINT PK_FILE_STORAGE PRIMARY KEY (FILE_STORAGE_ID));
Using the db2jcc JDBC driver (db2jcc4-9.7.jar), I can read and write data in this table without any problems.
Now I need to be able to append data to existing rows, but DB2 gives the cryptic error
Invalid operation: setBinaryStream is not allowed on a locator or a reference. ERRORCODE=-4474, SQLSTATE=null
I use the following code to append my data:
String selectQuery = "SELECT DATA FROM FILE_STORAGE WHERE FILE_STORAGE_ID = ?";
try (PreparedStatement ps = conn.prepareStatement(selectQuery,
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE)) {
    ps.setInt(1, fileStorageID);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            Blob existing = rs.getBlob(1);
            try {
                // The following line throws the exception:
                try (OutputStream output = existing.setBinaryStream(existing.length() + 1)) {
                    // append the new data to the output:
                    writeData(output);
                } catch (IOException e) {
                    throw new IllegalStateException("Error writing output stream to blob", e);
                }
                rs.updateBlob(1, existing);
                rs.updateRow();
            } finally {
                existing.free();
            }
        } else {
            throw new IllegalStateException("No row found for file storage ID: " + fileStorageID);
        }
    }
}
My code uses the methods suggested in OutputStream to the BLOB column of a DB2 database table. There also seem to be other people with the same problem: Update lob columns using lob locator.
As a workaround, I currently read all the existing data into memory, append the new data in memory, and then write the complete data back into the blob. This works, but it's very slow, and it obviously gets slower with each update as the blob grows.
I do need to use Java to update the data, but apart from switching away from the JVM I am happy to try any alternative at all; I just need to append the data somehow.
Thanks in advance for any ideas!
If you only need to append data to the end of a BLOB column and don't want to read the entire value into your program, a simple UPDATE statement will be faster and more straightforward.
Your Java program could run something like this via executeUpdate():
UPDATE file_storage SET data = data || BLOB(?) WHERE file_storage_id = ?
The parameter markers for this would be populated by setBlob(1, dataToAppend) and setInt(2, fileStorageID).
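For illustration, a minimal sketch of running that statement, assuming conn is an open Connection and newData holds the bytes to append (both hypothetical names):
String sql = "UPDATE FILE_STORAGE SET DATA = DATA || BLOB(?) WHERE FILE_STORAGE_ID = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    Blob dataToAppend = conn.createBlob();   // wrap the new bytes in a Blob
    dataToAppend.setBytes(1, newData);
    ps.setBlob(1, dataToAppend);
    ps.setInt(2, fileStorageID);
    if (ps.executeUpdate() == 0) {
        throw new IllegalStateException("No row found for file storage ID: " + fileStorageID);
    }
    dataToAppend.free();
}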
I would like to insert a file that has no extension; it is a text file without the .txt extension. This is my code:
public boolean setData(List<String> data) {
    SABConnection connection = new SABConnection();
    boolean bool = false;
    try {
        PreparedStatement ps = connection.connectToSAB()
                .prepareStatement("INSERT INTO AS400.ZXMTR03 VALUES (?)");
        if (!data.isEmpty()) {
            for (String file : data) {
                File fi = new File(file);
                FileInputStream fis = new FileInputStream(file);
                ps.setAsciiStream(1, fis);
                int done = ps.executeUpdate();
                if (done > 0) {
                    System.out.println("File: " + fi.getName() + " Inserted successfully");
                } else {
                    System.out.println("Insertion of File: " + fi.getName() + " failed");
                }
            }
            bool = true;
        } else {
            System.out.println("The directory is empty");
        }
    } catch (Exception e) {
        System.out.println("error caused by: " + e.getMessage());
    }
    return bool;
}
I keep getting a data truncation error.
PS: the file ZXMTR03 doesn't have columns. To insert such a thing manually into the AS400 I write this statement: insert into ZXMTR03 (select * from n.niama/ZXMTR02) and it works. When I write insert into ZXMTR03 values ('12345') it works too.
I'm using the JT400 library.
You can't insert a stream file into a database table like that.
Assuming your text file has EOL indicators, you'd need to split it into rows and insert one row at a time, or insert some fixed number of rows at a time using a multi-row insert.
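A minimal sketch of the row-at-a-time variant with a JDBC batch, assuming the table has a single character column wide enough for one line (an assumption on my part) and conn is an open connection:
try (BufferedReader reader = new BufferedReader(new FileReader(file));
     PreparedStatement ps = conn.prepareStatement("INSERT INTO AS400.ZXMTR03 VALUES (?)")) {
    String line;
    while ((line = reader.readLine()) != null) {
        ps.setString(1, line); // one row per line of the text file
        ps.addBatch();
    }
    ps.executeBatch(); // send the rows in one round trip
}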
Also, you're wrong in thinking ZXMTR03 doesn't have columns; every DB table on the IBM i has at least one column.
Alternatively, you could copy the text file to the Integrated File System (IFS), which supports stream files, using FTP, SMB, NFS, etc., or even the JT400 IFSFile classes, and then make use of the Copy From Import File (CPYFRMIMPF) command or perhaps the IFS Copy (CPY) command. If on a current version of the OS, you might want to check out the QSYS2.IFS_READ() table functions.
I am working on a proof of concept to read blob content from Oracle, manipulate it, and then insert it back as a new record using Java. Currently I am trying to just read and then write back the blob content to Oracle, but I'm facing issues. I am able to write it back, but it looks like the file is not getting inserted completely: I get an error when trying to view/download the Blob via SQL Developer.
Here is the code used to read and write back:
conn.setAutoCommit(false);
stmt = conn.createStatement();
sql = "SELECT DOC_ID, NAME, BLOB_CONTENT FROM DOCUMENTS WHERE DOC_ID = " + String.valueOf(docid);
ResultSet rs_docs = stmt.executeQuery(sql);
while (rs_docs.next()) {
    Show_Message("Conversion sub process started ...");
    doc_name = rs_docs.getString("name");
    Blob ib = rs_docs.getBlob("blob_content");
    Show_Message("Uploading converted pdf to database ... ");
    InputStream input = ib.getBinaryStream();
    String filename = doc_name;
    CallableStatement callableStatement = conn.prepareCall(
            "begin INSERT INTO DOCUMENTS(NAME, BLOB_CONTENT) VALUES(?,?) RETURNING DOC_ID INTO ?; end;");
    callableStatement.setString(1, filename);
    callableStatement.setBinaryStream(2, input, input.available());
    callableStatement.registerOutParameter(3, Types.INTEGER);
    callableStatement.executeQuery();
    docid = callableStatement.getInt(3);
    callableStatement.close();
    Show_Message("New record created, doc # " + String.valueOf(docid));
    Show_Message("Conversion Process completed!!!");
}
stmt.close();
conn.commit();
conn.close();
rs_docs.close();
Connection.prepareCall() is for creating a Statement that calls a stored procedure. If you want to do that then you should define an SP in the database, outside the scope of this method, and call it by name via your [Callable]Statement. But if the only point is to get the DOC_ID assigned to the new row, then there are other ways, such as Statement.getGeneratedKeys().
Do not use InputStream.available() to determine the size of the blob. "Available" means the number of bytes readable without blocking, right now, and it is allowed to be an arbitrarily inaccurate underestimate -- even zero. It is not a reliable measure of the total number of bytes that may eventually be readable from the stream. Instead, use the Blob's length() method.
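Putting both points together, a minimal sketch using a plain PreparedStatement, the Blob's length(), and getGeneratedKeys(), reusing the conn, filename, and ib variables from the question:
String insert = "INSERT INTO DOCUMENTS (NAME, BLOB_CONTENT) VALUES (?, ?)";
try (PreparedStatement ps = conn.prepareStatement(insert, new String[] {"DOC_ID"})) {
    ps.setString(1, filename);
    ps.setBinaryStream(2, ib.getBinaryStream(), ib.length()); // exact length from the Blob
    ps.executeUpdate();
    try (ResultSet keys = ps.getGeneratedKeys()) {
        if (keys.next()) {
            docid = keys.getInt(1); // the DOC_ID assigned to the new row
        }
    }
}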
I have one script which fetches around 25,000 different ID values and uses them to make some changes in another table. But the programmer wrote code which looks each ID (dialid in the code) up in a table of 10 million records (the query built at the top of getDialId below), and each query in the loop takes around 1 second to execute. My idea is to fetch the last 30 days of records with SQL, put them into an array, and check only the array.
My question is: how do I do that in Java? Is there an equivalent of PHP's in_array function? I'm solid in PHP, but a beginner in Java...
private Integer getDialId(int predictiveId) {
    Integer dialid = null;
    StringBuilder sql = new StringBuilder("SELECT dialid from dial where PREDICTIVE_DIALID=");
    sql.append(predictiveId); // this predictiveId is calculated in another part of the code
    ResultSet rsDialId = null;
    Statement s1 = null;
    try {
        s1 = oracle.getConn().createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_UPDATABLE, ResultSet.CLOSE_CURSORS_AT_COMMIT);
        rsDialId = s1.executeQuery(String.valueOf(sql));
        if (rsDialId.next()) {
            dialid = rsDialId.getInt("dialid");
        }
    } catch (SQLException ex) {
        Logger.getLogger(MediatelCdrSync.class.getName()).log(Level.SEVERE, null, ex);
    } finally {
        try {
            if (s1 != null) {
                s1.close();
            }
            if (rsDialId != null) {
                rsDialId.close();
            }
        } catch (SQLException ex) {
            Logger.getLogger(MediatelCdrSync.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
    System.out.println("DIALID = " + dialid);
    return dialid;
}
Thanks
If you have a performance problem, I'd start by finding out why the query takes one second per execution; if it's database time because the dial table has no index on the PREDICTIVE_DIALID column, you can do very little at the Java level.
Anyway, the JDBC code reveals some problems, especially when used with an Oracle database.
The biggest issue is that you are hardcoding your query parameter, causing Oracle to re-"hard parse" the query every time; the second (minor) one is that the result set is scrollable and updatable while you only need to load the first row. If you want to make a small modification to your code, change it to something like this pseudocode:
PreparedStatement ps = connection.prepareStatement(
        "SELECT dialid FROM dial WHERE PREDICTIVE_DIALID = ?");
for (int i = 0; i < 10; i++) { // your 25,000 loop elements go here
    // this should be the start of the body of your getDialId function,
    // which now also takes the prepared statement as a parameter
    ps.setInt(1, i);
    ResultSet rs = ps.executeQuery();
    if (rs.next()) {
        rs.getInt("dialid");
    }
    rs.close();
    // your getDialId ends here
}
ps.close();
With this minimal Java change you should see a performance increase, but you must check the performance of the single query, since if an index is missing there is very little you can do at the Java level.
Another, more complicated solution is to create a temporary table, fill it with all 25,000 predictiveId values, and then issue a query that joins dial and your temporary table; with one result set (and one query) you can find all the dialid values you need. A JDBC batch insert into the temp table speeds up the insertion noticeably, as sketched below.
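A minimal sketch of that approach, assuming a temporary table TMP_PREDICTIVE_IDS with a single PREDICTIVE_DIALID column already exists (the table name is hypothetical):
try (PreparedStatement ins = conn.prepareStatement(
        "INSERT INTO tmp_predictive_ids (predictive_dialid) VALUES (?)")) {
    for (int id : predictiveIds) { // the ~25,000 values
        ins.setInt(1, id);
        ins.addBatch();
    }
    ins.executeBatch(); // one round trip instead of 25,000
}
// A single join then returns every dialid at once.
try (Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery(
         "SELECT d.predictive_dialid, d.dialid FROM dial d "
       + "JOIN tmp_predictive_ids t ON t.predictive_dialid = d.predictive_dialid")) {
    while (rs.next()) {
        // collect rs.getInt("dialid"), e.g. into a Map keyed by predictive_dialid
    }
}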
If you are planning to fetch fewer records and store the result in some array, then I think it is better to limit your search by creating a view in the database with a limited set of records (say, records for the last two years) and to use that view in your select query:
"SELECT dialid from dial_view WHERE PREDICTIVE_DIALID = "
Hope it will help :)
Here is the situation:
I have a String.
I want to insert a record into a table, with the String going into a column whose datatype is CLOB.
I would like to use the setClob() method of the PreparedStatement.
So my question is: how do I create a Clob object from this String so that I can use the setClob() method?
Thanks in advance,
Naveen
If you want to write a String to a CLOB column, just use PreparedStatement.setString.
If you want to know how to create a Clob from a String, this is it:
Clob clob = connection.createClob();
clob.setString(1, str);
You may create the Clob from a Connection object as follows:
Connection con = null; // write code to make a connection object
Clob clob = con.createClob();
String str = "this is a string";
clob.setString(1, str);
PreparedStatement ps = null; // write code to create a prepared statement
ps.setClob(4, clob);
Or you may try the alternative code as follows:
// alternative way
String str = "this is a string";
ByteArrayInputStream inputStream = new ByteArrayInputStream(str.getBytes());
InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
int parameterIndex = 1;
PreparedStatement ps = null; // write code to create a prepared statement
ps.setClob(parameterIndex, inputStreamReader);
A CLOB already holds character data, like a String, so just use .setString() and that should work. One thing about Oracle JDBC, if you are using it: it likes the CLOB input parameter to be the last one in your statement, especially with large data.
Example:
INSERT INTO MY_TABL (NUM_COL, VARC_COL1, VARC_COL2, TS_COL, CLOB_COL)
VALUES (?,?,?,?,?);
As you can see, CLOB_COL is of type CLOB and should be last, so that when you call .setString(5, value), 5 is the last parameter index.
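A minimal sketch of binding in that order (the variable names are placeholders):
PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO MY_TABL (NUM_COL, VARC_COL1, VARC_COL2, TS_COL, CLOB_COL) VALUES (?,?,?,?,?)");
ps.setInt(1, num);
ps.setString(2, varc1);
ps.setString(3, varc2);
ps.setTimestamp(4, ts);
ps.setString(5, longText); // the CLOB column, bound last
ps.executeUpdate();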
I had a specific variation of this issue which required inserting a CLOB into an Oracle database from Java code running on that database. None of the answers here quite worked for me.
I eventually found a solution; the trick is to use oracle.sql.CLOB.
This is the approach I discovered:
create table test_clob (
c clob
);
create or replace and compile java source named java_clob_insert as
import java.sql.Connection;
import java.sql.PreparedStatement;
import oracle.sql.CLOB;
import java.io.Writer;
public class JavaClobInsert {
    public static void doInsert() {
        try {
            // create the connection and statement
            Connection oracleConn =
                (new oracle.jdbc.OracleDriver()).defaultConnection();
            String stmt = "INSERT INTO test_clob values (?)";
            PreparedStatement oraclePstmt = oracleConn.prepareStatement(stmt);

            // Imagine we have a mysql longtext or some very long string
            String s = "";
            for (int i = 0; i < 32768; i++) {
                s += i % 10;
            }

            // Initialise the Oracle CLOB
            CLOB clob;
            clob = CLOB.createTemporary(oracleConn, true, CLOB.DURATION_CALL);

            // Good idea to check the string is not null before writing to the clob
            if (s != null) {
                Writer w = clob.setCharacterStream(1L);
                w.write(s);
                w.close();
                oraclePstmt.setClob(1, clob);
            } else {
                oraclePstmt.setString(1, "");
            }

            // clean up
            oraclePstmt.executeUpdate();
            oracleConn.commit();
            oraclePstmt.close();
            oracleConn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
/
create or replace procedure clob_insert as language java name
'JavaClobInsert.doInsert()';
/
begin
clob_insert;
end;
/
select *
from test_clob;
Today I had an issue with a Clob field because I was using "setString" to set the parameter, but then I got this error while testing with a very long string: "setString can handle only Strings with less than 32766 characters".
I used connection.createClob but it gave me this exception:
java.lang.AbstractMethodError: org.apache.tomcat.dbcp.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.createClob()Ljava/sql/Clob;
So, looking for this exception, I found this: using CLOB in java throwing exception, and the accepted answer (using setCharacterStream instead of setClob) worked for me.
Copy/pasted from the accepted answer (so all credit goes to a_horse_with_no_name):
StringReader reader = new StringReader(userAbout);
PreparedStatement insertClob = dbCon.prepareStatement("UPDATE user_data SET user_about=? WHERE user_id=?");
insertClob.setCharacterStream(1, reader, userAbout.length());
insertClob.setInt(2,userId);
My answer is slightly different from the others...
I had a PreparedStatement, stmt, and was using stmt.setString(colIndex, value) for updates to my database that had a CLOB column.
This worked without fail for me when inserting and updating rows in the database table.
When others tested this code, though, they would occasionally see an exception:
ORA-22275: invalid LOB locator
It only seemed to happen on updates (not inserts) when value was null; I'm not sure why. And I only ever had this occur with Oracle databases, not MSSQL or DB2.
Anyway, to fix it I changed the logic to test for a null value:
if (value == null) {
    stmt.setNull(colIndex, java.sql.Types.CLOB);
} else {
    stmt.setString(colIndex, value);
}
This worked without fail for me and others!
I used the following code to update an Oracle Clob:
CLOB tempClob = null;
Connection conn = null;
PreparedStatement pStmt = null;
try {
    conn = getConnection();
    pStmt = conn.prepareStatement("UPDATE PROGRAM_HISTORY SET DETAILS = ? WHERE ID = 12");
    tempClob = CLOB.createTemporary(conn, true, CLOB.DURATION_SESSION);
    tempClob.open(CLOB.MODE_READWRITE);
    Writer tempClobWriter = tempClob.getCharacterOutputStream();
    tempClobWriter.write(clobData);
    tempClobWriter.flush();
    tempClobWriter.close();
    tempClob.close();
    pStmt.setClob(1, tempClob);
    pStmt.execute();
} catch (Exception ex) { // trap errors
    System.out.println(" Error Inserting Clob : " + ex.toString());
    ex.printStackTrace();
} finally {
    if (tempClob != null) tempClob.freeTemporary();
    pStmt.close();
    conn.close();
}
As you can see, after creating the temporary clob I call tempClob.open(CLOB.MODE_READWRITE) to open it and tempClob.close() to close it later. My question is: is this necessary, and if yes, why? Some example code I found on Google doesn't have this step.
My second question is: is it required to free the temporary clob in the finally block? Must we release a temporary clob just like a connection after use, or is it released automatically?
The Oracle session object will keep a reference to the CLOB, so the garbage collector won't touch it. It will be freed automatically when the session is closed.
Note that the actual temp CLOB memory will not exist somewhere in the Java VM, but either in the Oracle server process (PGA) or in the temp tablespace (disk), depending on your database configuration and the current workload.
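In practice, then, with pooled connections where the session stays open for a long time, it can be worth freeing a temporary CLOB explicitly instead of waiting for the session to end; a minimal sketch, assuming conn is an open Oracle connection:
CLOB tempClob = CLOB.createTemporary(conn, true, CLOB.DURATION_SESSION);
try {
    // ... write to the clob and bind it to a statement here ...
} finally {
    tempClob.freeTemporary(); // releases the PGA / temp tablespace space right away
}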