Invoking PostgreSQL function from Java - java

I have a small utility written in Java which takes a chunk of records from a PostgreSQL database and puts them in Solr.
After indexing in Solr, the records have to be marked as indexed in PostgreSQL.
For this purpose I call a PostgreSQL procedure and pass it an array of the records' primary keys.
This is a snippet of the Java method used for marking:
private void IndexDocumentsAndMark() throws SolrServerException, IOException, SQLException {
    logger.debug("Indexing start execution ...");
    UpdateResponse resp = solrClient.add(documents, solrCommitTimeOut);
    if (resp.getStatus() != 0) {
        logger.error("Indexing error: " + resp.getStatus());
    } else {
        logger.debug("Indexing completed. Indexed " + documents.size() + " documents ...");
        logger.debug("Solr commit start execution ...");
        solrClient.commit(); // Needs to be done in every bulk
        logger.debug("Solr commit completed ...");
        // Mark data in Postgresql as completed if bulk indexed successfully and the Postgresql procedure for marking is defined
        if (databaseQueryStringMark != null) {
            // Prepare statement
            PreparedStatement pstmt = con.prepareStatement(databaseQueryStringMark);
            // Prepare parameters (primary keys)
            java.sql.Array array = con.createArrayOf("integer", paramsArray.toArray());
            pstmt.setArray(1, array);
            logger.debug("Query for mark bulk data as completed start execution ...");
            // Execute procedure
            pstmt.executeQuery();
            logger.debug("Query for mark bulk data as completed execution completed ...");
        }
    }
}
The problem with this is:
If there is no high load on the PostgreSQL server, the utility works fine and there is no problem.
However, with high load on the PostgreSQL server, the marking procedure (the PostgreSQL function) runs for a long time (about two hours). After the marking process (the PostgreSQL function) finishes, I can see that my Java process is still in memory. It didn't terminate.
The code of the PostgreSQL function is:
CREATE OR REPLACE FUNCTION affiliates_service_json_data_mark("id$arr" integer[])
  RETURNS void AS
$BODY$
declare
begin
    update solr_interface.affiliates_json_data
    set is_transfered_to_solr = true
    where id in (select unnest(id$arr));
end;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION affiliates_service_json_data_mark(integer[])
OWNER TO postgres;
Could you suggest where the source of this problem is and how to fix it?

Related

JDBC SQL Server Stored Procedure with ResultSet, return value, and output parameters

I am in the process of converting an application from Jython to compiled Java. The application uses a host of SQL Server stored procedures to do CRUD operations. All of the procedures are defined with a return value that indicates status, and some output parameters used to provide feedback to the application. Most of the procedures also return a result set. I'm struggling with how to retrieve the return value, the result set, and the output parameters.
I normally work with C# so the nuances of JDBC are new to me. I've been testing with one of the procedures that does an insert to the database and then does a select on the inserted object.
Here's a simplified example procedure just to use for the purpose of illustration. The actual procedures are more complex than this.
CREATE PROCEDURE [dbo].[sp_Thing_Add]
(
    @Name NVARCHAR(50),
    @Description NVARCHAR(100),
    @ResultMessage NVARCHAR(200) = N'' OUTPUT
)
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @Result INT = -1
    SET @ResultMessage = 'Procedure incomplete'
    BEGIN TRY
        INSERT INTO Things (Name, Description) VALUES (@Name, @Description)
        SELECT * FROM Things WHERE ThingID = SCOPE_IDENTITY()
    END TRY
    BEGIN CATCH
        SELECT @Result = CASE WHEN ERROR_NUMBER() <> 0 THEN ERROR_NUMBER() ELSE 1 END,
               @ResultMessage = ERROR_MESSAGE()
        GOTO EXIT_SUB
    END CATCH
SUCCESS:
    SET @Result = 0
    SET @ResultMessage = N'Procedure completed successfully'
    RETURN @Result
EXIT_SUB:
    IF @Result <> 0
    BEGIN
        -- Do some error handling stuff
    END
    RETURN @Result
END
I can successfully retrieve the ResultSet using the following code.
var conn = myConnectionProvider.getConnection();
String sql = "{? = call dbo.sp_Thing_Add(?, ?, ?)}";
var call = conn.prepareCall(sql);
call.registerOutParameter(1, Types.INTEGER); // Return value
call.setString("Name", thing.getName());
call.setString("Description", thing.getDescription());
call.registerOutParameter("ResultMessage", Types.NVARCHAR);
ResultSet rs = call.executeQuery();
// Try to get the return value. This appears to close the ResultSet and prevents data retrieval.
//int returnValue = call.getInt(1);
// Normally there'd be a check here to make sure things executed properly,
// and if necessary the output parameter(s) may also be leveraged
if (rs.next()) {
    thing.setId(rs.getLong("ThingID"));
    // Other stuff actually happens here too...
}
If I try retrieving the return value using the line that's commented out, I get an error stating that the ResultSet is closed.
com.microsoft.sqlserver.jdbc.SQLServerException: The result set is closed.
I've been through the documentation and have seen how to do return values, output parameters, and result sets. But how can I leverage all 3?
Given the order of processing in your stored procedure (insert, select, then populate result parameters), you need to process the result set before you retrieve the return value with CallableStatement.getXXX.
The output is in the ResultSet rs retrieved from executeQuery().
You may want to use the execute method as such:
call.execute();
String returnValue = call.getString("ResultMessage");
You also want to map correctly to the output type.
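For illustration, here is a minimal sketch of that ordering against the question's procedure, using positional parameters throughout. The addThing helper and its error handling are assumptions for this sketch, not part of the original code.
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

// Hypothetical helper: runs sp_Thing_Add, reads the result set first,
// and only then reads the return value and output parameter.
static long addThing(Connection conn, String name, String description) throws SQLException {
    try (CallableStatement call = conn.prepareCall("{? = call dbo.sp_Thing_Add(?, ?, ?)}")) {
        call.registerOutParameter(1, Types.INTEGER);   // return value
        call.setString(2, name);
        call.setString(3, description);
        call.registerOutParameter(4, Types.NVARCHAR);  // ResultMessage output parameter

        long id = -1;
        boolean hasResultSet = call.execute();
        if (hasResultSet) {
            try (ResultSet rs = call.getResultSet()) {
                while (rs.next()) {
                    id = rs.getLong("ThingID");        // column from the procedure's SELECT
                }
            }
        }
        // Safe to read only after every result set has been consumed:
        int returnValue = call.getInt(1);
        String resultMessage = call.getString(4);
        if (returnValue != 0) {
            throw new SQLException(resultMessage);
        }
        return id;
    }
}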
Your connection gets closed once the query has been executed; basically the MySQL JDBC connection implements AutoCloseable implicitly. Since your result is the only entity from the procedure, please get the value by index 0 and do proper index-out-of-bounds exception handling.

Getting ResultSet from stored procedure within another stored procedure

I have a stored procedure that calls another stored procedure. The inner stored procedure returns a result set. After using a CallableStatement to execute the calling stored procedure, I am unable to get the result set returned by the called stored procedure.
I tried both execute and executeQuery for the execution of the callable statement. When I execute the calling stored procedure from SQL Server I get proper results.
Calling procedure:-
ALTER PROC [User].[Get_Data]
    (@UserID NVARCHAR(20))
AS
BEGIN
    Select 'User Data'
    Exec [Order].[Get_Order] @UserID
END
Called procedure:-
ALTER PROC [Order].[Get_Order]
    (@UserID NVARCHAR(20))
AS
BEGIN
    Select * from orders where userId=@UserID
END
Your outer stored procedure is returning two result sets:
The results from Select 'User Data'
The results from Exec [Order].[Get_Order] @UserID
You need to call .getMoreResults() in order to retrieve the second result set, e.g.,
CallableStatement cs = connection.prepareCall("{CALL Get_Data (?)}");
cs.setString(1, "gord");
ResultSet rs = cs.executeQuery();
System.out.println("[First result set]");
while (rs.next()) {
    System.out.printf("(No column name): %s%n", rs.getString(1));
}
cs.getMoreResults();
rs = cs.getResultSet();
System.out.println();
System.out.println("[Second result set]");
while (rs.next()) {
    System.out.printf("userId: %s, orderId: %s%n",
            rs.getString("userId"), rs.getString("orderId"));
}
producing
[First result set]
(No column name): User Data
[Second result set]
userId: gord, orderId: order1
userId: gord, orderId: order2
(Tested using mssql-jdbc-6.2.1.jre8.jar connecting to SQL Server 2014.)
For more details, see
How to get *everything* back from a stored procedure using JDBC
You cannot select the results of a stored procedure directly within SQL Server itself. You need to first insert the result into a temp table as per example below.
Example use:
-- Create a temporary table to store the results.
CREATE TABLE #UserOrderDetail
(
    UserData NVARCHAR(50) -- Your columns here
)
-- Insert result into temp table.
-- Note that the columns returned from the procedure have to match the columns in your temp table.
INSERT INTO #UserOrderDetail
EXEC [Order].[Get_Order] @UserID
-- Select the results out of the temp table.
SELECT *
FROM #UserOrderDetail
If the intent is simply to return one or more result sets to a client application, you should ensure that the SET NOCOUNT ON statement is added to the top of your stored procedures. This prevents SQL Server from sending the DONE_IN_PROC messages to the client for each statement in the stored procedure. Database libraries like ODBC, JDBC and OLE DB can get confused by the row counts returned by the various insert and update statements executed within SQL Server stored procedures. Your original procedure will look as follows:
ALTER PROC [User].[Get_Data]
(
    @UserID NVARCHAR(20)
)
AS
BEGIN
    SET NOCOUNT ON
    SELECT 'User Data'
    EXEC [Order].[Get_Order] @UserID
END
The correct way to do this with JDBC
Getting this right with JDBC is quite hard. The accepted answer by Gord Thompson might work, but it doesn't follow the JDBC spec to the word, so there might be edge cases where it fails, e.g. when there are interleaved update counts (known or accidental), or exceptions / messages.
I've blogged about the correct approach in detail here. The Oracle version is even more tricky, in case you need it. So here it goes:
// If you're daring, use an infinite loop. But you never know...
fetchLoop:
for (int i = 0, updateCount = 0; i < 256; i++) {
    // Use execute(), instead of executeQuery() to handle
    // leading update counts or exceptions
    boolean result = (i == 0)
        ? s.execute()
        : s.getMoreResults();
    // Warnings here
    SQLWarning w = s.getWarnings();
    for (int j = 0; j < 255 && w != null; j++) {
        System.out.println("Warning : " + w.getMessage());
        w = w.getNextWarning();
    }
    // Don't forget this
    s.clearWarnings();
    if (result)
        try (ResultSet rs = s.getResultSet()) {
            System.out.println("Result :");
            while (rs.next())
                System.out.println(" " + rs.getString(1));
        }
    else if ((updateCount = s.getUpdateCount()) != -1)
        System.out.println("Update Count: " + updateCount);
    else
        break fetchLoop;
}
Using jOOQ
Note that in case you're using jOOQ, you could leverage code generation for your stored procedures and call the simplified API to do this in a few lines only:
GetData p = new GetData();
p.setUserId("gord");
p.execute(configuration);
Results results = p.getResults();
for (Result<?> result : results)
    for (Record record : result)
        System.out.println(record);
Disclaimer: I work for the company behind jOOQ

PostgreSQL's XMIN in Oracle & MySQL

I'm trying to get the equivalent for this code on Oracle & MySQL
if (vardbtype.equals("POSTGRESQL")) {
    Long previousTxId = 0L;
    Long nextTxId = 0L;
    Class.forName("org.postgresql.Driver");
    System.out.println("----------------------------");
    try (Connection c = DriverManager.getConnection("jdbc:postgresql://localhost:5432/" + vardbserver, vardbuser, vardbpassword);
         PreparedStatement stmts = c.prepareStatement("SELECT * FROM " + vardbname + " where xmin::varchar::bigint > ? and xmin::varchar::bigint < ? ");
         PreparedStatement max = c.prepareStatement("select max(xmin::varchar::bigint) as txid from " + vardbname)
    ) {
        c.setAutoCommit(false);
        while (true) {
            stmts.clearParameters();
            try (ResultSet rss = max.executeQuery()) {
                if (rss.next()) {
                    nextTxId = rss.getLong(1);
                }
            }
            stmts.setLong(1, previousTxId);
            stmts.setLong(2, nextTxId + 1);
            try (ResultSet rss = stmts.executeQuery()) {
                while (rss.next()) {
                    String message = rss.getString("MESSAGE");
                    System.out.println("Message = " + message);
                    TextMessage mssg = session.createTextMessage(message);
                    System.out.println("Sent: " + mssg.getText());
                    producer.send(mssg);
                }
                previousTxId = nextTxId;
            }
            Thread.sleep(batchperiod2);
        }
    }
}
Basically, the code gets the contents of a database table and sends them to ActiveMQ. When the table is updated, it sends only the newly updated content (not re-sending what was already sent). But this code only works on PostgreSQL.
So I'm planning to create an "if" branch so I can use another database (Oracle and MySQL) to get the data.
I guess I must change this code, right?
try(Connection c = DriverManager.getConnection("jdbc:postgresql://localhost:5432/"+ vardbserver, vardbuser, vardbpassword);
PreparedStatement stmts = c.prepareStatement("SELECT * FROM "+ vardbname +" where xmin::varchar::bigint > ? and xmin::varchar::bigint < ? ");
PreparedStatement max = c.prepareStatement("select max(xmin::varchar::bigint) as txid from "+ vardbname)
) {
A couple thoughts supplemental to Thorsten's answer.
First, xmin is a system column which is, iirc, stored in the row header on disk. It is updated by writes. I have not yet run into a case where the transaction IDs don't increase. However, there has to be some wraparound point. For this reason I think you are better off with a trigger which stores the transaction IDs in another table for processing (and using that to process things).
For Oracle and MySQL, underlying storage is sufficiently different that I don't see how you can do this directly.
If you want a common solution you want a queue table where you can use a trigger to insert waiting copies, and then select/delete from that in your worker. This will likely work better on MySQL than on PostgreSQL, and for Oracle you want to look at index-organized tables. If autovacuum has trouble keeping up, ask more questions or hire a consultant.
After further research
InnoDB provides a DB_TRX_ID column which is similar. Note you cannot assume you have this column if you are running MySQL because MySQL has different table storage engines and not all even support transactions. So that is an important limitation.
I was unable to locate a similar column on Oracle.
This script looks at a table in intervals and puts out all messages inserted since the last loop.
PostgreSQL stores the transaction number that inserted a record, so this can be used to find the newly inserted records (although I am not sure whether it is guaranteed that a new transaction has a higher number than all previous ones, as the script assumes).
Other DBMSs don't have this pseudo column. So you would have to have a timestamp column in your table and use this instead. You'd have to change the two queries as well as the code to match the data type (I suppose java.sql.Timestamp instead of Long, but I am no Java guy).
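For illustration, a rough sketch of the polling loop keyed on a hypothetical last_modified timestamp column instead of xmin; the column name, method signature and variable names are assumptions, not from the original code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

// Hypothetical polling loop using a last_modified timestamp column
// instead of PostgreSQL's xmin pseudo column.
static void pollTable(String url, String user, String password,
                      String tableName, long batchPeriodMillis) throws Exception {
    try (Connection c = DriverManager.getConnection(url, user, password);
         PreparedStatement rows = c.prepareStatement(
                 "SELECT * FROM " + tableName + " WHERE last_modified > ? AND last_modified <= ?");
         PreparedStatement max = c.prepareStatement(
                 "SELECT MAX(last_modified) FROM " + tableName)) {
        Timestamp previous = new Timestamp(0L);   // epoch on the first run
        while (true) {
            Timestamp next = previous;
            try (ResultSet rs = max.executeQuery()) {
                if (rs.next() && rs.getTimestamp(1) != null) {
                    next = rs.getTimestamp(1);
                }
            }
            rows.setTimestamp(1, previous);
            rows.setTimestamp(2, next);
            try (ResultSet rs = rows.executeQuery()) {
                while (rs.next()) {
                    String message = rs.getString("MESSAGE");
                    // send the message to ActiveMQ as in the original code ...
                }
            }
            previous = next;
            Thread.sleep(batchPeriodMillis);
        }
    }
}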

Do not update row in ResultSet if data has changed

We are extracting data from various database types (Oracle, MySQL, SQL Server, ...). Once it is successfully written to a file we want to mark it as transmitted, so we update a specific column.
Our problem is that a user has the possibility to change the data in the meantime but might forget to commit. The record is blocked with a select for update statement. So it can happen that we mark something as transmitted which is not.
This is an excerpt from our code:
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet extractedData = stmt.executeQuery(sql);
writeDataToFile(extractedData);
extractedData.beforeFirst();
while (extractedData.next()) {
    if (!extractedData.rowUpdated()) {
        extractedData.updateString("COLUMNNAME", "TRANSMITTED");
        // code will stop here if user has changed data but did not commit
        extractedData.updateRow();
        // once committed the changed data is marked as transmitted
    }
}
The method extractedData.rowUpdated() returns false, because technically the user didn't change anything yet.
Is there any way to not update the row and detect if data was changed at this late stage?
Unfortunately I cannot change the program the user is using to change the data.
So you want to
Run through all rows of the table that have not been exported
Export this data somewhere
Mark these rows exported so your next iteration will not export them again
As there might be pending changes on a row, you don't want to mess with that information
How about:
You iterate over all rows.
for every row
    generate a hash value for the contents of the row
    compare column "UPDATE_STATUS" with calculated hash
    if no match
        export row
        store hash into "UPDATE_STATUS"
            if store fails (row locked)
                -> no worries, will be exported again next time
            if store succeeds (on data already changed by user)
                -> no worries, will be exported again as hash will not match
This might further slow your export as you'll have to iterate over everything instead of over everything WHERE UPDATE_STATUS IS NULL, but you might be able to do two jobs - one (fast) iterating over WHERE UPDATE_STATUS IS NULL and one slow and thorough WHERE UPDATE_STATUS IS NOT NULL (with the hash-rechecking in place). A Java sketch of the hash idea follows below.
If you want to avoid store failures/waits, you might want to store the hash/updated information into a second table copying the primary key plus the hash field value - that way user locks on the main table would not interfere with your updates at all (as those would be on another table).
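Here is that rough Java sketch of the hash approach; the table, the DATA and UPDATE_STATUS columns, and the method name are made up for illustration:
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Base64;

// Hypothetical sketch: export a row only when its content hash differs from the
// hash stored in UPDATE_STATUS, then try to store the new hash.
static void exportChangedRows(Connection conn) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    try (Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT ID, DATA, UPDATE_STATUS FROM SOURCE_TABLE")) {
        while (rs.next()) {
            String content = rs.getString("DATA");   // concatenate all relevant columns here
            String hash = Base64.getEncoder()
                    .encodeToString(md.digest(content.getBytes(StandardCharsets.UTF_8)));
            if (!hash.equals(rs.getString("UPDATE_STATUS"))) {
                // ... write the row to the export file here ...
                try (PreparedStatement upd = conn.prepareStatement(
                        "UPDATE SOURCE_TABLE SET UPDATE_STATUS = ? WHERE ID = ?")) {
                    upd.setString(1, hash);
                    upd.setLong(2, rs.getLong("ID"));
                    // With a short lock timeout configured, a blocked update can simply be
                    // skipped; the row will be exported again on the next run.
                    upd.executeUpdate();
                }
            }
        }
    }
}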
"a user [...] might forget to commit" > A user either commits or he doesn't. "Forgetting" to commit is tantamount to a bug in his software.
To work around that you need to either:
Start a transaction with isolation level SERIALIZABLE, and within that transaction:
Read the data and export it. Data read this way is blocked from being updated.
Update the data you processed. Note: don't do that with an updatable ResultSet, do that with an UPDATE statement. That way you don't need a CONCUR_UPDATABLE + TYPE_SCROLL_SENSITIVE, which is much slower than a CONCUR_READ_ONLY + TYPE_FORWARD_ONLY.
Commit the transaction.
That way the buggy software will be blocked from updating data you are processing.
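A minimal JDBC sketch of that first variant; SOURCE_TABLE, the STATUS column and the method name are placeholders, not from the original post:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical sketch: read and mark rows inside one SERIALIZABLE transaction,
// using a plain UPDATE instead of an updatable ResultSet.
static void exportAndMark(Connection conn) throws SQLException {
    conn.setAutoCommit(false);
    conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    try {
        try (Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stmt.executeQuery(
                     "SELECT * FROM SOURCE_TABLE WHERE STATUS IS NULL")) {
            while (rs.next()) {
                // ... write the row to the export file here ...
            }
        }
        try (Statement mark = conn.createStatement()) {
            mark.executeUpdate("UPDATE SOURCE_TABLE SET STATUS = 'TRANSMITTED' WHERE STATUS IS NULL");
        }
        conn.commit();   // releases the locks held by the serializable transaction
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    }
}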
Another way
Start a TRANSACTION at a lower isolation level (default READ COMMITTED) and within that transaction
Select the data with proper table hints, e.g. for SQL Server: TABLOCKX + HOLDLOCK (large datasets), or ROWLOCK + XLOCK + HOLDLOCK (small datasets), or PAGLOCK + XLOCK + HOLDLOCK. Having HOLDLOCK as a table hint is practically equivalent to having a SERIALIZABLE transaction. Note that lock escalation may escalate the latter two to table locks if the number of locks becomes too high.
Update the data you processed; Note: use an UPDATE statement. Lose the updatable/scroll_sensitive resultset.
Commit the TRANSACTION.
Same deal, the buggy software will be blocked from updating data you are processing.
In the end we had to implement optimistic locking. In some tables we already have a column that stores the version number. Some other tables have a timestamp column that holds the time of the last change (changed by trigger).
While a timestamp might not always be a reliable source for optimistic locking we went with it anyway. Several changes during a single second are not very realistic in our environment.
Since we have to know the primary key without describing it beforehand, we had to access the ResultSet metadata. Some of our databases do not support this (DB/2 legacy tables for example). We are still using the old system for these.
Note: The tableMetaData is an XML-config file where our description of the table is stored. This is not directly related to the metadata of the table in the database.
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet extractedData = stmt.executeQuery(sql);
writeDataToFile(extractedData);
extractedData.beforeFirst();
while (extractedData.next()) {
    if (tableMetaData.getVersion() != null) {
        markDataAsExported(extractedData, tableMetaData);
    } else {
        markResultSetAsExported(extractedData, tableMetaData);
    }
}
// new way with building of an update statement including the version column in the where clause
private void markDataAsExported(ResultSet extractedData, TableMetaData tableMetaData) throws SQLException {
    ResultSet resultSetPrimaryKeys = null;
    PreparedStatement versionedUpdateStatement = null;
    try {
        ResultSetMetaData extractedMetaData = extractedData.getMetaData();
        resultSetPrimaryKeys = conn.getMetaData().getPrimaryKeys(null, null, tableMetaData.getTable());
        ArrayList<String> primaryKeyList = new ArrayList<String>();
        String sqlStatement = "update " + tableMetaData.getTable() + " set " + tableMetaData.getUpdateColumn()
                + " = ? where ";
        if (resultSetPrimaryKeys.isBeforeFirst()) {
            while (resultSetPrimaryKeys.next()) {
                primaryKeyList.add(resultSetPrimaryKeys.getString(4));
                sqlStatement += resultSetPrimaryKeys.getString(4) + " = ? and ";
            }
            sqlStatement += tableMetaData.getVersionColumn() + " = ?";
            versionedUpdateStatement = conn.prepareStatement(sqlStatement);
            while (extractedData.next()) {
                versionedUpdateStatement.setString(1, tableMetaData.getUpdateValue());
                for (int i = 0; i < primaryKeyList.size(); i++) {
                    versionedUpdateStatement.setObject(i + 2, extractedData.getObject(primaryKeyList.get(i)),
                            extractedMetaData.getColumnType(extractedData.findColumn(primaryKeyList.get(i))));
                }
                versionedUpdateStatement.setObject(primaryKeyList.size() + 2,
                        extractedData.getObject(tableMetaData.getVersionColumn()), tableMetaData.getVersionType());
                if (versionedUpdateStatement.executeUpdate() == 0) {
                    logger.warn(Message.COLLECTOR_DATA_CHANGED, tableMetaData.getTable());
                }
            }
        } else {
            logger.warn(Message.COLLECTOR_PK_ERROR, tableMetaData.getTable());
            markResultSetAsExported(extractedData, tableMetaData);
        }
    } finally {
        if (resultSetPrimaryKeys != null) {
            resultSetPrimaryKeys.close();
        }
        if (versionedUpdateStatement != null) {
            versionedUpdateStatement.close();
        }
    }
}
// the old way as fallback
private void markResultSetAsExported(ResultSet extractedData, TableMetaData tableMetaData) throws SQLException {
    while (extractedData.next()) {
        extractedData.updateString(tableMetaData.getUpdateColumn(), tableMetaData.getUpdateValue());
        extractedData.updateRow();
    }
}

Obtain id of an insert in the same statement [duplicate]

This question already has answers here:
How to get the insert ID in JDBC?
(14 answers)
Closed 7 years ago.
Is there any way to insert a row in a table and get the newly generated ID, in only one statement? I want to use JDBC, and the ID will be generated by a sequence or will be an autoincrement field.
Thanks for your help.
John Pollancre
using getGeneratedKeys():
resultSet = pstmt.getGeneratedKeys();
if (resultSet != null && resultSet.next()) {
    lastId = resultSet.getInt(1);
}
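For context, a small self-contained sketch of that flow; the table and column names are made up, and the key part is asking for generated keys when preparing the statement:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical example: insert a row and read the generated key back.
static long insertAndGetId(Connection conn, String name) throws SQLException {
    String sql = "INSERT INTO things (name) VALUES (?)";
    try (PreparedStatement pstmt = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
        pstmt.setString(1, name);
        pstmt.executeUpdate();
        try (ResultSet keys = pstmt.getGeneratedKeys()) {
            if (keys.next()) {
                return keys.getLong(1);   // the auto-increment / sequence value
            }
        }
    }
    throw new SQLException("No generated key returned");
}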
You can use the RETURNING clause to get the value of any column you have updated or inserted. It works with triggers (i.e. you get the values actually inserted after the execution of triggers). Consider:
SQL> CREATE TABLE a (ID NUMBER PRIMARY KEY);
Table created
SQL> CREATE SEQUENCE a_seq;
Sequence created
SQL> VARIABLE x NUMBER;
SQL> BEGIN
2 INSERT INTO a VALUES (a_seq.nextval) RETURNING ID INTO :x;
3 END;
4 /
PL/SQL procedure successfully completed
x
---------
1
SQL> /
PL/SQL procedure successfully completed
x
---------
2
Actually, I think nextval followed by currval does work. Here's a bit of code that simulates this behaviour with two threads, one that first does a nextval, then a currval, while a second thread does a nextval in between.
public void checkSequencePerSession() throws Exception {
    final Object semaphore = new Object();
    Runnable thread1 = new Runnable() {
        public void run() {
            try {
                Connection con = getConnection();
                Statement s = con.createStatement();
                ResultSet r = s.executeQuery("SELECT SEQ_INV_BATCH_DWNLD.nextval AS val FROM DUAL ");
                r.next();
                System.out.println("Session1 nextval is: " + r.getLong("val"));
                synchronized (semaphore) {
                    semaphore.notify();
                }
                synchronized (semaphore) {
                    semaphore.wait();
                }
                r = s.executeQuery("SELECT SEQ_INV_BATCH_DWNLD.currval AS val FROM DUAL ");
                r.next();
                System.out.println("Session1 currval is: " + r.getLong("val"));
                con.commit();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    Runnable thread2 = new Runnable() {
        public void run() {
            try {
                synchronized (semaphore) {
                    semaphore.wait();
                }
                Connection con = getConnection();
                Statement s = con.createStatement();
                ResultSet r = s.executeQuery("SELECT SEQ_INV_BATCH_DWNLD.nextval AS val FROM DUAL ");
                r.next();
                System.out.println("Session2 nextval is: " + r.getLong("val"));
                con.commit();
                synchronized (semaphore) {
                    semaphore.notify();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    Thread t1 = new Thread(thread1);
    Thread t2 = new Thread(thread2);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
}
The result is as follows:
Session1 nextval is: 47
Session2 nextval is: 48
Session1 currval is: 47
I couldn't comment otherwise I would have added to Vinko Vrsalovic's post:
The id generated by a sequence can be obtained via
insert into table values (sequence.NextVal, otherval)
select sequence.CurrVal
ran in the same transaction as to get a consistent view.
Updating the sequence after getting a nextval from it is an autonomous transaction. Otherwise another session would get the same value from the sequence. So getting currval will not get the inserted id if another session has selected from the sequence in between the insert and the select.
Regards,
Rob
The value of the auto-generated ID is not known until after the INSERT is executed, because other statements could be executing concurrently and the RDBMS gets to decide how to schedule which one goes first.
Any function you call in an expression in the INSERT statement would have to be evaluated before the new row is inserted, and therefore it can't know what ID value is generated.
I can think of two options that are close to what you're asking:
Write a trigger that runs AFTER INSERT, so you have access to the generated ID key value.
Write a procedure to wrap the insert, so you can execute other code in the procedure and query the last generated ID.
However, I suspect what you're really asking is whether you can query for the last generated ID value by your current session even if other sessions are also inserting rows and generating their own ID values. You can be assured that every RDBMS that offers an auto-increment facility offers a way to query this value, and it tells you the last ID generated in your current session scope. This is not affected by inserts done in other sessions.
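For example, on MySQL that session-scoped value can be read with LAST_INSERT_ID(); a small sketch, with a made-up table and method name:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical MySQL illustration: LAST_INSERT_ID() is scoped to the current
// connection, so concurrent inserts from other sessions do not affect it.
static long insertAndGetLastId(Connection conn) throws SQLException {
    try (Statement s = conn.createStatement()) {
        s.executeUpdate("INSERT INTO things (name) VALUES ('abc')");
        try (ResultSet rs = s.executeQuery("SELECT LAST_INSERT_ID()")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}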
The id generated by a sequence can be obtained via
insert into table values (sequence.NextVal, otherval)
select sequence.CurrVal
ran in the same transaction as to get a consistent view.
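In JDBC terms, that pattern might look roughly like this on Oracle; the table and sequence names are placeholders, and CURRVAL is read in the same session as the insert:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical sketch of the NEXTVAL-then-CURRVAL pattern on Oracle.
static long insertUsingSequence(Connection conn) throws SQLException {
    try (Statement s = conn.createStatement()) {
        s.executeUpdate("INSERT INTO my_table (id, other_col) VALUES (my_seq.NEXTVAL, 'otherval')");
        try (ResultSet rs = s.executeQuery("SELECT my_seq.CURRVAL FROM DUAL")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}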
I think you'll find this helpful:
I have a table with an auto-incrementing id. From time to time I want to insert rows to this table, but want to be able to know what the pk of the newly inserted row is.
String query = "BEGIN INSERT INTO movement (doc_number) VALUES ('abc') RETURNING id INTO ?; END;";
OracleCallableStatement cs = (OracleCallableStatement) conn.prepareCall(query);
cs.registerOutParameter(1, OracleTypes.NUMBER );
cs.execute();
System.out.println(cs.getInt(1));
Source: Thread: Oracle / JDBC Error when Returning values from an Insert
I couldn't comment, otherwise I would have just added to dfa's post, but the following is an example of this functionality with straight JDBC.
http://www.ibm.com/developerworks/java/library/j-jdbcnew/
However, if you are using something such as Spring, it will mask a lot of the gory details for you. If that is of any assistance, just look at Spring's chapter 11, which covers the JDBC details. Using it has saved me a lot of headaches.
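For example, a rough sketch of that with Spring's JdbcTemplate and GeneratedKeyHolder; the table, column and method names are made up:
import java.sql.PreparedStatement;
import java.sql.Statement;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.support.GeneratedKeyHolder;
import org.springframework.jdbc.support.KeyHolder;

// Hypothetical Spring sketch: JdbcTemplate plus GeneratedKeyHolder hides the
// getGeneratedKeys() plumbing shown above.
static long insertThing(JdbcTemplate jdbcTemplate, String name) {
    KeyHolder keyHolder = new GeneratedKeyHolder();
    jdbcTemplate.update(con -> {
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO things (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS);
        ps.setString(1, name);
        return ps;
    }, keyHolder);
    return keyHolder.getKey().longValue();
}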
