I have a producer thread in Java pulling items from an Oracle table every n milliseconds.
The current implementation relies on a Java timestamp to make sure each row is retrieved only once.
My objective is to get rid of the timestamp pattern and directly update the same items I'm pulling from the database.
Is there a way to SELECT a set of items and UPDATE them at the same time to mark them as "Being processed"?
If not, would a separate UPDATE query relying on the IN clause be a major performance hit?
I tried using a temporary table for that purpose, but I've seen that performance was severely affected.
Don't know if it helps, but the application is using iBatis.
If you are using Oracle 10g or higher, you can use the RETURNING clause of the UPDATE statement. If you wish to retrieve more than one row, you can use BULK COLLECT.
Here is a link to some examples:
http://psoug.org/snippet/UPDATE-with-RETURNING-clause_604.htm
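If the RETURNING route turns out to be hard to reach through iBatis, the fallback from the question (a separate UPDATE with an IN clause) is usually not a major hit for modest id lists. A minimal sketch of building the parameterized statement; the table and column names (ITEMS, STATUS, ID) are made up for illustration:

```java
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class InClauseBuilder {
    // Builds "UPDATE ITEMS SET STATUS = ? WHERE ID IN (?,?,...)" with one
    // placeholder per id, for use with a PreparedStatement. The first ?
    // is bound to the "Being processed" status, the rest to the ids.
    public static String buildMarkProcessing(int idCount) {
        String placeholders = IntStream.range(0, idCount)
                .mapToObj(i -> "?")
                .collect(Collectors.joining(","));
        return "UPDATE ITEMS SET STATUS = ? WHERE ID IN (" + placeholders + ")";
    }
}
```

Binding real values to the placeholders happens through the PreparedStatement as usual; keeping the id list bounded (a few hundred at most) avoids Oracle's 1000-element IN-list limit.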
Related
I have a Korma based software stack that constructs fairly complex queries against a MySQL database. I noticed that when I am querying for datetime columns, the type that I get back from the Korma query changes depending on the syntax of the SQL query being generated. I've traced this down to the level of clojure.java.jdbc/query. If the form of the query is like this:
select modified from docs order by modified desc limit 10
then I get back maps corresponding to each database row in which :modified is a java.sql.Timestamp. However, sometimes our query generator generates more complex union queries, such that we need to apply an order by ... limit ... constraint to the final result of the union. Korma does this by wrapping the query in parentheses. Even with only a single subquery--i.e., a simple parenthesized select--so long as we add an "outer" order by ..., the type of :modified changes.
(select modified from docs order by modified desc limit 10) order by modified desc
In this case, clojure.java.jdbc/query returns :modified values as strings. Some of our higher level code isn't expecting this, and gets exceptions.
We're using a fork of Korma, which is using an old (0.3.7) version of clojure.java.jdbc. I can't tell if the culprit is clojure.java.jdbc, the MySQL JDBC driver, or MySQL itself. Anyone seen this and have ideas on how to fix it?
Moving to the latest jdbc in a similar situation changed several other things for us and was a decidedly "non-trivial" task. I would suggest getting off of the Korma fork soon and then debugging this.
For us the changes centered on what Korma returned from update calls, which changed between the versions of the backing jdbc. It was well worth getting current, even though it's a moderately painful process.
Getting current with jdbc will give you fresh new problems!
Best of luck with this :-) These things tend to be fairly specific to the DB server you are using.
Other options for you are to have a policy of always specifying an order-by parameter, or to build a library that coerces the strings into dates. Both of these carry some long-term technical debt.
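The coercion-library option can be a thin helper that normalizes whatever the driver hands back before the higher-level code sees it. A sketch in Java (the project is Clojure/Korma, but the types involved are the JDBC ones); the string format assumed here is the JDBC escape format MySQL uses for datetime values:

```java
import java.sql.Timestamp;

public class TimestampCoercion {
    // Coerces a column value that may arrive either as java.sql.Timestamp
    // or as a "yyyy-MM-dd HH:mm:ss[.fff]" string into a Timestamp.
    // Anything else is treated as a programming error.
    public static Timestamp coerce(Object value) {
        if (value instanceof Timestamp) {
            return (Timestamp) value;
        }
        if (value instanceof String) {
            // Timestamp.valueOf expects the JDBC escape format
            return Timestamp.valueOf((String) value);
        }
        throw new IllegalArgumentException("Cannot coerce to Timestamp: " + value);
    }
}
```

In the Clojure stack the same idea would live in a small wrapper applied to each result row before it reaches the callers that expect Timestamps.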
Hello,
I am a beginner java programmer.
I need to update multiple rows with a query using mysql database and java codes.
I need to update the age field (data type int) in the database based on the current date. I believe I need to iterate and use hasNext() ... but I'm just unable to.
If you need to update all the rows using a common logic based on the current date, write a single update query and execute it. It will update all the rows. If the logic is different, then use an updatable result set.
Are you facing a problem in executing an update query, or is it with the iteration over the integer array? Please provide more details. And if you are attempting to update similar data, try doing it with a single query, as executing queries in a loop is not recommended.
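Assuming the age is derived from a birthdate column (the question doesn't say), the whole table can be updated in one round trip with something like UPDATE persons SET age = TIMESTAMPDIFF(YEAR, birthdate, CURDATE()); the table and column names here are made up. The per-row arithmetic that single statement replaces looks like this in java.time:

```java
import java.time.LocalDate;
import java.time.Period;

public class AgeCalculator {
    // Whole years elapsed between birthDate and asOf, i.e. the same
    // whole-year arithmetic as MySQL's TIMESTAMPDIFF(YEAR, ...).
    public static int ageInYears(LocalDate birthDate, LocalDate asOf) {
        return Period.between(birthDate, asOf).getYears();
    }
}
```

Pushing this calculation into the one UPDATE statement avoids iterating over a result set in Java entirely.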
I get some thousands of data from webservice call. (It would be id and version number, list of objects)
I am required to check whether a record exists for each id in the database. If it does and the version number mismatches, I need to update the table; otherwise I insert a new record.
What do you think is the optimal solution?
Option 1: Fetch the records from the DB and cache them. Remove the records which match from the list, prepare one list which requires update and another which requires insert, and then call our procedure to insert and update accordingly.
(Once I prepare the lists, it could be relatively fewer records.)
Option 2: Loop through each one of the records I receive from the webservice and pass the id and version to a procedure which carries out the insert/update based on the need.
(Using a connection pool, but for each record I would be calling the procedure.)
Which do you think is the better approach of the two, or can you think of a better solution than these two?
Limitations on technologies to be used:
Spring JDBC 2.x, Java 1.7, Sybase database.
No ORM technologies available.
Can I use jdbcTemplate.batchUpdate() for calling a procedure?
The first option is better than option 2.
No operation is costlier than the network latency between the application server and the database server.
The rule of thumb is: the fewer the calls, the better the performance.
Not sure about constraints with Sybase, but even if you can process 5-10 records in each SP call, that will be much better than processing a single record every time.
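Processing several records per SP call means splitting the webservice payload into chunks first. A minimal generic sketch (the chunk size of 5-10 comes from the answer above; nothing here is Sybase-specific):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPartitioner {
    // Splits the records received from the webservice into chunks of at
    // most chunkSize, so that each stored-procedure call handles several
    // records instead of one network round trip per record.
    public static <T> List<List<T>> partition(List<T> records, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < records.size(); i += chunkSize) {
            chunks.add(records.subList(i, Math.min(i + chunkSize, records.size())));
        }
        return chunks;
    }
}
```

Each chunk would then be passed to the procedure in one call, for example via jdbcTemplate.batchUpdate() or a CallableStatement that accepts several id/version pairs.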
I'm using SELECT GEN_ID(TABLE,1) FROM MON$DATABASE from a PreparedStatement to generate an ID that will be used in several tables.
I'm going to do a great number of INSERTs with PreparedStatements batches and I'm looking for a way to fetch a lot of new IDs at once from Firebird.
Doing a trigger seems to be out of the question, since I have to INSERT into other tables at another time with this ID in the Java code. Also, getGeneratedKeys() for batches seems not to have been implemented yet in (my?) Firebird JDBC driver.
I'm answering from memory here, but I remember that I once had to load a bunch of transactions from a Quicken file into my Firebird database. I loaded an array with the transactions and set a variable named, say, iCount to the number. I then did SELECT GEN_ID(g_TABLE, iCount) FROM RDB$DATABASE. This gave me the new top ID and incremented the generator by the number of records that I was going to insert. Then I started a transaction, stepped through the array, inserted the records one after the other, and closed the transaction. I was surprised how fast this went. I think, at the time, I was working with about 28,000 transactions and the time was like a couple of seconds. Something like this might work for you.
As jrodenhi says, you can reserve a range of values using
SELECT GEN_ID(<generator>, <count>) FROM RDB$DATABASE
This will return a value of <count> higher than the previously generated key, so you can use all values from (value - count, value] (where ( signifies exclusive, ] inclusive). Say generator currently has value 10, calling GEN_ID(generator, 10) will return 20, you can then use 11...20 for ids.
This does assume that you normally use generators to generate ids for your table, and that no application makes up its own ids without using the generator.
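The reserved range can be computed mechanically from the value GEN_ID returns. A sketch of that arithmetic:

```java
public class IdRangeReservation {
    // After SELECT GEN_ID(generator, count) FROM RDB$DATABASE returns
    // newValue, the ids in (newValue - count, newValue] are reserved for
    // this client: no other caller of the generator can receive them.
    public static long[] reservedIds(long newValue, int count) {
        long[] ids = new long[count];
        for (int i = 0; i < count; i++) {
            ids[i] = newValue - count + 1 + i;
        }
        return ids;
    }
}
```

For the example in the text (generator at 10, GEN_ID(generator, 10) returning 20), this yields the ids 11 through 20, which can then be assigned to the batched INSERTs in Java.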
As you noticed, getGeneratedKeys() has not been implemented for batches in Jaybird 2.2.x. Support for this option will be available in Jaybird 3.0.0, see JDBC-452.
Unless you are also targeting other databases, there is no real performance advantage to use batched updates (in Jaybird). Firebird does not support update batches, so the internal implementation in Jaybird does essentially the same as preparing a statement and executing it yourself repeatedly. This might change in the future as there are plans to add this to Firebird 4.
Disclosure: I am one of the Jaybird developers
I made Java/JDBC code which performs simple/basic operations on a database.
I want to add code which helps me keep track of when a particular database was accessed, updated, modified, etc. by this program.
I am thinking of creating another database inside my DBMS where these details or logs will be stored for each database involved.
Is this the best way to do it? Are there any other ways (preferably simple) to do this?
EDIT - For now, I am using MySQL, but I also want my code to work with at least Oracle SQL and MS-SQL as well.
It is pretty standard to add a "last_modified" column to a table and then add an update trigger on the table that sets it to the current database time. Then your apps don't need to worry about it. Also, a "create_time" column is often used as well, populated by an insert trigger.
Update after comment:
Seems you are looking for audit logs. Some shops write apps where data manipulation only happens through stored procedures, not through direct inserts and updates: a fixed API. So when you want to add an item to a table, you call the stored proc:
addItem(itemName, itemDescription)
Then the proc inserts into the item table and does whatever logging is necessary.
Another technique, if you are using some kind of framework for your jdbc access (say Spring) might be to intercept at that layer.
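Intercepting at the data-access layer can be as simple as routing every statement through a thin wrapper that records it before delegating to the real executor. A framework-agnostic sketch; the Consumer here stands in for whatever actually runs the SQL (say, a JdbcTemplate call):

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class AuditingExecutor {
    private final List<String> auditLog = new ArrayList<>();
    private final Consumer<String> executor; // e.g. sql -> jdbcTemplate.update(sql)

    public AuditingExecutor(Consumer<String> executor) {
        this.executor = executor;
    }

    // Records the statement with a timestamp, then hands it to the
    // real executor. A production version would write the audit entry
    // to a table or log file instead of an in-memory list.
    public void execute(String sql) {
        auditLog.add(Instant.now() + " " + sql);
        executor.accept(sql);
    }

    public List<String> auditLog() {
        return auditLog;
    }
}
```

With Spring specifically, the same effect is usually achieved with an AOP interceptor around the DAO layer, so existing call sites don't change.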
In almost all tables, I have the following columns:
CreatedBy
CreatedAt
These columns have default values of the current user and current time, respectively. They are populated when a row is added.
This solves only part of your problem. You can start adding triggers, but that gets complicated. Another method is to force modification access to the database through stored procedures, and then log the stored procedures. This has other advantages, in terms of controlling what users can do. But, you might want more flexibility.
A third possibility are auditing tools, that keep track of all queries being run on the database. I think most databases have a way of turning on internal auditing, although these are very specific to the database. There are also third party tools that allow you to see what has happened. Note, though, that these methods will affect performance if your database is doing high volume transactions.
For more information, you should revise your question to specify which database you are using or planning on using.