I periodically receive data that I use to update my database. The external structure differs from my internal structure, so what I end up doing is running the import and then running ALTER TABLE commands. I do this manually. After I format the data to my liking, I export it and then import it into my existing schema.
My questions are:
1. How can I isolate the external SQL so that it does not adversely affect my database? Ideally, I would like to run it as another user in another database / workspace. Should I create a database temporarily and then drop it once this operation is complete?
2. Should I connect directly using JDBC to run all these queries, since there will be a large amount of data? I am using Hibernate along with C3P0 to manage the primary connection.
3. Lastly, is there an API to automate/simplify exporting to SQL? If I go the JDBC route, I can iterate through each row and build the INSERT statements from that.
Any ideas?
Thanks,
Walter
IMO, it's better to do this outside of Hibernate, using plain JDBC. Just create a dedicated connection for this task, execute all the SQL statements, and close the connection at the end. This also makes it easy to connect to a separate temporary database, if you choose that route, without having to wire any of it into your Hibernate configuration.
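A minimal sketch of that route, assuming MySQL for concreteness; the URL, credentials and the import_scratch database name are all placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ScratchImport {
        public static void main(String[] args) throws Exception {
            // A dedicated connection, completely outside the Hibernate/C3P0 pool.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/", "import_user", "secret")) {
                try (Statement st = con.createStatement()) {
                    st.executeUpdate("CREATE DATABASE import_scratch");
                }
                con.setCatalog("import_scratch"); // work inside the scratch database
                try (Statement st = con.createStatement()) {
                    // run the external dump and the ALTER TABLE cleanup here,
                    // then export the massaged data...
                    st.executeUpdate("DROP DATABASE import_scratch"); // throw it away when done
                }
            }
        }
    }

Dropping the scratch database at the end keeps the whole exercise invisible to your main schema.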
The other way is to go with Hibernate and let it create the schema for you from the entity objects and their mappings. That way you don't need to come up with the required database structure manually; Hibernate creates it automatically.
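For example, something along these lines (a sketch in Hibernate 3.x style; import.cfg.xml is a hypothetical configuration pointing at the temporary database and listing the mappings for the imported entities):

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    Configuration cfg = new Configuration().configure("import.cfg.xml");
    cfg.setProperty("hibernate.hbm2ddl.auto", "create"); // build the schema from the mappings
    SessionFactory importFactory = cfg.buildSessionFactory();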
We have schemas/libraries created directly by OS/400 commands in DB2, so journaling is not enabled by default for any newly created physical file (table). We use a DB migration tool, Liquibase, for all DB changes such as table/view creation in Spring Boot. While trying to insert or update, I get the error "java.sql.SQLException: [SQL7008] X in TABLE_NAME not valid for operation". This error is due to journaling not being enabled on the newly created table via Liquibase. Now I am trying to find out whether the following are possible:
Is there any way to create a table (via SQL) under the DB2 library (created in OS/400) so that journaling is not required when inserting or updating?
Is there any way to create a journal on a table via Java/Spring Boot?
Or are there any suggestions other than journaling the table every time on the DB2 side?
Please give your comments.
When commitment control (transaction isolation) is used, journaling of the tables is required.
You have two options:
Turn off commitment control
Turn on journaling for the tables
For option 1, you can include
    transaction isolation=none;
in the connection string; see this question for more detail.
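For example, with the jt400 (IBM Toolbox for Java) driver the property goes straight into the JDBC URL; the host and credentials here are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    Connection con = DriverManager.getConnection(
            "jdbc:as400://MYIBMI;transaction isolation=none", "user", "password");

In a Spring Boot application the same URL would go into spring.datasource.url.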
For option 2, if you use the SQL CREATE SCHEMA and CREATE TABLE statements to create the library and files, then the tables will be automatically journaled.
You can also use the Start Journal Library (STRJRNLIB) command after creating a library via the Create Library (CRTLIB) command. Thereafter, when you create a table or physical file in the library it will be journaled automatically.
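A rough sketch of the SQL route over JDBC; the host, credentials and object names are examples only:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // CREATE SCHEMA (rather than CRTLIB) gives the library a journal
    // automatically, so tables created in it are journaled from the start.
    try (Connection con = DriverManager.getConnection(
            "jdbc:as400://MYIBMI", "user", "password");
         Statement st = con.createStatement()) {
        st.executeUpdate("CREATE SCHEMA MYLIB");
        st.executeUpdate("CREATE TABLE MYLIB.MYTABLE ("
                + "ID INT NOT NULL PRIMARY KEY, NAME VARCHAR(50))");
    }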
I had this error and fixed it using the IBM i Access client tool, which lets you add a DB2 journal to the table.
Steps
1. Open the schema.
2. Right-click on Tables, then click Include in the context menu.
3. MYTABLE (the table I want to journal) will appear in the list. Right-click on the table name, go to Journaling, and add the values for the journal and its library.
I'm creating a Spring web application that uses a MySQL database with Spring's JdbcTemplate. The problem is that I want to record any changes to the data stored in the MySQL database. I couldn't find any solution for Spring Data Envers with JdbcTemplate to record the changes.
What is the best way to record any changes to the data in the database? Or should I simply write them to a text file from the Spring app?
Envers, on which Spring Data Envers builds, is an add-on for Hibernate and uses its change detection mechanism to trigger writing revisions to the database.
JdbcTemplate doesn't have any of that; it just eases the execution of SQL statements by abstracting away repetitive tasks like exception handling or iterating over the ResultSet of queries. JdbcTemplate has no knowledge of what the statement it executes actually does.
As so often, you have a couple of options:
put triggers on your database that record changes
use some database-dependent feature like Oracle's Change Data Capture
You could create a wrapper around JdbcTemplate that analyses the SQL statement and produces a log entry (see the sketch below). This is only feasible when you need very limited information, like what kind of statement was executed and which table was affected.
If you need more semantic information, it is probably best to use an even higher level of your application stack, like the controller or service, to gather the relevant information and write it to the database, probably using the JdbcTemplate as well.
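To illustrate the wrapper option, here is a hedged sketch; AuditingJdbcTemplate is a hypothetical class, and a real version would have to override the other update/batchUpdate overloads as well:

    import javax.sql.DataSource;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class AuditingJdbcTemplate extends JdbcTemplate {

        private static final Logger log = LoggerFactory.getLogger(AuditingJdbcTemplate.class);

        public AuditingJdbcTemplate(DataSource dataSource) {
            super(dataSource);
        }

        @Override
        public int update(String sql, Object... args) {
            int rows = super.update(sql, args);
            // Only crude information is available at this level: the SQL text,
            // the bind values and the row count, not before/after images.
            log.info("change: sql=[{}] args={} rows={}", sql,
                    java.util.Arrays.toString(args), rows);
            return rows;
        }
    }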
Earlier I was trying to get batch inserts working in Hibernate. I tried everything: for the config I set batch_size(50), order_inserts(true), order_updates(true), use_second_level_cache(false), use_query_cache(false). For the session I used setCacheMode(CacheMode.IGNORE) and setFlushMode(FlushMode.MANUAL). Still, the MySQL query log showed that each insert was coming in separately.
The ONLY thing that worked was setting rewriteBatchedStatements=true in the JDBC connection string. This worries me, as my application is supposed to support any JDBC database and I'm trying to avoid DB-specific optimizations.
Is the only reason Hibernate can't actually use batched statements that the MySQL driver doesn't support them by default? What about other drivers; do I have to add options to the connection string so they can support batched inserts? If it helps to be specific, think SQL Server, SQLite, Postgres, etc.
One reason it might not be working is that Hibernate disables batching if you use the IDENTITY id generation strategy.
Also, MySQL doesn't support JDBC batched prepared statements the same way other databases do unless you turn on the rewrite option.
I don't see that it is a problem to turn this flag on, though: if you set up your application for a different database, you will have to change settings such as the dialect, driver name, etc. anyway, and since this flag is part of the JDBC connection string, it stays isolated with the rest of that per-database configuration.
Basically I think you are doing the right thing.
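For what it's worth, the flag just rides along in the connection URL, next to the dialect and driver class you already switch per database. A sketch with example values (Hibernate 3.x style with Connector/J 5; the database name is a placeholder):

    import org.hibernate.cfg.Configuration;

    Configuration cfg = new Configuration();
    cfg.setProperty("hibernate.connection.url",
            "jdbc:mysql://localhost:3306/mydb?rewriteBatchedStatements=true");
    cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5InnoDBDialect");
    cfg.setProperty("hibernate.connection.driver_class", "com.mysql.jdbc.Driver");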
As batch insert (or bulk insert) is part of the SQL standard, ORM frameworks like Hibernate support and implement it. Please see Chapter 13. Batch Processing and Hibernate / MySQL Bulk insert problem.
Basically, you need to set the JDBC batch size via the property named hibernate.jdbc.batch_size to a reasonable value. Also, don't forget to flush() and clear() the session at batch boundaries.
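A minimal sketch of that pattern, close to the one in the batch processing chapter; Customer is a stand-in for your own entity, and hibernate.jdbc.batch_size is assumed to be 50:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    public void insertCustomers(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        for (int i = 0; i < 100000; i++) {
            Customer customer = new Customer("customer-" + i);
            session.save(customer);
            if (i % 50 == 0) {
                session.flush(); // push the current batch of INSERTs to the driver
                session.clear(); // detach saved objects so the session stays small
            }
        }
        tx.commit();
        session.close();
    }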
I've worked with Liquibase 1.9.5 for a while now and got it to replace the Hibernate hbm2ddl strategy of creating tables and loading fixtures. Since it's a Maven project and since I use HSQLDB (file-based, with create=true), I simply create the database in the target folder so that I have a fresh database any time I test the application. This works fine until I realize:
1. I will need the database to be recreated when doing integration tests against a MySQL database.
2. I will definitely need the same solution for a non-Maven project.
So basically, how do I drop and create the database when using Liquibase, as opposed to hbm2ddl?
The easiest way is to add a separate database call before the Liquibase update that runs the SQL:
    DROP DATABASE X;
    CREATE DATABASE X;
Liquibase does have a dropAll command, which can be used to drop everything in a schema, but it is slower than drop/create database on MySQL and may miss some database objects.
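A minimal sketch of that pre-step over plain JDBC, which works for Maven and non-Maven projects alike; the URL, credentials and database name are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Connect to the MySQL server without selecting a database.
    try (Connection con = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/", "root", "secret");
         Statement st = con.createStatement()) {
        st.executeUpdate("DROP DATABASE IF EXISTS mydb");
        st.executeUpdate("CREATE DATABASE mydb");
    }
    // ...then run the Liquibase update against jdbc:mysql://localhost:3306/mydb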
I have a scenario where the unit of work is defined as:
Update table T1 in database server S1
Update table T2 in database server S2
And I want the above unit of work to happen either completely or not at all (as is the case with any database transaction). How can I do this? I searched extensively and found this post, which comes close to what I want, but it seems to be very specific to Hibernate.
I am using Spring, iBatis and Tomcat (6.x) as the container.
It really depends on how robust a solution you need. The minimal level of reliability for such a thing is XA transactions. To use them, you need a database and JDBC driver that support XA for starters; then you could configure Spring to use it (here is an outline).
If XA isn't robust enough for you (XA has failure scenarios, e.g. if something goes wrong in the second phase of the commit, such as a hardware failure), then what you really need to do is put all the data in one database and then have a separate process propagate it. The data may be temporarily inconsistent, but it is recoverable.
Edit: What I mean is: put all of the data into one database, either the first database or a different database set up for this purpose. That database essentially becomes a queue from which the final data view is fed. The write to that database (assuming a decent database product) will either complete or fail completely. Then a separate thread polls that database and distributes any missing data to the other databases. If the process fails, the thread simply continues the distribution when it starts up again. The data may not exist everywhere you want it right away, but nothing gets lost.
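A hedged sketch of that distributor thread, using Spring's JdbcTemplate for brevity; the pending_changes staging table and the target statement are hypothetical, and since delivery is at-least-once the applied statements should be idempotent:

    import java.util.List;
    import java.util.Map;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class ChangeDistributor implements Runnable {
        private final JdbcTemplate source; // database acting as the queue (server S1)
        private final JdbcTemplate target; // destination (server S2)

        public ChangeDistributor(JdbcTemplate source, JdbcTemplate target) {
            this.source = source;
            this.target = target;
        }

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                List<Map<String, Object>> pending = source.queryForList(
                        "SELECT id, payload FROM pending_changes WHERE distributed = 0");
                for (Map<String, Object> row : pending) {
                    target.update("UPDATE t2 SET data = ? WHERE id = ?",
                            row.get("payload"), row.get("id"));
                    source.update("UPDATE pending_changes SET distributed = 1 WHERE id = ?",
                            row.get("id"));
                }
                try {
                    Thread.sleep(5000); // poll interval
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }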
You want a distributed transaction manager. I like using Atomikos, which can be run within the JVM.
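A minimal sketch of what that looks like in code; ds1/ds2 are assumed to be XA-capable datasources wrapped by Atomikos (for example via AtomikosDataSourceBean), and the table names are placeholders:

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;
    import com.atomikos.icatch.jta.UserTransactionImp;

    public class TwoServerUpdate {
        public void updateBoth(DataSource ds1, DataSource ds2) throws Exception {
            UserTransaction utx = new UserTransactionImp();
            utx.begin();
            try {
                try (Connection c1 = ds1.getConnection();
                     Connection c2 = ds2.getConnection();
                     Statement s1 = c1.createStatement();
                     Statement s2 = c2.createStatement()) {
                    s1.executeUpdate("UPDATE t1 SET status = 'done' WHERE id = 1");
                    s2.executeUpdate("UPDATE t2 SET status = 'done' WHERE id = 1");
                }
                utx.commit();   // two-phase commit across both servers
            } catch (Exception e) {
                utx.rollback(); // both updates are rolled back together
                throw e;
            }
        }
    }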