How to efficiently export/import database data with JDBC - java

I have a Java application that can use a SQL database from any vendor. Right now we have tested Vertica and PostgreSQL. I want to export all the data from one table in the DB and import it later on into a different instance of the application. The database is pretty big, so there are many rows to move. The export and import process has to be done from inside the Java code.
What we've tried so far is:
Export: we read the whole table (select * from) through JDBC and then dump it to an SQL file with all the INSERTS needed.
Import: The file containing those thousands of INSERTS is executed in the target database through JDBC.
This is not an efficient process. Firstly, the select * from part gives us problems because of its size, and secondly, executing a lot of inserts one after another causes problems in Vertica (https://forum.vertica.com/discussion/235201/vjdbc-5065-error-too-many-ros-containers-exist-for-the-following-projections)
What would be a more efficient way of doing this? Are there any tools that can help with the process, or is there no "elegant" solution?

Why not do the export/import in a single step, with batching (for performance) and chunking (to avoid errors and to provide a checkpoint from which to restart after a failure)?
In most cases, databases support INSERT queries with many values, e.g.:
INSERT INTO table_a (col_a, col_b, ...) VALUES
(val_a, val_b, ...),
(val_a, val_b, ...),
(val_a, val_b, ...),
...
The number of rows you generate into a single such INSERT statement is then your chunk size, which might need tuning for the specific target database (big enough to speed things up but small enough that the chunk does not exceed some database limit and cause failures).
As already proposed, each of these chunks should then be executed in a transaction, and your application should remember which chunk it successfully executed last in case an error occurs, so it can continue there at the next run.
For reading the chunks themselves, you really should use LIMIT ... OFFSET on the source query.
This way, you can repeat any chunk at any time, each chunk is atomic by itself, and it should perform much better than with single-row statements.
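A minimal sketch of that approach, assuming a numeric ordering column id on the source table, a hypothetical saveCheckpoint() for remembering the last completed chunk, and plain JDBC on both sides (values are inlined here for brevity; real code should escape properly or build a parametrized statement):
import java.sql.*;

public class ChunkedCopy {
    private static final int CHUNK_SIZE = 1000; // tune for the target database

    // Copies table_a in chunks; each chunk becomes one multi-row INSERT
    // executed in its own transaction, so any chunk can be retried.
    public static void copy(Connection source, Connection target, long startChunk) throws SQLException {
        target.setAutoCommit(false);
        long chunk = startChunk;
        while (true) {
            long offset = chunk * CHUNK_SIZE;
            StringBuilder insert = new StringBuilder("INSERT INTO table_a (col_a, col_b) VALUES ");
            int rows = 0;
            try (Statement st = source.createStatement();
                 ResultSet rs = st.executeQuery("SELECT col_a, col_b FROM table_a ORDER BY id"
                         + " LIMIT " + CHUNK_SIZE + " OFFSET " + offset)) {
                while (rs.next()) {
                    if (rows++ > 0) insert.append(",");
                    insert.append("(").append(rs.getLong("col_a")).append(",'")
                          .append(rs.getString("col_b").replace("'", "''")).append("')");
                }
            }
            if (rows == 0) break;            // no more data
            try (Statement st = target.createStatement()) {
                st.executeUpdate(insert.toString());
            }
            target.commit();                 // the chunk is atomic
            saveCheckpoint(chunk++);         // hypothetical: persist the last finished chunk
        }
    }

    private static void saveCheckpoint(long chunk) { /* write the chunk number somewhere durable */ }
}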

I can only speak about PostgreSQL.
The size of the SELECT is not a problem if you use server-side cursors by calling setFetchSize with a value greater than 0 (perhaps 10000) on the statement.
The INSERTs will perform well (see the sketch after this list) if
you run them all in a single transaction
you use a PreparedStatement for the INSERT
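A minimal sketch of both points, assuming a source and a target Connection and a hypothetical table my_table(id, payload); note that the PostgreSQL driver only switches to a server-side cursor when autocommit is off and the fetch size is greater than 0:
import java.sql.*;

public class PostgresTableCopy {
    public static void copy(Connection source, Connection target) throws SQLException {
        source.setAutoCommit(false); // required for cursor-based fetching in the PostgreSQL driver
        target.setAutoCommit(false); // run all the INSERTs in a single transaction
        try (Statement select = source.createStatement();
             PreparedStatement insert = target.prepareStatement(
                     "INSERT INTO my_table (id, payload) VALUES (?, ?)")) {
            select.setFetchSize(10000); // stream 10000 rows at a time instead of the whole table
            try (ResultSet rs = select.executeQuery("SELECT id, payload FROM my_table")) {
                while (rs.next()) {
                    insert.setLong(1, rs.getLong(1));
                    insert.setString(2, rs.getString(2));
                    insert.executeUpdate();
                }
            }
            target.commit();
        } catch (SQLException e) {
            target.rollback();
            throw e;
        }
    }
}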

Each insert into Vertica goes into WOS (memory), and periodically data from WOS gets moved to ROS (disk) into a single container. You can only have 1024 ROS containers per projection per node. Doing many thousands of INSERTs at a time is never a good idea for Vertica. The best way to do this is to copy all that data into a file and bulk load the file into Vertica using the COPY command.
This will create a single ROS container for the contents of the file. Depending on how many rows you want to copy it will be many times (sometimes even hundreds of times) faster.
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SQLReferenceManual/Statements/COPY/COPY.htm
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/ConnectingToVertica/ClientJDBC/UsingCOPYLOCALWithJDBC.htm
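A rough sketch of the bulk-load step, assuming the exported rows have already been written to a client-side CSV file; with the Vertica JDBC driver a COPY ... FROM LOCAL statement can be executed like any other statement and the driver streams the file to the server (table name, path and options here are placeholders):
import java.sql.*;

public class VerticaBulkLoad {
    public static void load(Connection vertica, String csvPath) throws SQLException {
        // One COPY produces a single ROS container instead of thousands of WOS inserts.
        String copy = "COPY my_table FROM LOCAL '" + csvPath + "' DELIMITER ',' ABORT ON ERROR";
        try (Statement st = vertica.createStatement()) {
            st.execute(copy);
        }
        if (!vertica.getAutoCommit()) {
            vertica.commit(); // COPY is transactional; commit explicitly if autocommit is off
        }
    }
}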

Related

JDBC Read without cursor

I have to read a huge amount of data from the database (for example, let's consider more than 500,000 records). Then I have to save the read data to a file. I have many issues with cursors (not only memory issues).
Is it possible to do it without a cursor, for example using a stream? If so, how can I achieve it?
I have experience working with huge data sets (almost 500 million records). I simply used a PreparedStatement query, a ResultSet and of course some buffer tweaking through:
setFetchSize(int)
In my case, I split the program into threads because the huge table was partitioned (each thread processed one partition), but I think that this is not your case.
It is pointless to fetch data through a cursor. I would rather use a database view or a plain SQL query. Do not use an ORM for this purpose.
According to your comment, your best option is to limit JDBC to fetch only a specific number of rows at a time instead of fetching all of them (this lets processing start sooner and does not load the entire table into the ResultSet). Save your data into a collection and write it into the file using a BufferedWriter. You can also benefit from a multi-core CPU by running it in more threads - for example, the first fetched rows processed in one thread and the next fetched rows in a second thread. If you go multi-threaded, use synchronized collections and be aware that you might face ordering problems.
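A small single-threaded sketch of that approach (fetch-size streaming plus a BufferedWriter), assuming a hypothetical big_table(id, name) and a simple delimited output format; some drivers (e.g. PostgreSQL) also require autocommit to be off for streaming to kick in:
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.*;

public class TableToFile {
    public static void dump(Connection con, String outFile) throws SQLException, IOException {
        con.setAutoCommit(false);
        try (PreparedStatement ps = con.prepareStatement("SELECT id, name FROM big_table");
             BufferedWriter out = Files.newBufferedWriter(Paths.get(outFile), StandardCharsets.UTF_8)) {
            ps.setFetchSize(5000); // fetch 5000 rows per round trip instead of the whole table
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    out.write(rs.getLong(1) + ";" + rs.getString(2));
                    out.newLine();
                }
            }
        }
    }
}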

Split database into smaller ones. Too much data in single commit

I need some advice :)
I have a database with almost 70 tables, many of which have over a dozen million records. I want to split it into a few smaller databases: one for every big client's data and one main database for the rest of the clients' data (while also moving some of the data into a NoSQL database). Because of the many complicated relations between tables, before copying the data I disable the triggers that check the correctness of the foreign keys, and then, just before the commit, I enable them again.
It was all working with a small amount of data, but now, when I'm trying to copy one of the big clients' data, I run into Java heap size / GC out-of-memory problems.
I could increase the heap size, but that's not the point here.
I'm selecting data by a specific id from every table that has any relation to the client data and copying it to the other database. The process looks like this:
Select data from table
Insert data to another database
Copy sequence (max(id) of data being copied)
Flush/Clear
Repeat for every table containing client data
I tried selecting portions of the data (something like selecting parts of 5,000 rows instead of all 50,000 at once), but it fails at the exact same position.
And here I am, asking for advice on how to manage this problem. I think it is all because I am trying to copy all the data in one big fat commit. The reason for that is that I have to disable the triggers while copying, but I also must enable them again before I can commit my changes.
When I'm trying to copy one of the big client data I have a problem with the java heap size/GC out of memory.
Copying data should not be using the heap, so it seems you're not using cursor-based queries.
See "Getting results based on a cursor" in the PostgreSQL JDBC documentation:
By default the driver collects all the results for the query at once. This can be inconvenient for large data sets so the JDBC driver provides a means of basing a ResultSet on a database cursor and only fetching a small number of rows.
A small number of rows are cached on the client side of the connection and when exhausted the next block of rows is retrieved by repositioning the cursor.
[...]
Changing code to cursor mode is as simple as setting the fetch size of the Statement to the appropriate size. Setting the fetch size back to 0 will cause all rows to be cached (the default behaviour).
So, adding stmt.setFetchSize(1000) (or something like that) to your code will ensure that the JDBC driver does not exhaust the heap.
If you still have trouble after that, then it's because your code is retaining all the data, which means it's coded wrong for a copy operation.

PreparedStatement.addBatch and thousands of rows from a file and a confusion

Hi, I am trying to write to Sybase IQ using JDBC from a file which contains thousands of rows. People say that I should use batch updates. So I am reading the file with NIO and adding the rows to PreparedStatement batches. But I don't see any advantage here; for all the rows I need to do the following:
PreparedStatement prepStmt = con.prepareStatement(
        "UPDATE DEPT SET MGRNO=? WHERE DEPTNO=?");
prepStmt.setString(1, mgrnum1);
prepStmt.setString(2, deptnum1);
prepStmt.addBatch();
I don't understand what the advantage of batches is. I still have to call addBatch() thousands of times for all the records in the file. Or should I even be using addBatch() to write records from a file to Sybase IQ? Please guide. Thanks a lot.
With batch updates you're basically cutting down on your network I/O overhead. It provides a benefit analogous to what a BufferedWriter gives you when writing to disk. That's basically what this is: buffering of database updates.
Any kind of I/O has a cost, be it disk I/O or network. By buffering your inserts or updates in a batch and doing a bulk update, you're minimizing the performance hit incurred every time you hit the database and come back.
The performance hit becomes even more obvious in a real-world application where the database server is almost always under some load serving other clients, as opposed to development where you're the only one.
When paired with a PreparedStatement, batch updates are even more efficient because the statement is pre-compiled and the execution plan is cached throughout the execution of the batch. So the binding of variables happens per your chosen batch size, and then a single executeBatch() call persists all the values in one go.
The advantage of addBatch is that it allows the JDBC driver to send chunks of data instead of sending single insert statements to the database.
This can be faster in certain situations, but real-life performance may vary.
It should also be noted that it's generally recommended to use batches of around 50-100 rows, instead of adding all the data into a single batch.
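A hedged sketch of that rhythm using the UPDATE from the question: bind each row, addBatch() it, and flush with executeBatch() every BATCH_SIZE rows (the comma-separated file format is an assumption):
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.*;

public class BatchedDeptUpdate {
    private static final int BATCH_SIZE = 100; // 50-100 is a common starting point

    public static void run(Connection con, String file) throws SQLException, IOException {
        try (PreparedStatement prepStmt = con.prepareStatement(
                     "UPDATE DEPT SET MGRNO=? WHERE DEPTNO=?");
             BufferedReader in = Files.newBufferedReader(Paths.get(file), StandardCharsets.UTF_8)) {
            int pending = 0;
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split(",");   // assumed format: mgrno,deptno
                prepStmt.setString(1, parts[0]);
                prepStmt.setString(2, parts[1]);
                prepStmt.addBatch();                // buffered locally, nothing sent yet
                if (++pending == BATCH_SIZE) {
                    prepStmt.executeBatch();        // one round trip for the whole batch
                    pending = 0;
                }
            }
            if (pending > 0) prepStmt.executeBatch(); // flush the remainder
        }
    }
}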

Multi threaded insert using ORM?

I have one application where "persisting to database" is consuming 85% time of the entire application flow.
I was thinking of using multiple threads to do the insert because the inserts are mostly independent here. Is there any way to achieve multi-threaded inserts using any JPA implementation? And is it worth doing multi-threaded inserts from a performance perspective?
Note: inserts are in the range of 10K to 100K records in a single run. Performance is very critical here.
Thanks.
Multi-threading insert statements on the database won't really make it perform any faster, because in most databases the table requires a lock for an insert. So your threads will just be waiting for the one before them to finish up and unlock the table before the next can insert - which really doesn't make it any more multi-threaded than with a single thread. If you were to do it, it would most likely slow things down.
If you're inserting 10k-100k records you should consider using either batch insert statements or bulk insert commands that are native to the database you're using. The fastest way would be the native bulk insert commands, but that would require you to not use JPA and to work directly with JDBC for the inserts you want to bulk-load.
If you don't want to play around with native bulk commands, I recommend using Spring's JdbcTemplate, which has templated batch insert methods. It is very fast; I use it to batch insert 10k-20k entities every 30 seconds on a high-transaction system and I am very pleased with the performance.
Lastly, make sure your database tables are optimized with the correct indexes, keys and options. Since the database is the bottleneck, this should be one of the first places you look to increase performance.
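A short sketch of the JdbcTemplate route, assuming Spring is on the classpath and a hypothetical my_table(id, name); batchUpdate sends the whole list through one parametrized statement with driver-level batching:
import java.util.List;
import java.util.stream.Collectors;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class SpringBatchInsert {
    private final JdbcTemplate jdbcTemplate;

    public SpringBatchInsert(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Java 16+ record used only to keep the sketch short.
    public record Row(long id, String name) { }

    public void insertAll(List<Row> rows) {
        List<Object[]> args = rows.stream()
                .map(r -> new Object[] { r.id(), r.name() })
                .collect(Collectors.toList());
        jdbcTemplate.batchUpdate("INSERT INTO my_table (id, name) VALUES (?, ?)", args);
    }
}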
Multi-threading insert statements on the database won't really make it perform any faster because in most databases the table requires a lock for an insert. So your threads will just be waiting for the one before them to finish up and unlock the table before the next can insert - which really doesn't make it any more multi-threaded than with a single thread. If you were to do it, it would most likely slow things down.
Are you saying concurrent inserts from different db connections on the same table require exclusive locks to complete? I tested this on Oracle, and I didn't find this to be the case. Do you actually have a test case to back up what you wrote here?
Anyway, a bulk insert is of course a lot faster than one insert at a time.
Are you periodically flushing your session when doing this? If not, you can hit nasty slowdowns that have nothing to do with the database. Generally, you want to "batch" the inserts by periodically calling flush() and then clear() on your session (assuming you are using some variant of JPA).
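A minimal sketch of that flush/clear rhythm with a plain EntityManager (javax.persistence here; newer stacks use jakarta.persistence, and the batch size of 50 is just an illustrative value):
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityTransaction;

public class JpaBatchInsert {
    private static final int BATCH_SIZE = 50;

    public static void insertAll(EntityManager em, List<?> entities) {
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        int i = 0;
        for (Object entity : entities) {
            em.persist(entity);
            if (++i % BATCH_SIZE == 0) {
                em.flush(); // push the pending inserts to the database
                em.clear(); // detach them so the persistence context stays small
            }
        }
        tx.commit();
    }
}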
This article has many tips to improve batch writing performance with JPA. I'll quote the two that should give you the best results, for quick reference.
Optimization #6 - Sequence Pre-allocation
We have optimized the first part of the application, reading from the MySQL database. The second part is to optimize the writing to Oracle.
The biggest issue with the writing process is that the Id generation is using an allocation size of 1. This means that for every insert there will be an update and a select for the next sequence number. This is a major issue, as it is effectively doubling the amount of database access. By default JPA uses a pre-allocation size of 50 for TABLE and SEQUENCE Id generation, and 1 for IDENTITY Id generation (a very good reason to never use IDENTITY Id generation). But frequently applications are unnecessarily paranoid about holes in their Id values and set the pre-allocation value to 1. By changing the pre-allocation size from 1 to 500, we reduce about 1000 database accesses per page.
Optimization #8 - Batch Writing
Many databases provide an optimization that allows a batch of write operations to be performed as a single database access. There is both parametrized and dynamic batch writing. For parametrized batch writing a single parametrized SQL statement can be executed with a batch of parameter values instead of a single set of parameter values. This is very optimal as the SQL only needs to be executed once, and all of the data can be passed optimally to the database.
Dynamic batch writing requires dynamic (non-parametrized) SQL that is batched into a single big statement and sent to the database all at once. The database then needs to process this huge string and execute each statement. This requires the database to do a lot of work parsing the statement, so it is not always optimal. It does reduce the database access, so if the database is remote or poorly connected to the application, this can result in an improvement.
In general parametrized batch writing is much more optimal, and on Oracle it provides a huge benefit, whereas dynamic does not. JDBC defines the API for batch writing, but not all JDBC drivers support it; some support the API but then execute the statements one by one, so it is important to test that your database supports the optimization before using it. In EclipseLink batch writing is enabled using the persistence unit property "eclipselink.jdbc.batch-writing"="JDBC".
Another important aspect of using batch writing is that you must have the same SQL (DML actually) statement being executed in a grouped fashion in a single transaction. Some JPA providers do not order their DML, so you can end up ping-ponging between two statements such as the order insert and the order-line insert, making batch writing ineffective. Fortunately EclipseLink orders and groups its DML, so usage of batch writing reduces the database access from 500 order inserts and 5000 order-line inserts to 55 (the default batch size is 100). We could increase the batch size using "eclipselink.jdbc.batch-writing.size"; increasing the batch size to 1000 reduces the database accesses to 6 per page.
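A hedged sketch of wiring up the two quoted optimizations with EclipseLink: a larger sequence allocation size on the entity, and the batch-writing properties passed when the persistence unit is created (the property names come from the quote; the unit name, entity and sequence are placeholders, and the properties can equally go into persistence.xml):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.Entity;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Persistence;
import javax.persistence.SequenceGenerator;

// Larger sequence pre-allocation: the provider fetches 500 ids per round trip
// instead of hitting the sequence for every single insert.
@Entity
@SequenceGenerator(name = "order_seq", sequenceName = "ORDER_SEQ", allocationSize = 500)
class OrderEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "order_seq")
    Long id;
}

public class EclipseLinkBatchConfig {
    public static EntityManagerFactory createFactory() {
        Map<String, String> props = new HashMap<>();
        props.put("eclipselink.jdbc.batch-writing", "JDBC");       // parametrized batch writing
        props.put("eclipselink.jdbc.batch-writing.size", "1000");  // batch size from the quote
        return Persistence.createEntityManagerFactory("my-unit", props);
    }
}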

Storing result set for later fetch

I have some queries that run for quite a long time (20-30 minutes). If a lot of them are started simultaneously, the connection pool is drained quickly.
Is it possible to wrap a long-running query into a statement (procedure) that will store the result of a generic query into a temp table, terminating the connection, and fetching (polling) the results later on demand?
EDIT: the queries and data structures are optimized, and tips like 'check your indices and execution plan' don't work for me. I'm looking for a way to store [maybe a] byte representation of a generic result set, for later retrieval.
First of all, 20-30 minutes is an extremely long time for a query - are you sure you aren't missing any indexes for the query? Do check your execution plan - you could get a huge performance gain from a well-placed index.
In MySQL, you could do
INSERT INTO `cached_result_table` (
SELECT your_query_here
)
(of course, cached_result_table needs to have the exact same column structure as your SELECT returns, otherwise you'll get an error).
Then, you could query these cached results (instead of the original tables), and only run the above query from time to time - to update the cached_result_table.
Of course, the query will need to run at least once initially, which will take the 20-30 minutes you mentioned. I suggest pre-populating the cached table before the data is requested, and keeping some locking mechanism to prevent the update query from running several times simultaneously. Pseudocode:
init:
    insert select your_big_query
work:
    if your_big_query cached table is empty or nearing expiration:
        refresh in the background:
            check flag to see if there's another "refresh" process running
            if yes:
                end  // don't run two your_big_queries at the same time
            else:
                set flag
                re-run your_big_query, save to cached table
                clear flag
    serve data to clients always from the cached table
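A rough Java rendering of that pseudocode, assuming a hypothetical cached_result_table matching the query's columns and an in-process AtomicBoolean as the "refresh running" flag (a database-level lock would be needed if several application instances can refresh):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.sql.DataSource;

public class CachedQueryRefresher {
    // Placeholder for "your_big_query" from the pseudocode above.
    private static final String BIG_QUERY = "SELECT ...";

    private final AtomicBoolean refreshRunning = new AtomicBoolean(false);
    private final DataSource dataSource;

    public CachedQueryRefresher(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Re-runs the big query into the cache table unless a refresh is already in progress;
    // clients keep reading from cached_result_table in the meantime.
    public void refreshInBackground() {
        if (!refreshRunning.compareAndSet(false, true)) {
            return; // another refresh is running - don't start a second one
        }
        new Thread(() -> {
            try (Connection con = dataSource.getConnection();
                 Statement st = con.createStatement()) {
                st.executeUpdate("TRUNCATE TABLE cached_result_table");
                st.executeUpdate("INSERT INTO cached_result_table " + BIG_QUERY);
            } catch (SQLException e) {
                e.printStackTrace(); // log properly in real code
            } finally {
                refreshRunning.set(false); // clear the flag
            }
        }).start();
    }
}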
An easy way to do that in Oracle is "CREATE TABLE sometempname AS SELECT...". That will create a new table using the result columns from the select.
Not quite sure what you are requesting.
Currently you have 50 database sessions. Say 40 of them are running long-running queries; that leaves 10 to service the rest.
What you seem to be asking for is that those 40 queries run asynchronously (in the background) without clogging up the connection pool of 50. The question is: do you want those 40 running concurrently with (potentially) another 50 queries from the connection pool, or do you want them queued up in some way?
Queuing can be done (look into DBMS_SCHEDULER and DBMS_JOB). But you will need to deliver those results into some other table and know how to deliver that result set. The old-fashioned way is simply to generate reports on request that get delivered to a directory on a shared drive or by email. Could be PDF or CSV or Excel.
If you want the 40 running concurrently alongside the 50-connection pool, then you may be best off setting up a separate connection pool for the long-running queries.
You can look into Resource Manager for terminating calls that take too long or use too many resources. That way the quick-query pool can't get bogged down in long-running requests.
The most generic approach in Oracle I can think of is creating a stored procedure that will convert a result set into XML, and store it as CLOB XMLType in a table with the results of your long-running queries.
You can find more on generating XML from generic result sets here.
select dbms_xmlgen.getxml(
    'select employee_id, first_name, last_name, phone_number
     from employees where rownum < 6') xml
from dual;
