I am looking for a technical solution to query data from one database and load it into a SQL Server database using Java Spring Boot.
Mock query to get the product names that were updated within a given 20-hour window:
SELECT productName, updatedtime
FROM products
WHERE updatedtime BETWEEN '2018-03-26 00:00:01' AND '2018-03-26 19:59:59';
Here is the approach we followed.
1) It is a long-running Oracle query that takes approximately 1 hour during business hours and returns ~1 million records.
2) We have to insert/dump this result set into a SQL Server table using JDBC.
3) As far as I know, the Oracle JDBC driver supports a kind of streaming: when we iterate over the ResultSet, it only loads fetchSize rows into memory.
int currentRow = 1;
while (rs.next()) {
    // get your data from the current row of the Oracle ResultSet and accumulate it in a batch
    if (currentRow++ % BATCH_SIZE == 0) {
        // insert the whole accumulated batch into the SQL Server database
    }
}
In this case we do not need to hold the entire huge Oracle dataset in memory, and we insert into SQL Server in batches of BATCH_SIZE. The only open question is where to commit on the SQL Server side.
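To make steps 2) and 3) concrete, here is a minimal sketch of the read-stream/write-batch loop, assuming plain JDBC on both sides; oracleConn, sqlServerConn, the time bounds from/to and the target table products_copy are placeholders, and committing once per batch is just one possible policy:

import java.sql.*;

final int BATCH_SIZE = 1_000;
try (PreparedStatement read = oracleConn.prepareStatement(
         "SELECT productName, updatedtime FROM products WHERE updatedtime BETWEEN ? AND ?");
     PreparedStatement write = sqlServerConn.prepareStatement(
         "INSERT INTO products_copy (productName, updatedtime) VALUES (?, ?)")) {
    read.setFetchSize(BATCH_SIZE);        // stream from Oracle in chunks, not all at once
    read.setTimestamp(1, from);
    read.setTimestamp(2, to);
    sqlServerConn.setAutoCommit(false);   // commit per batch instead of per row
    int n = 0;
    try (ResultSet rs = read.executeQuery()) {
        while (rs.next()) {
            write.setString(1, rs.getString(1));
            write.setTimestamp(2, rs.getTimestamp(2));
            write.addBatch();
            if (++n % BATCH_SIZE == 0) {
                write.executeBatch();     // push the accumulated batch to SQL Server
                sqlServerConn.commit();
            }
        }
    }
    write.executeBatch();                 // flush the final partial batch
    sqlServerConn.commit();
}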
4) The bottleneck here is the query execution time on the Oracle side, so I am planning to split the query into 10 equal parts, each covering a slice of the updatedtime window as shown below, so that the execution time of each query drops to roughly 10 minutes.
e.g.:
SELECT productName, updatedtime
FROM products
WHERE updatedtime BETWEEN '2018-03-26 01:00:01' AND '2018-03-26 01:59:59';
5) For that I would need 5 Oracle JDBC connections and 5 SQL Server connections (to query the data and insert it into the target database), each doing its job independently. I am new to JDBC connection pooling.
How can I set up connection pooling, and how do I close connections that are no longer in use, etc.?
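One common option (a sketch, assuming HikariCP, which Spring Boot 2 uses as its default pool; the URL and credentials are placeholders) is to size one small pool per database and let the pool reclaim idle connections:

import java.sql.Connection;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig oracleCfg = new HikariConfig();
oracleCfg.setJdbcUrl("jdbc:oracle:thin:@//oraclehost:1521/ORCL");
oracleCfg.setUsername("user");
oracleCfg.setPassword("secret");
oracleCfg.setMaximumPoolSize(5);    // one connection per parallel time slice
oracleCfg.setIdleTimeout(60_000);   // reclaim connections idle for 60 seconds
HikariDataSource oraclePool = new HikariDataSource(oracleCfg);

// Borrow via try-with-resources; close() just returns the connection to the pool.
try (Connection c = oraclePool.getConnection()) {
    // run one time-slice query here
}

A second pool would be configured the same way for SQL Server.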
Please also suggest any better approach to pull the data from the source quickly, close to real time. Thanks in advance.
This is a typical use case for Spring Batch.
There you have the concepts of an ItemReader (from your source DB) and an ItemWriter (into your destination DB).
You can define multiple datasources, and you get built-in support for reading with a fixed fetch size (JdbcCursorItemReader, for instance) as well as for creating a grid for parallel execution.
With a quick search you can find many examples online for this kind of task.
I know I'm not posting code for the concept, but it would take me some time to prepare a decent example.
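To give a rough idea of the reader/writer pair anyway (a sketch only, assuming Spring Batch 4's builder API and a simple Product bean with name/updated properties; the SQL and names are placeholders):

import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.database.builder.JdbcBatchItemWriterBuilder;
import org.springframework.batch.item.database.builder.JdbcCursorItemReaderBuilder;

// Reader: streams from Oracle with a fixed fetch size instead of loading ~1M rows.
JdbcCursorItemReader<Product> reader = new JdbcCursorItemReaderBuilder<Product>()
        .name("productReader")
        .dataSource(oracleDataSource)
        .sql("SELECT productName, updatedtime FROM products WHERE updatedtime BETWEEN ? AND ?")
        .preparedStatementSetter(ps -> { ps.setTimestamp(1, from); ps.setTimestamp(2, to); })
        .fetchSize(1000)
        .rowMapper((rs, i) -> new Product(rs.getString(1), rs.getTimestamp(2)))
        .build();

// Writer: batches the inserts into SQL Server.
JdbcBatchItemWriter<Product> writer = new JdbcBatchItemWriterBuilder<Product>()
        .dataSource(sqlServerDataSource)
        .sql("INSERT INTO products_copy (productName, updatedtime) VALUES (:name, :updated)")
        .beanMapped()
        .build();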
My use case is that I have to run a query on an RDS instance that returns 2 million records. I want to copy the result directly to disk instead of bringing it into memory and then copying it to disk.
The following statement will bring all the records into memory; I want to transfer the results directly to a file on disk.
Result<Record> result = dslContext.selectQuery().fetch();
Can anyone suggest a pointer?
Update1:
I found the following way to read it:
try (Cursor<BookRecord> cursor = create.selectFrom(BOOK).fetchLazy()) {
    while (cursor.hasNext()) {
        BookRecord book = cursor.fetchOne();
        Util.doThingsWithBook(book);
    }
}
How many records does it fetch at once and are those records brought in memory first?
Update2:
The MySQL driver by default fetches all the records at once. If the fetch size is set to Integer.MIN_VALUE, it fetches one record at a time. If you want to fetch the records in batches, set useCursorFetch=true in the connection properties.
Related wiki : https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-implementation-notes.html
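A compact illustration of those three modes (a sketch; host, schema, credentials and query are placeholders):

import java.sql.*;

// Default: the driver buffers the whole result set client-side.
// fetchSize = Integer.MIN_VALUE: streams one row at a time.
// useCursorFetch=true + a positive fetchSize: server-side batches, as below.
Connection conn = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/mydb?useCursorFetch=true", "user", "secret");
PreparedStatement ps = conn.prepareStatement("SELECT id, title FROM book");
ps.setFetchSize(500);   // rows now arrive from the server in batches of 500
try (ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        // process one row; at most ~500 rows are held in memory
    }
}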
Your approach using the ResultQuery.fetchLazy() method is the way to go for jOOQ to fetch records one at a time from JDBC. Note that you can use Cursor.fetchNext(int) to fetch a batch of records from JDBC as well.
There's a second thing you might need to configure, and that's the JDBC fetch size, see Statement.setFetchSize(int). This configures how many rows are fetched by the JDBC driver from the server in a single batch. Depending on your database / JDBC driver (e.g. MySQL), the default would again be to fetch all rows in one go. In order to specify the JDBC fetch size on a jOOQ query, use ResultQuery.fetchSize(int). So your loop would become:
try (Cursor<BookRecord> cursor = create
        .selectFrom(BOOK)
        .fetchSize(size)
        .fetchLazy()) {
    while (cursor.hasNext()) {
        BookRecord book = cursor.fetchOne();
        Util.doThingsWithBook(book);
    }
}
Please read your JDBC driver manual to see how it interprets the fetch size, noting that MySQL is "special".
The code that I am working with primarily uses Spring and JdbcTemplate to query the database.
As a non-working example, but just to get the idea across of how I get data and display it on my website...
There will be some object called Bike.
List<Bike> bikes = new ArrayList<>();
List<Map<String, Object>> rows = jdbcTemplate.queryForList(bikeQuery);
for (Map<String, Object> row : rows) {
    Bike b = new Bike();
    b.setProperty((String) row.get("-property-"));
    ....
    bikes.add(b);
}
However, sometimes the query can be too large, so my machine runs out of memory or the database query times out.
A solution that was brought to my attention was to query into a ResultSet and then iterate through it, streaming it directly to a file. I can scrap the display on the website and just let the user download an Excel table at the click of a button.
I see that I can use something like this (copied from the Oracle site):
OracleDataSource ods = new OracleDataSource();
ods.setUser(user);
ods.setPassword(password);
String url = "jdbc:oracle:thin:@//myhost:1521/orcl";
ods.setURL(url);
Connection conn = ods.getConnection();
Statement stmt = conn.createStatement();
ResultSet rset = stmt.executeQuery(query);
From here I think I can just iterate through rset and write to a file using a BufferedWriter.
The issue I have with this is that my code is pretty consistent, so how would I set the URL/user/password from the Spring properties file that I have? I don't want to hard-code them for a one-time occasion.
Also, is this the best way to approach this problem? Can I write to a file using jdbcTemplate + a ResultSet? I'm stuck on finding out how.
Slight update:
I assume that the query (handed off from someone else) is optimal and that all the data is necessary. This leaves me with the conclusion of streaming the query results straight to a file. Is there a way I can do this with jdbcTemplate, or do I have to do it via
Connection conn = ods.getConnection();
Statement stmt = conn.createStatement();
ResultSet rset = stmt.executeQuery(swSb);
And iterating through it on a next() basis?
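One way this might look with jdbcTemplate (a sketch, not verified against the actual schema): query(sql, RowCallbackHandler) processes rows one at a time, so nothing accumulates in memory; the output path and the single string column are assumptions:

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.springframework.jdbc.core.RowCallbackHandler;

jdbcTemplate.setFetchSize(1000);   // hint the driver to stream rather than buffer
try (BufferedWriter out = Files.newBufferedWriter(Paths.get("bikes.csv"))) {
    jdbcTemplate.query(bikeQuery, (RowCallbackHandler) rs -> {
        try {
            out.write(rs.getString(1));   // called once per row; the row is never retained
            out.newLine();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    });
}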
You don't describe the problem well: do you really need all the data? Is the database set up with indexes, and is the query optimal?
You can use Oracle pagination support (http://www.oracle.com/technetwork/issue-archive/2007/07-jan/o17asktom-093877.html) so the user gets the first X elements.
If you really need all the data and it is a lot, I would avoid mapping to an object, especially object instantiation inside a loop.
It would help if you could tell us how many rows you are expecting.
I was working on this code, and something weird has happened.
I make a bulk insert from an external file, but the result is fragmented, or maybe corrupted.
cnx = factoryInstace.getConnection();
pstmt = cnx.prepareStatement("DELETE FROM TEMPCELULAR");
pstmt.executeUpdate();
pstmt = cnx.prepareStatement(
    "EXECUTE BLOCK AS BEGIN if (exists(select 1 from rdb$relations where rdb$relation_name = 'EXT_TAB')) then execute statement 'DROP TABLE EXT_TAB;'; END");
pstmt.executeUpdate();
pstmt = cnx.prepareStatement(
    "CREATE TABLE EXT_TAB EXTERNAL '" + txtarchivoProcesar.getText() + "' (CELULAR varchar(11))");
pstmt.executeUpdate();
pstmt = cnx.prepareStatement("INSERT INTO TEMPCELULAR (CELULAR) SELECT CELULAR FROM EXT_TAB");
pstmt.executeUpdate();
pstmt = cnx.prepareStatement("SELECT CELULAR FROM TEMPCELULAR");
ResultSet rs = pstmt.executeQuery();
while (rs.next()) {
    System.out.println("::" + rs.getString(1));
}
And now, all of a sudden, the rows in my table look like this:
::c#gmail.com
::abc2#gmail.
::m
abc3#gma
::.com
abc4#
::ail.com
ab
::#gmail.com
::bc6#gmail.c
::abc7#gmai
::com
abc8#g
::il.com
abc
::gmail.com
::c10#gmail.c
::
The blank spaces between results were not made by me; this is the output exactly as it appears.
Source file for the external table:
abc#gmail.com
abc2#gmail.com
abc3#gmail.com
abc4#gmail.com
abc5#gmail.com
abc6#gmail.com
abc7#gmail.com
abc8#gmail.com
abc9#gmail.com
abc10#gmail.com
sneciosup#hotmail.com
What's wrong with my code? I haven't seen results this weird in years.
The database is created on the user's PC on the first run, so this happens in production every time I run the program.
Any help will be appreciated.
The external table file in Firebird is not just a plain-text file; it is a fixed-width format with special requirements on its content and layout. See the InterBase 6.0 Data Definition Guide, pages 107-111 (available for download from http://www.firebirdsql.org/en/reference-manuals/ ) or The Firebird Book by Helen Borrie, pages 281-287.
The problems I see right now are:
- You declare the column in the external table to be VARCHAR(11), while the shortest email address in your file is 13 characters and the longest is 21, so Firebird will never be able to read a full email address from the file.
- You don't specify a separate column for your line breaks, so the line breaks simply become part of the data that is read.
- You have declared the column as VARCHAR, which requires the records to have a very specific format: the first two bytes declare the actual data length, followed by a string of that length (and even then it only reads up to the declared length of the column). Either make sure you follow the requirements for VARCHAR columns, or simply use the CHAR datatype and pad the column with spaces up to the declared length.
I am not 100% sure, but your default database character set may also be involved in how the data is read.
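Putting the CHAR suggestion into the poster's own code, as a sketch: the width of 21 matches the longest address in the file, the 2-byte LINEBREAK column assumes Windows CRLF line endings (use CHAR(1) for Unix LF), and TRIM assumes Firebird 2.0+:

// Every record must be fixed width: CELULAR padded with spaces to 21 characters,
// with the line break absorbed by its own throwaway column.
pstmt = cnx.prepareStatement(
    "CREATE TABLE EXT_TAB EXTERNAL '" + txtarchivoProcesar.getText() + "' " +
    "(CELULAR CHAR(21), LINEBREAK CHAR(2))");
pstmt.executeUpdate();
// Strip the padding when copying into the real table:
pstmt = cnx.prepareStatement(
    "INSERT INTO TEMPCELULAR (CELULAR) SELECT TRIM(CELULAR) FROM EXT_TAB");
pstmt.executeUpdate();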
I have this really big table that gains some millions of records every day, and at the end of every day I extract all the records of the previous day. I am doing it like this:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
ResultSet rs = statement.executeQuery(SQL);
The problem is that this program takes about 2GB of memory because it reads all the results into memory and then processes them.
I tried setting Statement.setFetchSize(10), but it takes exactly the same amount of memory from the OS; it makes no difference. I am using the Microsoft SQL Server 2005 JDBC Driver.
Is there any way to read the results in small chunks, the way the Oracle driver does, where the query initially returns only a few rows and more are fetched as you scroll down?
In JDBC, the setFetchSize(int) method is very important to performance and memory-management within the JVM as it controls the number of network calls from the JVM to the database and correspondingly the amount of RAM used for ResultSet processing.
Inherently if setFetchSize(10) is being called and the driver is ignoring it, there are probably only two options:
Try a different JDBC driver that will honor the fetch-size hint.
Look at driver-specific properties on the Connection (URL and/or property map when creating the Connection instance).
The RESULT-SET is the full set of rows marshalled on the DB in response to the query.
The ROW-SET is the chunk of rows fetched out of the RESULT-SET per call from the JVM to the DB.
The number of these calls and resulting RAM required for processing is dependent on the fetch-size setting.
So if the RESULT-SET has 100 rows and the fetch-size is 10,
there will be 10 network calls to retrieve all of the data, using roughly 10*{row-content-size} RAM at any given time.
The default fetch-size is 10, which is rather small.
In the case posted, it would appear the driver is ignoring the fetch-size setting, retrieving all data in one call (large RAM requirement, optimum minimal network calls).
What happens underneath ResultSet.next() is that it doesn't actually fetch one row at a time from the RESULT-SET. It fetches the next row from the (local) ROW-SET, and fetches the next ROW-SET (invisibly) from the server as the local one becomes exhausted.
All of this depends on the driver as the setting is just a 'hint' but in practice I have found this is how it works for many drivers and databases (verified in many versions of Oracle, DB2 and MySQL).
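As a minimal illustration of the hint (a sketch; some drivers, e.g. PostgreSQL, additionally require auto-commit to be off, as another answer below notes):

conn.setAutoCommit(false);      // required by some drivers before they will stream
Statement st = conn.createStatement();
st.setFetchSize(1000);          // hint: pull ROW-SETs of 1000 rows per network call
try (ResultSet rs = st.executeQuery("select col1 from mytable")) {
    while (rs.next()) {
        // each next() reads from the local ROW-SET; the driver refills it as needed
    }
}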
The fetchSize parameter is a hint to the JDBC driver as to how many rows to fetch in one go from the database. But the driver is free to ignore this and do what it sees fit. Some drivers, like the Oracle one, fetch rows in chunks, so you can read very large result sets without needing lots of memory. Other drivers just read the whole result set in one go, and I'm guessing that's what your driver is doing.
You can try upgrading your driver to the SQL Server 2008 version (which might be better), or to the open-source jTDS driver.
You need to ensure that auto-commit on the Connection is turned off, or setFetchSize will have no effect.
dbConnection.setAutoCommit(false);
Edit: I remembered that when I used this fix it was Postgres-specific, but hopefully it will still work for SQL Server.
From the Statement interface doc:
void setFetchSize(int rows)
Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.
Also, read the ebook J2EE and Beyond by Art Taylor.
Sounds like the MSSQL JDBC driver is buffering the entire result set for you. You can add a connection-string parameter such as selectMethod=cursor or responseBuffering=adaptive. If you are on version 2.0+ of the 2005 MSSQL JDBC driver, response buffering should default to adaptive.
http://msdn.microsoft.com/en-us/library/bb879937.aspx
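For example (a sketch; host, port and database name are placeholders, and responseBuffering=adaptive is the property documented at the link above):

String url = "jdbc:sqlserver://myhost:1433;databaseName=mydb;responseBuffering=adaptive";
Connection conn = DriverManager.getConnection(url, user, password);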
It sounds to me that you really want to limit the rows being returned in your query and page through the results. If so, you can do something like:
select *
from (select rownum myrow, a.* from TEST1 a)
where myrow between 5 and 10;
You just have to determine your boundaries.
Try this:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
connection.setAutoCommit(false);
PreparedStatement stmt = connection.prepareStatement(SQL,
        SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY,
        SQLServerResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(2000);
stmt.set....
stmt.execute();
ResultSet rset = stmt.getResultSet();
while (rset.next()) {
    // ......
}
I had the exact same problem in a project. The issue is that even though the fetch size might be small enough, the JdbcTemplate reads the entire result of your query and maps it into a huge list, which can blow your memory.
I ended up extending NamedParameterJdbcTemplate to create a function which returns a Stream of objects. That Stream is backed by the ResultSet normally returned by JDBC, but pulls data from the ResultSet only as the Stream requires it. This works as long as you don't keep a reference to all the objects the Stream emits. I took a lot of inspiration from the implementation of org.springframework.jdbc.core.JdbcTemplate#execute(org.springframework.jdbc.core.ConnectionCallback); the only real difference is what is done with the ResultSet. I ended up writing this function to wrap up the ResultSet:
private <T> Stream<T> wrapIntoStream(ResultSet rs, RowMapper<T> mapper) {
    CustomSpliterator<T> spliterator = new CustomSpliterator<>(rs, mapper, Long.MAX_VALUE,
            Spliterator.NONNULL | Spliterator.IMMUTABLE | Spliterator.ORDERED);
    return StreamSupport.stream(spliterator, false);
}

private static class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T> {
    // won't put code for the constructor or fields here
    // the idea is to pull from the ResultSet and push into the Stream
    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            // you can add some logic to close the Stream/ResultSet automatically
            if (rs.next()) {
                T mapped = mapper.mapRow(rs, rowNumber++);
                action.accept(mapped);
                return true;
            } else {
                return false;
            }
        } catch (SQLException e) {
            throw new RuntimeException(e); // or handle/translate this exception as appropriate
        }
    }
}
You can add some logic to make that Stream auto-closable; otherwise don't forget to close it when you are done.
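A hypothetical usage sketch, assuming the extended template exposes the wrapper as streamQuery(sql, params, mapper) (a name invented here for illustration); note that newer Spring versions (5.3+) ship a similar JdbcTemplate.queryForStream out of the box:

// Close the Stream so the underlying ResultSet and Connection are released.
try (Stream<Book> books = customTemplate.streamQuery(SQL, params, bookMapper)) {
    books.forEach(b -> writer.println(b.getTitle()));   // rows are mapped lazily, one at a time
}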