In our production application we recently started getting a weird error from DB2:
Caused by: com.ibm.websphere.ce.cm.StaleConnectionException: [jcc][t4][2055][11259][4.13.80] The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated your particular request due to an error or a force interrupt. ERRORCODE=-4499, SQLSTATE=58009
This occurs when Hibernate tries to select data from one big table (more than 6 million records and 320 columns).
I observed that when the ResultSet holds fewer than 10 elements, Hibernate selects successfully.
Our architecture:
Spring 4.0.3
Hibernate 4.3.5
DB2 v10 z/OS
WebSphere 7.0.0.31 (with JDBC driver V9.7FP5)
This select works when I execute it in Data Studio, and when the app is started locally from Tomcat (connected to the production data source). I suspect that the data source on WebSphere is not correctly configured, but I tried some modifications without results. I also tried updating the JDBC driver, but that did not help; after the update I got ERRORCODE = -1244 instead.
Ok, so now I'm looking for any help ;).
I can obviously provide additional information when needed.
Maybe someone has fought with this problem before?
Thanks in advance!
We had the same problem and finally solved it by running REORG and RUNSTATS on the table(s). In our case, the database and tables were damaged, and after running both operations the issue was resolved.
This occurs when Hibernate tries to select data from one big table (more than 6 million records and 320 columns)
6 million records with 320 columns seems huge to read at once through Hibernate. Have you tried creating a database cursor and streaming a few records at a time? In plain JDBC it is done as follows:
Statement stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                      java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(50); // fetch only 50 records at a time
while with Hibernate you would need code like the following:
Query query = session.createQuery(hql); // hql holds your query string
query.setReadOnly(true);
query.setFetchSize(50);
ScrollableResults results = query.scroll(ScrollMode.FORWARD_ONLY);
// iterate over the results
while (results.next()) {
    Object[] row = results.get();
    // process the row, then release the reference
    // you may need to flush() the session periodically as well
}
results.close();
This lets you stream over the result set; however, Hibernate will still cache results in the Session, so you'll need to call session.flush() every so often. If you are only reading data, you might consider using a StatelessSession, though you should read its documentation beforehand.
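For reference, a minimal sketch of the StatelessSession variant (the entity name and query string are assumptions for illustration):
StatelessSession statelessSession = sessionFactory.openStatelessSession();
try {
    ScrollableResults results = statelessSession
            .createQuery("from BigEntity") // hypothetical entity
            .setReadOnly(true)
            .setFetchSize(50)
            .scroll(ScrollMode.FORWARD_ONLY);
    while (results.next()) {
        BigEntity row = (BigEntity) results.get(0);
        // process the row; a StatelessSession has no first-level cache,
        // so no periodic flush()/clear() is needed
    }
    results.close();
} finally {
    statelessSession.close();
}
Since a StatelessSession bypasses the first-level cache, there is nothing to flush or clear while scrolling.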
Analyze the database table locking impact when using this approach.
Related
I am trying to insert 1000 records using Hibernate JDBC batching, and sometimes we get a unique constraint violation for one of the records. Is there any way I can force Hibernate to return which row's data caused the constraint violation?
Whenever a constraint violation occurs, Hibernate just returns a generic database constraint error.
I know I can go back to the database and check, but I'm looking for a Hibernate feature that logs or returns only the offending data.
My backend is Oracle.
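Not a built-in Hibernate feature, but a common fallback sketch: when the batch fails, retry the same records one at a time in separate transactions to isolate the offending row (the entity, collection, and logger names here are hypothetical):
for (MyEntity entity : failedBatch) { // MyEntity and failedBatch are illustrative
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    try {
        session.save(entity);
        tx.commit();
    } catch (org.hibernate.exception.ConstraintViolationException e) {
        tx.rollback();
        log.error("Constraint violated by row: " + entity, e); // this is the culprit
    } finally {
        session.close();
    }
}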
I am looking for a technical solution to query data from one DB and load it into a SQL Server database using Java Spring Boot.
Mock query to get productNames which were updated within a given 20-hour window:
SELECT productName, updatedtime
FROM products
WHERE updatedtime BETWEEN '2018-03-26 00:00:01' AND '2018-03-26 19:59:59';
Here is the approach we followed.
1) It is a long-running Oracle query which takes approximately 1 hour during business hours and returns ~1 million records.
2) We have to insert/dump this result set into a SQL Server table using JDBC.
3) As I understand, the Oracle JDBC driver supports a kind of streaming: when we iterate over the ResultSet, it loads only fetchSize rows into memory at a time.
int currentRow = 1;
while (rs.next()) {
    // read the current row from the Oracle ResultSet and accumulate it in a batch
    if (currentRow++ % BATCH_SIZE == 0) {
        // insert the whole accumulated batch into the SQL Server database
    }
}
In this case we do not need to hold the entire huge Oracle dataset in memory, and we insert into SQL Server in batches of BATCH_SIZE. The only thing left to decide is where to commit on the SQL Server side.
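For concreteness, a minimal sketch of this loop with the batching filled in (the target table/columns and the commit-per-batch policy are assumptions on my side):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class OracleToSqlServerCopy {

    private static final int BATCH_SIZE = 1000; // illustrative value

    // oracle and sqlServer are assumed to be open connections to the two databases
    public static void copy(Connection oracle, Connection sqlServer) throws SQLException {
        sqlServer.setAutoCommit(false); // we commit once per batch below
        String select = "SELECT productName, updatedtime FROM products"
                + " WHERE updatedtime BETWEEN ? AND ?";
        // the target table/columns are illustrative
        String insert = "INSERT INTO products_copy (productName, updatedtime) VALUES (?, ?)";
        try (PreparedStatement src = oracle.prepareStatement(select);
             PreparedStatement dst = sqlServer.prepareStatement(insert)) {
            src.setFetchSize(1000); // stream rows from Oracle instead of buffering them all
            src.setTimestamp(1, Timestamp.valueOf("2018-03-26 00:00:01"));
            src.setTimestamp(2, Timestamp.valueOf("2018-03-26 19:59:59"));
            int currentRow = 0;
            try (ResultSet rs = src.executeQuery()) {
                while (rs.next()) {
                    dst.setString(1, rs.getString("productName"));
                    dst.setTimestamp(2, rs.getTimestamp("updatedtime"));
                    dst.addBatch();
                    if (++currentRow % BATCH_SIZE == 0) {
                        dst.executeBatch();
                        sqlServer.commit(); // small transactions keep the SQL Server log manageable
                    }
                }
            }
            dst.executeBatch(); // flush the final partial batch
            sqlServer.commit();
        }
    }
}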
4) The bottleneck here is the query execution time to get the data from the Oracle DB, so I am planning to split the query into 10 equal parts, each covering a slice of updatedtime (one hour, as shown below), so that the execution time drops to ~10 min per query.
e.g.:
SELECT productName, updatedtime
FROM products
WHERE updatedtime BETWEEN '2018-03-26 01:00:01' AND '2018-03-26 01:59:59';
5) For that I would need 5 Oracle JDBC connections and 5 SQL Server connections (to query the data and insert into the DB) so that each part can do its job independently. I am new to JDBC connection pooling.
How can I do the connection pooling, close connections that are no longer in use, etc.?
Please suggest if you have any better approach to get the data from the data source quickly, as near-real-time data. Thanks in advance.
This is a typical use case for Spring Batch.
There you have the concepts of an ItemReader (from your source DB) and an ItemWriter (into your destination DB).
You can define multiple datasources, and you get built-in support for reading with a fixed fetch size (JdbcCursorItemReader, for instance) as well as creating a grid for parallel execution.
With a quick search you can find many examples online for this kind of task.
I'm not posting a complete, polished example of the concept, but a minimal sketch follows.
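A minimal sketch of such a step, assuming a Spring Batch setup with two configured DataSources and a simple Product POJO (all names here are illustrative):
import javax.sql.DataSource;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CopyJobConfig {

    // Product is assumed to be a simple POJO with name/updatedTime fields.

    @Bean
    public JdbcCursorItemReader<Product> reader(DataSource oracleDataSource) {
        JdbcCursorItemReader<Product> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(oracleDataSource);
        reader.setSql("SELECT productName, updatedtime FROM products");
        reader.setFetchSize(1000); // rows streamed per round trip, not all held in memory
        reader.setRowMapper((rs, rowNum) ->
                new Product(rs.getString("productName"), rs.getTimestamp("updatedtime")));
        return reader;
    }

    @Bean
    public JdbcBatchItemWriter<Product> writer(DataSource sqlServerDataSource) {
        JdbcBatchItemWriter<Product> writer = new JdbcBatchItemWriter<>();
        writer.setDataSource(sqlServerDataSource);
        // the target table/columns are illustrative
        writer.setSql("INSERT INTO products_copy (productName, updatedtime) VALUES (?, ?)");
        writer.setItemPreparedStatementSetter((item, ps) -> {
            ps.setString(1, item.getName());
            ps.setTimestamp(2, item.getUpdatedTime());
        });
        return writer;
    }

    @Bean
    public Step copyStep(StepBuilderFactory steps,
                         JdbcCursorItemReader<Product> reader,
                         JdbcBatchItemWriter<Product> writer) {
        return steps.get("copyStep")
                .<Product, Product>chunk(1000) // chunk size doubles as the commit interval
                .reader(reader)
                .writer(writer)
                .build();
    }
}
Note that Spring Boot auto-configures a connection pool for each DataSource (HikariCP by default in Boot 2.x), which also covers the pooling part of the question.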
My use case is that I have to run a query on an RDS instance which returns 2 million records. Now I want to copy the result directly to disk instead of bringing it into memory and then copying it to disk.
The following statement will bring all the records into memory; I want to stream the results directly to a file on disk.
Result<Record> abc = dslContext.selectQuery().fetch(); // fetch() materializes everything
Can anyone suggest a pointer?
Update 1:
I found the following way to read it:
try (Cursor<BookRecord> cursor = create.selectFrom(BOOK).fetchLazy()) {
    while (cursor.hasNext()) {
        BookRecord book = cursor.fetchOne();
        Util.doThingsWithBook(book);
    }
}
How many records does it fetch at once, and are those records brought into memory first?
Update 2:
The MySQL driver by default fetches all the records at once. If the fetch size is set to Integer.MIN_VALUE, it fetches one record at a time. If you want to fetch the records in batches, set useCursorFetch=true in the connection properties.
Related wiki : https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-implementation-notes.html
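To make those three modes concrete, a minimal plain-JDBC sketch (the URL, credentials, and query are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MySqlFetchModes {
    public static void main(String[] args) throws Exception {
        // Default mode: the driver materializes the whole result set in memory.

        // Streaming mode: requested with the magic value Integer.MIN_VALUE,
        // the driver then fetches one row at a time.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "pass");
             PreparedStatement ps = conn.prepareStatement("SELECT id FROM book")) {
            ps.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* process rs.getLong(1) */ }
            }
        }

        // Cursor mode: useCursorFetch=true on the URL makes setFetchSize(n)
        // behave as a real server-side cursor batch size.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test?useCursorFetch=true", "user", "pass");
             PreparedStatement ps = conn.prepareStatement("SELECT id FROM book")) {
            ps.setFetchSize(50);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* process */ }
            }
        }
    }
}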
Your approach using the ResultQuery.fetchLazy() method is the way to go for jOOQ to fetch records one at a time from JDBC. Note that you can use Cursor.fetchNext(int) to fetch a batch of records from JDBC as well.
There's a second thing you might need to configure, and that's the JDBC fetch size, see Statement.setFetchSize(int). This configures how many rows are fetched by the JDBC driver from the server in a single batch. Depending on your database / JDBC driver (e.g. MySQL), the default would again be to fetch all rows in one go. In order to specify the JDBC fetch size on a jOOQ query, use ResultQuery.fetchSize(int). So your loop would become:
try (Cursor<BookRecord> cursor = create
        .selectFrom(BOOK)
        .fetchSize(size)
        .fetchLazy()) {
    while (cursor.hasNext()) {
        BookRecord book = cursor.fetchOne();
        Util.doThingsWithBook(book);
    }
}
Please read your JDBC driver manual about how it interprets the fetch size, noting that MySQL is "special".
We use the DataStax Java Cassandra driver 2.1.2. The Cassandra version we use is 2.0.9.
We have a statement which we build with QueryBuilder, and we explicitly set the consistency level of the statement to TWO.
Select selectStatement = QueryBuilder.select().from(ARTICLES);
selectStatement.where(eq(ORGANIZATION_ID, organizationId));
selectStatement.setConsistencyLevel(ConsistencyLevel.TWO);
final ResultSet rs = session.execute(selectStatement);
// the call to all() will be removed, since it is enough to iterate over the
// result set; then you get pagination for free instead of loading everything
// into memory
List<Row> rows = rs.all();
for (final Row row : rows) {
    // do something with Row, convert to POJO
}
We get an exception like this:
com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency ALL (3 responses were required but only 2 replica responded)
com.datastax.driver.core.exceptions.ReadTimeoutException.copy (ReadTimeoutException.java:69)
com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException (DefaultResultSetFuture.java:259)
com.datastax.driver.core.ArrayBackedResultSet$MultiPage.prepareNextRow (ArrayBackedResultSet.java:279)
com.datastax.driver.core.ArrayBackedResultSet$MultiPage.isExhausted (ArrayBackedResultSet.java:239)
com.datastax.driver.core.ArrayBackedResultSet$1.hasNext (ArrayBackedResultSet.java:122)
com.datastax.driver.core.ArrayBackedResultSet.all (ArrayBackedResultSet.java:111)
I know that calling all() on the ResultSet loads all the articles for the organization into memory and puts load on Cassandra; this will be removed as noted in the comments. That could cause a read timeout, but I am still puzzled why the exception message says ALL.
The question is why the exception claims consistency level ALL was used when we set it to TWO on the original statement. Is all() internally doing something with the query and using CL ALL by default?
Your problem is almost certainly https://issues.apache.org/jira/browse/CASSANDRA-7947. You are seeing an error message from a failed read repair; it is unrelated to your original consistency level. This is fixed in 2.1.3+.
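As an aside, once all() is removed, direct iteration lets the driver page through results; a minimal sketch (the fetch size value is an illustrative assumption):
Select selectStatement = QueryBuilder.select().from(ARTICLES);
selectStatement.where(eq(ORGANIZATION_ID, organizationId));
selectStatement.setConsistencyLevel(ConsistencyLevel.TWO);
selectStatement.setFetchSize(500); // rows per page; illustrative value

ResultSet rs = session.execute(selectStatement);
for (Row row : rs) { // pages are fetched lazily as you iterate
    // do something with Row, convert to POJO
}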
Short version of my question is:
PreparedStatement ps;
ps = connection.prepareStatement("Insert into T values (?)");
ps.setBoolean(1, true);
ps.executeUpdate();
What could be the reasons for this code sample to produce a query with the value wrapped in quotes?
Long version of my question is:
I have a Java EE application with plain JDBC for DB interactions, and recently I noticed MySQLDataTruncation exceptions appearing in my logs. These exceptions occurred on attempts to save an entity into a DB table which has a boolean column defined as BIT(1), and the cause was that the generated query looked like this:
Insert into T values ('1');
Note that the value is wrapped in quotes. The query was logged from the application with a Log4j log.info(ps); statement.
Previous logs show that there were no quotes.
Furthermore, even the MySQL server logs started to look different. Before this happened, I got a pair of records for each query executed:
12345 Prepare Insert into T values (?)
12345 Execute Insert into T values (1)
And after:
12345 Query Insert into T values ('1')
It is worth noting that these changes were not the result of deploying a new version of the application or even restarting the MySQL/application server, and the code responsible for query generation is as straightforward as the example in this question.
An application server restart fixed the issue for about 12 hours, and then it happened again. As a temporary solution I changed the BIT columns to TINYINT.
P.S. Examining both the application and MySQL logs allowed me to narrow down the time span when something went wrong to about 2 minutes, but there was nothing abnormal in the logs in that period.
P.P.S. The application server is GlassFish 2.1.1, the MySQL server version is 5.5.31-1~dotdeb, and the MySQL Connector/J version is 5.0.3.
Well, it turned out it was actually an issue with unclosed prepared statements.
When the number of open statements on the MySQL server reached the allowed maximum, the application was somehow still able to continue working, without producing an SQL error:
Error Code: 1461 Can't create more than max_prepared_stmt_count statements
But in that mode it started to wrap boolean values in quotes, causing all my troubles with the BIT(1) columns.
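The fix, for reference, is to close every statement deterministically; a minimal sketch using try-with-resources (assuming Java 7+):
try (PreparedStatement ps = connection.prepareStatement("Insert into T values (?)")) {
    ps.setBoolean(1, true);
    ps.executeUpdate();
} // ps is closed here, so the server-side count never creeps toward max_prepared_stmt_count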