Fetching a single row result from MySQL in Java without iterating in a loop

I have a simple query that returns only a count of rows:

select count(*) cnt from table

I can read it by iterating through the ResultSet, like:

while (rs.next()) {
    int rowCount = rs.getInt("cnt");
}

But is there any way I can get the count directly, without looping?

How about:
int rowCount = rs.next() ? rs.getInt("cnt") : -1;

It doesn't save you much, though.
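A slightly fuller sketch of the same idea, assuming an open java.sql.Connection named conn and the standard java.sql imports; the query and the cnt alias are taken from the question:

// COUNT(*) always returns exactly one row, so a single next() is enough.
String sql = "select count(*) cnt from table";
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(sql)) {
    int rowCount = rs.next() ? rs.getInt("cnt") : -1;
    System.out.println("rowCount = " + rowCount);
}

The try-with-resources block also takes care of closing the statement and result set.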

Related

How to get only 5 rows with JDBC from ResultSet?

I have a SELECT SQL that may return many rows of data, and I only want the first 5 rows.
Besides adding rownums = 5 to the query or calling statement.setMaxRows(5), can I get the result using Java code alone?
Thanks.
I tried a for loop and while(rs.next() && i < 5); none of them work.
try (Connection connection = abc.getConnection();
     PreparedStatement statement = connection.prepareStatement(sql)) {
    statement.setString(1, idNum);
    ResultSet rs = statement.executeQuery();
    for (int i = 0; i < 5; i++) {
        while (rs.next()) {
            itemList.add(rs.getString("idName"));
        }
    }
}
It puts all of the results of the SELECT SQL into itemList, not just the first 5.
Currently, a single iteration of your loop exhausts the whole resultset.
You may want to combine the end conditions, like:

for (int i = 0; i < 5 && rs.next(); i++) {
    itemList.add(rs.getString("idName"));
}

Note that your attempt with while(rs.next() && i < 5) should also work; you were probably just missing the increment of i.
You can read only the first 5 rows from the ResultSet on the client side, but ideally you should limit the number of rows returned by the database. Use LIMIT 5 in the query.
This avoids a lot of the unnecessary work needed to return those extra rows from the database to the client.
Your query is responsible for fetching the data from the DB, so it is better to restrict the rows in the database itself; that avoids the extra checks in your Java code. Use LIMIT 5 in your query, based on your requirements.
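A minimal sketch combining both suggestions. LIMIT is MySQL-style syntax (other databases have their own equivalents), the table name is a placeholder, and abc, idNum and itemList are the names from the question; setMaxRows(5) is the database-independent JDBC way to cap the result:

String sql = "SELECT idName FROM some_table WHERE id = ? LIMIT 5";
try (Connection connection = abc.getConnection();
     PreparedStatement statement = connection.prepareStatement(sql)) {
    statement.setMaxRows(5);                 // client-side cap, works with any driver
    statement.setString(1, idNum);
    try (ResultSet rs = statement.executeQuery()) {
        while (rs.next()) {                  // at most 5 rows can come back now
            itemList.add(rs.getString("idName"));
        }
    }
}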

How to get Deleted Row Count in HQL

I'm doing a DELETE FROM in HQL:
String queryText = "DELETE FROM Books WHERE author = 'author1'";
final Query query = getCurrentSession().createQuery(queryText);
query.executeUpdate();
How can I get the number of deleted rows in the query?
The method executeUpdate returns the number of entities deleted.
So you will get the number of deleted rows (n) with the following code:
int n = query.executeUpdate();
Similarly, in plain JDBC, PreparedStatement.executeUpdate() returns the number of rows affected.
For example:
int row = preparedStatement.executeUpdate();
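A quick sketch of that in JDBC (the table and column names are made up for illustration, and connection is assumed to be an open java.sql.Connection):

String sql = "DELETE FROM books WHERE author = ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, "author1");
    int deletedRows = ps.executeUpdate();    // number of rows removed
    System.out.println(deletedRows + " rows deleted");
}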

Java statement.setFetchSize() works well when ResultSet size == Fetch size

I have a problem with retrieving data from an Oracle database. I use ojdbc8, and my code is:

Statement stmt = conn.createStatement();
stmt.setFetchSize(20000);
ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
    for (int i = 1; i <= columnCounter; i++) {
        logger.info(rs.getString(i) + " ");
    }
}
What I don't understand is this: when my query returns, let's say, 53000 rows in total, the first 40000 rows are printed to the console very quickly in the while loop, but then there is a huge 20-25 second break where nothing happens, and after that the remaining rows are printed. It is always like that. If my query returns 81000 rows, then 80000 rows are printed very quickly, then there is a long break, and then the missing 1000 rows appear.
I don't know why, but it looks like as long as the ResultSet delivers exactly 20000 rows, which is the fetch size, everything goes well; when the remaining rows are fewer than the fetch size, it slows down. Can anyone explain what is going on here, and how to fix it to get rid of this huge gap/break?

Failure to enter the while loop even when the condition is true

I am working on extracting data from a database. Please find the code below.
I am using org.springframework.jdbc.support.rowset.SqlRowSet from Spring JDBC.
String query="SELECT * from TABLE_NAME where id=? and password=?";
args.add(userId);
args.add(password);
SqlRowSet rs = this.jdbcTemplate.queryForRowSet(query, args.toArray());
while (rs.next()) {
    // ---Some Code---
}

rs.next() is true, but the code never goes into the loop. I need some help on how to overcome this issue. Any help is appreciated.
This is just a guess, but since you know that rs.next() returns true, it means you executed it (either in debug mode or printed to console or whatever).
Every time you execute it, it advances the rowset to the next row, if there is one. If your rowset contains only 1 row, and you "check" the value returned by rs.next() before the loop, the loop will never be entered because when it's called again there are no more rows, so it returns false.
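In other words, don't consume a row just to check it; let the loop condition do the advancing. A minimal sketch with the same jdbcTemplate, query and args as above (the column name is a placeholder):

SqlRowSet rs = this.jdbcTemplate.queryForRowSet(query, args.toArray());
// No rs.next() here - a "check" before the loop would eat the first (possibly only) row.
while (rs.next()) {                     // the condition advances the cursor
    String id = rs.getString("id");     // "id" is a placeholder column name
    // ... process the row ...
}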
I don't know the reason for that, but I have changed the implementation: I replaced the while loop with a for loop by getting the size from the result set. Thanks a lot for your suggestions.
int size = 0;
if (rs != null) {
    rs.beforeFirst();
    rs.last();
    size = rs.getRow();
}
for (int i = 0; i < size; i++) {
    // Do SOMETHING
}

Retrieve and Insert million records into table

There's a column I want to retrieve and insert into another table.
For example, below is the first table whose values I want to retrieve:

Table1
Records
1 ABC Singapore
2 DEF Vietnam

I retrieve the above column values from Table1, then insert them into another table as below:

Table 2
ID Name Country
1  ABC  Singapore
2  DEF  Vietnam

Currently, I can do this with Java: I first retrieve the records, then split the values and insert them. However, I want to do it in batches, or with pagination, for better performance when Table1 has millions of records to retrieve and insert into Table2.
Any pointer on how to use pagination in my case would be appreciated.
I'm using MSSQL 2008.
If you need to do that in code (and not in SQL, which should be easier even with multiple delimiters), what you probably want is batched inserts with a proper batch size, combined with a good fetch size on your select:
// Prepare statements first
try (PreparedStatement select = con.prepareStatement("SELECT * FROM SOURCE_TABLE");
     PreparedStatement insert = con.prepareStatement("INSERT INTO TARGET_TABLE(col1, col2, col3) VALUES (?,?,?)")) {
    // Define parameters for SELECT
    select.setFetchDirection(ResultSet.FETCH_FORWARD);
    select.setFetchSize(10000);
    int rowCnt = 0;
    try (ResultSet rs = select.executeQuery()) {
        while (rs.next()) {
            String row = rs.getString(1);
            String[] split = row.split(" |\\$|\\*"); // However you want to do that
            // Todo: Error handling for array length
            // Todo: Type conversions, if target data is not a string type
            insert.setString(1, split[0]);
            insert.setString(2, split[1]);
            insert.setString(3, split[2]);
            insert.addBatch();
            // Submit insert in batches of a good size:
            if (++rowCnt % 10000 == 0) {
                int[] success = insert.executeBatch();
                // Todo: Check if that worked.
            }
        }
        // Handle remaining inserts
        int[] success = insert.executeBatch();
        // Todo: Check if that worked.
    }
} catch (SQLException e) {
    // Handle your exceptions
}
When working out "good" fetch and batch sizes, you'll want to consider a few parameters:
Fetch size impacts memory consumption in your client. If you have enough memory, you can make it big.
Committing an insert of millions of rows will take some time. Depending on your requirements, you might want to commit the insert transaction every once in a while (every 250,000 inserts?), as in the sketch below.
Think about your transaction isolation: make sure auto-commit is turned off, as committing each insert will make most of the batching gains go away.
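A minimal sketch of that commit strategy, assuming the same Connection con, insert statement and result set rs as in the snippet above (the 250,000 interval is just an example):

// Turn off auto-commit so executeBatch() calls are not each committed individually.
con.setAutoCommit(false);
int rowCnt = 0;
while (rs.next()) {
    // ... set the insert parameters and call insert.addBatch() as shown above ...
    if (++rowCnt % 10000 == 0) {
        insert.executeBatch();      // send a batch to the database
    }
    if (rowCnt % 250000 == 0) {
        con.commit();               // commit every 250,000 rows
    }
}
insert.executeBatch();              // flush the last partial batch
con.commit();                       // commit whatever is left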
