I have a java.sql.ResultSet object that I need to update. However, the result set is not updatable; unfortunately this is a constraint of the particular framework I'm using.
What I'm trying to achieve here is taking data from a database, manipulating a small amount of it, and finally writing it out to a CSV file.
At this stage I think my best option is to create a new result set object and copy the contents of the original result set into the new one, manipulating the data as I do so.
However, I've hunted high and low on Google and don't seem to be able to determine how to do this or whether it's even possible at all.
I'm new to everything Java so any assistance would be gratefully received.
Thanks for the responses. In the end I found CachedRowSet which is exactly what I needed. With this I was able to disconnect the ResultSet object and update it.
What's more, because CachedRowSet implements the ResultSet interface, I was still able to pass it to my file-generation method, which requires an object that implements ResultSet.
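For anyone who finds this later, a minimal sketch of that approach; the writeCsv method and the "name" column are placeholders for whatever your file-generation code expects:
import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetProvider;

// Copy the read-only ResultSet into an updatable, disconnected rowset.
CachedRowSet cachedRowSet = RowSetProvider.newFactory().createCachedRowSet();
cachedRowSet.populate(resultSet); // resultSet is the original, non-updatable one

// Manipulate the data in memory; no live connection is needed.
while (cachedRowSet.next()) {
    String name = cachedRowSet.getString("name");
    cachedRowSet.updateString("name", name.trim()); // whatever manipulation you need
    cachedRowSet.updateRow();
}
cachedRowSet.beforeFirst(); // rewind before handing it on

writeCsv(cachedRowSet); // works because CachedRowSet implements ResultSet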
The normal practice would be to map the ResultSet to a List<Entity>, where Entity is your own class containing the data represented by a single database row. E.g. User, Person, Address, Product, Order, etc., depending on what the table actually contains.
List<Entity> entities = new ArrayList<Entity>();
// ...
while (resultSet.next()) {
    Entity entity = new Entity();
    entity.setId(resultSet.getLong("id"));
    entity.setName(resultSet.getString("name"));
    entity.setValue(resultSet.getInt("value"));
    // ...
    entities.add(entity);
}
// ...
return entities;
Then you can access, traverse and modify the list the usual Java way. Finally, when persisting the changes back to the DB, use a PreparedStatement to update the rows in batches in a single go.
String sql = "UPDATE entity SET name = ?, value = ? WHERE id = ?";
// ...
statement = connection.prepareStatement(sql);
for (Entity entity : entities) {
    statement.setString(1, entity.getName());
    statement.setInt(2, entity.getValue());
    statement.setLong(3, entity.getId());
    // ...
    statement.addBatch();
}
statement.executeBatch();
// ...
Note that some DBs have a limit on the batch size. Oracle's JDBC driver, for example, has a limit of around 1000 items, so you may want to call executeBatch() every 1000 items. That is simple to do with a counter inside the loop, as sketched below.
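A minimal sketch of that counter, reusing the statement and entities from above (the 1000 threshold is just the example figure):
int batchSize = 1000; // flush threshold; tune for your driver/DB
int count = 0;
for (Entity entity : entities) {
    statement.setString(1, entity.getName());
    statement.setInt(2, entity.getValue());
    statement.setLong(3, entity.getId());
    statement.addBatch();
    if (++count % batchSize == 0) {
        statement.executeBatch(); // flush every batchSize items
    }
}
statement.executeBatch(); // flush the remaining items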
See also:
Collections tutorial
PreparedStatement tutorial
The code that I am working with primarily uses Spring and jdbcTemplate to query the database.
As a non-working example, just to get across how I get data and display it on my website:
There will be some object called Bike.
List<Bike> bikes = new ArrayList<>();
List<Map<String, Object>> rows = jdbcTemplate.queryForList(bikeQuery);
for (Map<String, Object> row : rows) {
    Bike b = new Bike();
    b.setProperty((String) row.get("property")); // "property" stands in for each mapped column
    // ...
    bikes.add(b);
}
However, sometimes the query result can be too large, and my computer can run out of memory, or the database query can time out.
A solution that was brought to my attention was to just query into a ResultSet and then iterate through it, streaming it directly to a file. I can scrap the display on the website and just let the user download an Excel table at the click of a button.
I see that I can use something like this (copied from the Oracle site):
OracleDataSource ods = new OracleDataSource();
ods.setURL("jdbc:oracle:thin:scott/tiger@//myhost:1521/orcl");
ods.setUser(user);
ods.setPassword(password);
Connection conn = ods.getConnection();
Statement stmt = conn.createStatement();
ResultSet rset = stmt.executeQuery(query);
From here I think I can just iterate through rset and write to a file using a BufferedWriter.
The issue I have is that the rest of my code consistently goes through Spring, so how would I set the URL/user/password from the Spring properties file that I have? I don't want to type them into this file for a one-off occasion.
Also, is this the best way to approach the problem? Can I write to a file using jdbcTemplate + ResultSet? I'm stuck on finding a way to do it.
Slight update:
I assume that the query (passed off from someone else) is optimal and that all the data is necessary. This leaves me with the conclusion of streaming the query results straight to a file. Is there a way I can do this with jdbcTemplate, or do I have to do it via
Connection conn = ods.getConnection();
Statement stmt = conn.createStatement();
ResultSet rset = stmt.executeQuery(swSb);
And iterating through it on a next() basis?
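For what it's worth, a minimal sketch of that next()-based streaming, reusing the ods and query variables from above; the output path and column names are placeholders, and setFetchSize is only a hint to the driver:
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

try (Connection conn = ods.getConnection();
     Statement stmt = conn.createStatement()) {
    stmt.setFetchSize(500); // hint: pull rows from the server in chunks, not all at once
    try (ResultSet rset = stmt.executeQuery(query);
         BufferedWriter out = new BufferedWriter(new FileWriter("export.csv"))) {
        while (rset.next()) {
            out.write(rset.getString("col1")); // "col1"/"col2" stand in for real columns
            out.write(',');
            out.write(rset.getString("col2"));
            out.newLine();
        }
    }
}
If you would rather stay on jdbcTemplate, its query(String, RowCallbackHandler) overload hands you one row at a time as it is read, so you can write each row to the file without materializing a list.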
You don't describe the problem well: do you really need all the data? Is the database set up with indexes, and is the query optimal?
You can use Oracle pagination support http://www.oracle.com/technetwork/issue-archive/2007/07-jan/o17asktom-093877.html so the user gets the first X elements.
If you really need all the data and there is a lot of it, I would avoid mapping to objects, especially object instantiation inside a loop.
It would help if you could tell how many rows you are expecting.
Hi everyone. I'm new to Hibernate, and I'm making a desktop application. I have 2 tables: Worker and Ceh (i.e. Department). The relation between them is many-to-one, i.e. one Ceh may contain many workers.
I run an HQL query with an inner join to show info about all workers, including the name of the department, and I want to show the results in a JTable.
The HQL query:
private static String query_All_Workers = "select W.fio, W.nomer, W.salary, C.name from Worker W inner join W.ceh C";
The method that runs the query:
try {
    Session session = HibernateUtil.getSessionFactory().openSession();
    session.beginTransaction();
    Query q = session.createQuery(hql);
    List resultList = q.list();
    displayResult(resultList);
    session.getTransaction().commit();
} catch (HibernateException he) {
    he.printStackTrace();
}
The method displayResult(List resultList):
Vector<String> tableHeaders = new Vector<>();
tableHeaders.add("FIO");
tableHeaders.add("Nomer");
tableHeaders.add("Salary");
tableHeaders.add("Ceh");
Vector<Vector<Object>> tableData = new Vector<>();
for (Object o : resultList) {
    Worker worker = (Worker) o;
    Vector<Object> oneRow = new Vector<Object>();
    oneRow.add(worker.getFio());
    oneRow.add(worker.getNomer());
    oneRow.add(worker.getSalary());
    oneRow.add(worker.getCeh());
    tableData.add(oneRow);
}
resultTable.setModel(new DefaultTableModel(tableData, tableHeaders));
And this exception occurs:
"java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to
workers.entity.Worker"
It happens because the list contains Object[] arrays, the per-row results of the inner-join query. So I don't know how I can correctly get at the Worker entity in order to use its getters.
You’re getting the “java.lang.ClassCastException” because you are trying to cast something that is not a Worker to the Worker class:
Worker worker = (Worker)o;
There’s nothing wrong with what you’re trying to do, but make sure the query actually returns Worker instances, which is not the case here. Because your query selects individual columns (a projection), Hibernate returns each row as an Object[] holding those column values, not as a Worker.
To fix that, check out Hibernate’s Query Language (HQL) syntax and write a query that returns the entity itself instead of a list of columns.
Quick tutorial here http://www.tutorialspoint.com/hibernate/hibernate_query_language.htm
I would advise you to do so from now on, because you will gain the following benefits:
When you write select statements (or any others, for that matter) in HQL, you think in and use Java objects, not DB tables (this helps a lot with table foreign-key mappings);
HQL returns whole Java objects, since Hibernate does the necessary conversion for you.
In your case you just need to replace query_All_Workers with this: “from Worker”. Yup, that’s it! It looks weird, but as I said before, Hibernate takes care of all the conversion;
Once you’ve done that, and assuming that your Java class is properly mapped as a Hibernate entity, the result list will contain Worker instances this time, from which you can easily extract the Ceh’s name using the getter:
worker.getCeh().getName();
Also, with HQL you will not need a second select against the Ceh table just to get the name, as you do right now; a minimal sketch follows.
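Here is a sketch of the corrected query and display loop, assuming the session and tableData from the question:
Query q = session.createQuery("from Worker");
List<Worker> workers = q.list(); // each element is now a Worker, not an Object[]
for (Worker worker : workers) {
    Vector<Object> oneRow = new Vector<Object>();
    oneRow.add(worker.getFio());
    oneRow.add(worker.getNomer());
    oneRow.add(worker.getSalary());
    oneRow.add(worker.getCeh().getName()); // department name via the mapped association
    tableData.add(oneRow);
}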
Hope that helps.
I have an application built on MySQL and connected through Hibernate. I use DAO utility code to query the database. Now I need to optimize my database queries with indexes. My question is: how can I query data through the Hibernate DAO utility code and make sure indexes are used by MySQL when the queries are executed? Any hints or pointers to existing examples are appreciated!
Update: just to make the question a little more understandable, the following is the code I use to query the MySQL database through the Hibernate DAO utility code. I'm not directly using HQL here. Any suggestions for a best solution? If needed, I will rewrite the database query code and use HQL directly instead.
public static List<Measurements> getMeasurementsList(String physicalId, String startdate, String enddate) {
    List<Measurements> listOfMeasurements = new ArrayList<Measurements>();
    Timestamp queryStartDate = toTimestamp(startdate);
    Timestamp queryEndDate = toTimestamp(enddate);
    MeasurementsDAO measurementsDAO = new MeasurementsDAO();
    PhysicalLocationDAO physicalLocationDAO = new PhysicalLocationDAO();
    short id = Short.parseShort(physicalId);
    List physicalLocationList = physicalLocationDAO.findByProperty("physicalId", id);
    Iterator ite = physicalLocationList.iterator();
    while (ite.hasNext()) {
        PhysicalLocation physicalLocation = (PhysicalLocation) ite.next();
        List measurementsList = measurementsDAO.findByProperty("physicalLocation", physicalLocation);
        Iterator jte = measurementsList.iterator();
        while (jte.hasNext()) {
            Measurements measurements = (Measurements) jte.next();
            if (measurements.getMeasTstime().after(queryStartDate)
                    && measurements.getMeasTstime().before(queryEndDate)) {
                listOfMeasurements.add(measurements);
            }
        }
    }
    return listOfMeasurements;
}
Just like with SQL, you don't need to do anything special. Just execute your queries as usual, and the database will use the indices you've created to optimize them, if possible.
For example, let's say you have an HQL query that searches all the products that have a given name:
select p from Product p where p.name = :name
This query will be translated by Hibernate to SQL:
select p.id, p.name, p.price, p.code from product p where p.name = ?
If you don't have any index set on product.name, the database will have to scan the whole table of products to find those that have the given name.
If you have an index set on product.name, the database will determine that, given the query, it's useful to use this index. It will thus know which rows have the given name thanks to the index, and will only need to read a small subset of the rows to return the queried data.
This is all transparent to you. You just need to know which queries are slow and frequent enough to justify the creation of an index to speed them up.
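For completeness, a minimal sketch of creating such an index through plain JDBC; the index name is arbitrary, and in practice this usually lives in a migration script rather than in application code:
import java.sql.Connection;
import java.sql.Statement;

// dataSource stands in for however you obtain connections.
try (Connection conn = dataSource.getConnection();
     Statement st = conn.createStatement()) {
    st.execute("CREATE INDEX idx_product_name ON product (name)");
}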
I am performing a call to a function which is part of a DB package. This package is deployed in two locations. One local and another remote (across the Atlantic).
I am retrieving the data via the Spring JDBC template.
There is one function which returns approximately 1000 rows (not all that much) and this is taking about 1.5 seconds when getting the data locally but it's taking in the region of 12 seconds when getting the data remotely.
In all sample code, names have been changed and code has been simplified a little.
Please see an example of the current Java code:
SimpleJdbcCall simpleJdbcCall = new SimpleJdbcCall(getDataSource())
.withSchemaName(MY_SCHEMA_NAME)
.withCatalogName("REFCURSOR_PKG")
.withFunctionName("GET_DATA")
.returningResultSet("RESULT_SET", new DataEntryMapper());
SqlParameterSource params = new MapSqlParameterSource()
.addValue("the_name", name)
.addValue("the_rev", rev);
Map resultSet = simpleJdbcCall.execute(params);
ArrayList list = (ArrayList) resultSet.get("RESULT_SET");
The RowMapper class looks something like this:
class DataEntryMapper implements RowMapper<DataEntry> {
    public DataEntry mapRow(ResultSet resultSet, int rowNum) throws SQLException {
        return new DataEntry(resultSet.getString("name"),
                Integer.parseInt(resultSet.getString("rev")));
    }
}
SQL package spec snippet:
TYPE REF_CURSOR IS REF CURSOR;
SQL function:
FUNCTION GET_DATA(the_name VARCHAR2, the_rev VARCHAR2) RETURN REF_CURSOR AS
  RESULT_SET REF_CURSOR;
BEGIN
  OPEN RESULT_SET FOR
    select *
    from table_name tn
    where tn.name = the_name
    and tn.rev = the_rev;
  RETURN RESULT_SET;
EXCEPTION WHEN OTHERS THEN
  RAISE;
END GET_DATA;
I have tried using regular boilerplate JDBC as well (create connection, prepare statement, execute statement, retrieve data from the result set, etc.), and I found that the vast majority of the time was spent looping over the result set and extracting the data into POJOs. In the case of the Spring code above, most of the time was spent during the execute() method, but this is probably because it creates the objects using the RowMapper at that point.
So, the common area between them is the performing of actions such as:
rs.getString("name")
and I'm guessing that this is where the problem lies but I could be wrong.
As I said, locally the delay is fine but remotely it's taking way too long. Is this because it's going to the DB on every rs.get... ? Is there a better way to do this?
Thanks in advance.
rs.getString("name")
ResultSet.get*(String columnName) can be replaced with ResultSet.get*(int columnNumber), which is slightly faster, but I doubt that's the main problem here.
Is this because it's going to the DB on every rs.get... ?
While it really depends on the driver, I suspect it won't. For a cached result set it might go to the server when you scroll through the cursor, but it would still fetch a bunch of rows in each round trip.
Two more suggestions I have are:
Use a network sniffing utility to see the data being transferred
Check your driver for any prefetch options and the like; a sketch follows.
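As a concrete example of that second point, a sketch of both knobs for the Oracle thin driver; defaultRowPrefetch is an Oracle-specific connection property, and 100 is an arbitrary figure:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("user", user);
props.setProperty("password", password);
props.setProperty("defaultRowPrefetch", "100"); // Oracle-specific: rows fetched per round trip
Connection conn = DriverManager.getConnection(url, props);

Statement stmt = conn.createStatement();
stmt.setFetchSize(100); // the portable JDBC hint, per statement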
Add this line:
.withoutProcedureColumnMetaDataAccess()
to the following code:
SimpleJdbcCall simpleJdbcCall = new SimpleJdbcCall(getDataSource())
        .withSchemaName(MY_SCHEMA_NAME)
        .withCatalogName("REFCURSOR_PKG")
        .withFunctionName("GET_DATA")
        .withoutProcedureColumnMetaDataAccess(); // avoid fetching metadata info from the database
Note that with metadata access turned off, you have to declare the in/out parameters yourself, e.g. via declareParameters(...).
I have this really big table that grows by some millions of records every day, and at the end of every day I extract all the records of the previous day. I am doing this like:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
statement.executeQuery(SQL);
The problem is that this program takes about 2GB of memory, because it reads all the results into memory and then processes them.
I tried setting statement.setFetchSize(10), but it takes exactly the same amount of memory from the OS; it makes no difference. I am using the Microsoft SQL Server 2005 JDBC driver.
Is there any way to read the results in small chunks, like the Oracle driver does, where the executed query initially shows only a few rows and more results are fetched as you scroll down?
In JDBC, the setFetchSize(int) method is very important to performance and memory-management within the JVM as it controls the number of network calls from the JVM to the database and correspondingly the amount of RAM used for ResultSet processing.
Inherently if setFetchSize(10) is being called and the driver is ignoring it, there are probably only two options:
Try a different JDBC driver that will honor the fetch-size hint.
Look at driver-specific properties on the Connection (URL and/or property map when creating the Connection instance).
The RESULT-SET is the number of rows marshalled on the DB in response to the query.
The ROW-SET is the chunk of rows that are fetched out of the RESULT-SET per call from the JVM to the DB.
The number of these calls and resulting RAM required for processing is dependent on the fetch-size setting.
So if the RESULT-SET has 100 rows and the fetch-size is 10,
there will be 10 network calls to retrieve all of the data, using roughly 10*{row-content-size} RAM at any given time.
The default fetch-size is 10, which is rather small.
In the case posted, it would appear the driver is ignoring the fetch-size setting, retrieving all data in one call (large RAM requirement, optimum minimal network calls).
What happens underneath ResultSet.next() is that it doesn't actually fetch one row at a time from the RESULT-SET. It fetches rows from the (local) ROW-SET, and fetches the next ROW-SET (invisibly) from the server as the local one becomes exhausted.
All of this depends on the driver as the setting is just a 'hint' but in practice I have found this is how it works for many drivers and databases (verified in many versions of Oracle, DB2 and MySQL).
The fetchSize parameter is a hint to the JDBC driver as to how many rows to fetch in one go from the database. But the driver is free to ignore this and do what it sees fit. Some drivers, like the Oracle one, fetch rows in chunks, so you can read very large result sets without needing lots of memory. Other drivers just read the whole result set in one go, and I'm guessing that's what your driver is doing.
You can try upgrading your driver to the SQL Server 2008 version (which might be better), or the open-source jTDS driver.
You need to ensure that auto-commit on the Connection is turned off, or setFetchSize will have no effect.
dbConnection.setAutoCommit(false);
Edit: I remembered that when I used this fix it was Postgres-specific, but hopefully it will still work for SQL Server.
From the Statement interface doc:
void setFetchSize(int rows)
Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.
Read the ebook J2EE and Beyond by Art Taylor.
Sounds like the MSSQL JDBC driver is buffering the entire result set for you. You can add a connection-string parameter saying selectMethod=cursor or responseBuffering=adaptive. If you are on version 2.0+ of the 2005 MSSQL JDBC driver, then response buffering should default to adaptive; a sample connection string follows the link below.
http://msdn.microsoft.com/en-us/library/bb879937.aspx
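For illustration, a connection URL with those options spelled out; host, port and database name are placeholders:
String url = "jdbc:sqlserver://myhost:1433;databaseName=mydb"
        + ";responseBuffering=adaptive" // stream rows instead of buffering the full result
        + ";selectMethod=cursor";       // use a server-side cursor
Connection conn = DriverManager.getConnection(url, user, password);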
It sounds to me like you really want to limit the rows being returned in your query and page through the results. If so, you can do something like:
select *
from (select rownum myrow, a.* from TEST1 a)
where myrow between 5 and 10;
You just have to determine your boundaries.
Try this:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
connection.setAutoCommit(false);
PreparedStatement stmt = connection.prepareStatement(SQL,
        SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY,
        SQLServerResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(2000);
// stmt.set... (bind any parameters here)
stmt.execute();
ResultSet rset = stmt.getResultSet();
while (rset.next()) {
    // process one row at a time
}
I had the exact same problem in a project. The issue is that even though the fetch size might be small enough, the JdbcTemplate reads the entire result of your query and maps it into a huge list, which might blow your memory. I ended up extending NamedParameterJdbcTemplate to create a function which returns a Stream of objects. That Stream is backed by the ResultSet normally returned by JDBC, but pulls data from the ResultSet only as the Stream requires it. This will work as long as you don't keep references to all the objects the Stream emits. I took a lot of inspiration from the implementation of org.springframework.jdbc.core.JdbcTemplate#execute(org.springframework.jdbc.core.ConnectionCallback). The only real difference is what happens with the ResultSet. I ended up writing this function to wrap up the ResultSet:
private <T> Stream<T> wrapIntoStream(ResultSet rs, RowMapper<T> mapper) {
    CustomSpliterator<T> spliterator = new CustomSpliterator<T>(rs, mapper, Long.MAX_VALUE,
            Spliterator.NONNULL | Spliterator.IMMUTABLE | Spliterator.ORDERED);
    Stream<T> stream = StreamSupport.stream(spliterator, false);
    return stream;
}

private static class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T> {
    // won't put code for the constructor or fields (rs, mapper, rowNumber) here
    // the idea is to pull from the ResultSet and feed into the Stream
    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            // you can add some logic to close the Stream/ResultSet automatically
            if (rs.next()) {
                T mapped = mapper.mapRow(rs, rowNumber++);
                action.accept(mapped);
                return true;
            } else {
                return false;
            }
        } catch (SQLException e) {
            throw new RuntimeException(e); // or handle/wrap this exception as you see fit
        }
    }
}
You can add some logic to make that Stream auto-closeable; otherwise, don't forget to close it (and the underlying ResultSet) when you are done, e.g. as sketched below.
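A minimal usage sketch of that idea, assuming the names above; JdbcUtils is Spring's helper, and writer is a hypothetical PrintWriter:
// Register a close handler so try-with-resources also releases the ResultSet.
try (Stream<DataEntry> rows = wrapIntoStream(rs, mapper)
        .onClose(() -> JdbcUtils.closeResultSet(rs))) {
    rows.forEach(row -> writer.println(row)); // e.g. stream each row straight to a file
}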