I have a project that is reading from a shared mysql database. There is a large account object that comes from the session (too large, but that is another story).
Problem is that this object has a balance field that is debited by a different process on the same machine. So the user sees a balance of, say, £20 when in reality their balance is probably a lot lower. There is no way for the app to know when this value changes.
So I thought I would check MySQL every time I am asked for the value. But the value is requested through JSF (faces), which asks for it many times per request, so I put in a check: if it hasn't been asked for a few seconds, run a createSQLQuery on just the balance to see if it is different, and if it is, reload the object.
so I do something along the line of:
sess.createSQLQuery("SELECT Balance from Account").list();
and get the value. Some of the time it shows the right balance, but often it shows the previously read balance, whereas the MySQL command-line client shows the real value.
Is there a way of clearing this value, or another way of updating the object? Ultimately I would like to remove Hibernate, as it causes me a lot more problems than it solves, but for the moment I just need the balance to show a value based on the database.
You could try to reload that object in your hibernate session. http://docs.jboss.org/hibernate/core/3.5/api/org/hibernate/Session.html
sess.refresh(yourReadObject);
Maybe clearing the session
sess.clear();
could help as well. However it probably has more side effects.
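For the periodic check the question describes, a minimal sketch could combine the native query with a refresh. This is only an illustration: the Account/Id/Balance column names and the getBalance()/getId() accessors are assumptions, not the poster's actual mapping.
// Sketch only: re-read just the balance and refresh the cached entity if it has moved.
BigDecimal dbBalance = (BigDecimal) sess.createSQLQuery(
        "SELECT Balance FROM Account WHERE Id = :id")
        .setParameter("id", account.getId())
        .uniqueResult();
if (dbBalance != null && dbBalance.compareTo(account.getBalance()) != 0) {
    sess.refresh(account); // discards the stale first-level-cache copy and re-reads the row
}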
Since you are planning to ditch Hibernate anyway, you can use a JDBC connection directly to bypass the Hibernate session without affecting anything else.
// obtain the underlying JDBC connection from the Hibernate session
Connection con = getSession().connection();
// prepare and run the query straight against the database, bypassing the session cache
PreparedStatement s = con.prepareStatement("...");
s.setString(1, "...");
ResultSet rs = s.executeQuery();
I find the only way to be sure is to evict all the objects from the Hibernate session once you have got them.
This means that after you execute the statement above you need to evict each object separately; evicting the list as a whole doesn't seem to work. (The code below assumes the object names.)
for (Balance b : listOfBalances) {
    hibernateSession.evict(b);
}
I doubt this is the best way to do it (I am no hibernate expert), but it is the only way I have found where you can be 100% sure that you will not receive stale objects.
(I know this reply is a bit late but hopefully it helps someone)
I am writing a system that holds a Hibernate-managed entity called Voucher, which has a field named serialNumber holding a unique number for the only existing valid copy of the voucher instance. There may be old, invalid copies in the database table as well, which means that the database field cannot be declared unique.
The operation that saves a new valid voucher instance (that will need a new serial number) is, first of all, synchronized on an appropriate entity. Thereafter the whole procedure is encapsulated in a transaction, the new value is fetched by the JPQL
SELECT MAX(serialNumber) + 1 FROM Voucher
the field gets the result from the query, the instance is thereafter saved, the session is flushed, the transaction is committed and the code finally leaves the synchronized block.
In spite of all this, the database sometimes (if seldom) ends up with Vouchers with duplicate serial numbers.
My question is: Considering that I am rather confident in the synchronization and transaction handling, is there anything more or less obvious that I should know about hibernate that I have missed, or should I go back to yet another debugging session, trying to find anything else causing the problem?
The service running the save process is a web application running on tomcat6 and is managed by Spring's HttpRequestHandlerServlet. The db connections are pooled by C3P0, running a very much default-based configuration.
I'd appreciate any suggestion
Thanks
You can use a MultipleHiLoPerTableGenerator: it generates the @Id outside the current transaction.
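If it helps, a rough sketch of wiring that generator up with annotations might look like the following; the generator name is just a placeholder and the parameters are left at their defaults (javax.persistence and org.hibernate.annotations imports assumed).
// Sketch only: the generator allocates ids from its own table, outside your transaction,
// so concurrent saves cannot hand out the same value.
@Entity
public class Voucher {

    @Id
    @GeneratedValue(generator = "voucher_id_gen") // "voucher_id_gen" is a placeholder name
    @GenericGenerator(name = "voucher_id_gen",
            strategy = "org.hibernate.id.MultipleHiLoPerTableGenerator")
    private Long id;

    private long serialNumber;

    // getters/setters...
}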
You do not need to debug to find the cause; in a multi-threaded environment this is likely to happen. You are selecting the max from your table. Suppose TX1 reads the max value, which is a, and inserts a row with serial number a+1; at this point, if TX2 reads the database, the max value is still a because TX1 has not committed its data. So TX2 may insert a row with serial number a+1 as well.
To avoid this issue you might decide to change Isolation Level of your database or change the way you are getting serial numbers (it entirely depends on circumstances of your project). But generally I do not recommend changing Isolation Levels as it is too much effort for such an issue.
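For illustration, one common way to change how the serial numbers are obtained is to keep the counter in its own row and lock that row for the duration of the transaction. This is only a sketch; the voucher_serial table and next_serial column are made-up names.
// Hypothetical one-row counter table: voucher_serial(next_serial BIGINT).
// SELECT ... FOR UPDATE blocks a second transaction until the first one commits,
// so two transactions can never read the same value.
Number next = (Number) session.createSQLQuery(
        "SELECT next_serial FROM voucher_serial FOR UPDATE").uniqueResult();
session.createSQLQuery("UPDATE voucher_serial SET next_serial = next_serial + 1")
        .executeUpdate();
voucher.setSerialNumber(next.longValue());
session.save(voucher);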
Sorry in advance if someone has already answered this specific question but I have yet to find an answer to my problem so here goes.
I am working on an application (no, I cannot give the code as it is for a job, so I'm sorry about that one) which uses DAOs, Hibernate, POJOs and all that stuff for communicating and writing to the database. This works well for the application assuming I don't have a ton of data to check when I call Session.flush(). That being said, there is a page where a user can add any number of items to a product, and there is one particular case with something along the lines of 25 items. Each item has about 8 fields apiece that are all stored in the database. When I call the flush it does save everything to the database, but it takes FOREVER to complete. The three lines I am calling are:
merge(myObject);
Session.flush();
Session.refresh(myObject);
I have tried a number of different combinations of things and a number of different solutions to fix this problem, so coming back and saying "Don't use flush()" isn't much help, as saveOrUpdate() and the other Hibernate session methods don't seem to work. The only solutions I can think of are to scrap the entire project (the code we inherited was poorly written, to say the least) or to tell the user community to suck it up.
It is my understanding from the Hibernate API that when you want to write data to the database it runs a check on every item; if there is a difference, it builds a queue of update queries and then runs them. It seems as though this data is being updated every time, because the "DATE_CREATED" column in my database changes even when the other values are unchanged.
What I was wondering is if there was another way to prevent such a large committing of data or a way of excluding that particular column from the "check" hibernate does so I don't have to commit all 25 items if I only made a change to 1?
Thanks in advance.
Mike
Well, you really cannot avoid the dirty checking in hibernate unless you use a StatelessSession. Of course, you lose a lot of features (lazy-load etc.) with that, but it's up to you to make this decision.
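For reference, a minimal StatelessSession sketch might look like this (assuming you have the SessionFactory and the object to update at hand):
// Sketch only: a StatelessSession has no first-level cache and does no dirty checking,
// so the UPDATE is issued directly; lazy loading, cascades and interceptors are lost.
StatelessSession ss = sessionFactory.openStatelessSession();
Transaction tx = ss.beginTransaction();
try {
    ss.update(myObject);
    tx.commit();
} finally {
    ss.close();
}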
Another option: I would definitely try to use dynamic-update=true in your entity. Like:
@Entity // javax.persistence.Entity
@org.hibernate.annotations.Entity(dynamicUpdate = true) // newer Hibernate versions use @DynamicUpdate instead
public class MyClass { ... }
Using that, Hibernate will update the modified columns only. In small tables, with few columns, it's not so effective, but in your case maybe it can help make the whole process faster as you cannot avoid dirty checking with a regular Hibernate Session. Updating a few columns instead of the whole object is always better, right?
This post talks more about the dynamic-update attribute.
What I was wondering is if there was another way to prevent such a large committing of data or a way of excluding that particular column from the "check" hibernate does so I don't have to commit all 25 items if I only made a change to 1?
I would profile the application to ensure that the dirty checking on flush is actually the problem. If you find that this is indeed the case you can use evict to manage the session size.
session.update(myObject);
session.flush();
session.evict(myObject);
My question is a little vague at the moment since I'm not sure that I'm supposed to post any company code online or anything. But here goes.
Suppose I need to update a specific field in a MySQL database. In order to do this using my Java client program, I have to use multiple SELECT statements in order to check that the field should be updated, and then appropriately update it using the information that has been retrieved.
eg.
// created a Connection called con already...
PreparedStatement selectStatement = con.prepareStatement("SELECT * FROM myTable" /* + etc */); // example query only! I'm not actually going to use "SELECT * FROM myTable"!
// more selectStatements follow
PreparedStatement updateStatement = con.prepareStatement("UPDATE myTable SET field1 = ? WHERE id = ?");
ResultSet rs = selectStatement.executeQuery();
// more ResultSets from the other selectStatements
// process the ResultSets and retrieve information that indicates whether an update must take place
if (conditionOccurred) { // assuming we need to update
    updateStatement.setString(1, ...); // the new value for field1 (PreparedStatement has no setText())
    updateStatement.setLong(2, ...);   // the id of the row to update
    updateStatement.executeUpdate();
}
(I haven't included try-catches in the code (sorry, I'm a bit lazy since this is just a contrived example) but I'd have to catch the potential SQLExceptions as well, I guess...)
What I'm wondering is: will it still be more "expensive", or costly in terms of speed if I delete the row and then insert a new row that contains all the updated information, given that I now need to use multiple select statements to check whether an update should occur? (memory is not such a big issue at the moment, though if something I've done has a massive flaw with regards to this I'd love to hear it!)
TL; DR: If I use multiple SELECT statements and then an UPDATE to some field(s), will it be more efficient to simply DELETE and then INSERT a new row?
Extra details: the table I'm working with at the moment has an auto-incremented ID, a VARCHAR field (the one to be updated, has a uniqueness constraint), 2 date fields and a CHAR(64) field. Not sure if it helps in answering the question, but I'll provide it anyway.
Please let me know if there are more details you'd need, and thank you in advance to anyone who might provide some insight.
To fully answer your question we would need to see your SELECT statements, however if your UPDATE does not alter the primary key values I would assume the UPDATE is more efficient. The reasoning behind this is that the index values would not have to be adjusted, whereas in the case of the DELETE & INSERT they would be.
As in most cases, the only sure-fire way to test this is to use both methods and benchmark the elapsed time.
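If you do benchmark it, even something as rough as the sketch below will usually settle the question for your particular table; runUpdatePath and runDeleteInsertPath are placeholders for your two code paths, not real methods.
// Rough timing sketch: warm up the connection first and repeat enough times to average out noise.
long start = System.nanoTime();
for (int i = 0; i < 1_000; i++) {
    runUpdatePath(con);        // placeholder: the SELECTs + UPDATE variant
}
long updateNanos = System.nanoTime() - start;

start = System.nanoTime();
for (int i = 0; i < 1_000; i++) {
    runDeleteInsertPath(con);  // placeholder: the DELETE + INSERT variant
}
long deleteInsertNanos = System.nanoTime() - start;

System.out.printf("UPDATE: %d ms, DELETE+INSERT: %d ms%n",
        updateNanos / 1_000_000, deleteInsertNanos / 1_000_000);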
I'm answering your question based on what I learned in my advanced database management course. The answer is fairly subjective, since your concern here is speed rather than memory usage.
When your SELECT statements run, the retrieved data is cached, and when an UPDATE is required you edit the fields directly in that cache. This saves a read and a write trip compared with performing a DELETE followed by an INSERT.
In my understanding this saves you milliseconds per transaction, which adds up when many transactions are performed. However, if your SELECT statements involve too many queries over a large volume of data, the DELETE-and-INSERT approach might turn out to be more efficient.
With your actual SQL statements we would be able to give you better and more accurate advice. :) I hope it helps.
I'm having trouble retrieving data from my database using Spring Jdbc. Here's my issue:
I have a getData() method on my DAO which is supposed to return ONE row from the result of some select statement. When invoked again, the getData() method should return the second row in a FIFO-like manner. I'm aiming for having only one result in memory at a time, since my table will get potentially huge in the future and bringing everything to memory would be a disaster.
If I were using regular jdbc code with a result set I could set its fetch size to 1 and everything would be fine. However I recently found out that Spring Jdbc operations via the JdbcTemplate object don't allow me to achieve such a behaviour (as far as I know... I'm not really knowledgeable about the Spring framework's features). I've heard of the RowCallbackHandler interface, and this post in the java ranch said I could somehow expose the result set to be used later (though using this method it stores the result set as many times over as there are rows, which is pretty dumb).
I have been playing with implementing the RowCallbackHandler interface for a day now and I still can't find a way to get it to retrieve one row from my select at a time. If anyone could enlighten me on this matter I'd greatly appreciate it.
JdbcTemplate.setFetchSize(int fetchSize):
Set the fetch size for this JdbcTemplate. This is important for processing large result sets: Setting this higher than the default value will increase processing speed at the cost of memory consumption; setting this lower can avoid transferring row data that will never be read by the application.
Default is 0, indicating to use the JDBC driver's default.
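As an illustration of how this might be used (the DataSource, the records table and the process() method are assumptions), lowering the fetch size and handling rows with a callback keeps only a small window of rows in memory at a time, though how literally the hint is honoured depends on the JDBC driver:
// Sketch only: stream rows through a RowCallbackHandler instead of building a full list.
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.setFetchSize(1); // a hint to the driver; MySQL, for example, needs extra settings to truly stream
jdbcTemplate.query("SELECT id, payload FROM records WHERE status = 'NEW'", rs -> {
    // called once per row; the template accumulates nothing else
    process(rs.getLong("id"), rs.getString("payload"));
});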
After a lot of searching and consulting with the rest of my team, we have come to the conclusion that this is not the best implementation path for our project. As Boris suggested, a different approach is the way to go. However, I'm doing something different: I'm using SimpleJdbcTemplate instead and splitting my query so it fits in memory better. A "status" field in my records table will be responsible for telling whether a record was successfully processed or read, so I know which records to fetch next.
The question of whether Spring JDBC is capable of the behaviour I mentioned in my OP is, however, still open. If anyone has an answer to that question I'm sure it would help someone else out there.
Cheers!
You can take a different approach. Create a query that returns just the IDs of the rows you want to read, and keep that collection of IDs in memory. You would need a really huge data set for the IDs alone to consume a lot of memory. Then iterate over the collection and load the rows one by one, each referenced by its ID, as sketched below.
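A sketch of that approach under assumed names (the records table, its columns, and a MyRecord POJO mapped by BeanPropertyRowMapper) might look like this:
// Sketch only: load the IDs first, then fetch and process one full row at a time,
// so at most one mapped row lives in memory alongside the list of IDs.
List<Long> ids = jdbcTemplate.queryForList(
        "SELECT id FROM records WHERE status = 'NEW'", Long.class);
for (Long id : ids) {
    MyRecord row = jdbcTemplate.queryForObject(
            "SELECT * FROM records WHERE id = ?",
            new BeanPropertyRowMapper<>(MyRecord.class), id);
    process(row); // hypothetical per-row processing
}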
We have the same issue:
- Testing fetchSize with a raw JDBC PreparedStatement works as expected: if we stop the database after the first fetchSize records have been fetched, a JDBC connection error is thrown when resultSet.next() runs.
- Testing fetchSize with JdbcTemplate:
PreparedStatementSetter preparedStatementSetter = ps -> ps.setFetchSize(_exportParams.getFetchSize());
RowCallbackHandler rowCallbackHandler = _rs -> { /* process the row here */ };
this.jdbcTemplate.query(_exportParams.getSqlscript(), preparedStatementSetter, rowCallbackHandler);
After getting the first record, we stop Postgres. The row callback handler can still process the rest of the records without error.
As far as I know, memcached runs in-memory and does not have a persistent backing-store.
Currently my team is not yet ready to use memcached.
So, I plan to write a simple database-backed alternative.
My question is pretty much in similar vein to this other question
Concurrent logins in a web farm
My webapp has clear entry (login) and exit (logout) points.
The plan:
On login, I will add the userid into a table.
On logout, I will delete the row containing the userid.
Issues:
Is there a well-used method to time out a row in MySQL? By method, I mean a best practice. Once the timeout has been reached, the row is removed.
There is already a persistent variant of memcached:
http://memcachedb.org/
Also check out Tokyo Cabinet: http://1978th.net/ which is supposedly much faster.
R
EDIT:
Rereading your question, let me add this:
The way to implement a time-to-live is to just add a timestamp column to your table.
The next time you get the cached item, check whether the timestamp is too old; if it is, delete the entry, get a fresh copy, and put it back in the DB cache with the current timestamp.
This is also the way memcached does it.
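A bare-bones JDBC sketch of that idea could look like the following; the app_cache table, its columns, and the con/key/ttlMillis variables are all made up for illustration.
// Sketch only: treat rows older than ttlMillis as expired, the way memcached would.
PreparedStatement get = con.prepareStatement(
        "SELECT cached_value, created_at FROM app_cache WHERE cache_key = ?");
get.setString(1, key);
ResultSet rs = get.executeQuery();
String value = null;
if (rs.next()) {
    Timestamp createdAt = rs.getTimestamp("created_at");
    if (System.currentTimeMillis() - createdAt.getTime() > ttlMillis) {
        // too old: drop the row; the caller reloads a fresh copy and re-inserts it
        PreparedStatement del = con.prepareStatement(
                "DELETE FROM app_cache WHERE cache_key = ?");
        del.setString(1, key);
        del.executeUpdate();
    } else {
        value = rs.getString("cached_value");
    }
}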
Not sure what you meant by "Is there a well-used method to timeout a row in Mysql?"
We use memcached as a means of object-based caching; an entry can be set with a time-to-live value.
For example:
MemcachedClient c = ...; // get the MemcachedClient reference however your app provides it
if (c != null) {
    c.set(key, timeToLiveInSeconds, objectToCache); // set(key, ttlSeconds, value) in spymemcached-style clients
}
After a stipulated time period it will be removed automatically