As far as I know, memcached runs in-memory and has no persistent backing store.
My team is not ready to use memcached yet, so I plan to write a simple database-backed alternative.
My question is in a similar vein to this other question:
Concurrent logins in a web farm
My webapp has clear entry (login) and exit (logout) points.
The plan:
On login, I will add the userid into a table.
On logout, I will delete the row containing the userid.
Issues:
Is there a well-used method (a best practice) for timing out a row in MySQL? Once the timeout is reached, the row should be removed.
There is already a persistent variant of memcached:
http://memcachedb.org/
Also check out Tokyo Cabinet (http://1978th.net/), which is supposedly much faster.
EDIT:
Rereading your question, let me add this:
The way to implement a time-to-live is to add a timestamp column to your table.
The next time you fetch the cached item, check whether the timestamp is too old; if it is, delete the entry, get a fresh copy, and put it back in the DB cache with the current timestamp.
This is also the way memcached does it.
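The check-and-refresh logic described above can be sketched in plain Java. Here a ConcurrentHashMap stands in for the database table, and loadFresh() is a hypothetical loader for the underlying data; both are illustrative stand-ins, not part of the original setup:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class DbCacheSketch {
    // One cache row: the value plus the time it was stored (the "timestamp column").
    static class Entry {
        final String value;
        final long storedAtMillis;
        Entry(String value, long storedAtMillis) {
            this.value = value;
            this.storedAtMillis = storedAtMillis;
        }
    }

    private final Map<String, Entry> table = new ConcurrentHashMap<>();
    private final long ttlMillis;

    DbCacheSketch(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /** Returns the cached value, refreshing it when the timestamp is too old. */
    String get(String key) {
        Entry e = table.get(key);
        long now = System.currentTimeMillis();
        if (e == null || now - e.storedAtMillis > ttlMillis) {
            // Stale or missing: fetch a fresh copy and store it with a new timestamp.
            String fresh = loadFresh(key);
            table.put(key, new Entry(fresh, now));
            return fresh;
        }
        return e.value;
    }

    /** Hypothetical loader; a real implementation would query the source system. */
    protected String loadFresh(String key) {
        return "fresh:" + key;
    }
}
```

A real database-backed version would replace the map with a table of (key, value, timestamp) and the put/get with SQL, but the expiry decision stays the same.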
Not sure what you meant by:
Is there a well-used method to timeout
a row in Mysql ?
We use memcached for object-based caching; an entry can be set with a time-to-live value.
For example:
MemcachedClient c = ...; // obtain the MemcachedClient reference
if (c != null) {
    c.set(key, timeToLiveInSeconds, objectToCache);
}
After the stipulated time period, the entry is removed automatically.
Related
I have a client-server app, and in my server I'm using Hibernate for database handling. Now, my app requires, among all the database tables, a simple table with only one row containing a single BigInteger field (which is the key of that row). This table actually contains only a global number (starting from 1) which I use every time a user performs some action; when he does, I need to get this value and increment it in the database. (The table should contain only one row with only one value at all times.)
I'm using the following code to accomplish that:
BigInteger func() {
    Session s = null;
    BigInteger idToReturn = null;
    try {
        s = factory.openSession();
        s.beginTransaction();
        Query queryResult = s.createQuery("from GlobalId");
        List<GlobalId> theId = queryResult.list();
        idToReturn = theId.get(0).get_id();                // value from the db to return
        GlobalId toSave = new GlobalId();
        toSave.set_id(idToReturn.add(BigInteger.ONE));     // increment the id before saving it
        s.delete(theId.get(0));                            // delete the old id
        s.save(toSave);                                    // save the new id
        s.getTransaction().commit();
    }
    finally {
        if (s != null)
            s.close();
    }
    return idToReturn;
}
This code works fine. My concern is about what happens when more than one server accesses the central database. If two separate servers run this function, I must eliminate the possibility that both get the same value. I need the entire read-and-write to be atomic: the table must be locked so that no more than one session can read the value at a time, and I also need to make sure that if a session ends unexpectedly, the lock is removed.
I'm using the xampp bundle including MySQL 5.6 database.
The information I found online regarding this issue is confusing to me: it is all high-level, and I could not find any examples.
You need to use pessimistic locking, which you can achieve by calling
setLockMode(String alias, LockMode lockMode)
on the query with LockMode.UPGRADE.
See Query.setLockMode
However, this will certainly hurt scalability and performance if you access this table a lot. You are better off either using a sequence, or creating a service (e.g., an SSB) that allocates numbers: it grabs 100 numbers at a time, updates the database once, and hands them out. That saves you 198 database accesses per 100 numbers.
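The "grab 100 numbers at a time" service can be sketched as follows. Here the dbCounter field stands in for the persisted high-water mark that a real service would update in a single database round trip; all names are illustrative:

```java
import java.math.BigInteger;

/**
 * Sketch of a number-allocation service: instead of hitting the database
 * for every id, it reserves a block of ids with one update and hands them
 * out from memory until the block is exhausted.
 */
class BlockAllocator {
    private final int blockSize;
    private BigInteger next;      // next id to hand out
    private BigInteger limit;     // first id NOT in the reserved block
    private BigInteger dbCounter; // stand-in for the persisted high-water mark

    BlockAllocator(int blockSize, BigInteger start) {
        this.blockSize = blockSize;
        this.dbCounter = start;
        this.next = start;
        this.limit = start; // forces a reservation on the first call
    }

    synchronized BigInteger nextId() {
        if (next.compareTo(limit) >= 0) {
            // One "database" update reserves blockSize ids at once.
            limit = dbCounter.add(BigInteger.valueOf(blockSize));
            next = dbCounter;
            dbCounter = limit;
        }
        BigInteger id = next;
        next = next.add(BigInteger.ONE);
        return id;
    }
}
```

In a real deployment the reservation step would be the single locked update against the counter row; everything else runs in memory.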
UPDATE:
You will also have to modify your table design slightly. It is better to have a single row with a known ID and to store the number you are incrementing in another column. Then you should update the row rather than deleting the old row and adding a new one. Otherwise, the row locking strategy won't work.
UPDATE2:
OP found that the following worked:
session.get(GlobalId.class, id, lockOptions)
I get several thousand records from a webservice call (an id and a version number, plus a list of objects).
I am required to check whether a record exists for each id in the database. If it does and the version number mismatches, I need to update the table;
otherwise, I insert a new record.
What do you think is the optimal solution?

1. Fetch the records from the DB and cache them. Remove the matching records from the list, prepare one list that requires updates and another that requires inserts, and then call the procedure to insert and update accordingly. (Once I prepare the lists, there could be relatively few records left.)

2. Loop through each record I receive from the webservice and pass the id and version to a procedure that carries out the insert/update as needed. (I am using a connection pool, but I would be calling the procedure once per record.)

Which of the two do you think is the better approach, or can you think of a better solution than these?
Limitations on the technologies to be used:
Spring JDBC 2.x, Java 1.7, Sybase database.
No ORM technologies available.
Can I use jdbcTemplate.batchUpdate() for calling a procedure?
The first option is better than the second.
No operation is costlier than the network latency between the application server and the database server; the rule of thumb is: the fewer the calls, the better the performance.
I am not sure about the constraints with Sybase, but if you can process even 5-10 records in each stored-procedure call, that will perform much better than processing a single record every time.
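To make the first option concrete: group the prepared records into small chunks and pass each chunk to the procedure in one call. The sketch below shows only the framework-free chunking step; the procedure call itself is left out because its shape depends on the Sybase procedure, and all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

class ChunkedCalls {
    /** Splits the prepared records into chunks so each procedure call carries several rows. */
    static <T> List<List<T>> chunk(List<T> records, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < records.size(); i += chunkSize) {
            // subList is a view; copy it so the chunks can outlive the source list
            chunks.add(new ArrayList<>(records.subList(i, Math.min(i + chunkSize, records.size()))));
        }
        return chunks;
    }
}
```

As for jdbcTemplate.batchUpdate(): Spring's JdbcTemplate can batch parameterized statements through a BatchPreparedStatementSetter, but whether batched calls to a stored procedure actually execute as one round trip depends on the Sybase JDBC driver, so verify it against your driver.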
I hope someone can clarify the scenario below for me.
From what I understand, when you request a 'row' from hibernate, for example:
User user = UserDao.get(1);
I now have the user with id=1 in memory.
In a web application, if two web pages request and load the user at the same time, and then both update a property on the user object, what will happen? E.g.:
user.pageViews += 1; // the value is current 10 before the increment
UserDao.update(user);
Will this use the value that is in-memory (both requests have the value 10), or will it use the value in the database?
You must use two Hibernate sessions for the two requests, which means there are two instances of the object in memory. If you use only one Hibernate session (and so one instance of the object in memory), the result is unpredictable.
In the case of a concurrent update, the second update wins: the value of the first update is overwritten by the second. To avoid losing the first update you normally use a version column (see the Hibernate docs); the second update then gets an error which you can catch and react on (for example with a message "Your record was modified in the meantime. Please reload."), which lets the second user redo his modification on the refreshed record so his change does not get lost.
In the case of a page-view counter, like in your example, a different solution would be to write a synchronized method that counts the page views sequentially.
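A minimal sketch of such a synchronized counter (class and method names are illustrative, and a real implementation would still have to flush the total to the database at some point):

```java
class PageViewCounter {
    private long pageViews;

    /** Serializes increments so concurrent requests cannot lose an update. */
    synchronized long increment() {
        return ++pageViews;
    }

    synchronized long current() {
        return pageViews;
    }
}
```

Note that this only works while a single JVM owns the counter; with several servers you are back to database-level locking or versioning.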
By default the in memory value is used for the update.
In the following I assume you want to implement an automatic page-view counter, not to modify the User in a web user interface; if you want the latter, take a look at Hibernate optimistic locking.
So, supposing you need 100% accuracy when counting page views, you can lock your User entity while you modify its pageViews value, to obtain exclusivity on the table row:
Session session = ...
Transaction tx = ...
session.lock(user, LockMode.UPGRADE);
user.increasePageViews();
tx.commit();
session.close();
LockMode.UPGRADE translates to a SELECT ... FOR UPDATE in your database, so be careful to hold the lock as briefly as possible to avoid hurting application scalability.
I wrote an application that uses JPA (and hibernate as persistence provider).
It works on a database with several tables.
I need to create an "offline mode", where a copy of the program, acting as a client, offers the same functionality while keeping its data synchronized with the server whenever the server is reachable.
The aim is to get a client that you can "detach" from the server, make changes on the data and then merge changes back. A bit like a revision control system.
Managing conflicts automatically is not important; in case of conflict, the user will decide which version to keep.
My idea, which can't quite work, was to assign to each row in the database a last-edit timestamp. The client initially downloads a copy of the entire database and also records a second timestamp whenever it modifies a row while not connected to the server. This way it knows which data it has changed and the last time it was synchronized with the server. When it reconnects, it asks the server for everything that changed since the last synchronization and sends the data it has changed itself. (A bit simplified, but conflict management should not be a big problem.)
This, of course, does not work for deletions: if either the server or the client deletes a row, the other will never notice.
The solution would be to maintain a table listing the deleted rows, but that seems too expensive.
Does anyone know a method that works? Does something similar already exist?
Version fields:
If you would like a simple solution, you can create version fields that act as your "timestamp".
Auditing:
If you would like a complex, powerful solution, you should use the Hibernate Envers plugin.
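A sketch of both options in one entity fragment (names are illustrative; the @Audited annotation requires the hibernate-envers jar on the classpath, so this is not runnable on its own):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;
import org.hibernate.envers.Audited;

@Entity
@Audited // Envers keeps a full revision history of every change to this entity
public class Customer {
    @Id
    private Long id;

    @Version // the simple "version field" option: bumped automatically on every update
    private long version;

    private String name;
    // getters and setters omitted
}
```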
I have a project that reads from a shared MySQL database. There is a large account object that comes from the session (too large, but that is another story).
The problem is that this object has a balance field that is debited by a different process on the same machine, so the user sees a balance of, say, £20 when in reality it is probably a lot lower. There is no way for the app to know when this value changes.
So I thought I would check MySQL every time I am asked for the value. But the page goes through JSF, which asks many times per request, so I added a check: if the value hasn't been asked for in a few seconds, run a createSQLQuery on just the balance to see if it is different, and if it is, reload the object.
so I do something along the line of:
sess.createSQLQuery("SELECT Balance from Account").list();
and get the value. Some of the time it shows the right balance, but often it shows the previously read balance, while the mysql command-line client shows the real value.
Is there a way of clearing this cached value, or another way of updating the object? Ultimately I would like to remove Hibernate, as it causes me more problems than it solves, but for the moment I just need the balance to reflect the database.
You could try to reload that object in your Hibernate session (http://docs.jboss.org/hibernate/core/3.5/api/org/hibernate/Session.html):
sess.refresh(yourReadObject);
Maybe clearing the session
sess.clear();
could help as well. However it probably has more side effects.
Since you are planning to ditch Hibernate anyway, you can use a JDBC connection directly, bypassing the Hibernate session without affecting anything else.
Connection con = getSession().connection();
PreparedStatement s = con.prepareStatement("...");
s.setString(1, "...");
ResultSet rs = s.executeQuery();
I find the only way to be sure is to evict all the objects from the Hibernate session once you have got them.
This means that after you execute the statement above, you need to evict each object separately; evicting the list as a single object doesn't seem to work. (The code below assumes the object names.)
for(Balance b : listOfBalances) {
hibernateSession.evict(b);
}
I doubt this is the best way to do it (I am no Hibernate expert), but it is the only way I have found to be 100% sure that you will not receive stale objects.
(I know this reply is a bit late, but hopefully it helps someone.)