I want each action taken on certain tables to be logged, at column level (not all columns, only certain ones), so if a value has been changed for one of those columns I would like to log that, e.g.:
Price for product x has been changed by user U
(assume price and product are in the same table).
For this I want to monitor the price column of product x.
I cannot use a trigger to do this because I want the user to be logged as well; the user information currently lives in the portal application (and can't be passed to the trigger).
I am currently using Apache Cayenne, and in the pre-update callback (in the entity class) I want to compare the new price (which the user has chosen in the portal) to the one sitting in the database.
When I try to get the product from the database, Cayenne does not return me a fresh object; rather, it returns the same object with the changed values.
I was wondering if anyone is aware of some way Cayenne can return me a fresh object for the same PK (id) (that's what I am using to fetch a fresh object from the DB),
or
can advise me on some other way.
There are a few ways to approach this. Here is the one that IMO is the most transparent. The trick is to use a different ObjectContext from the one committing the changes. Then you will get a separate copy of the object that contains the currently saved value:
// 'this' refers to the DataObject being committed (assuming things happen in its callback)
ObjectContext parallelContext = ... // create a new context here like you normally would
// 3.1 API; 3.0.x has a similar method with a slightly different signature
MyClass clone = parallelContext.localObject(this);
// If you are ok with a cached old value, ignore the 'invalidateObjects' call.
// If not, uncomment it to ensure the object gets refetched.
// Also 3.1 API; should be easy to adjust for 3.0.
// parallelContext.invalidateObjects(clone);
Object oldValue = clone.getXyz();
I have an application which gets data from a database (Mongo) when a user connects, and saves it when a user disconnects and at fixed intervals to reduce the likelihood of data loss if a server goes down. I am using data access objects to save users to the database which updates every field regardless of if it has been changed. This can lead to problems such as when a user joins multiple servers and makes changes on one of them but the changes are overwritten when the user disconnects from another.
Are there any established ways of persisting only modified fields or any frameworks that do this? I would rather not use a boolean for every field as I have many fields inside the User object and adding a dirty flag to each of them would increase the class size dramatically.
The steps your application takes:
User gets data from MongoDB
This data gets partially modified
The modifications should get saved
This means: The part of your application that modifies the data should take care of that.
The Spring team introduced a diff tool a few months ago: https://spring.io/blog/2014/10/22/introducing-spring-sync
Using that, you'll get a Patch object, which only contains the changes.
Patch patch = Diff.diff(original, modified);
Here's an approach that might work:
Object data = mongoClient.getData();
Object modifiedData = modify(data);
Patch patch = Diff.diff(data, modifiedData);
The patch now contains everything that has changed. Now you must somehow use the internals of the Patch object and map that to MongoDB's $set commands.
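Mapping Spring Sync's Patch internals to Mongo is left open above. As a hedged alternative sketch in plain Java (class and method names are hypothetical, no Spring or Mongo dependency), you can compute the changed fields yourself with reflection and build exactly the map you would hand to a $set update:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

public class FieldDiff {
    // Compare two objects of the same class field by field and return
    // only the fields whose values differ -- i.e. the map you would
    // pass to MongoDB's $set operator.
    public static Map<String, Object> diff(Object original, Object modified) {
        Map<String, Object> changed = new LinkedHashMap<>();
        for (Field f : original.getClass().getDeclaredFields()) {
            if (f.isSynthetic()) continue; // skip compiler-generated fields
            f.setAccessible(true);
            try {
                Object oldVal = f.get(original);
                Object newVal = f.get(modified);
                if (!Objects.equals(oldVal, newVal)) {
                    changed.put(f.getName(), newVal);
                }
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
        return changed;
    }
}
```

With the Mongo Java driver, a map like this could then drive a `$set` document in an `updateOne` call, so untouched fields are never overwritten by another server's save.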
I hope someone can clarify the below scenario for me.
From what I understand, when you request a 'row' from Hibernate, for example:
User user = UserDao.get(1);
I now have the user with id=1 in memory.
In a web application, if 2 web pages request and load the user at the same time, and then both update a property on the user object, what will happen? E.g.:
user.pageViews += 1; // the value is currently 10, before the increment
UserDao.update(user);
Will this use the value that is in-memory (both requests have the value 10), or will it use the value in the database?
You must use two Hibernate sessions for the two requests. This means there are two instances of the object in memory. If you use only one Hibernate session (and so one instance of the object in memory), the result is unpredictable.
In the case of a concurrent update, the second update wins: the value of the first update is overwritten by the second. To avoid losing the first update you normally use a version column (see the Hibernate docs); the second update then gets an error which you can catch and react to (for example with an error message "Your record was modified in the meantime. Please reload.", which lets the second user redo his modification on the reloaded record, ensuring his change does not get lost).
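The version-column idea can be sketched in plain Java (a hypothetical in-memory row, not Hibernate's actual implementation; Hibernate performs the equivalent check by including the @Version column in the UPDATE's WHERE clause):

```java
// Minimal sketch of optimistic locking: an update only succeeds if the
// caller still holds the version that is current in the "database".
public class VersionedRow {
    private int pageViews;
    private long version = 0;

    public long getVersion() { return version; }
    public int getPageViews() { return pageViews; }

    // Returns true if the update was applied, false if the row was
    // modified in the meantime (stale version -> caller must reload).
    public synchronized boolean update(int newPageViews, long expectedVersion) {
        if (expectedVersion != version) {
            return false; // concurrent modification detected
        }
        pageViews = newPageViews;
        version++;
        return true;
    }
}
```

Two requests that both read version 0 will race: the first update succeeds and bumps the version to 1, the second fails and has to reload, which is exactly the "please reload" scenario described above.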
In the case of a page view counter, as in your example, a different solution would be to write a synchronized method which counts the page views sequentially.
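A minimal sketch of such a synchronized counter (the class name is hypothetical):

```java
// Serializes all increments so two concurrent requests can never
// both read 10 and both write back 11.
public class PageViewCounter {
    private int pageViews;

    public PageViewCounter(int initial) {
        pageViews = initial;
    }

    public synchronized int increment() {
        return ++pageViews;
    }

    public synchronized int get() {
        return pageViews;
    }
}
```

Note this only protects against concurrent increments within one JVM; the value still has to be flushed to the database at some point.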
By default the in memory value is used for the update.
In the following I assume you want to implement an automatic page view counter, not to let users modify the User in a web interface. If you want the latter, take a look at Hibernate optimistic locking.
So, supposing you need 100% accuracy when counting the page views, you can lock your User entity while you modify its pageViews value, to obtain exclusivity on the table row:
Session session = ...
Transaction tx = ...
session.lock(user, LockMode.UPGRADE);
user.increasePageViews();
tx.commit();
session.close();
The LockMode.UPGRADE will translate into a SELECT ... FOR UPDATE in your database, so be careful to hold the lock for as short a time as possible so as not to impact application scalability.
I'm trying to delete a record from the GAE datastore via an AJAX query which sends the object's "primary key" (a Long id with auto-increment).
Currently, I'm doing this (with the key hard-coded to 6):
Objectify ofy = ObjectifyService.begin();
ofy.delete( Test1.class , 6);
This works: it deletes the entity which has the key 6.
But for security reasons, I need another parameter (fyi: "parent_user") so that only the owner can delete this object.
It seems Objectify.delete() doesn't allow passing more parameters than the key...
How could I solve this? Doing an Objectify.get() with my extra parameters plus the key to fetch the full object, then passing the whole object to delete(), feels clumsy and unoptimized...
As documented at http://objectify-appengine.googlecode.com/svn/trunk/javadoc/index.html, Objectify.delete() does not take any additional parameters besides object keys, ids, or strings.
So, you need to first get the objects based on your filters and then delete them. However, to optimize this, you can fetch only the keys of the objects rather than the full objects, and then delete by key.
Hope this helps!
If your data model allows you to let the user be the Datastore ancestor of your objects, you can get rid of the query, since the ancestor is part of the key.
What I often do is authenticate the user at the beginning of every request, which uses the @Cached annotation of Objectify to cache all users (and their privileges, which are embedded into the user).
Then, most of the user-related data has the user as its ancestor. This way, whenever a user tries to access or delete a resource, I will never accidentally allow the user to do it on any object that isn't hers. All in all, only gets, which are quick and cacheable.
I have a page where a user can edit a lot of information, right now about 100 rows' worth of DDLs (drop-down lists) and a text area. I want to update a data object after each change so that I only have to save the changed rows to the database instead of updating every row.
i.e. when the DDL value changes or when the text area data has changed (this is done in a pop up so that it will only be changed when 'Ok' is clicked) it will be stored into an array holding each updated row as an object. When the user hits save, it will only save the rows that were changed.
Right now I'm using AJAX, so it makes an HTTP request, gets the array from the session, and adds a new entry with the new value. Unfortunately I believe the page is stepping on itself at times and not keeping the data consistent. I'm not sure why, but I was wondering what would be the best way of implementing this, and whether this is a good way of doing it.
Would a Java bean or anything else be better to represent the data object?
Would not accessing and storing in the session be faster and prevent this?
A Java bean is very good for this purpose (as compared to a java.util.Map).
As I understand it, you want to call UPDATE only for items that have changed; the best approach would be to implement equals() for that Java bean class.
You have to store the old values in the session, or anywhere else on the server, to be able to determine what has changed.
Either way, you'll have to loop and compare each pair of objects:
if (!prevValue.equals(currValue)) {
DAO.update(currValue);
}
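To illustrate, here is a minimal sketch with a hypothetical Row bean (the field set and helper names are made up; the point is that equals() covers all persisted fields and drives the dirty check):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class Row {
    private final int id;
    private final String value;

    public Row(int id, String value) {
        this.id = id;
        this.value = value;
    }

    public int getId() { return id; }

    // equals() compares every persisted field, so two Rows are equal
    // exactly when no column needs an UPDATE.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Row)) return false;
        Row other = (Row) o;
        return id == other.id && Objects.equals(value, other.value);
    }

    @Override
    public int hashCode() { return Objects.hash(id, value); }

    // Collect only the rows that differ from their previous state;
    // in a real application each hit would be a DAO.update(...) call.
    public static List<Row> changedRows(List<Row> prev, List<Row> curr) {
        List<Row> changed = new ArrayList<>();
        for (int i = 0; i < curr.size(); i++) {
            if (!prev.get(i).equals(curr.get(i))) {
                changed.add(curr.get(i));
            }
        }
        return changed;
    }
}
```

This keeps the dirty-checking logic in one place instead of a boolean flag per field.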
I'm currently using ORMLite to work with an SQLite database on Android. As part of this I am downloading a bunch of data from a backend server, and I'd like to have this data added to the SQLite database in exactly the same form it is in on the backend server (i.e. the IDs are the same, etc.).
So, my question is: if I populate my database entry object (we'll call it Equipment), including Equipment's generatedId/primary key field via setId(), and I then run DAO.create() with that Equipment entry, will that ID be saved correctly? I tried it this way and it seems to me that this was not the case. If it should work, I will try again and look for other problems, but on the first few passes over the code I was not able to find one. So essentially: if I call DAO.create() on a database object with an ID already set, will that ID be sent to the database? And if not, how can I insert a row with the primary key value already filled out?
Thanks!
@Femi is correct that an object can have either a generated-id or an id, but not both. The issue is more than how ORMLite stores the object: it also has to match the schema that the database was generated with.
ORMLite supports an allowGeneratedIdInsert=true option on the @DatabaseField annotation that allows this behavior. It is not supported by some database types (Derby, for example) but works under Android/SQLite.
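For illustration, a hedged sketch of such an entity mapping (the class and field names are hypothetical; generatedId and allowGeneratedIdInsert are the documented annotation options):

```java
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.table.DatabaseTable;

@DatabaseTable(tableName = "equipment")
public class Equipment {
    // Normally the database generates this id on insert, but with
    // allowGeneratedIdInsert = true a non-null id set via setId()
    // is inserted as-is instead of being replaced.
    @DatabaseField(generatedId = true, allowGeneratedIdInsert = true)
    private Long id;

    @DatabaseField
    private String name;

    public Equipment() { } // no-arg constructor required by ORMLite

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}
```

With the id field left null, DAO.create() lets the database generate it; with the id set, the caller's value is used.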
For posterity, you can also create 2 objects that share the same table: one with a generated-id and one without. Then you can insert using the generated-id DAO to get the generated behavior, and the other DAO to take the id value set by the caller. Here's another answer talking about that. The issue for you is that this would create a lot of extra DAOs.
The only other solution is not to use the id for your own purposes. Let the database generate the id, and have an additional, externally set field that you use for your identifier. Forcing the database id in certain circumstances seems to me to be a bad pattern.
From http://ormlite.com/docs/generated-id:
Boolean whether the field is an auto-generated id field. Default is false. Only one field can have this set in a class. This tells the database to auto-generate a corresponding id for every row inserted. When an object with a generated-id is created using the Dao.create() method, the database will generate an id for the row which will be returned and set in the object by the create method. Some databases require sequences for generated ids in which case the sequence name will be auto-generated. To specify the name of the sequence use generatedIdSequence. Only one of this, id, and generatedIdSequence can be specified.
You must use either generatedId (in which case it appears all ids must be generated) or id (in which case you can set them) but not both.