I'm developing a GWT web application very similar to Doodle.com
I need to store some information about Events and Users in a database.
It is a project for a university exam, and we have to use MapDB for persistent data.
The problem we have is that each time the saveEventsToDB() method is called, a new database overwrites the one that was previously created.
This is the saveEventsToDB() method:
public void saveEventsToDB(Event event) {
    DB eventsDB = DBMaker.newFileDB(new File("eventsDB")).closeOnJvmShutdown().make();
    Map<Integer, Event> map = eventsDB.getTreeMap("Events");
    map.clear();
    Set<Integer> keys = map.keySet();
    int id = 0;
    for (int key : keys) {
        id++;
    }
    map.put(id + 1, event);
    eventsDB.commit();
    eventsDB.close();
}
I'm pretty sure it's caused by this line of code:
DB eventsDB = DBMaker.newFileDB(new File("eventsDB")).closeOnJvmShutdown().make();
But this was in the example code that our professor gave us for MapDB.
The MapDB documentation says that newFileDB:
Creates or open database stored in file.
But a new database is created every time; we saw this using some breakpoints and by trying to extract data from the DB, where only one record was returned each time.
If anyone can help, it would be very much appreciated. Thanks
Call db.commit() from time to time; MapDB has transactions, and uncommitted data is discarded.
Your problem is that you delete the data you previously stored in the DB when you do
map.clear()
This call deletes all the records in the map. You shouldn't call it here.
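For illustration, here is a minimal sketch of the method without the clear() call, assuming the same MapDB 1.x API as in the question and that ids are assigned sequentially:
public void saveEventsToDB(Event event) {
    // newFileDB opens the existing file if present; a new one is created only on first use
    DB eventsDB = DBMaker.newFileDB(new File("eventsDB")).closeOnJvmShutdown().make();
    ConcurrentNavigableMap<Integer, Event> map = eventsDB.getTreeMap("Events");
    // do NOT clear the map; derive the next id from the highest existing key instead
    int nextId = map.isEmpty() ? 1 : map.lastKey() + 1;
    map.put(nextId, event);
    eventsDB.commit(); // persist the change; uncommitted data would be discarded
    eventsDB.close();
}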
I need to manage concurrent access for data updates in MongoDB.
Example: two users, A and B, connect to my application. User A updates a piece of data, and user B wants to update the same data that user A has already updated; I want user B to be unable to update this data, because it has already been updated by user A.
If user A and user B only update one document, and you know both the initial value and the updated value, try this code:
The code tries to update the secret field, and we know the initial value is expertSecret:
public void compareAndSet(String expertSecret, String targetSecret) {
    // get a mongodb collection
    MongoCollection<Document> collection = client.getDatabase(DATABASE).getCollection(COLLECTION);
    // match the document only if the field still holds the expected value
    BasicDBObject filter = new BasicDBObject();
    filter.append("secret", expertSecret);
    // the update document needs the $set operator, otherwise the driver rejects it
    BasicDBObject update = new BasicDBObject();
    update.append("$set", new BasicDBObject("secret", targetSecret));
    // atomic per document: the write applies only if the filter still matches
    collection.updateOne(filter, update);
}
What if you don't know the initial value?
You can add a field to represent the operation (for example, a version number), and check that field before updating, as in the sketch below.
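As a rough sketch of that approach, assuming a hypothetical numeric version field on the document (the method and field names here are illustrative, not from the original code):
// Optimistic update: succeeds only if the version has not changed since we read it.
public boolean updateWithVersion(ObjectId id, long expectedVersion, String targetSecret) {
    MongoCollection<Document> collection = client.getDatabase(DATABASE).getCollection(COLLECTION);
    BasicDBObject filter = new BasicDBObject();
    filter.append("_id", id);
    filter.append("version", expectedVersion); // "version" is a hypothetical field
    BasicDBObject update = new BasicDBObject();
    update.append("$set", new BasicDBObject("secret", targetSecret));
    update.append("$inc", new BasicDBObject("version", 1)); // bump the version atomically
    // matchedCount == 0 means another writer got there first
    return collection.updateOne(filter, update).getMatchedCount() > 0;
}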
What if you need to update more than one document?
Multi-document transactions need support from the MongoDB server; you can get more information from here:
However, for situations that require atomicity for updates to multiple documents or consistency between reads to multiple documents, MongoDB provides the ability to perform multi-document transactions against replica sets. Multi-document transactions can be used across multiple operations, collections, databases, and documents. Multi-document transactions provide an “all-or-nothing” proposition.
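For completeness, a minimal sketch of such a transaction with the sync Java driver (3.11 or later), assuming a replica set and hypothetical document ids idA and idB:
try (ClientSession session = client.startSession()) {
    session.withTransaction(() -> {
        // both updates commit together or not at all
        collection.updateOne(session,
                new BasicDBObject("_id", idA),
                new BasicDBObject("$set", new BasicDBObject("secret", "valueA")));
        collection.updateOne(session,
                new BasicDBObject("_id", idB),
                new BasicDBObject("$set", new BasicDBObject("secret", "valueB")));
        return null;
    });
}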
I am working on a monitoring tool developed in Spring Boot using Hibernate as ORM.
I need to compare each row (already persisted rows of sent messages) in my table and see whether a MailId (unique) has received feedback (status: OPENED, BOUNCED, DELIVERED...), yes or no.
I get the feedback by reading CSV files from a network folder. The parsing and reading of the CSV files is very fast, but the update of my database is very slow. My algorithm is not very efficient, because I loop through a list that can hold hundreds of thousands of objects and look each one up in my table.
This is the method that performs the update on my table by updating the "target" object (a row in the database table):
@Override
public void updateTargetObjectFoo() throws CSVProcessingException, FileNotFoundException {
    // performProcessing reads the files in a folder, parses them into Java objects,
    // and maps them into a feedback list of type Foo
    List<Foo> feedBackList = performProcessing(env.getProperty("foo_in"), EXPECTED_HEADER_FIELDS_STATUS, Foo.class, ".LETTERS.STATUS.");
    for (Foo foo : feedBackList) {
        // findByKey does a simple SELECT in MySQL where MailId = foo.getMailId()
        Foo persistedFoo = fooDao.findByKey(foo.getMailId());
        if (persistedFoo != null) {
            persistedFoo.setStatus(foo.getStatus());
            persistedFoo.setDnsCode(foo.getDnsCode());
            persistedFoo.setReturnDate(foo.getReturnDate());
            persistedFoo.setReturnTime(foo.getReturnTime());
            // saveAccount issues a MySQL UPDATE; note it must save the modified
            // persistedFoo, not the detached foo
            fooDao.saveAccount(persistedFoo);
        }
    }
}
What if I performed this selection/comparison and update in Java instead, and then re-persisted the whole list to the database?
Would that be faster?
Thanks to all for your help.
Hibernate is not particularly well-suited for batch processing.
You may be better off using Spring's JdbcTemplate to do JDBC batch processing.
However, if you must do this via Hibernate, this may help: https://docs.jboss.org/hibernate/orm/5.2/userguide/html_single/chapters/batch/Batching.html
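As a rough sketch of the JdbcTemplate approach (the table and column names are assumptions based on the question, not your actual schema):
// One UPDATE statement executed as JDBC batches instead of one SELECT plus
// one UPDATE per row; rows with no matching mail_id simply update nothing.
jdbcTemplate.batchUpdate(
        "UPDATE foo SET status = ?, dns_code = ?, return_date = ?, return_time = ? WHERE mail_id = ?",
        feedBackList,
        1000, // batch size
        (ps, foo) -> {
            ps.setString(1, foo.getStatus());
            ps.setString(2, foo.getDnsCode());
            ps.setObject(3, foo.getReturnDate());
            ps.setObject(4, foo.getReturnTime());
            ps.setString(5, foo.getMailId());
        });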
I have to check for changes in an old embedded DBF database which is populated by an old third-party application. I don't have access to the source code of that application and cannot put a trigger or anything similar on the database. Due to business constraints I cannot change that...
My objective is to capture new records, deleted records and modified records from a table (~1500 records) of that database with a Java application for further processing. The database is accessible in my Spring application through JPA/Hibernate with the HXTT DBF driver.
I am looking now for a way to efficiently capture changes made by the third-party app in the database.
Do I have to periodically read the whole table and check whether each record is unchanged, or apply some kind of diff between two readings? Is there some kind of "trigger" I can set in my Java app? How do I listen properly for those changes?
There is no JPA mechanism for getting callbacks from a database when the data changes.
The only option is to build your own change detection. Typically you would start by detecting which entities were added, which were removed, and which still exist. For the ones that still exist you will need to check whether they were changed, so the entity needs an equals() method.
An entity is identified by its primary key, so you will need to get the set of all primary keys; once you have that, you can easily use Guava's Sets methods to produce the three sets of added, removed, and existing (before and now), like this:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;
import com.google.common.collect.Sets;

List<MyEntity> old = new ArrayList<>();     // loaded from the DB last time
List<MyEntity> current = new ArrayList<>(); // loaded from the DB now

Map<Long, MyEntity> oldMap = old.stream().collect(Collectors.toMap(MyEntity::getId, Function.identity()));
Map<Long, MyEntity> currentMap = current.stream().collect(Collectors.toMap(MyEntity::getId, Function.identity()));

Set<Long> oldKeys = oldMap.keySet();
Set<Long> currentKeys = currentMap.keySet();

Sets.SetView<Long> deletedKeys = Sets.difference(oldKeys, currentKeys);
Sets.SetView<Long> addedKeys = Sets.difference(currentKeys, oldKeys);
Sets.SetView<Long> couldBeChanged = Sets.intersection(oldKeys, currentKeys);

for (Long id : couldBeChanged) {
    if (!oldMap.get(id).equals(currentMap.get(id))) {
        // entity with this id was changed
    }
}
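Since the comparison hinges on equals(), the entity needs one that compares the business columns, not just the id. A minimal sketch using java.util.Objects (the field names are hypothetical):
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof MyEntity)) return false;
    MyEntity other = (MyEntity) o;
    // compare every column in which you care about detecting changes
    return Objects.equals(id, other.id)
            && Objects.equals(name, other.name)
            && Objects.equals(status, other.status);
}

@Override
public int hashCode() {
    return Objects.hash(id);
}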
I have been using the open-source dataset provider Casper to get an in-memory representation of a collection of database objects in Java.
Github Repository : https://github.com/casperds/casperdatasets
Below is the code that I have been using to pull data into Casper datasets:
String[] primaryKeys = { "QUESTION_ID" };
if (resultSet != null) {
    container = CDataCacheDBAdapter.loadData(resultSet, null, primaryKeys, new HashMap<Object, Object>());
    lCDataRowset = container.getAll();
    preparedStatement.close();
    resultSet.close();
}
The problem with using this is: when I don't mention primary keys, the DBAdapter does not load data, and if I mention some column as the primary key, then "Order By" has no effect on the dataset; it just orders by the primary keys.
I want to be able to pull data into the dataset in the order I specified in the query.
Did anybody face this issue? Any kind of help is appreciated! Thanks
Well, it turned out to be a very stupid issue. If you pass null for the primaryKeys parameter, it returns data in the same order the query returns it in MySQL Query Browser.
I thought this could help someone someday; that's why I'm keeping this post, otherwise I would have deleted it.
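In other words, the loadData call from the question becomes (same variables as above):
// Passing null for primaryKeys preserves the ORDER BY of the original query
container = CDataCacheDBAdapter.loadData(resultSet, null, null, new HashMap<Object, Object>());
lCDataRowset = container.getAll();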
This question is related to my other question
I am building a Spring web application which reads data from the DB using Hibernate. My app will not be aware of any changes (updates/inserts) made to the DB. Is there a way to use the query cache in such a scenario?
I configured the query cache, and it does not invalidate the cache when I update the DB from a different app. I think that is the expected behavior.
I need the queries to be cached and invalidated when there is an update in the DB. How can I achieve this?
I am not sure whether there is any automatic way of refreshing the cache, but I solved this problem in my last project. Expose a method like the one below and give an admin access to it. Once any modification is done in the DB externally, call this method to refresh your cache:
public void refreshCache() {
    try {
        // evict every mapped entity from the second-level cache
        Map<String, ClassMetadata> classesMetadata = sessionFactory.getAllClassMetadata();
        for (String entityName : classesMetadata.keySet()) {
            sessionFactory.evictEntity(entityName);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Well, if you are using Oracle, the following command will give you the last updated unique SCN on the table:
select max(ora_rowscn) from TableName;
Output:
10772982279880
You can then convert this to a timestamp if you want:
select scn_to_timestamp(10772982279880) from dual
But I don't think you need to convert it to a time; just cache the rowscn alone and check the table periodically. If there is a change, you can evict the cache regions.
Please note that this requires Oracle 10g or later.
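A rough sketch of that polling idea, assuming Spring's JdbcTemplate, a Hibernate SessionFactory, and a placeholder table name:
private volatile long lastScn = -1;

// run periodically, e.g. with Spring's @Scheduled
@Scheduled(fixedDelay = 60000)
public void checkForExternalChanges() {
    Long scn = jdbcTemplate.queryForObject("select max(ora_rowscn) from MY_TABLE", Long.class);
    if (scn != null && scn != lastScn) {
        lastScn = scn;
        // the table changed outside the app: drop cached queries and entities
        sessionFactory.getCache().evictQueryRegions();
        sessionFactory.getCache().evictEntityRegions();
    }
}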