I am new to MongoDB. I implemented the transactional feature in one of my applications; per my requirements, I need to persist data into different collections in the same database. Below is the code snippet for that.
In the Tuple3, the first element is the database, the second element is the collection, and the third element is the data I want to persist, which comes in as a JSON string that I convert to a BSON Document.
ClientSession clientSession = mongoClient.startSession();
try {
    clientSession.startTransaction(transactionOptions);
    for (Tuple3<String, String, String> value : insertValues) {
        MongoCollection<Document> collection = mongoClient
                .getDatabase(value.f0)
                .getCollection(value.f1);
        Document data = Document.parse(value.f2);
        log.info(String.format("Inserting data into database %s and collection %s", value.f0, value.f1));
        collection.insertOne(clientSession, data);
    }
    clientSession.commitTransaction();
} catch (MongoCommandException | MongoWriteException exception) {
    clientSession.abortTransaction();
    log.error(String.format("Exception happened while inserting record into MongoDB, rolling back the transaction; " +
            "cause of exception: %s", exception));
} finally {
    clientSession.close();
}
Below are the transaction options I am using:
TransactionOptions transactionOptions = TransactionOptions.builder().readConcern(ReadConcern.LOCAL).writeConcern(WriteConcern.W1).build();
Below is the MongoClient method with MongoClientOptions; I pass the MongoDB connection string as input to this method:
public MongoClient getTransactionConnection(String connectionString) {
    MongoClientOptions.Builder mongoClientOptions = new MongoClientOptions.Builder()
            .readConcern(ReadConcern.LOCAL)
            .writeConcern(WriteConcern.W1)
            .readPreference(ReadPreference.primary())
            .serverSelectionTimeout(120000)
            .maxWaitTime(120000)
            .connectionsPerHost(10)
            .connectTimeout(120000);
    MongoClientURI uri = new MongoClientURI(connectionString, mongoClientOptions);
    return new MongoClient(uri);
}
Up to this point it is good: it inserts data into three different collections under the specified database. But when I test a negative scenario, I throw an exception in the try block, which ideally should roll back the data for that particular client session if any error happens.
I throw the exception using a count variable that increments each iteration; when count equals 1, I throw an exception, which should abort the transaction and roll back any data already written to the database. What I actually see is that it writes to one of the collections, throws the exception, and then the program stops, but the data written to that collection is never rolled back. I am trying something like this:
ClientSession clientSession = mongoClient.startSession();
int count = 0;
try {
    clientSession.startTransaction(transactionOptions);
    for (Tuple3<String, String, String> value : insertValues) {
        MongoCollection<Document> collection = mongoClient
                .getDatabase(value.f0)
                .getCollection(value.f1);
        Document data = Document.parse(value.f2);
        log.info(String.format("Inserting data into database %s and collection %s", value.f0, value.f1));
        collection.insertOne(clientSession, data);
        if (count == 1) {
            throw new MongoException("Aborting transaction.....");
        }
        count++;
    }
    clientSession.commitTransaction();
} catch (MongoException exception) {
    // MongoException is the common base class, so this also catches
    // MongoCommandException and MongoWriteException, as well as the
    // MongoException thrown above (which the subclass-only catch would miss).
    clientSession.abortTransaction();
    log.error(String.format("Exception happened while inserting record into MongoDB, rolling back the transaction; " +
            "cause of exception: %s", exception));
} finally {
    clientSession.close();
}
I am not sure where I am going wrong. I am using MongoDB version 4.0 deployed using the Azure Cosmos DB API. Please help me resolve this issue; thanks in advance.
Cosmos DB does not have transaction support outside of a single partition (shard) of a single collection. This limitation exists regardless of the API in use (in your case, the MongoDB API), which is why you're not seeing the behavior you expect. Note: this is mentioned in the Cosmos DB MongoDB compatibility docs.
You'll need to come up with your own implementation for managing data consistency within your app.
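One workaround, shown as a rough sketch below, is a compensating-write pattern: record the _id of every document you insert, and delete those documents yourself if a later insert fails. The field layout reuses the question's Tuple3 usage; note the cleanup itself is not atomic, so a crash mid-compensation can still leave partial data behind.
List<Tuple3<String, String, ObjectId>> inserted = new ArrayList<>();
try {
    for (Tuple3<String, String, String> value : insertValues) {
        Document data = Document.parse(value.f2);
        mongoClient.getDatabase(value.f0)
                .getCollection(value.f1)
                .insertOne(data);  // the driver assigns an ObjectId _id if absent
        // Remember what was written so it can be undone on failure.
        inserted.add(Tuple3.of(value.f0, value.f1, data.getObjectId("_id")));
    }
} catch (MongoException exception) {
    log.error("Insert failed, compensating earlier writes", exception);
    for (Tuple3<String, String, ObjectId> doc : inserted) {
        mongoClient.getDatabase(doc.f0)
                .getCollection(doc.f1)
                .deleteOne(new Document("_id", doc.f2));
    }
}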
Related
We are fetching the list of namespaces from Datastore, which counts up to 30k.
The cron job that fetches the namespaces runs daily. One day it works fine, and another day it throws a Datastore timeout exception:
com.google.appengine.api.datastore.DatastoreTimeoutException: The datastore operation timed out, or the data was temporarily unavailable.
Related code:
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
FetchOptions options = FetchOptions.Builder.withChunkSize(150);
Query q = new Query(Entities.NAMESPACE_METADATA_KIND);
for (Entity e : ds.prepare(q).asIterable(options)) {
    // A nonzero numeric id denotes the default namespace;
    // see Namespace Queries, below
    if (e.getKey().getId() != 0) {
        continue;
    } else {
        namespaces.add(e.getKey().getName());
    }
}
What could be the issue?
According to the official documentation:
DatastoreTimeoutException is thrown when a datastore operation times out. This can happen when you attempt to put, get, or delete too many entities or an entity with too many properties, or if the datastore is overloaded or having trouble.
This means the datastore is having trouble with your request. Try to handle that error, for example:
import com.google.appengine.api.datastore.DatastoreTimeoutException;

try {
    // Code that could result in a timeout
} catch (DatastoreTimeoutException e) {
    // Display a timeout-specific error page
}
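Since these timeouts are often transient, one option (a sketch only; the retry count, backoff delay, and the runQuery() wrapper are illustrative, not part of the App Engine API) is to retry the operation a few times before giving up:
int attempts = 0;
while (true) {
    try {
        runQuery();  // hypothetical wrapper around the namespace query above
        break;       // success, stop retrying
    } catch (DatastoreTimeoutException e) {
        if (++attempts >= 3) {
            throw e;  // still timing out, surface the error
        }
        try {
            Thread.sleep(1000L * attempts);  // simple linear backoff
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            throw e;
        }
    }
}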
I have some method in my DAO class:
public void insertAVAYAcmCDRs(List<AvayaCmCdr> cdrList) {
    AvayaCmCdr aCdrList1 = null;
    try {
        em.getTransaction().begin();
        for (AvayaCmCdr aCdrList : cdrList) {
            aCdrList1 = aCdrList;
            em.persist(aCdrList);
        }
        em.getTransaction().commit();
        em.clear();
    } catch (Exception e) {
        logger.log(Level.INFO, "Exception in task time={0}. Exception message = {1}.", new Object[]{aCdrList1.getDate(), e.getMessage()});
    }
}
I tried to save all the entities in the list to the DB. But the DB has a unique index, which does not allow inserting duplicate rows. It works normally on the DB side, but I get an error in Java:
a different object with the same identifier value was already associated with the session
I get this error on the second iteration of the loop. I printed the object and found the duplicate in the DB.
I want to ignore this error and continue inserting data, or somehow handle the error: if a row is already in the database, I want to skip it and continue inserting.
Why are you assigning aCdrList1 = aCdrList? Is there any specific reason?
You can upsert the aCdrList object instead of persisting it:
em.merge(aCdrList);
(Hibernate's Session also has saveOrUpdate(aCdrList), but that method is not on the JPA EntityManager, so em.saveOrUpdate(...) will not compile.)
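If the goal is to skip rows that already exist and keep inserting the rest, one option is to look each row up before persisting it. A sketch below, assuming AvayaCmCdr exposes its primary key through a getId() method (illustrative name):
public void insertAvayaCmCdrs(List<AvayaCmCdr> cdrList) {
    em.getTransaction().begin();
    for (AvayaCmCdr cdr : cdrList) {
        // find() also sees entities persisted earlier in this loop, so
        // in-list duplicates are skipped as well as rows already in the DB.
        if (em.find(AvayaCmCdr.class, cdr.getId()) == null) {
            em.persist(cdr);
        } else {
            logger.log(Level.INFO, "Skipping duplicate CDR {0}", cdr.getId());
        }
    }
    em.getTransaction().commit();
    em.clear();
}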
I just switched over from Python and need to continue my work with a MongoDB database. One particular task is to save an incoming document (in this case, a tweet) into a collection for archiving. A tweet can come in multiple times, so I prefer to use save() over insert(), since the former does not raise an error if the document already exists in the collection. But it seems the Java driver for MongoDB does not support the save operation. Am I missing something?
EDIT: for reference, I'm using the library 'org.mongodb:mongodb-driver:3.0.2'.
Example code:
MongoCollection<Document> tweets = db.getCollection("tweets");
...
Document tweet = (Document) currentDocument.get("tweet");
tweets.insertOne(tweet);
The last line raises this error when the tweet already exists:
Exception in thread "main" com.mongodb.MongoWriteException: insertDocument :: caused by :: 11000 E11000 duplicate key error index: db.tweets.$_id_ dup key: { : ObjectId('55a403b87f030345e84747eb') }
Using the 3.x MongoDB Java driver you can use MongoCollection#replaceOne(Document, Document, UpdateOptions) like this:
MongoClient mongoClient = ...
MongoDatabase database = mongoClient.getDatabase("myDB");
MongoCollection<Document> tweets = database.getCollection("tweets");
...
Document tweet = (Document) currentDocument.get("tweet");
tweets.replaceOne(tweet, tweet, new UpdateOptions().upsert(true));
This will avoid the duplicate key error. However, it is not exactly the same as using DBCollection#save(DBObject), since it uses the whole Document as the filter instead of just the _id field. To mirror the old save method, you would have to write something like this:
import static com.mongodb.client.model.Filters.eq;

public static void save(MongoCollection<Document> collection, Document document) {
    Object id = document.get("_id");
    if (id == null) {
        // No _id yet: a plain insert lets the driver generate one.
        collection.insertOne(document);
    } else {
        // _id present: upsert so an existing document gets replaced.
        collection.replaceOne(eq("_id", id), document, new UpdateOptions().upsert(true));
    }
}
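Used against the snippet from the question, repeated arrivals of the same tweet then stop throwing:
// First call inserts the tweet; a second call with the same _id replaces
// the stored copy instead of raising E11000.
save(tweets, tweet);
save(tweets, tweet);  // no duplicate key error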
I am using a Java program for MongoDB insertion and trying to create a unique index for a field. product_src is a field in my collection, and I want to set it as a unique index to avoid duplicate insertion. I am trying the following code, but it shows a syntax error; what is the problem with it?
DB db;
try {
    sample = new MongoClient("myIP", PORT);
    db = sample.getDB("client_mahout");
    t = db.getCollection("data_flipkart_in_avoid_duplicate_checking");
    System.out.println("enter the system ip");
    db.t.ensureIndex({"product_src":1});
} catch (Exception e) {}
t is the collection. The problem is with the line db.t.ensureIndex({"product_src":1});
Please give me sample code for creating a unique index in MongoDB.
For future reference, the way to handle this with the Mongo Java driver v3.0+ is:
public void createUniqueIndex() {
    Document index = new Document("field", 1);
    MongoCollection<Document> collection = client.getDatabase("db").getCollection("Collection");
    collection.createIndex(index, new IndexOptions().unique(true));
}
You need to pass a DBObject to the ensureIndex() method; a JavaScript-style literal like {"product_src":1} is not valid Java. Also, t is a local variable, so call the method on t directly:
t.ensureIndex(new BasicDBObject("product_src", 1));
However, ensureIndex has been deprecated since driver version 2.12; use createIndex() instead:
t.createIndex(new BasicDBObject("product_src", 1));
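Since the question asks for a unique index specifically, note that with this legacy API you can pass an options object as the second argument (a sketch; the option map mirrors the server's createIndex options):
// Make the index unique so a second document with the same product_src
// is rejected with a duplicate key error (E11000).
t.createIndex(new BasicDBObject("product_src", 1),
        new BasicDBObject("unique", true));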
When I delete my neo4j database after my tests like this:
public static final DatabaseOperation clearDatabaseOperation = new DatabaseOperation() {
    @Override
    public void performOperation(GraphDatabaseService db) {
        // This is deprecated on the GraphDatabaseService interface,
        // but the alternative is not supported by implementation (RestGraphDatabase)
        for (Node node : db.getAllNodes()) {
            for (Relationship relationship : node.getRelationships()) {
                relationship.delete();
            }
            boolean notTheRootNode = node.getId() != 0;
            if (notTheRootNode) {
                node.delete();
            }
        }
    }
};
When querying the database through an AJAX search (i.e. searching on an empty database), it returns an internal 500 error:
localhost:9000/search-results?keywords=t 500 Internal Server Error 197ms
However, if I delete the database manually like this:
start r=relationship(*) delete r;
start n=node(*) delete n;
no exception is thrown.
It's most likely an issue with my code at a lower level in the call and return.
Just wondering why the error only occurs in one of the scenarios above and not both.
Use Cypher.
You should probably state more obviously that you are using the rest-graph-database. Are you querying after the deletion or during it?
Please check your logs in data/graph.db/messages.log and data/log/console.log to find the error cause. Perhaps you can also look at the response body of the HTTP 500 request.
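If you go the Cypher route from Java, a minimal sketch (assuming the embedded ExecutionEngine API of that Neo4j era; a RestGraphDatabase would need the REST binding's query engine instead) could look like:
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.graphdb.GraphDatabaseService;

// Sketch: clear the graph with Cypher instead of iterating nodes by hand.
// Keeps the reference node (id 0), matching the check in the question.
public static void clearDatabase(GraphDatabaseService db) {
    ExecutionEngine engine = new ExecutionEngine(db);
    engine.execute("START r=relationship(*) DELETE r");
    engine.execute("START n=node(*) WHERE ID(n) <> 0 DELETE n");
}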
As per your error, I guess your data is getting corrupted after deletion.
I used the same code as yours and deleted the nodes, except that I put the iteration inside a transaction and shut down the database after the operation, e.g.:
Transaction _tx = _db.beginTx();
try {
    for (/* your conditions */) {
        // your code
    }
    _tx.success();
} catch (Exception e) {
    _logger.error(e.getMessage());
} finally {
    _tx.finish();
    _db.shutdown();
    graphDbFactory.cleanUp();
}
Hope it will work for you.