We are fetching the list of namespaces from the Datastore; there are roughly 30k of them. The cron job that fetches the namespaces runs daily. Some days it works fine, and on other days it throws a Datastore timeout exception:
com.google.appengine.api.datastore.DatastoreTimeoutException: The datastore operation timed out, or the data was temporarily unavailable.
Related code (namespaces is a collection of String values that the job populates):
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entities;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.Query;

DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
FetchOptions options = FetchOptions.Builder.withChunkSize(150);
Query q = new Query(Entities.NAMESPACE_METADATA_KIND);
for (Entity e : ds.prepare(q).asIterable(options)) {
    // A nonzero numeric id denotes the default namespace;
    // see Namespace Queries, below
    if (e.getKey().getId() != 0) {
        continue;
    } else {
        namespaces.add(e.getKey().getName());
    }
}
What could be the issue?
According to the official documentation:
DatastoreTimeoutException is thrown when a datastore operation times
out. This can happen when you attempt to put, get, or delete too many
entities or an entity with too many properties, or if the datastore is
overloaded or having trouble.
This means the Datastore is having trouble with your request. Try to handle that error, for example:
import com.google.appengine.api.datastore.DatastoreTimeoutException;

try {
    // Code that could result in a timeout
} catch (DatastoreTimeoutException e) {
    // Display a timeout-specific error page
}
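If the timeout keeps interrupting the same long-running iteration over ~30k entities, a common workaround is to remember the query cursor and resume from it after the exception instead of restarting from scratch. A minimal sketch, assuming the same namespace query as in the question (the retry limit and cursor handling are illustrative, not part of the original code):

import java.util.ArrayList;
import java.util.List;

import com.google.appengine.api.datastore.Cursor;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.DatastoreTimeoutException;
import com.google.appengine.api.datastore.Entities;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.FetchOptions;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.QueryResultIterator;

DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
List<String> namespaces = new ArrayList<>();
Cursor cursor = null;
int attempts = 0;
boolean done = false;

while (!done) {
    FetchOptions options = FetchOptions.Builder.withChunkSize(150);
    if (cursor != null) {
        options.startCursor(cursor); // resume where the previous attempt stopped
    }
    QueryResultIterator<Entity> it = ds
            .prepare(new Query(Entities.NAMESPACE_METADATA_KIND))
            .asQueryResultIterator(options);
    try {
        while (it.hasNext()) {
            Entity e = it.next();
            if (e.getKey().getId() == 0) {   // a named key means a non-default namespace
                namespaces.add(e.getKey().getName());
            }
            cursor = it.getCursor();         // remember progress after each entity
        }
        done = true;
    } catch (DatastoreTimeoutException timeout) {
        if (++attempts > 5) {
            throw timeout;                   // give up after a few transient failures
        }
        // otherwise loop around and retry from the last recorded cursor
    }
}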
I am a newbie to MongoDB. I implemented the transactional feature in one of my applications; as per my requirements, I need to persist data into different collections in the same database. Below is the code snippet for this.
In the Tuple3, the first element is the database, the second element is the collection, and the third element is the data I want to persist, which arrives as a JSON string that I convert to a BSON Document.
ClientSession clientSession = mongoClient.startSession();
try {
    clientSession.startTransaction(transactionOptions);
    for (Tuple3<String, String, String> value : insertValues) {
        MongoCollection<Document> collection = mongoClient
                .getDatabase(value.f0)
                .getCollection(value.f1);
        Document data = Document.parse(value.f2);
        log.info(String.format("Inserting data into database %s and collection is %s", value.f0, value.f1));
        collection.insertOne(clientSession, data);
        clientSession.commitTransaction();
    }
} catch (MongoCommandException | MongoWriteException exception) {
    clientSession.abortTransaction();
    log.error(String.format("Exception happened while inserting record into Mongo DB rolling back the transaction " +
            "and cause of exception is: %s", exception));
} finally {
    clientSession.close();
}
Below are the transaction options I am using:
TransactionOptions transactionOptions = TransactionOptions.builder()
        .readConcern(ReadConcern.LOCAL)
        .writeConcern(WriteConcern.W1)
        .build();
Below is the MongoClient method with MongoClientOptions; I am taking the MongoDB connection string as input to this method:
public MongoClient getTransactionConnection(String connectionString) {
MongoClientOptions.Builder mongoClientOptions = new MongoClientOptions.Builder()
.readConcern(ReadConcern.LOCAL)
.writeConcern(WriteConcern.W1)
.readPreference(ReadPreference.primary())
.serverSelectionTimeout(120000)
.maxWaitTime(120000)
.connectionsPerHost(10)
.connectTimeout(120000);
MongoClientURI uri = new MongoClientURI(connectionString, mongoClientOptions);
return new MongoClient(uri);
}
Up to here it is good: it is inserting data into three different collections under the specified database. But when I try a negative scenario, throwing an exception in the try block, it should ideally roll back the data for that particular client session if any error happens.
I am throwing the exception using a count variable that is incremented on each iteration; when the count value equals 1, I throw an exception, which should abort the transaction and roll back any data already written to the database. What I am seeing instead is that it writes to one of the collections, then throws the exception and stops the program, but it does not actually roll back the data written to that collection. I am trying something like this:
ClientSession clientSession = mongoClient.startSession();
int count = 0;
try {
    clientSession.startTransaction(transactionOptions);
    for (Tuple3<String, String, String> value : insertValues) {
        MongoCollection<Document> collection = mongoClient
                .getDatabase(value.f0)
                .getCollection(value.f1);
        Document data = Document.parse(value.f2);
        log.info(String.format("Inserting data into database %s and collection is %s", value.f0, value.f1));
        collection.insertOne(clientSession, data);
        if (count == 1) {
            throw new MongoException("Aborting transaction.....");
        }
        count++;
        clientSession.commitTransaction();
    }
} catch (MongoCommandException | MongoWriteException exception) {
    clientSession.abortTransaction();
    log.error(String.format("Exception happened while inserting record into Mongo DB rolling back the transaction " +
            "and cause of exception is: %s", exception));
} finally {
    clientSession.close();
}
I am not sure where I am going wrong. I am using MongoDB version 4.0 deployed using the Azure Cosmos DB API for MongoDB. Please help me resolve this issue; thanks in advance.
Cosmos DB does not have transaction support outside of a single partition (shard) of a single collection. This limitation exists regardless of API in use (in your case, MongoDB API). This is why you're not seeing the behavior you're expecting. Note: this is mentioned in the Cosmos DB MongoDB compatibility docs.
You'll need to come up with your own implementation for managing data consistency within your app.
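If you stay on the Cosmos DB MongoDB API, one hand-rolled option is best-effort compensation: assign the _id values yourself, remember what was written, and delete those documents again when a later insert fails. This is only a sketch, reusing the mongoClient, insertValues and Tuple3 names from the question (assuming Tuple3 is Flink's tuple type with a three-argument constructor); it gives no isolation and is not a substitute for a real transaction:

import java.util.ArrayList;
import java.util.List;

import org.bson.Document;
import org.bson.types.ObjectId;

import com.mongodb.MongoException;
import com.mongodb.client.MongoCollection;

List<Tuple3<String, String, ObjectId>> inserted = new ArrayList<>();
try {
    for (Tuple3<String, String, String> value : insertValues) {
        MongoCollection<Document> collection = mongoClient
                .getDatabase(value.f0)
                .getCollection(value.f1);
        Document data = Document.parse(value.f2);
        ObjectId id = new ObjectId();
        data.put("_id", id);                       // known id so the write can be undone
        collection.insertOne(data);
        inserted.add(new Tuple3<>(value.f0, value.f1, id));
    }
} catch (MongoException exception) {
    // "Rollback": best-effort compensating deletes for everything written so far
    for (Tuple3<String, String, ObjectId> entry : inserted) {
        mongoClient.getDatabase(entry.f0)
                .getCollection(entry.f1)
                .deleteOne(new Document("_id", entry.f2));
    }
    log.error("Insert failed, issued compensating deletes", exception);
}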
A cron job is being used to fire this script off once a day. When the script runs, it seems to work as expected: the code builds a map, iterates over that map, creates points which are added to a batch, and finally writes those batched points to InfluxDB. I can connect to InfluxDB, query my database, and see that the points were added. I am using influxdb-java 2.2.
The issue I am having is that when InfluxDB is restarted, all of my data is removed. The database still exists and the series still exist; however, all of the points/rows are gone (each table is empty). My database is not the only database; there are several others, and those databases are restored correctly. My guess is that the transaction is not being finalized. I am not aware of a way to force a flush and ensure that my points are persisted. I tried adding:
influxDB.write(batchPoints);
influxDB.disableBatch(); // calls this.batchProcessor.flush() in InfluxDBImpl.java
This was an attempt to force a flush, but it didn't work as expected. I am using InfluxDB 0.13.x:
InfluxDB influxDB = InfluxDBFactory.connect(host, user, pass);
String dbName = "dataName";
influxDB.createDatabase(dbName);

BatchPoints batchPoints = BatchPoints
        .database(dbName)
        .tag("async", "true")
        .retentionPolicy("default")
        .consistency(ConsistencyLevel.ALL)
        .build();

for (Tags type : Tags.values()) {
    List<LinkedHashMap<String, Object>> myList = this.trendsMap.get(type.getDisplay());
    if (myList != null) {
        for (LinkedHashMap<String, Object> data : myList) {
            Point point = null;
            long time = (long) data.get("time");
            if (data.get("date").equals(this.sdf.format(new Date()))) {
                time = System.currentTimeMillis();
            }
            point = Point.measurement(type.getDisplay())
                    .time(time, TimeUnit.MILLISECONDS)
                    .field("count", data.get("count"))
                    .field("date", data.get("date"))
                    .field("day_of_week", data.get("day_of_week"))
                    .field("day_of_month", data.get("day_of_month"))
                    .build();
            batchPoints.point(point);
        }
    }
}
influxDB.write(batchPoints);
Can you upgrade InfluxDB to 0.11.0? There have been many important changes since then and it would be best to test against that.
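Whichever version you end up testing against, it can also help to verify the write from the Java side immediately after it happens, so the cron job fails loudly if nothing was actually persisted. A rough sketch using the influxdb-java objects already in the question (Tags, getDisplay(), dbName and batchPoints come from that code; the exact shape of QueryResult is an assumption on my part):

influxDB.write(batchPoints);

// Read one of the measurements back to confirm the points actually landed.
String measurement = Tags.values()[0].getDisplay();
QueryResult result = influxDB.query(
        new Query("SELECT COUNT(\"count\") FROM \"" + measurement + "\"", dbName));
if (result.getError() != null
        || result.getResults().isEmpty()
        || result.getResults().get(0).getSeries() == null) {
    throw new IllegalStateException("InfluxDB write could not be verified: " + result.getError());
}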
I have some method in my DAO class:
public void insertAVAYAcmCDRs(List<AvayaCmCdr> cdrList) {
    AvayaCmCdr aCdrList1 = null;
    try {
        em.getTransaction().begin();
        for (AvayaCmCdr aCdrList : cdrList) {
            aCdrList1 = aCdrList;
            em.persist(aCdrList);
        }
        em.getTransaction().commit();
        em.clear();
    } catch (Exception e) {
        logger.log(Level.INFO, "Exception in task time={0}. Exception message = {1}.",
                new Object[]{aCdrList1.getDate(), e.getMessage()});
    }
}
I tried to save all of the entities in the list to the DB. But the DB has a unique index, so it does not allow inserting duplicate rows. That works normally on the DB side, but I get an error in Java:
a different object with the same identifier value was already associated with the session
I get this error on the second step of the cycle. I printed the object and found the duplicate in the DB.
I want to ignore this error and continue inserting data, or handle the error somehow: if a row is already in the database, I want to skip it and continue inserting.
Why are you assigning aCdrList1 = aCdrList? Is there any specific reason?
You can save the aCdrList object. Use one of the following:
em.merge(aCdrList);
or, if you are working with a Hibernate Session rather than a JPA EntityManager:
session.saveOrUpdate(aCdrList);
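Alternatively, if the goal is simply to skip the duplicate rows and keep inserting the rest, you can persist each entity in its own transaction and clear the persistence context between rows, so one failing insert (or an entity already associated with the session) cannot break the whole batch. A rough sketch along the lines of the original method, assuming the same em and logger fields (PersistenceException is the JPA exception the provider wraps constraint violations in):

public void insertAVAYAcmCDRs(List<AvayaCmCdr> cdrList) {
    for (AvayaCmCdr cdr : cdrList) {
        try {
            em.getTransaction().begin();
            em.persist(cdr);
            em.getTransaction().commit();
        } catch (PersistenceException e) {
            // Duplicate (or other constraint) violation: roll back this row and move on
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            logger.log(Level.INFO, "Skipping CDR with date={0}. Exception message = {1}.",
                    new Object[]{cdr.getDate(), e.getMessage()});
        } finally {
            em.clear(); // detach everything so a failed row cannot linger in the session
        }
    }
}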
When I delete my Neo4j database after my tests like this:
public static final DatabaseOperation clearDatabaseOperation = new DatabaseOperation() {
    @Override
    public void performOperation(GraphDatabaseService db) {
        // This is deprecated on the GraphDatabaseService interface,
        // but the alternative is not supported by the implementation (RestGraphDatabase)
        for (Node node : db.getAllNodes()) {
            for (Relationship relationship : node.getRelationships()) {
                relationship.delete();
            }
            boolean notTheRootNode = node.getId() != 0;
            if (notTheRootNode) {
                node.delete();
            }
        }
    }
};
When querying the database through an AJAX search (i.e. searching on the now-empty database), it returns an internal 500 error:
localhost:9000/search-results?keywords=t 500 Internal Server Error
197ms
However, if I delete the database manually like this:
start r=relationship(*) delete r;
start n=node(*) delete n;
no exception is thrown.
It's most likely an issue with my code at a lower level in the call and return. I'm just wondering why the error occurs in only one of the scenarios above and not both.
Use Cypher. You should probably state more clearly that you are using the REST graph database.
Are you querying after the deletion or during it?
Please check your logs in data/graph.db/messages.log and data/log/console.log to find the cause of the error. Perhaps you can also look at the response body of the HTTP 500 request.
Going by your error, I guess your data is getting corrupted after deletion. I have used code like yours to delete the nodes, except that I put the iteration in a transaction and shut down the database after the operation, e.g.:
Transaction _tx = _db.beginTx();
try {
    for (/* your conditions */) {
        // your code
    }
    _tx.success();
} catch (Exception e) {
    _logger.error(e.getMessage());
} finally {
    _tx.finish();
    _db.shutdown();
    graphDbFactory.cleanUp();
}
Hope it will work for you.
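For reference, this is roughly what the deletion loop from the question looks like when wrapped that way. This is only a sketch, using the same pre-2.0 API (tx.success()/tx.finish()) as above, with the shutdown included only because this is test cleanup:

Transaction tx = db.beginTx();
try {
    for (Node node : db.getAllNodes()) {
        for (Relationship relationship : node.getRelationships()) {
            relationship.delete();
        }
        if (node.getId() != 0) {   // keep the (pre-2.0) reference node
            node.delete();
        }
    }
    tx.success();
} catch (Exception e) {
    // log it; without tx.success() the transaction rolls back on finish()
} finally {
    tx.finish();
    db.shutdown();
}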
I have verified this multiple times using Appstats. When the code below is NOT wrapped in a transaction, JDO performs two datastore reads and one write (3 RPCs) at a cost of 240. Not just the first time, but every time, even though it is accessing the same record each time and hence should be pulling it from the cache. However, when I wrap the code in a transaction (the first snippet below), it makes 4 RPCs: begin transaction, get, put, and commit. Of these, only the get is billed as a datastore read, so the overall cost is 70.
If it's pulling it from the cache, why would it bill only for a read? It would seem that it would bill for a write, not a read. Could App Engine be billing me the same amount for non-transactional cache reads as it does for datastore reads? Why?
This is the code WITH transaction:
PersistenceManager pm = PMF.getManager();
Transaction tx = pm.currentTransaction();
String responsetext = "";
try {
tx.begin();
Key userkey = obtainUserKeyFromCookie();
User u = pm.getObjectById(User.class, userkey);
Key mapkey = obtainMapKeyFromQueryString();
// this is NOT a java.util.Map, just FYI
Map currentmap = pm.getObjectById(Map.class, mapkey);
Text mapData = currentmap.getMapData(); // mapData is JSON stored in the entity
Text newMapData = parseModifyAndReturn(mapData); // transform the map
currentmap.setMapData(newMapData); // mutate the Map object
tx.commit();
responsetext = "OK";
} catch (JDOCanRetryException jdoe) {
// log jdoe
responsetext = "RETRY";
} catch (Exception e) {
// log e
responsetext = "ERROR";
} finally {
if (tx.isActive()) {
tx.rollback();
}
pm.close();
}
resp.getWriter().println(responsetext);
This is the code WITHOUT the transaction:
PersistenceManager pm = PMF.getManager();
String responsetext = "";
try {
    Key userkey = obtainUserKeyFromCookie();
    User u = pm.getObjectById(User.class, userkey);
    Key mapkey = obtainMapKeyFromQueryString();
    // this is NOT a java.util.Map, just FYI
    Map currentmap = pm.getObjectById(Map.class, mapkey);
    Text mapData = currentmap.getMapData(); // mapData is JSON stored in the entity
    Text newMapData = parseModifyAndReturn(mapData); // transform the map
    currentmap.setMapData(newMapData); // mutate the Map object
    responsetext = "OK";
} catch (Exception e) {
    // log e
    responsetext = "ERROR";
} finally {
    pm.close();
}
resp.getWriter().println(responsetext);
With the transaction, the PersistenceManager can know that the caches are valid throughout the processing of that code. Without the transaction, it cannot (it doesn't know whether some other action has come in behind its back and changed things) and so must validate the cache's contents against the DB tables. Each time it checks, it needs to create a transaction to do so; that's a feature of the DB interface itself, where any action that's not in a transaction (with a few DB-specific exceptions) will have a transaction automatically added.
In your case, you should have a transaction anyway, because you want to have a consistent view of the database while you do your processing. Without that, the mapData could be modified by another operation while you're in the middle of working on it and those modifications would be silently lost. That Would Be Bad. (Well, probably.) Transactions are the cure.
(You should also look into using AOP for managing the transaction wrapping; that's enormously easier than writing all that transaction management code yourself each time. OTOH, it can add a lot of complexity to deployment until you get things right, so I could understand not following this piece of advice…)
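If full AOP feels like overkill, even a small hand-rolled wrapper removes most of that boilerplate. A minimal sketch only: the PmWork interface and inTransaction helper are hypothetical, not App Engine or JDO APIs, and the PMF factory is reused from the question:

// Hypothetical callback interface for work that needs a PersistenceManager.
interface PmWork<T> {
    T run(PersistenceManager pm) throws Exception;
}

// Centralizes the begin/commit/rollback/close boilerplate in one place.
static <T> T inTransaction(PmWork<T> work) throws Exception {
    PersistenceManager pm = PMF.getManager();
    Transaction tx = pm.currentTransaction();
    try {
        tx.begin();
        T result = work.run(pm);
        tx.commit();
        return result;
    } finally {
        if (tx.isActive()) {
            tx.rollback(); // only reached if commit was never called or failed
        }
        pm.close();
    }
}

The servlet code then shrinks to a single inTransaction(...) call containing the get/modify/put logic, and the retry and rollback handling lives in one place.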