Deleting vertices from Titan leads to inconsistent read behavior. I'm testing this on a single machine running Cassandra; here's my conf.properties:
storage.backend=cassandra
storage.hostname=localhost
storage.cassandra.keyspace=test
The following method deletes the appropriate vertex:
public void deleteProfile(String uuid, String puuid) {
for(Person person : this.graph.getVertices("uuid", uuid, Person.class)) {
if (person != null) {
for (Profile profile : this.graph.getVertices("uuid", puuid, Profile.class)) {
person.removeProfile(profile);
graph.removeVertex(profile.asVertex());
}
}
}
this.graph.getBaseGraph().commit();
}
When the following method gets called it returns two different sets of results:
public Iterable<ProfileImpl> getProfiles(String uuid) {
List<ProfileImpl> profiles = new ArrayList<>();
for(Person person : this.graph.getVertices("uuid", uuid, Person.class)) {
if (person != null) {
for (Profile profile : person.getProfiles()) {
profiles.add(profile.toImpl());
}
}
}
return profiles;
}
One result is as expected: it does not contain the deleted profile. However, when I run it enough times, it sometimes contains one extra profile: the one that was deleted.
Attempting to delete the same vertex again shows that no vertex exists with that 'uuid'; the iterator's hasNext() returns false.
After the program is restarted, however, it never returns the deleted vertex. How can I fix this inconsistent behavior?
The problem is that on some threads, transactions had already been opened for the graph. Reading from the graph opens a transaction, even if nothing is changed. These transactions need to be closed in order to ensure that the behavior is consistent.
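For instance, a minimal sketch of what that looks like in the reader (this assumes the question's wrapper exposes the underlying Blueprints-style TitanGraph with a rollback() method, as graph.getBaseGraph().commit() suggests):

public Iterable<ProfileImpl> getProfiles(String uuid) {
    // Close any transaction this thread left open from an earlier read,
    // so this call starts a fresh transaction and sees the latest commits.
    this.graph.getBaseGraph().rollback();
    List<ProfileImpl> profiles = new ArrayList<>();
    for (Person person : this.graph.getVertices("uuid", uuid, Person.class)) {
        if (person != null) {
            for (Profile profile : person.getProfiles()) {
                profiles.add(profile.toImpl());
            }
        }
    }
    return profiles;
}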
According to http://s3.thinkaurelius.com/docs/titan/0.9.0-M2/tx.html#tx-config you should set checkInternalVertexExistence.
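A rough sketch of that configuration (the exact builder method signature is an assumption; check it against your Titan version):

// Start a transaction that double-checks vertex existence against the
// storage backend instead of trusting potentially stale caches.
TitanTransaction tx = graph.buildTransaction()
        .checkInternalVertexExistence(true)
        .start();
try {
    // ... reads and writes via tx ...
    tx.commit();
} catch (Exception e) {
    tx.rollback();
    throw e;
}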
Related
I wrote a Java function to get the next turn from a database (PostgreSQL) table. After getting the next turn, the record is updated so no other user can get the same turn. If two users request the next turn at the same time, there is a chance that both get the same turn. So the first idea is to synchronize the function so that only one user can request a turn at a time. But there are several departments: two users from the same department cannot request a turn at the same time, while two users from different departments can without any issue.
This is a simplified / pseudocode version of the function:
private DailyTurns callTurnLocal(int userId)
{
try {
DailyTurns turn = null;
DailyTurns updateTurn = null;
//get next turn for user (runs a query to the database)
turn = getNextTurnForUser(userId);
//found turn for user
if (turn != null)
{
//copy information from original record object to new one
updateTurn = turn;
//change status to 'turn called'
updateTurn.setTurnStatusId(TURN_STATUS_CALLED);
//add time for the event
updateTurn.setEventDate(new Date());
//update user that took the turn
updateTurn.setUserId(userId);
//save new record in the DB
updateTurn = save(updateTurn);
}
return updateTurn;
}
catch (Exception e)
{
logger.error( "Exception: " + e.getMessage(), e );
return null;
}
}
I'm aware that I can synchronize the entire function, but that would slow things down when two or more threads from users in different departments want to get the next turn. How can I add synchronization per department? Or is this something I can achieve with a function in the DB?
A more obvious solution would be to keep a cache such as a ConcurrentHashMap where the keys are the departments.
This won't lock the entire object, and different threads can operate concurrently on different departments.
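A minimal sketch of that idea (the departmentId parameter is an assumption; obtain it however your model provides it):

import java.util.concurrent.ConcurrentHashMap;

public class TurnService {
    // One lock object per department, created lazily and atomically.
    private final ConcurrentHashMap<Integer, Object> departmentLocks =
            new ConcurrentHashMap<>();

    public DailyTurns callTurn(int userId, int departmentId) {
        Object lock = departmentLocks.computeIfAbsent(departmentId, id -> new Object());
        // Only one thread per department passes this point at a time;
        // threads for other departments proceed concurrently.
        synchronized (lock) {
            return callTurnLocal(userId);
        }
    }

    private DailyTurns callTurnLocal(int userId) {
        // as in the question
        return null;
    }
}

Note that this only serializes threads within a single JVM. If the application can run on several servers, a database-side lock (for example, selecting the next-turn row with SELECT ... FOR UPDATE inside a transaction) gives the same per-department mutual exclusion across all of them.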
I am using Reactive Redis, where I am trying to use Redis as a cache in front of the database. I check whether the value is present in the cache: if it is, return it; otherwise query the database and, when the result comes back, store it in the cache and return it.
However, even if the value is present in Redis, it still queries the database every time.
public Mono<User> getUser(String email) {
return reactiveRedisOperation.opsForValue().get("tango").switchIfEmpty(
// Always getting into this block (for breakpoint) :(
queryDatabase().flatMap(it -> {
return reactiveRedisOperation.opsForValue().set("tango", it, Duration.ofSeconds(3600)).then(Mono.just(it));
})
);
}
private Mono<User> queryDatabase() {
return Mono.just(new User(2L,"test","test","test","test","test",true,"test","test","test"));
}
But the call always hits the database, even if the value is present in Redis. What am I doing wrong here?
Based on this answer, you can try Mono.defer:
public Mono<User> getUser(String email) {
    return reactiveRedisOperation.opsForValue().get("tango")
        .switchIfEmpty(Mono.defer(() ->
            // The supplier is only invoked when the cache lookup comes back empty.
            queryDatabase().flatMap(it ->
                reactiveRedisOperation.opsForValue()
                    .set("tango", it, Duration.ofSeconds(3600))
                    .then(Mono.just(it)))));
}
UPDATE:
I don't have much experience with Mono. The answer I pointed to explains it:
... computation was already triggered at the point when we start composing our Mono types. To prevent unwanted computations we can wrap our future into a deferred evaluation:
... is trapped in a lazy supplier and is scheduled for execution only when it will be requested.
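In other words (an illustration with a hypothetical cacheLookup Mono, not the original code): the argument to switchIfEmpty is built while the pipeline is assembled, so any work done while building it happens on every call; Mono.defer postpones that work until the empty case actually occurs.

// Eager: queryDatabase() is invoked while assembling the pipeline,
// even when the cache lookup will succeed.
cacheLookup.switchIfEmpty(queryDatabase());

// Lazy: the supplier is invoked only if cacheLookup completes empty.
cacheLookup.switchIfEmpty(Mono.defer(() -> queryDatabase()));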
This is my first attempt to implement an Entity Component System in my project, and I'm not sure how some of its mechanics work. For example, how do I remove an entity? Since all systems use the entity list throughout the whole game loop, every attempt to delete an element of that list is doomed to a ConcurrentModificationException. Following this advice, I tried setting a "toRemove" flag on entities and checking for it every time a system iterates through the list:
public class DrawingSystem extends System {
public DrawingSystem(List<Entity> entityList) {
super(entityList);
}
public void update(Batch batch) {
for (Entity entity : entityList) {
removeIfNecessary(entity);
//code
}
}
public void removeIfNecessary(Entity entity){
if(entity.toRemove){
entityList.remove(entity);
}
}
}
but that didn't help get rid of the exception. I'm sure there is an elegant solution to this problem, since this design pattern is broadly used, but I'm just not aware of it.
Check out iterators:
"Iterators allow the caller to remove elements from the underlying collection during the iteration with well-defined semantics."
https://docs.oracle.com/javase/8/docs/api/index.html?java/util/Iterator.html
Iterator<Entity> it = entityList.iterator();
while (it.hasNext()) {
Entity entity = it.next();
if (...) {
it.remove();
}
}
You could also store the indices of the entities to remove somewhere outside the list and then remove the dead entities in an extra step after the update/render.
This has the advantage that you do not miss entities in later steps of your update.
Edit: Added code.
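For the extra-step variant described above, a minimal sketch (assuming the toRemove flag from the question and Java 8+):

// After all systems have run, sweep flagged entities in one pass.
// removeIf drives the list's own iterator internally, so it does not
// throw ConcurrentModificationException.
entityList.removeIf(entity -> entity.toRemove);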
I'm new to optaplanner, and am hoping to use it to solve the VRPTW problem with pickups and deliveries (VRPTWPD).
I started by taking the VRPTW code from the examples repo. I am trying to add to it to solve my problem. However, I'm unable to return a solution that honors the precedence/vehicle constraints (pickups must be done before deliveries, and both must be done by the same vehicle).
I am consistently returning a solution where the hard score is what I would expect for such a solution (i.e. I can add up all the violations in a small sample problem and see that the hard score matches the penalties I assigned for these violations).
The first approach I tried was following the steps outlined by Geoffrey De Smet here - https://stackoverflow.com/a/19087210/351400
Each Customer has a variable customerType that describes whether it is a pickup (PU) or a delivery (DO). It also has a variable called parcelId that indicates which parcel is either being picked up or delivered.
I added a shadow variable to the Customer named parcelIdsOnboard. This is a HashSet that holds all the parcelIds that the driver has with him when he visits a given Customer.
My VariableListener that keeps parcelIdsOnboard updated looks like this:
public void afterEntityAdded(ScoreDirector scoreDirector, Customer customer) {
if (customer instanceof TimeWindowedCustomer) {
updateParcelsOnboard(scoreDirector, (TimeWindowedCustomer) customer);
}
}
public void afterVariableChanged(ScoreDirector scoreDirector, Customer customer) {
if (customer instanceof TimeWindowedCustomer) {
updateParcelsOnboard(scoreDirector, (TimeWindowedCustomer) customer);
}
}
protected void updateParcelsOnboard(ScoreDirector scoreDirector, TimeWindowedCustomer sourceCustomer) {
Standstill previousStandstill = sourceCustomer.getPreviousStandstill();
Set<Integer> parcelIdsOnboard = (previousStandstill instanceof TimeWindowedCustomer)
? new HashSet<Integer>(((TimeWindowedCustomer) previousStandstill).getParcelIdsOnboard()) : new HashSet<Integer>();
TimeWindowedCustomer shadowCustomer = sourceCustomer;
while (shadowCustomer != null) {
updateParcelIdsOnboard(parcelIdsOnboard, shadowCustomer);
scoreDirector.beforeVariableChanged(shadowCustomer, "parcelIdsOnboard");
shadowCustomer.setParcelIdsOnboard(parcelIdsOnboard);
scoreDirector.afterVariableChanged(shadowCustomer, "parcelIdsOnboard");
shadowCustomer = shadowCustomer.getNextCustomer();
}
}
private void updateParcelIdsOnboard(Set<Integer> parcelIdsOnboard, TimeWindowedCustomer customer) {
if (customer.getCustomerType() == Customer.PICKUP) {
parcelIdsOnboard.add(customer.getParcelId());
} else if (customer.getCustomerType() == Customer.DELIVERY) {
parcelIdsOnboard.remove(customer.getParcelId());
} else {
// TODO: throw an assertion
}
}
I then added the following Drools rule:
rule "pickupBeforeDropoff"
when
TimeWindowedCustomer((customerType == Customer.DELIVERY) && !(parcelIdsOnboard.contains(parcelId)));
then
System.out.println("precedence violated");
scoreHolder.addHardConstraintMatch(kcontext, -1000);
end
For my example problem I create a total of 6 Customer objects (3 PICKUPS and 3 DELIVERIES). My fleet size is 12 vehicles.
When I run this I consistently get a hard score of -3000, which matches my output, where I see two vehicles being used: one vehicle does all the PICKUPS and one vehicle does all the DELIVERIES.
The second approach I used was to give each Customer a reference to its counterpart Customer object (e.g. the PICKUP Customer for parcel 1 has a reference to the DELIVERY Customer for parcel 1 and vice versa).
I then implemented the following rule to enforce that the parcels be in the same vehicle (note: does not fully implement precedence constraint).
rule "pudoInSameVehicle"
when
TimeWindowedCustomer(vehicle != null && counterpartCustomer.getVehicle() != null && (vehicle != counterpartCustomer.getVehicle()));
then
scoreHolder.addHardConstraintMatch(kcontext, -1000);
end
For the same sample problem this consistently gives a score of -3000 and an identical solution to the one above.
I've tried running both rules in FULL_ASSERT mode. The rule using parcelIdsOnboard does not trigger any exceptions. However, the rule "pudoInSameVehicle" does trigger the following exception (which is not triggered in FAST_ASSERT mode).
The corrupted scoreDirector has no ConstraintMatch(s) which are in excess.
The corrupted scoreDirector has 1 ConstraintMatch(s) which are missing:
I'm not sure why this is corrupted, any suggestions would be much appreciated.
It's interesting that both of these methodologies are producing the same (incorrect) solution. I'm hoping someone will have some suggestions on what to try next. Thanks!
UPDATE:
After diving into the asserts that were being triggered in FULL_ASSERT mode I realized that the problem was with the dependent nature of the PICKUP and DELIVERY Customers. That is, if you make a move that removes the hard penalty on a DELIVERY Customer you also have to remove the penalty associated with the PICKUP Customer. In order to keep these in sync I updated my VehicleUpdatingVariableListener and my ArrivalTimeUpdatingVariableListener to trigger score calculation callbacks on both Customer objects. Here's the updateVehicle method after updating it to trigger score calculation on both the Customer that was just moved and the counterpart Customer.
protected void updateVehicle(ScoreDirector scoreDirector, TimeWindowedCustomer sourceCustomer) {
Standstill previousStandstill = sourceCustomer.getPreviousStandstill();
Integer departureTime = (previousStandstill instanceof TimeWindowedCustomer)
? ((TimeWindowedCustomer) previousStandstill).getDepartureTime() : null;
TimeWindowedCustomer shadowCustomer = sourceCustomer;
Integer arrivalTime = calculateArrivalTime(shadowCustomer, departureTime);
while (shadowCustomer != null && ObjectUtils.notEqual(shadowCustomer.getArrivalTime(), arrivalTime)) {
scoreDirector.beforeVariableChanged(shadowCustomer, "arrivalTime");
scoreDirector.beforeVariableChanged(((TimeWindowedCustomer) shadowCustomer).getCounterpartCustomer(), "arrivalTime");
shadowCustomer.setArrivalTime(arrivalTime);
scoreDirector.afterVariableChanged(shadowCustomer, "arrivalTime");
scoreDirector.afterVariableChanged(((TimeWindowedCustomer) shadowCustomer).getCounterpartCustomer(), "arrivalTime");
departureTime = shadowCustomer.getDepartureTime();
shadowCustomer = shadowCustomer.getNextCustomer();
arrivalTime = calculateArrivalTime(shadowCustomer, departureTime);
}
}
This solved the score corruption issue I had with my second approach, and, on a small sample problem, produced a solution that satisfied all the hard constraints (i.e. the solution had a hard score of 0).
I next tried to run a larger problem (~380 Customers), but the solutions are returning very poor hard scores. I tried searching for a solution for 1 min, 5 mins, and 15 mins. The score seems to improve linearly with runtime. But, at 15 minutes, the solution is still so bad that it seems like it would need to run for at least an hour to produce a viable solution.
I need this to run in 5-10 minutes at the most.
I learned about filter selection. My understanding is that you can run a function to check whether the move you are about to make would break a known hard constraint, and if it would, that move is skipped.
This means that you don't have to re-run score calculations or explore branches that you know will not be fruitful. For example, in my problem a Customer should never be moved to a Vehicle unless its counterpart is assigned to that Vehicle or not assigned to a Vehicle at all.
Here is the filter I implemented to check for that. It only runs for ChangeMoves, but I suspect I need to implement a similar function for SwapMoves as well.
public class PrecedenceFilterChangeMove implements SelectionFilter<ChangeMove> {
@Override
public boolean accept(ScoreDirector scoreDirector, ChangeMove selection) {
TimeWindowedCustomer customer = (TimeWindowedCustomer)selection.getEntity();
if (customer.getCustomerType() == Customer.DELIVERY) {
if (customer.getCounterpartCustomer().getVehicle() == null) {
return true;
}
return customer.getVehicle() == customer.getCounterpartCustomer().getVehicle();
}
return true;
}
}
Adding this filter immediately led to worse scores. That makes me think I implemented the function incorrectly, though it's not clear to me why it is incorrect.
Update 2:
A co-worker pointed out the problem with my PrecedenceFilterChangeMove: it compared the customer's current vehicle rather than the move's destination vehicle (selection.getToPlanningValue()). The correct version is below. I've also included the PrecedenceFilterSwapMove implementation. Together, these have enabled me to find a solution to the problem where no hard constraints are violated in ~10 minutes. There are a couple of other optimizations I think I can make to reduce this even further.
I will post another update if those changes are fruitful. I'd still love to hear from someone in the optaplanner community about my approach and whether they think there are better ways to model this problem!
PrecedenceFilterChangeMove
@Override
public boolean accept(ScoreDirector scoreDirector, ChangeMove selection) {
TimeWindowedCustomer customer = (TimeWindowedCustomer)selection.getEntity();
if (customer.getCustomerType() == Customer.DELIVERY) {
if (customer.getCounterpartCustomer().getVehicle() == null) {
return true;
}
return selection.getToPlanningValue() == customer.getCounterpartCustomer().getVehicle();
}
return true;
}
PrecedenceFilterSwapMove
@Override
public boolean accept(ScoreDirector scoreDirector, SwapMove selection) {
TimeWindowedCustomer leftCustomer = (TimeWindowedCustomer)selection.getLeftEntity();
TimeWindowedCustomer rightCustomer = (TimeWindowedCustomer)selection.getRightEntity();
if (rightCustomer.getCustomerType() == Customer.DELIVERY || leftCustomer.getCustomerType() == Customer.DELIVERY) {
return rightCustomer.getVehicle() == leftCustomer.getCounterpartCustomer().getVehicle() ||
leftCustomer.getVehicle() == rightCustomer.getCounterpartCustomer().getVehicle();
}
return true;
}
There's mixed pickup and delivery VRP experimental code here, which works. We don't have a polished out-of-the-box example yet, but it's on the long-term roadmap.
I have this code that works just fine without HR:
protected Entity createEntity(Key key, Map<String, Object> props){
Entity result = null;
try {
Entity e = new Entity(key);
Iterator<Map.Entry<String, Object>> it = props.entrySet().iterator();
while (it.hasNext()) {
Map.Entry<String, Object> entry = it.next();
String propName = entry.getKey();
Object propValue = entry.getValue();
setProperty(e, propName, propValue);
}
key = _ds.put(e);
if (key != null)
result = _ds.get(key);
} catch (EntityNotFoundException e1) {
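// entity not found: fall through and return null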
}
return result;
}
This is just a simple method whose function is to create a new Entity out of a given key, or return null otherwise. This works fine without the HR configuration in JUnit; however, when I configure it, I always get an error where _ds.get(key) can't find the key, throwing:
EntityNotFoundException: No entity was found matching the key:
Specifically when doing:
while(it.hasNext()){
// stuff
createEntity(key, map);
// stuff
}
I assume the problem in my code is that it tries to fetch the entity too soon. If that is the case, how can I deal with this without resorting to Memcache or anything like that?
Update:
When createEntity is executed within a transaction, it fails. However, if I move it outside of the transaction, it fails miserably. I need to be able to run within a transaction, since my higher-level API puts lots of objects that need to be there as a group.
Update:
I followed Strom's advice, but I found a weird side effect: if I don't do a _ds.get(key) in the method, my PreparedQuery countEntities fails. Whereas if I add a _ds.get(key), even if I don't do anything with the Entity returned from that get, countEntities returns the expected count. Why is that?
You try to create a new entity and then read back that entity within the same transaction? Can't be done.
Queries and gets inside transactions see a single, consistent snapshot of the datastore that lasts for the duration of the transaction. 1
In a transaction, all reads reflect the current, consistent state of the Datastore at the time the transaction started. This does not include previous puts and deletes inside the transaction. Queries and gets inside a transaction are guaranteed to see a single, consistent snapshot of the Datastore as of the beginning of the transaction. 2
This consistent snapshot view also extends to reads after writes inside transactions. Unlike with most databases, queries and gets inside a Datastore transaction do not see the results of previous writes inside that transaction. Specifically, if an entity is modified or deleted within a transaction, a query or get returns the original version of the entity as of the beginning of the transaction, or nothing if the entity did not exist then. 2
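A small sketch of that rule against the low-level datastore API (the kind and key names are hypothetical):

import com.google.appengine.api.datastore.*;

DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
Transaction txn = ds.beginTransaction();
try {
    Key key = KeyFactory.createKey("Thing", "id1"); // hypothetical key
    ds.put(txn, new Entity(key));                   // write inside the transaction
    ds.get(txn, key); // throws EntityNotFoundException if Thing("id1") did not
                      // exist before the transaction began: the get sees the
                      // snapshot from the start of the transaction, not the put
    txn.commit();
} catch (EntityNotFoundException e) {
    txn.rollback();
}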
PS. Your assumption is wrong; it's impossible to fetch an entity by key "too soon". Fetches by key are strongly consistent.
Also, why do you need to retrieve the entity again anyway? You just put it in the datastore yourself, so you already have its contents.
So change this part:
key = _ds.put(e);
if (key != null)
result = _ds.get(key);
To this:
key = _ds.put(e);
if (key != null)
result = e; // key.equals(e.getKey()) == true
Welcome to the GAE environment; try to read it a few more times before you give up:
int counter = 0;
Entity result = null;
while (counter < NUMBER_OF_TRIES) {
    try {
        // calling storage or any other non-reliable thing
        result = _ds.get(key);
        break; // escape on success
    } catch (EntityNotFoundException e) {
        // log the exception, then retry
        counter++;
    }
}
Important note from the Google documentation: "the rate at which you can write to the same entity group is limited to 1 write to the entity group per second."
Source: https://developers.google.com/appengine/docs/java/gettingstarted/usingdatastore