RxJava creates a loop when retrieving then updating an item from database - java

I have a challenge with RxJava: there's an observable that retrieves an item from the database. I subscribe to this observable, and on successful retrieval of the item I update it. The problem is that immediately after updating the item, my observable starts emitting again and the update method is called again, creating a loop.
Sample code:
mOrderRepository.getOrder(orderId)
        .subscribeOn(mSchedulerProvider.io())
        .observeOn(mSchedulerProvider.ui())
        .subscribe((Order order) -> {
            // I calculate the amount due after payment, then update this order
            order.setAmountDue(amountDue);
            mOrderRepository.updateOrder(order);
        });

If getOrder(orderId) returns a Flowable<Order> that re-emits the Order on each update, then the update should be executed within the context of a separate Single that can be used to update the item for the given orderId.
public static final Object UNIT = new Object(); // avoid emitting `null`

public void updateOrder(final long orderId, final long amountDue) {
    Single.fromCallable(() -> UNIT)
            .subscribeOn(Schedulers.io())
            .flatMap(ignored -> getOrder(orderId).firstOrError()) // <-- convert the Flowable to a Single
            .doOnSuccess(order -> {
                order.setAmountDue(amountDue);
                mOrderRepository.updateOrder(order);
            })
            .subscribe();
}
Or something similar.

If you get an Order object every time it is updated in the database, and every time you get that Order object you update it in the database, it will loop indefinitely. The missing logic should answer the following question: When should the object NOT be updated?
One solution, as @akarnokd suggested, is to limit the retrieval to the first emitted item by specifying take(1):
mOrderRepository.getOrder(orderId)
        .take(1)
        .subscribeOn(mSchedulerProvider.io())
        .observeOn(mSchedulerProvider.ui())
        .subscribe((Order order) -> {
            order.setAmountDue(amountDue);
            mOrderRepository.updateOrder(order);
        });
However, this may not be the logic you want if the order could legitimately be updated several times from another source. In that case, it may make sense to compare whether the received order's amountDue (or whatever other properties are relevant) differs from the updated amount, and only update the order if it does.
mOrderRepository.getOrder(orderId)
        .subscribeOn(mSchedulerProvider.io())
        .observeOn(mSchedulerProvider.ui())
        .subscribe((Order order) -> {
            // assuming `amountDue` has already been defined
            if (!order.getAmountDue().equals(amountDue)) {
                order.setAmountDue(amountDue);
                mOrderRepository.updateOrder(order);
            }
        });

Related

Multithreading problem when trying to select and insert in a single method

Hi, friends!
I'm working with Play + Java + Ebean.
I've got a problem with parallel threads when trying to select and insert values into the DB in one method.
First, I check the existence of a device in my DB (PostgreSQL) by id.
If I haven't found the device by id, I try to insert it into the DB.
Then I return the Device object.
All works fine except for the situation when two HTTP requests asynchronously send me the same device id. First, both of them select from the DB and find nothing. Then one inserts the value and the second fails with io.ebean.DuplicateKeyException.
I've tried synchronizing the method. It works, but I don't like this solution: if I do this, many requests will be put in a queue. I want to stay parallel and synchronize only when two or more requests carry the same id.
Another way to solve the problem is to write a query with INSERT ... WHERE NOT EXISTS (SELECT ...) RETURNING. But that isn't an object style, and it hardcodes several operations of the same type.
public CompletionStage<Device> getDevice(String deviceID, String ipAddress) {
    return CompletableFuture.supplyAsync(() ->
            SecurityRepository.findDeviceByDeviceID(deviceID))
        .thenApplyAsync(curDevice -> curDevice
            .orElseGet(() -> {
                List<Device> devices =
                        SecurityRepository.listDevicesByIpAddress(ipAddress);
                if (devices.size() >= 10) {
                    for (Device device : devices)
                        device.delete();
                }
                Device newDevice = new Device();
                newDevice.setDeviceID(deviceID);
                newDevice.ipAddress = ipAddress;
                newDevice.save(); // here is the problem
                return newDevice;
            })
        );
}
I want to synchronize this method if and only if deviceID is the same.
Have you any suggestions to this problem?
Thank you.
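For what it's worth, one way to get that per-id synchronization (a sketch, not from the thread; it reuses SecurityRepository from the question, and the lock-map idea assumes a single JVM) is to lock on a canonical object per deviceID:

import java.util.concurrent.ConcurrentHashMap;

public class DeviceLookup {
    private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();

    // Requests sharing a deviceID serialize on the same lock object;
    // requests with different ids stay fully parallel.
    public Device findOrCreate(String deviceID, String ipAddress) {
        Object lock = locks.computeIfAbsent(deviceID, id -> new Object());
        synchronized (lock) {
            return SecurityRepository.findDeviceByDeviceID(deviceID)
                    .orElseGet(() -> createDevice(deviceID, ipAddress));
        }
    }

    private Device createDevice(String deviceID, String ipAddress) {
        Device newDevice = new Device();
        newDevice.setDeviceID(deviceID);
        newDevice.ipAddress = ipAddress;
        newDevice.save();
        return newDevice;
    }
}

Note the lock map grows by one entry per distinct id, so evicting entries safely takes extra care; in a multi-node deployment, a database-level unique constraint plus catching DuplicateKeyException remains the safe fallback.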

DynamoDB: ConditionalCheckFailedException

I am dealing with a ConditionalCheckFailedException and I am not exactly sure which condition is failing the check. When I open up the debugger and examine the exception variable, I can't find any useful info.
Below is my Java Dynamo Client code. I am trying to make a conditional write to DynamoDB using DynamoDBSaveExpression when:
The client date in the table comes before the current client date that I am trying to write (stored as EPOCH time)
An entry does not exist in the table (I check for the FEEDBACK_KEY as it is the primary key in my table)
When I write the first entry into the table, it works, but on updates when an entry exists, I get the ConditionalCheckFailedException. Any ideas?
final DynamoDBSaveExpression expression = new DynamoDBSaveExpression();
final Map<String, ExpectedAttributeValue> expectedAttributes =
        ImmutableMap.<String, ExpectedAttributeValue>builder()
                .put(ThemesMessageEligibility.TableKeys.CLIENT_DATE,
                        new ExpectedAttributeValue()
                                .withComparisonOperator(ComparisonOperator.LT)
                                .withValue(new AttributeValue().withN(clientDate)))
                .put(ThemesMessageEligibility.TableKeys.FEEDBACK_KEY,
                        new ExpectedAttributeValue(false))
                .build();

expression.setExpected(expectedAttributes);
expression.setConditionalOperator(ConditionalOperator.OR);

// Conditional write if the clientDate is after the stored client date
try {
    dynamoMapper.save(themesFeedbackComponentContainer, expression);
} catch (ConditionalCheckFailedException ex) {
    ...
}
I would remove the second condition, or change it so that it conditions on the item existing (new ExpectedAttributeValue(true)). UpdateItem will simply overwrite the existing item if it exists, so it seems the CLIENT_DATE condition is the only one you need.
The API call as written above will only succeed on the first write (that is, when the item does not exist). In retrospect, if you only want the first write to an item to succeed (and fail if the item already exists), the CLIENT_DATE condition is not necessary (as there are no attributes in the existing image to compare against).
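A minimal sketch of the first suggestion (same mapper setup and table keys as above). Note that a comparison against an attribute that does not yet exist also fails the check, so with only this condition the very first write would need separate handling:

// Keep only the CLIENT_DATE condition: the save succeeds whenever the stored
// date is strictly less than the one being written.
final DynamoDBSaveExpression expression = new DynamoDBSaveExpression()
        .withExpectedEntry(
                ThemesMessageEligibility.TableKeys.CLIENT_DATE,
                new ExpectedAttributeValue()
                        .withComparisonOperator(ComparisonOperator.LT)
                        .withValue(new AttributeValue().withN(clientDate)));
try {
    dynamoMapper.save(themesFeedbackComponentContainer, expression);
} catch (ConditionalCheckFailedException ex) {
    // the stored CLIENT_DATE was not older than the incoming one
}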

Cassandra Bulk-Write performance with Java Driver is atrocious compared to MongoDB

I have built an importer for MongoDB and Cassandra. Basically all operations of the importer are the same, except for the last part, where data gets shaped to match the needed Cassandra table schema and the wanted MongoDB document structure. The write performance of Cassandra is really bad compared to MongoDB and I think I'm doing something wrong.
Basically, my abstract importer class loads the data, reads it all out and passes it to the extending MongoDBImporter or CassandraImporter class, which sends the data to the database. One database is targeted at a time - no "dual" inserts to both C* and MongoDB at the same time. The importer is run on the same machine against the same number of nodes (6).
The Problem:
The MongoDB import finished after 57 minutes. I ingested 10,000,000 documents and I expect about the same number of rows for Cassandra. My Cassandra importer has now been running for 2.5 hours and is only at 5,000,000 inserted rows. I will wait for the importer to finish and edit the actual finish time in here.
How I import with Cassandra:
I prepare two statements once before ingesting data. Both statements are UPDATE queries because sometimes I have to append data to an existing list. My table is cleared completely before starting the import. The prepared statements get used over and over again.
PreparedStatement statementA = session.prepare(queryA);
PreparedStatement statementB = session.prepare(queryB);
For every row, I create a BoundStatement and pass that statement to my "custom" batching method:
BoundStatement bs = new BoundStatement(preparedStatement); //either statementA or B
bs = bs.bind();
//add data... with several bs.setXXX(..) calls
cassandraConnection.executeBatch(bs);
With MongoDB, I can insert 1,000 documents (that's the maximum) at a time without problems. For Cassandra, the importer crashes with com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large for just 10 of my statements at some point. I'm using this code to build the batches. By the way, I began with batch sizes of 1,000, 500, 300, 200, 100, 50 and 20 before, but obviously they do not work either. I then set it down to 10 and it threw the exception again. Now I'm out of ideas why it's breaking.
private static final int MAX_BATCH_SIZE = 10;

private Session session;
private BatchStatement currentBatch;
...

@Override
public ResultSet executeBatch(Statement statement) {
    if (session == null) {
        throw new IllegalStateException(CONNECTION_STATE_EXCEPTION);
    }
    if (currentBatch == null) {
        currentBatch = new BatchStatement(Type.UNLOGGED);
    }
    currentBatch.add(statement);
    if (currentBatch.size() == MAX_BATCH_SIZE) {
        ResultSet result = session.execute(currentBatch);
        currentBatch = new BatchStatement(Type.UNLOGGED);
        return result;
    }
    return null;
}
My C* schema looks like this:
CREATE TYPE stream.event (
    data_dbl frozen<map<text, double>>,
    data_str frozen<map<text, text>>,
    data_bool frozen<map<text, boolean>>
);

CREATE TABLE stream.data (
    log_creator text,
    date text,    // date of the timestamp
    ts timestamp,
    log_id text,  // some id
    hour int,     // just the hour of the timestamp
    x double,
    y double,
    events list<frozen<event>>,
    PRIMARY KEY ((log_creator, date, hour), ts, log_id)
) WITH CLUSTERING ORDER BY (ts ASC, log_id ASC);
I sometimes need to add further new events to an existing row. That's why I need a list of UDTs. My UDT contains three maps because the event creators produce different data (key/value pairs of type string/double/boolean). I am aware that the UDTs are frozen and I cannot touch the maps of already-ingested events. That's fine for me; I just sometimes need to add new events that have the same timestamp. I partition on the creator of the logs (some sensor name) as well as the date of the record (e.g. "22-09-2016") and the hour of the timestamp (to distribute data more while keeping related data close together in a partition).
I'm using Cassandra 3.0.8 with the Datastax Java Driver, version 3.1.0 in my pom.
According to What is the batch limit in Cassandra?, I should not increase the batch size by adjusting batch_size_fail_threshold_in_kb in my cassandra.yaml. So... what should I do, and what's wrong with my import?
UPDATE
So I have adjusted my code to run async queries and store the currently running inserts in a list. Whenever an async insert finishes, it is removed from the list. When the list size exceeds a threshold and an error occurred in an earlier insert, the method waits 500 ms at a time until the inserts fall below the threshold. My code now automatically increases the threshold when no insert has failed.
But after streaming 3,300,000 rows, there were 280,000 inserts being processed, with no error so far. That number of concurrently processed inserts looks too high. The 6 Cassandra nodes are running on commodity hardware that is 2 years old.
Is this high number (280,000 for 6 nodes) of concurrent inserts a problem? Should I add a variable like MAX_CONCURRENT_INSERT_LIMIT?
private List<ResultSetFuture> runningInsertList;
private static int concurrentInsertLimit = 1000;
private static int concurrentInsertSleepTime = 500;
...

@Override
public void executeBatch(Statement statement) throws InterruptedException {
    if (this.runningInsertList == null) {
        this.runningInsertList = new ArrayList<>();
    }

    // Sleep while the currently processing number of inserts is too high
    while (concurrentInsertErrorOccured && runningInsertList.size() > concurrentInsertLimit) {
        Thread.sleep(concurrentInsertSleepTime);
    }

    ResultSetFuture future = this.executeAsync(statement);
    this.runningInsertList.add(future);

    Futures.addCallback(future, new FutureCallback<ResultSet>() {
        @Override
        public void onSuccess(ResultSet result) {
            runningInsertList.remove(future);
        }

        @Override
        public void onFailure(Throwable t) {
            concurrentInsertErrorOccured = true;
        }
    }, MoreExecutors.sameThreadExecutor());

    if (!concurrentInsertErrorOccured && runningInsertList.size() > concurrentInsertLimit) {
        concurrentInsertLimit += 2000;
        LOGGER.info(String.format("New concurrent insert limit is %d", concurrentInsertLimit));
    }
    return;
}
After using C* for a bit, I'm convinced you should really use batches only for keeping multiple tables in sync. If you don't need that feature, then don't use batches at all, because you will incur performance penalties.
The correct way to load data into C* is with async writes, with optional backpressure if your cluster can't keep up with the ingestion rate. You should replace your "custom" batching method with something that:
performs async writes
keeps the number of in-flight writes under control
retries when a write times out.
To perform async writes, use the .executeAsync method, which returns a ResultSetFuture object.
To keep the number of in-flight queries under control, collect the ResultSetFuture objects retrieved from .executeAsync in a list, and if the list reaches (ballpark value here) say 1k elements, wait for all of them to finish before issuing more writes. Or you can wait for the first to finish before issuing one more write, just to keep the list full.
And finally, you can check for write failures when you're waiting on an operation to complete. In that case, you could:
write again with the same timeout value
write again with an increased timeout value
wait some amount of time, and then write again with the same timeout value
wait some amount of time, and then write again with an increased timeout value
From 1 to 4 you have increasing backpressure strength. Pick the one that best fits your case.
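For illustration, here is a minimal sketch of bounded async writes (assumed class and field names, not the poster's code): a Semaphore caps the in-flight writes instead of polling a list size.

import java.util.concurrent.Semaphore;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;

public class BoundedWriter {
    private final Session session;
    private final Semaphore inFlight = new Semaphore(1000); // max concurrent writes

    public BoundedWriter(Session session) {
        this.session = session;
    }

    public void write(final Statement statement) throws InterruptedException {
        inFlight.acquire(); // blocks while 1000 writes are already in flight
        ResultSetFuture future = session.executeAsync(statement);
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override
            public void onSuccess(ResultSet rs) {
                inFlight.release();
            }

            @Override
            public void onFailure(Throwable t) {
                inFlight.release();
                // log and/or hand `statement` to a retry queue here
            }
        }, MoreExecutors.sameThreadExecutor());
    }
}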
EDIT after question update
Your insert logic seems a bit broken to me:
I don't see any retry logic
You don't remove the item from the list if it fails
Your while (concurrentInsertErrorOccured && runningInsertList.size() > concurrentInsertLimit) is wrong, because you will sleep only when the number of issued queries is > concurrentInsertLimit, and because of point 2 your thread will just park there.
You never set concurrentInsertErrorOccured back to false
I usually keep a list of (failed) queries for the purpose of retrying them at a later time. That gives me powerful control over the queries, and when the failed queries start to accumulate I sleep for a few moments, and then keep on retrying them (up to X times, then hard fail...).
This list should be very dynamic, e.g. you add items there when queries fail, and remove items when you perform a retry. Now you can understand the limits of your cluster, and tune your concurrentInsertLimit based on, e.g., the average number of failed queries in the last second, or stick with the simpler approach "pause if we have an item in the retry list", etc...
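A rough sketch of that retry list (hypothetical names; the backoff values are placeholders, and the synchronous retry is deliberately simple):

import java.util.concurrent.ConcurrentLinkedQueue;

import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;

public class RetryQueue {
    private static final int MAX_RETRIES = 5;

    private final Session session;
    private final ConcurrentLinkedQueue<Statement> failedQueries = new ConcurrentLinkedQueue<>();

    public RetryQueue(Session session) {
        this.session = session;
    }

    // call this from the async write's onFailure callback
    public void markFailed(Statement statement) {
        failedQueries.add(statement);
    }

    // drain the queue, retrying each statement with a growing backoff
    public void drain() throws InterruptedException {
        Statement stmt;
        while ((stmt = failedQueries.poll()) != null) {
            for (int attempt = 1; ; attempt++) {
                try {
                    session.execute(stmt);
                    break;
                } catch (RuntimeException e) {
                    if (attempt >= MAX_RETRIES) {
                        throw new RuntimeException("Write failed after retries", e);
                    }
                    Thread.sleep(500L * attempt); // back off before the next attempt
                }
            }
        }
    }
}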
EDIT 2 after comments
Since you don't want any retry logic, I would change your code this way:
private List<ResultSetFuture> runningInsertList;
private static int concurrentInsertLimit = 1000;
private static int concurrentInsertSleepTime = 500;
...

@Override
public void executeBatch(Statement statement) throws InterruptedException {
    if (this.runningInsertList == null) {
        this.runningInsertList = new ArrayList<>();
    }

    ResultSetFuture future = this.executeAsync(statement);
    this.runningInsertList.add(future);

    Futures.addCallback(future, new FutureCallback<ResultSet>() {
        @Override
        public void onSuccess(ResultSet result) {
            runningInsertList.remove(future);
        }

        @Override
        public void onFailure(Throwable t) {
            runningInsertList.remove(future);
            concurrentInsertErrorOccured = true;
        }
    }, MoreExecutors.sameThreadExecutor());

    // Sleep while the currently processing number of inserts is too high
    while (runningInsertList.size() >= concurrentInsertLimit) {
        Thread.sleep(concurrentInsertSleepTime);
    }

    if (!concurrentInsertErrorOccured) {
        // Increase your ingestion rate if no query failed so far
        concurrentInsertLimit += 10;
    } else {
        // Decrease your ingestion rate because at least one query failed
        concurrentInsertErrorOccured = false;
        concurrentInsertLimit = Math.max(1, concurrentInsertLimit - 50);
        while (runningInsertList.size() >= concurrentInsertLimit) {
            Thread.sleep(concurrentInsertSleepTime);
        }
    }
    return;
}
You could also optimize the procedure a bit by replacing your List<ResultSetFuture> with a counter.
Hope that helps.
When you run a batch in Cassandra, it chooses a single node to act as the coordinator. This node then becomes responsible for seeing to it that the batched writes find their appropriate nodes. So (for example) by batching 10000 writes together, you have now tasked one node with the job of coordinating 10000 writes, most of which will be for different nodes. It's very easy to tip over a node, or kill latency for an entire cluster by doing this. Hence, the reason for the limit on batch sizes.
The problem is that Cassandra CQL BATCH is a misnomer, and it doesn't do what you or anyone else thinks that it does. It is not to be used for performance gains. Parallel, asynchronous writes will always be faster than running the same number of statements BATCHed together.
I know that I could easily batch 10,000 rows together because they will go to the same partition. ... Would you still use single row inserts (async) rather than batches?
That depends on whether or not write performance is your true goal. If so, then I'd still stick with parallel, async writes.
For some more good info on this, check out these two blog posts by DataStax's Ryan Svihla:
Cassandra: Batch loading without the Batch keyword
Cassandra: Batch Loading Without the Batch — The Nuanced Edition

Hibernate pagination

I have a table of items with a flag of "null" or "done". I need to fetch the null-flagged items, process them, and set the flag to done.
The thing is, I want to use pagination, fetching the (null-flagged) items 500 at a time.
My design goes as follows:
I fetch 500 items // the producer
put them in a queue
some thread takes these 500 items // the consumer
operates on them and updates the flag to "done"
The problem I'm facing is that the consumer is pretty slow, so the producer fetches the same 500 items again. I went for indexing, but it doesn't seem to work properly.
public List<Parts> getNParts(int listSize) {
    try {
        criteria = session.createCriteria(Parts.class);
        criteria.setFirstResult(DBIndexGuard.getNextIndex()); // index += 500;
        criteria.add(Restrictions.isNull("Status"));
        criteria.setMaxResults(listSize); // list size is 500
        newPartList = criteria.list();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
    }
    return newPartList;
}
How can I implement pagination so that each fetch returns 500 different items, with the criterion that these items are null-flagged?
Create a synchronized method for this producer-consumer type of problem; this tutorial can help you.
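For illustration, the usual shape of that handoff (a sketch, not from the tutorial; Parts is the entity from the question) uses a bounded BlockingQueue, so the producer blocks instead of re-fetching while the consumer is busy:

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PartsPipeline {
    // capacity 1: the producer cannot enqueue page N+1 until page N is taken
    private final BlockingQueue<List<Parts>> queue = new LinkedBlockingQueue<>(1);

    public void produce(List<Parts> page) throws InterruptedException {
        queue.put(page); // blocks while the queue is full
    }

    public List<Parts> consume() throws InterruptedException {
        return queue.take(); // blocks while the queue is empty
    }
}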
You may try one of the following implementations.
1. Eliminate duplicate processing on the consumer side: set done only if null.
2. Eliminate duplicates on the producer by maintaining an additional status, 'in process', whenever an item is put in the queue, and exclude those items in your producer query.
3. While paginating, sort your records by the primary key of your table, and for subsequent pages query only records whose primary key is greater than that of the last record in the previous page (see the sketch below).
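A minimal sketch of option 3, keyset pagination (it assumes Parts maps a numeric identifier property named "id"; adjust to your mapping):

import java.util.List;

import org.hibernate.Criteria;
import org.hibernate.Session;
import org.hibernate.criterion.Order;
import org.hibernate.criterion.Restrictions;

public class PartsPager {
    // Remember the last id of the previous page and only fetch null-flagged
    // rows with a greater id, in id order, so pages never overlap.
    @SuppressWarnings("unchecked")
    public List<Parts> getNextParts(Session session, long lastSeenId, int pageSize) {
        Criteria criteria = session.createCriteria(Parts.class);
        criteria.add(Restrictions.isNull("Status"));
        criteria.add(Restrictions.gt("id", lastSeenId));
        criteria.addOrder(Order.asc("id"));
        criteria.setMaxResults(pageSize);
        return criteria.list();
    }
}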
This problem can easily be solved by introducing one more status, say 'processing'. Your producer will mark the records it picks as 'processing', and the consumer can then work on them and set their status to 'done'.
In this case, the producer will not pick already-picked records.
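Sketched out (hypothetical property accessors; claiming the page and marking it happen in one transaction so the next fetch skips those rows):

import java.util.List;

import org.hibernate.Session;
import org.hibernate.Transaction;
import org.hibernate.criterion.Restrictions;

public class ClaimingProducer {
    // Fetch a page of unclaimed rows and flip their status to "processing"
    // before committing, so subsequent fetches will not see them again.
    @SuppressWarnings("unchecked")
    public List<Parts> claimPage(Session session, int pageSize) {
        Transaction tx = session.beginTransaction();
        List<Parts> page = session.createCriteria(Parts.class)
                .add(Restrictions.isNull("Status"))
                .setMaxResults(pageSize)
                .list();
        for (Parts part : page) {
            part.setStatus("processing");
            session.update(part);
        }
        tx.commit();
        return page;
    }
}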
I solved it as follows:
if (newPartList.isEmpty() || newPartList.size() < DBIndexGuard.getAllowedListSize()) { // AllowedListSize = 500
    System.out.println("DataFetcher Sleeping");
    inputQueue.offer(newPartList);
    DBIndexGuard.resetIndex();
    session.clear();
    TimeUnit.MINUTES.sleep(10);
}

Couchbase 2.0 Java SDK 1.1 - Synchronous Add and Views

I am trying to create a JUnit test. Scenario:
setUp: I'm adding two JSON documents to the database
Test: I'm getting those documents using a view
tearDown: I'm removing both objects
My view:
function (doc, meta) {
    if (doc.type && doc.type == "UserConnection") {
        emit([doc.providerId, doc.providerUserId], doc.userId);
    }
}
This is how I add those documents to the database and make sure that "add" is synchronous:
public boolean add(String key, Object element) throws InterruptedException, ExecutionException {
    String json = gson.toJson(element);
    OperationFuture<Boolean> result = couchbaseClient.add(key, 0, json);
    return result.get();
}
The JSON documents that I'm adding are:
{"userId":"1","providerId":"test_pId","providerUserId":"test_pUId","type":"UserConnection"}
{"userId":"2","providerId":"test_pId","providerUserId":"test_pUId","type":"UserConnection"}
This is how I call the view:
View view = couchbaseClient.getView(DESIGN_DOCUMENT_NAME, VIEW_NAME);
Query query = new Query();
query.setKey(ComplexKey.of("test_pId", "test_pUId"));
ViewResponse viewResponse = couchbaseClient.query(view, query);
Problem:
The test fails due to an invalid number of elements fetched from the view.
My observations:
Sometimes the tests pass
The number of elements fetched from the view is not consistent (from 0 to 2)
When I added those documents to the database beforehand instead of in setUp, the test passed every time
According to this documentation (http://www.couchbase.com/docs/couchbase-sdk-java-1.1/create-update-docs.html), I'm adding those JSON documents synchronously by calling get() on the returned Future object.
My question:
Is there something wrong with how I've approached fetching data from a view right after that data was inserted into the DB? Is there any good practice for solving this problem? And can someone please explain what I did wrong?
Thanks,
Dariusz
In Couchbase 2.0, documents are required to be written to disk before they will show up in a view. There are three ways you can do an operation with the Java SDK. The first is asynchronous, which means that you just send the data and at a later time check to make sure that the data was received correctly. If you do an asynchronous operation and then immediately call .get() as you did above, then you have created a synchronous operation. When an operation returns success in these two cases, you are only guaranteed that the item has been written into memory. Your test passed sometimes only because you were lucky enough that both items were written to disk before you did your query.
The third way to do an operation is with durability requirements and this is the one you want to do for your tests. Durability requirements allow you to say that you want an item to be written to disk or replicated before success is returned to the client. Take a look at the following function.
https://github.com/couchbase/couchbase-java-client/blob/1.1.0/src/main/java/com/couchbase/client/CouchbaseClient.java#L1293
You will want to use this function and set the PersistTo parameter to MASTER.
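Adapted to the add method from the question, that could look roughly like this (a sketch: it assumes the 1.1 SDK's durability overload and reuses the gson and couchbaseClient fields from above; PersistTo comes from the net.spy.memcached package):

public boolean addDurably(String key, Object element) throws Exception {
    String json = gson.toJson(element);
    // PersistTo.MASTER makes get() report success only after the item has
    // been written to disk on the active node, so the view can index it
    OperationFuture<Boolean> result = couchbaseClient.add(key, 0, json, PersistTo.MASTER);
    return result.get();
}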
