Hi friends!
I'm working with Play + Java + Ebean.
I have a problem with parallel threads when trying to select and insert values into the DB in one method.
First, I check whether a device exists in my DB (PostgreSQL) by id.
If I haven't found the device by id, I try to insert it into the DB.
Then I return the Device object.
Everything works fine except when two HTTP requests asynchronously send me the same device id. Both of them first select from the DB and find nothing. Then one inserts the value and the second fails with io.ebean.DuplicateKeyException.
I've tried synchronizing the method. It works, but I don't like this solution: if I do that, many requests get put in a queue. I want to stay parallel and synchronize only when two or more requests carry the same id.
Another way to solve the problem is to write a raw query with INSERT ... WHERE NOT EXISTS (SELECT ...) RETURNING. But that isn't object style, and it means hardcoding several operations of the same kind.
public CompletionStage<Device> getDevice(String deviceID, String ipAddress) {
    return CompletableFuture.supplyAsync(() ->
            SecurityRepository.findDeviceByDeviceID(deviceID))
        .thenApplyAsync(curDevice -> curDevice
            .orElseGet(() -> {
                List<Device> devices =
                    SecurityRepository.listDevicesByIpAddress(ipAddress);
                if (devices.size() >= 10) {
                    for (Device device : devices)
                        device.delete();
                }
                Device newDevice = new Device();
                newDevice.setDeviceID(deviceID);
                newDevice.ipAddress = ipAddress;
                newDevice.save(); // here is the problem
                return newDevice;
            })
        );
}
I want to synchronize this method if and only if the deviceID is the same.
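Something like per-key locking is the behavior I mean. This is only a hypothetical sketch, not my real code: requests for different deviceIDs stay parallel, while requests for the same deviceID serialize on the same lock object.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

// Hypothetical sketch: one lock object per key, interned on first use.
// Different keys never contend; identical keys run one at a time.
public class PerKeyLock {
    private static final ConcurrentMap<String, Object> LOCKS = new ConcurrentHashMap<>();

    public static <T> T withLock(String key, Supplier<T> body) {
        // computeIfAbsent guarantees exactly one lock object per key
        Object lock = LOCKS.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {
            return body.get();
        }
    }
}
```

One caveat of this sketch: the lock map only grows, so a long-running server would eventually want to evict entries for ids it no longer sees.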
Do you have any suggestions for this problem?
Thank you.
I am trying to create an IoT project where sensor data is sent every second to DynamoDB and an Android app has to display it on the front end. Using AWS Amplify, I was able to build an app that retrieves the data from the table on a button press according to its ID. What I want to retrieve is the latest item in the DB. I believe this can be done by sorting all the items in descending order, limiting the number of retrieved items to 1, and putting it in a loop.
My problem is that I am having difficulty writing the correct syntax for it. This is my current code:
public void readById() {
    String objectId = "f5d470f6-72e2-49b6-bf28-43d7db130de4";
    Amplify.DataStore.query(
        MyModel.class,
        Where.id(objectId),
        items -> {
            while (items.hasNext()) {
                MyModel item = items.next();
                retrievedItem = item.getName().toString();
                Log.i("Amplify", "Id " + item.getId() + " " + item.getName());
            }
        },
        failure -> Log.e("Amplify", "Could not query DataStore", failure)
    );
}
The code below is what I want to achieve, but this approach did not work because I can't find a builder() method on QueryOptions or QueryPredicate, even though my AWS Amplify dependencies are up to date.
Amplify.DataStore.query(MyModel.class,
    QueryOptions.builder()
        .sort(MyModel.TIMESTAMP, SortOrder.DESCENDING)
        .limit(1)
        .build(),
    result -> {
        System.out.println(result.get(0));
    },
    error -> {
        System.out.println(error.getCause());
    });
I saw a code snippet in the Amplify docs (https://docs.amplify.aws/lib/datastore/sync/q/platform/android/#reevaluate-expressions-at-runtime) about advanced use cases of Query, but it does not seem to fit my code:
Amplify.addPlugin(AWSDataStorePlugin.builder().dataStoreConfiguration(
    DataStoreConfiguration.builder()
        .syncExpression(User.class, () -> User.LAST_NAME.eq("Doe").and(User.CREATED_AT.gt("2020-10-10")))
        .build())
    .build());
I am new to AWS so it's a struggle but I'm willing to learn. Any input is appreciated!
You're potentially going to be sending and retrieving a lot of data from DynamoDB. Could you cache the data locally when your app starts, then write the data to DynamoDB before your program ends?
You could add ZonedDateTime.now() or similar to record when each point of data is created. If you keep the readings in a List (a Set has no defined order), you could sort them with something like listName.sort(Comparator.comparing(ClassName::getZonedDateTime)).
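A minimal sketch of that idea; the Reading class and its fields are made up for illustration, not part of your project:

```java
import java.time.ZonedDateTime;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical data point: each reading records when it was created.
class Reading {
    final double value;
    final ZonedDateTime createdAt;

    Reading(double value, ZonedDateTime createdAt) {
        this.value = value;
        this.createdAt = createdAt;
    }
}

public class LatestReading {
    // Sort newest-first by timestamp and take the head to get the latest item.
    static Reading latest(List<Reading> readings) {
        List<Reading> copy = new ArrayList<>(readings);
        copy.sort(Comparator.comparing((Reading r) -> r.createdAt).reversed());
        return copy.get(0);
    }
}
```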
I have a challenge with RxJava: there's an observable that retrieves an item from the database. I subscribe to this observable, and on successful retrieval of the item I update it. The problem is that immediately after updating the item, my observable starts emitting again and the update method is called again, creating a loop.
Sample code
mOrderRepository.getOrder(orderId)
    .subscribeOn(mSchedulerProvider.io())
    .observeOn(mSchedulerProvider.ui())
    .subscribe((Order order) -> {
        // calculate amount due after payment, then update this order
        order.setAmountDue(amountDue);
        mOrderRepository.updateOrder(order);
    });
If getOrder(orderId) returns a Flowable<Order> that will emit the Order again and again on each update, then the update should be executed within the context of a separate Single that fetches the item once for the given orderId.
public static final Object UNIT = new Object(); // avoid emitting `null`

public void updateOrder(final long orderId, final long amountDue) {
    Single.fromCallable(() -> UNIT)
        .subscribeOn(Schedulers.io())
        .flatMap((ignored) -> getOrder(orderId).firstOrError()) // <-- convert Flowable to Single
        .doOnSuccess(order -> {
            order.setAmountDue(amountDue);
            mOrderRepository.updateOrder(order);
        }).subscribe();
}
Or something similar.
If you get an Order object every time it is updated in the database, and every time you get that Order object you update it in the database, it will loop indefinitely. The missing logic should answer the following question: When should the object NOT be updated?
One solution, as @akarnokd suggested, is to limit the retrieval to the first emitted item by specifying take(1):
mOrderRepository.getOrder(orderId)
    .take(1)
    .subscribeOn(mSchedulerProvider.io())
    .observeOn(mSchedulerProvider.ui())
    .subscribe((Order order) -> {
        order.setAmountDue(amountDue);
        mOrderRepository.updateOrder(order);
    });
However, this may not be the logic you want if the order can legitimately be updated several times from another source. In that case, it may make sense to check whether the received order's amountDue (or whatever other properties are relevant) differs from the new amount, and update the order only if it does.
mOrderRepository.getOrder(orderId)
    .subscribeOn(mSchedulerProvider.io())
    .observeOn(mSchedulerProvider.ui())
    .subscribe((Order order) -> {
        // assuming `amountDue` has already been defined
        if (!order.getAmountDue().equals(amountDue)) {
            order.setAmountDue(amountDue);
            mOrderRepository.updateOrder(order);
        }
    });
Riddle me this, Stack Overflow:
I have a query that I am sending to GAE. The query (when printed as a String) looks like this:
SELECT * FROM USER WHERE USER_ID = 5884677008
If I go to the GAE console and type it in as a manual GQL query, it returns the item just fine. If I browse via the GUI and scroll to it, I can see it just fine. But when I call it from the Java code, it returns nothing every time.
Code:
I have already confirmed the query is correct, as I printed it out as a String just to test it.
Anyone have any idea what is going on here?
q = new Query(entityName); // entityName = "User", confirmed
q.setFilter(filter); // filter = "USER_ID = 5884677008", confirmed
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
PreparedQuery pq = datastore.prepare(q);
/*
 * This is always empty here. Calling either pq.countEntities() or
 * pq.toString() returns size 0 or an empty String.
 */
Thanks!
-Sil
Edit: I do have an index built, but it did not seem to help with the problem.
From the docs, you don't necessarily need to do toString. Have you tried asIterable or asSingleEntity on pq? Something like:
PreparedQuery pq = datastore.prepare(q);
for (Entity result : pq.asIterable()) {
    String test = (String) result.getProperty("prop1");
}
That's if you have multiple entries. In the event you only have one:
PreparedQuery pq = datastore.prepare(q);
Entity result = pq.asSingleEntity();
String test = (String) result.getProperty("prop1");
Basically, if you don't call asIterable or asSingleEntity, the query is JUST prepared and doesn't run
Took quite a bit of testing, but I found the issue.
The problem revolved around the filter being set. If I removed the filter, it worked fine (but returned everything). It turns out that what was being passed as a filter value was a String version of the user_id rather than a Long. There was really no way to tell, because the printed SQL query DID NOT read SELECT * FROM USER WHERE USER_ID = "5884677008", which would have been a dead giveaway.
I changed the filter parameter (which I had stored in a HashMap of (String, Object), by the way) from a String to a Long, and that solved the issue.
One thing to point out, as @Patrice brought up (and as I left out of my posted code to save space): to actually iterate through the list of results, you do need to call a method on the query (either .asIterable() or .asSingleEntity()).
You can check the number of returned entities/results by calling pq.countEntities(), and it returns the correct number even before you call a formatting method on pq; but as @tx802 pointed out, it is deprecated, so despite the fact that it worked for me, it may not work for someone using this post as a reference in the future.
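The root cause generalizes beyond Datastore: an equality filter compares both type and content, so a String "5884677008" never matches a Long 5884677008L, even though the two print identically. A tiny plain-Java sketch of the mismatch (not the Datastore API itself):

```java
// Datastore-style equality: values match only when type AND content match.
// A String "5884677008" never equals the Long 5884677008L, even though
// both render as 5884677008 when printed.
public class FilterTypeDemo {
    static boolean propertyMatches(Object storedValue, Object filterValue) {
        return storedValue.equals(filterValue);
    }
}
```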
I am trying to create a JUnit test. Scenario:
setUp: I'm adding two JSON documents to the database
Test: I'm getting those documents using a view
tearDown: I'm removing both documents
My view:
function (doc, meta) {
    if (doc.type && doc.type == "UserConnection") {
        emit([doc.providerId, doc.providerUserId], doc.userId);
    }
}
This is how I add the documents to the database and make sure that the add is synchronous:
public boolean add(String key, Object element) {
    String json = gson.toJson(element);
    OperationFuture<Boolean> result = couchbaseClient.add(key, 0, json);
    return result.get();
}
JSON Documents that I'm adding are:
{"userId":"1","providerId":"test_pId","providerUserId":"test_pUId","type":"UserConnection"}
{"userId":"2","providerId":"test_pId","providerUserId":"test_pUId","type":"UserConnection"}
This is how I call the view:
View view = couchbaseClient.getView(DESIGN_DOCUMENT_NAME, VIEW_NAME);
Query query = new Query();
query.setKey(ComplexKey.of("test_pId", "test_pUId"));
ViewResponse viewResponse = couchbaseClient.query(view, query);
Problem:
The test fails due to an invalid number of elements fetched from the view.
My observations:
Sometimes the tests pass
The number of elements fetched from the view is not consistent (from 0 to 2)
When I added those documents to the database beforehand instead of in setUp, the test passed every time
According to this http://www.couchbase.com/docs/couchbase-sdk-java-1.1/create-update-docs.html documentation, I'm adding those JSON documents synchronously by calling get() on the returned Future object.
My question:
Is there something wrong with my approach of fetching data from a view just after that data was inserted into the DB? Is there a good practice for solving this problem? Can someone please explain what I did wrong?
Thanks,
Dariusz
In Couchbase 2.0, documents are required to be written to disk before they will show up in a view. There are three ways you can do an operation with the Java SDK. The first is asynchronous, which means that you just send the data and at a later time check that it was received correctly. If you do an asynchronous operation and then immediately call .get() as you did above, you have created a synchronous operation. In both of these cases, when an operation returns success you are only guaranteed that the item has been written into memory. Your test passed sometimes only because you were lucky enough that both items reached disk before you ran your query.
The third way to do an operation is with durability requirements, and this is the one you want for your tests. Durability requirements allow you to say that you want an item to be written to disk or replicated before success is returned to the client. Take a look at the following function.
https://github.com/couchbase/couchbase-java-client/blob/1.1.0/src/main/java/com/couchbase/client/CouchbaseClient.java#L1293
You will want to use this function and set the PersistTo parameter to MASTER.
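To make the distinction concrete, here is a toy in-memory model of the difference between "accepted into memory" and "persisted to disk" (plain Java, not the Couchbase API; all names are invented for illustration):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

// Toy store: set() acknowledges once the item is in memory; the "view"
// (like a Couchbase view) only sees items that have reached "disk".
public class ToyStore {
    private final List<String> memory = new CopyOnWriteArrayList<>();
    private final List<String> disk = new CopyOnWriteArrayList<>();

    // Plain write: success only means "in memory"; disk write happens later.
    public CompletableFuture<Boolean> set(String key) {
        memory.add(key);
        return CompletableFuture.completedFuture(true);
    }

    // Durable write: success also means the item reached disk.
    public CompletableFuture<Boolean> setDurable(String key) {
        memory.add(key);
        flush(key);
        return CompletableFuture.completedFuture(true);
    }

    // Background persistence, which in the real server runs asynchronously.
    public void flush(String key) {
        disk.add(key);
    }

    public int viewCount() {
        return disk.size();
    }
}
```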
I have a strange problem updating a table in my database... forgive me if I can't explain it well, but I'm a bit confused...
The problem is this:
I created a table with values, and I read those values in my ListView. Everything works so far: inserting and deleting values works without problems. Now I've created a loop in a service, because I need to compare a value against a string in my database, and when the comparison is true I need to change a value in my table.
The real problem is this: my db.update works only if I never use db.delete. If I use it, db.update does not work anymore, and to make it work again I need to create a new AVD.
How is that possible?
My db.delete and the id are these:

item.getMenuInfo();
id = getListAdapter().getItemId(info.position);

public void deleteReg(SQLiteDatabase db, long id)
{
    db.delete(TabRegistry.TABLE_NAME, TabRegistry._ID + "=" + id, null);
}

on the activity:

databaseHelper.deleteReg(db, id);
My db.update is this (positions is the value from getPositions(), used to locate a position with a cursor; that part always works, even when db.update fails):
public void updateReg(SQLiteDatabase db, int positions, String stat)
{
    ContentValues v = new ContentValues();
    v.put(TabRegistry.STATUS, stat);
    db.update(TabRegistry.TABLE_NAME, v, TabRegistry._ID + " = " + positions, null);
}
on the service:

databaseHelper.updateReg(db, positions, "SUCCESS");

If you need more code, tell me what to add. Thanks in advance.
The SQLite API you are using is based on CRUD operations (you should read this).
You are DELETE-ing the record from the database, therefore there is nothing to UPDATE when you attempt to update it. If you want to create a new record, or recreate the one you deleted, then you should perform an INSERT instead of an UPDATE.
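A toy in-memory model of the same CRUD behavior (a hypothetical class, not the Android API) shows why the update silently does nothing once the row is gone. Like SQLiteDatabase.update(), the update method here reports how many rows it touched, and checking that return value in your own code would reveal immediately whether the row still exists:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a table keyed by _ID. update() mirrors
// SQLiteDatabase.update() in returning the number of affected rows.
public class ToyTable {
    private final Map<Long, String> rows = new HashMap<>();

    public void insert(long id, String status) {
        rows.put(id, status);
    }

    public int delete(long id) {
        return rows.remove(id) != null ? 1 : 0;
    }

    public int update(long id, String status) {
        // An UPDATE on a missing row is not an error; it just matches 0 rows.
        if (!rows.containsKey(id)) {
            return 0;
        }
        rows.put(id, status);
        return 1;
    }
}
```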
EDIT:
It also appears you are passing a position number to both the update and the delete. I assume you are also using this value to place the record in your table? Is it possible that when you delete a record from the table and the database, the other records are left with an invalid position because they haven't been updated as well? It's just a shot in the dark, but I figured I might as well ask.