I am trying to create an IoT project where sensor data is sent every second to DynamoDB, and an Android app has to display it on the front end. Using AWS Amplify, I was able to build an app that retrieves an item from the table by its ID at the press of a button. What I want is to retrieve the latest item from the DB. I believe this can be done by sorting all the items in descending order, limiting the results to 1, and putting the query in a loop.
My problem is that I am having difficulty writing the correct syntax for it. This is my current code:
public void readById() {
    String objectId = "f5d470f6-72e2-49b6-bf28-43d7db130de4";
    Amplify.DataStore.query(
        MyModel.class,
        Where.id(objectId),
        items -> {
            while (items.hasNext()) {
                MyModel item = items.next();
                retrievedItem = item.getName().toString();
                Log.i("Amplify", "Id " + item.getId() + " " + item.getName());
            }
        },
        failure -> Log.e("Amplify", "Could not query DataStore", failure)
    );
}
The code below is what I want to achieve, but this did not work because I can't find a builder() method on QueryOptions or QueryPredicate, even though my AWS Amplify dependencies are up to date.
Amplify.DataStore.query(MyModel.class, QueryOptions.builder()
        .sort(MyModel.TIMESTAMP, SortOrder.DESCENDING)
        .limit(1)
        .build(),
    result -> {
        System.out.println(result.get(0));
    },
    error -> {
        System.out.println(error.getCause());
    });
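For reference: in recent Amplify Android releases, sorting is built through the Where helper rather than a QueryOptions builder. The following is a sketch under that assumption (that your version exposes Where.sorted(...) and QueryField.descending(); MyModel.TIMESTAMP as the sort field), not a verified snippet for any particular version:
// Sketch: assumes an Amplify version where Where exposes sorted(...).
// The query returns items newest-first, so the first one is the latest.
Amplify.DataStore.query(
    MyModel.class,
    Where.sorted(MyModel.TIMESTAMP.descending()),
    items -> {
        if (items.hasNext()) {
            MyModel latest = items.next();
            Log.i("Amplify", "Latest item: " + latest.getId() + " " + latest.getName());
        }
    },
    failure -> Log.e("Amplify", "Could not query DataStore", failure)
);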
I saw a code snippet in the Amplify docs (https://docs.amplify.aws/lib/datastore/sync/q/platform/android/#reevaluate-expressions-at-runtime) about advanced query use cases, but it does not seem to fit my code:
Amplify.addPlugin(AWSDataStorePlugin.builder().dataStoreConfiguration(
    DataStoreConfiguration.builder()
        .syncExpression(User.class, () -> User.LAST_NAME.eq("Doe").and(User.CREATED_AT.gt("2020-10-10")))
        .build())
    .build());
I am new to AWS so it's a struggle but I'm willing to learn. Any input is appreciated!
You're potentially going to be sending and retrieving a lot of data from DynamoDB. Could you cache the data locally when your app starts, then write the data back to DynamoDB before your program ends?
You could add ZonedDateTime.now() or similar to record when each point of data is created. If you keep the items in a List, you could then sort with something like list.sort(Comparator.comparing(ClassName::getZonedDateTime)).
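As a concrete illustration of that local-sort idea against the code in the question: query everything once, collect into a list, sort by timestamp descending, and take the first element. A minimal sketch, assuming MyModel has a getTimestamp() accessor with a Comparable type (both assumptions, adjust to your model):
// Sketch of the local-sort approach; getTimestamp() is an assumed accessor.
Amplify.DataStore.query(
    MyModel.class,
    items -> {
        List<MyModel> all = new ArrayList<>();
        while (items.hasNext()) {
            all.add(items.next());
        }
        // Sort newest-first; the first element is then the latest item.
        all.sort(Comparator.comparing(MyModel::getTimestamp).reversed());
        if (!all.isEmpty()) {
            Log.i("Amplify", "Latest item: " + all.get(0).getId());
        }
    },
    failure -> Log.e("Amplify", "Could not query DataStore", failure)
);
This trades memory for simplicity, so it only makes sense while the table stays small, which fits the cache-locally suggestion above.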
I have a Dataflow application (Java) running in GCP that can read data from a BigQuery table and write to Kafka. But the application runs in batch mode, whereas I would like to make it a streaming application that reads continuously from the BigQuery table and writes to the Kafka topic.
BigQuery table: partitioned table with an insert_time column (timestamp when the record was inserted into the table) and a message column
PCollection<TableRow> tablesRows = BigQueryUtil.readFromTable(pipeline,
        "select message, processed from `myprojectid.mydatasetname.mytablename` " +
        "where processed = false " +
        "order by insert_time desc ")
    .apply("Windowing", Window.into(FixedWindows.of(Duration.standardMinutes(1))))
    .apply("Converting to writable message", ParDo.of(new ProcessRowDoFn()))
    .apply("Writing Messages", KafkaIO.<String, String>write()
        .withBootstrapServers(bootStrapURLs)
        .withTopic(options.getKafkaInputTopics())
        .withKeySerializer(StringSerializer.class)
        .withValueSerializer(StringSerializer.class)
        .withProducerFactoryFn(new ProducerFactoryFn(sslConfig, projected))
    );
pipeline.run();
Note: I have tried the options below, but no luck yet.
Option 1. I tried options.setStreaming(true); it runs as a stream, but it finishes after the first successful write.
Option 2. Applied a trigger:
Window.into(
FixedWindows.of(Duration.standardMinutes(5)))
.triggering(
AfterWatermark.pastEndOfWindow()
.withLateFirings(AfterPane.elementCountAtLeast(1)))
.withAllowedLateness(Duration.standardDays(2))
.accumulatingFiredPanes();
Option 3. Forcibly making the PCollection unbounded:
WindowingStrategy<?, ?> windowingStrategy = tablesRows.setIsBoundedInternal(PCollection.IsBounded.UNBOUNDED).getWindowingStrategy();
.apply("Converting to writable message", ParDo.of(new ProcessRowDoFn())).setIsBoundedInternal(PCollection.IsBounded.UNBOUNDED)
Any solution is appreciated.
Some of the advice in Side Input Patterns in the Beam Programming Guide may be helpful here, even though you aren't using this as a side input. In particular, that article discusses using GenerateSequence to periodically emit a value and trigger a read from a bounded source.
This could allow your one time query to become a repeated query that periodically emits new records. It will be up to your query logic to determine what range of the table to scan on each query, though, and I expect it will be difficult to avoid emitting duplicate records. Hopefully your use case can tolerate that.
Emitting into the global window would look like:
PCollectionView<Map<String, String>> map =
    p.apply(GenerateSequence.from(0).withRate(1, Duration.standardSeconds(5L)))
        .apply(Window.into(FixedWindows.of(Duration.standardSeconds(5))))
        .apply(Sum.longsGlobally().withoutDefaults())
        .apply(
            ParDo.of(
                new DoFn<Long, Map<String, String>>() {
                  @ProcessElement
                  public void process(
                      @Element Long input,
                      @Timestamp Instant timestamp,
                      OutputReceiver<Map<String, String>> o) {
                    // Read from BigQuery here and for each row output a record:
                    o.output(PlaceholderExternalService.readTestData(timestamp));
                  }
                }))
        .apply(
            Window.<Map<String, String>>into(new GlobalWindows())
                .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane()))
                .discardingFiredPanes())
        .apply(View.asSingleton());
This assumes that the size of the query result is relatively small, since the read happens entirely within a DoFn invocation.
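To make the per-tick read concrete, here is a rough sketch (my own illustration, not from the Beam docs) of what such a read could look like with the BigQuery client library. The cutoff bookkeeping and the table/column names are assumptions to adapt; tracking the last seen insert_time between queries is what limits, but does not fully eliminate, duplicates:
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.QueryParameterValue;
import com.google.cloud.bigquery.TableResult;

// Sketch only: fetch rows newer than a cutoff so each periodic query
// mostly reads only fresh records. Table and column names are assumed.
TableResult readNewRows(BigQuery bigquery, String cutoffTimestamp) throws InterruptedException {
    QueryJobConfiguration config =
        QueryJobConfiguration.newBuilder(
                "SELECT message, insert_time "
                    + "FROM `myprojectid.mydatasetname.mytablename` "
                    + "WHERE insert_time > @cutoff "
                    + "ORDER BY insert_time")
            .addNamedParameter("cutoff", QueryParameterValue.timestamp(cutoffTimestamp))
            .build();
    return bigquery.query(config);
}
Inside the DoFn you would iterate result.iterateAll(), emit one record per row, and remember the newest insert_time for the next tick.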
I have a requirement where I want to update 1000 records in Couchbase and get the IDs of the updated records back. I don't want to use Reactive (Project Reactor) as mentioned in the official documentation of Couchbase SDK 3.x.
There is one more approach where we can use CompletableFuture on the service side and call AsyncCluster.query(queryStatement), which also returns a CompletableFuture<QueryResult>.
Is there any way to perform 1000 update operations (to be precise, an update with select: UPDATE bucketName SET docType = 'abc' WHERE caseId = 123 RETURNING caseId AS caseId;), return the caseId back, and do this task asynchronously?
I tried the code below but am not sure about it.
List<CompletableFuture<QueryResult>> completableFutureList = new ArrayList<>();
for (JsonObject j : jsonObjectList) {
    completableFutureList.add(asyncCluster.query(queryStatement,
        QueryOptions.queryOptions().parameters(j)));
}
CompletableFuture.allOf(completableFutureList.toArray(new CompletableFuture[0]))
    .exceptionally(ex -> null).join();
It should work asynchronously, return the list of caseIds that were successfully updated, and handle any exception that occurs during an update operation by catching it separately.
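For illustration, a sketch of collecting the returned caseIds with per-query failure handling; it assumes each QueryResult row carries the caseId from the RETURNING clause:
// Sketch: one future per parameterized update; each future resolves to the
// caseIds returned by that statement, or to an empty list if it failed.
List<CompletableFuture<List<String>>> futures = new ArrayList<>();
for (JsonObject params : jsonObjectList) {
    futures.add(
        asyncCluster.query(queryStatement, QueryOptions.queryOptions().parameters(params))
            .thenApply(result -> result.rowsAsObject().stream()
                .map(row -> String.valueOf(row.get("caseId")))
                .collect(Collectors.toList()))
            .exceptionally(ex -> {
                // Log ex here to record which update failed.
                return Collections.emptyList();
            }));
}
// Wait for everything, then flatten the ids from the successful updates.
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
List<String> updatedCaseIds = futures.stream()
    .flatMap(f -> f.join().stream())
    .collect(Collectors.toList());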
I am using Android Jetpack Navigation for my project, and I have implemented the logic in the following way!
In my MissionRepository I am using this logic:
public void getCurrentJobMissions() {
    missionsDisposable.add(dao.missionCount()
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(integer -> {
            if (integer == 0) {
                makeApiCallAndSaveToDBJobMissions();
            }
            getDataFromDBJobMissions();
        }));
}
Where dao.missionCount() is just:
@Query("SELECT COUNT(*) FROM MissionsTable")
Single<Integer> missionCount();
So what I am doing here: every time the user enters the application, I check whether there is any data in the database. If not, I call makeApiCallAndSaveToDBJobMissions and then retrieve the missions from the Room database; if yes, I retrieve them from the Room database directly.
This logic is working fine, but I don't think it is best practice!
Can anyone provide me with an example of a better solution to this? Thanks
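One detail worth illustrating: in the snippet above, getDataFromDBJobMissions() runs immediately, without waiting for the API call to finish saving. A minimal sketch of the same flow with explicit ordering, assuming a hypothetical makeApiCallAndSaveToDBJobMissionsCompletable() variant that returns an RxJava Completable:
// Sketch: sequence the optional network refresh before the DB read.
// makeApiCallAndSaveToDBJobMissionsCompletable() is an assumed Completable variant.
missionsDisposable.add(
    dao.missionCount()
        .subscribeOn(Schedulers.io())
        .flatMapCompletable(count -> count == 0
            ? makeApiCallAndSaveToDBJobMissionsCompletable()
            : Completable.complete())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(this::getDataFromDBJobMissions,
            throwable -> Log.e("MissionRepo", "Failed to load missions", throwable)));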
Hi, friends!
I'm working with Play + Java + Ebean.
I've got a problem with parallel threads when trying to select and insert values into the DB in one method.
At first, I check the existence of a device in my DB (PostgreSQL) by ID.
If I haven't found the device by ID, I try to insert it into the DB.
Then I return the Device object.
All works fine except the situation when two HTTP requests asynchronously send me the same device ID. First, both of them select from the DB and find nothing. Then one inserts the value and the second fails with io.ebean.DuplicateKeyException.
I've tried synchronizing the method. It works, but I don't like this solution: if I do this, many requests will be put in a queue. I want to stay parallel and synchronize only when two or more requests carry the same ID.
Another way to solve the problem is to write a query with INSERT ... WHERE NOT EXISTS (SELECT ...) RETURNING, but that isn't object style, and it hardcodes several operations of the same type.
public CompletionStage<Device> getDevice(String deviceID, String ipAddress) {
return CompletableFuture.supplyAsync(() ->
SecurityRepository.findDeviceByDeviceID(deviceID))
.thenApplyAsync(curDevice -> curDevice
.orElseGet(() -> {
List<Device> devices =
SecurityRepository.listDevicesByIpAddress(ipAddress);
if(devices.size() >= 10) {
for(Device device : devices)
device.delete();
}
Device newDevice = new Device();
newDevice.setDeviceID(deviceID);
newDevice.ipAddress = ipAddress;
newDevice.save(); //here is the problem
return newDevice;
})
);
}
I want to synchronize this method if and only if deviceID is the same.
Do you have any suggestions for this problem?
Thank you.
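One common way to get exactly that "synchronize only on the same ID" behavior is to intern a lock object per deviceID, so requests for different devices stay fully parallel. A minimal sketch, not Ebean-specific, with names assumed:
// Sketch: per-deviceID locks via ConcurrentHashMap. Two requests with the
// same ID serialize on the shared lock; all other requests stay parallel.
private static final ConcurrentHashMap<String, Object> LOCKS = new ConcurrentHashMap<>();

private Device findOrCreateDevice(String deviceID, String ipAddress) {
    synchronized (LOCKS.computeIfAbsent(deviceID, id -> new Object())) {
        return SecurityRepository.findDeviceByDeviceID(deviceID)
            .orElseGet(() -> {
                Device newDevice = new Device();
                newDevice.setDeviceID(deviceID);
                newDevice.ipAddress = ipAddress;
                newDevice.save(); // safe: a concurrent request with the same ID waits, then finds this row
                return newDevice;
            });
    }
}
Note the map only grows; if device IDs are unbounded you would want eviction, or a fixed pool of striped locks (e.g. Guava's Striped) instead.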
I am trying to create a JUnit test. Scenario:
setUp: I'm adding two JSON documents to the database
Test: I'm getting those documents using a view
tearDown: I'm removing both objects
My view:
function (doc, meta) {
if (doc.type && doc.type == "UserConnection") {
emit([doc.providerId, doc.providerUserId], doc.userId);
}
}
This is how I add those documents to the database and make sure that "add" is synchronous:
public boolean add(String key, Object element) throws Exception {
    String json = gson.toJson(element);
    OperationFuture<Boolean> result = couchbaseClient.add(key, 0, json);
    return result.get();
}
JSON Documents that I'm adding are:
{"userId":"1","providerId":"test_pId","providerUserId":"test_pUId","type":"UserConnection"}
{"userId":"2","providerId":"test_pId","providerUserId":"test_pUId","type":"UserConnection"}
This is how I call the view:
View view = couchbaseClient.getView(DESIGN_DOCUMENT_NAME, VIEW_NAME);
Query query = new Query();
query.setKey(ComplexKey.of("test_pId", "test_pUId"));
ViewResponse viewResponse = couchbaseClient.query(view, query);
Problem:
The test fails due to an invalid number of elements fetched from the view.
My observations:
Sometimes the tests pass
The number of elements fetched from the view is not consistent (from 0 to 2)
When I added those documents to the database beforehand, instead of in setUp, the test passed every time
According to this documentation (http://www.couchbase.com/docs/couchbase-sdk-java-1.1/create-update-docs.html), I'm adding those JSON documents synchronously by calling get() on the returned Future object.
My question:
Is there something wrong with my approach of fetching data from a view just after that data was inserted into the DB? Is there a good practice for solving this problem? And can someone please explain what I did wrong?
Thanks,
Dariusz
In Couchbase 2.0, documents are required to be written to disk before they will show up in a view. There are three ways you can do an operation with the Java SDK. The first is asynchronous, which means that you just send the data and at a later time check to make sure that it was received correctly. If you do an asynchronous operation and then immediately call .get(), as you did above, then you have created a synchronous operation. In both of these cases, when an operation returns success you are only guaranteed that the item has been written into memory. Your test passed sometimes only because you were lucky enough that both items were written to disk before you ran your query.
The third way to do an operation is with durability requirements, and this is the one you want for your tests. Durability requirements allow you to say that you want an item to be written to disk or replicated before success is returned to the client. Take a look at the following function.
https://github.com/couchbase/couchbase-java-client/blob/1.1.0/src/main/java/com/couchbase/client/CouchbaseClient.java#L1293
You will want to use this function and set the PersistTo parameter to MASTER.
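Adapted to the add() helper from the question, that looks roughly like the following; this is a sketch against the 1.1 client, assuming the durability overload of add(...) and PersistTo from the net.spy.memcached package:
import net.spy.memcached.PersistTo;
import net.spy.memcached.internal.OperationFuture;

// Sketch: block until the document has been persisted to disk on the master
// node, so a view query issued right after this call can see it.
public boolean addDurably(String key, Object element) throws Exception {
    String json = gson.toJson(element);
    OperationFuture<Boolean> result = couchbaseClient.add(key, 0, json, PersistTo.MASTER);
    return result.get();
}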