Repository in MVVM with RxJava

I am using Android Jetpack Navigation for my project, and I have implemented the logic in the following way.
In my Mission Repository I am using this logic:

public void getCurrentJobMissions() {
    missionsDisposable.add(dao.missionCount()
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe(integer -> {
                if (integer == 0) {
                    makeApiCallAndSaveToDBJobMissions();
                }
                getDataFromDBJobMissions();
            }));
}

where dao.missionCount() is just:

@Query("SELECT COUNT(*) FROM MissionsTable")
Single<Integer> missionCount();
So what I am doing here: every time the user enters the application, I check whether there is any data in the database. If not, I call makeApiCallAndSaveToDBJobMissions() and then retrieve the missions from the Room database; if yes, I retrieve them from the Room database directly.
This logic works fine, but I don't think it is best practice.
Can anyone provide an example of a better solution to this? Thanks
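For comparison, a commonly suggested shape for such a repository is sketched below (assumptions, not the asker's code: makeApiCallAndSaveToDBJobMissions() returns a Completable, and the DAO exposes a Flowable query named getAllMissions()). It chains the count check, the conditional refresh, and the DB read into one stream instead of nesting subscriptions:

public Flowable<List<Mission>> getCurrentJobMissions() {
    return dao.missionCount()
            // hit the network only when the table is empty
            .flatMapCompletable(count -> count == 0
                    ? makeApiCallAndSaveToDBJobMissions() // assumed to return Completable
                    : Completable.complete())
            // Room's Flowable re-emits whenever the table changes,
            // so subscribers always observe the latest DB state
            .andThen(dao.getAllMissions())
            .subscribeOn(Schedulers.io());
}

With this shape the ViewModel subscribes once and never has to coordinate the two data sources itself.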

Related

How to retrieve latest item from DynamoDB table (Java)

I am trying to create an IoT project where sensor data is sent every second to DynamoDB, and an Android app has to display it on the front end. Using AWS Amplify, I was able to build an app that retrieves the data from the table with a button press according to its ID. What I want to retrieve is the latest item from the DB. I believe this can be done by sorting all the items in descending order, limiting the items to be retrieved to 1, and putting it in a loop.
My problem is I am having difficulty in writing the correct syntax for it. This is my current code:
public void readById() {
    String objectId = "f5d470f6-72e2-49b6-bf28-43d7db130de4";
    Amplify.DataStore.query(
            MyModel.class,
            Where.id(objectId),
            items -> {
                while (items.hasNext()) {
                    MyModel item = items.next();
                    retrievedItem = item.getName().toString();
                    Log.i("Amplify", "Id " + item.getId() + " " + item.getName());
                }
            },
            failure -> Log.e("Amplify", "Could not query DataStore", failure)
    );
}
The code below is what I want to achieve, but trying this method did not work because I can't find the builder() method under QueryOptions or QueryPredicate, even though my AWS Amplify dependencies are up to date.
Amplify.DataStore.query(MyModel.class, QueryOptions.builder()
        .sort(MyModel.TIMESTAMP, SortOrder.DESCENDING)
        .limit(1)
        .build(),
    result -> {
        System.out.println(result.get(0));
    },
    error -> {
        System.out.println(error.getCause());
    });
I saw a code snippet in the Amplify docs (https://docs.amplify.aws/lib/datastore/sync/q/platform/android/#reevaluate-expressions-at-runtime) about advanced use cases of Query, but it does not seem to fit my code; see the snippet below:
Amplify.addPlugin(AWSDataStorePlugin.builder().dataStoreConfiguration(
        DataStoreConfiguration.builder()
                .syncExpression(User.class, () -> User.LAST_NAME.eq("Doe").and(User.CREATED_AT.gt("2020-10-10")))
                .build())
        .build());
I am new to AWS so it's a struggle but I'm willing to learn. Any input is appreciated!
You're potentially going to be sending and retrieving a lot of data from DynamoDB. Could you cache the data locally when your app starts, then write the data to DynamoDB before your program ends?
You could add ZonedDateTime.now() or similar to record when each point of data is created. If you keep the items in a List<ClassName> items, you could use something like items.sort(Comparator.comparing(ClassName::getZonedDateTime)).
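For what it's worth, recent Amplify Android versions expose sorting through the Where helper rather than a public QueryOptions.builder(); under that assumption (verify against your Amplify version), a sketch of the sorted query would be:

Amplify.DataStore.query(
        MyModel.class,
        Where.sorted(MyModel.TIMESTAMP.descending()),
        items -> {
            if (items.hasNext()) {
                MyModel latest = items.next(); // first item is the newest
                Log.i("Amplify", "Latest: " + latest.getId() + " " + latest.getName());
            }
        },
        failure -> Log.e("Amplify", "Query failed", failure)
);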

Proper approach to adding context from an external source to records in Kafka Streams

I have records that are processed with Kafka Streams (using Processor API). Let's say the record has city_id and some other fields.
In the Kafka Streams app I want to add the current temperature in the target city to the record.
Temperature<->City pairs are stored in, e.g., Postgres.
In the Java application I'm able to connect to Postgres using JDBC and build a new HashMap<CityId, Temperature>, so I'm able to look up the temperature based on city_id, something like tempHM.get(record.city_id).
There are several questions about how best to approach it:
Where to initialize the context data?
Originally, I had been doing it within AbstractProcessor::init(), but that seems wrong, as it's initialized for each thread and also reinitialized on rebalance.
So I moved it before the streams topology builder, and the processors are built with it. The data are fetched only once, independently of all processor instances.
Is that a proper and valid approach? It works, but...
HashMap<CityId, Temperature> tempHM = new HashMap<>();
// Connect to DB and initialize tempHM here

Topology topology = new Topology();
topology
    .addSource(SOURCE, stringDeserializer, protoDeserializer, "topic-in")
    .addProcessor(TemperatureAppender.NAME, () -> new TemperatureAppender(tempHM), SOURCE)
    .addSink(SINK, "topic-out", stringSerializer, protoSerializer, TemperatureAppender.NAME)
;
How to refresh the context data?
I would like to refresh the temperature data every 15 minutes, for example. I was thinking of using a container wrapping the HashMap instead of a bare HashMap, so that it would handle the refresh:
abstract class ContextContainer<T> {
    T context;
    Date lastRefreshAt;

    ContextContainer(Date now) {
        refresh(now);
    }

    abstract void refresh(Date now);

    abstract Duration getRefreshInterval();

    T get() {
        return context;
    }

    boolean isDueToRefresh(Date now) {
        return lastRefreshAt == null
            || lastRefreshAt.getTime() + getRefreshInterval().toMillis() < now.getTime();
    }
}

final class CityTemperatureContextContainer extends ContextContainer<HashMap<CityId, Temperature>> {
    CityTemperatureContextContainer(Date now) {
        super(now);
    }

    void refresh(Date now) {
        if (!isDueToRefresh(now)) {
            return;
        }
        HashMap<CityId, Temperature> context = new HashMap<>();
        // Connect to DB, fetch the data, and fill the map
        lastRefreshAt = now;
        this.context = context;
    }

    Duration getRefreshInterval() {
        return Duration.ofMinutes(15);
    }
}
This is a brief concept written in the SO textarea; it might contain some syntax errors, but I hope the point is clear.
I then pass it into the processor like .addProcessor(TemperatureAppender.NAME, () -> new TemperatureAppender(cityTemperatureContextContainer), SOURCE)
And in the processor:
public void init(final ProcessorContext context) {
    context.schedule(
        Duration.ofMinutes(1),
        PunctuationType.STREAM_TIME,
        (timestamp) -> {
            cityTemperatureContextContainer.refresh(new Date(timestamp));
            tempHM = cityTemperatureContextContainer.get();
        }
    );
    super.init(context);
}
Is there a better way? The main question is about finding the proper concept; I'm able to implement it afterwards. There are not many resources on the topic out there, though.
> In the Kafka Streams app I want to add the current temperature in the target city to the record. Temperature<->City pairs are stored in, e.g., Postgres.
> In the Java application I'm able to connect to Postgres using JDBC and build a new HashMap<CityId, Temperature>, so I'm able to look up the temperature based on city_id, something like tempHM.get(record.city_id).
A better alternative would be to use Kafka Connect to ingest your data from Postgres into a Kafka topic, read this topic into a KTable in your application with Kafka Streams, and then join this KTable with your other stream (the stream of records "with city_id and some other fields"). That is, you will be doing a KStream-to-KTable join.
Think:
### Architecture view
DB (here: Postgres) --Kafka Connect--> Kafka --> Kafka Streams Application
### Data view
Postgres Table ----------------------> Topic --> KTable
Example connectors for your use case are https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc and https://www.confluent.io/hub/debezium/debezium-connector-postgresql.
One of the advantages of the Kafka Connect based setup above is that you no longer need to talk directly from your Java application (which uses Kafka Streams) to your Postgres DB.
Another advantage is that you don't need to do "batch refreshes" of your context data (you mentioned every 15 minutes) from your DB into your Java application, because the application would get the latest DB changes in real-time automatically via the DB->KConnect->Kafka->KStreams-app flow.
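A minimal sketch of that join (illustrative only: the topic names, the Temperature value type, and the withTemperature copy helper are assumptions, and serdes are left at their defaults):

StreamsBuilder builder = new StreamsBuilder();

// changelog of the Postgres table, fed by the JDBC/Debezium connector,
// keyed by cityId
KTable<String, Temperature> temperatures = builder.table("city-temperatures");

// the record stream, (re-)keyed by cityId so the join can match on the key
KStream<String, MyRecord> records = builder.stream("topic-in");

records
    .join(temperatures, (record, temperature) -> record.withTemperature(temperature))
    .to("topic-out");

Every record then picks up whatever temperature is current in the KTable at processing time, which replaces the 15-minute batch refresh.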

Multithreading problem when trying select and insert in single method

Hi, friends!
I'm working with Play + Java + Ebean.
I've got a problem with parallel threads when trying to select and insert values into the DB in one method.
First, I check the existence of a device in my DB (PostgreSQL) by id.
If I haven't found the device by id, I try to insert it into the DB.
Then I return the Device object.
All works fine except the situation when two HTTP requests asynchronously send me the same device ids. First, both of them select a value from the DB and get nothing. Then one inserts values and the second fails because of io.ebean.DuplicateKeyException.
I've tried synchronizing the method. It works, but I don't like this solution: if I do this, many requests will be put in a queue. I want to stay parallel but synchronize only if I have two or more requests with the same id.
Another way to solve the problem is to write a query with INSERT ... WHERE NOT EXISTS (SELECT ...) RETURNING. But that isn't object style, and it hardcodes several operations of the same type.
public CompletionStage<Device> getDevice(String deviceID, String ipAddress) {
    return CompletableFuture.supplyAsync(() ->
            SecurityRepository.findDeviceByDeviceID(deviceID))
        .thenApplyAsync(curDevice -> curDevice
            .orElseGet(() -> {
                List<Device> devices =
                    SecurityRepository.listDevicesByIpAddress(ipAddress);
                if (devices.size() >= 10) {
                    for (Device device : devices)
                        device.delete();
                }
                Device newDevice = new Device();
                newDevice.setDeviceID(deviceID);
                newDevice.ipAddress = ipAddress;
                newDevice.save(); // here is the problem
                return newDevice;
            })
        );
}
I want to synchronize this method if and only if deviceID is the same.
Do you have any suggestions for this problem?
Thank you.
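One common pattern for exactly that (a sketch under assumptions, not production code: the lock objects are never evicted, and it only coordinates threads within a single JVM) is to give each deviceID its own lock via ConcurrentHashMap.computeIfAbsent, so requests for different ids stay parallel:

private final ConcurrentHashMap<String, Object> deviceLocks = new ConcurrentHashMap<>();

private Device findOrCreateDevice(String deviceID, String ipAddress) {
    // computeIfAbsent is atomic, so concurrent requests get the same lock object
    Object lock = deviceLocks.computeIfAbsent(deviceID, id -> new Object());
    synchronized (lock) {
        // re-check inside the lock so the second request sees the first insert
        return SecurityRepository.findDeviceByDeviceID(deviceID)
                .orElseGet(() -> {
                    Device newDevice = new Device();
                    newDevice.setDeviceID(deviceID);
                    newDevice.ipAddress = ipAddress;
                    newDevice.save();
                    return newDevice;
                });
    }
}

Alternatively, catching io.ebean.DuplicateKeyException and re-querying keeps the database as the single arbiter, and that also works when several application instances run in parallel.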

How to use WHERE NOT clause in Firebase? [duplicate]

I am using the Firebase database with a JSON structure to manage users' comments.
{
  "post-comments" : {
    "post-id-1" : {
      "comment-id-11" : {
        "author" : "user1",
        "text" : "Hello world",
        "uid" : "user-id-2"
      },
      ...
    }
  }
}
I would like to pull all the comments, excluding the current user's.
In SQL this would translate into:
SELECT * FROM post-comments WHERE uid != "user-id-2"
I understand that the Firebase database does not offer a way to exclude nodes based on the presence of a value (i.e., user id != ...).
So is there any alternative solution to tackle this, either by changing the database structure or by processing the data source once the data are loaded?
For the latter I am using a FirebaseTableViewDataSource. Is there a way to filter the data after the query?
Thanks a lot
The first solution is to load the comments via .ChildAdded and ignore the ones with the current user_id:
let commentsRef = self.myRootRef.childByAppendingPath("comments")
commentsRef.observeEventType(.ChildAdded, withBlock: { snapshot in
    let uid = snapshot.value["uid"] as! String
    if uid != current_uid {
        // do stuff
    }
})
You could expand on this and load everything via .Value and iterate over the children in code as well. The right choice will depend on how many nodes you are loading; .ChildAdded will have lower memory usage.
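On Android, the same client-side filter would look roughly like this in Java (a hypothetical counterpart of the Swift snippet above; the path and field names come from the question's JSON, and currentUid is assumed to hold the signed-in user's id):

DatabaseReference commentsRef = FirebaseDatabase.getInstance()
        .getReference("post-comments/post-id-1");
commentsRef.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
        String uid = snapshot.child("uid").getValue(String.class);
        if (!currentUid.equals(uid)) {
            // do stuff with this comment
        }
    }
    @Override public void onChildChanged(DataSnapshot snapshot, String previousChildName) { }
    @Override public void onChildRemoved(DataSnapshot snapshot) { }
    @Override public void onChildMoved(DataSnapshot snapshot, String previousChildName) { }
    @Override public void onCancelled(DatabaseError error) { }
});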

Query batch job metadata in Spring batch

I want to fetch the 10 latest records from the BATCH_JOB_EXECUTION table joined with the BATCH_JOB_INSTANCE table.
So how can I access these tables?
In this application I have used Spring Data JPA. It's another application that uses Spring Batch and created these tables. In other words, I would just like to run a JOIN query and map it directly to my custom object with just the necessary fields. As far as possible, I would like to avoid making separate models for the two tables. But I don't know the best approach here.
If you want to do it from Spring Batch code you need to use JobExplorer and apply filters on either START_TIME or END_TIME. Alternatively, just send an SQL query with your desired JOIN to the DB using JDBC. The DDLs of the metadata tables can be found in the Spring Batch documentation's schema appendix.
EDIT
If you want to try to do it in Spring Batch, I guess you need to iterate through the JobExecutions and find the ones that interest you, then do your thing, something like:
List<JobInstance> jobInstances = jobExplorer.getJobInstances(jobName);
for (JobInstance jobInstance : jobInstances) {
    List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(jobInstance);
    for (JobExecution jobExecution : jobExecutions) {
        if (/* jobExecution.getWhatever()... */) {
            // do your thing...
        }
    }
}
Good Luck!
Since JobExplorer no longer has the .getJobInstances(jobName) method, I have done this (this example uses BatchStatus as a condition), adapted with streams:
List<JobInstance> lastExecutedJobs = jobExplorer.getJobInstances(jobName, 0, Integer.MAX_VALUE);
Optional<JobExecution> jobExecution = lastExecutedJobs
    .stream()
    .map(jobExplorer::getJobExecutions)
    .flatMap(jes -> jes.stream())
    .filter(je -> BatchStatus.COMPLETED.equals(je.getStatus()))
    .findFirst();
To return N elements, you could use other capabilities of streams (limit, max, collectors, ...).
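Putting the two answers together, a sketch of the original requirement (the 10 latest executions) could look like this; it sorts explicitly by start time so the result does not depend on the order getJobInstances returns, and pushes null start times to the end:

List<JobExecution> latestTen = jobExplorer
        .getJobInstances(jobName, 0, Integer.MAX_VALUE)
        .stream()
        .map(jobExplorer::getJobExecutions)
        .flatMap(List::stream)
        .sorted(Comparator.comparing(JobExecution::getStartTime,
                Comparator.nullsLast(Comparator.reverseOrder())))
        .limit(10)
        .collect(Collectors.toList());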
