In my Java/Spring application a database record is fetched at server init and stored as a static field. Currently we do an MBean refresh to refresh the database values across all instances. Is there any other way to programmatically refresh the database value across all instances of the server? I am reading about EntityManager refresh. Will that work across all instances? Any help would be greatly appreciated.
You could schedule a reload every 5 minutes, for example.
Or you could send events and have all instances react to that event.
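Both suggestions can be sketched with plain JDK tools (no Spring required): the cache below re-reads the values via `refresh()`, which can be driven by a fixed schedule or invoked from an event handler. `fetchFromDb` is a stand-in for your real query, and all names here are illustrative.

```java
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Holds the latest copy of the DB values instead of a bare static field.
class ConfigCache {
    private final Supplier<Map<String, String>> fetchFromDb; // stand-in for the real query
    private final AtomicReference<Map<String, String>> current = new AtomicReference<>(Map.of());

    ConfigCache(Supplier<Map<String, String>> fetchFromDb) {
        this.fetchFromDb = fetchFromDb;
    }

    // Re-read the values; callable from a scheduler or from an event listener.
    void refresh() {
        current.set(fetchFromDb.get());
    }

    String get(String key) {
        return current.get().get(key);
    }

    // Reload every `period`; in a Spring app this would typically be @Scheduled.
    ScheduledFuture<?> startAutoRefresh(ScheduledExecutorService scheduler, Duration period) {
        return scheduler.scheduleAtFixedRate(this::refresh, 0, period.toMillis(), TimeUnit.MILLISECONDS);
    }
}
```

Because both the scheduler and any event handler call the same `refresh()`, the two approaches can also be combined: poll as a safety net, push for freshness.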
Until now, communication between databases and servers has been one-sided: the app server requests data from the database. This leads to exactly the problem you mention: when an application runs in cluster mode, the application servers have no way of knowing about a database change.
The current solution is to refresh the fields from time to time (a poll-based technique).
To make this a push-based model, we can create wrapper APIs over the database and let those wrapper APIs pass the change on to all the application servers.
By this I mean: do not update database values directly from one application server. Instead, on an update request, send the change to a separate application that keeps track of your application servers and pushes an event (via an API call or a queue) telling them to refresh the affected table.
Conveniently, some newer databases (MongoDB, for example, with its change streams) provide this kind of update push to app servers out of the box.
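The wrapper-API idea can be sketched in-process; here the push to each app server is reduced to a callback (in practice it would be an HTTP call or a queue message), and all names are illustrative.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// A wrapper over the database: all writes go through update(), which then
// notifies every registered app server instance that the row changed.
class DbWrapper {
    private final Map<String, String> table = new ConcurrentHashMap<>(); // stand-in for the real table
    private final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();

    // Each app server registers a refresh callback at startup.
    void register(Consumer<String> appServerRefresh) {
        subscribers.add(appServerRefresh);
    }

    void update(String key, String value) {
        table.put(key, value);                   // 1. apply the change
        subscribers.forEach(s -> s.accept(key)); // 2. push the change to every instance
    }

    String read(String key) {
        return table.get(key);
    }
}
```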
The architecture I am working on today consists of 2 instances of the same Spring Boot app connected to a single datasource, a PostgreSQL database.
For all database queries I rely heavily on Spring Data JPA. I use the JpaRepository interface to perform actions like findById, save, etc.
The Spring Boot application mostly behaves like an event ingestor, whose primary task is to take in requests and make updates in the database.
The load balancer directs requests alternately to each application server.
It is highly likely that 2 or more incoming concurrent requests need to access the same row/entity in the Database.
Today, even though we call repository.saveAndFlush(), we observe that the final save happens with a stale entity, i.e. some columns are not updated with the info from previous incoming requests.
Can someone point me in the right direction with the best design and Spring Data features to avoid such inconsistent states in the DB?
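For context, the symptom described (a save based on stale state silently winning) is the classic lost-update problem; in Spring Data JPA it is commonly handled with optimistic locking via a `@Version` field, which makes the stale flush fail instead of overwriting. The sketch below mimics that version-check-and-retry with plain JDK types; all names are illustrative.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// Mimics optimistic locking: a write only succeeds if it was built from the
// current version; otherwise we re-read and retry, so no request overwrites
// another request's changes with stale data.
class VersionedEntity<T> {
    // Value and version travel together, like an entity row with a version column.
    private record Snapshot<T>(T value, long version) {}

    private final AtomicReference<Snapshot<T>> ref;

    VersionedEntity(T initial) {
        ref = new AtomicReference<>(new Snapshot<>(initial, 0));
    }

    T read() {
        return ref.get().value();
    }

    // Apply an update derived from the *current* state; retry on version conflict.
    T update(UnaryOperator<T> change) {
        while (true) {
            Snapshot<T> cur = ref.get();
            Snapshot<T> next = new Snapshot<>(change.apply(cur.value()), cur.version() + 1);
            if (ref.compareAndSet(cur, next)) {
                return next.value(); // like a successful flush
            }
            // Someone saved in between: loop re-reads, like catching an
            // optimistic-lock failure and retrying the transaction.
        }
    }
}
```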
In my application (app A) I have many trades. For each of these trades I need to fetch the clearing status from a second application's (app B) database.
Currently, I have a Spring Boot Java application with a @Scheduled component that queries app B's database every 10 minutes to check for all the cleared trades, match them with app A records, and update accordingly if not already updated.
While I query app B's database for only today's updates to minimize the dataset, it is still expected to grow.
What would I like to do?
I would like to avoid retrieving the same set of records throughout the day every 10 minutes from app B database.
Is there a clean approach to solving this problem?
Instead of looking in app A for uncleared records and then querying app B's database to see whether the trades have cleared, I want to pick up the new events in app B and update app A accordingly.
What would you suggest? Are there any tools I can use?
Ideally the owner of the application that has the App B database would expose an API for you to retrieve these trades rather than having you connect to their database directly or they would publish them to a message queue for you to consume from. These are standard patterns that are used in trading environments.
Generally it isn't a good idea to integrate directly with another application's database; polling is usually an integration of last resort.
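The message-queue option can be sketched with a JDK BlockingQueue standing in for the broker; with a real broker this becomes a Kafka or JMS consumer, and the event shape below is an assumption.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// App B publishes "trade cleared" events; app A consumes and applies them,
// so app A only ever processes new events instead of re-querying the whole
// day's records every 10 minutes.
class ClearingFeed {
    record ClearedTrade(String tradeId) {}

    private final BlockingQueue<ClearedTrade> queue = new LinkedBlockingQueue<>();

    // Called on app B's side when a trade clears.
    void publish(ClearedTrade t) {
        queue.add(t);
    }

    // Called on app A's side: drain whatever is new and mark it cleared locally.
    List<ClearedTrade> consumeNew() {
        List<ClearedTrade> batch = new ArrayList<>();
        queue.drainTo(batch);
        return batch;
    }
}
```

The key property is that each event is delivered once and then gone from the feed, which is exactly what removes the "same set of records all day" problem.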
I am working on a Java/Spring Boot application. I have some application-specific constant values stored in a Cassandra database, and I currently hit the database on every request to fetch these constants, which is bad practice. So I thought of fetching these values at application startup, storing them in static variables, and accessing those variables across the application.

The problem I am facing is that after startup, once the data is fetched from the DB and stored in static variables, any change to the values in the database is not reflected in the static variables until I restart the application. Is there a good approach to updating these static variables whenever the values change in the database, without restarting the application? Kindly help me out. Thanks in advance.
One approach could be to use a message broker like Kafka (http://kafka.apache.org/) to publish database changes to subscribers, or (without Kafka) to use some kind of push notification.
A possible setup could be something like this:
Service2 is used to change your "constants". So you should not change these values directly using cqlsh, but only through this second service.
After a successful change, Service2 can send an HTTP request to a "refresh" endpoint of your regular Spring Boot application. This refresh notification then triggers an update in your application; this is basically a callback approach.
The second and more flexible way is to use a message broker like Kafka.
With Kafka you would still create a service that you use to change your "constants". The difference is that after a successful change, you would send a message to Kafka, which then dispatches it to every service that has registered as a subscriber. This option leaves you free to add more subscribers in case more of your services depend on the "constants".
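The broker setup can be sketched in-process with a minimal topic-based publisher; with real Kafka the `publish` call would be a producer send and each subscriber a consumer (e.g. spring-kafka's `@KafkaListener`). Topic and message names below are illustrative.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal topic-based broker: the "constants" service publishes a change
// message, and every subscribed application instance refreshes itself.
class MiniBroker {
    private final Map<String, List<Consumer<String>>> topics = new ConcurrentHashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    void publish(String topic, String message) {
        topics.getOrDefault(topic, List.of()).forEach(h -> h.accept(message));
    }
}
```

Each app instance would subscribe with a handler that re-reads the changed constant from Cassandra into its static holder, so every subscriber updates without a restart.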
Personally I'm not a fan of a polling solution where you check every x seconds if a change is present.
I started working with REST services recently. I have several tools joined into a framework of integrated tools. The tools communicate over a common component (CC) which handles their requests (using REST services) and is effectively an interface between all the tools. For every POST request a new resource is created and stored in memory; every time the CC goes down, all that data is lost. For that case, I created an Apache Derby database to store all the resources: with every resource creation an entry is created in the database, and every time the CC starts up it fetches all the data from the database, which is then kept regularly synced.

The problem is that multiple tools can POST at almost the same time. How does REST handle these requests? I hoped that it manages the requests in a queue-like way, but from what I see it handles them concurrently, in separate threads. My database goes down instantly. Am I on the right track, or could something else be wrong?
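REST itself imposes no queueing; the HTTP server dispatches each request on its own thread. If concurrent writes are what overwhelms Derby, one common fix is to funnel all writes through a single-threaded executor, which restores the queue-like behavior described above. A sketch under that assumption, with `writeToDb` standing in for the real JDBC insert:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Request handlers run on many threads at once; this funnels every DB write
// through one worker thread so the database sees them strictly one at a time.
class SerializedWriter<T> {
    private final ExecutorService single = Executors.newSingleThreadExecutor();
    private final Consumer<T> writeToDb; // stand-in for the real JDBC insert

    SerializedWriter(Consumer<T> writeToDb) {
        this.writeToDb = writeToDb;
    }

    // Called from any request thread; the future completes once the write ran.
    CompletableFuture<Void> submit(T resource) {
        return CompletableFuture.runAsync(() -> writeToDb.accept(resource), single);
    }

    void shutdown() {
        single.shutdown();
    }
}
```

A connection pool with sane limits (e.g. HikariCP) is the other usual remedy, since the crash may simply be too many simultaneous connections.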
We are using Spring for the back-end process, Hibernate as the DAO layer, Maven as the build tool, and data tables as the front-end data display in a dashboard. The dashboard has almost 30 columns, and 25 of them are editable by selected users who have admin rights.
Say 5 users are viewing the dashboard at the same time and one of them changes the data in some column; how do we push the updated data to the other 4 users who are viewing the same data live? In other words, how do we push updated or changed data to all other live users when one live user changes something?
Have a look at WebSockets or server-sent events.
You can also implement your own mechanism. Create a URL endpoint that JavaScript clients poll regularly to check for updates. The idea is to have a service exposing updates to clients each time data is updated in the database.
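That check-for-updates endpoint can be sketched with the JDK's built-in HttpServer; the path, query parameter, and response format are assumptions. Clients send the last version they saw and learn whether anything changed since.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

// Every DB write bumps `version`; clients poll /updates?since=<n> and
// refresh their dashboard when the server's version is newer than theirs.
class UpdatePoller {
    private final AtomicLong version = new AtomicLong(0);
    private final HttpServer server;

    UpdatePoller(int port) {
        try {
            server = HttpServer.create(new InetSocketAddress(port), 0);
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
        server.createContext("/updates", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // e.g. "since=3"
            long since = Long.parseLong(query.substring("since=".length()));
            String body = since < version.get() ? "changed:" + version.get() : "unchanged";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(bytes); }
        });
        server.start();
    }

    void onDatabaseUpdate() { version.incrementAndGet(); } // call after each save
    int port() { return server.getAddress().getPort(); }
    void stop() { server.stop(0); }

    // What the JavaScript poller would do on a timer, shown here in Java.
    static String poll(int port, long since) {
        try {
            HttpRequest req = HttpRequest.newBuilder(
                    URI.create("http://localhost:" + port + "/updates?since=" + since)).build();
            return HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

On a "changed" response the client re-fetches the dashboard data; WebSockets or server-sent events replace this polling loop with a genuine push.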
With the release of Spring 4, Spring now supports WebSockets and actually makes them easy to use. To get your hands dirty, check out this tutorial.
An older solution that is fairly common is Comet.