My scenario is the following: I have two applications. One lets users browse the product catalogue (a GET-only API, totally passive); the other lets admins create/modify/delete products in the catalogue (a CRUD API).
In order to speed up the user application, I have been thinking about implementing Spring Cache. The problem is that if an admin makes any change to the database (Oracle 19c), the user-facing app does not detect anything.
How can I solve this problem?
In the past, I managed something like this with Change Streams in MongoDB, or with Spring Data events, so that any operation on the database could be detected.
I would need to detect operations made on the database so I can evict and reload my cache with the latest data, and I don't know if that is possible.
Any advice?
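For reference, a minimal sketch of the setup I have in mind (the service, repository, and cache names are placeholders, not real code). Note that nothing here would notice the admin app's direct writes to Oracle:

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
class CacheConfig { }

@Service
class CatalogueService {

    // Product and ProductRepository are hypothetical placeholders.
    private final ProductRepository repository;

    CatalogueService(ProductRepository repository) {
        this.repository = repository;
    }

    // Cached after the first read; the entry stays stale when the admin
    // application updates the row in Oracle, because nothing in this
    // app ever evicts the "products" cache.
    @Cacheable("products")
    public Product findProduct(long id) {
        return repository.findById(id).orElseThrow(IllegalArgumentException::new);
    }
}
```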
The architecture I am working on today consists of two instances of the same Spring Boot app connected to a single datasource, i.e. a PostgreSQL database.
For all database queries I rely heavily on Spring Data JPA. I use the JpaRepository interface to perform actions like findById, save, etc.
The Spring Boot application mostly behaves like an event ingestor, whose primary task is to take in requests and make updates in the database.
The load balancer directs requests alternately to each application server.
It is highly likely that 2 or more incoming concurrent requests need to access the same row/entity in the Database.
Today, even though we call repository.saveAndFlush(), we observe that the final save happens with a stale entity, i.e. some columns are not updated with the info from previous incoming requests.
Can someone point me in the right direction to the best design and Spring Data features to avoid such inconsistent states in the DB?
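For context, a minimal sketch of JPA optimistic locking with @Version, one standard Spring Data safeguard against exactly this kind of lost update (the entity and field names are illustrative, not our real schema):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class EventRecord {

    @Id
    @GeneratedValue
    private Long id;

    // JPA increments this on every update; a saveAndFlush() based on a
    // stale copy then fails with ObjectOptimisticLockingFailureException
    // instead of silently overwriting the other instance's changes.
    @Version
    private Long version;

    private String payload;

    // getters/setters omitted
}
```

With this in place, a flush based on a stale copy fails fast, and the caller can re-read the entity and retry instead of silently losing the earlier update.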
I'm definitely not an expert Java coder. I need to implement sessions in my Java Servlet-based web application, and as far as I know this is normally done through HttpSession. However, this method stores the session data locally on the application server, and I don't want such a constraint on horizontal scalability. Therefore I thought of saving sessions in an external database, with which the application communicates through a REST interface.
Basically, in my application there are users performing actions such as searches. So what I'm going to persist in sessions is essentially the login data and the metadata associated with searches.
As the main data storage I'm planning to use a NoSQL graph database. The question is: given that I could also use a database of another kind for sessions, which architecture fits this situation best?
I have currently thought of two possible approaches. The first uses another DB (such as an SQL DB) to store session data. This way I would have a more distributed workload, since I'm not using the main storage for sessions as well. Moreover, I'd have a more organized environment, with session-state variables and persistent ones not mixed up.
The second approach consists of storing all information related to a session in the "user node" of the main database. The session ID would then be just a "shortcut" for authentication. This way I don't have to rely on a second database; however, I move all the workload to the main DB and mix the session data with the persistent data.
Is there any standard, general architecture I can take as a reference? Am I missing some important point that should constrain my architecture?
Your idea to store sessions in a different location is good. How about using an in-memory cache like memcached or Redis? Session data is generally not long-lived, so you have options beyond a full-blown database. Memcached and Redis can both be clustered and scale horizontally.
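If you don't want to hand-roll the integration, Spring Session is one off-the-shelf option that transparently backs HttpSession with Redis; a minimal sketch, assuming Redis on localhost and the spring-session-data-redis dependency:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;
import org.springframework.session.web.context.AbstractHttpSessionApplicationInitializer;

@Configuration
@EnableRedisHttpSession
class SessionConfig {

    // Connects to Redis on localhost:6379 by default.
    @Bean
    LettuceConnectionFactory redisConnectionFactory() {
        return new LettuceConnectionFactory();
    }
}

// Registers the session filter in a plain servlet container, so existing
// HttpSession code keeps working unchanged while the data lives in Redis.
class SessionInitializer extends AbstractHttpSessionApplicationInitializer {
    SessionInitializer() {
        super(SessionConfig.class);
    }
}
```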
We are using Spring for the back end, Hibernate as the DAO layer, and Maven as the build tool; the front end displays the data in tables as a dashboard. The dashboard has almost 30 columns, and 25 of them are editable by selected users who have admin rights.
Let's say 5 users are viewing the dashboard at the same time and one user changes the data in some column; how do we push the updated data to the other 4 users who are viewing the same data live? In other words, how do we push updated or changed data to all other live users when one live user changes something?
Have a look at WebSockets or server-sent events.
You can also implement your own mechanism: create a URL endpoint to which JavaScript clients connect regularly to check for updates. The idea is to have a service exposing updates to clients each time data is updated in the database.
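Purely as an illustration (the endpoint path, DashboardRow type, and update-tracking service are made up), such a polling endpoint might look like this:

```java
import java.util.List;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DashboardPollingController {

    // Hypothetical service that records which rows changed and when.
    private final DashboardUpdateService updateService;

    public DashboardPollingController(DashboardUpdateService updateService) {
        this.updateService = updateService;
    }

    // Clients poll with the timestamp of their last refresh and receive
    // only the rows that changed since then.
    @RequestMapping(value = "/api/dashboard/updates", method = RequestMethod.GET)
    public List<DashboardRow> updatesSince(@RequestParam("since") long sinceEpochMillis) {
        return updateService.findChangedSince(sinceEpochMillis);
    }
}
```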
With the release of Spring 4, Spring now supports WebSockets and actually makes them easy to use. To get your hands dirty, check out this tutorial.
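A minimal sketch of the Spring 4 STOMP-over-WebSocket approach (the destination names and the DashboardRow type are assumptions on my part):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.stereotype.Service;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/dashboard-ws").withSockJS(); // browser clients connect here
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker("/topic"); // in-memory broker for subscriptions
    }
}

@Service
class DashboardBroadcaster {

    private final SimpMessagingTemplate template;

    DashboardBroadcaster(SimpMessagingTemplate template) {
        this.template = template;
    }

    // Call this after persisting an edit; every client subscribed to
    // /topic/dashboard receives the changed row and can update its view.
    void broadcast(DashboardRow changedRow) {
        template.convertAndSend("/topic/dashboard", changedRow);
    }
}
```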
An older solution that is fairly common is Comet.
I would like to ask for a starting point: which technology or framework should I research?
What I need to accomplish is the following:
We have a Java EE 6 application using JPA for persistence. We would like to use a primary database as a sort of scratchpad, where users can insert/delete records according to the tasks they are given. Then, at the end of the day, an administrator will check their work, approving or disapproving it. If he approves the work, all changes become permanent and the primary database is synced/replicated to another one (for security reasons). Otherwise, if the administrator does not approve the changes, they will be rolled back.
Now here I got two problems to figure out:
First: Is it possible to roll back a batch of JPA operations performed over a certain period of time?
Second: Trigger the replication process (this can be done by RDBMS engines) from code.
Now, if RDBMS replication is not possible (maybe because of client requirements), we would need a sync framework for JPA as a backup. I was looking at some JMS solutions; however, I am not clear about the exact process or how to make them work with JPA.
Any help would be greatly appreciated,
Thanks.
I think your design carries too much risk of losing data. What I understand is that you are talking about holding data in memory until an admin approves or rejects it. You must think about a disaster scenario and how your data would be saved in that case.
Rather, this problem statement is more inclined towards a workflow design, where:
- Data is entered by one entity and persisted.
- Another entity approves or rejects the data.
- All approved data is further replicated to the next database.
All three steps could be implemented as three modules, backed by persistent storage or JMS. Depending on how real-time each of these steps needs to be, you could come up with an elegant design that accomplishes this in a cost-effective manner.
Add a "workflow state" column to your table. States: Wait for approval, approved, replicated
Persist your data normally using JPA (state: wait for approval)
Approver approves: Update using JPA, change to approved state
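A hypothetical sketch of steps 1-3 in Java EE 6 / JPA terms (entity and service names are mine, not part of the original answer):

```java
import javax.ejb.Stateless;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

enum WorkflowState { WAIT_FOR_APPROVAL, APPROVED, REPLICATED }

@Entity
class TaskRecord {

    @Id
    @GeneratedValue
    Long id;

    // Step 1: the workflow-state column; step 2: new rows start here.
    @Enumerated(EnumType.STRING)
    WorkflowState state = WorkflowState.WAIT_FOR_APPROVAL;

    // ... business columns ...
}

@Stateless
class ApprovalService {

    @PersistenceContext
    EntityManager em;

    // Step 3: the approver flips the state; the change is flushed on commit.
    public void approve(Long recordId) {
        TaskRecord record = em.find(TaskRecord.class, recordId);
        record.state = WorkflowState.APPROVED;
    }
}
```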
As for the replication:
- In the approve method you could replicate the data synchronously to the other database (using JPA).
- You could also copy the approved data to another table and use RDBMS functionality to replicate that table's data.
- You could also send a JMS message; at the end of the day a job reads the queue and persists the data into the other database (see the sketch after this list).
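For the JMS option, a hedged sketch using the JMS 1.1 API that ships with Java EE 6 (the queue names and message format are assumptions):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class ReplicationPublisher {

    @Resource(mappedName = "jms/ConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/ReplicationQueue")
    private Queue replicationQueue;

    // Enqueue the id of an approved record; the end-of-day job consumes
    // the queue and copies each record into the secondary database.
    public void enqueue(Long approvedRecordId) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(replicationQueue);
            producer.send(session.createTextMessage(String.valueOf(approvedRecordId)));
        } finally {
            connection.close();
        }
    }
}
```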
Anyway, I suggest using a normal RDBMS cluster with synchronous replication. In that scenario you don't have to develop a self-made replication scheme, you always have a copy of your data, and you always have the workflow state.
I've been tasked with making an enterprise application multi-tenant. It has a Java/Glassfish BLL using SOAP web services and a PostgreSQL backend. Each tenant has its own database, so (in my case at least) "multi-tenant" means supporting multiple databases per application server.
The current single-tenant appserver initializes a C3P0 connection pool with a connection string that it gets from a config file. My thinking is that now there will need to be one connection pool per client/database serviced by the appserver.
Once a user is logged in, I can map it to the right connection pool by looking up its tenant. My main issue is how to get that far: when a user first logs in, the backend's User table is queried and the corresponding User object is served up. It seems I will need to know which database to use with only a username to work with.
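To illustrate the pool-per-tenant idea, a rough sketch (the registry class and its method are hypothetical; only ComboPooledDataSource and its setters are actual C3P0 API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.sql.DataSource;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class TenantPoolRegistry {

    private final ConcurrentMap<String, ComboPooledDataSource> pools =
            new ConcurrentHashMap<String, ComboPooledDataSource>();

    // Lazily creates one C3P0 pool per tenant from that tenant's
    // connection details.
    public DataSource poolFor(String tenantId, String jdbcUrl, String user, String password) {
        ComboPooledDataSource pool = pools.get(tenantId);
        if (pool == null) {
            ComboPooledDataSource fresh = new ComboPooledDataSource();
            fresh.setJdbcUrl(jdbcUrl);
            fresh.setUser(user);
            fresh.setPassword(password);
            ComboPooledDataSource existing = pools.putIfAbsent(tenantId, fresh);
            // If another thread won the race, use its pool (and in real
            // code, close the losing one).
            pool = (existing != null) ? existing : fresh;
        }
        return pool;
    }
}
```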
My only decent idea is that there will need to be a "config" database - a centralized database for managing tenant information such as connection strings. The BLL can query this database for enough information to initialize the necessary connection pools. But since I only have a username to work with, it seems I would need a centralized username lookup as well, in other words a UserName table with a foreign key to the Tenant table.
This is where my design plan starts to smell, giving me doubts. Now I would have user information in two separate databases, which would need to be kept in sync (user additions, updates, and deletions). Additionally, usernames would now have to be globally unique, whereas before they only needed to be unique per tenant.
I strongly suspect I'm reinventing the wheel, or that there is at least a better architecture possible. I have never done this kind of thing before, nor has anyone on my team, hence our ignorance. Unfortunately the application makes little use of existing technologies (the ORM was home-rolled for example), so our path may be a hard one.
I'm asking for the following:
Criticism of my existing design plan, and suggestions for improving or reworking the architecture.
Recommendations of existing technologies that provide a solution to this issue. I'm hoping for something that can be easily plugged in late in the game, though this may be unrealistic. I've read about jspirit, but have found little information on it - any feedback on it or other frameworks will be helpful.
UPDATE: The solution has been successfully implemented and deployed, and has passed initial testing. Thanks to @mikera for his helpful and reassuring answer!
Some quick thoughts:
You will definitely need some form of shared user-management index (otherwise you can't associate a client login with the right target database instance). However, I would suggest making this very lightweight and only using it for the initial login. Your User object can still be pulled from the client-specific database once you have determined which database that is.
You can make the primary key [clientID, username] so that usernames don't need to be unique across clients.
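An illustrative shape for that shared index using a JPA composite key (entity and column names are mine, not from the answer):

```java
import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.IdClass;

// Composite key: usernames only need to be unique within a client.
class TenantUserKey implements Serializable {
    String clientId;
    String username;
    // equals() and hashCode() over both fields are required for an @IdClass
}

@Entity
@IdClass(TenantUserKey.class)
class TenantUserIndex {

    @Id
    String clientId;

    @Id
    String username;

    // Just enough to pick the right per-tenant pool after login; the
    // full User object still lives in the client-specific database.
    String tenantDatabaseKey;
}
```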
Apart from this thin user-index layer, I would keep the majority of the user information where it is, in the client-specific databases. Refactoring this right now would probably be too disruptive; you should get the basic multi-tenant capability working first.
You will need to keep the shared index in sync with the individual client databases, but I don't think that should be too difficult. You can also "test" the synchronisation and correct any errors with a batch job, which can be run overnight or by your DBA on demand if anything ever gets out of sync. I'd treat the client databases as the master, and use them to rebuild the shared user index on demand.
Over time you can refactor towards a fully shared user-management layer (and eventually even fully shared client databases, if you like). But save this for a future iteration...