I'm building a web service (in Spring Boot) with my friends where organizations can join, and each organization that joins gets its own database, generated on the fly. The reason every organization has a separate database is that each database consists of multiple tables holding data unique to that organization, and isolation is needed.
The database structure/schema should be shared among all organizations, and every change we make (for example adding or editing a column or a table) should take effect in all existing databases without disrupting the data already present.
Also, we should be able to query across all organizations to get a listing of booked times, names, etc. from different organizations on one page. Is it possible to achieve this with JPA, or is there a better way to do this?
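One common way to sketch the per-organization part in Spring, not necessarily the only one, is a routing DataSource that picks the organization's database per request. The holder class and keys below are illustrative assumptions, and the shared schema would still have to be rolled out to every database with a migration tool such as Flyway or Liquibase:

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class OrganizationRoutingDataSource extends AbstractRoutingDataSource {

    // Set per request (e.g. in a web filter) from the authenticated organization.
    private static final ThreadLocal<String> CURRENT_ORG = new ThreadLocal<>();

    public static void setCurrentOrganization(String orgId) {
        CURRENT_ORG.set(orgId);
    }

    public static void clear() {
        CURRENT_ORG.remove();
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Must match one of the keys registered via setTargetDataSources(...) at startup.
        return CURRENT_ORG.get();
    }
}

Cross-organization listings are the awkward part: since one persistence context maps to one DataSource at a time, they usually need a separate aggregation step (querying each database and merging the results, or maintaining a shared reporting store).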
Is it possible to create a repository without an entity? I've been working on a project and I need to take data from several different tables, so I can't create an entity because there is no single table like that in the DB.
How can I do that? Please help.
You need an aggregator. If you know exactly which data you need to work with, you can create a view with columns from those different tables and define an entity class mapped to the view's columns. You can then implement the repository in the regular way against that view-backed entity.
But if you have to fetch and combine multiple entities, and you don't know in advance how many variations of data and entities you will need, you can write a DAO as a service or component, access the different entities through their repositories, and aggregate the data programmatically as needed. You can also use native queries to access data from different tables.
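As a rough sketch of the view-backed approach (the view, column, and class names here are made up for illustration), you map a read-only entity to the view and expose it through a normal Spring Data repository:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.springframework.data.jpa.repository.JpaRepository;

// Maps to a database view created with CREATE VIEW over the source tables.
@Entity
@Table(name = "booking_summary_view")
public class BookingSummary {

    @Id
    @Column(name = "booking_id")
    private Long bookingId;

    @Column(name = "organization_name")
    private String organizationName;

    @Column(name = "booked_time")
    private java.sql.Timestamp bookedTime;

    // getters/setters omitted
}

interface BookingSummaryRepository extends JpaRepository<BookingSummary, Long> {
}

For the more ad-hoc case, the DAO variant can use EntityManager.createNativeQuery(...) or @Query(nativeQuery = true) on a repository method instead of a dedicated view.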
I am new to the Spring framework. Recently I made a small project with microservices, where I created two microservices:
department service
User service
I need to know how I can join them. I have created one common field in both services, i.e. departmentId,
so that when I use a GET mapping in the user service containing a department id, it fetches the data from the department service corresponding to that departmentId.
Using IntelliJ, MongoDB as the database, the Spring framework, and Java.
Since Mongo is a document-store type of database, it depends on how the data will be used. You'll need to think about how the data will be queried and what the responses should look like.
In an RDBMS, it is natural to normalize your data, split it over several tables, and use joins to create the views you need.
In a document store you do exactly the opposite: you denormalize your data and try to embed as much as you can, so that most queries can be satisfied with a single query.
When you use Spring, you might also like to use https://spring.io/projects/spring-data-mongodb
If you want to gain in-depth knowledge of Mongo, they have several free courses available: https://university.mongodb.com/
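For example, with Spring Data MongoDB the department data can simply be embedded in the user document, so one query returns both (the class and field names below are assumptions, not from the question):

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "users")
public class User {

    @Id
    private String id;
    private String name;

    // Embedded rather than joined: stored inside the same user document.
    private Department department;

    // getters/setters omitted
}

class Department {
    private String departmentId;
    private String departmentName;
}

If the two collections must stay in separate services, the user service typically calls the department service over HTTP (or keeps a local copy of the department fields it needs) rather than joining at the database level.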
I have a Java XPages application with a REST service that functions as an API for a rooms & resources database (getting appointments for a specific room, creating them, etc.).
The basic workflow is that an HTTP request is made to a specific REST action, with the room's mail address in the search query. Then in the Java code I iterate over all documents in the rooms & resources database until I find a document whose InternetAddress field matches the searched mail address.
This isn't as fast as I would like it to be, and there are multiple queries like this being made all the time.
I'd like to do some sort of caching in my application, so that once a room has been found, its document UNID is stored in a server-wide cache; the next time a request is made for that mail address, I can go directly to the document using getDocumentByUNID(), which I think should be much faster than searching the entire database.
Is it possible to have such a persistent lookup table in Java XPages without any additional applications, while keeping it as fast as possible? A hash table would be perfect for this.
To clarify: I don't want caching within a single request, because I'm not doing more than one database lookup per request; I want the cache to be server-wide, so it is kept between multiple requests.
Yes, it is possible to store persistent data. What you are looking for is called an application scoped managed bean.
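A minimal sketch of such a bean (assuming it is registered in faces-config.xml with managed-bean-scope set to application under a name like roomCache; the names are illustrative):

import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;

public class RoomCache implements Serializable {

    private static final long serialVersionUID = 1L;

    // Mail address -> document UNID, shared by all requests while the application is loaded.
    private final ConcurrentHashMap<String, String> unidByMail =
            new ConcurrentHashMap<String, String>();

    public String getUnid(String mailAddress) {
        return unidByMail.get(mailAddress.toLowerCase());
    }

    public void putUnid(String mailAddress, String unid) {
        unidByMail.put(mailAddress.toLowerCase(), unid);
    }
}

On each lookup you first ask the bean for a UNID and fall back to the full search (then cache the result) only when it is missing. Note that the cache lives as long as the application stays loaded on the server, so it is emptied when the application is restarted.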
I need to keep a client in sync with a PostgreSQL database (only the data that is loaded from the database, not the entire database; there are 50+ DB tables and a lot of collections inside entities). Since I recently added a Spring REST API server to my application, I could perhaps manage those changes differently/more efficiently, with less work. Until now my approach was to add a psql notification trigger:
CREATE TRIGGER extChangesOccured
AFTER INSERT OR UPDATE OR DELETE ON xxx_table
FOR EACH ROW EXECUTE PROCEDURE notifyUsers();
The client then receives the JSON, built as:
json_build_object(
    'table', TG_TABLE_NAME,
    'action', TG_OP,
    'id', data,
    'session', session_app_name);
The client then checks whether the change was made by itself or by another client, and fetches the new data from the database.
Then on the client side the new object is manually "rewritten", with something like a copyFromObject(new_entity) method, and the variables are overridden (including collections, avoiding transient fields, etc.).
This approach requires keeping a copyFromObject method for each entity (which could still be optimized with reflection).
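For reference, the part that receives these notifications on the client can be done with the plain PostgreSQL JDBC driver. This is only a hedged sketch; the channel name entity_changes is an assumption about what notifyUsers() passes to pg_notify:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class ChangeListener {

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app", "app", "secret");

        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN entity_changes"); // channel assumed to be used by notifyUsers()
        }

        PGConnection pgConn = conn.unwrap(PGConnection.class);
        while (true) {
            // Non-blocking: returns null when nothing is queued.
            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    String json = n.getParameter(); // the json_build_object payload
                    // Parse it, compare "session" with this client's session_app_name,
                    // and re-fetch the changed row by "id" if it came from another client.
                    System.out.println(json);
                }
            }
            Thread.sleep(500);
        }
    }
}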
Problems with my approach are:
it requires some work when modifying variables (can be optimized using reflection)
the entire new entity is loaded when it is changed by some client
I am curious about your solutions for keeping clients in sync with the DB. In general I have a desktop client here, and the client loads a lot of data from the database which must be kept in sync; loading the data can take up to 1 minute on app start, depending on the chosen data period to be fetched.
The perfect solution would be some engine that fetches/overrides only those variables in the entities that have really changed, and does it automatically.
A simple solution is to implement optimistic locking. It will prevent a user from persisting data if the entity was changed after the user fetched it.
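A minimal JPA sketch of that (entity and field names are illustrative): the @Version field makes the provider reject an update when the row has changed since it was read.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Booking {

    @Id
    private Long id;

    private String roomName;

    // Incremented by the provider on every successful update; a stale update
    // then fails with an OptimisticLockException instead of overwriting data.
    @Version
    private long version;

    // getters/setters omitted
}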
Or
You can use 3rd-party apps for DB synchronization. Some time ago I played with Pusher, and you can find an extensive tutorial about client synchronization here: React client synchronization
Of course Pusher is not the only solution, and I'm not affiliated with that app's dev team at all.
For my purposes I have implemented an AVL-tree-based synchronization engine between the loaded entities and the database. It creates repositories based on the entities loaded from Hibernate, asynchronously searches through all the fields in the entities, and rewrites/merges all matching fields (so that if some field (pk) refers to the same entity as the one in the repository, it replaces it).
In this way synchronization with the database is easy: it comes down to finding the externally changed entity in the repository (so basically in the AVL tree, which is O(log n)) and rewriting its fields.
The company I am working for stores their client data in a separate database schema for each client. They indicate that this cannot be changed at this time. Is there an efficient way to pull data and update data in all schemas without configuring a connection for each schema? Everything I can find when I search seems to be talking about using one or a couple of schemas, but I need to use many (100+) simultaneously.
In any given persistence context, each JPA entity class is mapped to a specific base table. Whether and how easily you can access multiple schemas via a single DB connection is a function of your DBMS, your JDBC driver, and perhaps your particular database, but even a combination that in general supports the kind of access you would need will still not allow you to map the same entity class to multiple distinct base tables in the same persistence context.
You might be able to use the same entity classes for different clients by associating a different persistence context with each client, but that will not allow you to use the same DB connection for all of them. Thus, if using the same connection were possible for you at all, it would require different entity classes per client.
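A hedged sketch of the per-client persistence context idea, assuming Hibernate as the provider: build (and cache) one EntityManagerFactory per client schema, all sharing the same entity classes and connection settings. Only the default schema varies here; the persistence-unit name and the class naming are illustrative.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class ClientPersistence {

    private final Map<String, EntityManagerFactory> factories = new HashMap<>();

    public synchronized EntityManagerFactory factoryFor(String clientSchema) {
        return factories.computeIfAbsent(clientSchema, schema -> {
            Map<String, Object> props = new HashMap<>();
            // Same entities and connection settings, different default schema per client.
            props.put("hibernate.default_schema", schema);
            return Persistence.createEntityManagerFactory("clients-unit", props);
        });
    }
}

With 100+ clients you would want to cap or lazily dispose these factories, since each one carries its own metadata and connection pool.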
Have you considered creating a new DB user and creating SYNONYMS for each of the tables in the separate database schemas?
You could then map JPA entities to the SYNONYM names that you have created.
Using this approach you could still use the one DB connection, but with SYNONYMS to the DB tables in the other schemas...
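With that approach the entity just maps to the synonym name, for example (the names are illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "CLIENT1_BOOKINGS") // synonym pointing at e.g. CLIENT1.BOOKINGS
public class Client1Booking {

    @Id
    private Long id;

    // remaining columns omitted
}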