Update data in two different PostgreSQL database servers - java

I have 2 different applications (a web app and a desktop app) with different database servers but the same structure.
I want to have the same data in all databases, no matter where the user inserts/updates/deletes a record. This is the easiest approach for me, but I don't think it is the optimal one.
So, for example, if I insert a record in the desktop app, this record must be inserted into the web app server ("cloud") and vice versa.
I'm using Spring+Hibernate+PostgreSQL for the web app and JavaFX+Hibernate+PostgreSQL for the desktop app.
I'm considering 2 options at the moment:
Use sockets to send messages between servers every time a record has been inserted/deleted/updated.
Use triggers and a Foreign Data Wrapper in PostgreSQL. I'm not too familiar with these, so I don't know if I can do what I want with this option. Can they work together?
Is there another option? What do you think is best?

The simplest and maybe best solution is to have one central read-write database and several physical replication standbys.
The standby servers will be physical copies of the main database, and you can read from them (but not modify data).
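With that setup, the applications send every write to the primary and can spread reads across the standbys. As a minimal sketch of that split (the JDBC URLs below are placeholders, not real hosts):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of read/write splitting over a primary plus physical standbys.
// Writes always go to the primary; reads round-robin over the standbys.
class ReplicaRouter {
    private final String primaryUrl;
    private final List<String> standbyUrls;
    private final AtomicInteger next = new AtomicInteger();

    ReplicaRouter(String primaryUrl, List<String> standbyUrls) {
        this.primaryUrl = primaryUrl;
        this.standbyUrls = standbyUrls;
    }

    // All inserts/updates/deletes must use the primary's URL.
    String urlFor(boolean readOnly) {
        if (!readOnly || standbyUrls.isEmpty()) return primaryUrl;
        // Round-robin over the read-only standbys.
        int i = Math.floorMod(next.getAndIncrement(), standbyUrls.size());
        return standbyUrls.get(i);
    }
}
```

An application would open its java.sql.Connection against whichever URL the router returns; frameworks like Spring offer AbstractRoutingDataSource for the same idea.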

Related

Different applications to the same database

I have 3 different applications
ASP.NET web application
Java Desktop application
Android Studio mobile application
These 3 applications share the same database, and they need to connect from anywhere in the world with an internet connection. They share almost all the information, so if you change something in one application, it has to update the information in the other 2 applications.
I have the database on a physical server and I want to know how best to make this connection.
I have searched, but I couldn't find out whether I have to connect directly to the SQL Server instance, use a web service, or something like that.
I hope someone can help.
Thank you.
I believe the best way is to first create a Web API layer (REST/SOAP) that will be used to perform all the relevant operations on the centralized DB. Once that is set up, any of your applications, written in any language, can use the exposed web API methods to manipulate the data in the same DB.
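As a rough illustration of that layering (not from the answer itself), the sketch below uses the JDK's built-in com.sun.net.httpserver with an in-memory map standing in for the centralized DB; the /items/ path and the handle method are invented names:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Every client app (ASP.NET, Java desktop, Android) would call this API over
// HTTP instead of opening its own connection to the database.
class ItemApi {
    static final Map<String, String> store = new ConcurrentHashMap<>(); // stand-in for the DB

    // Core operation shared by all clients: PUT writes a value, GET reads it back.
    static String handle(String method, String key, String body) {
        if ("PUT".equals(method)) {
            store.put(key, body);
            return "ok";
        }
        return store.getOrDefault(key, "");
    }

    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/items/", exchange -> {
            String key = exchange.getRequestURI().getPath().substring("/items/".length());
            String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            byte[] reply = handle(exchange.getRequestMethod(), key, body).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(reply);
            }
        });
        server.start();
        return server;
    }
}
```

A real implementation would put JDBC calls behind handle and add authentication, but the shape is the same: one API, many clients.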
If you are looking at a global solution - will you have multiple copies of the applications in different parts of the world as well?
In this scenario you should be looking at a cloud-hosted database with some form of geo-replication so that you can keep latency to a minimum.
There are no restrictions on the number of applications that can connect to a specific database - you do not have to create a different database for each, and you may be able to reuse stored procedures between applications if they perform the same task.
I would however look at the concept of schemas - any database objects that are specific to one app should be separated from the others, so put them in a schema for "App1". Shared objects can go in a shared schema.

How to connect different DBs with a single application across multiple users

So I have a problem. Currently my application connects to a single database and supports multiple users. So for different landscapes, we deploy entirely separate applications.
I need a solution where my application remains the same (a single WAR deployment) but is able to connect to different DBs across different landscapes.
For example, a user in the UK uses the application with the underlying DB in the UK, and when another user logs in from Bangladesh, he sees the data of the DB schema for Bangladesh, and so on.
Currently we create JDBC connections in a connection pool built in Java and use it throughout the application. We also load static data into hashmaps during server startup. But the same would not be possible with multiple DBs, since one would overwrite the other's static data.
I have been scratching around here and there; if someone can point me in the right direction, I would be grateful.
You have to understand that your application's startup and a user's geography are not connected attributes. You simply need to pick the correct DB connection while doing CRUD operations for a user of a particular geography.
So in my opinion, your app's memory requirement is going to be bigger now (than before), but the rest of the setup would be simple.
At app startup, you need to initialize DB connection pools for all databases and load static data for all geographies, then pick the connection and static data according to the logged-in user's geography.
There might be multiple ways to implement this switching/choosing logic, and it depends very much on which frameworks and libraries you are using.
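One way to sketch that per-geography lookup (the geography keys and JDBC URLs below are made-up examples, not from the question):

```java
import java.util.Map;

// Sketch: one connection pool and one static-data cache per geography,
// chosen per request from the logged-in user's geography.
class GeoRegistry {
    private final Map<String, String> jdbcUrlByGeo;              // stands in for one pool per DB
    private final Map<String, Map<String, String>> staticDataByGeo;

    GeoRegistry(Map<String, String> jdbcUrlByGeo,
                Map<String, Map<String, String>> staticDataByGeo) {
        this.jdbcUrlByGeo = jdbcUrlByGeo;
        this.staticDataByGeo = staticDataByGeo;
    }

    // Called per request; fails loudly on an unconfigured geography.
    String urlFor(String geo) {
        String url = jdbcUrlByGeo.get(geo);
        if (url == null) throw new IllegalArgumentException("unknown geography: " + geo);
        return url;
    }

    // Separate static-data map per geography, so one cannot overwrite another.
    Map<String, String> staticDataFor(String geo) {
        return staticDataByGeo.getOrDefault(geo, Map.of());
    }
}
```

In Spring, AbstractRoutingDataSource does the connection-picking half of this based on a per-request lookup key.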

Is it meaningful to create your own backend in Java?

In my project I have different applications that all need a database connection (all "apps" are running on the same server). Now my question is, which is better:
one "backend" that gets requests from the apps through Netty or something similar and holds the one and only MongoDB connection and the Redis cache,
or
all apps having their own MongoDB connection and a global cache with Redis?
Thanks in advance
TG
//edit
All applications are for the same project, so they will need the same data.
I would suggest you write separate backends for each application, as tomorrow you might have different connection requirements for each application. For example, one application might decide it doesn't want to use MongoDB at all. One application might want to use more connections and might be a noisy neighbour for the others. That is, unless you are willing to write a fully policy-based server which can cater to the unique requirements of each application.

DB scalability for a high-load application?

I have seen applications use clustered web servers (like 10 to 20 servers) for scalability, where they can distribute the load among the web servers. But I have always seen all web servers using a single DB.
Now consider any e-commerce or railways web application where millions of users are hitting the application at any point in time.
To scale on the web server side, we can have server clustering, but how can we scale the DB? We cannot have multiple DBs the way we have multiple web servers, as one DB would have a different state than the other one :)
UPDATE:
Is scaling the DB not possible in a relational DBMS but only in NoSQL DBs like MongoDB etc.?
There are two different kinds of scalability on the database side. One is read scalability and the other is write scalability. You can achieve both by scaling vertically, which means adding more CPU and RAM, up to some limit. But if you need to scale to very large data, beyond the limit of a single machine, you should use read replicas for read scalability and sharding for write scalability.
Sharding does not work by putting some entities (shoes) on one server and others (t-shirts) on other servers. It works by putting some of the shoes and some of the t-shirts on one machine, and doing the same for the rest of the entities.
Another solution for high-volume data management is using microservices, which is more similar to your example. I mean having one service for shoes and another service for t-shirts. With microservices you divide your code and data into different projects and different application and database servers, so you can deal with the scalability of different parts of your data differently.
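The sharding scheme described above can be sketched as key-based routing: every entity type is spread across all shards by key, rather than dedicating a shard to one entity type. A minimal, assumption-laden sketch:

```java
// Hash-based shard routing: the same key always lands on the same shard,
// and different keys (some shoes, some t-shirts) spread across all shards.
class ShardRouter {
    private final int shardCount;

    ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    // floorMod keeps the result in [0, shardCount) even for negative hashes.
    int shardFor(String entityKey) {
        return Math.floorMod(entityKey.hashCode(), shardCount);
    }
}
```

Real systems usually prefer consistent hashing or range-based sharding so that adding a shard does not remap most keys, but the routing idea is the same.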

Java web - sessions design for horizontal scalability

I'm definitely not an expert Java coder. I need to implement sessions in my Java Servlet-based web application, and as far as I know this is normally done through HttpSession. However, this method stores the session data in the local filesystem, and I don't want such a constraint on horizontal scalability. Therefore I thought of saving sessions in an external database that the application communicates with through a REST interface.
Basically, in my application there are users performing some actions such as searches. Therefore what I'm going to persist in sessions is essentially the login data and the metadata associated with searches.
As the main data storage I'm planning to use a graph NoSQL database. The question is: given that I could also use another database of another kind for sessions, which architecture fits this kind of situation best?
I currently thought of two possible ways. The first one uses another DB (such as an SQL DB) to store session data. This way I would have a more distributed workload, since I'm not using the main storage for sessions as well. Moreover, I'd also have a more organized environment, with session state variables and persistent ones not mixed up.
The second way instead consists of storing all information related to any session in the "user node" of the main database. The session ID would then be just a "shortcut" for authentication. This way I don't have to rely on a second database; however, I move all the workload to the main DB, mixing the session data with the persistent data.
Is there any standard general architecture I can take as a reference? Do I miss some important point which should constrain my architecture?
Your idea to store sessions in a different location is good. How about using an in-memory cache like Memcached or Redis? Session data is generally not long-lived, so you have options other than a full-blown database. Memcached and Redis can both be clustered and scale horizontally.
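To illustrate the shape of such an external session store: in the sketch below a ConcurrentHashMap stands in for a shared cache like Redis (where you would use SETEX/GET with a TTL instead), so every web server node reads and writes the same store rather than local state. The class and method names are invented:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a session store with expiry, mimicking a TTL'd cache entry.
class SessionStore {
    private static final class Entry {
        final String data;
        final long expiresAtMillis;
        Entry(String data, long expiresAtMillis) {
            this.data = data;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    // Equivalent in spirit to Redis SETEX: store a value with a time-to-live.
    void put(String sessionId, String data, long ttlMillis) {
        cache.put(sessionId, new Entry(data, System.currentTimeMillis() + ttlMillis));
    }

    // Returns null for missing or expired sessions, like an expired Redis key.
    String get(String sessionId) {
        Entry e = cache.get(sessionId);
        if (e == null || e.expiresAtMillis < System.currentTimeMillis()) {
            cache.remove(sessionId);
            return null;
        }
        return e.data;
    }
}
```

In a Servlet container you could also avoid hand-rolling this: Spring Session can back HttpSession with Redis transparently.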
