How to connect to different DBs with a single application across multiple users - Java

So I have a problem. Currently my application connects to a single database and supports multiple users. So for different landscapes, we deploy separate applications altogether.
I need a solution where my application remains the same (a single WAR deployment) but is able to connect to different DBs across different landscapes.
For example, a user in the UK uses the application and the underlying DB is the UK one; subsequently another user logs in from Bangladesh and sees the data of the DB schema for Bangladesh, and so on.
Currently we create JDBC connections in a connection pool built in Java and use the same pool throughout the application. We also load static data into HashMaps during server start-up. But the same would not be possible with multiple DBs, since one would overwrite the other's static data.
I have been scratching around here and there; if someone can point me in the right direction, I would be grateful.

You have to understand that your application start-up and a user's geography are not connected attributes. You simply need to switch to / pick the correct DB connection while doing CRUD operations for a user of a particular geography.
So in my opinion, your app's memory requirement is going to be bigger now (than previously), but the rest of the setup would be simple.
At app start-up, you need to initialize DB connection pools for all databases and load static data for all geographies, and then use / pick the connection & static data as per the logged-in user's geography.
There might be multiple ways to implement this switching / choosing logic, and it very much depends on what frameworks & libraries you are using.
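A minimal sketch of that idea, assuming plain JDBC with HikariCP as the pool implementation; the Geography enum, the registry class and its method names are made up for illustration:

    import com.zaxxer.hikari.HikariDataSource;

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.EnumMap;
    import java.util.Map;

    // Hypothetical registry: one connection pool and one static-data cache per
    // geography, all created at application start-up.
    public class GeoDataSourceRegistry {

        public enum Geography { UK, BANGLADESH }

        private final Map<Geography, DataSource> pools = new EnumMap<>(Geography.class);
        private final Map<Geography, Map<String, Object>> staticData = new EnumMap<>(Geography.class);

        public void register(Geography geo, String jdbcUrl, String user, String password) {
            HikariDataSource ds = new HikariDataSource();    // assuming HikariCP as the pool
            ds.setJdbcUrl(jdbcUrl);
            ds.setUsername(user);
            ds.setPassword(password);
            pools.put(geo, ds);
            staticData.put(geo, loadStaticData(ds));         // one cache per geography
        }

        // Pick the pool that matches the logged-in user's geography.
        public Connection connectionFor(Geography geo) throws SQLException {
            return pools.get(geo).getConnection();
        }

        public Map<String, Object> staticDataFor(Geography geo) {
            return staticData.get(geo);
        }

        private Map<String, Object> loadStaticData(DataSource ds) {
            // load this geography's reference data into a map (details omitted)
            return Map.of();
        }
    }

With something like this, the only per-request decision is which geography key to look up, typically taken from the logged-in user's profile or session.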

Related

Different applications to the same database

I have 3 different applications
ASP.NET web application
Java Desktop application
Android Studio mobile application
These 3 applications share the same database, and they need to connect from any part of the world with an internet connection. They share almost all the information, so if you change something in one application it has to update the information in the other 2 applications.
I have the database on a physical server and I want to know how best to make this connection.
I have searched but couldn't find out whether I have to connect directly to the server with SQL Server, use a web service, or something like that.
I hope someone could help.
Thank you.
I believe the best way is to first create a Web API layer (REST/SOAP) that will be used to perform all the relevant operations on the centralized DB. Once that is set up, any of your applications, written in any language, can use the exposed web API methods to manipulate the data of the same DB.
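As a small sketch of such a layer, assuming Spring Boot (the Item record and the in-memory store are placeholders for the real access to the central DB):

    import org.springframework.web.bind.annotation.*;

    import java.util.Collection;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative REST resource that every client calls over HTTP instead of
    // opening its own database connection. The entity and storage are stand-ins.
    @RestController
    @RequestMapping("/api/items")
    public class ItemController {

        public record Item(long id, String name) {}                      // hypothetical shared entity

        private final Map<Long, Item> store = new ConcurrentHashMap<>(); // stand-in for the central DB

        @GetMapping
        public Collection<Item> findAll() {
            return store.values();
        }

        @PostMapping
        public Item create(@RequestBody Item item) {
            store.put(item.id(), item);   // in the real service this write goes to the shared database
            return item;
        }
    }

The ASP.NET, Java desktop and Android applications would then all go through these HTTP endpoints, so an update made from one client is immediately visible to the other two.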
If you are looking at a global solution - will you have multiple copies of the applications in different parts of the world as well?
In this scenario you should be looking at a cloud-hosted database with some form of geo-replication so that you can keep latency to a minimum.
There are no restrictions on the number of applications that can connect to a specific database - you do not have to create a different database for each and you may be able to reuse Stored Procedures between applications if they perform the same task.
I would, however, look at the concept of schemas - any database objects that are specific to one app should be separated from the others - so put them in a schema for "App1". Shared objects can go in a shared schema.
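As a small illustration of the schema idea (the connection URL, credentials and schema names below are made up), the per-app and shared schemas could be created once like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Hypothetical one-off setup: a schema per application plus a shared schema,
    // executed against the central database over JDBC.
    public class SchemaSetup {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://your-server;databaseName=central";   // placeholder URL
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement st = con.createStatement()) {
                st.execute("CREATE SCHEMA App1");     // objects only the ASP.NET app uses
                st.execute("CREATE SCHEMA App2");     // objects only the desktop app uses
                st.execute("CREATE SCHEMA Shared");   // tables and procedures every client shares
            }
        }
    }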

Update data in two different PostgreSQL database servers

I have 2 different applications (a web app and a desktop app) with different database servers but the same structure.
I want to have the same data in all databases, no matter where the user inserts/updates/deletes a record. This is the easiest for me, but I don't think it is optimal.
So, for example, if I insert a record in the desktop app, this record must be inserted into the web app server ("cloud") and vice versa.
I'm using Spring+Hibernate+PostgreSQL for the web app and JavaFX+Hibernate+PostgreSQL for the desktop app.
I'm considering 2 options at the moment:
Use sockets to send messages between servers every time a record has been inserted/deleted/updated.
Use triggers and a Foreign Data Wrapper in PostgreSQL. I'm not too familiar with this, so I don't know if I can do what I want with this option. Can they work together?
Is there another option? What do you think is best?
The simplest and maybe best solution is to have one central read-write database and several physical replication standbys.
The standby servers will be physical copies of the main database, and you can read from them (but not modify data).
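If the applications keep talking to PostgreSQL directly, one rough way to use such a setup from Java is to hold two DataSources and send writes to the primary while reads may go to a standby; the host names and the use of HikariCP below are assumptions, not part of the answer:

    import com.zaxxer.hikari.HikariDataSource;

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.SQLException;

    // Sketch: writes always go to the read-write primary, reads can be served by
    // a physical replication standby (which is a read-only copy).
    public class ReplicatedConnections {

        private final DataSource primary = pool("jdbc:postgresql://primary-host/appdb");
        private final DataSource standby = pool("jdbc:postgresql://standby-host/appdb");

        public Connection forWrite() throws SQLException {
            return primary.getConnection();          // inserts/updates/deletes
        }

        public Connection forRead() throws SQLException {
            Connection c = standby.getConnection();  // standbys cannot be modified
            c.setReadOnly(true);
            return c;
        }

        private static DataSource pool(String url) {
            HikariDataSource ds = new HikariDataSource();
            ds.setJdbcUrl(url);
            ds.setUsername("app");
            ds.setPassword("secret");
            return ds;
        }
    }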

Is it meaningful to create your own backend in Java?

In my project I have different applications that all need a database connection (all "apps" are running on the same server). Now my question is, which is better:
one "backend" that the apps call through Netty or something, which holds the one and only MongoDB connection and caches with Redis
or
every app has its own MongoDB connection and a global cache with Redis
Thanks in advance
TG
//edit
all applications are for the same project so they will need the same data
I would suggest you write a separate backend for each application, as tomorrow you might have different connection requirements for each one. For example, one application might decide it doesn't want to use MongoDB at all; another might want to use more connections and become a noisy neighbour for the others. The shared option only makes sense if you are willing to write a full policy-based server that can cater to the unique requirements of each application.

Handling dynamic DB connections in a REST-based website

I am building a Java REST-API-based website whose function is to connect to any user-entered database and get the schemas, tables, indexes, etc., and the user can pick whatever schemas / tables / indexes they want and send them to another system.
So the site takes the database details, then shows the schemas - the user selects the schemas they need - then the site brings back the corresponding tables, etc. So in the backend I have separate calls for getting schemas/tables/indexes.
I am using plain JDBC calls on the server to do this. Each time, I open a connection, get the metadata (schema/table/index), and close the connection. I think performance could be improved if I kept the database connection open between requests.
Since the database details are dynamic and each user is connecting to a different database, I cannot use the connection pool facility provided by the (Play) framework. Is there a better way to do this? Thanks in advance!
I am using Play Framework 2.x with AngularJS.
You can use a singleton or a static Map of JDBC DataSources and get the connection from it. The DataSource will manage the connection pool.
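A sketch of that idea, assuming HikariCP for the per-database pools; the cache key of URL plus user is only an illustration:

    import com.zaxxer.hikari.HikariDataSource;

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Singleton cache of DataSources keyed by the user-entered connection details,
    // so repeated metadata requests reuse pooled connections instead of reopening them.
    public final class DynamicDataSources {

        private static final Map<String, DataSource> POOLS = new ConcurrentHashMap<>();

        private DynamicDataSources() {}

        public static Connection connect(String jdbcUrl, String user, String password) throws SQLException {
            DataSource ds = POOLS.computeIfAbsent(jdbcUrl + "|" + user, key -> {
                HikariDataSource pool = new HikariDataSource();   // assuming HikariCP
                pool.setJdbcUrl(jdbcUrl);
                pool.setUsername(user);
                pool.setPassword(password);
                pool.setMaximumPoolSize(2);                       // keep the per-database footprint small
                return pool;
            });
            return ds.getConnection();
        }
    }

In practice you would also want to evict and close pools that have not been used for a while, since every distinct database a user enters keeps its own small pool alive.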

Multiple independent H2 databases within one JVM

Is it possible to start up and shut down multiple H2 databases within a JVM?
My goal is to support multi-tenancy by giving each user/account their own database. Each account has very little data. Data between the accounts is never accessed together, compared, or grouped; each account is entirely separate from the others. Each account is only accessed briefly once a day or a few times a month. So there are few upsides to housing the data together in a single database, and some serious downsides.
So my idea is that when a user logs in for a particular account, that account’s database is loaded. When that user logs out, or their web app session (Vaadin app) times out, that account’s database is closed, its data flushed to storage, and possibly a backup performed. This opening and closing would be happening for any number of databases in parallel.
Benefits include minimizing the amount of memory in use at any one time for caching data and indexes, minimizing locking and other contention, and allowing for smooth scaling.
I'm new to H2, so I'm not sure if its architecture can support this. I'm asking for a denial or confirmation of this capability, along with any tips or caveats.
Yes, it is possible to do so. Each database will contain its own mini-environment, with no possible pollution between databases.
You could, for example, use a JDBC URL based on the user ID or the user's login:
jdbc:h2:user1 in H2 1.3.x embedded mode
jdbc:h2:./user1 in H2 1.4.x embedded mode
jdbc:h2:tcp://localhost/user1 in tcp mode
You can use any naming convention for the database name, provided your OS allows it: user1, user2, etc., or literally the user's login name.
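For example, with the server mode suggested in the tips below, opening an account's database could look roughly like this (host, port and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Sketch: one H2 database per account, opened lazily when a user of that
    // account logs in and closed again when the session ends.
    public class PerAccountH2 {

        public static Connection openFor(String accountId) throws SQLException {
            String url = "jdbc:h2:tcp://localhost:9092/" + accountId;   // server (tcp) mode
            return DriverManager.getConnection(url, "sa", "");
        }

        public static void closeAndFlush(Connection con) throws SQLException {
            con.close();   // closing the last connection lets H2 flush and close that database
        }
    }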
Tips:
use the server mode rather than the embedded mode, allowing the same user multiple connections from multiple sessions/hosts
have a schema migrator (like Flyway) initialize each newly created DB (see the sketch after this list)
ensure you manage name collisions at the top level of your app, and possibly store these databases and corresponding logins in a dedicated database as well
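A sketch of the Flyway tip above (the migration location and credentials are assumptions):

    import org.flywaydb.core.Flyway;

    // Run the shared migration scripts against a newly created per-account database.
    public class AccountDbMigrator {
        public static void migrate(String jdbcUrl) {
            Flyway.configure()
                  .dataSource(jdbcUrl, "sa", "")            // same credentials as the account database
                  .locations("classpath:db/migration")      // assumed location of the shared SQL scripts
                  .load()
                  .migrate();
        }
    }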
Caveats:
do not use a connection pool as connections will be difficult to reuse
make sure IFEXISTS=TRUE is not used on the server
avoid using tweaks on the JDBC URL, like setting LOG=0, UNDO_LOG=0, etc.
I do not know if you'll have a limitation from your OS or the JVM on how many db files could be opened like this.
I do not know if such a setting can be tweaked; I could not find one in the manual pages.
Please refer to the H2 manual if in doubt about URL parameters.
