Is it meaningful to create your own backend in Java? - java

In my project I have several applications that all need a database connection (all "apps" run on the same server). Now my question is, which is better:
one "backend" that the apps query through Netty or something similar, and that holds the one and only MongoDB connection and the Redis cache,
or
each app having its own MongoDB connection and a shared global Redis cache?
Thanks in advance
TG
//edit
All applications belong to the same project, so they will need the same data.

I would suggest writing separate backends for each application, because tomorrow you might have different connection requirements for each one. For example, one application might decide it doesn't want to use MongoDB at all, or one application might want more connections and become a noisy neighbour for the others. That is, unless you are willing to write a full policy-based server that can cater to the unique requirements of each application.
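To illustrate that point, here is a minimal sketch (class name, hosts, and pool sizes are invented for the example; it assumes the MongoDB Java driver and Jedis) of a per-application backend that owns and tunes its own connections, so one noisy neighbour cannot starve the others:

    import com.mongodb.ConnectionString;
    import com.mongodb.MongoClientSettings;
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;

    // Hypothetical per-application backend: each app tunes its own pools,
    // so one "noisy neighbour" cannot exhaust the connections of the others.
    public class ReportingBackend {

        private final MongoClient mongo;
        private final JedisPool redis;

        public ReportingBackend() {
            MongoClientSettings mongoSettings = MongoClientSettings.builder()
                    .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                    .applyToConnectionPoolSettings(pool -> pool.maxSize(20)) // this app's own limit
                    .build();
            this.mongo = MongoClients.create(mongoSettings);

            JedisPoolConfig redisConfig = new JedisPoolConfig();
            redisConfig.setMaxTotal(8); // another per-app policy decision
            this.redis = new JedisPool(redisConfig, "localhost", 6379);
        }

        public void shutdown() {
            mongo.close();
            redis.close();
        }
    }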

Related

Different applications to the same database

I have 3 different applications
ASP.NET web application
Java Desktop application
Android Studio mobile application
These 3 applications use the same database, and they need to connect from any part of the world with an internet connection. They share almost all the information, so if you change something in one application, the information has to be updated in the other 2 applications.
I have the database on a physical server and I want to know how best to make this connection.
I have searched but couldn't find out whether I should connect directly to the SQL Server instance, use a web service, or something like that.
I hope someone could help.
Thank you.
I believe the best way is to first create a Web API layer (REST/SOAP) that will be used to perform all the relevant operations on the centralized DB. Once that is set up, any of your applications, written in any language, can use the exposed web API methods to manipulate the data of the same DB.
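As a rough sketch of that idea (endpoint paths and class names are invented; it assumes Spring Boot for the Java side, though any web framework would do), each client would call HTTP endpoints like these instead of opening its own database connection:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.*;

    // Minimal REST facade over the centralized database; the ASP.NET, desktop,
    // and Android clients all talk to these endpoints over HTTP.
    @SpringBootApplication
    @RestController
    @RequestMapping("/api/items")
    public class ItemApi {

        public record Item(long id, String name) {}

        @GetMapping("/{id}")
        public Item get(@PathVariable long id) {
            // Real code would read the row from the shared database here.
            return new Item(id, "example");
        }

        @PostMapping
        public Item create(@RequestBody Item item) {
            // Real code would insert into the shared database, so every client sees the change.
            return item;
        }

        public static void main(String[] args) {
            SpringApplication.run(ItemApi.class, args);
        }
    }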
If you are looking at a global solution - will you have multiple copies of the applications in different parts of the world as well?
In this scenario you should be looking at a cloud-hosted database with some form of geo-replication so that you can keep latency to a minimum.
There are no restrictions on the number of applications that can connect to a specific database - you do not have to create a different database for each and you may be able to reuse Stored Procedures between applications if they perform the same task.
I would, however, look at the concept of schemas: any database objects that are specific to one app should be separated from the others, so put them in a schema for "App1". Shared objects can live in a shared schema, as in the sketch below.
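A hedged JDBC sketch of that layout (SQL Server syntax assumed; schema, table, and credential names are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SchemaSetup {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the centrally hosted database.
            String url = "jdbc:sqlserver://db.example.com:1433;databaseName=Central";
            try (Connection conn = DriverManager.getConnection(url, "appUser", "secret");
                 Statement stmt = conn.createStatement()) {
                // Objects used only by one application live in that app's schema...
                stmt.executeUpdate("CREATE SCHEMA App1");
                stmt.executeUpdate("CREATE TABLE App1.UserPreferences (UserId INT, Theme NVARCHAR(50))");
                // ...while objects shared by all three clients sit in a shared schema.
                stmt.executeUpdate("CREATE SCHEMA Shared");
                stmt.executeUpdate("CREATE TABLE Shared.Orders (OrderId INT PRIMARY KEY, CreatedAt DATETIME2)");
            }
        }
    }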

Find out who's using Redis

We have one Redis for our company and multiple teams are using it. We are getting a surge of requests and nobody seems to know which application is causing it. We have only one password that goes around the whole company and our Redis is secured under a VPN so we know it's not coming from the outside.
Is there a way to know who's using Redis? Maybe we can pass in some headers with the connection from every app to identify who makes the most requests, etc.
We use Spring Data Redis for our communication.
This question is too broad since different strategies can be used here:
Use the Redis MONITOR command. It is basically a built-in debugging tool that streams every command executed by Redis (it adds noticeable overhead, so it is best used only for short debugging sessions).
Use some kind of intermediate proxy. Instead of routing all the commands directly to Redis, route everything through a proxy that does some processing, such as counting commands per calling host or per command type, depending on what you want.
This is still a configuration-level solution, so you won't need any changes at the application level.
Since you have Spring Boot, you can use its Micrometer metering integration. This way you can create a counter or gauge that is updated on each request to Redis. If you also export the metering data to a tool like Prometheus, you'll be able to create a dashboard, say in Grafana, to see the whole picture. Micrometer can also integrate with other products; Prometheus/Grafana is only an example and you can choose any other solution (maybe your organization already has something like that).
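A minimal sketch of that last option (metric and tag names are invented; it assumes a Spring Data Redis RedisTemplate and a Micrometer MeterRegistry are already configured as beans):

    import io.micrometer.core.instrument.Counter;
    import io.micrometer.core.instrument.MeterRegistry;
    import org.springframework.data.redis.core.RedisTemplate;
    import org.springframework.stereotype.Component;

    // Thin wrapper that counts Redis calls per application, so a Prometheus/Grafana
    // dashboard can show which service is generating the surge.
    @Component
    public class MeteredRedisClient {

        private final RedisTemplate<String, String> redis;
        private final Counter redisCalls;

        public MeteredRedisClient(RedisTemplate<String, String> redis, MeterRegistry registry) {
            this.redis = redis;
            this.redisCalls = Counter.builder("redis.client.calls")   // hypothetical metric name
                    .tag("application", "inventory-service")          // identifies the calling app
                    .register(registry);
        }

        public String get(String key) {
            redisCalls.increment();
            return redis.opsForValue().get(key);
        }

        public void set(String key, String value) {
            redisCalls.increment();
            redis.opsForValue().set(key, value);
        }
    }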

Java distributed client-server application + RDBMS and concurrency issues

In the context of a university project, we have to develop a Java distributed application with these requirements:
The application will follow the classic client-server schema, with multiple clients connecting to a central server on a different machine, which also hosts an RDBMS to which the server connects
The relational database we must use is PostgreSQL (latest version)
Both client and server must be written in Java
We must use native JDBC to access the database (we can't use frameworks like Spring)
DISCLAIMER: please understand that we are just a group of students, this is our first big project involving all these aspects, and we aren't experts by any means, so please be patient with us :) (also, English is not our first language, sorry for any mistakes you might find)
We are currently in the design phase of the application (class diagrams, sequence diagrams etc) and we're stuck with a possible concurrency problem with the database:
Ideally our server would listen for requests and, for each client that logs into the application, launch a dedicated thread that provides the implemented services to that user (implementing a proxy-skeleton pattern with basic socket programming). Each of these service-provider threads, upon completing the requested task, should update/insert/delete data in the database. Here is the problem: how should we manage the concurrency here?
We tried to search the internet for this kind of issue and we found some things but we're still very confused:
Since we actually interact with the database from one single central server (with one admin profile), we could implement a queue system for the various transactions coming from the different threads we launch
We could manage concurrency at the database level with a well-known mechanism such as MVCC, which is apparently a lot more complicated
Ideally we would like reads not to block other reads or writes, and writes to block only other writes (which seems to be the case with MVCC). Which alternative would be best? Are there any other options we could implement within the restrictions mentioned above? Thanks in advance for any suggestions
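To make the MVCC option concrete, here is a hedged sketch (table, credentials, and class names are invented) of what each per-client thread could do with plain JDBC: open its own connection, wrap the unit of work in a transaction, and let PostgreSQL's MVCC handle concurrent readers and writers under the default READ COMMITTED isolation level:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // One instance per connected client; with PostgreSQL's MVCC, readers never
    // block writers, and writers only block other writers touching the same rows.
    public class ClientServiceWorker implements Runnable {

        private static final String URL = "jdbc:postgresql://localhost:5432/projectdb";

        @Override
        public void run() {
            try (Connection conn = DriverManager.getConnection(URL, "appserver", "secret")) {
                conn.setAutoCommit(false); // group the statements into one transaction
                try (PreparedStatement update = conn.prepareStatement(
                        "UPDATE accounts SET balance = balance - ? WHERE id = ?")) {
                    update.setInt(1, 10);
                    update.setLong(2, 42L);
                    update.executeUpdate();
                    conn.commit();         // make the change visible to other threads
                } catch (SQLException e) {
                    conn.rollback();       // undo partial work on failure
                    throw e;
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }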

How to connect different DB with single application across multiple users

So I have a problem. Currently my application connects to a single database and supports multiple users, so for different landscapes we deploy entirely separate applications.
I need a solution where my application stays the same (a single WAR deployment) but is able to connect to different DBs across different landscapes.
For example, a user in the UK uses the same application and the underlying DB is the UK one, and then another user logs in from Bangladesh and sees the data of the DB schema for Bangladesh, and so on.
Currently we create JDBC connections in a connection pool built in Java and use it throughout the application. We also load static data into hash maps during server start-up. But the same would not be possible with multiple DBs, since one would overwrite the other's static data.
I have been scratching around here and there; if someone can point me in the right direction, I would be grateful.
You have to understand that your application start-up and a user's geography are not connected attributes. You simply need to switch to / pick the correct DB connection while doing CRUD operations for a user of a particular geography.
So in my opinion your app's memory requirement is going to be bigger now (than before), but the rest of the setup would be simple.
At app start-up, you need to initialize DB connection pools for all databases and load static data for all geographies, and then pick the connection and static data according to the logged-in user's geography.
There might be multiple ways to implement this switching/choosing logic, and it depends very much on what frameworks and libraries you are using.
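One framework-free way to sketch that registry (region keys, class names, and the static-data shape are all invented; something like Spring's AbstractRoutingDataSource could play a similar role if you are on Spring):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.sql.DataSource;

    // Holds one connection pool and one static-data cache per landscape,
    // filled once at start-up and then picked per logged-in user's region.
    public class RegionalDataSourceRegistry {

        private final Map<String, DataSource> poolsByRegion = new ConcurrentHashMap<>();
        private final Map<String, Map<String, Object>> staticDataByRegion = new ConcurrentHashMap<>();

        public void register(String region, DataSource pool, Map<String, Object> staticData) {
            poolsByRegion.put(region, pool);
            staticDataByRegion.put(region, staticData);
        }

        public Connection connectionFor(String region) throws SQLException {
            DataSource ds = poolsByRegion.get(region);
            if (ds == null) {
                throw new IllegalArgumentException("No database configured for region " + region);
            }
            return ds.getConnection();
        }

        public Map<String, Object> staticDataFor(String region) {
            return staticDataByRegion.getOrDefault(region, Map.of());
        }
    }

Every DAO would then ask this registry for a connection and for static data using the current user's region (taken from the session or login context), instead of using one global pool and one global hash map.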

Creating a MySQL DB schema using Hibernate with a hosting provider: pros, cons, and practices

Context: I'm working on a Spring MVC project and using Hibernate to generate the database schema from my annotated classes. It uses a MySQL server running on my local machine. I'm aiming to get hosting and make my website live.
Do I use the hosting provider's MySQL server in that case to run my database?
What are the pros and cons? Would they normally do DB backups, or is it worth doing that myself and storing them on my machine?
Am I going to lose data in case of a server reboot?
Thanks in advance. I'm new to this, so feel free to moderate the question if it sounds unreasonable.
Much of this will depend on how you host your site. I would recommend looking into Cloud Foundry, a free Platform as a Service (PaaS) provided by the folks at VMware. If you're using Spring to set up Hibernate, Cloud Foundry can automatically hook your application into the MySQL service it provides.
In any case, your database will most likely reside on the host's server, unless you establish a static IP for your machine and expose the database service yourself. At that point, you might as well host your own site.
Where the data is stored depends on the type of host. For instance, if you use a PaaS, they choose where on their servers your database lives; it is transparent to you. If you go with a dedicated server, you will most likely have to install the database software yourself.
Most databases backing websites provide persistent storage or can be configured to do so. I'm not sure why your MySQL database loses data after a restart; out of the box it should not. If you're using Hibernate to auto-generate your DDL, I could see the data being blown away at each restart, and you would want to move away from that configuration.
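For that last point, a hedged sketch of the setting usually responsible (the hibernate.hbm2ddl.auto property is standard Hibernate; the URL and credentials are placeholders): values of create or create-drop rebuild the schema on every start-up, while update or validate leave existing data alone:

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class HibernateBootstrap {
        public static SessionFactory build() {
            Configuration cfg = new Configuration();
            // "create"/"create-drop" wipe the schema (and the data) on every start-up;
            // "update" keeps existing tables and rows, "validate" only checks the mapping.
            cfg.setProperty("hibernate.hbm2ddl.auto", "update");
            cfg.setProperty("hibernate.connection.url", "jdbc:mysql://localhost:3306/mysite");
            cfg.setProperty("hibernate.connection.username", "appuser");
            cfg.setProperty("hibernate.connection.password", "secret");
            return cfg.buildSessionFactory();
        }
    }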
1. Do I use the hosting provider's MySQL server in that case to run my database?
Yes. In your application you only change the JDBC connection URL and credentials.
There are other details about the level of service that you want for the database: security, backups, uptime. But that depends on your hosting provider and your application's needs.
2. Is it stored somewhere on the server?
That depends on how your hosting provider hosts the database. The usual approach is to have the web server on one machine and the database on another machine inside the VPN.
From the Hibernate configuration perspective, it is just a matter of changing the JDBC URL. But other quality attributes will be affected by your provider's infrastructure, and that depends on the level of service you contract.
3. Should I somehow declare that the data must be stored, for example, in a separate file on the server?
Probably not. If your provider gives you a database service, what you choose is the level of service: storage, uptime... they take care of providing the infrastructure. And yes, they usually do that using a separate machine for the database.
4. Am I going to lose data in case of a server reboot? (As I do, for example, when I restart the server on my local machine.)
That depends on the kind of hosting you are using. By the way, why do you lose the data on reboot on your local machine? Probably you are re-creating the database each time (check your Hibernate usage), because the main feature of any database is, well... persistent storage :)
If you host your application in a virtual machine and install MySQL in that VM... yes, you are going to lose data on reboot, because in this kind of hosting (like Amazon EC2) you host a VM for CPU execution and the local disk data is transient. If you want persistent data you have to use a database located on another machine (this is done this way for architectural reasons, and cloud providers like Amazon also give you different storage services).
But if the database is provided as a service, no: a persistent database is the usual level of service you should expect from a provider.
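To show how small the application-side change is (file name, property keys, host, and credentials below are all invented), the connection details can be externalized so that moving from the local MySQL to the provider's database is purely a configuration edit:

    import java.io.InputStream;
    import java.util.Properties;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    // Loads the JDBC URL and credentials from a properties file on the classpath,
    // so the same WAR runs against the local MySQL or the hosting provider's one.
    public class ProviderAwareBootstrap {
        public static SessionFactory build() throws Exception {
            Properties db = new Properties();
            try (InputStream in = ProviderAwareBootstrap.class
                    .getResourceAsStream("/database.properties")) {
                db.load(in);
            }
            Configuration cfg = new Configuration();
            cfg.setProperty("hibernate.connection.url", db.getProperty("db.url"));
            cfg.setProperty("hibernate.connection.username", db.getProperty("db.user"));
            cfg.setProperty("hibernate.connection.password", db.getProperty("db.password"));
            return cfg.buildSessionFactory();
        }
    }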
