Two Spring apps each use JPA to control a single database - java

Two Spring apps each use JPA to control a single database.
Both Spring apps must use the same database.
Will spring.jpa.hibernate.ddl-auto=update work properly?

In my opinion, having two applications directly use the same database is poor design.
Here is a quote from this software engineering answer:
The more applications use the same database, the more likely it is that you hit performance bottlenecks and that you can't easily scale the load as desired. SQL databases don't really scale. You can buy bigger machines, but they do not scale well in clusters!

Maintenance and development costs can increase: development is harder if an application needs to use database structures which aren't suited to the task at hand but have to be used because they are already present. It's also likely that adjustments to one application will have side effects on other applications ("why is there such an unnecessary trigger??!" / "We don't need that data anymore!"). It's already hard with one database for a single application, when the developers don't/can't know all the use cases.

Administration becomes harder: which object belongs to which application? Chaos rising. Where do I have to look for my data? Which user is allowed to interact with which objects? What can I grant whom?

Upgrading: you'll need a version that is the lowest common denominator for all applications using the database. That means that certain applications won't be able to use powerful features; you'll have to stick with older versions. It also increases development costs a bit.

Concurrency: can you really be sure that there are no chronological dependencies between processes? What if one application modifies data that is outdated or should have been altered by another application first? What about different applications working on the same tables concurrently?
What I would suggest is to create a service layer that is solely responsible for database access. This service can then be reached in different ways (a REST web service might be an option).
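As a minimal sketch of that idea, assuming Spring Boot with Spring Data JPA (the Item and ItemRepository names are illustrative, not from the question): one application owns all database access and exposes it over REST, and every other application calls these endpoints instead of opening its own JPA session.

```java
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.server.ResponseStatusException;

@RestController
@RequestMapping("/api/items")
public class ItemController {

    private final ItemRepository repository; // assumed Spring Data JPA repository

    public ItemController(ItemRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/{id}")
    public Item get(@PathVariable Long id) {
        // Return a plain 404 instead of leaking persistence details to callers
        return repository.findById(id)
                .orElseThrow(() -> new ResponseStatusException(HttpStatus.NOT_FOUND));
    }

    @PostMapping
    public Item create(@RequestBody Item item) {
        return repository.save(item);
    }
}
```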

@Vinod Bokare's comment is correct: you should create a JAR of the POJOs and use it in both projects.
And @Heejeong Jang, it will be okay if each of your Spring apps has different table areas for insert, update, and delete.
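As for the ddl-auto question itself: update is not designed for two applications racing to alter the same schema at startup. A common arrangement (an assumption on my part, not something stated above) is to make exactly one app the schema owner and have the other only validate the mapping:

```properties
# App A, the single "schema owner" (shares the entity/POJO JAR)
spring.jpa.hibernate.ddl-auto=update

# App B never alters the schema; it only checks its mapping against it
spring.jpa.hibernate.ddl-auto=validate
```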

Related

Google Cloud Platform: are my architectural solutions correct?

I'm trying to make simple application and deploy it on Google Cloud Platform Flexible App Engine, which will contain two main parts:
Front end application (simple Web UI based on Java 8 (Spring + Thymeleaf) with OAuth authorization from different external sites)
Back end application (monitoring several resources in separate threads, based on logged in users and reacting to their input in a certain way (behavioral changes))
Initially I was planning to make them one app, but I think the potentially heavy background processing may cause failures in the front-end part, and the App Engine docs say that deployed services behave similarly to a microservice architecture.
My questions are:
Do I really need to separate front end from back end, if I need to react to user input as fast as possible? (but delays up to 2 seconds aren't that critical)
If I do need to separate them (and I strongly believe that I do) - how do I set up the interaction between the applications?
Each resource must be tracked by exactly one thread on the back end - what are the best practices for this? I thought about having an SQL table with a list of acquired resources, but the flaw I see there is that if an instance fails I will need to do some kind of cleanup on that table and redetermine which resources are actually acquired.
Your proposed architecture, separating the two into different services, sounds like the right approach for the following reasons:
Can deploy code for each separately, roll back versions separately, and split traffic separately for experiments or phased rollouts.
Can adjust machine types and memory allocations for each service to better suit its needs. If you're doing work that is memory intensive on the backend, you can adjust that service's settings to allocate more memory per instance.
Can scale each type of service independently based on demand, which should result in better utilization and less waste. This should also lower your overall spending compared with a one-size-fits-all approach in a single monolithic service.
You can mix different runtime environments across services. For example, you can mix language runtimes within a project, or you could even mix between standard and flexible environments. Say your front-end code is more cost efficient in standard: designate that service as a standard environment service and your backend as a flexible environment service. Or say you need a custom Dockerfile with Perl in it: you could make that a flexible environment custom runtime and keep your front-end in Java 8.
You can still share common services like Cloud SQL, PubSub, Cloud Tasks (currently in alpha) or Redis for in-memory caching. Your workers don't need to reside in App Engine; they could live in a different product if that better suits your needs.
Overall, you get much better control over your application by splitting it apart. The biggest benefit likely comes down to spending only on what each part of the application actually needs.
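On the interaction question, here is a minimal sketch using the google-cloud-pubsub Java client mentioned above; the project and topic names are placeholders, and this is one option rather than the only one. The front-end publishes user input as a message, and the back-end service subscribes to the topic and reacts.

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class FrontendNotifier {

    public static void main(String[] args) throws Exception {
        // "my-project" and "user-input" are placeholder names.
        TopicName topic = TopicName.of("my-project", "user-input");
        Publisher publisher = Publisher.newBuilder(topic).build();
        try {
            PubsubMessage message = PubsubMessage.newBuilder()
                    .setData(ByteString.copyFromUtf8("{\"userId\":42,\"action\":\"start\"}"))
                    .build();
            publisher.publish(message).get(); // wait until Pub/Sub accepts it
        } finally {
            publisher.shutdown();
        }
    }
}
```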
I think that you are likely to be able to deploy everything as an App Engine app, unless you use some exotic Java libraries that are not whitelisted. It could still be good to deploy it on Compute Engine for increased configurability and versatility.
You can create one front-end instance and one back-end instance in Compute Engine and divide the resources between them like that. Google's documentation has an example of how to do that.

Is Hibernate recommended in a heterogeneous environment?

Is Hibernate less effective in some environments, like a polyglot company where several distributed systems are accessing the same DB? If Acme Company has a Python website reading from and writing to the same database as a Java web app (web services), will Hibernate be a poor choice for the Java web services app? In other words, do Hibernate caching and session management assume all DB transactions for Acme will go through Hibernate? Do I need to be sensitive to certain ORM concerns at a company where several programming languages are writing a lot of updates to the same data concurrently? Is Hibernate more advantageous for a strict Java shop using a Java EE app server for nearly all of its business operations?
Hibernate does have some performance overhead over pure JDBC, but if you use it cautiously it should be fine for most use cases.
Hibernate does not assume that it handles all operations itself. The only thing I would worry about is the second-level cache, if you need it. You won't have a way to keep it in sync if other apps access the same DB (but you don't have to use it).
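As a minimal sketch of that trade-off: leave the second-level cache off and lean on optimistic locking instead, which detects concurrent modification regardless of who made it. Note the caveat in the comment; this is a suggestion, not something Hibernate gives you for free.

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private String owner;

    // Any JPA provider increments this column on every update and fails the
    // commit with an OptimisticLockException if the row changed underneath it.
    // Caveat: the non-Hibernate writers (e.g. the Python app) must also bump
    // this column for the detection to cover their writes.
    @Version
    private long version;
}
```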
Having said that, I must add that having multiple apps write to the same DB is not a good practice. I'd rather create one app that handles this DB and have others communicate with this one - this way it's much easier to keep the database consistent.

Dropwizard: Using migrations for a cloud based application

We have an application exposing RESTful web services, and we are targeting this application for deployment in the cloud. We need a one-time setup of the database schema for the application on some database instance in the cloud.
Can someone tell me if it is a good approach to use migrations with Liquibase for the one-time database schema setup? We will use ALTER scripts in case DDL modifications are needed in future releases.
Someone stop me if I'm wrong, but the fact that your application will be deployed in the cloud only means it will be on a virtual server hosted by an external company, which in the case of your question doesn't change anything.
So the question is really: "is the database versioning system Liquibase worth it on a database whose schema is meant to be stable?"
In absolute terms it could be considered overkill, and a lot of big companies still manage database schema evolution with bare SQL scripts. You could simply export the final schema script of your development database and go with it.
But since you already know Liquibase, the overhead is pretty cheap, and the comfort it provides if you happen to have to modify your schema later is important.
So yes, I think it's a pretty good practice (safer than hand-applying scripts under the stress of a production server problem) which costs one or two hours (given you know how to use the tool) and can save dozens when you have to hotfix a production database.
I assume that you will be deploying this application in more than one place - not just production in the cloud, but also development servers, test servers, staging, etc. If that is true, then it seems to me that you definitely want to have a process around how you make changes to the database schema.
For me, over the course of my 20+ years in software development, I have seen several things that I use now that were not in common use when I started but that have now become 'baseline' practices on any project I work on. Yeah, I used to work without source control, but that is an absolute must now. I used to write software without tests, but not any more. I used to work without continuous integration, but that is yet another practice that I consider a must-have. The most recent addition to my must-have list is some sort of automated database migration process.
Also, since Liquibase is built into Dropwizard, I don't see any reason not to use it.
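For reference, the wiring is only a few lines. A sketch, assuming the dropwizard-migrations module is on the classpath and MyConfiguration is a placeholder for your configuration class exposing a DataSourceFactory:

```java
import io.dropwizard.Application;
import io.dropwizard.db.DataSourceFactory;
import io.dropwizard.migrations.MigrationsBundle;
import io.dropwizard.setup.Bootstrap;
import io.dropwizard.setup.Environment;

public class MyApplication extends Application<MyConfiguration> {

    @Override
    public void initialize(Bootstrap<MyConfiguration> bootstrap) {
        // Wires Liquibase into the app; MyConfiguration is assumed to be your
        // Dropwizard configuration class with a database section.
        bootstrap.addBundle(new MigrationsBundle<MyConfiguration>() {
            @Override
            public DataSourceFactory getDataSourceFactory(MyConfiguration configuration) {
                return configuration.getDataSourceFactory();
            }
        });
    }

    @Override
    public void run(MyConfiguration configuration, Environment environment) {
        // application resources are registered here
    }
}
```

You can then apply the schema with java -jar your-app.jar db migrate config.yml, either as the one-time setup step or as part of each deployment.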

Different scenarios on distributed processing [closed]

Closed. This question is off-topic and is not currently accepting answers. Closed 9 years ago.
I have a web application - a simple web application archive file - that has several storage adapters for different storage types, i.e. MongoDB and CouchDB. Using this application I can store/query data in those databases through the web services I have written. Currently I can have only a single database instance per application; I cannot have more than one, which prevents parallel processing.
What I want is to run my application on several machines. And on top of those, I want to write a UI enabling the client to store/query the data without knowing the database types/addresses.
I have two different scenarios and wanted to ask you which one of them is a better way to do it and why.
1) Let's say I have three servers, each running a single database - CouchDB. I can upload my application to those servers, and then, with the help of my UI or a layer above my application, I can define a map of servers so that I can store and query the data.
In this scenario the database and the application live on the same server, so both are remote.
2) Let's say the three servers are still running remotely, but in this case my application is local and I have enabled it to accept several database instances.
I actually prefer the first one, since in that case I won't need to extend my application, but I wanted to hear what you think about it. I would be glad if you could provide some sources on these kinds of distributed scenarios - I have no experience at all with this kind of thing.
Please take a look at this article, which describes the Instagram architecture. It's quite interesting to see how 3 engineers handled 15-25 million users with 150 million photos per day.
I would also recommend an interesting blog that describes different scalability solutions for popular web resources:
Facebook: An Example Canonical Architecture for Scaling Billions of Messages
Tumblr Architecture - 15 Billion Page Views a Month and Harder to Scale than Twitter
There is a lot of information there.
But the most common themes are:
keep everything as simple as possible
use well-known, mature technologies which have good community support
scale each part of the application separately
And even though you can find explanations of each of these, I'd like to focus on the last one, given your requirements.
When you want to make your application horizontally scalable, you need to consider each cluster as a separate logical module, regardless of the actual number of servers involved in the cluster. E.g. for your web application you can set up several instances of that application and put a load balancer in front of them. Users then access a single entry point (e.g. http://mysite.com), while the actual instance serving them may be arbitrary.
If the instances need to collaborate with each other, then you need to avoid in-memory storage and instead use "key-value" storages such as Redis, along with message brokers such as ActiveMQ, RabbitMQ, or cloud versions like Iron.io.
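For example, a minimal sketch with the Jedis client for Redis (the host and key names are placeholders): every application instance reads and writes the shared store instead of its own heap, so any instance behind the load balancer can serve the next request.

```java
import redis.clients.jedis.Jedis;

public class SharedSessionStore {

    public static void main(String[] args) {
        // "redis-host" is a placeholder; all app instances point at the same Redis.
        try (Jedis jedis = new Jedis("redis-host", 6379)) {
            jedis.setex("session:42", 1800, "{\"user\":\"alice\"}"); // 30-minute TTL
            System.out.println(jedis.get("session:42")); // readable from any instance
        }
    }
}
```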
The data storage also needs to be treated as a single entry point, e.g. a sharded cluster (MongoDB supports auto-sharding out of the box, and most NoSQL solutions - CouchDB, HBase - have it as well).
So basically you call some shard controller, which redirects you to the corresponding instance according to a specific shard key. But please note that sharding can be quite non-trivial, so in most cases when you deal with an RDBMS you need to rely on vertical scalability.
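To make the idea concrete, here is a deliberately simplified sketch of such a shard controller (real systems add consistent hashing, rebalancing, and replication):

```java
// Routes each shard key to one of N database instances by hashing.
public final class ShardRouter {

    private final String[] shardUrls;

    public ShardRouter(String... shardUrls) {
        this.shardUrls = shardUrls;
    }

    public String shardFor(String shardKey) {
        // floorMod keeps the index non-negative even for negative hash codes
        int index = Math.floorMod(shardKey.hashCode(), shardUrls.length);
        return shardUrls[index];
    }
}
```

A call like new ShardRouter("db1:5984", "db2:5984", "db3:5984").shardFor("user:42") always sends the same key to the same CouchDB instance.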
Considering everything above, I would suggest a structure along these lines.
Ideally, all the servers should be physically close to each other (e.g. in the same data centre). But if you are going to use your application worldwide, then you need to distribute your instances to reduce latency. Here is a quite interesting lecture about server configuration (even though it's about MongoDB, I believe some of the approaches might be helpful in your case as well): https://www.youtube.com/watch?v=TZOH92mZIN8
But if you do not need all your servers for distributed "map/reduce" computing, and getting a result only requires one particular server instance, then I believe scenario #1 is fairly suitable and better for your needs (provided you set up a load balancer in front of your instances).

What is the simplest solution to integrate 2 apps within a Tomcat server?

I'm new to this and am looking at Apache Camel, Spring Integration, and even Terracotta.
I'm looking at sharing common data like users/groups/accounts/permissions and common business data like inventory/product details/etc.
Any example will be really appreciated.
How about database-level integration?
Have both applications access the same relational database. Those are built for that kind of task.
To do that, the two applications can use a shared library (of which, for simplicity's sake, each one will have a copy in its WEB-INF/lib).
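For example, a plain-JDBC sketch of a class that could live in that shared library (all names here are illustrative): both web apps bundle the same JAR and therefore read users through one code path.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Packaged in the shared JAR copied into each app's WEB-INF/lib.
public class UserDao {

    private final String jdbcUrl;

    public UserDao(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    public String findUserName(long id) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name FROM users WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```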
You should consider creating a full blown EAR instead, if you want this to be web container independent.
As different web applications have different classloaders, you cannot just create an object in one web app that is immediately usable by another. Hence you need a common classloader which knows about the shared classes, and - to be 100% compliant - these classes may not be in either web app's WEB-INF/lib. This is hard to get right, and the result is fragile.
Therefore consider migrating to a web container which can deploy EARs instead, as they may contain several web applications sharing objects. I believe a good choice for starting would be JBoss.
I'm looking at sharing common data like users/groups/accounts/permissions and common business data like inventory/product details/etc.
Common data like users, groups, and permissions belong in a central LDAP or database. These are part of your Spring Security solution, and all apps can share those regardless of whether they're on the same app server or not.
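As a sketch of that central-directory option with Spring Security (the DN patterns and LDAP URL are placeholders, and this assumes the spring-security-ldap module is available): each application can carry the same few lines of configuration and authenticate against the one shared directory.

```java
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        // Every app points at the same directory, so users, groups, and
        // permissions stay centralized instead of living in each database.
        auth.ldapAuthentication()
            .userDnPatterns("uid={0},ou=people")   // placeholder DN pattern
            .groupSearchBase("ou=groups")
            .contextSource()
            .url("ldap://ldap.example.com:389/dc=example,dc=com"); // placeholder URL
    }
}
```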
It can be argued that common business data like inventory, product details, etc. should be "owned" by a single service. It's the only one that can modify the data. Others can get access by querying the service, but it's the one that manages CRUD operations on those tables.
If you do this, you keep objects and systems from being coupled at the database level. You're trading increased network latency for looser coupling.
In theory, every application has its own memory space, but off the top of my head I can think of a number of methods for sharing information between applications.
If the amount of shared information is small, perhaps a direct approach is best. Set up a communication channel (web services are a bit of an overkill, but a good example) and have the applications request info from each other.
If there is massive sharing, perhaps the two applications should be reading from the same database or local file. Mind you, this brings up synchronization issues and gets you into the realm of locking and blocking. Tread lightly in this realm...
If you're new to this, one idea may be to build the classes that handle the common data and just build a separate servlet for each application.
This will at least get you started and make you more familiar with the technologies.
