Tomcat with multiple instances of the same application - java

We're building a Java web application where each customer will have an instance of it with its own database schema.
It will be managed by my company, so I would like to know the best approach for running several app instances on the same Tomcat runtime; we tried to run 3 instances on a single Tomcat and it ended with an OutOfMemoryError.
We considered running multiple Tomcat instances on the same server, but we haven't tested that yet. We are also considering having a separate server for each customer.
From your experience with similar scenarios, what is your opinion?
EDITED: This application can't be multi-tenant since there will be code customizations in some parts of it, as well as other business constraints that require an application instance per client. So please note that the application architecture is not the subject here.
Thank you,
Gyo

You want to use a multi-tenant architecture. There will be only one database and one web application instance, and every record will be qualified by the 'owner' company. You can use the subdomain/domain through which the client accesses your application to differentiate between them.
Simplistically, you add a 'domain_id' column to every table and a 'where domain_id=?' clause to every query. Each user will have an associated domain_id, which you pick up on login and put in the session. In reality there will be other considerations.
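As a minimal illustration of that per-query filter (a hedged sketch; the table and column names are made up for the example), assuming plain JDBC and the domain_id picked up from the session at login:

    import java.sql.*;
    import java.util.*;

    // Hypothetical DAO method: every query is qualified by the tenant's
    // domain_id, which was put in the HTTP session at login.
    public class InvoiceDao {
        public List<Long> findInvoiceIds(Connection con, String domainId) throws SQLException {
            List<Long> ids = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT id FROM invoice WHERE domain_id = ?")) {
                ps.setString(1, domainId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getLong("id"));
                    }
                }
            }
            return ids;
        }
    }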
EDITED: Based on the edit in the question, here is an additional part to the answer.
In a multi-tenant architecture it is possible to customise every instance without maintaining separate codebases. Some of the customisations can be part of a 'profile'; this is suitable for data values and flags, like currency, date format, etc. Where new functionality specific to a client is required, it can be achieved by supporting plugins.
Taking a one-time pain to fit your solution into a multi-tenant architecture will be better than the ongoing pain of maintaining several separate versions of your code for each client. You might want to read up on the topic of 'technical debt'.
An ERP is a complex case of a business application, and you can get inspiration from reading the Openbravo Trial FAQ to get an idea of what we are saying. Openbravo is open source, and you may get technical details by looking at their code.

My opinion is exactly the same as Kinjal Dixit's. Your approach is wrong and will be a huge waste of resources.
If you want to be able to deploy different versions of the web app on the same server, you will have to isolate the class loading of each app, and this implies huge memory consumption. Conversely, if all the web apps will always be the same, there is no point in deploying it many times.
Having a separate server for each customer will also be a waste of resources (multiple JVM instances, multiple class loadings of the same libraries, a multiplied number of threads and hence scheduling cost) and will significantly complicate deployment, especially if you plan some clustering strategy, where load balancing will probably become hell.
Moreover, if you want some specific feature for a given client, it will also become hell to manage, deploy, upgrade, etc.
A multi-tenant architecture does not necessarily imply sharing the database (you can have a DB instance per client and dispatch requests with an interceptor at a low level); however, sharing the web app is an absolute necessity.
I'd also advise providing some kind of configuration to enable custom features for a given client.
I worked for a company where we encountered exactly this problem (exposing a legacy web application as a SaaS one). We started by deploying one web app per client and spent huge amounts of time on various optimizations (including class-loading factorization) to reach the "huge" number of 14 customers per server.
This was far from our performance expectations, and we finally switched to a multi-tenant architecture, keeping one DB instance per customer to avoid the significant cost of data-model refactoring. The new deployment was able to handle more than 100 customers on the same server with incomparable performance.
EDIT (according to question update)
If you absolutely want to avoid multi-tenancy, then I'd recommend using only one servlet container (Tomcat) per server. In this case you will have to keep the default web-app classloader isolation (since you will have custom code in different instances), which implies a high memory footprint. You should, however, put all common libraries in a shared Tomcat class-loader location to factor out their loading (see http://tomcat.apache.org/tomcat-6.0-doc/class-loader-howto.html); a sketch follows.
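For illustration, a sketch of that factorization on a Tomcat 6-style layout (the directory name is an example): the shared jars are wired into the shared class loader via conf/catalina.properties and loaded once, so each webapp's own classloader only holds its customer-specific code.

    # conf/catalina.properties -- load common jars once, outside the per-webapp classloaders
    shared.loader=${catalina.base}/shared/lib/*.jar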


Struts 1 to Spring Migration - Strategy

I have a legacy banking application coded in Struts 1 + JSP.
The requirement now is to migrate the backend (for now MVC) to Spring Boot (MVC).
Later the UI (JSP) will be migrated to Angular.
Caveats
1.) The backend is not stateless
2.) A lot of data is stored in the session object
Approach
Have 2 applications running in parallel (Struts and Spring) and share the session object between the two, storing the session in a database, in memory (Redis), etc. This means a lot of code changes, as the session is currently manipulated across the JSP, action and service layers for every update/fetch.
Build the complete Spring application and then make it live, which again is not feasible, and we can't keep users waiting.
Marry Struts 1 and Spring in the same app and later divorce them, removing Struts components progressively.
Question
Is it feasible to have Struts 1 and Spring together in the same web application?
Can 2 different servlets (ActionServlet & DispatcherServlet) exist together? Is that possible if I have 2 different context paths for Spring & Struts?
Currently the focus is to migrate the MVC layer; the service layer will not be an issue.
Also, I need the backend API design to support REST in the future; is that possible if I design it that way?
Current
JSP -> Struts 1 MVC -> Service Layer -> DB
What we can build
JSP -> Struts 1 MVC <-> JSON Object parser <-> Spring REST MVC -> Service Layer -> DB
Future
Just remove JSP -> Struts MVC
Angular (or any other framework) -> Spring REST MVC -> Service Layer -> DB
My friend, it is so good to read your question! I lived through the same hell you are about to enter, using the very same stack...
Approach 3
Honestly, I would never try it. The reason is simple: we wouldn't want the risk of the old project and the new project mixing with each other. Libraries from the legacy app are very likely to conflict with libraries from the new project (same libraries, different versions), and you would then need either to refactor old code to allow use of the new versions, or to change libraries completely.
When migrating, you will want to keep your work on the legacy code to a minimum, or to none at all if possible.
Approach 2
The perfect one, but as you said, it won't pay the bills. If you have the cash to work on it, then great, go for it; otherwise, you are in for...
Approach 1
Strangulation, that's what worked for me. Start with a working shared login and then move on to small functionalities. Think of a tree: you start by removing some small branches, then you move to nodes, until you are able to cut everything. As you remove the small functionalities, they should be made available in the new product (obviously you can't disrupt service, otherwise you would go with approach 2).
More specifically, my suggestion is:
Back-end
1) Get the login working. In my case, the legacy app was all about the session, but we didn't want that in the new product. So we implemented a method in the legacy code which, during login, would call OAuth on the new product and store the login information in the database, just as you mentioned. The reason for this is in the front-end part of my reply.
2) Define how your legacy and new back end will live together, and the resources needed to have both of them working (RAM and CPU, to be more precise).
2.1) If, by any chance, your legacy app runs on Tomcat with custom libraries, you might have problems running your new product in a different context. If that happens, my suggestion: go for Docker (just keep a close eye on memory usage and make sure to limit it on your container(s)).
3) Start very small: replace functionalities related to creating new records which hold little to no logic (small CRUD, such as users, etc.), then move to things that have mid-sized logic or that are really ugly in the legacy product and are used on a day-to-day basis by your end users.
4) All of the rest (by the time I left the company, we were not in this phase yet, so I can't provide much info on that).
5) Don't treat this project as a migration only. Get everyone on the same page that this is a new product. Old code should not be copied and pasted; it should be understood and improved using best practices.
5.1) Add unit and integration tests as soon as possible. If your legacy app has them, GREAT: compare results to make sure your refactoring didn't break anything or change the expected outputs. THIS IS ESSENTIAL.
Front-end
1) Once you have the "unified" login working, you will be able to load pages from the new product as if they were part of the legacy app (you could even add a frame in the legacy JSP that loads your Angular page; we did it and it works like a charm).
1.1) It's not pretty from a UI/UX perspective to mix old and new pages, but it will add value for end users and provide you with feedback from them once you release the pieces to production. Since your legacy app now has access to the token (or whatever auth method you are using), that will be feasible.
2) Define the styles from the beginning. Don't leave the UI/UX work until later (like my team did). The sooner you figure out things such as colors, design, icons, etc., the less time you will waste in meetings that should discuss the release and its impact but instead get wasted on "this is not the color I wanted" or similar. Honestly, get UX defined before UI, and make that crystal clear.
3) Design it as if you were designing a microservices front end. You might take a lot of time to get to that point, but if and when you do, the migration from the new architecture to microservices will be much less traumatic.
Culture
I don't know the culture of your workplace, but mine was far from perfect: old people with old thinking, settled into their comfort zones.
Try to change the workplace culture to adapt to what we currently have in the marketplace. Old-school people sometimes tend to resist change, especially when they are technical and don't keep up with what's new out there. It will also make it much easier to replace people when they leave the company (because people do move on).
I heard they are still trying to run Scrum (as I mentioned, I am no longer there), so there was a huge headache for developers in defining what and how the migration of functionalities would take place.
Those are my two cents, hope they help you in some way, and I wish you the best of luck.
Since option 2 is off the table because of business feasibility, let's talk about the other two.
Approach 1
If you push your session state back into an appropriate datastore (Redis/Memcached) and use a transparent mechanism to fetch the session data and persist any changes made by the app server, then you would not need to change any code that interacts with the session.
Any call to get the session object from any piece of code in your application is delegated to the container, and it is the container which provides you with the object (usually a mapping of session id to object is maintained). For a container such as Tomcat, I am aware of session managers which can be swapped in by just dropping a jar into the container and pointing the config at the backing store. I have used a memcached-based session manager successfully in production for a high-traffic internet application.
Check these for Redis (https://github.com/pivotalsoftware/session-managers), the Redisson Tomcat manager (https://github.com/redisson/redisson/tree/master/redisson-tomcat) and Memcached (https://github.com/magro/memcached-session-manager).
Using one of these will transparently fetch/store the session in the respective datastore without changing any session-related code in your application. So a request carrying a session id in its cookie can land on any of the Tomcats (hosting the Struts or Spring MVC app), have its session fetched from the backing store, modified, and transparently stored back.
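For example, with the Redisson Tomcat manager the swap is a context.xml entry plus the Redisson jars in Tomcat's lib directory. This is a sketch based on that project's documented setup; attribute names can vary between versions:

    <!-- conf/context.xml (or the app's META-INF/context.xml) -->
    <Context>
      <Manager className="org.redisson.tomcat.RedissonSessionManager"
               configPath="${catalina.base}/redisson.conf"
               readMode="REDIS" updateMode="DEFAULT"/>
    </Context>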
Approach 3
It is technically possible (they are, after all, different servlets with different configurations responding to different URL patterns), but it opens up a lot of problem areas in terms of dependency jar conflicts. If you freeze the library versions of both frameworks during the migration and somehow don't hit any conflicts with your particular mix of versions, then it can be worth trying, as eventually the Struts library and its dependencies will go away. A minimal web.xml sketch of the coexistence follows.
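For reference, here is what the two servlets coexisting in one web.xml can look like (the URL patterns are arbitrary; pick whatever split suits your migration):

    <servlet>
      <servlet-name>action</servlet-name>
      <servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
      <init-param>
        <param-name>config</param-name>
        <param-value>/WEB-INF/struts-config.xml</param-value>
      </init-param>
      <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet>
      <servlet-name>spring</servlet-name>
      <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
      <load-on-startup>2</load-on-startup>
    </servlet>

    <!-- legacy Struts actions keep *.do; new Spring endpoints live under /api/ -->
    <servlet-mapping>
      <servlet-name>action</servlet-name>
      <url-pattern>*.do</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
      <servlet-name>spring</servlet-name>
      <url-pattern>/api/*</url-pattern>
    </servlet-mapping>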
As for the Angular part: you can still keep the user info in the session, and the rest of the stateful interaction will need to be redesigned as a series of stateless interactions (stateless on the middle tier; eventually you will need some state, it will just be pushed to the database).

Where to store/keep configuration data for a Java Enterprise application

What is the best way to store parameters and data for an EE7 application? I have to provide the web applications with information like a membership fee or similar data (which may be altered several times a year). The owner of the application should also have a central place where these data are stored, and an application to change them.
Thanks in advance for any input
Franz
This is one question we are currently struggling with as we re-architect some of our back-end systems here, and I do agree with the comment from @JB Nizet that it should be stored in the database. However, I will try to add some additional considerations and options to help you make the decision that is right for you. The right option will depend on a few factors.
If you are delivering source code and automation to build and deploy your software, the configuration can be stored in a source code repository (i.e. as YAML or XML) and bundled with your deployable during the build process. This is a bit archaic but certainly a widely adopted practice, and it works well, for the most part.
If you are delivering deployable binaries, you have a couple of options.
The first one is to have a predetermined place in the file system where your application will look for an "override" configuration file (e.g. the home directory of the user used to run your application server). This way you can keep your binary deployable completely separate from your configuration, but you will still need to build some sort of automation and version control for that configuration file so that your customer can roll back versions if/when necessary. This can also be one or many configuration files (e.g. separate files for your app server vs. the application itself); a sketch follows.
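A minimal sketch of that idea, assuming plain java.util.Properties; the file names and the override location under the app-server user's home directory are made up for the example:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.*;
    import java.util.Properties;

    public class AppConfig {
        public static Properties load() throws IOException {
            Properties config = new Properties();
            // defaults bundled inside the deployable
            try (InputStream in = AppConfig.class.getResourceAsStream("/defaults.properties")) {
                config.load(in);
            }
            // "override" file in the predetermined place, if present
            Path override = Paths.get(System.getProperty("user.home"), ".myapp", "app.properties");
            if (Files.exists(override)) {
                try (InputStream in = Files.newInputStream(override)) {
                    config.load(in); // the later load wins, overriding the bundled defaults
                }
            }
            return config;
        }
    }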
The option we are currently contemplating is having a configuration database that all of our applications can query for their own configuration. This can be either a very simple or a very complex solution depending on your particular needs. For us these are internal applications and we manage the entire lifecycle ourselves, but we need a central repository since we have tens of services and applications running with a good number of common configuration keys, and updating those keys independently is error prone.
We are looking at a few different solutions, but I would certainly not store the configuration in our main database because: 1) I don't think SQL is the best repository for configuration, and 2) I believe we can get better performance from NoSQL databases, which can be critical if you need to load some of those configuration keys for every request.
MongoDB and CouchDB both come to mind as good candidates for storing our configuration keys if you need a clearly defined hierarchy for your options, whereas Redis or Memcached are great options if you just need key-value storage for your configuration (faster than document-based stores, too). We will also likely build a small app to help us configure and version the configuration and push changes to existing/active servers, but we haven't spec'd out all the requirements for that yet.
There are also some OSS solutions that may work for you, although some of them add too much complexity for what we are trying to achieve at this point. If you are using the Spring Framework, take a look at the Spring Cloud Config project; it is very interesting and worth looking into.
This is a very interesting discussion and I am very willing to continue it if you have more questions on how to achieve distributed configuration. Food for thought: here are some of my personal must-haves and nice-to-haves for our new configuration architecture design:
Global configuration per environment (dev,staging,prod)
App specific configuration per environment (dev,staging,prod)
Auto-discovery (auto environment selection depending on requestor)
Access control and versioning
Ability to push updates live to different services
Roger, thanks a lot. Do you have an example for the "predetermined place in the file system"? Does it make sense to use a singleton which reads the configuration file (using the Startup annotation) and then provides the configuration data? But this does not support a dynamic solution. Kind regards, Franz
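To the comment above: a minimal EE7 sketch of that singleton idea (the file location is made up; note the values are read once at startup, so by itself this is not dynamic — you would need a refresh method or a timer for that):

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.*;
    import java.util.Properties;
    import javax.annotation.PostConstruct;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;

    @Singleton
    @Startup
    public class ConfigBean {
        private final Properties config = new Properties();

        @PostConstruct
        void load() {
            // the "predetermined place": here, a file under the server user's home
            Path file = Paths.get(System.getProperty("user.home"), ".myapp", "app.properties");
            try (InputStream in = Files.newInputStream(file)) {
                config.load(in);
            } catch (IOException e) {
                throw new IllegalStateException("Cannot load configuration from " + file, e);
            }
        }

        public String get(String key) {
            return config.getProperty(key);
        }
    }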

What is a good practice to deploy webservices? [closed]

Is it good practice to deploy web services separately, or should they be part of the web application? For instance, I am developing a Spring REST-based web service. The function of this service is, let's say, to get user data.
Each web application that queries this web service has its user data in a different schema. So now the web service needs to know who is calling it: is it Application A or Application B? If it's AppA, then it should get data from Schema A; if it's AppB, then from another schema. Note that AppA and AppB are just the same code packed into two different WARs, and the schema they are supposed to query is supplied from a properties file.
In a situation like this, does it make sense to pack the web service with the webapp code and deploy it under different contexts, so it becomes a duplicate service running in a different context? Or should it be deployed separately, with AppA and AppB somehow identifying themselves to this web service?
I prefer the approach below, which is in use for 50K concurrent users.
Make sure that each web service encapsulates both the UI and the schema independently, executing the required business use case. Each web service will have all three layers (Model, View and Controller) for that business service. That means your App-A is one web service and your App-B is another web service.
All web services register and un-register with a master web service. The master web service is responsible for redirecting user requests to the appropriate web service, like App-A or App-B.
You should have a cluster of the master web service and clusters of the individual web services, App-A and App-B.
In this approach, your schemas can reside in different databases instead of a single database.
Advantages of this approach:
Each web service can scale horizontally. Just add additional VM nodes if you want to increase the scale.
If you have different schemas in different databases in different locations, you avoid network performance bottlenecks in OLTP (online transaction processing) queries.
Disadvantages:
I see only one disadvantage: the master web service acts like a facade and has to know the internals of the individual web services. But it's not a drawback given the advantages it offers, if you consider the trade-off.
I have no idea about the business requirement that leads you to maintain different schemas for user data and to go with a web service.
But instead of maintaining multiple WARs with the same code, I would suggest you configure multiple datasources within the application and switch datasources as per your requirement.
This link may help you configure multiple data sources.
If you follow the aforementioned logic, you may end up with a single deployable context.
If you still want to stick with multiple WARs as web services, I would suggest you have a look at Spring Boot: simple, containerless deployment, and scalable.
It is a matter of opinion; both choices are okay. You should take into account the usage of the service, scaling concerns, etc.
You could look at microservices as an idea, but it has to make sense from your standpoint.
About the two different apps: if the differences are only in configuration, try externalizing it (see "23. Externalized Configuration" in the Spring Boot reference). This way you can have a single artifact deployed twice; a sketch follows.
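A minimal sketch of what that looks like in Spring (the property name app.schema is an assumption): the same artifact is deployed twice, and each deployment supplies its own value via application.properties, an environment variable, or a command-line argument.

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Component;

    @Component
    public class SchemaConfig {
        // resolved at startup from the externalized configuration,
        // e.g. java -jar app.war --app.schema=schema_a
        @Value("${app.schema}")
        private String schema;

        public String getSchema() {
            return schema;
        }
    }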
Given that scenario, it is good practice to have only one web service; this way you improve the maintainability of the system because you don't have the same code twice. If you have another similar app in the future, you don't have to implement a new service.
Approach 1:- (Preferred)
You should have a single web application which contains the entire code for the application UI and repo/data interaction.
Based on the type of request, dynamically switch the data source as needed. You can have a look at Spring dynamic datasource routing here; a sketch of the idea follows.
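A minimal sketch of that routing: AbstractRoutingDataSource is Spring's hook for this, while the ClientContext ThreadLocal holder is a made-up helper that a servlet filter would populate per request (the real data sources are registered via setTargetDataSources, keyed by client id).

    import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

    // hypothetical per-request holder, set by a servlet filter
    // from e.g. a request header or the subdomain
    final class ClientContext {
        private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
        static void set(String clientId) { CURRENT.set(clientId); }
        static String get() { return CURRENT.get(); }
        static void clear() { CURRENT.remove(); }
    }

    public class ClientRoutingDataSource extends AbstractRoutingDataSource {
        @Override
        protected Object determineCurrentLookupKey() {
            return ClientContext.get(); // Spring picks the matching target data source
        }
    }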
Approach 2:-
In case your UI has completely different types of interactions managed by different teams, it makes sense to have separate UI components, with the backend web services maintained in one place.
Again, based on the type of request, you can dynamically route to the right datasource.
Hope this helps :)
My inputs:
1) Any specific reason to build 2 different WARs for the same code? Is it only because you have two different data sources, one for each?
Why can't you have a single application deployed, with some parameterized mechanism in each request to identify which schema to get data from?
2) Why do you need a web service in the first place? Why not have the application hook directly into the database if needed?
3) Is the underlying database a transactional DB or some historical data store? How about merging both schemas into one as a one-time effort, or using some sort of virtualized views which pick data from the 2 schemas based on input parameters?
***** edited after Jay's inputs:
My suggestion would be to have the web service deployed separately from the 2 web apps, because it provides a single place to manage the code in the long run. I have the following additional suggestions:
Define your own headers in the SOAP XML schema which can give you both an appContext (the application making the call) and a userContext (the user). Give this aspect good thought, keeping a long-term view.
Keep the SOAP request-response stateless, which will give you scalability. Don't maintain any state for a SOAP request on the server side.
I have in the past used a data virtualization solution (Cisco Composite). The benefit it provides: if there are two (or more) data sources containing similar types of data (entities), it can join, cleanse and merge them virtually and expose the result as a REST/SOAP-based web service. Try evaluating this option as well.
It can further help if, in the future, you have other consumers accessing your information using plain SQL/JDBC calls; they will be able to do so. Data virtualization solutions also support many other consumer interfaces, like Hadoop, OData, etc. Again, it depends on the budget and other constraints of the project. I am not sure whether there is any effective open-source data virtualization solution available.
Personally, in my experience, it's a lot better to have them separated; it usually depends on how big and how critical your main project is.
But even if your project isn't that big at the beginning and there's only one person working on it, later on, as it continues to grow, having microservices for all the things your main project does will make it a lot easier to maintain: rather than many people working on the same code handling many versions of a single project, handling many small projects is less confusing, and errors are easier to find.
Plus, if something fails, you can have one microservice down while your main application still runs without interruption; it will only be denied that one service, instead of having everything down while you fix it.
High availability is very important in production, and having them separated helps with this.
Given your situation, I'd advise going with ONE webapp (one "project"), with some caveats, and then considering one of these two solutions:
1) Given that you are using Spring, I'll assume (hope) you are using Maven as well.
Create a different build goal and make it so that, based on the goal invoked to produce the WAR, a different properties file is bundled.
This way you have ONE webapp, and based on the build (or rather, based on the properties file tied to that specific build) you obtain a WAR tied to a specific environment & schema. You deploy an individual WAR for each web service with a clean separation, though the root code is the very same and it's only one application. [CLEANER SOLUTION]
2) Make it so that you don't only get the JSON request but also the HTTPS certificate of the sender (thus you identify a specific "webapp" based on the HTTPS certificate it presents), and based on the certificate AND the source of the request, you establish whether the source is "qualified" to receive data from schema X rather than schema Y. You deploy ONE WAR only that will, at its own discretion, apply logic to reroute your "user data fetch query" to one database or the other. [I DISCOURAGE THIS PRACTICE]
Of course there are other approaches as well, but I think these two are the most feasible.
It really depends on what you want to achieve.
If you want to encapsulate the database/schema/table, then it should really be one service for each application. The main advantage of doing this is that you could swap the database later on if there is some problem with the current one; it also simplifies caching and invalidation, etc.
If the database/schema/table is not encapsulated anyway, then a single service is much easier and better. Each web application just has to identify itself, and each of them will get exactly what it needs. This could be achieved by putting the query/schema information in a property file, or creating DB views with the same name as the client, etc.
If we go for this approach, a question pops up: why bother having this layer at all? Couldn't each web application just query the DB directly? If the answer is yes, then just remove the whole layer completely.
You are trying to implement a Data Provider, or DAO as a service.
To make it -
Simple
Scalable
Maintenance-friendly,
Adaptable
You can simply have a single web service, deployed outside the webapp(s) and driven off configuration. The configuration itself can be stored in a property file or in a DB. The identifier for the client should be passed in the web service request, as sketched below.
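A hedged sketch of that single service (the X-Client-Id header, the property keys and the UserDao are all assumptions for the example):

    import java.util.Properties;
    import org.springframework.web.bind.annotation.*;

    // hypothetical DAO that runs the lookup against the given schema
    interface UserDao {
        String findUserJson(String schema, long id);
    }

    @RestController
    public class UserDataController {
        private final Properties schemaByClient; // e.g. appA=schema_a, appB=schema_b
        private final UserDao userDao;

        public UserDataController(Properties schemaByClient, UserDao userDao) {
            this.schemaByClient = schemaByClient;
            this.userDao = userDao;
        }

        @GetMapping("/users/{id}")
        public String getUser(@RequestHeader("X-Client-Id") String clientId,
                              @PathVariable long id) {
            String schema = schemaByClient.getProperty(clientId);
            if (schema == null) {
                throw new IllegalArgumentException("Unknown client: " + clientId);
            }
            return userDao.findUserJson(schema, id); // config decides where the data comes from
        }
    }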
This is actually a pretty standard approach, implemented to enable optimizations at the data tier outside the DB, like caching (again driven off configuration), expiry, pooling, etc.
The other option, including it as a shared JAR within the webapp(s), does have the advantage of code reuse (which you also get with an externally deployed service), but the following disadvantages outweigh it:
Coupling
Employing optimizations is difficult
Release management (this also depends upon how your code is organized)
Versioning.
Hope it helps.
I would deploy to one instance, no matter what. Of course, there are circumstances where it may be necessary to deploy separately, but from a best "coding" practice standpoint, one instance should be used to allow for "write once, use many".
Then...
Define different XSDs for each of AppA, AppB, etc., and marshal accordingly.
Or use Groovy to marshal the appropriate objects as JSON or XML.

In a distributed Java web application, how to share a value between all servlets on all machines?

If I have a distributed Java web application deployed in a cluster, with say 10 servlets and 10 JSPs running the show, and I want to share some data, say a variable or a simple POJO, between all the threads of all the servlets on all the machines, what is the way to do it?
No framework like Spring/Struts is used; let's say I'm only using basic servlets and JSPs. Usually we think about the ServletConfig, ServletContext, HttpSession and HttpServletRequest objects to store information which needs to be passed/shared from one component to another. ServletContext has the largest scope because it's accessible from all the servlets and JSPs in the web app. But in the case of a distributed application, I guess the ServletContext object would be created once per JVM, so even for a single web app every machine in the cluster will have a different Java object for ServletContext, correct? In such a scenario, what should be done to share a POJO between all the servlets on all the machines of a single web app?
If it's not possible using plain servlets and JSP, do any frameworks make it possible? Would appreciate any inputs. Many thanks!
In a distributed architecture, it is useful to think beyond objects and think about "services". There are several possible solutions for this, but all of them would include some form of service you could access from any of your 10 nodes.
So you could, for example, create an 11th machine and host an API for putting and getting objects (values/maps/etc.). That would create a shared region between the nodes.
However, this opens up a whole world of possible issues if not done correctly, because you need to think about synchronization, deadlocks, dirty reads and other concurrent-processing concerns in a cross-JVM mindset.
Also, many systems synchronize their nodes via the database, but this approach is somewhat deprecated nowadays in favor of the more recent "microservices" approach, where persistence is distributed, not monolithic.
You are using Spring already, so maybe the Spring Session project is the right choice for you: http://projects.spring.io/spring-session/. It is surely the easiest one to run.
You can use Hazelcast, a framework like memcached but with auto-discovery for clustering. I used it for session and cache sharing on my Amazon cluster, and it works like a charm.
http://hazelcast.com/use-cases/caching/
But if you want to keep it simple, you can always use memcached, as I said before:
http://memcached.org/
Sharing things between servers is:
error prone
sometimes complicated
The most common need is sharing user session data across a load-balanced cluster of servers. If someone is talking to one server and then gets load-balanced to a different server, you want their session to keep going. Tomcat Clustering does this, and it's already built in.
https://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html
The last time I played with it, it was touchy; don't count on session replication always working in any servlet container and you'll be better off. Also, session replication is crazy expensive; once you're past a few machines, the cost (in RAM) of having all session data everywhere starts to add up quickly, and you can't easily add more users anymore.
Wanting to share things between multiple JVMs is a code smell; if you can architect around it, do so. But other than clustering, you have the two usual options:
a database. Tried, true, tested; keep details that need to change there.
an in-memory store. If it gets called on every request and/or must be really fast for whatever reason, consider keeping it in memory; memcached is a multi-machine, in-memory key-value store that does just this.
The simplest solution is a ConcurrentHashMap (https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ConcurrentHashMap.html), though note it is only shared within a single JVM.
If you want to share state across the whole cluster, you will need something like Hazelcast: http://hazelcast.com/. A minimal sketch follows.
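A minimal Hazelcast sketch (cluster discovery left at its defaults): every node that starts an instance joins the cluster, and the named map is the same map on all of them.

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    public class SharedValue {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            // the same named map is visible to every member of the cluster
            Map<String, String> shared = hz.getMap("app-shared");
            shared.put("motd", "Hello from " + hz.getName());
            System.out.println(shared.get("motd"));
        }
    }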

Architecture - Multiple web apps operating on the same data

I'm asking about a suitable architecture for the following Java web application:
The goal is to build several web applications which all operate on the same data. Suppose a banking system in which account data can be accessed by different web applications: by customers (online banking), by service personnel (mostly read-only) and by the account administration department (an admin tool). These applications run as separate web applications on different machines, but they use the same data and a set of common data-manipulation and search queries.
A possible approach is to build a core application which fits the common needs of the clients, namely data storage, manipulation and search facilities. The clients can then call this core application to fulfil their requests. The requirement is that the applications are built on top of a Wicket/Spring/Hibernate stack as WARs.
To get a picture, here are some of the possible approaches we thought of:
A The monolithic approach. Build one huge web application that fits all needs (this is not really an option)
B The API approach. Build a core database access API (JAR) for data access/manipulation. Each web application is build as a separate WAR which uses the API to access a database. There is no separate core application.
C RMI approach. The core application runs as a standalone application (possibly a WAR) and offers services via RMI (or HttpInvoker).
D WS approach. Just like C but replace RMI with Web Services
E The OSGi approach. Build all the components as OSGi modules which run in an OSGi container, possibly using SpringSource dm Server or ModuleFusion. This approach was not an option for us for some reasons ...
I hope I have made the problem clear. We are currently leaning towards option B, but I'm not very confident about it. What are your opinions? Any other solutions? What are the drawbacks of each solution?
I think that you have to go in the opposite direction: from the bottom up. Of course, you have to go back and forth to verify that everything plays together, but here is the general direction:
Think about your data: the DB schema, how important transactions are (for example, in banking systems everything is about transactions), etc.
Then define a common access method, ranging from a set of stored procedures to a distributed transaction engine.
The next step is the business logic/presentation: what can be generalized and what is subject to customization.
And the final stage is the interfaces, visualisation and reports.
B, C, and D are all just different ways to accomplish the same thing.
My first thought would be to simply have all consumer code connect to a common database. This is certainly doable and would eliminate the middle layer you don't want. The drawback, of course, is that if the schema changes, all consumers need to be updated.
Another solution you may want to consider is giving each consumer its own database, using some sort of replication to keep them in sync.
It looks like A and E are out of the picture, as you have stated in your question, for various reasons. Option A would be one huge application, which would make maintenance difficult in the future.
B, C and D are essentially the same architecturally, since they involve remote access to common libraries from the various web applications; the only difference is the transport mechanism. I would recommend implementing this in EJB 3 or Spring if possible, instead of with your own RMI libraries, since either of these provides a good framework over RMI / web services.
So I think this problem basically boils down to the following two options:
1) Include the business and DAO layer classes as a common jar included in the deployment of all web applications.
Advantages:
Deployment is easier.
Applications will perform better initially since there is no remote access to other servers.
Disadvantages:
You cannot add more hardware to the middle tier specifically (service and DAO layers) since it is included in each web application.
Other business teams in the organisation will not have access to your business services since there is no remote interface.
2) Deploy the business service and DAO layer classes in a separate application server and expose business methods remotely.
Advantages:
You can scale up the business service and DAO layer as needed depending on load from the various web applications calling it.
Other applications in the organisation can make use of your interfaces if needed.
More scalable
You get all the advantages of Java EE.
Disadvantages:
More complex deployment.
Another server to maintain and monitor.
Could be slower since calls will be made over the network although this shouldn't be too much of a problem.
In both cases, if the interfaces change, the client code will need to change, so this isn't a factor in the decision. Transactions should be handled at the business-service method level, so this shouldn't be a factor either.
I think it also depends on the size of the applications, and on how scalable the solution needs to be to warrant the extra complexity of option 2 above.
I think you need a separate application that all the client applications use as their data layer. The reason for this is that you want to ensure they're all accessing the database in the same way. There are also some race conditions you can get into that database transactions may not be able to prevent. The other reason is that using the database as a form of RPC is a known antipattern. If all your apps access the database directly, you will almost inevitably end up with some "event" table that the various applications poll periodically... don't do that.
Apart from the provided responses, if you are considering having multiple applications working with the database at the same time, consider a distributed cache as part of your solution as well. The beauty of a distributed cache is that it can be accessed by multiple applications at the same time, apart from being distributed. I am not sure if this holds true for all of the Java variants, such as Ehcache, etc., as I do not come from a Java background.
What we are currently doing is abstracting the data one level further than before. We now have a DAL that can be accessed directly, but we have put a "Model Factory" in front of it. The purpose of the Model Factory is to broker both the cache and the data layer, acting as a pass-through: the caller always calls the Model Factory, never the DAL or the caching code directly. This abstraction layer basically retrieves data from the DAL on a cache miss without adding complexity to the API; a sketch follows.
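A minimal sketch of such a Model Factory pass-through (the Account and AccountDal types are made-up stand-ins; in a multi-app setup the ConcurrentHashMap would be replaced by the distributed cache):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class Account { /* fields omitted */ }

    interface AccountDal {
        Account loadAccount(String id);
    }

    public class ModelFactory {
        private final Map<String, Account> cache = new ConcurrentHashMap<>(); // stand-in cache
        private final AccountDal dal;

        public ModelFactory(AccountDal dal) {
            this.dal = dal;
        }

        // callers always come through here, never to the DAL or the cache directly
        public Account getAccount(String id) {
            return cache.computeIfAbsent(id, dal::loadAccount); // cache miss falls through to the DAL
        }
    }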
