I have a Java web application using Spring, Hibernate and Wicket connecting to a MySQL database that I'd like to refactor and separate into several applications. I started by using Maven's multi-module system but in reality each of the applications would have its own release cycle, so I've ditched that effort now and I'm looking at creating individual projects for each of them. They will all continue to connect to the same database so I was going to move the model classes into a project of their own which can be used as a dependency.
I have a few questions regarding this setup:
Is moving the model classes to their own project a typical solution to the multiple apps/single database problem, or is there another way?
Is there a nice way of ensuring all the applications are using the same version of the model dependency?
Should I also include any base DAOs and services in this core project that each application could use or extend, or should I just include my GenericHibernateDao and let each application create its own DAOs and services? Obviously I want to avoid changing this project as much as possible, since any change will require a new release of every application that depends on it.
Is there any Hibernate related config I would need to change, such as connection pooling? Does it matter if each app has its own pool or should they share one? I'm not using caching at the moment, but I understand if I wanted to I would need a distributed cache?
How would I share application config such as db params, email host, sms gateway etc. between applications? Is there any way of defining them once somewhere to ensure they are all pointed at the same db?
Are there any other gotchas I may encounter further down the road with this setup, either with Maven or during deployment? Any tips or best practices I should follow?
This has been a usual scenario for me; what I have usually done is:
- DAOs, connection pool management, and failover-related code are managed in a separate module (JAR).
- You can then use this module in each component, as you mentioned.
With this, each of your components will have its own connection pool, as in the sketch below.
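For illustration, a minimal sketch of the kind of factory such a shared module might expose (all names are hypothetical, and HikariCP is just one pooling library; C3P0 or any other pool would work the same way):

    import javax.sql.DataSource;

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    // Hypothetical class in the shared module: common connection logic,
    // but each application that calls create() gets its own private pool.
    public final class SharedDataSourceFactory {

        private SharedDataSourceFactory() {}

        public static DataSource create(String jdbcUrl, String user, String password) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(jdbcUrl);      // e.g. jdbc:mysql://dbhost/appdb
            config.setUsername(user);
            config.setPassword(password);
            config.setMaximumPoolSize(10);   // sized per application, not shared
            return new HikariDataSource(config);
        }
    }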
Related
While working on a modular system architecture for an enterprise application I ran into some problems with database initialization. We have a core library that provides base entities and base configuration. On top of this core, several modules are built. They are pluggable and can have their own entities and configuration. Some characteristics:
Configuration, like system properties, resource bundles, etc., is all stored in the database.
JPA is used to make the system database independent.
System runs on Java SE
Every module can bring its own tables, but it may also need to populate the core property table or the core resource bundle table. So somehow we need a mechanism to run DDL and DML initialization for the database. Some options:
Create simple SQL scripts. The disadvantage is that they must be database-independent, and perhaps this is not the most developer-friendly option. Unless we can generate them with some DB diff tool?
Use Java classes to initialize via JPQL?
Store configuration in files? This avoids a lot (but not all) of configuration DML.
Use some tool like Liquibase?
What would be the best practice for this (or a similar) problem?
Using a database to store all configuration data is the best option. Many products, such as WebSphere Portal or Liferay, use a database to store the configuration data for each portlet or even for a theme. Don't forget to include the settings that are used as part of an SOA and business rules.
Likewise, SQL scripts are a good choice for initialization. However, if you require very specific SQL features, you may need to create several versions of the same script, one for each database management system.
I am currently in a project that has the same idea of modules that add functionality to a core system.
Generally we are using Maven, multiple src folders, and Maven profiles with different builds to generate deployables with different modules. (We do not need to push out single modules and install them later on; that might be different in your project. We just build different versions with different modules.)
Anyway, for the DB we are using Liquibase, firstly to manage the DB and the changes made to it, but also (and this might be helpful to you) to include/generate additional SQL scripts that add tables for the modules.
Each module has its own changeset file that includes everything necessary for that module (in different versions, too, as the modules evolve over time). These can then be applied or not.
So I think Liquibase could also be useful in your case (even though its main purpose is to manage DB changes); a rough sketch follows.
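For illustration, here is roughly how the per-module changelogs could be applied programmatically on Java SE (class and changelog names are hypothetical; the calls are Liquibase's classic Java API, so check them against the version you use):

    import java.sql.Connection;
    import java.util.List;

    import liquibase.Liquibase;
    import liquibase.database.Database;
    import liquibase.database.DatabaseFactory;
    import liquibase.database.jvm.JdbcConnection;
    import liquibase.resource.ClassLoaderResourceAccessor;

    public class ModuleSchemaInitializer {

        // Applies each changelog in order (the core changelog first, then
        // one changelog per installed module).
        public void initialize(Connection jdbc, List<String> changeLogs) throws Exception {
            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(jdbc));
            for (String changeLog : changeLogs) {
                // Liquibase records applied changesets, so re-running is a no-op.
                new Liquibase(changeLog, new ClassLoaderResourceAccessor(), database).update("");
            }
        }
    }

Because each changelog is self-contained, a module that is not installed simply contributes no entry to the list.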
We have to develop and maintain many Java web-based applications (for the same company) of different sizes, scopes, and life-spans. Some of them are huge and others are just simple pages that may live only a few months (or days); some are already implemented and need refactoring.
They have one thing in common, though: they need access to (almost) the same information.
Problem
Due to the complexity of the data the company handles, we have to deal with many different sources, some of them inherited from the ancient times. Our domain objects may be mapped across many of those sources. As an example, a Contract domain object is mapped to our main database but its related (physical) files are stored in a document server, and the activity related to it is stored in a NoSQL database. Therefore, adding, removing, searching any of these objects involves many internal operations.
Our data sources are (although it could be any):
AS400 (using DB2 as a database)
Documentum document manager
Mongo DB
External web services
Other legacy sources
We normally use GlassFish as the application server and Maven as our build tool.
Goal
Our goal is to create a business layer or library that all of our applications can access and it is:
Compact
Consistent
Easy to use
Easy to maintain
Accessible from many different clients
What we have found so far
We have been struggling for weeks and still we cannot find anything fully satisfactory. Some solutions:
Pack all the business logic in one or more jars: Very easy to share, but all the applications will have to contain all the jar dependencies and configuration files and take care of security, caching and other stuff. Difficult to maintain (we have to update the jars for every project when there are changes).
Create an EJB project containing all the logic and access it remotely: Easy to maintain; security, caching, and configuration are implemented only once. We are afraid of the penalty of the remote calls. From what we have noticed in our research, this seems to be considered bad practice (we don't have much experience with EJBs).
Create an EAR project with everything inside and use local access: Well, this is faster than the remote version, but it is hell to maintain.
Go for OSGi: We are a bit afraid of this one since it is not as popular as EJB and we have never used it seriously.
Is there a common practice for this kind of problem?
Many thanks!
I would not recommend putting all the logic into one EAR project and using local access. If you have a lot of code in one place, it will be harder to maintain, test, deploy, etc.
I would create a multi-module Maven project with common dependencies. One of the dependencies would be a service exposing an API over the business logic and DAO access. With Maven you can easily control versions through the POM files; different projects may work with different versions of the common service, and Maven will handle the version management for you. It does require some configuration and implementation effort, though.
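For example, a shared parent POM can pin the common artifact's version once via dependencyManagement (the coordinates below are placeholders):

    <!-- In the shared parent POM; every child inherits this version. -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>core-services</artifactId>
          <version>1.2.0</version>
        </dependency>
      </dependencies>
    </dependencyManagement>

Each application then declares the dependency without a version and picks it up from the parent, so bumping one number upgrades every build that inherits it.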
Another option you mentioned, a standalone EAR with remote EJBs, should work fine as well. Do not worry about performance and the number of remote calls unless you have a heavy load. Simply cache the remote EJB stubs on the client to avoid unnecessary JNDI lookups.
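A minimal sketch of such stub caching (the interface and JNDI name are hypothetical; the java:global form is EJB 3.1 portable naming):

    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    // Hypothetical remote business interface exposed by the EAR.
    interface ReportService {
        byte[] render(long reportId);
    }

    public final class ServiceLocator {

        private static volatile ReportService cachedStub;

        // Looks up the remote EJB once; later calls reuse the stub.
        public static ReportService reportService() throws NamingException {
            if (cachedStub == null) {
                synchronized (ServiceLocator.class) {
                    if (cachedStub == null) {
                        cachedStub = (ReportService) new InitialContext().lookup(
                                "java:global/core-app/core-ejb/ReportServiceBean!com.example.ReportService");
                    }
                }
            }
            return cachedStub;
        }
    }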
Personally I prefer the first option, with a shared dependency managed by Maven. It is clear and easy to maintain, and easy to version, deploy, and configure. With Maven you don't need to update the jar file manually for every project; you can simply use a repository manager like Nexus.
We have a web application that uses Spring/JPA/Hibernate. Currently we are using SolidBase for database change management, which works well in a managed deployment model - however we are now migrating to a non-managed deployment model where users will be able to download the web application. We are building an "Update-Center" type functionality for the web application and are trying to figure out how we should apply database changes.
Ideally, I would like the application to apply any pending database changes at application startup, and I would like this to be something we can code programmatically, but I don't want to rewrite Hibernate's SchemaExport functionality to do it.
Does anyone have any recommendations, patterns, or best practices on how we can best implement this functionality in to our application?
Are there any update-center application libraries that will solve our problem (I haven't been able to find a single one)?
I discovered this article while researching this
http://www.infoq.com/news/upgrade-frameworks
This led me to this post
http://www.jroller.com/mrdon/entry/transparent_sql_schema_migration_with
Which ultimately led me to roll my own solution to this problem using Apache DdlUtils and the BeanFactory solution offered in the jroller.com blog post.
This will ultimately be a component that can be dropped into any application, legacy or new, to add update functionality to a web application. It uses XML to describe database updates, and because the DDL is database-agnostic, the package will work against any supported database. The updater will also support updates to filesystem resources and to the data itself (as opposed to the schema).
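A rough sketch of what that startup hook might look like (the class and file names are hypothetical, and the calls are from Apache DdlUtils as I understand its API, so verify against the version you use):

    import javax.sql.DataSource;

    import org.apache.ddlutils.Platform;
    import org.apache.ddlutils.PlatformFactory;
    import org.apache.ddlutils.io.DatabaseIO;
    import org.apache.ddlutils.model.Database;

    public class StartupSchemaUpdater {

        // Reads the desired schema from an XML model file and issues the
        // ALTER statements needed to bring the live database in line with it.
        public void update(DataSource dataSource) throws Exception {
            Platform platform = PlatformFactory.createNewPlatformInstance(dataSource);
            Database desiredModel = new DatabaseIO().read("db-model.xml");
            platform.alterTables(desiredModel, false); // false = stop on first error
        }
    }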
I do not work for BitRock.
This may not be exactly what you are looking for, but I have used InstallBuilder from BitRock to manage these types of updates for distributed applications. This is the same installer package that the PostgreSQL team uses. It was pretty straightforward to get this working, with minimal headaches, especially when compared to other installer programs.
I'm looking for a web-based Java tool (preferably one that will run in both Weblogic and JBoss) that will allow controlled access to a particular database. I need to allow non-technical users to insert, update, and delete rows in a particular Oracle DB table. The rows will be of varying data type (some dates, some numbers). Ability to add dropdowns with specific values would be nice.
Also nice, but not necessary (since we can always use a reverse proxy) would be the ability to control read/write access using LDAP/AD groups.
Another developer on my team suggested Spring Roo, but that may be too heavyweight for what we're looking to do. There's got to be something simpler out there... Oracle APEX is another option, if we get desperate.
Grails is a great cheap way to build a CRUD app like you're describing, and it integrates cleanly with Java applications. You can probably build your first prototype app in an hour or two to get a feel for it. Here's a decent starter tutorial: https://www.ibm.com/developerworks/java/library/j-grails01158/
Spring Roo is absolutely not overkill for this task, in my opinion. It actually supports database reverse engineering, so you can explicitly specify which tables you want a CRUD view for.
You will need a really simple script, something like this:
project --topLevelPackage org.whatever --projectName crud --java 6
persistence setup --provider HIBERNATE --database ORACLE
--> you will need to acquire ojdbc*.jar because it's not available from Maven
--> also you will need to adjust database.properties to suit your needs
database reverse engineer --schema my --includeTables "Table1 .." --package ~.domain
controller all --package ~.web
logging setup --level DEBUG --> OPTIONAL
security setup --> OPTIONAL
exit
That's it, you can run your application.
Just write a simple web application with a few JSP files if that is all that you need to do. You can package them into a WAR file and deploy them easily to either JBoss or Weblogic.
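As a sketch of how small this can stay (all names are hypothetical, and it assumes Servlet 3.0 plus a container-managed Oracle DataSource), a single servlet behind a JSP form is enough for the insert case:

    import java.io.IOException;
    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    import javax.naming.InitialContext;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    // Hypothetical servlet: a JSP form posts here, and the row is inserted
    // through the container's connection pool.
    @WebServlet("/rows")
    public class RowInsertServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try {
                DataSource ds = (DataSource) new InitialContext()
                        .lookup("java:comp/env/jdbc/myOracleDS"); // container-managed
                try (Connection con = ds.getConnection();
                     PreparedStatement ps = con.prepareStatement(
                             "INSERT INTO my_table (name, amount) VALUES (?, ?)")) {
                    ps.setString(1, req.getParameter("name"));
                    ps.setBigDecimal(2, new BigDecimal(req.getParameter("amount")));
                    ps.executeUpdate();
                }
                resp.sendRedirect(req.getContextPath() + "/rows.jsp");
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }
    }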
What you want is a java-based Web Framework that gives you automatic Create/Retrieve/Update/Delete (CRUD) screens. There are a huge number of frameworks available, each with different strengths and weaknesses. You don't give enough information to make a reasonable suggestion of which would be best, so I would recommend that you play around with different frameworks until you find the one best suited to your needs.
Spring Roo is one way to try out different frameworks, but I find that it has a lot of typing overhead to build the model you want. If you recorded a script, you could perhaps replay it with different frameworks selected for generation, but that may be too complicated.
I would recommend you check out AppFuse, which is a meta-framework that allows you to play with different frameworks easily. See AppFuse QuickStart for information on getting started.
As for controlling access to the tables using LDAP, there are many possibilities. Java provides direct control as shown here. Another option that many use is Spring Security.
I'm asking for a suitable architecture for the following Java web application:
The goal is to build several web applications which all operate on the same data. Consider a banking system in which account data can be accessed by different web applications: by customers (online banking), by service personnel (mostly read-only), and by the account administration department (admin tool). These applications run as separate web applications on different machines, but they use the same data and a set of common data manipulation and search queries.
A possible approach is to build a core application which covers the common needs of the clients, namely data storage, manipulation, and search facilities. The clients can then call this core application to fulfil their requests. The requirement is that the applications are built on top of a Wicket/Spring/Hibernate stack as WARs.
To get a picture, here are some of the possible approaches we thought of:
A The monolithic approach. Build one huge web application that fits all needs (this is not really an option)
B The API approach. Build a core database access API (JAR) for data access/manipulation. Each web application is built as a separate WAR which uses the API to access the database. There is no separate core application.
C RMI approach. The core application runs as a standalone application (possibly a WAR) and offers services via RMI (or HttpInvoker).
D WS approach. Just like C but replace RMI with Web Services
E OSGi approach. Build all the components as OSGi modules which run in an OSGi container. Possibly use SpringSource dm Server or ModuleFusion. This approach was not an option for us for some reasons ...
I hope I've made the problem clear. We are currently going with option B, but I'm not very confident in it. What are your opinions? Any other solutions? What are the drawbacks of each solution?
I think that you have to go in the opposite direction - from the bottom up. Of course, you have to go back and forth to verify that everything fits, but here is the general direction:
Think about your data - the DB schema, how important transactions are (for example, in banking systems everything is about transactions), etc.
Then define a common access method - from a set of stored procedures to a distributed transaction engine...
The next step is the business logic/presentation - what can be generalized and what is subject to customization.
And the final stage is the interfaces, visualisation, and reports.
B, C, and D are all just different ways to accomplish the same thing.
My first thought would be to simply have all consumer code connecting to a common database. This is certainly doable, and would eliminate the code you don't want to place in the middle. The drawback, of course, is that if the schema changes, all consumers need to be updated.
Another solution you may want to consider is giving each consumer its own database, using some sort of replication to keep them in sync.
It looks like A and E are out of the picture as you have stated in your question for various reasons. Option A would be one huge application which would make maintenance difficult in the future.
B, C and D are essentially the same architecturally, since they all involve remote access to common libraries from the various web applications; the only difference is the transport mechanism. I would recommend implementing this in EJB 3 or Spring if possible, instead of with your own RMI libraries, since either of these provides a good framework over RMI / web services.
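For illustration, a minimal EJB 3 remote service of the kind meant above (the interface and bean names are hypothetical):

    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    // Hypothetical remote business interface shared with the web applications.
    @Remote
    interface AccountService {
        double getBalance(String accountNumber);
    }

    @Stateless
    public class AccountServiceBean implements AccountService {
        @Override
        public double getBalance(String accountNumber) {
            // would delegate to the shared DAO layer; stubbed for the sketch
            return 0.0;
        }
    }

The container handles the remoting, so the same bean can back either the RMI-style option (C) or, behind a thin facade, the web-service option (D).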
So I think this problem basically boils down to the following two options:
1) Include the business and DAO layer classes as a common jar included in the deployment of all web applications.
Advantages:
Deployment is easier.
Applications will perform better initially since there is no remote access to other servers.
Disadvantages:
You cannot add more hardware to the middle tier specifically (service and DAO layers) since it is included in each web application.
Other business teams in the organisation will not have access to your business services since there is no remote interface.
2) Deploy the business service and DAO layer classes in a separate application server and expose business methods remotely.
Advantages:
You can scale up the business service and DAO layer as needed depending on load from the various web applications calling it.
Other applications in the organisation can make use of your interfaces if needed.
More scalable
You get all the advantages of Java EE.
Disadvantages:
More complex deployment.
Another server to maintain and monitor.
Could be slower since calls will be made over the network although this shouldn't be too much of a problem.
In both cases if the interfaces change the client code will need to change so this isn't a factor in the decision. Transactions should be handled on the business service method level so this shouldn't be a factor either.
I think it depends on the size of the applications as well and how scalable the solution needs to be to warrant the extra complexity of option 2 above.
I think you need to have a separate application that all the client applications will use as their data layer. The reason for this is that you want to ensure they're all accessing the database in the same way. There are also some race conditions you can get into that database transactions may not be able to prevent. The other reason is that using the database as a form of RPC is a known antipattern. If all your apps access the database directly, you will almost inevitably end up with some "event" table that the various applications poll periodically... don't do that.
Apart from the responses already provided, if you are considering having multiple applications work with the database at the same time, consider a distributed cache as part of your solution as well. The beauty of a distributed cache is that it can be accessed by multiple applications at the same time, apart from being distributed. I am not sure whether this holds true for all of the Java options, such as Ehcache, as I do not come from a Java background.
What we are currently doing is abstracting the data a level further than before. We now have a DAL that can be accessed directly, but we have put a "Model Factory" in front of it. The purpose of the Model Factory is to broker both the cache and the data layer, acting as a passthrough: the caller always calls the Model Factory, never the DAL or the caching code directly. This abstraction layer retrieves data from the DAL on a cache miss without adding complexity to the API, roughly as sketched below.
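In Java terms, the pattern might look roughly like this (all names are hypothetical, and a plain ConcurrentHashMap stands in for the distributed cache):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    // Hypothetical DAL interface and domain type.
    interface ContractDao {
        Contract findById(long id);
    }

    class Contract { /* domain fields omitted */ }

    // The "Model Factory": callers never touch the DAO or the cache directly.
    public class ContractModelFactory {

        private final ContractDao dao;
        private final ConcurrentMap<Long, Contract> cache =
                new ConcurrentHashMap<Long, Contract>();

        public ContractModelFactory(ContractDao dao) {
            this.dao = dao;
        }

        public Contract findById(long id) {
            Contract contract = cache.get(id);
            if (contract == null) {              // cache miss: fall through to the DAL
                contract = dao.findById(id);
                if (contract != null) {
                    cache.put(id, contract);
                }
            }
            return contract;                     // cache hit or freshly loaded
        }
    }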