I'm noticing a lot of projects (DropWizard, Grails, etc.) starting to embrace the notion of a "fat" JAR (using an embedded web server like Jetty or Tomcat) vs. the traditional WAR deploy. Both methods involve a single JVM process (i.e. no matter how many WARs are deployed to Tomcat, it's all the same JVM process).
Under what circumstances is either deployment method preferable over the other?
Here are some reasons:
In favor of JAR:
Simple to build and deploy.
Embedded servers like Jetty are easy to operate (see the sketch below).
Applications are easy for users to start, and because they are lightweight they can even run on personal computers.
Starting and stopping applications will require less knowledge than managing web servers.
In favor of WAR or EAR:
The server would provide features like deployment, restart, security and so on for multiple web applications simultaneously.
Perhaps a separate deployment team can handle the starting and stopping of apps.
If your supervisors like to follow rules, they will be happy to find that you are not breaking them.
Having said this, you can always provide 2 or 3 types of executables to cater to all needs. Any build tool makes this easy.
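To make the fat-JAR option concrete, here is a minimal embedded-Jetty entry point (a sketch against the Jetty 9 API; the port and handler are illustrative). Packaged with a shade or assembly plugin, it runs with java -jar application.jar:

    import java.io.IOException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.AbstractHandler;

    public class Main {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080); // embedded server, no container needed
            server.setHandler(new AbstractHandler() {
                @Override
                public void handle(String target, Request baseRequest,
                                   HttpServletRequest request,
                                   HttpServletResponse response) throws IOException {
                    response.setContentType("text/plain");
                    response.getWriter().println("Hello from the fat JAR");
                    baseRequest.setHandled(true);
                }
            });
            server.start();
            server.join(); // block until the server is stopped
        }
    }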
Distributing an application with an embedded web server allows for a standalone setup: you run it by just calling java -jar application.jar.
However, there may be users who want to control which web server is used, or who want to deploy multiple applications into a single web server (e.g. in order to prevent port clashes, especially on ports 80 and 8080). In that case a "fat" JAR might cause problems, or at least carry some unneeded code and thus a larger memory footprint.
IMHO the best approach for those two cases would be to provide two artifacts: a "fat" jar for (easier) standalone setup and an application-only war/ear for those who want to deploy the application in their own container.
I am thinking about the user's perspective. You could wrap this self-contained JAR in a .exe or .dmg and just install it, without needing additional instructions on how to deploy. Also, since you are deploying for one particular server only, you can take advantage of that particular server's features.
After looking for a long time without finding a good answer, I've come to the place where good answers are found.
I'm creating an ecosystem of independent applications (each modeled as a WebApp in a WAR) and service modules (plugins) that those WebApps can consume (each modeled as an OSGi bundle). I'm having trouble getting my head around how to architect those elements with Apache Felix and Jetty. As I understand it, I have three possible ways of doing this, but I have no idea of the implications of each:
Create a Felix container that brings up the plugins and also brings up Jetty, which eventually brings up the WebApps.
Create a Jetty server with embedded Felix to provide the plugins, and use Jetty's deployer to manage the WebApps.
Create a Jetty server with a less complicated framework than OSGi to manage the plugins, and use Jetty's deployer to manage the WebApps.
Option 1 seems to be a very orthogonal solution: everything is an OSGi module (assuming the WARs are modules), and managing the whole thing would just be a matter of creating the Felix infrastructure and bringing everything up. From my early testing, managing all these OSGi modules in development is not an easy or fast task (but most likely I'm doing something wrong).
Option 2 seems like it would work (it's the one of the two I have gotten furthest with) and is simpler to wrap my head around, since OSGi is limited to managing only the plugin infrastructure, not the applications or the server.
Option 3 I haven't even started to explore.
I'm expecting to have several independent applications (WebApps) and many, many plugins (OSGi modules), and I would like to hear from you on the pros and cons of each option in terms of maintainability and ease of development.
One of the problems here is that options 1 and 2 are both valid use cases of OSGi frameworks.
I would recommend having a detailed look at JBoss Fuse, as this is a very mature implementation of option 1 (ignore the container-based OpenShift stuff and focus on the on-premises version). The basics of it are:
a single JVM that hosts an OSGi container based on Apache Felix (it's really Apache Camel, repackaged from Apache ServiceMix, which uses Apache Karaf, which can use either Felix or the Eclipse OSGi framework. Turtles all the way down).
Applications are packaged as OSGi bundles that can include a servlet engine.
The servlet engine can then also utilise OSGi to run a plugin / framework system.
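To give a feel for what options 1 and 2 involve, here is roughly what launching an embedded OSGi framework looks like, using the standard org.osgi.framework.launch API (a sketch; the cache directory and bundle path are illustrative, and a framework implementation such as Felix must be on the classpath):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.ServiceLoader;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.launch.Framework;
    import org.osgi.framework.launch.FrameworkFactory;

    public class EmbeddedOsgiLauncher {
        public static void main(String[] args) throws Exception {
            Map<String, String> config = new HashMap<>();
            config.put("org.osgi.framework.storage", "felix-cache"); // bundle cache dir

            // Picks up whichever framework implementation (Felix, Equinox, ...)
            // is on the classpath -- the launch API itself is standard.
            FrameworkFactory factory =
                    ServiceLoader.load(FrameworkFactory.class).iterator().next();
            Framework framework = factory.newFramework(config);
            framework.start();

            // Install and start a plugin bundle (path is illustrative).
            BundleContext ctx = framework.getBundleContext();
            ctx.installBundle("file:plugins/my-plugin-1.0.jar").start();

            framework.waitForStop(0); // block until the framework shuts down
        }
    }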
You will probably not be surprised at the huge house-of-cards tooling it requires to get this stuff up and running, and then to maintain it. You won't suffer from classpath dependency clashes, but the cost is an extremely complicated toolchain for creating and deploying bundles. This also makes unit and component testing very difficult. Some of this is just due to how complex Fuse is, but trying to separate the unnecessary complexity from the necessary is a hard problem.
A hello world on Fuse, where you are digging into each part of the platform and really getting to know what's happening, would probably take a week.
Leaving Fuse aside, there are plenty of issues with either option 1 or 2:
you are still limited by the JVM and its threads. You need to take some care to ensure everything works together as it is very easy for a single bundle or plugin to happily consume the entire CPU and block other applications from doing work.
plugins have a lifecycle that needs to be managed - start, stop, load, reload, unload. There are a number of management issues that will bite right away - How do you force stop a plugin? When do you give up and restart the JVM?
who is writing the plugins, where are they hosted or built, how do you trust them and so on.
OSGi is pretty successful client side, but IMHO the reason there aren't many really well-known server-side OSGi implementations is that it's really difficult to manage with lots of threads and unpredictable request flow. People just don't get enough of the result they want - running code from different sources in varying configurations, as decided by a user - to justify the pain of making it work.
So are there any other mature plugin frameworks that solve these issues in a simple, reliable way? Not that I'm aware of! There are plenty around on GitHub and Google, but they always end up foundering on the same rocks: coming up with a reliable way of managing the plugins and making them play nicely with the other things running in the JVM.
I would much prefer to keep the independent applications independent via their own Docker containers, and then maybe look at Felix if you really need to be able to load plugins at runtime.
I've been reading about some of the (relatively) new application frameworks for Java, such as Akka, Play and Vert.x. I can't find a clear answer, however, on whether applications created with these frameworks are deployed like traditional EE applications. That is, are they packaged as WAR/EAR files and deployed to an application server like WebSphere? In my mind, a lot of the WAR/EAR infrastructure was built with traditional EE apps in mind.
By default they are not deployed like normal EE applications. These frameworks try to simplify things and make writing code faster and easier, so most of the time they have their own deployment model and bring their own web server. They also follow the Docker approach of shipping fat JARs that can be used as microservices.
So from my point of view it looks like this (I could be wrong; I have not used them all):
Akka: it's possible to add it to WEB-INF/lib in a WAR file.
Play: the native installer is recommended. They dropped WAR packaging, but there seems to be a GitHub plugin for it.
Vert.x: there seems to be no support for EAR or WAR files.
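To illustrate the self-hosting style these frameworks share, here is a minimal sketch using the Vert.x 3 core API (class name and port are illustrative):

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Vertx;

    // The application starts its own HTTP server instead of being
    // deployed into a servlet container.
    public class HelloVerticle extends AbstractVerticle {
        @Override
        public void start() {
            vertx.createHttpServer()
                 .requestHandler(req -> req.response().end("Hello from Vert.x"))
                 .listen(8080);
        }

        public static void main(String[] args) {
            Vertx.vertx().deployVerticle(new HelloVerticle());
        }
    }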
What are the implications of building a java program against the jars of one web container (say Jetty) and running it in another (say Tomcat)?
I have an application which I run in Jetty during development but which is deployed to a Tomcat server for production. (Why? Because it seems easier to develop without having to run a whole Tomcat server.)
You should compile against only the official Java EE APIs for the level you target, for any non-developer builds. Preferably by a build engine. Preferably on a different operating system than the one you develop on.
For a web application this means the appropriate servlet API as downloaded from Oracle. Similarly for an enterprise application.
In my experience this is the best way to keep it straight.
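For example, code against only the standard servlet API (a sketch; the class name is illustrative), and the same WAR will run unchanged on Jetty, Tomcat, or any other compliant container:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Depends only on the javax.servlet API (scope "provided" at build time),
    // never on Jetty or Tomcat internals.
    public class PortableServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("No container-specific classes imported.");
        }
    }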
Edit: Java EE SDK is available from http://www.oracle.com/technetwork/java/javaee/downloads/index.html. If you need an older version than Java EE 6, then follow the "Previous Releases" link.
You can get issues such as NoSuchMethodError. You can usually resolve these by making sure the versions of the JARs installed on the servers match.
You typically want to develop where you deploy. It might be slightly harder to develop with Tomcat vs. Jetty, but you have identified a potential mess of a problem with JAR conflicts, so doesn't it seem worth it to develop with Tomcat, since you deploy to Tomcat?
Also, the pain of developing against Tomcat (or your container of choice) is typically mitigated by putting in the time to write an Ant (or other) task that will deploy your code to your development container. The work cycle becomes:
1) Write new code
2) make sure tests pass
3) run your 'redeploy' script
4) poke around in the running instance
You probably want to do that.
Finally, in the spirit of loose coupling, you probably do not want to depend on container-specific libraries if you can avoid it; only do that as an absolute last resort.
I know there is a lot of information here... but there may be other people with problems like this, and I think it would be a great help to discuss this, or at least get some decent input/suggestions brewing.
Alright, let me start out by giving an overview of our environment.
We have a multi-module maven project with about 11 JARs. Dependent on those internal JARs are 9 WAR files, of which 8 are placed in an EAR file. The remaining WAR file is deployed on its own as a separate application. When the 8 WAR files are built (that reside in the EAR), they are built as skinny WAR files, so the resultant EAR file is at a minimal size with all dependencies in the APP-INF/lib section. All of this works with no issues. We currently deploy to a remote WebLogic 10.3 server that has a lot of memory and CPU, so the load is not on our individual machines. We're also publishing nightly snapshots using a continuous integration build server.
Artifacts that we're deploying:
EAR file containing 8 WAR files, 11 internal JARs, and third party libs: ~70MB
Other WAR file: ~110MB
Some of our software engineers would like to work from home, over a VPN connection, and have incremental/hot deployment options. Otherwise, because of how we deploy with WebLogic/Maven, they are forced to build an entire EAR file or the 110MB WAR file and upload them over VPN. This is not fun, and it's not fast. I have been reading up on JRebel, and was wondering if anyone else uses JRebel with a multi-module Maven project doing remote deployments, and how to do it efficiently.
From some of my reading, it is recommended to 'upload the changes' to the server and have the rebel.xml configurations read those directories for that particular deployment, which... well, brings us to the issue at hand. How do I tell Maven to dump changed resources/newly compiled class files to some other directory so that I can upload them to the server and to the appropriate folders (our server hosts something like 10+ WebLogic instances running on various ports, one instance per developer)? Or should I just have the developers share their workspace folder with the network, and configure the rebel.xml files (in a JAR, for example) to point to the appropriate //COMPUTERNAME/workspace/jarProjectName/target/classes folder?
The problem I foresee with that is that every time they start WebLogic, it's going to fetch all the .class, configuration, and JSP files across the network, because the rebel.xml configuration takes precedence, and that will be terrible over VPN. AFTER the deploy is up, though, hot deployment should work as usual. I just don't want the unnecessary overhead of transferring all the classes over the network for the first boot. Not only that: sometimes developers are at the office, turn off their computer, and then go home. What happens to JRebel/WebLogic then?
It seems like a much better idea to only see which files have changed in the various Maven projects and FTP them to the proper location on the server, so that JRebel can do its thing completely server-side. Does anyone have a good way to do this? Or maybe someone has a solution that does not involve JRebel at all. Let's talk.
It is now possible to use JRebel for remote deployment as well. It is really easy to set up; there is no need for special networking configuration, opening ports on the remote machine, etc.
http://zeroturnaround.com/jrebel/remoting
It relies heavily on the IDE plugin, but the experience is then as if you were developing on a local machine.
What you should do is have each developer run their own local instances of WebLogic.
There is a fair bit of memory usage with WebLogic, but having to do a deployment over a VPN is going to be a losing proposition. The only way this might work is using LiveRebel. But again, you will still pay a heavy penalty for network transmission, especially over a slow connection.
You are most likely better off running your app in the JDeveloper WLS and dropping the huge shared instance of WebLogic.
Why not use the Samba protocol (http://en.wikipedia.org/wiki/Samba_(software)) for this? You would just need a network drive to be used as a shared location. Developers could set the compiler output path to point to that location, and in the deployed app, the rebel.xml paths should point to the same directories. That would do the trick.
Even if the developer switches off his machine, WebLogic will keep running.
Hi,
I want to design and develop a big enterprise application using just GWT on the client side.
I want to break this enterprise application into parts, and I call each of them a module (or bundle or portlet or whatever!).
These modules might have relationships with each other and might call services that exist in other modules (on both the client and server side).
The problem is that these modules must be designed, developed, compiled, and deployed independently and dynamically, yet they will be placed and shown together in one context on the client, and the dependencies between modules should be manageable (on both the client and server side).
What can I do? What kinds of technologies can I use to build an enterprise application like this?
When you develop an application that is not divided into parts (in the way that I mentioned), you can easily deploy your application after building your project; but when you change just one form in your application, you have to rebuild and redeploy the entire application.
In this application I cannot stop the server to redeploy; I want to change and deploy only the part of the application that needs to change, not the entire application!
Of course, I have searched for ways to solve my problem.
I have found that I can use OSGi on the server side, because it provides modularity at the software-construction level and helps me manage the life cycle of modules, among the many other benefits that you know.
And I have found that I can use Gadgets on the client side.
What do you think? Are they good choices?
If they are good choices, how can I start? I know there are different implementations of OSGi, like Apache Felix, Eclipse Equinox and Knopflerfish. Which one is a good fit for this?
How GWT and OSGi can be integrated? How can they interact with each other?
Unfortunately what you want to do is not fully possible with GWT.
OSGi is a modularity solution for Java, or more accurately the JVM. A GWT client application does not run on the JVM, it runs on the browser in a JavaScript environment. Therefore OSGi cannot be used to create runtime-assembled modular GWT applications.
A GWT application can be modular at the source level, but the modules must be assembled into an application at build time. The resulting runtime is monolithic.
However, it's perfectly possible to use OSGi to host the GWT servlets, and you can use the full power of OSGi runtime modularity on the server side.
As an alternative you may want to look at Vaadin. This is a web framework that uses GWT to provide widgets, but the logic of the application runs on the server. As a result, it does support full runtime modularity through OSGi bundles. There is a cost with this approach though: your web application is quite chatty, with lots more communication going between the browser and the server than in GWT or in a traditional web application. It's possible that this approach will not scale to very large numbers of users.
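To illustrate the server-side programming model (a sketch against a Vaadin 7-era API; names are illustrative), note that even a button click is a round trip handled on the server, which is where the chattiness comes from:

    import com.vaadin.server.VaadinRequest;
    import com.vaadin.ui.Button;
    import com.vaadin.ui.Notification;
    import com.vaadin.ui.UI;

    // The UI and its listeners live on the server; the browser side is
    // generated GWT code that relays events.
    public class HelloUI extends UI {
        @Override
        protected void init(VaadinRequest request) {
            setContent(new Button("Click me",
                    e -> Notification.show("Handled on the server")));
        }
    }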
As for whether to use Equinox, Felix or Knopflerfish... it really doesn't matter. Stick to the specification, and you can easily switch between implementations.
I did just this two years ago: OSGi and GWT for no downtime deployments of project modules.
Verdict: Don't do it unless you really must.
In short, OSGi is a beast, and retrofitting an existing application for it is far from trivial. You're no longer making .war files (.ear now) and can't use the standard JARs and Maven repositories you used before. Now everything needs to be a bundle. Trouble is, a lot of stuff (GWT, Spring, tons of libs) are not bundles! You'll need to find them in an enterprise bundle repository or, even more fun, start rebundling 3rd-party sources yourself. Better yet, try telling the other devs to rewrite everything that uses their favorite lib because bundling it would be too complex.
The GWT part didn't take that much work. The way contexts for modules were handled in gwt-servlet had to be modified so each module could find its context on the server. We also had to create a way for most of the GWT services to register/unregister on load, and a discovery service so they could know who else was out there.
Now the other pain: project explosion.
Let's say you had 20 modules you wanted to deploy independently. Well, to start with, they're probably more coupled than you'd like, so better spend a few weeks breaking them into independent Maven projects and pushing common parts into a lib project. But now you've got tons of dependencies to keep track of. When someone tweaks your lib project, do you need to upgrade every project, or just 7 of them? In the classic stop-the-world deployment, you only had one version of all your code. Now you need to decide whether upgrading that forgot-password form will require you to also upgrade your index-page module. You'll have a ton of version numbers to make up and keep track of. In our case, we quickly had 55 Maven projects building all the time on our CI server. This meant some check-ins could trigger 55 builds. Eek.
Finally, JSON interfaces.
We used GWT RPC. It's magical: write an interface and everything just works. It's also serialized and gzipped over the wire. Awesome. But the serialization policies depend on object and string lookup tables that are built at compile time, per module. So project A cannot RPC to project B. Boo. We chose to use JSON instead due to its graceful degradation, that is, not failing when new, unrecognized properties are present on objects. This means you'll again need a way to keep all the backend service calls coherent in the versions of the JSON they expect and can handle. Better simulate that live upgrade beforehand, too.
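For context, this is the GWT-RPC pattern referred to above (a sketch; the service name is illustrative). The compiler builds the serialization policy for these interfaces per module, which is exactly what breaks cross-project RPC:

    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Synchronous interface, implemented by the server-side servlet.
    @RemoteServiceRelativePath("greet")
    public interface GreetingService extends RemoteService {
        String greet(String name); // arguments must be GWT-serializable
    }

    // Matching async interface used by client code.
    interface GreetingServiceAsync {
        void greet(String name, AsyncCallback<String> callback);
    }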
So, final word: possible, but why? Do you really need OSGi to hot-deploy modules because you're running a 1000%-uptime, business-critical application? Or does your boss/architect just refuse to accept that 99.999% is good enough? You probably don't need that uptime, and can achieve nearly 100% uptime with a good proxy that lets you take instances in and out of the balancer pool. Also, don't forget that even if you can upgrade your projects live on the fly, I hope you've got a way to upgrade your database on the fly without dropping a single transaction.
I think you are setting yourself for more headaches than it is worth.
I would go with deploying the whole thing in one go. If not, you will end up with mismatched pieces of the application that are out of sync with each other. GWT has both client and server components, and they need to be deployed together. If you have a zero-downtime policy, then you probably have load balancing in place.
I would use the load-balancing software to deploy the new version of the app: turn off one side (by diverting all traffic to the other side), deploy to it, do a quick smoke test, switch all traffic to the new side, and repeat with the old side.