We have several hundred microservices running on many Tomcat servers. All of these Tomcat servers and microservices use a common library containing shared business logic.
When we change logic in the shared library, we sometimes break some microservices, even though we make sure the changes are backwards compatible. We lack an overview of which service uses which shared class.
The question is not about regression testing; it is more about giving the developer an overview of the impact of changing shared logic.
What is the best way to get this overview?
We are using Eclipse, Java 8, RTC (jazz.net), Tomcat, Windows, Cygwin, Ant.
We could use Eclipse's reference search; however, this requires checking out all microservices into the workspace and letting Eclipse resolve all references (which takes 10+ minutes).
We would prefer a tool within Eclipse, so we can avoid external tools for this overview.
I am not sure whether it has to be a real-time search or whether documentation generated by a job run at scheduled intervals would be sufficient.
What would you suggest?
PS: This need is part of the practice of static program analysis.
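For illustration, a scheduled job along these lines could produce such an overview: run the JDK 8 jdeps tool over every service archive and keep only the class-level dependencies that point into the shared library. This is only a rough sketch; the shared-library package pattern and the directory layout are placeholders, not our actual names.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Rough sketch of a scheduled "who uses the shared library?" report.
    // For every service JAR below a directory it shells out to the JDK 8 jdeps
    // tool and keeps only class-level dependencies on the shared packages.
    public class SharedLibraryUsageReport {

        // Placeholder: package prefix of the shared business-logic library.
        private static final String SHARED_PACKAGES = "com\\.example\\.shared\\..*";

        public static void main(String[] args) throws IOException, InterruptedException {
            Path servicesDir = Paths.get(args[0]); // folder holding all service JARs
            List<Path> jars;
            try (Stream<Path> walk = Files.walk(servicesDir)) {
                jars = walk.filter(p -> p.toString().endsWith(".jar"))
                           .collect(Collectors.toList());
            }
            for (Path jar : jars) {
                System.out.println("=== " + jar.getFileName() + " ===");
                Process jdeps = new ProcessBuilder(
                        "jdeps", "-verbose:class", "-e", SHARED_PACKAGES, jar.toString())
                        .redirectErrorStream(true)
                        .start();
                try (BufferedReader out = new BufferedReader(
                        new InputStreamReader(jdeps.getInputStream()))) {
                    out.lines().forEach(System.out::println);
                }
                jdeps.waitFor();
            }
        }
    }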
Related
Morning all.
I work on maintaining a complex Java application that uses dozens of megabytes of third-party libraries. Recently I've been working on isolating a part of it to run as a standalone application, which by and large depends on the same source files as its parent. It's basically one complex wizard dialog that still exists in the original application and that I don't want to maintain twice.
Installed alongside the whole application, the standalone part works fine. But now I want to make a standalone distribution. The part I've isolated still needs some third-party libraries, but I don't know which ones. Many of the classes involved have multipurpose constructors that can take some complex UI objects from the original application, and all the libraries that make them work have to be there at compile-time, but the execution path will be sharply curtailed at run-time.
Is there some way that I can configure Eclipse to monitor which libraries are being loaded during debugging? Either a plugin or some core functionality I've missed?
Thanks.
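One approach that may help (not Eclipse-specific): the JVM itself can report this. Running the wizard with -verbose:class logs where every class is loaded from, and a small java.lang.instrument agent can reduce that to the set of JARs that were actually touched. The sketch below is illustrative only; LoadedJarAgent is a made-up name, and the agent JAR needs a Premain-Class entry in its manifest.

    import java.lang.instrument.Instrumentation;
    import java.security.CodeSource;
    import java.util.TreeSet;

    // Run with: java -javaagent:loaded-jar-agent.jar ...
    // On JVM shutdown, prints every distinct location (JAR or directory)
    // from which classes were actually loaded during the session.
    public class LoadedJarAgent {

        public static void premain(String agentArgs, Instrumentation inst) {
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                TreeSet<String> locations = new TreeSet<>();
                for (Class<?> c : inst.getAllLoadedClasses()) {
                    if (c.getProtectionDomain() == null) {
                        continue;
                    }
                    CodeSource src = c.getProtectionDomain().getCodeSource();
                    if (src != null && src.getLocation() != null) {
                        locations.add(src.getLocation().toString());
                    }
                }
                locations.forEach(System.out::println);
            }));
        }
    }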
Hi
I want to design and develop a big enterprise application using just GWT on the client side.
I want to break this enterprise application into parts, and I call each of them a module (or bundle or portlet or whatever!).
These modules might have relations with each other and might call services that exist in other modules (on both the client and server side).
The problem is that these modules must be designed, developed, compiled and deployed independently and dynamically; they will be placed and shown together in one context on the client, and the dependencies between modules should be manageable (on both the client and server side).
What can I do? What kinds of technologies can I use to build an enterprise application like this?
When you develop an application that is not divided into parts (in the way I mentioned), you can easily deploy it after building the project, but when you change just one form you have to build the entire application again and deploy the entire application.
In this application I cannot stop the server to redeploy; I want to change and deploy only the part that needs to be changed, not the entire application!
Of course I have searched for ways to solve my problem!
I have found that I can use OSGi on the server side because it provides modularity at the software-construction level, helps me manage the life cycle of modules, and has many other benefits that you know.
And I have found that I can use Gadgets on the client side.
What do you think? Are they good choices?
If they are good choices, how can I start? I know that there are different implementations of OSGi, like Apache Felix, Eclipse Equinox and Knopflerfish. Which one is a good fit for this?
How can GWT and OSGi be integrated? How can they interact with each other?
Unfortunately what you want to do is not fully possible with GWT.
OSGi is a modularity solution for Java, or more accurately the JVM. A GWT client application does not run on the JVM, it runs on the browser in a JavaScript environment. Therefore OSGi cannot be used to create runtime-assembled modular GWT applications.
A GWT application can be modular at the source level, but the modules must be assembled into an application at build time. The resulting runtime is monolithic.
However, it's perfectly possible to use OSGi to host the GWT servlets, and you can use the full power of OSGi runtime modularity on the server side.
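As a minimal sketch of what that looks like (MyGwtRpcServlet and the /myapp/greet alias are made-up names), a bundle can register its GWT RPC servlet with the standard OSGi HttpService:

    import javax.servlet.ServletException;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.http.HttpService;
    import org.osgi.service.http.NamespaceException;

    // Bundle activator that publishes a GWT RemoteServiceServlet through the
    // OSGi HttpService. The alias must match the path the compiled GWT client
    // uses for its RPC calls.
    public class Activator implements BundleActivator {

        public void start(BundleContext context) throws ServletException, NamespaceException {
            ServiceReference ref = context.getServiceReference(HttpService.class.getName());
            HttpService http = (HttpService) context.getService(ref);
            // MyGwtRpcServlet is a placeholder for your own RemoteServiceServlet subclass.
            http.registerServlet("/myapp/greet", new MyGwtRpcServlet(), null, null);
        }

        public void stop(BundleContext context) {
            // The Http Service unregisters this bundle's aliases when it stops.
        }
    }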
As an alternative you may want to look at Vaadin. This is a web framework that uses GWT to provide widgets, but the logic of the application runs on the server. As a result, it does support full runtime modularity through OSGi bundles. There is a cost with this approach though: your web application is quite chatty, with lots more communication going between the browser and the server than in GWT or in a traditional web application. It's possible that this approach will not scale to very large numbers of users.
As for whether to use Equinox, Felix or Knopflerfish... it really doesn't matter. Stick to the specification, and you can easily switch between implementations.
I did just this two years ago: OSGi and GWT for no downtime deployments of project modules.
Verdict: Don't do it unless you really must.
In short, OSGi is a beast, and retrofitting an existing application for it is far from trivial. You're no longer making .war files (.ear now) and can't use the standard JARs and Maven repositories you used before. Now everything needs to be a bundle. Trouble is, a lot of stuff (GWT, Spring, tons of libs) are not bundles! You'll need to find them in an enterprise bundle repository or, even more fun, start rebundling third-party sources yourself. Better yet, you get to tell the other devs to rewrite everything that uses their favorite lib because bundling it would be too complex.
The GWT part didn't take that much work. The way contexts for modules were handled in gwt-servlet had to be modified so each module could find its context on the server. We also had to make a way for most of the GWT services to register/unregister on load, and a discovery service so they could know who else was out there.
Now the other pain: project explosion.
Let's say you had 20 modules you wanted to deploy independently. Well, to start with they're probably more coupled than you'd like, so better spend a few weeks breaking them into independent Maven projects and pushing common parts into a lib project. But now you've got tons of dependencies to keep track of. When someone tweaks your lib project, do you need to upgrade every project or just 7 of them? In the classic stop-the-world deployment, you only had one version of all your code. Now you need to decide whether upgrading that forgot-password form will also require you to upgrade your index-page module. You'll have a ton of version numbers to make up and keep track of. In our case, we quickly had 55 Maven projects building all the time in our CI server. This meant some check-ins could trigger 55 builds. Eek.
Finally, JSON interfaces.
We used GWT RPC. It's magical. Write an interface and everything just works. It's also serialized and gzipped over the wire. Awesome. But the serialization policies depend on object and string lookup tables that are built at compile time per module. So project A cannot RPC to project B. Boo. We chose JSON because of its graceful degradation, that is, not failing when new, unrecognized properties are present on objects. This means you'll again need a way to keep all the backend service calls coherent in the versions of the JSON they expect and can handle. Better simulate that live upgrade beforehand too.
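For context, the RPC interfaces that "just work" inside a single module look roughly like the sketch below (GreetingService and greet are made-up names); the serialization policy generated behind them is exactly the per-module, compile-time artifact that stops project A from calling project B.

    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

    // Synchronous interface, implemented by a RemoteServiceServlet on the server.
    // GreetingService and greet() are placeholder names for illustration.
    @RemoteServiceRelativePath("greet")
    public interface GreetingService extends RemoteService {
        String greet(String name);
    }

    // Matching async interface that the GWT client code actually calls.
    interface GreetingServiceAsync {
        void greet(String name, AsyncCallback<String> callback);
    }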
So, final word: possible, but why? Do you really need OSGi to hot-deploy modules because you're running a 1000% uptime, business-critical application? Or does your boss/architect just refuse to accept that 99.999% is good enough? You probably don't need that uptime, and you can achieve nearly 100% uptime with a good proxy that lets you take instances in and out of the balancer pool. Also, even if you can upgrade your projects live on the fly, I hope you've got a way to upgrade your database on the fly without dropping a single transaction.
I think you are setting yourself up for more headaches than it is worth.
I would go with deploying the whole thing at a pop. If not, you will end up with mismatched pieces of the application that are out of sync with each other. GWT has both client and server components, and they need to be deployed together. If you have a zero-downtime policy then you probably have load balancing in place.
I would use the load-balancing software to deploy the new version of the app. Turn off one side (by diverting all traffic to the other side), deploy to it, do a quick smoke test, switch all traffic to the new side, and repeat with the old side.
Disclaimer: I've never used the technique described below, so there may be some mistakes or misunderstandings in its description.
I have heard that some teams (developers) use a 'pre-configured' Tomcat. As I understand it, they add various JARs to Tomcat's lib folder and make other changes.
I once read a thread on a Java forum where one developer wrote something about recompiling (or reassembling?) Tomcat for certain needs.
Just yesterday I heard a conversation in which one developer said that his team-mates were not able to deploy the project until he gave them his configured Tomcat version.
So I wonder, what is this all about and why do they do it? What benefits can they gain from it?
Open-source projects have always been a space for customization (I believe that's part of their charm), and I think it's acceptable to modify Tomcat for very specific in-house requirements.
But in general I would recommend avoiding a solution that requires heavy modifications of open-source tools; there is probably another way to do what you want using what already exists ;) (This does not apply to generally accepted changes, i.e. community add-ons, bug fixes, and all the stuff you publish in the project spaces that gets accepted and made part of the final solution.)
As for external libs, I would mention them in the project README as platform requirements. So having a pre-configured server is not that crazy; in fact it can save you some time, but it's a bonus. You should mention your dependencies somewhere anyway :)
Hope it helps.
Using a customized version of Tomcat could make upgrading very difficult. The benefit of having an application that does not require a specially configured server is that you can easily move to a new version or even to an entirely different app server (e.g. Jetty, GlassFish).
I'd also point out that you do not specify the context of the changes. The special configuration may not have been application-specific; it may have been required for security settings, compatibility with the web server being used, etc. You should talk to the developers in question and learn more about why they require the specialized configuration.
This is the mechanism needed to provide e.g. JDBC pools and other objects over JNDI, since the implementation has to live in the Tomcat classloader. That is a necessity.
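As an illustration (jdbc/MyDS is a made-up resource name): the pool is declared in Tomcat's context.xml and the driver JAR lives in Tomcat's lib directory, while the application itself only performs a JNDI lookup:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    // The application never touches the driver or pool implementation directly;
    // it only looks up the container-managed DataSource by its JNDI name.
    public class PoolLookup {

        public static DataSource sharedPool() throws NamingException {
            InitialContext ctx = new InitialContext();
            return (DataSource) ctx.lookup("java:comp/env/jdbc/MyDS"); // made-up name
        }
    }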
It may also be used to allow multiple deployments to share the same single jar file instead of having it in each WAR file. That is in my opinion generally a bad idea which should be avoided unless absolutely necessary. Keep to standard mechanisms if at all possible.
What's the point of using Ant, Maven, or Buildr? Won't the built-in build in Eclipse or NetBeans work fine? I'm just curious what the purpose and benefit of dedicated build tools are.
Dependency Management: The build tools follow a component model that provides hints on where to look for dependencies. In Eclipse / NetBeans, you have to depend on a JAR and you don't really know whether that JAR has been updated or not. These build tools 'know' when dependencies have been updated (generally because of good integration with your source control repository), recalculate transitive dependencies, and ensure that everything is always built with the latest versions.
Access Control: Java, apart from class-level access control, has no higher abstraction. With these build tools you can specify exactly which projects you want to depend on you, and control visibility and access at a higher level of granularity.
Custom Control: The Eclipse / Netbeans build always builds JAR files. With custom build mechanisms, you could build your own custom (company-internal) archive with extra metadata information, if you so wish.
Plugins: There are a variety of plugins that come with build tools and can do various things during the build, from something basic like generating Javadoc to non-trivial things like running tests, collecting code coverage, static analysis, generating reports, etc.
Transport: Some build systems also manage transport of archives - from a development system to a deployment or production system. So, you can configure transport routes, schedules and such.
Take a look at some continuous integration servers like CruiseControl or Hudson. Also, the features page of Maven provides some insight into what you want to know.
On top of all the other answers: the primary reason I keep my projects buildable without being forced to use NetBeans or Eclipse is that it makes it so much easier to set up automated (and continuous) builds.
It would be rather complicated (in comparison) to set up a server that somehow starts Eclipse, updates the source from the repository, builds it all, sends a mail with the result, and copies the output somewhere on disk where the last 50 builds are stored.
If you are a single developer or a very small group, it can seem that a build system is just overhead. As the number of developers increases, though, it quickly becomes difficult to track all changes and ensure developers are keeping in sync. A build system reduces the rate at which those overheads increase as your team grows. Consider the issues of building all the code in Eclipse once you have 100+ developers working on the project.
One compelling reason to have a separate build system is to ensure that what has been delivered to your customers is compiled from a specific version of the code checked into your SCM. This eliminates a whole class of "works on my box" issues and in my opinion this benefit is worth the effort on its own in reduced support time. Isolated builds (say on a CI server) also highlight issues in development, e.g. where partial or breaking changes have been committed, so you have a chance to catch issues early.
A build in an IDE builds whatever happens to be on the box, whereas a standalone build system will produce a reproducible build directly from the SCM. Of course this could be done within an IDE, but AFAIK only by invoking something like Ant or Maven to handle all the build steps.
Then of course there are also the direct benefits of build systems. A modular build system reduces copy-paste issues and handles dependency resolution and other build-related issues. This should allow developers to focus on delivering code. Of course every new tool introduces its own issues, and the learning curve involved can make it seem that a build system is a needless overhead (just Google "I hate Maven" to get some idea).
The problem with building from the IDE is that there are tons of settings affecting the build. When you use a build tool, all the settings are condensed, in a more or less readable form, into a small set of scripts or configuration files. In the ideal case this allows anybody to execute a build with hardly any manual setup.
Without the build tool it might become next to impossible to even compile your code in, let's say, a year, because you'll have to reverse-engineer all the settings first.
Different features. For example, Maven can scan your dependencies and download them, and their dependencies, so you don't have to. Even for a medium-sized project there may be a very large number of dependencies. I don't think Eclipse can do that.
#anonymous,
Why do you assume that I, a member of your team, am using an IDE all the time? I might want to build the code on a headless build server; is that OK?
Would you also deny me the right to use a continuous integration engine?
May I fetch dependencies from a central repository, please? How can I do that?
Would you tie me to a specific IDE? I can't run Eclipse easily on my very old laptop, but I'll buy a new one.
Maybe I should also uninstall Subversion and use patches or just zip folders on an SFTP/FTP/Samba share.
The build tools allow you to do a build automatically, without human intervention, which is essential if you have a code base from which many applications are built (like we do).
We want to be certain that each and every one of our applications builds correctly after any code-base change. The best way to check that is to let a computer do it automatically using a continuous integration tool. We just check in code, and the CI server notices there is a change and rebuilds all modules influenced by that change. If anything breaks, the responsible person is emailed directly.
It is extremely handy being able to automate things.
To expand on Jens Schauder's answer, a lot of those build options end up in some sort of .project file. One of the evils of Eclipse is that it stores absolute path names in all of its project files, so you can't copy a project file from one machine to another, which might have its workspace in a different directory.
The strongest reason for me, is automated builds.
IDEs just work on a higher abstraction layer.
NetBeans natively uses Ant as its underlying build tool, and recent versions can open Maven projects directly. Hence, your typical NetBeans project can be compiled with Ant, and your Maven project already is a NetBeans project.
As with every GUI vs. CLI discussion, IDEs seem easier for beginners, but once you get the idea, doing complex things becomes cumbersome.
Changing the configuration in an IDE means clicking somewhere, which is easy for basic things, but for complex stuff you need to find the right place to click. Furthermore, IDEs tend to hide the important information. Clicking a button to add a library is easy, but you may still not know where the library is, etc.
In contrast, using a CLI isn't easy to start with, but it quickly becomes familiar and allows you to do complex things more easily.
Using Ant or Maven means that everyone can choose his or her own IDE to work on the code. Telling someone to install IDE X to compile it is much more overhead than saying "run <build command> in your shell". And of course you can't explain the former to an external tool.
To sum up, the IDE uses a build tool itself. In the case of NetBeans, Ant (or Maven) is used, so you get all the advantages and disadvantages of those. Eclipse uses its own thing (as far as I know) but can also integrate Ant scripts.
As for the build tools themselves Maven is significantly different from Ant. It can download specified dependencies up to the point of downloading a web server to run your project.
In many projects, developers will often invoke the build process manually, but this is not suitable for large projects, where it is very difficult to keep track of what needs to be built, in what sequence, and what dependencies there are in the building process. Hence we use build tools for our projects.
Build tools perform a variety of tasks that developers would otherwise do by hand in their daily work. These include:
1. Downloading dependencies.
2. Compiling source code into binary code.
3. Packaging that binary code.
4. Running tests.
5. Deploying to production systems.
What Java web development environment is the best for absolutely minimizing the build-deploy-test cycle time?
Web development environment: JBOSS, Tomcat, Jetty? Deploy WAR exploded? Copy WAR or use symbolic links? There are factors here I don't know about.
Build-deploy-test cycle? The amount of time it takes to test a change in the browser after making a change to the source code or other resources (including Java source, HTML, JSP, JS, images, etc.).
I am looking to speed up my development by reducing the amount of time I spend watching Ant builds and J2EE containers start. I want the Ruby on Rails experience, or as close as I can get.
I'd prefer a solution that is web framework agnostic, however if a particular framework is particularly advantageous, then I'd like to hear about it.
Assume all the standard tools are in use: Hibernate, Spring, JMS, etc. If stubbing/mocking support infrastructure is required to make this work, I'm OK with that. In fact, I'm OK with having a development environment that is very different from our production environment if it saves me enough time.
You should probably take a look at Javarebel:
http://www.zeroturnaround.com/javarebel/
and this thread here:
How to improve productivity when developing Java EE based web applications
JBOSS uses Tomcat for its servlet/JSP engine, so that's a wash.
Tomcat does support hot deploy.
Jetty's pretty small and starts quickly, but it doesn't support hot deploy.
Eclipse is merely an IDE. It needs a servlet/JSP engine of some kind. If it's like IntelliJ, you can use any Java EE app server or servlet/JSP engine you'd like.
IntelliJ is pretty darned fast, and you don't have to stop and start the server every time you rebuild. It works off the exploded WAR, so things happen fast.
Building (it used to be compiling) is a sign of our times. We need quick validation of our thoughts and our actions. Whenever I find myself building too many times, it is usually a sign that I'm not focused, that I don't have a plan. For me this is the time to stop and think. Make a list of things that need to be done (this is web-framework agnostic), do them all, and test them all after one build.
JBoss Seam together with JBoss Developer Studio is good for hot-deploying everything aside from EJBs (SLSBs, SFSBs and entities need a redeploy).
Have you considered Grails?
Deployment is as fast as it can get with Google App-Engine + GWT (optional) + Eclipse Plugin.
Never seen anything faster.
Maven 2 and Eclipse. mvn eclipse:eclipse <- pure awesomeness. Also, WTP within Eclipse works great (and Maven generates working WTP projects).
Small web containers will load faster than overloaded web containers with the kitchen sink built in (.. cough .. JBoss).
Some design decisions slow build times (e.g. aspect-weaving based toolkits add an aspect-weaving phase to compile times).
Avoid building components that can only be tested after long, elaborate load cycles. Caches are a prime culprit here. If your system has deep dependencies on a global cache scattered everywhere, you'll need to load the cache every time you need to test something.
Build unit-testable components, so you can run pieces instead of the whole thing.
I find that reasonably structured projects compile, deploy, and start up in a few to 10 seconds, which is usually fine.
GWT in Eclipse is probably the fastest I can think of. Using the hosted-mode browser for your tests, you can debug and change your code without restarting anything. You just need to click the refresh button in the browser and the changes are there (Java, CSS, etc.). One other thing: GWT is adding this same support for normal browsers (Firefox, IE, Safari), so you can debug from within them the same way. These changes are coming in 2.0. See http://code.google.com/events/io/sessions/GwtPreviewGoogleWebToolkit2.html
Have you tried using Eclipse Java EE and telling it to deploy to a server managed by Eclipse? Tomcat and JBoss work pretty well this way. It also allows you to change code in a method, press Ctrl-S, and have the class updated inside the server.
MyEclipse also works pretty well like this.
JRuby on Rails. Develop on whatever platform you want, deploy to standard Java servers.
I think the best way to avoid long build-deploy-test cycles is to write unit tests for your code. This way you can find bugs without waiting for the build/deploy phases.
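For example, a plain JUnit test like the sketch below (PriceCalculator is a made-up class standing in for your own logic) runs in milliseconds from the IDE or from Ant, with no container start-up at all:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // PriceCalculator is a placeholder for any piece of your own business logic
    // that can be exercised without deploying to a container.
    public class PriceCalculatorTest {

        @Test
        public void appliesTenPercentDiscountToLargeOrders() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
        }
    }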
For JSPs you can edit the files directly in the JBoss work folder:
> cd $JBOSS_HOME/server/default/tmp
> find -name myJspFile.jsp
./tmp/vfs/automountd798af2a1b44fc64/Jee6Demo.war-bafecc49fc594b00/myJspFile.jsp
If you edit the file in the tmp folder you can test your changes just hitting the browser refresh button.