Morning all.
I work on maintaining a complex Java application that uses dozens of megabytes of third-party libraries. Recently I've been working on isolating a part of it to run as a standalone application, which by and large depends on the same source files as its parent. It's basically one complex wizard dialog that still exists in the original application and that I don't want to maintain twice.
Installed alongside the whole application, the standalone part works fine. But now I want to make a standalone distribution. The part I've isolated still needs some third-party libraries, but I don't know which ones. Many of the classes involved have multipurpose constructors that can take some complex UI objects from the original application, and all the libraries that make them work have to be there at compile-time, but the execution path will be sharply curtailed at run-time.
Is there some way that I can configure Eclipse to monitor which libraries are being loaded during debugging? Either a plugin or some core functionality I've missed?
Thanks.
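I'm not aware of an Eclipse plugin for exactly this, but two JVM-level approaches may get you the same information. The simplest is to add -verbose:class to the VM arguments of your debug launch configuration; the JVM then prints every class it loads along with the jar it came from, and you can collect the distinct jar paths from the console. Alternatively, a small Java agent can dump the code source of every loaded class at shutdown. A minimal sketch, assuming you package it as a -javaagent jar with a Premain-Class manifest entry (wiring not shown):

```java
import java.lang.instrument.Instrumentation;
import java.security.CodeSource;
import java.util.TreeSet;

public class LoadedJarAgent {
    // Called by the JVM before main() when launched with
    // -javaagent:loaded-jar-agent.jar (manifest needs Premain-Class).
    public static void premain(String args, Instrumentation inst) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            TreeSet<String> jars = new TreeSet<>();
            for (Class<?> c : inst.getAllLoadedClasses()) {
                if (c.getProtectionDomain() == null) continue; // some JVM internals
                CodeSource src = c.getProtectionDomain().getCodeSource();
                if (src != null && src.getLocation() != null) {
                    jars.add(src.getLocation().toString());
                }
            }
            // Each distinct entry is a jar (or classes dir) that was actually used.
            jars.forEach(System.out::println);
        }));
    }
}
```

Run the wizard through its realistic paths, exit, and the sorted list is a first cut of the jars you actually need. Anything reached only by reflection or by code paths you didn't exercise will still be missing, so treat the result as a lower bound, not a final dependency list.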
I have an RCP-based macOS application that uses OpenJDK 11.0.1 (2018-10-16) to build and run.
Currently I am trying to notarize our application. Apple has flagged various issues, mainly with the JDK. While I'm attempting to fix all of them, the issues flagged in the jdk/jmods directory seem the hardest to resolve: Apple wants us to code-sign all binaries (.dylib files) inside the jmod modules.
From what I understand from various articles about JMOD, .jmod files don't play much of a role at application runtime (I could be mistaken here, since my sources are other Stack Overflow answers); they are mainly used to create custom JREs with jlink.
Now I have done some testing without the jdk/jmods directory, and so far I haven't encountered any issues. But I'd like to be completely certain.
So, since my application is a sandboxed-with-JDK Eclipse RCP application, is it safe to get rid of the jdk/jmods directory completely? If not, what does that depend on, and what would be the ideal litmus tests to determine whether my application is completely safe from the removal of jdk/jmods?
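Not a definitive answer, but my understanding matches yours: the running JVM resolves modules from lib/modules (the jimage file), while jdk/jmods is input for jlink when building custom runtime images. As one cheap litmus test alongside a full regression pass, you could log what the boot layer actually resolved, with and without the jmods directory present. A minimal sketch (Java 9+):

```java
// Lists the modules actually resolved into the boot layer of the running JVM.
// These come from lib/modules, not from jdk/jmods, so identical output and
// behavior with and without the jmods folder is evidence, not formal proof.
public class BootLayerDump {
    public static void main(String[] args) {
        ModuleLayer.boot().modules().stream()
                .map(Module::getName)
                .sorted()
                .forEach(System.out::println);
    }
}
```

If the module list and the application behave identically in both runs, and the installed app never invokes jlink (or anything else that links a new runtime image), that is decent evidence the directory is dead weight at runtime.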
I am attempting to port an application written with a combination of C++ for the back end and Java for the front end. It relies on OpenCV 2.4.13, which is outdated, as well as several other libraries. My concern is that I don't want the end user to have to install these dependencies, as they have proven challenging to install on all but a select few Linux distributions. I believe the term I'm looking for is static linking, but I'm a bit unfamiliar with C++ compilation at the moment, so I'm unsure what steps to take to make these files portable. The Java application requires these native libraries, and while I have managed to get them to compile on one machine, the problem is getting them to run on a different machine after compilation.
Don't bother - static linking might also give you licensing problems, depending on which libraries you need.
Instead, just figure out which platforms your application is supposed to run on and package the libraries for each platform with your jar - or download them at startup, or provide them as a separate package. The exact mechanics you choose depend on your use case; the point is that you don't need to rely on system-wide installs.
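For the bundling route with native code like OpenCV's JNI part, one common pattern is to ship the platform-specific .so/.dll/.dylib files as classpath resources, extract the right one to a temp file at startup, and System.load() it. A rough sketch, with the resource layout and file names made up for illustration:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class NativeLoader {
    // Extracts a native library bundled in the jar (e.g. under /natives/<platform>/)
    // to a temp file and loads it via JNI.
    public static void loadFromJar(String resourcePath) {
        try (InputStream in = NativeLoader.class.getResourceAsStream(resourcePath)) {
            if (in == null) {
                throw new UnsatisfiedLinkError("resource not found: " + resourcePath);
            }
            String fileName = resourcePath.substring(resourcePath.lastIndexOf('/') + 1);
            Path tmp = Files.createTempFile("native-", "-" + fileName);
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            tmp.toFile().deleteOnExit();
            System.load(tmp.toAbsolutePath().toString());
        } catch (IOException e) {
            throw new RuntimeException("failed to extract " + resourcePath, e);
        }
    }
}
```

A caller would pick the resource by detected platform, e.g. NativeLoader.loadFromJar("/natives/linux-x86_64/libopencv_java2413.so") (hypothetical path). Note that the extracted JNI library may in turn expect OpenCV's C++ dependencies to be statically linked into it or shipped alongside it.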
We have several hundred microservices running on many Tomcat servers. All of these servers and microservices use a common library containing shared business logic.
When we change logic in the shared library, we sometimes break some microservices, even though we ensure backwards-compatible changes. We lack an overview of which service uses which shared class.
The question is not about regression testing; it is about giving the developer an overview of the impact of changing shared logic.
What is the best way to get this overview?
We are using Eclipse, Java 8, RTC (jazz.net), Tomcat, Windows, Cygwin, Ant.
We could use Eclipse's reference search, but this requires us to check out all microservices into the workspace and then let Eclipse resolve all references (which takes 10+ minutes).
We would prefer a tool within Eclipse, to avoid relying on external tools for this overview.
I am not sure whether it has to be a real-time search, or whether documentation generated by a job run at scheduled intervals would be sufficient.
What would you suggest?
PS: This need is part of the static program analysis practice.
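One low-tech option that fits a scheduled job and needs no Eclipse workspace: compiled .class files record every class they reference in the constant pool as an internal name (e.g. com/acme/shared/PriceCalculator, a made-up example), so a byte-level scan over each service's jars can flag which services mention a given shared class. A sketch along those lines (all class and package names hypothetical):

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class SharedClassUsageScan {
    // Returns true if any .class entry in the service jar mentions the shared
    // class, given by its internal name, e.g. "com/acme/shared/PriceCalculator".
    public static boolean references(File serviceJar, String internalName) throws IOException {
        byte[] needle = internalName.getBytes(StandardCharsets.UTF_8);
        try (JarFile jar = new JarFile(serviceJar)) {
            for (JarEntry entry : Collections.list(jar.entries())) {
                if (!entry.getName().endsWith(".class")) continue;
                if (indexOf(readAll(jar, entry), needle) >= 0) return true;
            }
        }
        return false;
    }

    private static byte[] readAll(JarFile jar, JarEntry entry) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        try (InputStream in = jar.getInputStream(entry)) {
            for (int n; (n = in.read(chunk)) > 0; ) buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }

    // Naive byte search; fine for a nightly report job.
    private static int indexOf(byte[] haystack, byte[] needle) {
        outer:
        for (int i = 0; i <= haystack.length - needle.length; i++) {
            for (int j = 0; j < needle.length; j++) {
                if (haystack[i + j] != needle[j]) continue outer;
            }
            return i;
        }
        return -1;
    }
}
```

The JDK 8 jdeps tool can produce similar class-level dependency reports per jar if you'd rather not maintain custom code; either way, the output can be published as a report developers consult before touching shared logic.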
I used the Play framework previously. Development with Play! is so fast: it has an internal Java compiler, and all the action methods are static. The result is awesome.
Nowadays I use Spring on NetBeans. NetBeans has a deploy-on-save feature, but redeployment takes more than 10 seconds. I tried JRebel, but it doesn't give the same effect. I tried Eclipse; it's worse than NetBeans. Why does Java development have to be so difficult? Is there any method for fast redeployment?
You have already mentioned JRebel. There are other options, but they are not faster: for example, the WTP plugin for Eclipse; the jetty-maven-plugin or an embedded Jetty server for development; or the FileSync plugin for Eclipse. These are the three most popular and fastest ways to deploy a project, but all of them still require a server redeploy.
You will never get the speed of the Play framework or of a dynamically compiled language. But perhaps that's not necessary?
If you change static resources like JSP, JS, or CSS, you don't need to redeploy. If you change Java code, just test it with JUnit or something similar, or write a larger chunk of code and then deploy once.
IMHO, the more experience you gain, the more rarely you deploy =) You don't need to keep checking what's going on, because you know exactly what you are doing =)
The reason Play's deployment is so fast is that it isn't an actual deployment in the original sense of the word. Play checks for modifications in your Java code, compiles just the changed file, and updates the state of the JVM to incorporate the new class.
A real deployment to an application server, or even to "just" a servlet container, is more than that. The package (war, ear) has to be expanded, the app server's internal structures have to be updated, and the app has to be started. This all takes time, because many more components are working together.
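To make the contrast concrete: dev-mode reloading of the Play kind essentially boils down to compiling the changed source and loading the fresh bytes in a new, disposable ClassLoader, rather than restarting a container. A toy illustration of that core trick (not Play's actual code):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

public class HotReload {
    // Loads freshly compiled classes from a directory in a brand-new,
    // disposable ClassLoader. Dropping all references to the old loader
    // (and its instances) lets the previous class versions be GC'd.
    public static Object freshInstance(Path classesDir, String className) throws Exception {
        URLClassLoader loader = new URLClassLoader(
                new URL[] { classesDir.toUri().toURL() },
                HotReload.class.getClassLoader());
        return loader.loadClass(className).getDeclaredConstructor().newInstance();
    }
}
```

Tools like JRebel instead rewrite classes inside the running JVM, which is why they get close to, but rarely match, Play's single-file turnaround.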
Hi
I want to design and develop a big enterprise application using just GWT on the client side.
I want to break this application into parts, and I call each of them a module (or bundle, or portlet, or whatever!).
These modules might have relations with each other and might call services that exist in other modules (on both the client and the server side).
The problem is that these modules must be designed, developed, compiled, and deployed independently and dynamically, they will be placed and shown together in one context on the client, and the dependencies between modules should be manageable (on both the client and the server side).
What can I do? What kinds of technologies can I use to build an enterprise application like this?
When you develop an application that is not divided into parts (in the way I mentioned), you can easily deploy it after building the project; but when you change just one form, you have to build the entire application again and deploy the entire application.
In this application I cannot stop the server to redeploy; I want to change and deploy only the part that needs changing, not the entire application!
Of course, I have searched for ways to solve this problem!
I have found that I can use OSGi on the server side, because it provides modularity at the software-construction level and helps me manage the life cycle of modules, among many other benefits you know about!
And I have found that I can use Gadgets on the client side.
What do you think? Are they good choices?
If they are good choices, how can I start? I know there are different implementations of OSGi, like Apache Felix, Eclipse Equinox, and Knopflerfish. Which one is a good fit for this?
How can GWT and OSGi be integrated? How can they interact with each other?
Unfortunately what you want to do is not fully possible with GWT.
OSGi is a modularity solution for Java, or more accurately the JVM. A GWT client application does not run on the JVM, it runs on the browser in a JavaScript environment. Therefore OSGi cannot be used to create runtime-assembled modular GWT applications.
A GWT application can be modular at the source level, but the modules must be assembled into an application at build time. The resulting runtime is monolithic.
However, it's perfectly possible to use OSGi to host the GWT servlets, and you can use the full power of OSGi runtime modularity on the server side.
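As a sketch of that server-side arrangement: each bundle can register its GWT-RPC servlet with the OSGi HttpService in its activator, so servlets come and go with the bundle life cycle. Roughly (the alias is a placeholder, and a real module would register its RemoteServiceServlet subclass instead of the bare servlet shown here):

```java
import javax.servlet.http.HttpServlet;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;

public class GwtModuleActivator implements BundleActivator {
    private ServiceReference<HttpService> ref;
    private HttpService http;

    @Override
    public void start(BundleContext ctx) throws Exception {
        ref = ctx.getServiceReference(HttpService.class);
        http = ctx.getService(ref);
        // Placeholder servlet to keep the sketch self-contained; a real GWT
        // module would register its RemoteServiceServlet subclass here.
        http.registerServlet("/mymodule/rpc", new HttpServlet() {}, null, null);
    }

    @Override
    public void stop(BundleContext ctx) throws Exception {
        http.unregister("/mymodule/rpc"); // tear down with the bundle
        ctx.ungetService(ref);
    }
}
```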
As an alternative, you may want to look at Vaadin. This is a web framework that uses GWT to provide widgets, but the logic of the application runs on the server. As a result, it does support full runtime modularity through OSGi bundles. There is a cost with this approach, though: your web application is quite chatty, with much more communication between the browser and the server than in plain GWT or a traditional web application. It's possible that this approach will not scale to very large numbers of users.
As for whether to use Equinox, Felix or Knopflerfish... it really doesn't matter. Stick to the specification, and you can easily switch between implementations.
I did just this two years ago: OSGi and GWT for no downtime deployments of project modules.
Verdict: Don't do it unless you really must.
In short, OSGi is a beast, and retrofitting an existing application for it is far from trivial. You're no longer making .war files (.ear now) and can't use the standard jars and Maven repositories you used before. Now everything needs to be a bundle. Trouble is, a lot of stuff (GWT, Spring, tons of libs) are not bundles! You'll need to find them in an enterprise bundle repository or, even more fun, start rebundling 3rd-party sources yourself. Better yet, try telling the other devs to rewrite everything that uses their favorite lib because bundling it would be too complex.
The GWT part didn't take that much work. The way contexts for modules were handled in gwt-servlet had to be modified so each module could find its context on the server. We also had to add a way for most of the GWT services to register/unregister on load, plus a discovery service so they could know who else was out there.
Now the other pain: project explosion.
Let's say you had 20 modules you wanted to deploy independently. To start with, they're probably more coupled than you'd like, so better spend a few weeks breaking them into independent Maven projects and pushing common parts into a lib project. But now you've got tons of dependencies to keep track of. When someone tweaks your lib project, do you need to upgrade every project, or just 7 of them? In the classic stop-the-world deployment you had only one version of all your code. Now you need to decide whether upgrading the forgot-password form requires you to also upgrade your index-page module. You'll have a ton of version numbers to make up and keep track of. In our case, we quickly had 55 Maven projects building all the time in our CI server, which meant a single check-in could trigger 55 builds. Eek.
Finally, JSON interfaces.
We used GWT RPC. It's magical: write an interface and everything just works. It's also serialized and gzipped over the wire. Awesome. But the serialization policies depend on object and string lookup tables that are built at compile time, per module, so project A cannot RPC to project B. Boo. We chose JSON for its graceful degradation, that is, not failing when new, unrecognized properties appear on objects. This means you'll again need a way to keep all the backend service calls coherent about which versions of the JSON they expect and can handle. Better simulate that live upgrade beforehand, too.
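To illustrate the graceful-degradation point (Jackson shown purely as an example; we're not saying it was this exact stack, and any tolerant JSON binding works the same way): an older consumer can be told to skip fields a newer producer has added, instead of failing the call.

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// Older consumer's DTO: unknown fields sent by a newer module are skipped
// instead of blowing up the deserialization (Jackson fails on unknown
// properties by default, hence the annotation).
@JsonIgnoreProperties(ignoreUnknown = true)
public class UserSummary {
    public String id;
    public String displayName;

    public static UserSummary parse(String json) throws Exception {
        return new ObjectMapper().readValue(json, UserSummary.class);
    }
}
```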
So, final word: possible, but why? Do you really need OSGi to hot-deploy modules because you're running a "1000% uptime" business-critical application? Or does your boss/architect just refuse to accept that 99.999% is good enough? You probably don't need that uptime, and you can achieve nearly 100% with a good proxy that lets you take instances in and out of the balancer pool. Also, don't forget that even if you can upgrade your projects live on the fly, I hope you've got a way to upgrade your database on the fly without dropping a single transaction.
I think you are setting yourself for more headaches than it is worth.
I would go with deploying the whole thing in one go. Otherwise you will end up with mismatched pieces of the application that are out of sync with each other. GWT has both client and server components, and they need to be deployed together. If you have a zero-downtime policy, then you probably have load balancing in place.
I would use the load-balancing software to deploy the new version of the app: turn off one side (by diverting all traffic to the other side), deploy to it, do a quick smoke test, switch all traffic to the new side, and repeat with the old side.