Version of WAR after deploying - java

I am currently building snapshots of a project using Java, Maven, Jenkins and Artifactory, and running the WAR on Tomcat.
After I copy a WAR file out of Artifactory into Tomcat and rename it, I have essentially "lost" the version of the application. I have two remote teams that are constantly dropping wars across integration machines and it very quickly becomes difficult to tell what is running where.
Eventually I would like to be able to query the app for its version, or even just print the app version to a log file at startup, but first I have to get the version into the WAR file itself.
I'm not entirely certain which entity is responsible for creating the version but it looks like it is Artifactory creating the SNAPSHOT version, like "myapp-1.2.0-20140514.145130-1.war".
The only solution I can think of is to stop using snapshots, increment maven versions manually at checkin, and then run a script that injects the maven version into a .java file before it is compiled. Yuck. Is there a way I can get this version into the app with my current setup?

This is a fairly common problem. In such an environment, with multiple teams and frequent commits, you might want more than just a simple version - namely who initiated this build, when, and why/how. Take a look here: Job Exporter Plugin. It outputs a nicely formatted (Java-friendly) properties file that you can easily import into your application to get all the details about the job that built and deployed that specific version of your app. There is no out-of-the-box mechanism for reading that file on the application side, but you can easily write your own using the regular Java APIs.
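As an illustration only, a minimal sketch of that reading step could look like the following (the resource name build.properties and the key names are placeholders - use whatever file your build actually writes):

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class BuildInfo {
    private static final Properties PROPS = load();

    private static Properties load() {
        Properties props = new Properties();
        // The file may be absent (e.g. for local builds), so fall back to an empty set.
        try (InputStream in = BuildInfo.class.getResourceAsStream("/build.properties")) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            // Ignore: a local build simply has no build metadata.
        }
        return props;
    }

    public static String get(String key) {
        return PROPS.getProperty(key, "unknown");
    }
}

At startup you could then log something like BuildInfo.get("build.number") or BuildInfo.get("build.user"), and local builds that lack the file just report "unknown".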
The other option is to use Maven plugins to do the work for you. While this is a little more portable, it's actually harder to implement. One way is to use the Maven Build Number plugin. It can use timestamps, store the build number in a properties file and do a lot of other stuff... but it's local, as that file should probably not be committed. Another option is to rely on your repository (SVN, Git or other) and get the last revision ID (i.e. with this plugin). That's handy as well, but it's neither perfect nor easy to read.
I would suggest going with option 1 - the Jenkins plugin works great and you get much more useful information. Just remember to guard the read with an if-clause, so local builds can fall back on something else, or simply skip reading the file if it's missing (as in the sketch above).
Additionally, I have stumbled upon a plugin that extracts your version from the Maven or SBT build process quite nicely - the Semantic Versioning Plugin. It does what's advertised - extracts the version from the POM (or whatever) and makes it available as a file and as a variable in your build process. So you are free to use both: include the file in your build and do whatever your heart wishes, AND/OR use the variable to affect the build flow in Jenkins. Because this plugin still has a couple of bugs, I would like to point you, for now, to my own build of it with the fixes already in, which can be obtained from here. I will take my own version down the moment all the fixes are merged into the official plugin...

Related

Running Java program on iSeries on command line

I've created a Java program and I want to run it on the iSeries. I've been able to get it to run from QSH, so I know it compiles and runs fine, but I need to run it from the command line, not QSH. The program requires the jsch-0.1.55.jar file to work correctly, and I'm not 100% sure how to call the program together with the jar file it references.
I've tried
RUNJVA CLASS(ANL0106J) CLASSPATH('/JAVA/Jars/jsch-0.1.55.jar')
That didn't work. Then I tried
RUNJVA CLASS('/JAVA/Jars.jsch-0.1.55.jar':. ANL0106J) CLASSPATH('/Java/Jars/jsch-0.1.55.jar')
That didn't work either. What am I doing wrong?
Having done this successfully many, many times, I'd say your last try is pretty close, but change the CLASS parameter to point to the class that contains your main method. ANL0106J seems like a pretty weird class name. An example would be:
CLASS(com.company.test.ApplicationMain)
With the CLASSPATH pointing to the required jars using the full path names. For example:
RUNJVA CLASS(com.company.test.ApplicationMain) CLASSPATH('/test/app.jar:/test/dependency.jar')
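For reference, a minimal entry-point class matching that CLASS parameter might look like the sketch below (the package/class name and the connection details are placeholders; the JSch calls are just there to show the dependency from the CLASSPATH being used):

package com.company.test;

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class ApplicationMain {
    public static void main(String[] args) throws Exception {
        // Uses the jsch jar supplied via the CLASSPATH parameter of RUNJVA.
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "sftp.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();
        System.out.println("Connected: " + session.isConnected());
        session.disconnect();
    }
}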
To make matters a bit easier, you could even include your dependencies inside your JAR file by using something like Maven or Gradle to create your builds, which can then be configured to generate fat jars. Essentially, those are jars that contain the other JAR files your application depends on. That way you can also be pretty sure that your application will continue to work, even after you update a single jar file on your IBM i machine, for example. The Gradle Shadow plugin, for example, is pretty easy to set up in a Gradle project and will do this for you. Then it's just a matter of running the shadowJar Gradle task (or bootJar for a Spring Boot project) and using the RUNJVA command, simply pointing to your single JAR. Don't get caught up in dependency management hell, please. Save yourself and future devs by using something like Maven or Gradle. Gradle/Maven can even be used to manage dependencies through a Maven repository, with a tool such as Sonatype Nexus, which can also be hosted locally. If your JAR has a valid manifest, you don't have to do anything else. It would look like this:
RUNJVA CLASS('/test/app.jar')
This is especially useful with CI, which can build the JAR for you from a Git repository and place the fat jar in the correct path, without you having to do a single thing. Setting up Jenkins on an AS/400 isn't that difficult at all using the WebSphere application server, which is an option that can be used to host WAR files, to put it simply (it can do a lot more than that though :P).
Hell, using only a single jar for the RUNJVA command should also speed up the time it takes to start your application, since it only needs to verify a single jar. Just food for thought. Here's the Maven entry for jsch, by the way:
https://mvnrepository.com/artifact/com.jcraft/jsch/0.1.55
On a side note for Java/React devs: yes, you can use the RUNJVA command to modernize IBM i development and run Spring Boot applications! We are successfully running React front-end applications against a Spring Boot backend, and it works extremely fast, as expected :)
(Same answer I gave you on Reddit, posted here as well to make it visible to others looking for this on Stack Overflow.)

Is there a way to generate a hash based on all the java source files in a package and then use that value within a log message at runtime?

This is kind of a ridiculous thing to do, but for some back-story we develop a number of java packages that become part of a large, unwieldy system. We have no control over the builds for this large system. We just check code into a subversion repo and jenkins takes over from there. The problem we've had in the past is that we have no indication that the code we checked in is actually running in any of the test or production environments. The group doing these builds is incompetent at best and can't seem to provide any real information. This continuous build environment does not lend itself to us changing minor version numbers every time a code change is made. And there are about a dozen packages so I'd prefer not to have to update version numbers in all those places all the time. (but I am prepared to accept that as the only viable solution if needed).
What I'd like to do, and this is completely pie-in-the-sky, is have maven do something as part of the building of our packages, where it generates a hash value based on all the *.java files in a package and outputs that value to a resource file which also gets picked up when packaged. The classes within these *.java files, when outputting log messages, will be able to append this hash value somewhere in the log string since the logging framework will read this hash value from this resources file. So at runtime, we can tell which "version" of our packages is actually running. This has its downsides for sure...
but part of me just wants to see if this will work. I'm fairly new to maven and I've been unsuccessful at searching for just the right thing in this case.
let the flames begin...
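For the runtime half of that idea, a minimal sketch might look like the following, assuming the build writes a properties file such as source-hash.properties onto the classpath (the file name and key are placeholders):

import java.io.InputStream;
import java.util.Properties;

public final class SourceHash {
    // Hash of the package's .java files, written by the build into a classpath resource.
    public static final String VALUE = read();

    private static String read() {
        try (InputStream in = SourceHash.class.getResourceAsStream("/source-hash.properties")) {
            if (in == null) {
                return "unknown";
            }
            Properties p = new Properties();
            p.load(in);
            return p.getProperty("source.hash", "unknown");
        } catch (Exception e) {
            return "unknown";
        }
    }
}

A logging layout, or a thin wrapper around your logger, could then append SourceHash.VALUE to every message.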
What you are looking for is a way to operate with build numbers instead of artifact versions. This is a very common-sense approach, which usually works better, so there is nothing wrong with it.
In order for it to work, you need to annotate the artifacts with build name and number metadata.
Luckily, when using Artifactory and Jenkins there is a very easy way to do it without the hassle of adding build numbers to files inside the archives, storing additional information in Jenkins, or playing with filtered resources in Maven. All you need to do is use the Jenkins Artifactory plugin (if you have access to the Jenkins configuration) or the Maven Artifactory plugin (this is totally under your control, since you can change the pom files).
Once configured, the artifacts will be annotated with build names and numbers and you'll be able to get all the information about the build just by knowing the artifact checksum:
Checksum search -> Tree Browser -> Builds tab -> Build browser.
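If you need the checksum of an already-deployed file to feed into that search, a quick throwaway sketch for computing it (SHA-1 here, and the WAR path is just a placeholder) could be:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class ChecksumOfWar {
    public static void main(String[] args) throws Exception {
        byte[] bytes = Files.readAllBytes(Paths.get("/opt/tomcat/webapps/myapp.war"));
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(bytes);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        // Paste this value into Artifactory's checksum search.
        System.out.println(hex);
    }
}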

Why do we need Maven or Ant, if we already have Eclipse?

I think this question is an extension of Compare to the IDE for Java, do we still need Ant?
There are answers to that question, but I would like to see a concrete example of using Maven or Ant over just Eclipse.
When I develop in Eclipse, Eclipse does everything for me and I just need to click the Run button. Eclipse can also export your code as a runnable jar, or even a .exe for Windows.
So I really don't know why I need Maven or Ant.
And if I do need one, which should I choose, Maven or Ant?
Because your colleague might prefer NetBeans or IDEA
Because the settings might vary from one eclipse install to another
Because you might want to get your dependencies automatically
Because you want to automate the complete build: build, jar, apply static code analysis, run the unit tests, generate the documentation, copy to some directory, tune some properties depending on the environment, etc.
Because once it's automated, you can use a continuous integration system which builds the application at each change or every hour to make sure everything still builds and the tests still pass...
Because Maven uses convention over configuration.
Because your IDE may not support some fancy code generation/transformation you need.
Because a build script documents the build process.
Eclipse is a development environment. But it's not a build tool.
I personally hate Maven, but YMMV. There are many alternatives: gradle, buildr, etc.
Maven strikes me as a case of something written by a bunch of past-their-sell-by-date c-shell script kiddies who think autoconf is leading edge code automation and don't understand that object code requires an object environment to be in any way efficient either for development or deployment. Ant was bad enough, but Maven combines all the worst features of Ant and Ivy. It doesn't create an object environment, and it doesn't play well with tools that do.
Simply, an object environment should have all class objects, i.e. the objects that determine the types of objects available to the system, live and available at all times. From there I can do whatever I want, instantiate multiple objects of a class, set up various sequences and instantiation rules, etc. Since the environment should be completely live, I shouldn't need a build tool at all. In terms of deploying my app, it's not difficult for the environment to simply throw out all the class objects that are never referenced by code in the namespaces that make up my app. The garbage collector in the JVM does almost the same thing on the fly today. At that point I have a deployment environment made up of my objects and all the objects (primarily class objects) that my objects reference, i.e. my application and all dependencies. This is how virtual machines work. (that our VMs are so poorly written we need to run a Spring VM on a Java VM on a Linux VM on a VMWare VM on another Linux VM is another example of the idiocy of software development). When dependencies get updated, it's simple enough for the environment to prompt the developer to merge his old code to the new libs, merge the code using the new libs down to the older version, or keep both versions. Prompting encourages the developer to make the slight modifications that are sometimes necessary to avoid having twenty versions of every library, while tools like Maven hide the fact that you have twenty versions and result in the massive runtime bloat common in Java apps.
In the Java development space Eclipse comes closest to being a proper object environment, although granted there are plenty of plugins that break the paradigm in various ways. Most of the reasons given for using Maven fall apart when examined critically.
NetBeans and IDEA are overblown text editors, not object environments, but if you do want to use their tools for something not covered by the thousands of Eclipse plugins, both can import and maintain Eclipse projects. Your build will just be inordinately slow compared to developers using Eclipse - but then, it would be just as slow in a pure NetBeans or IDEA project anyway.
Not a serious reason to use Maven.
The ease of exporting / importing settings in Eclipse (something every team should do in any IDE in any case) makes the different settings problem nothing more than laziness on the part of the development team (or a religious argument over spaces vs tabs, lol).
Again, not a serious reason to use Maven.
Team environment? Show me a team that doesn't already use a repository like GIT or SVN. Why do we need to duplicate both the functionality and the maintenance headache by setting up Nexus repos as well?
That one's actually a good reason NOT to use Maven.
Running a server build? Great idea - but shouldn't that be kicked off by code that's actually checked in to the source repo, rather than a random build that happens to get pushed to Nexus? This brings up a point against Git, particularly Git with Maven. Because my local test doesn't prove the server build works (due to differences between the Maven configuration in Jenkins and in Eclipse), I can't just work on a branch, test locally, then commit; I have to commit my changes to a different branch to see the server Maven build fail, then commit a further change to fix the problem, resulting in an unreadable source history in the repo. Checked-in code should at the very least build and pass unit tests, which would be guaranteed if Git and Maven were out of the picture.
Exporting a headless build from Eclipse is trivial if you actually look into it - all you need is Ant or Gradle, the developer build already maintained by Eclipse, and a few Eclipse jars (Eclipse will export all the necessary files for a headless build to a directory or zip file, or ftp them to the build server). Server build tools like Hudson/Jenkins can pull updated code from most source repos and call any build script, so there's no dependency on Maven. With Maven you either force developers to use a tool suited to nobody but build engineers (the order of magnitude longer it takes to build, even using M2E, is enough to make that case), or you live with the possibility that the server build doesn't work quite like the workstation build - which is still true if you go through all the hassle of integrating the two using the plethora of M2E plugins. Either way you get a slower and more fragile workstation build for the sake of an equally slow and more fragile server build. On every Maven-based project I've worked on I've seen transient Hudson/Jenkins errors that don't show up in Eclipse unless you have absolutely every possible M2E plugin installed and correctly configured, and most developers never do.
Seems like another great reason to avoid Maven.
That doesn't cover some of the more fundamental problems with Maven, such as its namespaces breaking Java namespaces and XML namespaces; its build unit (the POM) having no relation to anything in the actual deployment environment (think about it: when you separate things via POMs, what are you actually accomplishing in the finished product? Nothing. All it accomplishes is a false sense that you've separated concerns and functionality into different build units that all run as one monolithic piece of code); the hassle of manually maintaining complex configuration files, which only gets worse if you happen to need OSGi or another container and have to maintain other config files that affect and are affected by the Maven config with very little obvious sense to it; the problems caused by trying to run unit tests without a full environment for the code to execute in; and the myriad versions not only of dependencies but of Maven-specific plugins (I've actually seen JAR hell in the Maven build itself, where multiple Maven plugins were using conflicting dependencies - one of the problems Maven was supposed to solve).
Yes, you can build object code with Maven. You can also write pure object code in C or even assembler, but I don't know why you'd want to.
The best reason to avoid Maven is the phenomenal amount of work required to de-mavenize a set of projects when you get sick of all the problems noted above (and numerous others not mentioned).
The mindset, inherited from C development, that the development cycle consists of write code, compile, assemble, build, deploy, test, do over again, is hopelessly outdated in an object environment. At some point we need to tell all the people with this mindset that they need to relearn how to develop, period. Doing so would remove any need for Maven, Git, and a host of other tools that do nothing but waste time.
Object development should be done in a live object environment, where a code change is tested as it is saved since the modified object is live. Deployment should consist of removing development only artefacts from that environment, creating a runtime that has everything used by the running app in development and test.
I'm currently dealing with a problem caused by creating deployment assemblies for an OSGi app using the maven-assembly plugin. The app works perfectly in the Eclipse environment, which hot deploys all code changes into a running OSGi container within the environment. However the configuration doesn't survive intact through the maven-assembly process, despite having a very good configuration/build engineer whose sole job is to accomplish that process. If we got rid of Maven (very difficult now due to the amount of code, but possible) and used the BNDTOOLS Eclipse plugin we could simply export the Eclipse build as an Ant or Gradle headless build (note, the OSGi developers who write BND and BNDTOOLS don't support Maven, and for good reason, the Maven plugin is written by the Felix developers who themselves use Netbeans and Maven, and no live environment other than at the end of the deploy cycle), where both tools set up the same environment as Eclipse, without the GUI objects that are only meant for developers anyway. The result would be an identical configuration and build for deployment. This would easily save 2-3 hours per day per developer currently spent watching slow Maven or M2E builds, and free up the config/build engineer to do more testing of the app on the deployment hosts.
Getting over the mindset of write/compile/assemble/build/deploy/test is the only major impediment. Pretending you're coding on a 1979 VT100 terminal instead of a modern machine doesn't make you a 'real' developer, it just demonstrates that your methods are 35 years out of date.
Of the developers on the team, none of the others properly understands a live object environment like Eclipse sufficiently to get it to work as a live environment with M2E and OSGi, and they are top developers, they just haven't been exposed to it due to the prevalence of outdated command line development tools. They only realized it was possible to do so when we were pair programming to solve the configuration problem and I was sharing my screen, causing one of the other team members to exclaim "that's how you write code so damn fast", when he saw my code change instantly test itself in the background OSGi container. I can use a bash shell when I have to, such as when I'm looking at logs on a remote server, in fact I do so fairly efficiently precisely so I can get out of that environment as quickly as possible and return to the 21st century.
There are so many advantages to using Ant or Maven.
Maven is more or less an updated concept of Ant.
Instead of giving you a bullet-point answer, I have decided to take another approach to answering this question. I'll ask you a simple question. I'm assuming here that you are a developer, or at least have some sort of OO programming background.
Say your manager asked you to copy two hundred directories while ignoring the jar, war and ear files within them, and, once copied, to deploy those two hundred directories to another destination but with only the .class files, copying the rest of the files to yet another destination, and so on.
Doing this in Java means lots of logic and lots of code, and it would not be extensible or adaptable to change. With that in mind, Ant or Maven will accomplish and prepare all of this on the fly with far less overhead for your application. The amount of code in Ant or Maven would be about a quarter of the equivalent Java.
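To make that comparison concrete, here is a rough sketch of just the "copy everything except jar/war/ear" part in plain Java (the paths are placeholders); a few lines of Ant's copy task with an exclude pattern would replace all of it:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class CopyWithoutArchives {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get("/projects/source");  // placeholder
        Path target = Paths.get("/projects/target");  // placeholder

        try (Stream<Path> paths = Files.walk(source)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                String name = p.getFileName().toString().toLowerCase();
                // Skip the archive types we were told to ignore.
                if (name.endsWith(".jar") || name.endsWith(".war") || name.endsWith(".ear")) {
                    continue;
                }
                Path dest = target.resolve(source.relativize(p));
                if (Files.isDirectory(p)) {
                    Files.createDirectories(dest);
                } else {
                    Files.copy(p, dest, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}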
Click on the links for more technical benefits:
Maven
Ant - I could not find an authoritative answer with benefits, but I'm sure this will convince you ;)
Maven and Ant are used to script builds so that they may be executed in batch jobs like with Jenkins or on the command line.
In fact Eclipse itself uses Ant extensively to build plugins.
If you were to learn one of the two, learn Maven; it's the one pretty much everyone uses these days (replacing Ant).
Maven is generally used to build the plugins or jars for a particular application.
Suppose you have developed an application but you don't want to add the jars it needs manually. In this situation Maven or Ant is very helpful. Once you have written your code, just go to Run As -> Maven Build (click on Maven Build); it will fetch all the required plugins and jars and include them in your application's library build path. You might wonder how the application gets those jars: for each application there is an XML file named pom.xml where references to all the jars are kept for downloading purposes.

Java equivalent to VS solution file

I'm a C# guy trying to learn Java. I understand the syntax and the basic architecture of the Java platform, and have no problem doing smaller projects myself, but I'd really like to be able to download some open source projects to learn from the work of others. However, I'm running into a stumbling block that I can't seem to find any information on.
When I download an open source .NET project, I can open the .sln file with Visual Studio and everything just loads. Sure, there's occasionally a missing reference or something, but there's really very little configuration required to get things going. I'm not sensing the same ease of use with Java. I'm using Eclipse at the moment, and it feels like for every project I have to create a brand new Eclipse project using "create from existing source", and almost nothing compiles properly without significant reconfiguration. In the case of web projects, it's even worse, because Eclipse doesn't appear to support creating a web project from existing source. I have to create a standard Java project from source, then apparently modify the project file to include the bindings for the web toolkit stuff to work properly.
Assuming I want to be able to contribute to a project later on, I shouldn't have to be making such drastic changes to the file structure to get my IDE to a workable state. What am I missing?
The best way to go about this is to first remove the IDE from the equation. In C# there is only one environment, so the presence of the default IDE is assumed. In Java a default IDE does not exist.
In the end, Java is all about Java source files and supporting jars. If you figure out what those are, you're 99% of the way home. Then you can apply your favorite build system to the set. Some projects require a runtime environment, like a web server to handle the JSP files. If you understand what the basic setup is (as specified by the specification) you can quickly set up your IDE to handle that.
If I get a project with java files and supporting jars, I fire up Eclipse, create a new project, point it to the project's base directory and Eclipse will automatically detect what it finds and set up the project accordingly.
But projects often come with a build environment included. The trick is to figure out which one:
if a build.xml file is present, it is using Ant. This is a "make"-like tool. You can execute "ant" in the directory where the build file is (if you have Ant installed) and it will try to compile. All IDEs like Eclipse and NetBeans recognize the build.xml file and allow for starting Ant from inside the IDE. There is no guarantee the supporting jars will be present.
if a pom.xml file is present, it is using Maven. Maven is also a make-like tool, but enforces a much stricter build cycle. Plus (and this is probably its biggest advantage) it automatically downloads supporting jars. If you have Maven installed you will be amazed at what it downloads... just sit tight, it'll work out in the end. IDEs usually require a plugin to support pom.xml, but then you get the whole project set up at once.
if a .project file is present, it usually is an Eclipse project
if a nbproject directory is present, it is a NetBeans project
Getting to know a new build environment / IDE is more work than trying to set up a project in the one you know. So I always try to get it running in Eclipse. Usually projects are quite simple to get running once you know your IDE.
Having multiple ways of doing things is not always pleasant, but it's the cost of having an open community. If there is only one IDE it makes things easier, but I like the fact that there are more people trying to figure out what the best way is to get things done.
In some cases you really may have to make drastic changes. A well-designed build system will require no configuration at all on most platforms and perhaps a few changes on exotic platforms. However, there is no single standard build system for Java; some people use Eclipse, some people use Apache Ant, and others use Apache Maven or Apache Maven2.
If you were to create a project from scratch, then Maven or Ant is probably the ideal way to go. If you use the NetBeans IDE, projects that you create will automatically contain an Ant build file (so that it can be built on all systems using Ant), but will add additional metadata so that it is recognized by NetBeans IDE. If you create a Maven project, either using Maven directly or using an IDE such as Eclipse or NetBeans, then that same project can be loaded in either NetBeans or Eclipse without any additional configuration changes (although you may need to install a plugin for Eclipse for it to recognize Maven projects; NetBeans recognizes Maven projects out of the box).
If you are starting a project from scratch, you may be interested in the Java Project Template. If you are contributing to an existing project, how you view/edit the project depends on the build system chosen; if the project already uses Maven or Ant, loading it with other IDEs should be fairly simple, while if the project uses a specific IDE's quirks or uses some more exotic build system, it may be harder.

Deploy java (command line) app using Netbeans / ant

I've finally managed to create a NetBeans project out of an old standalone (not web) Java application which consisted only of individual .java sources. Now I have basically two questions regarding NetBeans/Subversion interaction and application deployment:
Do you check in all the Netbeans project files into the repository, normally?
If I build the project using Netbeans (or ant) I get a .jar file and some additional jar libraries. In order for the app to run properly on the server, some additional config files and directories (log/ for example) are needed. The application itself is a J2SE application (no frameworks) which runs from the command line on a Linux platform. How would you deploy and install such an application? It would also be nice if I could see what version of app is currently installed (maybe by appending the version number to the installed app path).
Thanks for any tips.
No, not usually. Anything specific to NetBeans (or Eclipse, IntelliJ, etc.) I don't check in; try to make the project build from the command line with your Ant script and produce exactly what you want. The build.xml is something that can be used from other IDEs, or with Anthill or CruiseControl for automated builds/continuous integration, so that should be checked in. Check in what is needed to produce/create your artifacts.
You don't specify what type of server, or what exact type of application. Some apps are deployed via JNLP/WebStart to be downloaded by multiple users, and have different rules than something deployed standalone for one user on a server to run with no GUI as a monitoring application. I cannot help you more with that unless you can give some more details about your application, the server environment, etc.
Regarding the config files, how do you access those? Are they static and never going to change (something you can load using a ResourceBundle)? You can add them to the jar file and look them up via the ResourceBundle, but it all depends on what you are doing there. If they have to be outside the jar file so they can be modified without recompiling, have them copied into place by an installer script.
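For the static case, a minimal sketch of the ResourceBundle approach might look like this (assuming a config.properties packaged on the classpath inside the jar):

import java.util.ResourceBundle;

public class AppConfig {
    // Looks up config.properties on the classpath (e.g. bundled inside the jar).
    private static final ResourceBundle BUNDLE = ResourceBundle.getBundle("config");

    public static String get(String key) {
        return BUNDLE.getString(key);
    }
}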
As for directories, must they already exist? Or does the application check for their existence and create them if necessary? If the app can create them when absent, you have no need to create them up front. If they need to be there, you could make it part of the install script to create those folders before the jar files are installed.
Version number could be as simple as adding an about box somewhere in the app, and looking up the version string in a config/properties file. It has to be maintained, but at least you would be able to access something that would let you know you have deployed build 9876.5.4.321 (or whatever version numbering scheme you use).
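Alternatively, if your build stamps an Implementation-Version attribute into the jar's manifest (an assumption about your Ant build, not something it does by default), a small sketch for reading it at runtime would be:

public class VersionInfo {
    public static String version() {
        // Reads Implementation-Version from the jar's MANIFEST.MF;
        // falls back when running from unpacked classes (e.g. inside the IDE).
        Package pkg = VersionInfo.class.getPackage();
        String v = (pkg != null) ? pkg.getImplementationVersion() : null;
        return (v != null) ? v : "development";
    }
}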
Ideally, you should not tie down your application sources and config to a particular IDE.
To your questions:
I suggest you do not. Keep your repository structure independent of the IDE.
You might have to change your application so that its structure is generic and can be edited in any IDE.
Is this a web app? A standalone Java app? If you clarify these, it would be easier to answer your query.
We don't check in the /build or the /dist directories.
We tend to use this structure for our Netbeans projects in SVN:
/project1/
    /trunk/
    /tags/
        /1.0
        /1.1
    /binaries/
        /1.0
        /1.1
When a change is needed, we check out the NetBeans project from trunk/, make changes to it and check it back in. Once a release of the project is needed, we do an SVN copy of the NetBeans project files to the next tag version. We also take a copy of the deployable (JAR or WAR) and place it in the version directory under binaries, along with any dependencies and config files.
By doing this we have a clean, versioned deployable that is separate from the source. Our deployables are versioned in the name - project1-1.0.jar, project1-1.1.jar and so on.
I disagree with talonx about keeping your source non-IDE-specific - by not storing the IDE files in SVN along with your source you are adding extra complication to the checkout, change, check-in, deploy cycle. If you store the IDE project files in SVN you can simply check out the project, fire up the IDE and hit build. You don't have to go through the steps of setting up a new project in the IDE, including the files you SVNed, setting up dependencies, etc. It saves time and means all developers are working with the same setup, which reduces errors and discrepancies. The last thing you want is for a developer to check out a project to make a small bug fix and have to spend time finding dependencies and setting stuff up.
To answer question #2 -- who's your consumer for this app?
If it's an internal app and only you (or other developers) are going to be deploying it, then what you have is perfectly all right. Throw in a README file explaining the required directories.
If you're sending it out to a client to install, that's a different question, and you should use an installer. There are a few installers out there that wrap an ant script and your resources, which is a nice approach particularly if you don't need the GUI... just write a simple ant script to put everything in the right place.
Version number is up to you -- naming the JARs isn't a bad idea. I also have a habit of printing out the version number on startup, which can come in handy.
