How do I run dcm4che tools from command line after compilation? - java

I want to use the dcm2json tool, part of the dcm4che3 toolkit, but I cannot figure out how to compile and execute the command line tool. Having run
$ git clone https://github.com/dcm4che/dcm4che.git
$ cd dcm4che
$ mvn install
in the dcm4che directory root as outlined in the installation manual, all I get from compilation is a jar dcm4che/dcm4che-tool/dcm4che-tool-dcm2json/target/dcm4che-tool-dcm2json-3.3.5-SNAPSHOT.jar and a class file dcm4che/dcm4che-tool/dcm4che-tool-dcm2json/target/classes/org/dcm4che3/tool/dcm2json/Dcm2Json.class. There is no tool to execute. I can execute the standalone tools downloaded from http://sourceforge.net/projects/dcm4che/files/dcm4che3/3.3.3/ but sadly dcm2json isn't included in this (most recent sourceforge) release.
Does anyone know from where I can download a dcm2json executable or how to compile it? Any help would be really, really appreciated.
(Yes I did Google. A lot.)

The dcm4che project has a sub-project called dcm4che-assembly which, after you run mvn install on the parent dcm4che project, produces a zip that assembles all runnable artifacts, including dcm2json.
If you are curious about how the sh/bat scripts that launch the tools work, dcm4che-assembly is also where you have to look.
In fact, this zip assembly is the same one you get when you download the binary package.
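In practice the sequence looks roughly like this; the zip name and the directory it unpacks to depend on the version you build, so treat the exact paths below as an illustration only:
$ cd dcm4che
$ mvn install
$ unzip dcm4che-assembly/target/dcm4che-3.3.5-bin.zip -d ~/dcm4che-bin
$ ~/dcm4che-bin/dcm4che-3.3.5/bin/dcm2json some-file.dcm
The bin directory inside the zip contains the same sh/bat wrappers as the SourceForge download.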
Hope it helps!

I was also curious about this, so I investigated it some. I don't have a final answer, but I'll post what I've found so far in the hopes that it'll be useful to someone else, and might be the first step to an answer.
It seems that mvn install, after it does its thing, puts a whole bunch of stuff in ~/.m2. The jar files in there don't seem to run as you might expect, and based on the files in the git repo in dcm4che-assembly/src/bin, it seems they need a wrapper to run right. My preliminary toying with the wrappers doesn't seem to work—I get errors like the following:
Error: Could not find or load main class org.dcm4che3.tool.dcm2json.Dcm2Json
These wrapper files really seem to want to be installed somewhere (like /usr/local/bin?), but they do not seem to be.
On the other hand, more recent binaries are now available (currently up through 3.3.7, while git is 3.3.8-SNAPSHOT), and I'm able to use the dcm2json tool available in those. Interestingly, that executable is also a wrapper, nearly identical to the one in git. Further investigation of why it works while the one in git doesn't may lead to an answer of why the dcm4che3 tools don't magically run after being installed.
And of course the key lies in understanding how the Maven framework works.
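For anyone who wants to skip the wrappers entirely: the main class can be run directly with java -cp, as long as every jar it needs is on the classpath; the "Could not find or load main class" error above just means the classpath the wrapper builds is missing the tool jar or one of its dependencies. A rough, untested sketch from the repository root; which jars you actually need to copy is a guess here and will vary by version (mvn dependency:tree will tell you):
$ mkdir run-lib
$ cp dcm4che-tool/dcm4che-tool-dcm2json/target/dcm4che-tool-dcm2json-*.jar run-lib/
$ cp dcm4che-core/target/dcm4che-core-*.jar run-lib/
# ...plus any other dcm4che and third-party jars dcm2json depends on
$ java -cp 'run-lib/*' org.dcm4che3.tool.dcm2json.Dcm2Json some-file.dcm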

Related

Can't install package in maven

I want to work with some audio analysis and want to use the Phash perceptual algorithm. Here's a Java implementation I want to use.
I'm having trouble adding this project as a dependency; I don't see how to run mvn package test install to get this project linked.
Similarly, this project needs this other project to read the audio files. When I try to install that library using the cmake commands, I get errors like sndfile.h: No such file or directory. This Stack Overflow answer shows how to install it, but I still get the sndfile.h: No such file or directory error. I'm on a Mac, by the way. I found the sndfile.h file and copied it to the directory, but the build didn't work in the end anyway.
I feel like this isn't the right way to do this and things should "work" after doing the basic install commands listed in the repositories. What am I missing?
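For reference, the flow I expected to work, going by the Maven docs, is roughly the following (the clone URL and directory are placeholders for the actual project):
$ git clone <url-of-the-java-phash-project>
$ cd <project-directory>
$ mvn clean install
and then adding the artifact to my own pom.xml using the groupId/artifactId/version declared in that project's pom. It is this install step (and the native sndfile dependency it apparently needs) that I cannot get to work.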

Build automation to create a jar file with differences only

At my current company we build major releases about twice a year, and throughout the year, when bugs are fixed or new enhancements added, we build service packs to release.
A service pack would basically be a .jar file that is dumped onto the client's machine, and since it is first on the classpath, that is then the code that will execute. (If you do not know what I am talking about - sorry, this might be old school.)
The jar file contains only the changed class files and it is normally assembled by hand, by the developer on the job.
I am using Hudson for the above-mentioned steps. Is it possible to tell Hudson to look at two revisions and put the differences between them into a service pack (the changed class files into sp.jar)? This would enable us to automate the deployment of enhancements and bug fixes, which would definitely be an added advantage.
If anyone knows of such functionality or a setup like this, could you please share your online resources?
Thanks
You can achieve this output with an Ant script. Here are some tools that can help you compare the two jars:
clirr
java -jar clirr-core-0.6-uber.jar -o OLD.jar -n NEW.jar
Or JAPICC
japi-compliance-checker OLD.jar NEW.jar
Or PkgDiff
pkgdiff OLD.jar NEW.jar
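None of these will assemble the service pack by themselves, but once you have complete builds of both revisions you can derive the changed classes and jar them up, which is easy to script as a Hudson build step. A rough sketch, assuming OLD.jar and NEW.jar are full builds of the two revisions (files that only exist in NEW.jar would still need separate handling):
$ mkdir old new
$ unzip -q OLD.jar -d old
$ unzip -q NEW.jar -d new
# list the class files whose contents differ between the two builds
$ diff -rq old new | awk '/ differ$/ { sub(/^new\//, "", $4); print $4 }' > changed.txt
# pack only those entries into the service pack jar
$ cd new && jar cf ../sp.jar @../changed.txt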

Checkstyle & Findbugs Install

I have javac version 1.6.0_16 already installed on Windows XP and I'm using both Dr.Java and command prompt to compile and run Java programs.
I downloaded and extracted Checkstyle 5.5 and Findbugs 2.0.1. I'm trying to install Checkstyle and the instructions stated that I need to include checkstyle-5.5-all.jar in the classpath.
My question is, should I place the Checkstyle directory in the lib folder of the jdk1.6.0_16 directory and set the classpath as follows:
C:>set classpath=%C:\Program Files\Java\jdk1.6.0_16\lib\checkstyle-5.5\checkstyle-5.5-all.jar
Is this correct? Should I do the same for Findbugs? Thanks in advance
EDIT: When I added the above path using the environmental variables, and ran checkstyle hello.java, I got the error: 'checkstyle' is not recognized as an internal or external command, operable program or batch file
Maven will solve this problem for you
It sounds like you're just getting started in the world of Java. To that end, I'd suggest that you look into Maven for your build process. Also, you should be using at least JDK1.6.0_33 at the time of writing.
Essentially, Maven will manage the process of running Checkstyle, Findbugs (and you should also consider PMD) via standard plugins against your code. It will also manage the creation of the Javadocs, linked source code and generate a website for your project. Further, Maven promotes a good release process whereby you work against snapshots until ready to share your work to the wider world.
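In day-to-day use that boils down to a few commands, assuming the maven-checkstyle-plugin and findbugs-maven-plugin are configured in your pom.xml (goal names can vary slightly between plugin versions):
$ mvn checkstyle:check
$ mvn findbugs:check
$ mvn site
The first two fail the build on violations; mvn site generates the project website with the corresponding report pages.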
And if I don't use Maven?
Well, just create a /lib folder in your project and stuff your dependencies into it. Over time you will create more and more and these will get intertwined. After a while you will enter JAR Hell and turn to Maven to solve the problem.
We've all been there.
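In that case, note that there is no 'checkstyle' command as such, which is why Windows reports it as not recognized; both tools are plain jars that you pass to java, so no CLASSPATH entry is strictly needed. Something along these lines, with the paths adjusted to wherever you extracted the downloads (the /sun_checks.xml configuration is the example used in the Checkstyle documentation, and FindBugs analyses compiled classes or jars rather than .java sources):
C:> java -jar C:\tools\checkstyle-5.5\checkstyle-5.5-all.jar -c /sun_checks.xml hello.java
C:> java -jar C:\tools\findbugs-2.0.1\lib\findbugs.jar -textui myapp.jar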

Trouble installing Boilerpipe

This is the third time I've installed it. I had it working on Windows, and up until a few days ago on Linux. I've done all I can do and I don't understand how to run this Java program.
The source code is a folder with a lib directory, a src directory, some jars, and a classpath and project file.
The classpath file makes some declarations like classpathentry=src/main and path=lib, path=src.
All of these make sense. There is a folder 'main' inside 'src'.
The tiny file I'm trying to run starts off by
import de.l3s.boilerpipe.demo
I'm trying to run 'Oneliner.java'. I cannot compile it.
No matter what/where that class file is, I cannot run it. It results in a NoClassDefFoundError.
I've run it in the main, the src, the root, the demo, the ... anywhere.
I've tried compiling it in different directories, running it with various java command line switches that were recommended. Supposedly you can have it 'search' for the file, which I've yet to experience. The sheer stubbornness of this java environment is terrifying. And massively humiliating for me.
I had the same problem with installing it. The 'Getting Started' page is poor quality.
My solution was to use a python wrapper, which you can find here: https://github.com/misja/python-boilerpipe
It takes care of all of the dependencies you'll need (however, you might be missing jpype if you're on a Mac. In that case, you'll need to install it manually from: http://jpype.sourceforge.net/).
The best way to start using the boilerpipe algorithm (and to see what it is for) is to use the demo site:
http://boilerpipe-web.appspot.com/
If you want to integrate the boilerpipe library into your applications, or even intend to modify/improve the code, you will definitely need solid Java programming skills.
As a quick start I suggest that you install a recent version of the Eclipse IDE for Java Developers and import boilerpipe-core as a project. This avoids pretty much all of the classpath configuration, and almost everything should be set up correctly for you.
The classpath file you mentioned is probably ".classpath", which is part of the Eclipse project configuration. You don't need it unless you want an Eclipse project.
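If you would rather stay on the command line than use Eclipse, the NoClassDefFoundError almost always means the boilerpipe jar and its lib dependencies are missing from the compile or runtime classpath. The commands take roughly this shape; the jar name, lib directory and demo source path are guesses that differ between releases, so adjust them to your checkout:
$ cd boilerpipe-1.2.0
$ mkdir out
$ javac -d out -cp "boilerpipe-1.2.0.jar:lib/*" src/demo/de/l3s/boilerpipe/demo/Oneliner.java
$ java -cp "out:boilerpipe-1.2.0.jar:lib/*" de.l3s.boilerpipe.demo.Oneliner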

How do you handle different Java IDEs and svn?

How do you ensure that you can check out the code into Eclipse or NetBeans and work with it there?
Edit: If you are not checking in IDE-related files, you have to reconfigure the build path, includes and all this stuff each time you check out the project. I don't know if Ant (especially an Ant build file created/exported from Eclipse) will work seamlessly with another IDE.
We actually maintain a Netbeans and an Eclipse project for our code in SVN right now with no troubles at all. The Netbeans files don't step on the Eclipse files. We have our projects structured like this:
sample-project
+ bin
+ launches
+ lib
+ logs
+ nbproject
+ src
+ java
.classpath
.project
build.xml
The biggest points seem to be:
Prohibit any absolute paths in the project files for either IDE.
Set the project files to output the class files to the same directory.
svn:ignore the private directory in the nbproject directory (see the example commands after this list).
svn:ignore the directory used for class file output from the IDEs and any other runtime generated directories, like the logs directory above.
Have people using both consistently so that differences get resolved quickly.
Also maintain a build system independent of the IDEs, such as CruiseControl.
Use UTF-8 and correct any encoding issues immediately.
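For the svn:ignore points, the commands look something like this, using the directory names from the layout above and assuming bin is where both IDEs write their class files (svn propset replaces the whole property value, hence the -F file for multiple entries):
$ svn propset svn:ignore private nbproject
$ printf 'bin\nlogs\n' > /tmp/ignores
$ svn propset svn:ignore -F /tmp/ignores .
$ svn commit -m "Ignore IDE output and runtime directories"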
We are developing on Fedora 9 32-bit and 64-bit, Vista, and WindowsXP and about half of the developers use one IDE or the other. A few use both and switch back and forth regularly.
The smart-ass answer is "by doing so": unless you are actually working with multiple IDEs, you don't know if you are really prepared for working with multiple IDEs. Honest. :)
I have always seen multiple platforms as more cumbersome, as they may use different default encodings (e.g. Windows may default to ISO-8859-1, Linux to UTF-8); for me encoding has caused way more issues than IDEs.
Some more pointers:
You might want to go with Maven (http://maven.apache.org), let it generate IDE specific files and never commit them to source control.
In order to be sure that you are generating the correct artefacts, you should have a dedicated server build your deliverables (e.g. cruisecontrol), either with the help of ant, maven or any other tool. These deliverables are the ones that are tested outside of development machines. Great way to make people aware that there is another world outside their own machine.
Prohibit any machine-specific path from appearing in any IDE-specific file kept in source control. Always reference external libraries by logical path names, preferably containing their version (if you don't use Maven).
The best thing is probably to not commit any IDE related file (such as Eclipse's .project), that way everyone can checkout the project and do his thing as he wants.
That being said, I guess most IDEs have their own config file scheme, so maybe you can commit it all without having any conflict, but it feels messy imo.
For the most part I'd agree with seldaek, but I'm also inclined to say that you should at least give a file that says what the dependencies are, what Java version to use to compile, etc, and anything extra that a NetBeans/Eclipse developer might need to compile in their IDE.
We currently only use Eclipse, so we commit all the Eclipse .classpath and .project files to svn, which I think is the better solution because then everyone is able to reproduce errors and what-not easily instead of faffing about with IDE specifics.
I'm of the philosophy that the build should be done with a "lowest common denominator" approach. What goes into source control is what is required to do the build. While I develop exclusively in with Eclipse, my build is with ant at the command line.
With respect to source control, I only check in files that are essential to the build from the command line. No Eclipse files. When I set up a new development machine (seems like twice a year), it takes a little effort to get Eclipse to import the project from an ant build file, but nothing scary. (In theory, this should work the same for other IDEs, no? Surely they must be able to import from ant?)
I've also documented how to setup a bare minimum build environment.
I use maven, and check in just the pom & source.
After checking out a project, I run mvn eclipse:eclipse
I tell svn to ignore the generated .project, etc.
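Concretely, the checkout-to-working-copy sequence then looks something like this (the repository URL is a placeholder, and the ignore entries are whatever eclipse:eclipse generates for your project):
$ svn checkout <your-repo-url>/trunk myproject
$ cd myproject
$ mvn eclipse:eclipse
$ svn propedit svn:ignore .   # add .classpath, .project and .settings here
After that the project can be imported into Eclipse as an existing project.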
Here's what I do:
Only maintain in source control your ant build script and associated classpath. Classpath could either be explicit in the ant script, a property file or managed by ivy.
Write an ant target to generate the Eclipse .classpath file from the ant classpath.
Netbeans will use your build script and classpath, just configure it to do so through a free form project.
This way you get IDE independent build scripts and happy developers :)
There's a blog post on the NetBeans site on how to do 3., but I can't find it right now. I've put some notes on how to do the above on my site (quick and ugly though, sorry).
Note that if you're using Ivy (a good idea) and Eclipse, you might be tempted to use the Eclipse Ivy plugin. I've used it and found it to be horribly buggy and unreliable. Better to use 2. above.
