Thinking that the answer to this is pretty obvious, but here it goes:
When I am working on a small project for school (in Java), I compile it.
At my co-op we are using Ant to build our project.
I think that compiling is a subset of building. Is this correct? What is the difference between building and compiling?
Related:
What is the difference between compiling and building?
The "Build" is a process that covers all the steps required to create a "deliverable" of your software. In the Java world, this typically includes:
Generating sources (sometimes).
Compiling sources.
Compiling test sources.
Executing tests (unit tests, integration tests, etc).
Packaging (into jar, war, ejb-jar, ear).
Running health checks (static analyzers like Checkstyle, FindBugs, PMD; test coverage; etc.).
Generating reports.
So as you can see, compiling is only a (small) part of the build. Best practice is to fully automate all the steps with tools like Maven or Ant, and to run the build continuously, which is known as Continuous Integration. The sketch below shows the manual equivalent of just the compile and package steps.
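As a rough illustration (the file, package, and directory names are made up for the example), a build tool replaces manual commands like these, and also runs the tests, analyzers, and reports in the same reproducible step:

javac -d build/classes src/com/example/App.java            # compile: .java -> .class
jar cfe build/app.jar com.example.App -C build/classes .   # package: .class -> .jar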
Some of the answers I see here are out-of-context and make more sense if this were a C/C++ question.
Short version:
"Compiling" is turning .java files into .class files
"Building" is a generic term that includes compiling and other tasks.
"Building" is a generic term describes the overall process which includes compiling. For example, the build process might include tools which generate Java code or documentation files.
Often there will be additional phases, like "package", which takes all your .class files and puts them into a .jar, or "clean", which cleans out .class files and temporary directories.
Compiling is the act of turning source code into object code.
Linking is the act of combining object code with libraries into a raw executable.
Building is the sequence composed of compiling and linking, possibly with other tasks such as installer creation.
Many compilers handle the linking step automatically after compiling source code.
What is the difference between compile code and executable code?
In simple words:
Compilation translates Java code (human-readable) into bytecode, so the virtual machine understands it.
Building puts all the compiled parts together and creates (builds) an executable.
A build is a compiled version of a program.
To compile means to convert (a program) into machine code or a lower-level form in which the program can be executed.
In Java: a build is a lifecycle that contains a sequence of named phases.
For example, Maven has three build lifecycles; the following is the default build lifecycle (see the example after the list):
validate - validate the project is correct and all necessary information is available
compile - compile the source code of the project
test - test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed
package - take the compiled code and package it in its distributable format, such as a JAR
integration-test - process and deploy the package if necessary into an environment where integration tests can be run
verify - run any checks to verify the package is valid and meets quality criteria
install - install the package into the local repository, for use as a dependency in other projects locally
deploy - done in an integration or release environment; copies the final package to the remote repository for sharing with other developers and projects
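Invoking any phase runs all of the earlier phases in the lifecycle first. For example:

mvn package

executes validate, compile, and test before packaging, and nothing beyond package (such as install or deploy).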
Actually, you are doing the same thing. Ant is a build system based on XML configuration files that can perform a wide range of tasks related to compiling software. Compiling your Java code is just one of those tasks. There are many others, such as copying files around, configuring servers, assembling zips and jars, and compiling other languages such as C.
You don't need Ant to compile your software. You can do it manually, as you are doing at school. Another alternative to Ant is a product called Maven. Both Ant and Maven do the same thing, but in quite different ways.
Look up Ant and Maven for more details. A minimal Ant build file is sketched below.
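For the flavor of it, a minimal Ant build file might look like this (a sketch only; the project name, source directory, and output paths are assumptions):

<project name="myapp" default="jar" basedir=".">
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
  </target>
  <target name="jar" depends="compile">
    <!-- packaging: one of the build steps beyond plain compilation -->
    <jar destfile="build/myapp.jar" basedir="build/classes"/>
  </target>
  <target name="clean">
    <delete dir="build"/>
  </target>
</project>

Running "ant" then compiles and packages in one step; "ant clean" removes all generated output.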
In Eclipse and IntelliJ, the build process consists of the following steps:
cleaning the previous packages,
validate,
compile,
test,
package,
integration,
verify,
install,
deploy.
Compiling is just converting the source code to binary; building is compiling plus linking any other files needed into the build directory.
Related
My project has a complicated Maven structure, with lots of subprojects and 66 pom.xml files. But for my current task, I'm only modifying one resource file, a .xsl file for which I need to test lots of small changes. The build takes 9 minutes, and it spends most of its time recompiling my Java source files that haven't changed. Is there any way to use Maven to package and build my .war file without recompiling my Java source code?
This is a tricky one. Maven is not good at building incrementally.
My first shot would be to try @gaborsch's suggestion. But be careful: the results can be inconsistent, depending on your changes. You need to experiment to figure out whether this works for you.
The second shot would be to speed up the build. You can build in parallel (see the example below), or you can build only part of your multi-module project if only parts are affected (and you are building SNAPSHOT versions).
Gradle is much better at building incrementally (I am a Maven guy, but I have to admit it). But switching to Gradle is probably not the right way to go.
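For instance, Maven 3 can build modules in parallel; the thread count here is an illustrative choice:

mvn -T 1C clean package

(-T 1C means one thread per CPU core; a plain number like -T 4 also works.)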
I will suggest what I answered to a similar question:
mvn package -pl my-child-module
This command will build the project my-child-module only.
It won't skip compilation, but at least it will skip building the modules you don't need.
-pl, --projects Build specified reactor projects instead of all projects
-am, --also-make If project list is specified, also build projects required by the list
-amd, --also-make-dependents If project list is specified, also build projects that depend on projects on the list
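Combining these flags, a typical invocation (the module name is the hypothetical one from above) is:

mvn package -pl my-child-module -am

which builds my-child-module plus whatever it depends on, and nothing else.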
In my IDEA project a Scala module depends on a Java module. When I try to compile the Scala module, only scalac is triggered. It compiles both Java and Scala sources.
I'd like scalac to compile only the Scala module, because javac is much faster for Java sources (and my Java project is a big one).
How can I make IDEA use a different compiler for different modules?
My workaround is to (for each dependency on a Java module):
Delete the module dependency in the project configuration
Add a dependency on the appropriate compile output directory, "MyJavaModule/target/classes"
Obviously I'm not happy with that, because every time I reimport the Maven project I need to repeat all of this to get fast compilation. I hope somebody knows a better way.
Clarification: I'd like to stress that tools like SBT or Maven don't solve my problem. It is not about compilation alone; it's about compilation in IDEA, required for things like the Scala Worksheet or running unit tests from IDEA. My goal is to have the full range of IDEA niceties (syntax highlighting, intelligent auto-completion, auto-imports, etc.) with the compilation speed of SBT. Right now I have to either tolerate long compilation times (due to dependencies on my Java module) or fall back to bare-bones REPL and testing in SBT.
Randall Schulz has asked the right question in the comment: "Why does it matter which tool does the compilation?"
Up until now I believed that IDEA needs to compile all classes itself if you want to use its nice features (like IDEA's Scala Console or running tests from within it). I was wrong.
In fact, IDEA will pick up classes compiled by any other tool (like the great SBT, for instance). You just need to ensure that all classes are up to date before using any of IDEA's helpful features. The best way to do that is:
launch continuous incremental compilation in the background (for example, by issuing "~ compile" in SBT)
remove the "make" step in IDEA's run configurations
That's all! You can then use all the cool features of IDEA (not only syntax highlighting and code completion, but also auto-imports in the Scala Console and quickly running selected unit tests) without switching between different windows.
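In an SBT session, that continuous loop is just this (prompt characters shown for illustration):

$ sbt
> ~ compile

SBT then recompiles on every source change, and IDEA picks up the fresh .class files.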
That's the workflow I missed until now! Thanks to everybody for all the comments about the issue.
You should look at using a dependency management suite like Apache Ivy or Apache Maven. Then put your Java source in a separate artifact, and have your Scala project depend on the Java project artifact.
If you go the Maven route, there is a Scala plugin.
Probably the simplest way to get Scala and Java files compiled together is SBT - the Simple Build Tool. Just create a project (add dependencies and so on) and compile it. Scala + Java compilation works out of the box. I've switched to SBT from Maven.
If you have a complex POM, or another reason not to migrate to SBT, you can try to configure the POM instead. Just adding (and possibly configuring) the Scala plugin should be enough; a sketch follows. I hope it will not break the Java support.
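The basic plugin wiring looks roughly like this (the coordinates are those of the current scala-maven-plugin and are an assumption; older setups used org.scala-tools:maven-scala-plugin):

<plugin>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>scala-maven-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>      <!-- compile Scala (and mixed Java) main sources -->
        <goal>testCompile</goal>  <!-- compile Scala test sources -->
      </goals>
    </execution>
  </executions>
</plugin>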
I'm a little shocked that (if?) this precise question hasn't been asked, but 15 minutes of scanning SO didn't turn up an exact match. (If I'm wrong, point me the right way and vote to close...)
Question 1:
What are the best practices for laying out Java projects under an Ant build system?
For our purposes, we have the following context (perhaps most of which is irrelevant):
Most developers are using Eclipse (not all)
Project is maintained in subversion
Project builds have recently migrated to Hudson, in which we want to use the release plugin to manage releases, and some custom scripts to handle automated deployment
This project is a "conventional" application, a sort of "production prototype" with a very limited pool of users, but they are at remote sites with airgap separation, so delivering versioned, traceable artifacts for easy installation, manual data collection/recovery, and remote diagnosis is important.
Some dependencies (JARs) are included in the SVN repo; others may be fetched via the ant script at build time, if necessary. Nothing fancy like Ivy yet (and please don't tell me to switch to Maven3... I know, and we'll do so if/when the appropriate time comes.)
Build includes JUnit, FindBugs, CheckStyle, PMD, JavaDoc, a bit of custom documentation generation
Two or three primary JAR artifacts (a main application artifact plus a couple of minimal API JARs for inclusion in a few coupled applications)
Desire to distribute the following distribution artifacts:
a "Full" source+bin tarball with all dependencies resolved, jars and JavaDoc prebuilt
a bin tarball, with just the docs and JavaDoc, jars, and ancillary wrapper scripts, etc.
a "Partner" source+bin, which has the "shared" source that partner developers are likely to look at, and associated testcases
The current structure looks like this:
project-root/
project-root/source
project-root/source/java // main application (depends on -icd)
project-root/source/java-icd // distributable interface code
project-root/source/test // JUnit test sources
project-root/etc // config/data stuff
project-root/doc // pre-formatted docs (release notes, howtos, etc)
project-root/lib // where SVN-managed or Ant-retrieved Jars go
project-root/bin // scripts, etc...
At build time, it expands to include:
build/classes // Compiled classes
build/classes-icd
build/classes-test
build/javadoc
build/javadoc-icd
build/lib // Compiled JAR artifacts
build/reports // PMD, FindBugs, JUnit, etc... output goes here
build/dist // tarballs, zipfiles, doc.jar/src.jar type things, etc..
build/java // Autogenerated .java products
build/build.properties // build and release numbering, etc...
Question 2:
How can I maintain strict separation in the development tree between revision-controlled items and build-time artifacts, WHILE producing a coherent distribution as above, AND allowing me to treat a development tree as an operational/distribution tree during development and testing? In particular, I'm loath to have my <jar> task drop .jar files in the top-level lib directory -- that directory in the developers' trees is inviolable SVN territory. But distributing something for public use with build/lib/*.jar is a confusing annoyance. The same is true of documentation and other built artifacts that we want to appear in a consistent place in the distribution, but for which we don't want developers and users to use completely different directory structures.
Having all the generated products in a separate build/ directory is very nice at development time, but it's an annoying artifact to distribute. For distribution purposes I'd rather have all the JARs sitting in a single lib location; in fact, a structure like the one below makes the most sense. Currently, we build this structure on the fly with ant dist by doing some intricate path manipulations as the .tar.gz and .zip artifacts are built.
What I think the dist should look like:
project-root/
project-root/source // present in only some dists
project-root/etc // same as in development tree
project-root/doc // same as in development tree
project-root/doc/javadoc // from build
project-root/lib // all dependency and built JAR files
project-root/bin // same as in development tree
build.properties // build and release numbering, etc...
This question is narrowly about "how do I maintain clean development and distribution project layouts?" as asked above, but it is also meant to collect info about Java/Ant project layouts in general, and critiques of our particular approach. (Yes, if you think it should be a Community Wiki, I'll make it so...)
My one suggestion would be that the directory tree you distribute should not be the one in version control. Have a script which puts together a dist directory under build, then zips that up. That script can combine source-controlled and derived files to its heart's content. It can also do things like scrub out .svn directories, which you don't want to distribute. If you want to be able to treat development and distributed trees in the same way, simply ensure that the layout of dist is the same as the layout of the development project - the easiest way to do that would be to copy everything except the build subdirectory (and version-control directories, and perhaps things like the Eclipse .project and .classpath).
I suspect you won't like this suggestion. It may be that you are attached to the idea that the distributed file is simply a portable version of your development environment - but I think it's the case that it isn't, it can never be, and it doesn't need to be. If you can accept that idea, you might find my suggestion agreeable.
EDIT: I thought about this a bit more, and looked at some of the scripts I use. I think what I'd do in this situation is to build a separate tree even in development; point the execution environment at project-root/build/app (or perhaps project-root/build if you can) rather than project-root, and then symlink (or copy, if you don't have symlinks) all the necessaries (whether static, from the project root, or derived, from build) into that. Building a distribution may then be as simple as zipping up that tree (with a tool that resolves symlinks, of course). The nice thing about this is that it allows the structure of the executed tree to be very clean - it won't contain source directories, IDE files, build scripts, or other supporting files from inside the project, etc. If you're using Subversion, it will still contain .svn directories inside anything symlinked from the static areas; if you were using Mercurial, it wouldn't contain any .hg stuff.
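Concretely, the first suggestion might look like this Ant target (a sketch only; the excludes and paths are assumptions matching the layout above):

<target name="dist" depends="jar">
  <mkdir dir="build/dist/project-root"/>
  <!-- copy the static, version-controlled parts, minus VCS and IDE files -->
  <copy todir="build/dist/project-root">
    <fileset dir="." excludes="build/**,**/.svn/**,.project,.classpath"/>
  </copy>
  <!-- overlay the derived artifacts where the distribution expects them -->
  <copy todir="build/dist/project-root/lib">
    <fileset dir="build/lib"/>
  </copy>
  <zip destfile="build/dist/project.zip" basedir="build/dist"/>
</target>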
Layout-wise, we use something that has evolved to be very close to the Maven layout (see here). This is a very functional layout which has been used by a lot of people. And, if you want to switch to Maven later, you're all set. We have a couple of variations, the most important of which is that we separate automated unit- and integration-tests.
In terms of mingling sources and build artefacts - I would certainly recommend against it. As you've seen, it messes with IDE indexing and version control and generally makes life difficult.
As far as I can tell you either have to accept this mingling, or copy your dependencies as part of the build and treat the output as a separate project - perhaps constantly open in another IDE window if you need it. The idea of mixing your work 'as a user' versus 'as a producer' of your release package sounds like it would be confusing, anyway.
http://ant.apache.org/ant_in_anger.html
The project contains these subdirectories:
bin common binaries, scripts - put this on the path.
build This is the tree for building; Ant creates it and can empty it in the 'clean' target.
dist Distribution outputs go in here; the directory is created in Ant and clean empties it out
doc Hand crafted documentation
lib Imported Java libraries go in to this directory
src source goes in under this tree in a hierarchy which matches the package names.
There are also some (maybe a bit outdated) general recommendations from Sun/Oracle for a project layout that you may want to take a look at:
Guidelines, Patterns, and Code for End-to-End Java Applications
In Java, if you package the source code (.java files) into the jar along with the classes (.class files), most IDEs, like Eclipse, will show the Javadoc comments during code completion.
IIRC there are a few open-source projects that do this, like JMock.
Let's say I have cleanly separated my API code from implementation code, so that I have something like myproject-api.jar and myproject-impl.jar. Is there any reason why I should not put the source code in my myproject-api.jar?
Because of performance? Size?
Why don't other projects do this?
EDIT: Other than the Maven download problem, will it hurt anything to put my sources into the classes jar, to support as many developers as possible (Maven or not)?
Generally because of distribution reasons:
if you keep separate binaries and sources, you can download only what you need.
For instance:
myproject-api.jar and myproject-impl.jar
myproject-api-src.jar and myproject-impl-src.jar
myproject-api-docs.zip and myproject-impl-docs.zip
Now, m2eclipse (Maven for Eclipse) can download sources automatically as well:
mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
Now, it can also generate the right pom to prevent distribution of the source or javadoc jar when anyone declares a dependency on your jar.
The OP comments:
also can't imagine download size being an issue (I mean, it is 2010, a couple 100k should not be a problem).
Well, actually it (i.e. the size) is a problem.
Maven already suffers from the "downloading half the internet on first build" syndrome.
If it also downloads sources and/or javadocs, that begins to be really tiresome.
Plus, the "distribution" aspect includes deployment: in a webapp server, there is no real advantage to deploying a jar with sources in it.
Finally, if you really need to associate sources with binaries, this SO question on Maven could help.
Using Maven, attach the sources automatically like this:
http://maven.apache.org/plugins/maven-source-plugin/usage.html
and the javadocs like this:
http://maven.apache.org/plugins/maven-javadoc-plugin/jar-mojo.html
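In the POM, that typically amounts to bindings like the following (a sketch; plugin versions are omitted, and jar-no-fork/jar are the usual goals - treat the exact configuration as an assumption to check against the linked docs):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-source-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-sources</id>
      <goals><goal>jar-no-fork</goal></goals>  <!-- attaches the -sources.jar -->
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-javadocs</id>
      <goals><goal>jar</goal></goals>          <!-- attaches the -javadoc.jar -->
    </execution>
  </executions>
</plugin>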
That way they will automatically be picked up by
mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
or by m2eclipse
I want to build a "toJavaCode()" on my model that would generate the required Java source code to generate that model (never mind the reasons, whether it should or shouldn't be done, or the compatibility issues that may occur).
I'm at a loss as to how to test this. I'm using Maven, but generate-sources won't really work for me, since my server needs to be up for proper bulk testing. I do get the server up during the "test" goal, but generate-sources is just too early.
On the other hand, while I can use the built-in compiler (from tools.jar in the JDK) to do this, I don't know how I can pack it into the jar for testing (or load that jar).
Any ideas?
You can use the JavaCompiler API to compile your source files and set up a classloader to load the compiled classes in your test (sample code). tools.jar has to be on the classpath; this is the case if a JDK is used. A sketch follows.
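A minimal sketch of that approach (all paths and the generated class name are hypothetical):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class GeneratedCodeTest {
    public static void main(String[] args) throws Exception {
        // Requires a JDK: returns null on a plain JRE
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        File outDir = new File("target/generated-classes");
        outDir.mkdirs();
        // Compile the source file that toJavaCode() emitted
        int status = compiler.run(null, null, null,
                "-d", outDir.getPath(), "target/generated-sources/Model.java");
        if (status != 0) throw new AssertionError("compilation failed");
        // Load and inspect the freshly compiled class
        try (URLClassLoader loader =
                new URLClassLoader(new URL[] { outDir.toURI().toURL() })) {
            Class<?> model = loader.loadClass("Model");
            System.out.println("compiled and loaded: " + model);
        }
    }
}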
If your generated code is stable for any given class, you could use an annotation processor to generate the source code and compile it in the same javac run as the annotated class.
You can add Ant tasks to your Maven build. That's a way to do something "out of the classical order" during a Maven build, like adding a javac Ant task to Maven's test phase.
(Sorry, I'm as confused as your commenter matt b - but the embedded Ant tasks are your Swiss Army knife here. A sketch follows.)
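The usual vehicle for embedded Ant tasks is the maven-antrun-plugin; a sketch (the phase and paths are assumptions, and older plugin versions use <tasks> instead of <target>):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>process-test-classes</phase>  <!-- runs just before the test phase -->
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <!-- compile the generated sources with plain javac -->
          <javac srcdir="target/generated-sources"
                 destdir="target/test-classes"
                 includeantruntime="false"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>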