We are working on a legacy project, and our first task is to set up a DevOps pipeline for it.
The important thing to note is that we are very new to this area.
We are planning to use Jenkins and SonarQube initially. Let me start with the requirements.
Currently the project is subdivided into multiple projects (not modules).
We have to follow this build structure, as there are no plans to reorganise it into a single multi-module Maven project.
Currently, builds and dependencies are managed manually.
For example, the project is subdivided into five multi-module Maven projects,
say A, B, C, D and E:
1. A and C are completely independent and can be easily built
2. B depends on the artifact (JAR) generated by A and has multiple Maven profiles (say tomcat and websphere; it is a web service module)
3. D depends on the artifact generated by C
4. E depends on A, B and D and has multiple Maven profiles (say tomcat and websphere; it is a web project)
Based on the Jenkins documentation, to handle this scenario we are considering parameterized builds using the "Parameterized Build Plugin" and the "Extended Choice Parameter Plugin". With the help of these plugins we are able to parameterize the profile name, but before each build, the build waits for the profile parameter.
So we are still searching for a good solution to:
1. Keep the dependencies between projects and build all the projects whenever there is a change in SCM (SVN). For that we used "Build whenever a SNAPSHOT dependency is built" and the SCM polling option. Unfortunately, this does not seem to work in our case (we have set an interval of 5 minutes for SCM polling, but no build is triggered by our test commits).
2. Even though we are able to parameterize the profile, this remains a manual step. Is there an option to automate this part too, i.e. so that building with the tomcat profile and the websphere profile happens sequentially?
We are struggling to find a solution that caters for all these core requirements. Any pointers would be greatly appreciated.
Thanks,
San
My Maven knowledge is limited; however, since you haven't received any response yet, I'll try to give some general advice.
There are usually multiple ways to reach a given aim in Jenkins, each with its pros and cons. Choosing the most fitting solution depends on your specific requirements and your environment/setup.
However, you first need something that just works; you can refine it later.
You can get a quick result with the following:
Everything in one job
Configure your Subversion repo (multiple repos are possible) to be checked out into your workspace
Enable the Poll SCM trigger
Build your modules/projects via Execute shell build steps. (Failures can be propagated to the job result by using exit 1 in an Execute shell build step; see the sketch below.)
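A minimal sketch of such an Execute shell step, assuming the five projects are checked out side by side in the workspace (directory names and profile names are taken from the question, everything else is an assumption):
set -e                                  # abort the step on the first failing command
(cd A && mvn clean install)
(cd C && mvn clean install)
(cd B && mvn clean install -Ptomcat)    # repeat with -Pwebsphere for the other profile
(cd D && mvn clean install)
(cd E && mvn clean install -Ptomcat)
With set -e, any failing Maven invocation makes the step exit non-zero, which Jenkins records as a failed build; an explicit exit 1 is only needed when you test conditions yourself.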
However, keep in mind that this will prevent advanced functionality on a per-project/module basis, such as mail notifications to the developer to blame, or metric trends such as warnings or static code analysis.
The following solution is easier to extend in that direction.
Wrapper job around your various build jobs
Use the build step Trigger/call builds on other projects to build A; archive the needed artifacts
Use the build step Trigger/call builds on other projects with a parameter such as tomcat to build the B tomcat version; use the Copy Artifact Plugin to copy over the JAR from A
...
Use the build step Trigger/call builds on other projects with a parameter such as tomcat to build the E tomcat version. Use the Copy Artifact Plugin to copy all needed artifacts; you can specify the parameter there if you need the artifact of, e.g., the B tomcat version
In this setup, monitoring the SVN is an issue: if you trigger from SCM polling, the code is checked out into your wrapper workspace, whereas you don't actually need it checked out there but in your build jobs.
Possible solution: share the workspace between the wrapper job and your build jobs, so the duplicate checkouts in the build jobs will find the files already at the right revision. However, you then need to make sure the downstream jobs are executed on the same machine (there are plugins to do so).
Or, even more elegant: use a post-commit hook (see here, section Post-commit hook) on your SVN to notify Jenkins of changes.
Edit: For the future, it's worth looking into the Pipeline Plugin and its documentation for more complex builds; this is the engine for the upcoming Jenkins version 2.0, see here.
I would create five different jobs, one each for A, B, C, D and E.
As you mentioned, A and C would be standalone jobs, so I would just do mvn clean install/package/verify based on your needs.
For B, I would first build A and then invoke another Maven target in the build to build B.
For D, I would first build C and then build D.
Finally, for E, I would invoke top-level Maven targets five times: A, B, C, D and finally E.
Edit:
Jenkins 2 is out and has built-in support for pipelines.
A few pointers for your requirements:
"built the whole projects if there is any change in SCM"
Although Poll SCM usually requires less work, the proper way to do it is to use SVN hooks.
The solution works as follows:
First you enable the Trigger builds remotely feature and enter a random token in Authentication Token.
This allows you to trigger builds remotely using the Jenkins REST API (http[s]://JENKINS_URL/job/JOB_NAME/build?token=TOKEN).
Then you create an SVN hook (a script that runs whenever you commit) which triggers the build by sending a request to that URL (using curl, wget, Python, ...).
There are a lot of manuals on how to create SVN hooks; here's the first result for "SVN hooks" on Google.
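As a rough illustration, a post-commit hook (hooks/post-commit inside the repository on the SVN server, made executable) can be as small as this; the Jenkins URL, job name and token are placeholders you would replace with your own:
#!/bin/sh
# Subversion calls this hook with the repository path and the new revision number
REPOS="$1"   # not used here, but available
REV="$2"
# fire-and-forget request to Jenkins' remote trigger URL
curl -s "https://jenkins.example.com/job/MyJob/build?token=MY_TOKEN" >/dev/null 2>&1 &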
"keep the dependency between projects"
I would create a different Jenkins Job for each project separately, then make sure builds are executed in the required order.
I think the best way to order your builds (dependencies) is to create a Build Pipeline using the Pipeline Plugin (previously known as Workflow Plugin).
There is a lot to explain here, so it's better you read on your own. You can start here.
There are also other (simpler) solutions, like the Build Flow Plugin or the Parameterized Trigger Plugin, which can help create dependencies between builds, but I think Pipeline is the newest and is considered a best practice (it's definitely the most advanced solution).
Still, having said that, if you feel Pipeline is overkill for you, go for the alternatives.
I would recommend making sure each build does mvn install to the same local repo, and also deploys the artifact to Artifactory (hopefully you have one).
Automate parameterized builds: "build with tomcat profile and websphere profile"
To do that you'll need to create parameterized builds.
That's pretty easy to do: you just check This build is parameterized in your build config and add an MVN_PROFILE string/choice parameter.
After that you can trigger each build several times, with different parameters, using any one of the plugins mentioned in the previous bullet.
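For illustration, the job's Maven step could simply pass the parameter through, and a small script (in an upstream job, or appended to the SVN hook above) could then fire the job once per profile; the job name, token and parameter name below are assumptions:
# inside the parameterized job: build with whichever profile was passed in
mvn clean install -P "$MVN_PROFILE"
# outside the job: queue the tomcat and the websphere build one after the other
for profile in tomcat websphere; do
  curl -s "https://jenkins.example.com/job/B-build/buildWithParameters?token=MY_TOKEN&MVN_PROFILE=$profile"
done
Because a Jenkins job does not run two of its builds concurrently by default, the two queued builds execute sequentially.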
Extra Tip:
While hacking your way through this, consider using the Job Configuration History Plugin; it can help you review and revert changes made to the configuration.
Good luck, hope this helps :)
I would consider a slightly different approach to fully decouple the projects.
If you are able to set up an internal artifact repository (such as Artifactory), then in the Maven build I would treat each of the dependencies as a third-party library, exactly as is done with any other external libraries you use.
This way, each such project can be built separately and stored in the repository, and when a dependent project is built it will just pull the right version as specified in its POM file.
This way you'll have a separate build process for each of the projects, and only the relevant (i.e. changed) projects will be built.
We have two Git repositories, Platform and US (we have other geo-specific ones as well, which is why they are split, but they are not necessarily relevant here). US depends on Platform.
We are using git-flow (meaning new features live in their own branches like feature/some-product, the develop branch is somewhat more stable and represents QA-ready builds, and the master branch is stable and used for releases). (If a feature has both Platform and US parts, there will be a branch with the same name in each.) We decided that the Jenkins jobs for the features should not run mvn deploy, because we don't want to publish them to the snapshot repository, and probably shouldn't run mvn install, because we don't want a different feature branch to grab it from Jenkins's local repo (though we are less sure about this). We believe they should only make sure everything compiles and that the unit tests pass (mvn verify).
This is where the problem comes in: because these are separate Git repositories and we are not doing anything with the compiled JAR (install or deploy),
how can we safely expose the compiled JARs from the Platform job to the US job without exposing them to other developers or jobs (or is this even a concern if we only do mvn install), or
how can one Jenkins job build Platform and US for a specific branch together?
If we only had a single actively developed branch (or we were using Subversion), this would not be an issue.
Some ideas we have (and concerns with each):
For feature branches use a different version (e.g., 8.1.0-SNAPSHOT-some-product).
This seems like a lot of work for every feature branch.
It seems like it'd clog up the local repo with "stale" jars, and we would need to worry about purging them.
Somehow use git submodules to check out Platform's and US's feature/some-product and either use mvn verify --reactor or a simple POM file with the top-level projects as modules.
How to make Jenkins add the submodules?
If the submodules were already there, there would need to be a whole git repo for this, which seems redundant.
--reactor doesn't always work.
How to supply the pom file?
Just do mvn install.
feature/other-thing may exist only in US, so after Platform feature/some-product publishes to Jenkins's local repository (which may be very different from Platform develop, which US feature/other-thing would normally be built against), it would (we think) cause US feature/other-thing to fail (or pass!) in a false sense (supposing that, if it were compiled against Platform develop, it could get a different result).
I have not had to address this issue personally, but here are my thoughts on how I would look at it:
If you MUST have only one job for both repositories (a bad idea), you can use the Parameterized Build Plugin to pass in the text string "US" or "Platform" and have logic in a shell script that checks out the relevant repo's branch.
HOWEVER, this eliminates the ability to have repo polling kick off the build. You would have to set up a build schedule on a cron, and you would get a new build no matter what, even if the repo hasn't changed (unless your batch/shell script is smart enough to check for changes).
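A rough sketch of what that shell step could look like, assuming REPO and BRANCH are job parameters and the repository URLs are placeholders:
#!/bin/sh
# REPO ("Platform" or "US") and BRANCH are assumed to be Jenkins job parameters
case "$REPO" in
  Platform) URL=git@example.com:org/platform.git ;;
  US)       URL=git@example.com:org/us.git ;;
  *)        echo "unknown repo: $REPO"; exit 1 ;;
esac
rm -rf checkout
git clone --branch "$BRANCH" --depth 1 "$URL" checkout
cd checkout && mvn verify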
I don't see any reason NOT to have two separate Jenkins jobs, one for each repository.
If one job needs access to the .jars (i.e., the build artifacts), then you can always reference the artifacts of any other job from that job's "LATEST" URL on the Jenkins server. Be sure the jobs specify which artifacts need to be archived.
The way I ended up solving this is by using the Maven Versions Plugin. I had to make sure all the modules were managed dependencies in the top-level project, but that may have been a different issue. Also, and of this I am sure, the US project will need to explicitly declare its version even if it is the same as the parent's.
Both jobs poll Git, but the Platform job also triggers US if it builds successfully.
The way the Versions Plugin works requires you to do this in two steps. Add two "Invoke top-level Maven targets" build steps to the job; the second one is the clean deploy. The first one is slightly different for Platform and US.
Platform: mvn versions:set -DnewVersion=yourBranchName-${project.version}.
US: mvn versions:update-parent -DparentVersion=yourBranchName-${project.version} versions:set -DnewVersion=yourBranchName-${project.version}
If the branch only exists in the US repository, then obviously don't create the Platform one, and the US job uses the same command as the Platform one would have used.
One final issue I ran into: originally I had the new version as ${project.version}-yourBranchName, but the repository the job was deploying to only accepted snapshots, and because the version didn't end in -SNAPSHOT it returned error code 400.
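Putting those pieces together, the two build steps for the US job look roughly like this when written as shell commands (the branch name is a placeholder; the single quotes keep the shell from expanding ${project.version} before Maven sees it):
# step 1: point the parent at the branch-specific Platform version
# and give this project a matching branch-specific version
mvn versions:update-parent -DparentVersion='yourBranchName-${project.version}' versions:set -DnewVersion='yourBranchName-${project.version}'
# step 2: build and publish; the new version still ends in -SNAPSHOT,
# so a snapshots-only repository accepts it
mvn clean deploy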
I have a Maven 3 multi-module project (~50 modules) which is stored in Git. Multiple developers are working on this code and building it, and we also have automated build machines that run cold builds on every push.
Most individual changes alter code in a fairly small number of modules, so it's a waste of time to rebuild the entire source tree for every change. However, I still want the final result of running the parent project build to be the same as if it had built the entire codebase. And I don't want to start manually versioning modules, as this would become a nightmare of criss-crossing version updates.
What I would like to do is add a plugin which intercepts some step in build or install, and takes a hash of the module contents (ideally pulled from Git), then looks in a shared binary repository for an artifact stored under that hash. If one is found, it uses that artifact and doesn't even execute the full build. If it finds nothing in the cache, it performs the build as normal, then stores its artifact in the cache. It would also be good to rebuild any modules which have dependencies (direct or transitive) which themselves had a cache miss.
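For what it's worth, here is a very rough shell approximation of that idea outside of Maven, just to illustrate the intended behaviour; the module name, cache location and use of Git's tree hash as the cache key are all assumptions:
MODULE=my-module
CACHE=/shared/build-cache
HASH=$(git rev-parse "HEAD:$MODULE")   # hash of the module's directory tree in Git
if [ -f "$CACHE/$MODULE-$HASH.jar" ]; then
  # cache hit: reuse the previously built artifact instead of rebuilding
  mkdir -p "$MODULE/target" && cp "$CACHE/$MODULE-$HASH.jar" "$MODULE/target/"
else
  # cache miss: build the module (and its in-reactor dependencies), then store the result
  mvn -pl "$MODULE" -am install
  cp "$MODULE"/target/*.jar "$CACHE/$MODULE-$HASH.jar"
fi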
Is there anything out there which does anything like this already? If not, what would be the cleanest way to go about adding it to Maven? It seems like plugins might be able to accomplish it, but for a couple pieces I'm having trouble finding the right way to attach to Maven. Specifically:
How can you intercept the "install" goal to check the cache, and only invoke the module's 'native' install goal on a cache miss?
How should a plugin pass state from one module to another regarding which cache misses have occurred in order to force rebuilds of dependencies with changes?
I'm also open to completely different ways to achieve the same end result (fewer redundant builds) although the more drastic the solution the less value it has for me in the near term.
I have previously implemented a more complicated solution with artifact version manipulation and deployment to a private Maven repository. However, I think this will fit your needs better and is somewhat simpler:
Split your build into multiple builds (e.g., with a single build per module, using Maven's -pl argument).
Set up parent-child relationships between these builds. (Bamboo even has additional support for figuring out Maven dependencies, but I'm not sure how it works.)
Configure Maven's settings.xml to use a different local repository location: specify a new directory inside your build working directory (see the docs: https://maven.apache.org/guides/mini/guide-configuring-maven.html, and the sketch after this list).
Use the mvn install goal to ensure newly built artifacts are added to the local repository.
Use Bamboo artifact sharing to expose built artifacts from the local repository; you should probably filter this to include only the package(s) you're interested in.
Set dependent builds to download all artifacts from their parent builds and put them into the proper subdirectory of the local repository (which is customized to live inside the working directory).
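As a minimal sketch of one such per-module build step (the module name and path are assumptions; passing -Dmaven.repo.local on the command line has the same effect as overriding localRepository in settings.xml):
# build a single module against a local repository kept inside the working directory
mvn -pl module-b -Dmaven.repo.local="$PWD/.repository" clean install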
This should even work for feature branch builds thanks to the way Bamboo handles parent-child relations for branch builds.
Note that this implies Maven will re-download all other dependencies, so you should use a private Maven proxy repository on the local network, such as Artifactory or Nexus.
If you want, I can also describe the more complicated scenario I've already implemented, which involves modifying artifact versions and deploying to a private Maven repository.
The Jenkins Maven job type (provided by the Maven Integration plugin) allows you to manage/minimize dependent builds, triggering a build:
whenever a SNAPSHOT dependency is built (determined by Maven)
after other projects are built (manually, via Jenkins jobs)
And if you do a mvn deploy to save the build into your corporate Maven repo, then you don't have to worry about dependencies when builds run on slave Jenkins machines. The result is that no module is ever built unless it or one of its dependencies has changed.
Hopefully you can apply these principles to a solution with Bamboo.
Environment:
Around a dozen services: both WARs and JARs (run with the Tanuki wrapper)
Each service is being deployed to development, staging, and production: the environments use different web.xml files, Spring profiles (set on the Tanuki wrapper),...
We want to store the artifacts on AWS S3 so they can be easily fetched by our EC2 instances: some services run on multiple machines, AutoScaling instances automatically fetch the latest version once they boot, development artifacts should automatically be deployed once they are updated,...
Our current deployment process looks like this:
Jenkins builds and tests our code
Once we are satisfied, we do a release with the Artifactory plugin (only for production deployments; development and staging artifacts are based on plain Jenkins builds)
We use the Promoted Builds plugin with 3 different promotion jobs for each project (development, staging, production) to build the artifact for the desired environment and then upload it to S3
This works mostly OK; however, we've run into multiple annoyances:
The Artifactory plugin cannot tag and update the source code because it is unable to commit to the repository (this might be specific to CloudBees, our Jenkins provider). Never mind, we can do that manually.
The Jenkins S3 plugin doesn't work properly with multiple upload targets (we are using one per environment): it tries to upload to all of them and duplicates the settings when you save the configuration (JENKINS-18563). Never mind, we are using a shell script for the upload, which works fine.
The promotion jobs will sometimes fail because they can run on a different instance than the original build, resulting either in a failed build (because the files are missing) or in an outdated build (since an old version is being used). Again, this might happen due to the specific setup CloudBees is providing. This can be fixed by running the build again and hopefully having the promotion job run on the same instance as its build this time (which is pure luck in my understanding).
We've been advised by CloudBees to do a code checkout before running the promotion job, since the workspace is ephemeral and it's not guaranteed that it exists or is up to date. However, shell scripts in promotion jobs don't seem to honor environment variables, as stated in another Stack Overflow thread, even though they are linked below the textarea field. So svn checkout ${SVN_URL} -r ${SVN_REVISION} doesn't work.
I'm pretty sure this can be fixed by some workaround, but surely I must be doing something terribly wrong. I assume many others are deploying their applications in a similar fashion and there must be a better way - without various workarounds and annoyances.
Thanks for any pointers and reading everything to the end!
The biggest issue you face is that you actually want to do a build as part of the promotion. The promoted builds plugin is designed to do a few small actions, primarily on the archived artifacts of your build, or else to perform tagging or other such actions.
The design ensures that it gets a workspace on a slave. But (and this is a general Jenkins thing, not something CloudBees-specific) the workspace you get need not be the workspace that was used for the build you are trying to promote. It could be:
An empty workspace
The workspace of the most recent build of the project (which may be a newer build than you are trying to promote)
The workspace of an older build of the project (when you cannot get onto the slave that has the most recent build of the project)
Now which of these you get entirely depends on the load that your Jenkins cluster is under. When running on CloudBees, you are usually most likely to encounter either of the first two situations above.
The actions you place in your promotion should therefore be "self-sufficient". So, for example, they will copy the archived artifacts into a specific directory and then use those copied artifacts to do their thing. Or they will trigger a downstream build, passing through parameters from the build that is being promoted. Or they will perform some action on a remote server without using any local state (e.g. flipping the switch in a blue-green deployment).
In your case, you are fighting on multiple fronts. The second front is that you are fighting Maven.
Maven does not like it when an active build profile produces a different, non-equivalent artifact.
Maven can probably live with the release profile producing a version of the artifact with JavaScript minified and perhaps debug info stripped (though it would be better if it could be avoided)... but that is because the two artifacts are equivalent. A webapp with JavaScript minified should work just as well as one that is not minified (unless minification introduces bugs)
You use profiles to build completely different artifacts. This is bad as Maven does not store the active profile as part of the information that gets deployed to the Maven repository. Thus when you declare a dependency on one of the artifacts, you do not know which profile was active when that artifact was built. A common side-effect of this is that you end up with split-brain artifacts, where one part of a composite artifact is talking to the development DB and the other part is talking to the production DB. You can tell there is this bad code smell when developers routinely run mvn clean install -Pprod && mvn clean install -Pprod when they want to switch to building a production build because they have been burned by split-brain artifacts in the past. This is a symptom of bad design, not an issue with Maven (other than it would be nice to be able to create architecture specific builds of Maven artifacts... but they can pose their own issues)
The “Maven way” to handle the creation of different artifacts for different deployment environments is to use a separate module to repack the “generic” artifact. As is typical of Maven's subtle approach to handling bad architecture, you will need to add repack modules for each environment you want to target... which discourages having lots of different target environments... (i.e. if you have 10 webapps/services to deploy and three target deployment environments you will have 10 generic modules and 3x10 target specific modules... giving a 40 module build) and hopefully encourages you to find a way to have the artifact self-discover its correct configuration when deployed.
I suspect that the safest way to hack a solution is to rely on copying archived artifacts.
During the main build, create a script that will check out the same revision as is being built, e.g.:
echo '#!/usr/bin/env sh' > checkout.sh
echo "svn revert . -R" >> checkout.sh
echo "svn update --force -r ${SVN_REVISION}" >> checkout.sh
Though you may want to write something a little more robust, e.g. ensuring that you are in an SVN working copy, ensuring that the working copy is using the correct remote URI, removing any files that are not needed, etc.
Ensure that the script you create is archived.
Then, at the beginning of your promotion process, copy the archived checkout.sh into the workspace and run it. You should then be able to do all your subsequent promotion steps as before.
A better solution would be to create build jobs for each of the promotion processes and pass the SVN revision through as a build parameter (but this will require more work to set up, as you already have the promotion processes in place).
The best solution is to fix your architecture so that you produce artifacts that discover their correct configuration from their target environment, and thus you can just deploy the same artifact to each required environment without having to rebuild artifacts at all.
My problem is that I have written a Maven plugin to deploy the artifact to a user-specified location. I'm now trying to write another Maven plugin that uses this deployed artifact, changes some things and zips it again.
I want to write the second plugin so that it uses the first plugin to find out where the artifact was deployed.
I don't know how to access this information from the first plugin.
I would agree with @Barend that if you can afford to make the changes before deploying, that could be the best strategy.
If you cannot do that, you can follow the strategy of a plugin like the Maven Release Plugin. The Release Plugin runs in two phases, where the second run needs the output of the first. It manages this by keeping a temporary properties file in the project directory which carries information such as the tag name, the SNAPSHOT version name, etc.
You could use the same approach with your plugins. Just remember that your plugin will be sort of transactional, in that it expects the other goal to have run before it can do its work.
It seems to me that the easiest workaround is to reverse the order in which the plugins run.
Have Plugin B run first, using the known location under target/ to modify the artifact and then run Plugin A, deploying the modified artifact to the configured location.
If that's not an option, I suggest you simply duplicate the configuration value (so that both plugins are told about the new location in their <configuration> element). This keeps both plugins independent, which is what Maven assumes them to be.
A last option is to make Plugin B parse the entire POM and extract the information from Plugin A's <configuration> element, but I really can't recommend this. If you go this way, the two plugins are so closely intertwined that they're really just one plugin. This is poor design, violates the principle of least surprise and might cause nasty configuration problems down the line.
I am working in a small team (3 people) on several modules (about 10 currently). The compilation, integration and management of build versions are becoming more and more tedious.
I am looking for a good build/integration tool to replace or complement Ant.
Here is a description of our current development environment:
- Several modules depending on each other and on third-party JARs
- Some export JARs, some export WARs, some export standalone runnable JARs (fat JARs)
- Javadoc for all of them
- We work with Eclipse
- A custom Ant script for each module. There is a lot of redundant information between the Eclipse configuration and the Ant scripts. For example, for the standalone fat JAR we have listed all the recursive dependencies, whereas ideally these could simply be imported from the Eclipse configuration.
- The source code is versioned using SVN
Here is what I would like a perfect integration tool to do for me:
Automate the releases and versioning of modules. Ideally, the integration tool should detect whether a new version is needed. For example, if I want to release a project A that depends on a project B, and I have made small local changes to project B, then the integration tool should first release a new version of B as well and base A on it.
Integrate strongly with Eclipse, so that it can get the dependencies between modules and third-party libs from the Eclipse configuration. By the way, I would like to keep configuring the build path with Eclipse without having to update some other ".xml" stuff. I saw that Gradle can generate Eclipse project files from its configuration, but the opposite direction would be great.
Enable "live" and transparent development on local projects. I mean that I often make small changes to the core/common projects while developing the main/"leaf" projects. I would like my changes to the core projects to be immediately available to the leaf projects, without needing to publish (even locally) the JARs of my core projects.
Store all versions of the releases of my modules on an external server. The simplest option (shared folder/WebDAV) would be best. A nice web page with the list of modules and delivered artifacts would be great too.
I have looked around at many things, from Ant4eclipse (to integrate the Eclipse configuration into my Ant script) to the Maven/Ivy/Gradle tools.
I am a bit confused.
Here is what I have understood so far:
- Maven is a great/big tool, but it is somewhat rigid and obliges you to bend to its structure and concepts. It is based on description rather than scripting. If you stray off the path, you have to develop your own plugins.
- Ivy is less powerful than Maven; it handles less but is more flexible.
- Gradle is in between. It is general purpose: it enables scripting as well as "convention-based" configuration, and it integrates with and extends Ant.
So at this point I am looking for actual testimonials from real users.
What tools do you use? How? Do you have the same needs as me?
Does it ease your life or get into the way ?
Are there some sample use cases or workspace skeletons out there that I could use as a starting point to see what these tools are capable of?
Sorry for the length of this message.
And thanks in advance for your advice.
Kind regards,
Raphael
Automate the releases and versioning of modules (...)
The concepts of versioning and repository are built-in with Maven and they could fit here.
Maven supports SNAPSHOT dependencies. When using a snapshot, Maven will periodically try to download the latest available snapshot from a repository when you run a build. SNAPSHOTs are typically used while a project is under active development.
Maven 2 also supports version ranges (I do not really recommend them, but that's another story), which allow you, for example, to configure A to depend on version [4.0,) of B (any version greater than or equal to 4.0). If you build and release a new version of B, A would use it.
Integrate strongly with eclipse
The m2eclipse plugin provides bi-directional synchronization with Eclipse.
Enable a "live" and transparent development on local projects.
The m2eclipse plugin supports "workspace resolution": if project A depend on project B and if project B is in the workspace, you can configure A to depend on B sources and not on B.jar (that's the default mode if I'm not wrong). So a change on B sources would be directly visible, without the need to build B.jar.
Store all versions of the releases of my module on an external server.
As mentioned earlier, this is actually a central concept of Maven (you don't even have the choice) and deploying through file:// or dav:// are both supported.
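For instance, here is a minimal sketch of deploying to a shared folder via the deploy plugin's altDeploymentRepository property, without touching <distributionManagement> (the repository id and path are placeholders):
# deploy the current module to a file-based repository on a shared mount
mvn deploy -DaltDeploymentRepository=internal.repo::default::file:///mnt/shared/maven-repo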
To sum up, Maven is (probably) not the only candidate but I'm sure it would fit:
Your project isn't that exotic or complex; there is nothing scary in your description (some refactoring of the structure will probably be required, but that shouldn't be a big deal).
Maven also brings a workflow based on best practices.
m2eclipse provides strong integration with the IDE.
But Maven does have a learning curve.
CI tools? To me, there's only one: Hudson CI.
I once set up a software development environment for Java, with these components:
Eclipse IDE
Mercurial
Bugzilla
Maven
Nexus
Hudson CI
and some Apache, MySQL, PHP, Perl, Python, ... for integration.
Hudson was not integrated with Eclipse, and that was on purpose, because I wanted to build on a separate server. For all the other tools I had perfect cross-integration (like Mylyn in Eclipse to talk to Bugzilla, m2eclipse for using Maven from Eclipse, a lot of plugins for Hudson, ...).
We've started to integrate Gradle into our build process, and I can add to the answers already posted that Gradle would also work. Your assumptions are mostly correct: Gradle is more off the cuff, but it is powerful and allows for scripting and such within the build itself. It seems that most things Maven can do, Gradle does as well.
Now for your individual points:
Versioning: Gradle supports dependency maps and versioning, and if you add a CI server you can trigger automated/dependent builds. For example, almost all of our 'deliverables' are .wars, but we have several code libs (.jars) and one executable .jar in development. One configuration is to make the WARs and the "fat jar" dependent on the shared code libs. Then, when the shared libs are updated, bump the versions on the shared libs, test the consuming projects, then use Hudson's ability to fire dependent projects to redeploy those. There are other ways, but that seems to work best for us, for now.
Integrate strongly with Eclipse: you're right, Gradle can generate the Eclipse files. We tend to only use the eclipseCp task (to update .classpath) once we get going, as only the classpath needs to change. It's kind of quirky (it grabs your default JRE, so make sure it's right, and it doesn't add exported="true" if you need it), but it gets you 99% of the way there.
Enable "live" and transparent development on local projects: this is one I'm not sure about. I've only hacked around Gradle in this case, by removing the artifact in the consuming project and marking the shared project as such in Eclipse, then reverting afterwards.
Store all versions of the releases of my modules on an external server: simple, and many approaches are supported, similar to Maven.
As for examples, the Gradle docs are good, as are the example projects that come with the full zip. They'll get you up and running fairly quickly.
Have a look at Ant Ivy. http://ant.apache.org/ivy/
There are no silver bullets, but in my experience Maven is a great project management tool. Personally, I like to use a combination of Subversion (for version control), Maven (for project/build management) and Hudson (for continuous build/integration).
I find the convention brought by Maven really useful for context switching, and great for dependency management. It can be frustrating if JARs aren't in the repositories, but you can install them locally, and when you're ready you can host your own private repository which mirrors other places. I have had a good experience using Nexus by Sonatype (http://www.sonatype.com/). They also provide an excellent free book to get you started.
It might seem like overkill now, but setting up a good build/test/integrate/release environment now will pay dividends later. It is always harder to retrofit, and it's something you can replicate easily.
Lastly, I happen to prefer the NetBeans integration for Maven, but that's just me :)
Some of your topics are part of deployment and release management.
You could check out a product like Xebia DeployIt
(it has a personal edition which is free).