Question
What is the best way to carry artifacts (jar, class, war) between projects when using Docker containers in the CI phase?
Let me explain my issue in detail; please read to the end... =)
GitLab project1
unit tests
etc...
package
GitLab project2
unit tests
etc...
build (failing)
here I need an artifact (jar) generated in project1
Current scenario / comments
I'm using Docker, so each .gitlab-ci.yml runs its jobs in independent containers
Everything is working fine in project1
If I use the "shell" executor instead of Docker in my .gitlab-ci.yml, I can keep the jar file from project1 on disk and use it when project2 kicks off its build
Today my trigger that calls project2 when project1 finishes is working nicely
My artifact is not an RPM, so I won't add it to my package repository
Possible solutions
I could commit the artifact of project1 and check it out when building project2
I need to check whether GitLab's cache feature is designed for this purpose (GitLab 8.2.1, "How to use cache in .gitlab-ci.yml")
In GitLab Silver and Premium, the $CI_JOB_TOKEN variable is available, which allows the following .gitlab-ci.yml snippet:
build_submodule:
  image: debian
  stage: test
  script:
    - apt update && apt install -y unzip
    - curl --location --output artifacts.zip "https://gitlab.example.com/api/v4/projects/1/jobs/artifacts/master/download?job=test&job_token=$CI_JOB_TOKEN"
    - unzip artifacts.zip
  only:
    - tags
However, if you do not have a Silver or higher GitLab subscription but rely on the free tier, it is also possible to use the API and pipeline triggers.
Let's assume we have project A building app.jar which is needed by project B.
First, you will need an API Token.
Go to Profile settings/Access Tokens page to create one, then store it as a variable in project B. In my example it's GITLAB_API_TOKEN.
In the CI / CD settings of project B add a new trigger, for example "Project A built". This will give you a token which you can copy.
Open project A's .gitlab-ci.yml and add a trigger_build: section using the curl command shown in project B's CI / CD settings trigger section.
Project A:
trigger_build:
  stage: deploy
  script:
    - "curl -X POST -F token=TOKEN -F ref=REF_NAME https://gitlab.example.com/api/v4/projects/${PROJECT_B_ID}/trigger/pipeline"
Replace TOKEN with that trigger token (better: store it as a variable in project A and use token=${TRIGGER_TOKEN_PROJECT_B} or similar), and replace REF_NAME with your branch (e.g. master).
Then, in project B, we can write a section which only builds on triggers and retrieves the artifacts.
Project B:
download:
  stage: deploy
  only:
    - triggers
  script:
    - "curl -O --header 'PRIVATE-TOKEN: ${GITLAB_API_TOKEN}' https://gitlab.example.com/api/v4/projects/${PROJECT_A_ID}/jobs/${REMOTE_JOB_ID}/artifacts/${REMOTE_FILENAME}"
If you know the artifact path, then you can replace ${REMOTE_FILENAME} with it, for example build/app.jar. The project ID can be found in the CI / CD settings.
I extended the script in project A to pass the remaining information as documented in the trigger settings section:
Add variables[VARIABLE]=VALUE to an API request. Variable values can be used to distinguish between triggered pipelines and normal pipelines.
So the trigger passes the REMOTE_JOB_ID and the REMOTE_FILENAME, but of course you can modify this as you need it:
curl -X POST \
  -F token=TOKEN \
  -F ref=REF_NAME \
  -F "variables[REMOTE_FILENAME]=build/app.jar" \
  -F "variables[REMOTE_JOB_ID]=${CI_JOB_ID}" \
  https://gitlab.example.com/api/v4/projects/${PROJECT_B_ID}/trigger/pipeline
Hello, you should take a look at the script get-last-successful-build-artifact.sh developed by morph027:
https://gitlab.com/morph027/gitlab-ci-helpers
This script allows you to download an artifact and unzip it in the project root. It uses the GitLab API to retrieve the latest successful build and download the corresponding artifact. You can combine multiple artifacts and unzip them wherever you want just by tweaking the script a little.
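For illustration, the core of such a helper boils down to a single API call plus an unzip. This is only a hedged sketch: the host, project id and job name are placeholders, and GITLAB_API_TOKEN is assumed to be stored as a secret variable.
curl --header "PRIVATE-TOKEN: ${GITLAB_API_TOKEN}" \
     --output artifacts.zip \
     "https://gitlab.example.com/api/v4/projects/${PROJECT_A_ID}/jobs/artifacts/master/download?job=package"
unzip -o artifacts.zip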
I'm also currently starting a PHP library to handle build artifacts, but it's at a very early stage and tied to Laravel for the moment.
For the moment there is no easy way to handle artifact sharing between projects; you have to build your own solution using these tools.
I think using the shell executor is not the right solution; it's dangerous because you can't verify the files left on the server and reused during the build!
Hope this helps :)
carry artifacts (jar, class, war) among projects
That should be what the Package Registry is for.
With GitLab 13.3 (August 2020), it is now available for free!
Package Registry now available in Core
A year and a half ago, we expanded our support for Java projects and developers by building Maven support directly into GitLab. Our goal was to provide a standardized way to share packages and have version control across projects.
Since then, we’ve invested to build out the Package team further while working with our customers and community to better understand your use cases. We also added support for Node, C#/.NET, C/C++, Python, PHP, and Go developers.
Your increased adoption, usage, and contributions to these features have allowed us to expand our vision to a more comprehensive solution, integrated into our single application, which supports package management for all commonly-used languages and binary formats.
This goal could not have been achieved without the explicit support of the GitLab community.
As part of GitLab’s stewardship promises, we are excited to announce that the basic functionality for each package manager format is now available in the GitLab Core Edition.
This means that if you use npm, Maven, NuGet, Conan, PyPI, Composer or Go modules, you will be able to:
Use GitLab as a private (or public) package registry
Authenticate using your GitLab credentials, personal access token, or job token
Publish packages to GitLab
Install packages from GitLab
Search for packages hosted on GitLab
Access an easy-to-use UI that displays package details and metadata and allows you to download any relevant files
Ensure that your contributions are available for ALL GitLab users
We look forward to hearing your feedback and continuing to improve these features with all of our users.
See Documentation and Issue.
See this video.
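As a rough illustration (not part of the announcement above): project1 could publish its jar to the GitLab Maven Package Registry, and project2 could then consume it as a normal Maven dependency. The job below is a hedged sketch; it assumes a ci_settings.xml that authenticates with the CI job token and a distributionManagement section in pom.xml pointing at the project's GitLab Maven endpoint, as described in the GitLab Maven registry documentation.
deploy_jar:
  image: maven:3-openjdk-11
  stage: deploy
  script:
    # pushes the jar to ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/maven
    - mvn deploy -s ci_settings.xml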
Cool, found my snippet being referenced here ;)
Is it possible to use get-last-successful-build-artifact.sh without a private token (in a world-readable repository)? E.g. to share an artifact download command without exposing your token.
Yes, just add it as a secret variable in project settings -> pipelines -> secret variables.
As of this writing, artifacts cannot be shared across projects, only within a pipeline. See https://docs.gitlab.com/ee/ci/yaml/README.html#artifacts
However, there is an open feature request to enable this, which is not yet implemented:
https://gitlab.com/gitlab-org/gitlab-ce/issues/14728
Related
I am planning to use Gradle as the build tool, with Docker for containerizing Spring Boot applications.
I currently have one question regarding best practices/pros/cons:
a. from a general perspective, as a best practice
b. from a CI/CD perspective
I have understood that I can do it in three ways:
1. Do the Gradle build by running the command on your host machine, then dockerize your Spring Boot app
e.g.:
./gradlew build
docker build -f dockerfile...
2. Do the Gradle build inside the Dockerfile itself.
For option 2, I got the inspiration from these guys at DockerCon (https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/).
eg:
FROM gradle:4.7.0-jdk8-alpine AS build
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle build --no-daemon
FROM openjdk:8-jre-slim
EXPOSE 8080
RUN mkdir /app
COPY --from=build /home/gradle/src/build/libs/*.jar /app/spring-boot-application.jar
ENTRYPOINT ["java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-Djava.security.egd=file:/dev/./urandom","-jar","/app/spring-boot-application.jar"]
There are other articles as well
https://codefresh.io/docs/docs/learn-by-example/java/gradle/
https://codefresh.io/docker-tutorial/java_docker_pipeline/
Here I would also like to point out for option 2 that:
a. I plan to use Docker's mount option instead of rebuilding the image again and again to reflect local changes.
b. I plan to leverage multi-stage builds, so that we can discard the heavy Gradle stage and keep only the jar in the final image.
3. Use buildpacks, Jib or the Spring Boot build-image command.
Any ideas? If anyone has experienced any pros or cons in this area, please share.
After almost 7 years of building Docker images from Gradle, long before Docker became a commonplace thing, I’ve never done option 2. I’ve done options 1 and 3, primarily 3.
The problem with #1 is that you lose the information from your Gradle project that can be used to build the image, like the location of the jar file and the project name (there are several others). You end up redefining them on the command line, and the result could be very different.
The problem with #2 is the loss of developer productivity and conflating responsibilities. I can’t imagine building a Docker image every time I made a change to the code. Gradle is a build tool, Docker is a delivery mechanism, and they have different goals.
There are many articles that you can find online for building Docker images that apply equally well to Spring applications. Most notably:
Use layers to avoid rebuilding code that has not changed.
Use a Gradle Docker plugin to integrate the Docker build within Gradle. I'm not sure if the Spring Boot plugin has an integrated Docker build task now; if so, use that (see the sketch after this list).
Use a JRE as the base image instead of a JDK if your code can live with that. Many projects don't need a JDK to run.
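For what it's worth (not part of the original answer): since Spring Boot 2.3 the Gradle plugin does ship such a task, bootBuildImage, which builds an OCI image via Cloud Native Buildpacks without a Dockerfile. A hedged one-liner, with a made-up image name:
# builds an OCI image with Cloud Native Buildpacks; the image name is just an example
./gradlew bootBuildImage --imageName=myorg/myapp:latest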
After a lot of thought and research I have switched to buildpacks.
I have also included the image build process in the build itself, via the Maven plugin.
Buildpacks are a mature way of building Docker images with all best practices and security in mind.
A Dockerfile is a good option if you have a really, really custom requirement that does not fit the usual scenarios and requires custom development.
Example of buildpacks:
https://paketo.io/docs/howto/java/
For this Java buildpack you can configure:
SSL certificates
JVM options
remote debugging options
JVM vendor switching
a custom launch command
and the list goes on...
Also, you do not need to worry about all the best practices (https://docs.docker.com/develop/develop-images/dockerfile_best-practices/); buildpacks handle that for you.
With a Dockerfile, you might forget to apply standard practices to your image (when dealing with lots of services/images).
So it is another piece of code to maintain, and you must ensure you follow all practices correctly.
You can also include buildpacks in your builds,
e.g. you can use the Spring Boot Gradle or Maven plugin.
ref: https://docs.spring.io/spring-boot/docs/current/maven-plugin/reference/htmlsingle/#build-image
Ref: https://tanzu.vmware.com/developer/blog/understanding-the-differences-between-dockerfile-and-cloud-native-buildpacks/
ref: https://cloud.google.com/blog/topics/developers-practitioners/comparing-containerization-methods-buildpacks-jib-and-dockerfile
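For reference, a minimal hedged example with the Spring Boot Maven plugin (2.3+); the image name property is optional and shown only for illustration:
# builds an OCI image via buildpacks from the Maven build, no Dockerfile needed
mvn spring-boot:build-image -Dspring-boot.build-image.imageName=myorg/myapp:latest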
My boss has gone on holiday for a few days. I am a Java developer and I don't know much about Docker (only basic knowledge).
I have a Java Spring Boot project with a Dockerfile at the root of the project:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
I want to add Maven into the same Dockerfile, without modifying the project structure, and Maven should clean package it.
For example, docker build -t user/myimage .
should install Maven and clean package my project automatically,
without me needing to run mvn clean package explicitly.
I have found a few resources on Stack Overflow before, but I cannot experiment much; I need code that works with the JDK image given above.
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
Please give me an equivalent script for the Maven clean package step so I can add it to the Dockerfile which builds my image.
"I have found a few resources on Stack Overflow before, but I cannot experiment much; I need code that works with the JDK image given above."
If I understand well, your issue is that you want to add the Maven packaging step to the Docker build while still using that exact JDK version at runtime:
FROM openjdk:8-jdk-alpine, and not depend on the other base image: FROM maven:3.6.0-jdk-11-slim AS build.
First thing: don't worry about depending on another base image in a Docker build.
Docker has had the multi-stage build feature since Docker 17.05 (05/2017), which allows you to define multiple stages in a build, and that suits your need.
Indeed, in the first stage you can rely on the Maven image to build your application, while in the second stage you rely on the JDK image to run it.
The multi-stage build feature has another advantage: image size optimization, since the intermediate layers produced by earlier stages are not kept in the final image.
For integration/production-like environments, that is probably the way I would go.
To come back to your issue: if you want to use the answer you linked, the main thing you need to change is the JDK tag of the final base image to match your requirement, and you should also adapt a few small things.
As a side note, you currently copy the JAR to the root of the image. That is not good practice, for security reasons among others; you should rather copy it into a folder, for example /app.
When you reference the jar built by Maven, it is also preferable to prefix the jar with its artifactId/final name (without its version) rather than treating any jar in the target folder as the fat jar, as you do here: ARG JAR_FILE=target/*.jar.
So it would give :
# Build stage
FROM maven:3.6.0-jdk-11-slim AS build
COPY pom.xml /app/
COPY src /app/src
RUN mvn -f /app/pom.xml clean package
# Run stage -- use your target JDK base image here!
FROM openjdk:8-jdk-alpine
COPY --from=build /app/target/app*.jar /app/app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/app.jar"]
Note that the proposed solution is very simple and is enough in many cases, but it may not suit your specific context.
There are really several approaches to handle your question. Depending on your target environment, your requirements differ and the strategy will too.
These requirements may be build speed, final image size, or fast feedback for local developers, and they are not all necessarily compatible.
The Docker build area is a very large subject; I strongly advise you to study it further to be able to adjust the Docker build solution to your real needs.
This would typically be 2 steps:
build the jar;
run docker build to build an image containing the jar (see the example below).
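Concretely, for the project above, the two manual steps would look like this (using the image tag from the question):
mvn clean package                # produces the jar under target/
docker build -t user/myimage .   # the existing Dockerfile copies the jar into the image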
There are Maven/Gradle plugins that help you automate this process from your Maven build, though. See dockerfile-maven; this is probably what you're looking for: https://github.com/spotify/dockerfile-maven
We are working on a legacy project, and the first task is to set up DevOps for it.
The important thing is we are very new to this area.
We are planning to use Jenkins and SonarQube for this purpose initially. Let me start with the requirements.
Currently the project is subdivided into multiple projects (not modules)
We had to follow this build structure as there are no plans for reorganising it as a single multi-module Maven project
Currently the builds and dependencies are managed manually
E.g.: the project is subdivided into 5 multi-module Maven projects,
say A, B, C, D and E
1. A and C are completely independent and can be easily built
2. B depends on the artifact generated by A (jar) and has multiple Maven profiles (say tomcat and websphere; it is a web service module)
3. D depends on the artifact generated by C
4. E depends on A, B and D and has multiple Maven profiles (say tomcat and websphere; it is a web project)
Based on the Jenkins documentation, to handle this scenario we are thinking about parameterized builds using the "Parameterized Build plugin" and the "Extended Choice Parameter plugin". With the help of these plugins we are able to parameterize the profile name, but before each build, the build waits for the profile parameter.
So we are still searching for a good solution to:
1. Keep the dependency between projects and build the whole set of projects if there is any change in SCM (SVN). For that we used "Build whenever a SNAPSHOT dependency is built" and the "SCM polling" option. Unfortunately this option does not seem to work in our case (we have set an interval of 5 min for SCM polling, but no build happens on test commits).
2. Even though we are able to parameterize the profile, this is still a manual step (is there an option to automate this part too, i.e. build with the tomcat profile and the websphere profile sequentially?).
We are struggling to find a solution that caters for all these core requirements. Any pointers would be greatly appreciated.
Thanks,
San
My Maven knowledge is limited; however, since you didn't get any response yet, I'll try to give some general advice.
There are usually multiple ways to reach a given aim in Jenkins, each with its pros and cons. Choosing the most fitting solution depends on your specific requirements and your environment/setup.
However, you first need something that just works; then you can refine it.
You get a quick result with the following:
Everything in one job
Configure your Subversion repo(s) to be checked out into your workspace
Enable the Poll SCM trigger
Build your modules/projects via Execute shell build steps. (Failed builds can be propagated to the job result by using exit 1 in an Execute shell build step.)
However, keep in mind that this prevents advanced functionality on a per-project/module basis, such as mail notifications to the dev to blame, or trends of metrics like warnings or static code analysis.
The following solution is easier to extend in that direction:
Wrapper job around your various build jobs
Use the build step Trigger/call builds on other projects to build A; archive the needed artifacts
Use the build step Trigger/call builds on other projects with the parameter tomcat to build B's tomcat version; use the Copy Artifact Plugin to copy over the jar from A
...
Use the build step Trigger/call builds on other projects with the parameter tomcat to build E's tomcat version. Use the Copy Artifact Plugin to copy all needed artifacts; you can specify the parameter there if you need the artifact of, e.g., B's tomcat version
In this setup, monitoring the SVN is an issue, since if you trigger from Poll SCM, the code is checked out in your wrapper workspace, while you don't actually need it checked out there but in your build jobs.
Possible solution: share the workspace between the wrapper job and your build jobs, so the duplicate checkouts in the build jobs will find the files already at the right revision. However, then you *need* to make sure the downstream jobs are executed on the same machine (there are plugins to do so).
Or, even more elegant: use a post-commit hook (see here, section Post-commit hook) on your SVN to notify Jenkins of changes.
Edit: for the future, it's worth looking into the Pipeline Plugin and its documentation for more complex builds; this is the engine for the upcoming Jenkins version 2.0, see here.
I would create 5 different jobs for A, B, C, D and E.
As you mentioned, A and C would be standalone jobs, so I would just do mvn clean install/package/verify based on your need.
For B I would first build A and then invoke another Maven target in the build to build B.
For D, I would first build C and then build D.
Finally for E, I would invoke top-level Maven targets 5 times: A, B, C, D and finally E.
Edit:
Jenkins 2 is out and has built-in support for pipelines.
A few pointers for your requirements:
"built the whole projects if there is any change in SCM"
Although Poll SCM usually requires less work, the proper way to do it is to use SVN hooks.
The solution works as follows:
First you enable the Trigger builds remotely feature and enter a random token in Authentication Token.
This allows you to trigger builds remotely using Jenkins REST API (http[s]://JENKINS_URL/job/BUILD_NAME/build?token=TOKEN)
Then you create a SVN hook (a script that runs whenever you commit) which triggers the build by sending a request to that URL (using curl,wget, python,...).
There are a lot of manuals on how to create SVN hooks, here's the first result on "SVN Hooks" from Google.
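A minimal post-commit hook could look like the hedged sketch below; JENKINS_URL, BUILD_NAME and TOKEN are placeholders matching the URL format above.
#!/bin/sh
# hooks/post-commit -- SVN passes the repository path and the new revision as arguments
REPOS="$1"
REV="$2"
# notify Jenkins; the triggered build checks out the new revision itself
curl -fsS "https://JENKINS_URL/job/BUILD_NAME/build?token=TOKEN" > /dev/null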
"keep the dependency between projects"
I would create a different Jenkins Job for each project separately, then make sure builds are executed in the required order.
I think the best way to order your builds (dependencies) is to create a Build Pipeline using the Pipeline Plugin (previously known as Workflow Plugin).
There is a lot to explain here, so it's better you read on your own. You can start here.
There are also other (simpler) solutions, like Build Flow Plugin or Parameterized Trigger Plugin which can help create dependencies between builds, but I think Pipeline is the newest and considered a best practice (it's definitely the most advanced solution).
Still, having said that, if you feel Pipeline is an overkill for you, go for the alternatives.
I would recommend making sure each build does a mvn install to the same local repo, and also deploys the artifact to Artifactory (hopefully you have one).
Automate parameterized builds: "build with tomcat profile and websphere profile"
To do that you'll need to create parameterized builds.
That's pretty easy to do: you just check This build is parameterized in your build config and add an MVN_PROFILE string/choice parameter.
After that you can trigger each build several times, with different parameters, using any one of the plugins mentioned in the previous bullet (see the example below).
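For example, combined with the remote trigger from the first bullet, each profile can be triggered via Jenkins' standard buildWithParameters endpoint (hedged sketch; JENKINS_URL, BUILD_NAME and TOKEN are placeholders):
# trigger the same job once per Maven profile
curl -X POST "https://JENKINS_URL/job/BUILD_NAME/buildWithParameters?token=TOKEN&MVN_PROFILE=tomcat"
curl -X POST "https://JENKINS_URL/job/BUILD_NAME/buildWithParameters?token=TOKEN&MVN_PROFILE=websphere"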
Extra tip:
While hacking your way through this, consider using the Job Configuration History Plugin; it can help review and revert changes made to the configuration.
Good luck, hope this helps :)
I would consider a slightly different approach to fully decouple the projects.
If you are able to set up an internal artifact repository, then in the Maven build I would treat each of the dependencies as a third-party library, exactly as is done with any other external libraries you use.
This way, each such project can be separately built and stored in the repository, and when a dependent project is built it will just take the right version as declared in its pom file.
This way you'll have a different build process for each of the projects, and only relevant projects (relevant = changed) will be built.
Environment:
Around a dozen services: both WARs and JARs (run with the Tanuki wrapper)
Each service is deployed to development, staging, and production; the environments use different web.xml files, Spring profiles (set on the Tanuki wrapper), ...
We want to store the artifacts on AWS S3 so they can be easily fetched by our EC2 instances: some services run on multiple machines, Auto Scaling instances automatically fetch the latest version once they boot, development artifacts should automatically be deployed once they are updated, ...
Our current deployment process looks like this:
Jenkins builds and tests our code
Once we are satisfied, we do a release with the Artifactory plugin (only for production deployments; development and staging artifacts are based on plain Jenkins builds)
We use the Promoted Builds plugin with 3 different promotion jobs for each project (development, staging, production) to build the artifact for the desired environment and then upload it to S3
This works mostly OK; however, we've run into multiple annoyances:
The Artifactory plugin cannot tag and update the source code, since it is unable to commit to the repository (this might be specific to CloudBees, our Jenkins provider). Never mind - we can do that manually.
The Jenkins S3 plugin doesn't work properly with multiple upload targets (we are using one for each environment) - it tries to upload to all of them and duplicates the settings when you save the configuration: JENKINS-18563. Never mind - we are using a shell script for the upload, which works fine.
The promotion jobs will sometimes fail because they can run on a different instance than the original build - resulting either in a failed build (because the files are missing) or in an outdated build (since an old version is used). Again, this might be due to the specific setup CloudBees provides. It can be fixed by running the build again and hopefully having the promotion job run on the same instance as its build this time (which is pure luck, as I understand it).
We've been advised by CloudBees to do a code checkout before running the promotion job, since the workspace is ephemeral and it's not guaranteed to exist or to be up to date. However, shell scripts in promotion jobs don't seem to honor environment variables, as stated in another Stack Overflow thread, even though they are listed below the textarea field. So svn checkout ${SVN_URL} -r ${SVN_REVISION} doesn't work.
I'm pretty sure this can be fixed by some workaround, but surely I must be doing something terribly wrong. I assume many others deploy their applications in a similar fashion, and there must be a better way - without various workarounds and annoyances.
Thanks for any pointers and reading everything to the end!
The biggest issue you face is that you actually want to do a build as part of the promotion. The Promoted Builds plugin is designed to do a few small actions, primarily on the archived artifacts of your build, or else to perform tagging or other such actions.
The design ensures that it gets a workspace on a slave. But (and this is a general Jenkins thing, not CloudBees specific) firstly, the workspace you get need not be the workspace that was used for the build you are trying to promote. It could be:
An empty workspace
The workspace of the most recent build of the project (which may be a newer build than you are trying to promote)
The workspace of an older build of the project (when you cannot get onto the slave that has the most recent build of the project)
Which of these you get depends entirely on the load your Jenkins cluster is under. When running on CloudBees, you are usually most likely to encounter one of the first two situations above.
The actions you place in your promotion should therefore be "self-sufficient". So, for example, they will copy the archived artifacts into a specific directory and then use those copied artifacts to do their thing. Or they will trigger a downstream build, passing through parameters from the build that is being promoted. Or they will perform some action on a remote server without using any local state (e.g. flipping the switch in a blue-green deployment).
In your case, you are fighting on multiple fronts. The second front is that you are fighting Maven.
Maven does not like it when a build profile being active produces a different, non-equivalent, artifact.
Maven can probably live with the release profile producing a version of the artifact with JavaScript minified and perhaps debug info stripped (though it would be better if it could be avoided)... but that is because the two artifacts are equivalent. A webapp with JavaScript minified should work just as well as one that is not minified (unless minification introduces bugs)
You use profiles to build completely different artifacts. This is bad as Maven does not store the active profile as part of the information that gets deployed to the Maven repository. Thus when you declare a dependency on one of the artifacts, you do not know which profile was active when that artifact was built. A common side-effect of this is that you end up with split-brain artifacts, where one part of a composite artifact is talking to the development DB and the other part is talking to the production DB. You can tell there is this bad code smell when developers routinely run mvn clean install -Pprod && mvn clean install -Pprod when they want to switch to building a production build because they have been burned by split-brain artifacts in the past. This is a symptom of bad design, not an issue with Maven (other than it would be nice to be able to create architecture specific builds of Maven artifacts... but they can pose their own issues)
The “Maven way” to handle the creation of different artifacts for different deployment environments is to use a separate module to repack the “generic” artifact. As is typical of Maven's subtle approach to handling bad architecture, you will need to add repack modules for each environment you want to target... which discourages having lots of different target environments... (i.e. if you have 10 webapps/services to deploy and three target deployment environments you will have 10 generic modules and 3x10 target specific modules... giving a 40 module build) and hopefully encourages you to find a way to have the artifact self-discover its correct configuration when deployed.
I suspect that the safest way to hack a solution is to rely on copying archived artifacts.
During the main build, create a file that will checkout the same revision as is being built, e.g.
echo '#!/usr/bin/env sh' > checkout.sh
echo "svn revert . -R" >> checkout.sh
echo "svn update --force -r ${SVN_REVISION}" >> checkout.sh
Though you may want to write something a little more robust, e.g. ensuring that you are in an SVN working copy, ensuring that the working copy is using the correct remote URI, removing any files that are not needed, etc.
Ensure that the script you create is archived.
Then, at the beginning of your promotion process, copy the archived checkout.sh to the workspace and run that script. You should then be able to do all your subsequent promotion steps as before.
A better solution would be to create build jobs for each of the promotion processes and pass the SVN revision through as a build parameter (but this will require more work to set up, as you already have the promotion processes in place).
The best solution is to fix your architecture so that you produce artifacts that discover their correct configuration from their target environment, and thus you can just deploy the same artifact to each required environment without having to rebuild artifacts at all.
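To make the last point concrete (an illustrative example, not from the original answer): with Spring, the same artifact can select its configuration at launch time via profiles, e.g.:
# one environment-agnostic jar, configured at deploy time (jar name is hypothetical)
java -Dspring.profiles.active=production -jar service.jar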
The problem: you have a zipped Java project distribution which depends on several libraries like spring-core, spring-context, jackson, testng and slf4j. The task is to make the thing buildable offline. It's okay to create a project-scope local repo with all the required library jars.
I've tried to do that. It looks like even though the project contains the jars it requires for javac and at runtime, the build still requires internet access: Maven will still go to the network to fetch most of the plugins it needs for the build. I assume that Maven is run with an empty .m2 directory (as this may be the first launch of the build, which may be an offline build). No, I am not okay with distributing a full Maven repo snapshot along with the project itself, as that looks like an utter mess to me.
A bit of background: the broader task is to create a Windows portable-style JDK/IntelliJ IDEA distribution which goes along with the project and allows some minimal Java coding/running inside the IDE with minimal configuration and minimal internet access. The project is targeted at students in a computer class, with little or no control over the system configuration. It is desirable to keep the console build system intact for offline mode, but I guess that Maven is overly dependent on the network, so I may have to ditch it in favour of good old Ant.
So, what's your opinion: can we make the first Maven build work completely offline? My gut feeling is that the initial Maven distribution contains just the bare minimum required to pull essential plugins from the main repo and is not fully functional without seeing the main repo at least once.
Maven has a '-o' switch which allows you to build offline:
-o,--offline Work offline
Of course, you will need to have your dependencies already cached in your $HOME/.m2/repository for this to build without errors. You can load the dependencies with:
mvn dependency:go-offline
I tried this process and it doesn't seem to fully work. I did:
rm -rf $HOME/.m2/repository
mvn dependency:go-offline # lot of stuff downloaded
# unplugged my network
# develop stuff
mvn install # errors from missing plugins
What did work however is:
rm -rf $HOME/.m2/repository
mvn install # while still online
# unplugged my network
# develop stuff
mvn install
You could run mvn dependency:go-offline on a brand-new .m2 repo for the project concerned. This should download everything Maven needs to be able to run offline. If these files are then put into a project-scope local repo, you should be able to achieve what you want. I haven't tried this, though.
Specify a local repository location, either within the settings.xml file with <localRepository>...</localRepository> or by running mvn with the -Dmaven.repo.local=... parameter.
After the initial project build, all necessary artifacts should be cached locally, and you can reference this repository location in the same way while running other Maven builds in offline mode (mvn -o ...).
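Putting the previous answers together, a hedged sketch of such a project-scope setup could be (the .repo path is just an example):
# while online: prime a project-local repository
mvn -Dmaven.repo.local=.repo dependency:go-offline
mvn -Dmaven.repo.local=.repo install
# later, without network access:
mvn -o -Dmaven.repo.local=.repo install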