This may sound crazy, but all of our developers work on the same Linux machine, and that machine also has Nexus installed as our Maven repo. Effectively everyone ends up with artefacts in their ~/.m2/ folder which are also duplicated on the Nexus server.
Is it possible to simply tell maven to only look at the artefacts in nexus?
For the moment I have set the property <localRepository>/path/to/global/repo</localRepository> in our global Maven config, but I'm unsure whether this could cause a problem if two users grab the same file at the same time.
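Concretely, the shared ${maven.home}/conf/settings.xml just contains something along these lines (the path is a placeholder):
<settings>
  <!-- all users on this machine resolve into the same directory -->
  <localRepository>/path/to/global/repo</localRepository>
</settings>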
We do this because the company won't buy us powerful workstations so we all ssh to our development server.
I'm not sure if this directly answers your concern, but it looks like what you're doing is correct.
From http://maven.apache.org/settings.html
localRepository: This value is the path of this build system's local repository. The default value is ${user.home}/.m2/repository. This element is especially useful for a main build server allowing all logged-in users to build from a common local repository.
I wouldn't want to point my local repo to the nexus datastore, because then installs would update the repository datastore behind nexus' back.
However, you could set up a single "machine" local repo, separate from the Nexus datastore,
and then for each user replace the ~/.m2/repository directory with a symlink pointing to that "machine" local repo.
At least then you'll only have 2 copies of the repo.
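A minimal sketch of that setup (the path and the developers group are just examples):
# one shared "machine" local repository, group-writable for all developers
sudo mkdir -p /srv/maven/repository
sudo chgrp developers /srv/maven/repository
sudo chmod 2775 /srv/maven/repository
# for each user: replace ~/.m2/repository with a symlink to the shared repo
mv ~/.m2/repository ~/.m2/repository.bak    # keep a backup, if one already exists
ln -s /srv/maven/repository ~/.m2/repository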
Concurrent installs and downloads are still likely to clobber one another, but this can be fixed with an annoying redo.
Update:
There is a new solution available.
Installation of the TEAM (Takari Extensions for Apache Maven) extensions provides a thread-safe local repository and an improved algorithm for multi-module builds.
See http://takari.io/book/30-team-maven.html#concurrent-safe-local-repository
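If I read the linked page correctly, the concurrent-safe local repository is enabled by dropping the Takari extension JARs into Maven's lib/ext directory; roughly like this (sketch only - take the exact artifact names and versions from the Takari docs):
# sketch only - verify names/versions against the linked Takari documentation
cp takari-local-repository-<version>.jar $M2_HOME/lib/ext/
cp takari-filemanager-<version>.jar $M2_HOME/lib/ext/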
Use the
--offline
option on the command line.
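For example:
# resolve everything from the local repository only; fail instead of contacting remote repos
mvn --offline clean install
# short form
mvn -o clean install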
Related
I am working on a company network which blocks connectivity to the remote Maven repo. What is the best way to use Maven offline without syncing with the remote repo?
Is there an option that doesn't require setting up a mirror on the local machine?
This is not ideal, but you could build the project outside your company network (at home), then take a copy of your .m2 directory back to your work computer. This works in a pinch, but only until you need more dependencies.
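A sketch of that round trip (paths are just examples):
# at home, after building, pack up the local repository...
tar czf m2-repo.tgz -C ~/.m2 repository
# ...then at work, unpack it into place and build offline
tar xzf m2-repo.tgz -C ~/.m2
mvn -o clean install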
We need our current project to be reasonably portable: it must run, build, and test offline in several different, disconnected environments (no internet access, and no software-development support software can be installed).
We've tried to make it work with Maven's offline mode. This, however, requires the local package cache to be managed and kept in sync across development environments. It has proven to be a major headache.
Is there a way to have dependencies versioned in the same repository as the source code while still taking advantage of a repository-based build system like Maven (on the machines that are online)?
You could configure the Maven local repository as a separate folder inside your project. In this case both your source code and the folder holding the Maven local repo are stored in your version control system. You will, however, have to manually delete old dependencies after a version upgrade so that unused jars aren't kept in version control. There is an option to pass the path to the local repo as a parameter, like this:
mvn -Dmaven.repo.local=<path>
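For example (the folder name is just a suggestion):
# resolve and cache everything into a repo folder inside the project,
# which is then committed alongside the source
mvn -Dmaven.repo.local=./project-repo dependency:go-offline
# later, offline builds can use the same folder
mvn -o -Dmaven.repo.local=./project-repo clean install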
I made changes to the source code of a certain project (one that exists in a Maven repo) by taking its source from SVN and changing some lines in it.
Now I need to install this into our local repository so that other people using it have access to this update. What are the recommended steps for installing this into my local repo? Shall I change the version? Should it be a snapshot, or should I just build it with the same version? I just need some standards for doing this.
You definitely should not use the same version number. If you use the same version number there will be a lot of confusion as to which jar you are actually using: the one that's in the Maven repo or the one in your local repo. You don't want a person in the future looking at your dependencies and getting thoroughly confused.
Also, to make it absolutely clear, I would change the artifactId as well. Prefix it with your name or some other name so that it can be easily distinguished.
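For example, something along these lines (the version suffix is just a placeholder, and this assumes distributionManagement in the POM points at your company repository):
# give the fork its own version; ideally also change the artifactId in the POM
mvn versions:set -DnewVersion=1.2.3-mycompany-1
# deploy to the internal repository so colleagues can pick it up
mvn clean deploy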
Environment:
Around a dozen services: both WARs and JARs (run with the Tanuki wrapper)
Each service is being deployed to development, staging, and production: the environments use different web.xml files, Spring profiles (set on the Tanuki wrapper),...
We want to store the artifacts on AWS S3 so they can be easily fetched by our EC2 instances: some services run on multiple machines, AutoScaling instances automatically fetch the latest version once they boot, development artifacts should automatically be deployed once they are updated,...
Our current deployment process looks like this:
Jenkins builds and tests our code
Once we are satisfied, we do a release with the Artifactory plugin (only for production deployments; development and staging artifacts are based on plain Jenkins builds)
We use the Promoted Builds plugin with 3 different promotion jobs for each project (development, staging, production) to build the artifact for the desired environment and then upload it to S3
This mostly works OK; however, we've run into multiple annoyances:
The Artifactory plugin cannot tag and update the source code because it is not able to commit to the repository (this might be specific to CloudBees, our Jenkins provider). Never mind - we can do that manually.
The Jenkins S3 plugin doesn't properly work with multiple upload targets (we are using one for each environment) - it will try to upload to all of them and duplicate the settings when you save the configuration: JENKINS-18563. Never mind - we are using a shell script for the upload, which is working fine.
The promotion jobs will sometimes fail because they can run on a different instance than the original build - resulting either in a failed build (because the files are missing) or in an outdated build (since an old version is being used). Again, this might happen due to the specific setup CloudBees is providing. It can be fixed by running the build again and hopefully having the promotion job run on the same instance as its original build this time (which is pure luck as far as I understand).
We've been advised by CloudBees to do a code checkout before running the promotion job, since the workspace is ephemeral and it's not guaranteed that it exists or that it's up to date. However, shell scripts in promotion jobs don't seem to honor environment variables as stated in another StackOverflow thread, even though they are linked below the textarea field. So svn checkout ${SVN_URL} -r ${SVN_REVISION} doesn't work.
I'm pretty sure this can be fixed by some workaround, but surely I must be doing something terribly wrong. I assume many others are deploying their applications in a similar fashion and there must be a better way - without various workarounds and annoyances.
Thanks for any pointers and reading everything to the end!
The biggest issue you face is that you actually want to do a build as part of the promotion. The promoted builds plugin is designed to do a few small actions, primarily on the archived artifacts of your build, or else to perform tagging or other such actions.
The design ensures that it gets a workspace on a slave. But (and this is a general Jenkins thing, not something CloudBees-specific) the workspace you get need not be the workspace that was used for the build you are trying to promote. It could be:
An empty workspace
The workspace of the most recent build of the project (which may be a newer build than you are trying to promote)
The workspace of an older build of the project (when you cannot get onto the slave that has the most recent build of the project)
Now which of these you get entirely depends on the load that your Jenkins cluster is under. When running on CloudBees, you are usually most likely to encounter either of the first two situations above.
The actions you place in your promotion should therefore be “self-sufficient”. So for example they will copy the archived artifacts into a specific directory and then use those copied artifacts to do their 'thing'. Or they will trigger a downstream build passing through parameters from the build that is promoted. Or they will perform some action on a remote server without using any local state (i.e. flipping the switch in a blue-green deployment)
In your case, you are fighting on multiple fronts. The second front is that you are fighting Maven.
Maven does not like it when an active build profile produces a different, non-equivalent artifact.
Maven can probably live with the release profile producing a version of the artifact with JavaScript minified and perhaps debug info stripped (though it would be better if it could be avoided)... but that is because the two artifacts are equivalent. A webapp with JavaScript minified should work just as well as one that is not minified (unless minification introduces bugs)
You use profiles to build completely different artifacts. This is bad as Maven does not store the active profile as part of the information that gets deployed to the Maven repository. Thus when you declare a dependency on one of the artifacts, you do not know which profile was active when that artifact was built. A common side-effect of this is that you end up with split-brain artifacts, where one part of a composite artifact is talking to the development DB and the other part is talking to the production DB. You can tell there is this bad code smell when developers routinely run mvn clean install -Pprod && mvn clean install -Pprod when they want to switch to building a production build because they have been burned by split-brain artifacts in the past. This is a symptom of bad design, not an issue with Maven (other than it would be nice to be able to create architecture specific builds of Maven artifacts... but they can pose their own issues)
The “Maven way” to handle the creation of different artifacts for different deployment environments is to use a separate module to repack the “generic” artifact. As is typical of Maven's subtle approach to handling bad architecture, you will need to add repack modules for each environment you want to target... which discourages having lots of different target environments... (i.e. if you have 10 webapps/services to deploy and three target deployment environments you will have 10 generic modules and 3x10 target specific modules... giving a 40 module build) and hopefully encourages you to find a way to have the artifact self-discover its correct configuration when deployed.
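A rough sketch of what such a repack module's POM can look like, using a WAR overlay on top of the generic artifact (all coordinates are placeholders; the environment-specific web.xml would live in this module's src/main/webapp):
<!-- myapp-prod/pom.xml: repackages the generic WAR with production configuration -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>myapp-prod</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging>
  <dependencies>
    <!-- the generic artifact being repacked; war-type dependencies are applied as overlays -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>myapp</artifactId>
      <version>1.0-SNAPSHOT</version>
      <type>war</type>
    </dependency>
  </dependencies>
  <!-- files in this module's src/main/webapp (e.g. the production web.xml)
       take precedence over the same files coming from the overlay -->
</project>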
I suspect that the safest way to hack a solution is to rely on copying archived artifacts.
During the main build, create a file that will check out the same revision as is being built, e.g.
echo '#!/usr/bin/env sh' > checkout.sh
echo "svn revert . -R" >> checkout.sh
echo "svn update --force -r ${SVN_REVISION}" >> checkout.sh
Though you may want to write something a little more robust, e.g. ensuring that you are in an SVN working copy, ensuring that the working copy is using the correct remote URI, removing any files that are not needed, etc.
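For example, still generated during the main build (only a sketch; SVN_URL and SVN_REVISION are expanded now, while the Jenkins build environment provides them):
cat > checkout.sh <<EOF
#!/usr/bin/env sh
set -e
# do a fresh checkout if the workspace is not a working copy of the expected repository
if ! svn info . 2>/dev/null | grep -q "^URL: ${SVN_URL}"; then
  svn checkout --force ${SVN_URL} -r ${SVN_REVISION} .
else
  svn revert . -R
  svn update --force -r ${SVN_REVISION}
fi
EOF
chmod +x checkout.sh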
Ensure that the script you create is archived.
At the beginning of your promotion process, copy the archived checkout.sh into the workspace and run that script. Then you should be able to do all your subsequent promotion steps as before.
A better solution would be to create build jobs for each of the promotion processes and pass the SVN revision through as a build parameter (but this will require more setup work for you, as you already have the promotion processes set up).
The best solution is to fix your architecture so that you produce artifacts that discover their correct configuration from their target environment, and thus you can just deploy the same artifact to each required environment without having to rebuild artifacts at all.
Can someone explain the search function in m2eclipse? I am not clear on where this info is coming from or how to troubleshoot it when it can't find an artifact. I have two Eclipse installations, both with the m2eclipse plugin; one works (finds some artifacts but not others) and the second one doesn't return anything.
When using maven from multiple eclipse installations, one method to get consistent results is to install maven separately and point eclipse (under Preferences->Maven->Installations) to the external maven instance in place of the embedded installation. An additional advantage to this approach is the ability to run maven from the command line independent of IDE to get a 'pure' view of the build process. This can be valuable when troubleshooting.
Regardless, m2eclipse uses the standard maven practice of locating a dependency in the local repository (typically {home directory}/.m2/repository), then turns to any 'remote' repositories. The local repository location can be found in eclipse under Preferences->Maven->User Settings. If no other configuration has been done, the 'central' maven repository at http://repo.maven.apache.org/maven2/ is the next location searched.
Since you are getting different results from each of your eclipse installations, I would assume that they are looking at different repositories, although I'm not certain how the settings would have gotten that way. It would be interesting to know whether the artifacts you are seeking are in the registered repository locations.
Note that this assumes you are accessing released artifacts. If you are working with snapshots, the rules change a little and configuration (in settings.xml file) is significant.
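For example, snapshot resolution is typically enabled per repository via a profile in settings.xml, roughly like this (the IDs and URL are placeholders; this fragment goes inside the <settings> element):
<profiles>
  <profile>
    <id>snapshots</id>
    <repositories>
      <repository>
        <id>my-snapshots</id>
        <url>http://repo.example.com/snapshots/</url>
        <snapshots>
          <enabled>true</enabled>
          <!-- how often Maven re-checks for newer snapshots: always, daily, interval:X, never -->
          <updatePolicy>daily</updatePolicy>
        </snapshots>
      </repository>
    </repositories>
  </profile>
</profiles>
<activeProfiles>
  <activeProfile>snapshots</activeProfile>
</activeProfiles>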