I have code in a repository. Now I want to create a job which will build code from the repository and deploy it on two servers.
Right now I create two jobs with exactly the same configuration; the only difference is the server to which each one deploys.
Is it possible to do this with a single job?
Can I suggest using this Jenkins plugin? You can configure batch tasks (either Maven goals or scripts) and attach them to your normal Maven jobs.
https://wiki.jenkins-ci.org/display/JENKINS/Batch+Task+Plugin
Firstly, you have a Jenkins job that builds your project normally.
Then, using this plugin, you can configure two extra tasks on that same Jenkins job called, say, "Deploy-server-1" and "Deploy-server-2".
After you build your job, click on the "Task" button and you can easily run your deploy tasks.
So the process is:
-> build
-> deploy server 1
-> deploy server 2
If you have a look at the link I added for the Batch Task Plugin, they have a single task called 'release'. Just imagine you can have more tasks right under it, to do whatever you want.
You probably need admin rights on your Jenkins server in order to install this plugin, if it's not there already...
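For illustration, each of the two deploy tasks could be a small shell script like the one below; the host name, paths and restart command are placeholders, and an SSH-based copy is just one possible deployment mechanism (a curl upload to a Tomcat manager would work equally well):
#!/bin/sh
# Hypothetical "Deploy-server-1" batch task; "Deploy-server-2" would be identical
# apart from the host name.
scp target/myapp.war deployuser@server1.example.com:/opt/myapp/myapp.war
ssh deployuser@server1.example.com '/opt/myapp/restart.sh'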
I solved this by adding the Flexible Publish plugin and using a Conditional Action with the condition Run: 'Always'. You can add as many actions as you like to the Flexible Publish conditional action, and each of them can be 'Deploy WAR/EAR to a container'.
You can deploy to multiple servers using the Node and Label parameter plugin.
Add the servers you want to deploy your code using Jenkins nodes: Manage Jenkins > Manage nodes > New node.
Be sure to add a label to each node so you can group them together and deploy to that group.
Create a new freestyle project and check the This project is parameterized option. Add a parameter and choose Label, and add the label you just created for your servers (nodes).
From here, every build step will be replicated on each labeled node, so you can fetch your repository code using an SCM plugin (GitHub, GitLab, etc.) or directly from your server; either way, the code will be deployed on each node.
Be aware that if your code needs to be compiled, it will be compiled on the remote server. Also, every time this build is deployed to a node, it will create an additional build in your build queue.
You can use the Copy Artifact Plugin, which copies build artifacts from one project to another. That way you have one job that builds what you need, and its output can be shared across other jobs.
I had the same problem and am about to solve it using multiple curl statements. See my post http://martin.podval.eu/2013/10/tomcat-7-remote-deployment.html
Well, Jenkins does not allow me to add multiple command-line execution tasks, so I need to use three curl statements in one.
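As a rough sketch of what that single shell step could look like, assuming the Tomcat manager's text API as in the post above (hosts, credentials and context path are placeholders):
#!/bin/sh
# Deploy the freshly built WAR to both Tomcat instances, then run a simple smoke check.
curl -T target/myapp.war "http://deployer:secret@server1.example.com:8080/manager/text/deploy?path=/myapp&update=true"
curl -T target/myapp.war "http://deployer:secret@server2.example.com:8080/manager/text/deploy?path=/myapp&update=true"
curl -f "http://server1.example.com:8080/myapp/" > /dev/null   # a non-2xx response fails this step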
I solved a similar problem with Ant tasks, getting the server URLs as input parameters from a build.properties file. In my case there were several environments to support with the same script, with 1-2 servers in each.
<target name="deploy.application">
    <groovy>
        // Run the per-server deploy targets only for servers whose URL
        // is actually set in build.properties.
        if (properties['app.server'] && properties['app.server'] != "") {
            ant.project.executeTarget('deploy.server1')
        }
        if (properties['app.server2'] && properties['app.server2'] != "") {
            ant.project.executeTarget('deploy.server2')
        }
    </groovy>
</target>
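A possible invocation from a Jenkins shell step, assuming the target above lives in build.xml and the server URLs are kept in build.properties (file and property names are simply the ones used above):
# Load the server URLs from build.properties...
ant -propertyfile build.properties deploy.application
# ...or override a single server on the command line
ant -Dapp.server=http://server1.example.com:8080 deploy.application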
We are working on a legacy project, and the first task is to set up DevOps for it.
The important thing is that we are very new to this area.
We are planning to use Jenkins and SonarQube initially. Let me start with the requirements.
Currently the project is subdivided into multiple projects (not modules).
We had to follow this build structure as there are no plans for reorganising it into a single multi-module Maven project.
Currently the builds and dependencies are managed manually
E.g. the project is subdivided into 5 multi-module Maven projects,
say A, B, C, D and E:
1. A and C are completely independent and can be easily built
2. B depends on the artifact generated by A (jar) and has multiple maven profiles (say tomcat and websphere, it is a webservice module)
3. D depends on the artifact generated by C
4. E depends on A, B and D and has multiple maven profiles (say tomcat and websphere, it is a web project)
Based on the Jenkins documentation for handling this scenario, we are thinking about parameterized builds using the "Parameterized Build plugin" and the "Extended Choice Parameter plugin". With the help of these plugins we are able to parameterize the profile name, but before each build, the build waits for the profile parameter.
So we are still searching for a good solution to:
1. Keep the dependency between projects and build all the projects if there is any change in SCM (SVN). For that we have used "Build whenever a SNAPSHOT dependency is built" and the SCM polling option. Unfortunately this does not seem to work in our case (we have set an interval of 5 minutes for SCM polling, but no build is triggered by test commits).
2. Even though we are able to parameterize the profile, this is still a manual step (is there an option to automate this part too, i.e. so that builds with the tomcat profile and the websphere profile happen sequentially?).
We are struggling to find a solution that caters to all these core requirements. Any pointer would be greatly appreciated.
Thanks,
San
My Maven knowledge is limited; however, since you didn't get any response yet, I'll try to give some general advice.
There are usually multiple ways to reach a given aim in Jenkins, each with its pros and cons. Choosing the most fitting solution depends on the specific requirements and your environment/setup.
However you first need something that just works, then you can refine it.
You get a quick result with the following:
Everything in one job
Configure your Subversion repo (multiple are possible) to be checked out into your workspace
Enable Poll SCM trigger
Build your modules/projects via Execute shell build steps. (A failed build can be propagated to the job result by using exit 1 in an Execute shell build step; see the sketch below.)
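A minimal Execute shell step along those lines, using hypothetical project directory names and the dependency order from the question:
# Build the checked-out projects in dependency order; exit 1 fails the Jenkins job
# as soon as one build breaks.
mvn -f A/pom.xml clean install || exit 1
mvn -f C/pom.xml clean install || exit 1
mvn -f B/pom.xml clean install -Ptomcat || exit 1
mvn -f D/pom.xml clean install || exit 1
mvn -f E/pom.xml clean install -Ptomcat || exit 1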
However, keep in mind that this prevents advanced functionality on a per-project/module basis, such as mail notifications to the developer to blame, or metric trends like warnings or static code analysis.
The following solution is easier to extend in that direction.
Wrapper job around your various build jobs
Use the build step Trigger/call builds on other projects to build A, and archive the needed artifacts
Use the build step Trigger/call builds on other projects with some parameter tomcat to build the B tomcat version; use the Copy Artifact Plugin to copy over the jar from A
...
Use the build step Trigger/call builds on other projects with some parameter tomcat to build the E tomcat version. Use the Copy Artifact Plugin to copy all needed artifacts; you can specify a parameter there if you need the artifact of, e.g., the B tomcat version.
In this setup, monitoring the SVN is an issue: if you trigger the wrapper from SCM polling, it will check the code out into the wrapper's workspace, while you don't actually need it checked out there but in your build jobs.
Possible solution: share the workspace between the wrapper job and your build jobs, so the duplicate checkouts in the build jobs will find the files already at the right revision. However, then you *need* to make sure the downstream jobs are executed on the same machine (there are plugins to do so).
Or, even more elegant: use a post-commit hook (see here, section Post-commit hook) on your SVN to notify Jenkins of changes.
Edit: For the future, it's worth looking into the Pipeline Plugin and its documentation for more complex builds; this is the engine for the upcoming Jenkins version 2.0, see here.
I would create 5 different jobs for ABCDE.
As you mentioned, A and C would be standalone jobs, so I would just do mvn clean install/package/verify based on your need.
For B, I would first build A and then invoke another Maven target in the build to build B.
For D, I would first build C and then build D.
Finally, for E, I would invoke top-level mvn targets 5 times: A, B, C, D and finally E.
Edit:
Jenkins 2 is out and has a built-in support for pipelines.
A few pointers for your requirements:
"built the whole projects if there is any change in SCM"
Although Poll SCM usually requires less work, the proper way to do it is to use SVN hooks.
The solution works as follows:
First you enable the Trigger builds remotely feature and enter a random token in Authentication Token.
This allows you to trigger builds remotely using Jenkins REST API (http[s]://JENKINS_URL/job/BUILD_NAME/build?token=TOKEN)
Then you create an SVN hook (a script that runs whenever you commit) which triggers the build by sending a request to that URL (using curl, wget, python, ...); a minimal sketch follows below.
There are a lot of manuals on how to create SVN hooks; here's the first result for "SVN Hooks" on Google.
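For example, the post-commit hook can be as small as this (the Jenkins URL, job name and token are placeholders matching the URL format above):
#!/bin/sh
# Hypothetical hooks/post-commit script in the SVN repository.
# Subversion passes the repository path and revision as $1 and $2;
# we only need to ping Jenkins to trigger the build.
curl -s "http://jenkins.example.com/job/my-build/build?token=MY_TOKEN" > /dev/null
Remember to make the hook script executable.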
"keep the dependency between projects"
I would create a different Jenkins Job for each project separately, then make sure builds are executed in the required order.
I think the best way to order your builds (dependencies) is to create a Build Pipeline using the Pipeline Plugin (previously known as Workflow Plugin).
There is a lot to explain here, so it's better you read on your own. You can start here.
There are also other (simpler) solutions, like Build Flow Plugin or Parameterized Trigger Plugin which can help create dependencies between builds, but I think Pipeline is the newest and considered a best practice (it's definitely the most advanced solution).
Still, having said that, if you feel Pipeline is an overkill for you, go for the alternatives.
I would recommend making sure each build does a mvn install to the same local repo, and also deploys the artifact to Artifactory (hopefully you have one).
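As a hypothetical example of that Maven invocation per job (the shared repo path is a placeholder, and the Artifactory URL is assumed to be configured in the pom's distributionManagement):
# Install into a shared local repository and deploy the artifact to Artifactory in one run.
mvn clean deploy -Dmaven.repo.local=/var/jenkins/shared-m2-repo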
Automate parameterized builds: "build with tomcat profile and websphere profile"
To do that you'll need to create parameterized builds.
That's pretty easy to do, you just check This build is parameterized in your build config and add a MVN_PROFILE string/choice parameter.
After that you can trigger each build several times, with different parameters, using any one of the plugins mentioned in the previous bullet.
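The build step itself, whether a Maven step's goals field or a plain shell step, can then just pick up the parameter; MVN_PROFILE is the parameter name suggested above:
# Executed once per triggered build, e.g. with MVN_PROFILE=tomcat and again with MVN_PROFILE=websphere.
mvn clean install -P "$MVN_PROFILE"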
Extra Tip:
While hacking your way through this, consider using Job Configuration History Plugin, it can help review and revert changes made to the configuration.
Good luck, hope this helps :)
I would consider a slightly different approach to fully decouple the projects.
If you are able to set up your own internal Artifactory, then in the Maven build I would treat each one of the dependencies as a third-party library, exactly as is done with any other external libraries you are using.
This way, each such project can be separately built and stored in Artifactory, and when a dependent project is built it will just pick up the right version as specified in the pom file.
This way you'll have a different build process for each of the projects, and only relevant (i.e. changed) projects will be built.
What I want to achieve is this:
Build the maven project and push the jar to repo, using maven & jenkins.
Deploy the application, using script.
Run jmeter test cases and display test results in jenkins dashboard.
First, Jenkins builds my project and pushes it to the repo.
Then I have defined a post-build step in Jenkins to run a script on the remote server; this script deploys and starts my application.
Then I have created a post-build action in Jenkins to invoke top-level Maven targets and run mvn verify, which triggers the jmeter-maven plugin, which runs the test cases against my already running application.
Is this a good approach and if not please let me know a better way to do this?
Thanks in advance.
The bit that may be missing here is how Jenkins knows whether the build should be marked as passed or failed. Even if the jmeter-maven-analysis plugin execution exits with a zero exit code, that doesn't mean the application is fine performance-wise. It may be, but it doesn't have to be. I came across that kind of concern some time ago and provided a solution. Check the project wiki for a usage example and more information.
Environment:
Around a dozen services: both WARs and JARs (run with the Tanuki wrapper)
Each service is being deployed to development, staging, and production: the environments use different web.xml files, Spring profiles (set on the Tanuki wrapper),...
We want to store the artifacts on AWS S3 so they can be easily fetched by our EC2 instances: some services run on multiple machines, AutoScaling instances automatically fetch the latest version once they boot, development artifacts should automatically be deployed once they are updated,...
Our current deployment process looks like this:
Jenkins builds and tests our code
Once we are satisfied, we do a release with the Artifactory plugin (only for production deployments; development and staging artifacts are based on plain Jenkins builds)
We use the Promoted Builds plugin with 3 different promotion jobs for each project (development, staging, production) to build the artifact for the desired environment and then upload it to S3
This works mostly OK, however we've run into multiple annoyances:
The Artifactory plugin cannot tag and update the source code because it is not able to commit to the repository (this might be specific to CloudBees, our Jenkins provider). Never mind - we can do that manually.
The Jenkins S3 plugin doesn't properly work with multiple upload targets (we are using one for each environment) - it will try to upload to all of them and duplicate the settings when you save the configuration: JENKINS-18563. Never mind - we are using a shell script for the upload, which is working fine.
The promotion jobs will sometimes fail because they can be run on a different instance than the original build, resulting either in a failed build (because the files are missing) or in an outdated build (since an old version is being used). Again, this might happen due to the specific setup CloudBees is providing. This can be fixed by running the build again and hopefully having the promotion job run on the same instance as its build this time (which is pure luck in my understanding).
We've been advised by CloudBees to do a code checkout before running the promotion job, since the workspace is ephemeral and it's not guaranteed that it exists or that it's up to date. However, shell scripts in promotion jobs don't seem to honor environment variables as stated in another StackOverflow thread, even though they are linked below the textarea field. So svn checkout ${SVN_URL} -r ${SVN_REVISION} doesn't work.
I'm pretty sure this can be fixed by some workaround, but surely I must be doing something terribly wrong. I assume many others are deploying their applications in a similar fashion and there must be a better way - without various workarounds and annoyances.
Thanks for any pointers and reading everything to the end!
The biggest issue you face is that you actually want to do a build as part of the promotion. The promoted builds plugin is designed to do a few small actions, primarily on the archived artifacts of your build, or else to perform tagging or other such actions.
The design ensures that it gets a workspace on a slave. But (and this is a general Jenkins thing, not a CloudBees-specific one) firstly, the workspace you get need not be the workspace that was used for the build you are trying to promote. It could be:
An empty workspace
The workspace of the most recent build of the project (which may be a newer build than you are trying to promote)
The workspace of an older build of the project (when you cannot get onto the slave that has the most recent build of the project)
Now which of these you get entirely depends on the load that your Jenkins cluster is under. When running on CloudBees, you are usually most likely to encounter either of the first two situations above.
The actions you place in your promotion should therefore be “self-sufficient”. So for example they will copy the archived artifacts into a specific directory and then use those copied artifacts to do their 'thing'. Or they will trigger a downstream build passing through parameters from the build that is promoted. Or they will perform some action on a remote server without using any local state (i.e. flipping the switch in a blue-green deployment)
In your case, you are fighting on multiple fronts. The second front you are fighting on is that you are fighting Maven.
Maven does not like it when a build profile being active produces a different, non-equivalent, artifact.
Maven can probably live with the release profile producing a version of the artifact with JavaScript minified and perhaps debug info stripped (though it would be better if it could be avoided)... but that is because the two artifacts are equivalent. A webapp with JavaScript minified should work just as well as one that is not minified (unless minification introduces bugs)
You use profiles to build completely different artifacts. This is bad as Maven does not store the active profile as part of the information that gets deployed to the Maven repository. Thus when you declare a dependency on one of the artifacts, you do not know which profile was active when that artifact was built. A common side-effect of this is that you end up with split-brain artifacts, where one part of a composite artifact is talking to the development DB and the other part is talking to the production DB. You can tell there is this bad code smell when developers routinely run mvn clean install -Pprod && mvn clean install -Pprod when they want to switch to building a production build because they have been burned by split-brain artifacts in the past. This is a symptom of bad design, not an issue with Maven (other than it would be nice to be able to create architecture specific builds of Maven artifacts... but they can pose their own issues)
The “Maven way” to handle the creation of different artifacts for different deployment environments is to use a separate module to repack the “generic” artifact. As is typical of Maven's subtle approach to handling bad architecture, you will need to add repack modules for each environment you want to target... which discourages having lots of different target environments... (i.e. if you have 10 webapps/services to deploy and three target deployment environments you will have 10 generic modules and 3x10 target specific modules... giving a 40 module build) and hopefully encourages you to find a way to have the artifact self-discover its correct configuration when deployed.
I suspect that the safest way to hack a solution is to rely on copying archived artifacts.
During the main build, create a file that will check out the same revision as is being built, e.g.
echo '#!/usr/bin/env sh' > checkout.sh
echo "svn revert . -R" >> checkout.sh
echo "svn update --force -r ${SVN_REVISION}" >> checkout.sh
Though you may want to write something a little more robust, e.g. ensuring that you are in an SVN working copy, ensuring that the working copy is using the correct remote URI, removing any files that are not needed, etc. (see the sketch below).
Ensure that the script you create is archived.
At the beginning of your promotion process, copy the archived checkout.sh into the workspace and then run that script. Then you should be able to do all your subsequent promotion steps as before.
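For reference, a slightly more robust generated checkout.sh along those lines might look like this; the repository URL and revision shown are placeholders that the main build would bake in:
#!/usr/bin/env sh
# Sketch of a generated checkout.sh. REPO_URL and REVISION would be written in
# by the main build (as with the echo commands above); all values are placeholders.
set -e
REPO_URL="http://svn.example.com/repo/trunk"
REVISION="12345"
if svn info . > /dev/null 2>&1; then
    CURRENT_URL=$(svn info . | sed -n 's/^URL: //p')
    if [ "$CURRENT_URL" = "$REPO_URL" ]; then
        svn revert -R .
        svn update --force -r "$REVISION"
        exit 0
    fi
fi
# Not a working copy (or the wrong one): start from scratch.
rm -rf ./* .svn
svn checkout "$REPO_URL" -r "$REVISION" .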
A better solution would be to create build jobs for each of the promotion processes and pass the SVN revision through as a build parameter (but that will require more work for you to set up, as you already have the promotion processes in place).
The best solution is to fix your architecture so that you produce artifacts that discover their correct configuration from their target environment, and thus you can just deploy the same artifact to each required environment without having to rebuild artifacts at all.
I'm on a team that develops plugins in Java.
We have Maven, a repository and Jenkins,
and I have a Debian server to test my applications.
When I push my commits, this happens:
Push to the repository, upload and build in Jenkins.
Users download these .jar files and upload them to their server.
What I want to happen instead:
There are two ways.
First: after building with Jenkins, download these files.
Second: when pressing "maven build", Maven builds my applications and copies them to my server.
How can I do this?
(Sorry for my bad English: I'm German)
If I understand you correctly, the following scenario should help.
Create two jobs:
The first job builds your artifact and archives it; users will be able to download it, and the job is triggered by source modifications.
The second job is executed manually only (and requires the two plugins listed below); it copies the artifacts from the first job and then uploads them to the server.
You would need to install the following plugins:
Copy Artifact plugin, to enable Jenkins to copy artifacts from another job.
Publish Over SSH plugin, to enable Jenkins to upload binaries and run commands on remote servers.
If I understand correctly: when you push to your SCM repository, you want Jenkins to build your project (update your jars) and then send your final build to your server. Is that what you want?
If yes, I think this plugin would be interesting:
http://evgeny-goldin.com/wiki/Jenkins-maven-plugin#Supported_Plugins
You can manage Jenkins goals from your pom.xml, but I am not sure whether you can get the Jenkins build.
I'm using Hudson to build my application. I have several branches that come and go. Whenever there's a new branch, I have to set up the following builds for it:
a continuous build that runs after every change in SVN
a nightly build
a nightly site generation (I'm using Maven under the hood)
and a weekly integration build for some branches
Currently this means I need to copy four template configurations and set them up with the branch URL. I don't like this for two reasons:
It's redundant, so modifying something is error-prone and takes a lot of time.
I need four full checkouts of the product per branch on every build slave, plus four separate private Maven repositories, not to mention the built artifacts. This is a lot of wasted space.
What I'd like instead is to have one workspace and one configuration for all these builds. Is this possible with Hudson?
If you go with the assumption that your nightly build is the same as your continuous build, you can publish your continuous build artifacts into a folder/repository path that contains the date. That way, the second and subsequent builds of a day will overwrite the previous builds of that day.
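A sketch of that idea as a shell build step (the target path is a placeholder):
# Publish today's artifact into a dated folder; later builds of the same day overwrite it.
DATE=$(date +%Y-%m-%d)
mkdir -p "/var/artifacts/myapp/$DATE"
cp target/*.war "/var/artifacts/myapp/$DATE/"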
The site generation and the weekly integration build are more difficult, since you would need conditional build steps. (The idea is to run batch/shell scripts that determine whether it is time for the action (like the site build) and, if so, run it as part of that script.)
In my opinion the better solution is to write a batch/shell script (a Java program would work too) that copies your templates and replaces the SVN entry in all your new jobs. Then you have two steps for creating a new branch: first, run your script with the SVN path as the parameter, and second, tell Hudson to reload the configuration. The beauty of this solution is that you can change your templates when necessary without making changes to your scripts.
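A rough sketch of such a script; the jobs directory, the template job names and the TEMPLATE_SVN_URL placeholder inside their config.xml files are all assumptions you would adapt to your own setup:
#!/bin/sh
# Usage (hypothetical): new-branch-jobs.sh https://svn.example.com/repo/branches/my-branch my-branch
BRANCH_URL="$1"
BRANCH_NAME="$2"
JOBS_DIR=/var/lib/hudson/jobs   # adjust to your Hudson home
for TEMPLATE in template-continuous template-nightly template-site template-integration; do
    NEW_JOB="${TEMPLATE#template-}-${BRANCH_NAME}"
    mkdir -p "$JOBS_DIR/$NEW_JOB"
    # Point the copied job configuration at the new branch.
    sed "s|TEMPLATE_SVN_URL|$BRANCH_URL|g" "$JOBS_DIR/$TEMPLATE/config.xml" > "$JOBS_DIR/$NEW_JOB/config.xml"
done
# Second step, as described above: reload the configuration from disk in Hudson.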