We are using Spring Cloud Contract testing in a few projects because it has nice features and all our projects use Spring. However, these tests are becoming quite unstable and the devs are simply disabling them because they break the build even when there hasn't been any change to the interfaces.
We have the tests configured in online mode for Jenkins, so they download the stubs from Artifactory. However, quite often (at least twice per month) the tests fail because the stubs are already present in the repository. We don't have access to the remote repository to delete the stubs manually, so we change the configuration to run them in offline mode. This works until the version of the provider changes: the tests cannot find the stubs for the new version locally and fail again, and we switch them back to online mode.
As you may imagine this is not ideal, and we are also worried because the local stubs may be an outdated copy of the current version, so we will not detect when the provider introduces breaking changes.
Is there a better way to configure the tests? It would be great if we could configure them so they always download the stubs and overwrite the local ones.
Duplicate of Spring Cloud Contract remote artifact download clashes with local, how to make it temporary?
Let me copy the answer here too:
This problem might (it doesn't always happen) occur in CI systems when you have a shared .m2. In the next releases (1.2.x and 2.0.0), thanks to the closing of this issue https://github.com/spring-cloud/spring-cloud-contract/issues/545, you'll be able to pass the stubrunner.snapshot-check-skip system property or the STUBRUNNER_SNAPSHOT_CHECK_SKIP environment variable, or set the plugin property (for 2.0.0 only), to disable the check.
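For example, once you are on a version containing that fix, a minimal sketch of passing the system property to the test JVM through the Surefire plugin (the Surefire wiring is an assumption about your build; on the Jenkins agent you could equally export the STUBRUNNER_SNAPSHOT_CHECK_SKIP environment variable):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <systemPropertyVariables>
          <!-- skip the "snapshot stub already present" check described above -->
          <stubrunner.snapshot-check-skip>true</stubrunner.snapshot-check-skip>
        </systemPropertyVariables>
      </configuration>
    </plugin>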
Related
I manage a large proprietary system that's composed of about a dozen services in Java. We have a core set of Java libs that these all share, and all the components/apps are built using Maven. Outside of the core SDK jars, though, each app has its own unique set of dependencies. I can't figure out the best approach to both building and deploying inside Docker. Ideally I want the entire lifecycle in Docker, using a multi-stage build approach. But I can't see how to optimize this with the huge number of dependencies.
It looks like I can take two approaches.
Build as we have before, using Maven and a common cache on the CI server (Jenkins) so that dependencies are fetched once, cached, and accessible to all the apps. Then have a Dockerfile for each app that just copies the product jar and its dependencies (or a fat jar) into the container and sets it up to execute. The downside of this approach is that the build itself could differ between developers and the CI server. Potentially use a local Maven repository manager like Nexus just to avoid pulling deps from the internet every time? But that still doesn't solve the problem that a dev build won't necessarily match the CI build environment.
Use a multi-stage Dockerfile for each project. I've tried this, and it does work, and I managed to get the Maven dependency layer to cache so that it doesn't fetch too often. Unfortunately that intermediate build layer was hitting 1-2 GB per application, and I can't remove the 'dangling' intermediates from the daemon or all the caching is blown away. It also means there's a tremendous amount of duplication in the jars that have to be downloaded for each application if something changes in the POMs (i.e. they all use JUnit and Log4j and many other shared libraries).
Is there a way to solve this optimally that I'm not seeing? All the blogs I've found basically focus on the two approaches above (with some that focus on running Maven itself in a container, which really doesn't solve anything for me). I'll probably need to go with option 1 if there aren't any other good solutions.
I've checked around on Stack Overflow and blogs, and everything I can find seems to assume that you're really just building a single app and not a suite of them, where it becomes important not to repeat the dependency downloads.
I think it is OK to use the .m2/repository filesystem cache as long as you set the --update-snapshots option in your Maven build. It scales better, because you cache each .jar only once per build environment and not once per application. Additionally, a change in a single dependency does not invalidate the entire cache, which would be the case if you used Docker layer caching.
Unfortunately that cannot be combined well with multi-stage builds at the moment, but you are not the only one asking for it.
This issue requests adding a --volume option to the docker build command. This one asks for allowing instructions like this in the Dockerfile: RUN --mount=m2repo=/var/mvn/repo mvn install.
Both features would allow you to use the local maven filesystem cache during your multistage build.
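To illustrate, here is a sketch of what the second proposal might look like in a multi-stage Dockerfile. The RUN --mount syntax is the one proposed in the linked issue and is not available yet, and the image tags, paths and jar name are placeholders:

    FROM maven:3.5-jdk-8 AS build
    WORKDIR /src
    COPY pom.xml .
    COPY src ./src
    # proposed instruction from the issue: mount a shared repository into the build
    # and point Maven's local repository at it so dependencies are not re-downloaded
    RUN --mount=m2repo=/var/mvn/repo mvn -Dmaven.repo.local=/var/mvn/repo clean package

    FROM openjdk:8-jre
    COPY --from=build /src/target/app.jar /app/app.jar
    ENTRYPOINT ["java", "-jar", "/app/app.jar"]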
For the moment I would advise keeping your option 1 as the solution, unless you are facing many issues that are due to differing build environments.
We are generating a Web Service for deployment to Azure. This includes four pipeline stages for Dev, Test, Full UAT and production. On initial deployment to Dev I want to perform a set of Selenium smoke tests. Then when deployed to UAT, a full set of automated tests should be triggered.
Our test team is happier using Selenium through its Java route. After a couple of days it became clear that the process was to set up a UI agent (really important for anyone who hasn't done this yet, as ChromeDriver does run without a session, but will just hang, making you think it must be close to running), assign a SELENIUM_TEST agent capability and set this flag as a demand on the build (this helps it find the correct agent), and ensure that you set the required Java and Maven variables in the VSTS settings (apart from the path) rather than in the local machine environment. Finally, use the clean, update and -X parameters to force the environment to be configured as part of the test process.
Now I have the problem of how to trigger these tests from the deployment pipeline. I have searched articles on a large number of sites and can't find anything on how this can be achieved using the Maven/Java/Selenium combination.
Can anyone help?
To build and deploy Java Selenium tests in VSTS, you can refer to the document Testing Java applications with VSTS for detailed steps.
Besides, you can also refer to the blog Continuous Testing of a Java Web App in VSTS using Selenium, which covers building and deploying a Java web app in VSTS.
I am not posting this as a full answer, but I wanted to respond to the kind input from Michele and Marina. I am not sure there isn't a better way of approaching this, but with the assistance of both I was able to at least get closer to an answer. I did prepare images, but apparently you need reputation to post them.
So this is what I actually ended up doing.
Step 1 – the MVC web app was generated and appropriate deployment slots set up to receive the web build artefacts.
Step 2 – Created a CI process purely to generate code I could deploy into the WebApp CD pipeline.
Step 3 – Generated an empty “Smoke Test” environment in the WebApp Deployment pipeline, and added the new Artefact from step 1 into this.
Configured the Smoke Tests item
Configured it to only receive the _AutoTest-CI artefact
Set it to use the “default” pipeline
Added the “Demand” that specifies the machine configured for Selenium tests.
Added the Maven task, and pointed it at the Maven POM
At this stage it successfully ran through the configured tests. The Maven step seems to think it can generate test results, but the output gives warnings that no test results were generated. It produces the output and reports a success or failure, so this is a semi-success. The missing last piece is reporting the full test results, which I have yet to achieve.
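One thing worth checking, as an assumption about what that warning means: the VSTS Maven task can publish JUnit-style XML files that Surefire writes to target/surefire-reports, so the tests need to actually run through Surefire and the task's test results pattern has to match that directory. A minimal sketch pinning the plugin in the test project's POM (the version is illustrative):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.22.0</version>
      <!-- Surefire writes JUnit-style XML to target/surefire-reports by default;
           the Maven task's publish option picks up TEST-*.xml files from there -->
    </plugin>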
You can trigger the tasks inside an environment by configuring the triggers in the Release Management UI.
If the trigger conditions are met, the process will start automatically. Inside your process you can run whatever tasks you need.
Reference
Microsoft VSTS Docs
I'm getting started with SonarQube for JSF page static analysis[1] in Maven. I'm only really interested in using it from Maven since I don't like the idea of introducing another build command.
After going through Analyzing the source code and the specific Maven guide, I have the impression that the plugin can only be used after downloading, installing/unpacking and starting a SonarQube instance at localhost and specifying the connection information in the plugin declaration in the POM. The plugin's configuration parameters confirm that.
While this workflow might have advantages, it is painful to use on CI services, and the need to start a service manually in order to be able to build seems not very user friendly (given that other development tools like Selenium or Arquillian pull entire browsers, drivers and servers in the background without a single line of configuration). Am I missing a separate plugin or configuration which manages an embedded or otherwise temporary instance to perform the analysis with a single plugin declaration?
[1] I'm aware that there are other tools based on XML validation which could do the job, but setting up a much more powerful tool like SonarQube seems to be a more flexible approach which will probably pay off.
You don't have to install SonarQube on your build server, but a running server is necessary to execute the analysis (results will be pushed to it). This means you need a working server somewhere, and then you have to set the required parameters:
sonar.host.url (http://localhost:9000 is a default value)
sonar.login and sonar.password (if your SonarQube server is secured)
See all Analysis Parameters.
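For example, the host can be set once in the project's pom.xml and is picked up when running the sonar:sonar goal; the URL below is a placeholder, and credentials are better passed on the command line or from the CI server's secret store than committed to the POM:

    <properties>
      <!-- used by the SonarQube Maven plugin when running mvn sonar:sonar -->
      <sonar.host.url>http://your-sonarqube-server:9000</sonar.host.url>
    </properties>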
I am using Maven and Jenkins to manage deployment of my web application. Essentially:
When a deploy is triggered, the CI box checks the code out of version control.
If the code passes the tests, it triggers the Maven release plugin to build a versioned WAR and puts it in our local Nexus repo.
In the same build, it pulls the artifact from Nexus and copies it into Tomcat, triggering Tomcat to re-explode the WAR.
This works fine, and using this technique I can use Maven to replace the appropriate environment-specific configurations, so long as they are within the project. However, my sysadmin considers it a security risk to have production credentials in VC. Instead, we would prefer to store the production credentials on the production machines that will be using them. I can imagine writing a simple bash script to ssh into the service box and soft-link the conf file onto the classpath, but this seems like a pretty inelegant solution.
Is this reasonable? Is there a better/more standard way of achieving this? Is it actually a security risk to hold production credentials in VC?
You have your conf file on your production server at some location. This location could be a property too.
If there is no specific reason not to load it as a file from disk rather than as a resource from the classpath, you could create a separate Maven profile, production, that filters the location, replacing it with the file path on your production server.
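A minimal sketch of that idea, assuming resource filtering is enabled for the file that references the location; the property name and path are placeholders:

    <profiles>
      <profile>
        <id>production</id>
        <properties>
          <!-- the filtered resource refers to ${config.location} -->
          <config.location>/etc/myapp/application.properties</config.location>
        </properties>
      </profile>
    </profiles>

The application then loads its configuration from that path on disk, so the production credentials never have to leave the production server.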
Yes, it's a security risk to have production credentials in version control. It frees your developers to do pretty much whatever they want to production. Regulations like HIPAA in medicine, PCI for e-commerce, or SOX for public US companies would frown on that. Your sysadmin is reasonable to do so as well.
The basic strategy is to externalize this configuration and have the deployment process roll in the environment specific data.
Having that information on the production server itself is an OK, but not great, solution. It's a good fit when you have just one target server. Once you have a bunch, there's a maintenance headache. Whenever environment-specific data changes, it has to be updated on every server. You also need to be sure to keep only environment-specific information in there, or else changes developers make in early environments may not be communicated to the sysadmin to change at deployment time, leading to production deployment errors.
This is where, I think, Hudson lets you down from a continuous delivery perspective. Some of the commercial tools, including my company's uBuild/AnthillPro, formally track different environments and securely let the sysadmin configure the production credentials while developers configure the dev credentials within the tool. Likewise, application release automation tools like our uDeploy, which can pull builds out of Hudson and deploy them, have this kind of per-environment configuration baked in.
In these scenarios, most of the property / xml files have generic config, and the deployment engine substitutes env. specific data in as it deploys.
Adding a new tool for just this problem is probably overkill, but the basic strategy of externalizing environment-specific info into a central place where it can be looked up at deployment time could work. Since you're a Maven shop, you might consider stashing some of this in your Maven repo in an area locked down for access by operations only. Then pull the latest config for the appropriate environment at deployment time.
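As a rough sketch of that "pull config from the repo at deployment time" idea, the Maven dependency plugin can copy a deployed configuration artifact into the workspace; all coordinates below are hypothetical:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>fetch-prod-config</id>
          <phase>package</phase>
          <goals>
            <goal>copy</goal>
          </goals>
          <configuration>
            <artifactItems>
              <artifactItem>
                <!-- hypothetical coordinates of a zipped, operations-managed config bundle -->
                <groupId>com.example.config</groupId>
                <artifactId>myapp-config-prod</artifactId>
                <version>${config.version}</version>
                <type>zip</type>
                <outputDirectory>${project.build.directory}/config</outputDirectory>
              </artifactItem>
            </artifactItems>
          </configuration>
        </execution>
      </executions>
    </plugin>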
You have a range of options here. Consider how things vary by environment; what varies by server; what needs to be secured, what changes with time on the dev side, etc. And please, please, please sit down with your sys-admin and work out a solution together. You each have insight the other doesn't and the end solution will be better for the cooperation.
Background. My org uses Maven, Bamboo and Artifactory to support a continuous integration process. We rely on Maven's SNAPSHOT qualifier to help manage storage in Artifactory (rotate out old SNAPSHOT builds) and also to help keep cross-team integrations current (Maven checks for updates to SNAPSHOT dependencies automatically on each build).
Problem. One of the challenges we're having is around correctly promoting builds from environment to environment while continuing to use SNAPSHOT. Say that a tester deploys version 1.8.2-SNAPSHOT to a functional test environment, and it's at rev 1400 in Subversion. Let's say also that it passes functional test. By the time a tester decides to pull 1.8.2-SNAPSHOT from Artifactory into the performance testing environment, a developer could have committed a change to Subversion, so the actual binary in Artifactory is at a different rev. How do we ensure that the rev doesn't change out from under us when using SNAPSHOT builds?
Constraints. We obviously don't want to deploy different builds unknowingly. We also don't want to rebuild from source as we want to test the exact binary in performance test that we tested in functional test.
Approaches we've considered. The thought is that we want to stamp the versions with a fourth component, like 1.8.2.1400, where the fourth component is a Subversion rev. (As a side question, is there a Maven plugin or something else that does that automatically?) But if we do that, then essentially we lose the SNAPSHOT feature since Maven and Artifactory think that these are different versions.
We are using Scrum, so we deploy to the test environments very early (like day two or so). I don't think it makes sense to remove the SNAPSHOT qualifier that early in the dev cycle because we lose the SNAPSHOT benefits again.
Would appreciate knowing how other orgs solve this issue.
Just to circle back on this one, I wanted to share what we are doing.
Basically we deploy snapshot builds like 1.8.2-SNAPSHOT into the development environment. No other teams need to use these builds, so it is fine to leave -SNAPSHOT on them.
But any build that we deploy to a test environment (e.g. functional test, system test) or to production must include the revision; e.g., 1.8.2.1400. We call these "quads". The reason for insisting on quads in test is that we can attach issues (features, bugfixes, etc.) to specific revisions so the testers know what to test. For production it's really just because we want to deploy exactly the same artifact that we tested, so that means we're deploying a quad.
Anyway hope that information is useful to somebody.
if you enable "uniqueVersion" for you snapshot builds, every snapshot deployed will have a unique id. you can use that to ensure you are deploying the correctly promote builds across environments.
And, as a side note, you can use the buildnumber-maven-plugin to add Subversion build numbers to artifacts.
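A minimal sketch of that plugin's wiring; it reads the revision from the <scm> section of the POM, and the version shown is just illustrative:

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>buildnumber-maven-plugin</artifactId>
      <version>1.4</version>
      <executions>
        <execution>
          <phase>validate</phase>
          <goals>
            <goal>create</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

After the create goal runs, the Subversion revision is available as ${buildNumber} and can be used, for example, in the final name or a manifest entry to produce the "quad" style versions discussed above.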
Rather than embed the build number or VCS revision in the artifact's version, we embed the CI build number in the META-INF/MANIFEST.MF file.
See for instance Using Hudson environment variables to identify your builds. Although the article is written for Jenkins/Hudson, I believe it is trivial to port to Bamboo.
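For reference, a sketch of how such a manifest entry can be added with the Maven JAR plugin; the entry name is only a convention, and BUILD_NUMBER is the Jenkins/Hudson variable, so substitute Bamboo's build-number variable in your case:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <archive>
          <manifestEntries>
            <!-- the CI server injects BUILD_NUMBER into the build environment -->
            <Implementation-Build>${env.BUILD_NUMBER}</Implementation-Build>
          </manifestEntries>
        </archive>
      </configuration>
    </plugin>

For a WAR project the same <archive> block goes on the maven-war-plugin instead.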