Goal - track unit/functional/integration test coverage history on a regular basis (weekly, every release, etc.) over a relatively long period (say 6 months).
The JUnit plugin in Jenkins does this out of the box for every build, but the problem is that it does not let you track specific slices (milestones, releases, etc.), and history is only kept for a fixed number of builds. So we end up depending on the contents of the Jenkins workspace folder, which is not reliable.
Currently we are capturing metrics from Jenkins and manually moving them into a table in Confluence, so that we can use the raw data to build graphs and trends. As you can imagine, this approach requires a lot of manual effort and does not scale when we need to track different test types or multiple projects.
Is there any existing tool that provides the capability to track this history and show the trend?
Atlassian Clover can track historical coverage; however, keep in mind that you will still have to gather the history point files in some place:
https://confluence.atlassian.com/display/CLOVER/'Historical'+Report
https://confluence.atlassian.com/display/CLOVER/clover-historypoint
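If you end up gathering the history yourself, the manual Confluence step can also be scripted against Jenkins' REST API. A minimal sketch, assuming the JUnit plugin's testReport/api/json endpoint and a plain CSV file as the long-term archive (the job URL, file name and field handling are assumptions to adapt):

    import csv
    import json
    import urllib.request
    from datetime import date

    # Assumed job URL -- adjust to your Jenkins instance and job name.
    JOB_URL = "https://jenkins.example.com/job/my-project"
    ARCHIVE = "coverage-history.csv"

    def fetch_json(url):
        """Fetch and decode a Jenkins REST API response."""
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # Test result summary exposed by the JUnit plugin for the last completed build.
    report = fetch_json(f"{JOB_URL}/lastCompletedBuild/testReport/api/json")

    row = {
        "date": date.today().isoformat(),
        "passed": report.get("passCount", 0),
        "failed": report.get("failCount", 0),
        "skipped": report.get("skipCount", 0),
    }

    # Append one line per milestone/release/week, independent of Jenkins' own build retention.
    with open(ARCHIVE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:          # write a header only for a brand-new file
            writer.writeheader()
        writer.writerow(row)

Run on whatever cadence you care about (weekly, per release), this gives you a tool-independent history file you can graph directly instead of maintaining the Confluence table by hand.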
Related
I have jQAssistant scanning my project(s) and can query each of the projects. The documentation mentions a Team Server capability where all projects/builds are stored in a central Neo4j database.
I cannot find any documentation, though, on how this would be handled, or what happens with multiple builds. Nodes do not seem to be tagged with a build number, nor with the project name, so it appears to be one big lump.
Is there an easy way to tag everything on the way in with projectName and buildNumber, or am I missing something? I assume I could run a query once jQAssistant has finished and tag everything that is missing these properties, but then I lose parallelism and it seems too hacky.
This would also help with pruning data from old builds to avoid too much build-up.
Any help much appreciated,
There's a little misunderstanding: the idea behind the team instance is to have one Neo4j instance per project holding a single (i.e. the latest) snapshot of the graph (usually filled by a nightly CI run). So there's currently no notion of a build (identifiable by a date, number, etc.) in the data - but it could be an interesting feature.
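If you still want to experiment with the post-scan tagging described in the question, a rough sketch with the official Neo4j Python driver might look like this (the bolt URL, credentials and the projectName/buildNumber property names are purely illustrative, and it assumes the store jQAssistant filled is reachable as a running Neo4j server):

    from neo4j import GraphDatabase

    # Illustrative connection details -- point this at the Neo4j instance
    # that jQAssistant populated (it must be running in server mode).
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

    def tag_scan(project_name, build_number):
        """Stamp every node that has no build marker yet with project/build properties."""
        query = (
            "MATCH (n) "
            "WHERE n.buildNumber IS NULL "
            "SET n.projectName = $project, n.buildNumber = $build "
            "RETURN count(n) AS tagged"
        )
        with driver.session() as session:
            result = session.run(query, project=project_name, build=build_number)
            return result.single()["tagged"]

    if __name__ == "__main__":
        print(tag_scan("my-project", 42), "nodes tagged")
        driver.close()

Old builds could then be pruned by matching on buildNumber, though as noted above this runs against the intended single-snapshot-per-project model.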
I'm currently reading Continuous Delivery, and in the book the author says that it is crucial to build the binaries only once and then use the same binaries for every deployment. What I'm having trouble understanding is how this can be done in practice. For example, in order to run the mocked unit tests, would there be a special build? What I'm referring to is the scope tag in Maven.
If you look at the Maven lifecycle, you'll see that there is only one compile phase for your sources. Your tests are compiled and executed right after that source compilation. With mocked unit tests it is the same: two separate compilations for two objectives.
I think the author of your book is referring to a problem that can appear when you deploy automatically to several environments: it creates more environments to debug. It is essential to have only one final binary for all environments. If you have several binaries that you spread over your environments, you can be sure you will forget what the differences between them are, or which argument you gave to one and not to the other. For Continuous Delivery, it has to be the same binary everywhere.
Let's come back to Maven. Maven offers a lot of possibilities during its lifecycle. Sometimes you'll have to run several builds to cover everything (code coverage, for example). This can be useful in your continuous integration process and can be done through different build types (every hour for unit tests, every day for code coverage, quality analysis and integration tests).
But in the end, when you move to Continuous Delivery, you'll build one final binary, one unique binary copied over your environments.
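As a minimal sketch of that "build once, promote everywhere" idea: build the artifact a single time, then copy the identical file to each environment and verify it by checksum instead of rebuilding (the paths and environment names below are made up):

    import hashlib
    import shutil
    from pathlib import Path

    ARTIFACT = Path("target/myapp-1.0.0.jar")            # built exactly once by CI
    ENVIRONMENTS = [Path("deploy/test"), Path("deploy/staging"), Path("deploy/prod")]

    def sha256(path):
        """Checksum used to prove every environment got the very same binary."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    release_digest = sha256(ARTIFACT)

    for env in ENVIRONMENTS:
        env.mkdir(parents=True, exist_ok=True)
        target = env / ARTIFACT.name
        shutil.copy2(ARTIFACT, target)                   # promote, never rebuild
        assert sha256(target) == release_digest, f"{env} received a different binary"

    print(f"Promoted {ARTIFACT.name} ({release_digest[:12]}...) to {len(ENVIRONMENTS)} environments")

The point is that environment-specific behaviour comes from configuration supplied at deploy time, not from producing a different binary per environment.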
I have Cobertura integrated with my project, and it works as expected. However, I am not sure which of the Cobertura artifacts to check in to SVN.
The directory structure looks something like:
MainProjectDir
    cobertura.ser
    coberturaDir
        cssDir
        imagesDir
        instrumentedDir
        js
        reports
            LOTS OF html FILES
There is just over 1 MB in the coberturaDir, and checking in that directory seems troublesome for future commits.
My goal is to keep track of the coverage totals for the project and for each class.
Of the Cobertura artifacts, what should I be committing to SVN?
Thanks,
Sean
None of them.
You should be able to regenerate the Cobertura reports by pointing toward an older revision in your version control system. Since the reports are a derivative product of the version of software, there's no need to store them. This is the same principle that applies to generated documentation (javadoc, doxygen) and binary files produced from your source code (jars, exes, class files).
If you need history, I'd suggest saving the reports outside of version control, somewhere like a file server. You can then compress old report directories into ZIPs or tarballs so they remain available, but archived to reduce space and make finding the latest data easier. You can also take the measurements and metrics that matter most, put them into a single file such as a spreadsheet, and keep that on the file server as well.
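A small sketch of that archiving idea, assuming a Cobertura coverage.xml is available somewhere in the build output and a reachable file-server path (all paths below are invented):

    import csv
    import shutil
    import xml.etree.ElementTree as ET
    from datetime import date
    from pathlib import Path

    REPORT_DIR = Path("coberturaDir/reports")                # generated, never committed
    COVERAGE_XML = Path("coberturaDir/coverage.xml")          # location depends on how you run Cobertura
    ARCHIVE_ROOT = Path("/mnt/fileserver/coverage-history")   # e.g. a mounted network share
    SUMMARY_CSV = ARCHIVE_ROOT / "coverage-summary.csv"

    ARCHIVE_ROOT.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()

    # Keep the full HTML report, but compressed and outside version control.
    shutil.make_archive(str(ARCHIVE_ROOT / f"cobertura-{stamp}"), "zip", REPORT_DIR)

    # Pull the project totals from the XML report's root element
    # (<coverage line-rate="..." branch-rate="...">) and append them to the CSV.
    root = ET.parse(COVERAGE_XML).getroot()
    with SUMMARY_CSV.open("a", newline="") as f:
        csv.writer(f).writerow([stamp, root.get("line-rate"), root.get("branch-rate")])

That keeps the repository clean while still giving you the per-project totals over time.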
Like Thomas Owens said: None of them.
Ah, you say, "I want to be able to see the results and save them. I want to be able to link them back to the developers and see how my test coverage changes over time."
In that case, use a Continuous Integration system like Jenkins. Jenkins can examine your XML-based Cobertura coverage reports and display them as graphs. It can save these graphs with each build. Each build will show you who made the commit that triggered the build and the changes in coverage since the last build. You can even play a CI Game and award points to developers who create unit tests that expand your coverage. (First prize is a Cadillac Eldorado. Second prize is a set of steak knives, and third prize is you're fired.)
Jenkins is pretty simple to set up and get working. You'll need to download the Cobertura plugin, which is pretty easy to do. It'll do what you want without you having to check in your Cobertura files.
I am currently working on a rather large project with a team distributed across the United States. Developers regularly commit code to the source repository. We have the following application builds (all are managed by an application; there are no manual processes):
Continuous Integration: a monitor checks whether the code repository has been updated; if so, it does a build and runs our unit test suite. On errors, the team receives email notifications.
Daily Build: Developers use this build to verify their bug fixes or new code on an actual application server, and if "things" succeed, the developer may resolve the task.
Weekly Build: Testers verify the resolved issue queue on this build. It is a more stable testing environment.
Current Release build: used for demoing and as an open testing platform for potential new users.
Each build refreshes the database associated with it. This cleans out the data and verifies that any database changes that go along with the new code are pulled in. One concern I hear from our testers is that we need to pre-populate the weekly build database with some expected testing data, as opposed to the more generic data that developers work with. This seems like a legitimate concern/need and is something we are working on.
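For concreteness, the kind of pre-population the testers are asking for would be something like loading a known fixture set right after the refresh; this is purely illustrative (the table, columns and file names are invented, and SQLite stands in for whatever database the builds actually use):

    import csv
    import sqlite3

    # Illustrative only: seed the freshly refreshed weekly database with the
    # well-known records the testers expect to find.
    def seed_weekly_db(db_path="weekly_build.db", fixture_csv="tester_fixtures.csv"):
        conn = sqlite3.connect(db_path)
        with conn, open(fixture_csv, newline="") as f:
            for row in csv.DictReader(f):
                conn.execute(
                    "INSERT INTO customers (name, region, status) VALUES (?, ?, ?)",
                    (row["name"], row["region"], row["status"]),
                )
        conn.close()

    if __name__ == "__main__":
        seed_weekly_db()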
I am throwing out what we are doing to see if the SO community sees any gaps or has any concerns. Things seem to be working well, but it FEELS like it could be better. Your thoughts?
An additional step that is often followed is that once the release build passes its tests (say a smoke test), it is qualified as a good build (say a golden build), and you use some kind of labeling mechanism to label all the artefacts (code, install scripts, makefiles, installables, etc.) that went into the creation of the golden image. The golden build may or may not become a release candidate later.
You are probably already doing this; since you didn't mention it, I added what I have observed.
This is pretty much the way we do it.
The testers' DB itself is only reset on demand. If we refreshed it automatically every week, then
we would lose the references to bug symptoms; if a bug is found but a developer only looks at it a few weeks later (or simply after the weekend), then all evidence of that bug may have disappeared
testers might be in the middle of a big test case (taking more than one day, for instance)
We have tons of unit tests which run against a DB that is refreshed (automatically, of course) each time an integration build is executed.
regards,
Stijn
I think you have a good, comprehensive process, as long as it fits in with when your customers want to see updates. One possible gap I can see is that it looks like you wouldn't be able to get a critical customer bug fix into production in less than a week, since your test builds are weekly and then you'd need time for the testers to verify the fix.
If you fancy thinking about things a different way, have a look at this article on continuous deployment - it can be a bit hard to accept the concept at first, but it definitely has some potential.
How does your team handle Builds?
We use Cruise Control, but (due to lack of knowledge) we are facing some problems:
Code freeze in SVN
Build management
Specifically, how do you make a particular release available when code is constantly being checked in?
Generally, can you discuss what best practices you use in release management?
I'm positively astonished that this isn't a duplicate, but I can't find another one.
Okay, here's the deal. They are two separate, but related questions.
For build management, the essential point is that you should have an automatic, repeatable build that rebuilds the entire collection of software from scratch and goes all the way to your deliverable configuration. In other words, you should effectively build a release candidate every time. Many projects don't really do this, but I've seen it burn people (read "been burned by it") too many times.
Continuous integration says that this build process should be repeated every time there is a significant change event to the code (like a check-in), if at all possible. I've done several projects in which this turned into a nightly build because the code base was large enough that it took several hours to build, but the ideal is to set up your build process so that some automatic mechanism - like an Ant script or makefile - only rebuilds the pieces affected by a change.
You handle the issue of providing a specific release by preserving, in some fashion, the exact configuration of all affected artifacts for each build, so you can apply your repeatable build process to the exact configuration you had. (That's why it's called "configuration management.") The usual version control tools, like Git or Subversion, provide ways to identify and name configurations so they can be recovered; in SVN, for example, you might create a tag for a particular build. You simply need to keep a little bit of metadata around so you know which configuration you used.
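As a sketch of keeping that little bit of metadata around, something like the following could run at the end of every build and drop a small file next to the deliverable (using Subversion's svnversion tool; the build number and file name are placeholders):

    import json
    import subprocess
    from datetime import datetime, timezone

    def write_build_info(build_number, output="build-info.json"):
        """Record exactly which configuration this build came from."""
        # Working-copy revision as reported by Subversion's svnversion tool.
        revision = subprocess.run(
            ["svnversion", "."], capture_output=True, text=True, check=True
        ).stdout.strip()

        info = {
            "build_number": build_number,
            "svn_revision": revision,
            "built_at": datetime.now(timezone.utc).isoformat(),
        }
        with open(output, "w") as f:
            json.dump(info, f, indent=2)
        return info

    if __name__ == "__main__":
        print(write_build_info(build_number=123))

Ship (or archive) that file with the binaries, and reproducing any release is just a matter of checking out the recorded revision and re-running the same build.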
You might want to read one of the "Pragmatic Version Control" books, and of course the stuff on CI and Cruise Control on Martin Fowler's site is essential.
Look at continuous integration: best practices, from Martin Fowler.
Well, I have managed to find a related thread that I participated in a year ago. You might find it useful as well.
And here is how we do it.
[Edited]
We are using Cruise Control as our integration tool. We just deal with the trunk, which is the main Subversion repository in our case. We seldom pull out a new branch for new story cards - only when there is a chance of complex conflicts. Normally, we pull out a branch for a version release, create the build from that, and deliver it to our test team. Meanwhile we continue working on trunk and wait for feedback from the test team. Once everything is tested, we create a tag from the branch, which is logically immutable in our case. So we can release any version at any time to any client if needed. If there are bugs in the release, we don't create the tag yet; we fix things in the branch. After everything is fixed and approved by the test team, we merge the changes back to trunk and create a new tag from the branch, specific to that release.
So the idea is that our branches and tags do not really participate in continuous integration directly. Merging branch code back to the trunk automatically makes that code part of CI (Continuous Integration). We normally do only bug fixes for a specific release in branches, so they don't really participate in the CI process, I believe. On the contrary, if we do start working on new story cards in a branch for some reason, we don't keep that branch apart for too long; we try to merge it back to trunk as soon as possible.
Precisely (a rough SVN sketch follows this list):
We create branches manually when we plan the next release
We create a branch for the release and fix bugs in that branch if needed
Once everything is good, we make a tag from that branch, which is logically immutable
Finally, we merge the branch back to trunk if it has some fixes/modifications
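Roughly, the SVN side of that workflow could be scripted like this (the repository URL and naming scheme are made up; svn copy between two URLs is what creates the cheap server-side branch or tag):

    import subprocess

    REPO = "https://svn.example.com/repos/myproject"  # made-up repository URL

    def svn_copy(src, dst, message):
        """Server-side copy -- how Subversion creates branches and tags."""
        subprocess.run(["svn", "copy", src, dst, "-m", message], check=True)

    version = "1.2"

    # 1. Branch from trunk when the release is planned.
    svn_copy(f"{REPO}/trunk", f"{REPO}/branches/release-{version}",
             f"Create release branch {version}")

    # 2. Bug fixes are committed to the branch; once the test team approves,
    #    cut an immutable tag from the branch.
    svn_copy(f"{REPO}/branches/release-{version}", f"{REPO}/tags/release-{version}.0",
             f"Tag approved release {version}.0")

    # 3. Merging the branch back to trunk happens in a working copy
    #    (svn merge + commit), so it is left out of this sketch.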
Release Management goes well beyond continuous integration.
In your case, you should use Cruise Control to automatically make a tag, which allows developers to go on coding while your incremental build takes place.
If your build is incremental, that means you can trigger it every x minutes (and not on every commit, because if commits are too frequent and your build is too long, it may not have time to finish before the next build tries to start). The 'x' should be tailored to be longer than a compilation/unit-test cycle.
A continuous integration build should include the automatic launch of unit tests as well.
Beyond that, a full release management process will involve:
a series of deployments on homologation servers
a full cycle of homologation / UAT (User Acceptance Test)
non-regression tests
performance / stress tests
pre-production (and parallel run tests)
before finally releasing into production.
Again "release management" is much more complex than just "continuous integration" ;)
Long story short: create a branch copied from trunk, and check out and build your release from that branch on the build server.
However, getting to that point in a completely automated fashion using cc.net is not an easy task. I could go into detail about our build process if you like, but it's probably too fine-grained for this discussion.
I agree with Charlie about having an automatic, repeatable build from scratch. But we don't do everything for the "Continuous" build, only for the Nightly, Beta, Weekly or Omega (GA/RTM/Gold) release builds, simply because some things, like generating documentation, can take a long time, and for the continuous build you want to give developers rapid feedback on the build result.
I totally agree with preserving the exact configuration, which is why branching a release or tagging is a must. If you have to maintain a release, i.e. you can't just release another copy of trunk, then a branch-on-release approach is the way to go, but you will need to get comfortable with merging.
You can use Team Foundation Server 2008 and Microsoft Visual Studio Team System to handle your source control, branching, and releases.