I am currently working on a rather large project with a team distributed across the United States. Developers regularly commit code to the source repository. We have the following application builds (all managed by an application, no manual processes):
Continuous Integration: a monitor checks whether the code repository has been updated; if so, it does a build and runs our unit test suite. On errors, the team receives email notifications.
Daily Build: Developers use this build to verify their bug fixes or new code on an actual application server, and if "things" succeed, the developer may resolve the task.
Weekly Build: Testers verify the resolved issue queue on this build. It is a more stable testing environment.
Current Release build: used for demoing and an open testing platform for potential new users.
Each build refreshes the database associated with it. This cleans out data and verifies that any database changes that go along with the new code are pulled in. One concern I hear from our testers is that we need to pre-populate the weekly build database with some expected testing data, as opposed to the more generic data that developers work with. This seems like a legitimate concern/need and is something we are working on.
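One way to handle that concern is to generate the seed data from code, so the expected testing rows can be replayed as the last step of every weekly refresh. A minimal sketch (the table and column names here are hypothetical, and the quoting is deliberately naive):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

/** Sketch: turn seed records into INSERT statements that the weekly-build
 *  database refresh can run after the schema is rebuilt. */
public class SeedData {
    static String insertFor(String table, Map<String, String> row) {
        String cols = String.join(", ", row.keySet());
        String vals = row.values().stream()
                .map(v -> "'" + v.replace("'", "''") + "'")  // naive quoting, for the sketch only
                .collect(Collectors.joining(", "));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + vals + ");";
    }

    public static void main(String[] args) {
        // Hypothetical "expected testing data" the testers rely on
        Map<String, String> tester = new LinkedHashMap<>();
        tester.put("username", "qa_smith");
        tester.put("role", "TESTER");
        System.out.println(insertFor("users", tester));
    }
}
```

Keeping the seed records in version control alongside the schema scripts means the refresh and the test data can never drift apart.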
I am tossing what we are doing out to see if the SO community sees any gaps in what we are doing, or has any concerns. Things seem to be working well, but it FEELS like it could be better. Your thoughts?
An additional step we follow is that once the release build passes tests (say, a smoke test), it is qualified as a good build (a "golden build"), and you use some kind of labeling mechanism to label all the artefacts (code, install scripts, makefiles, installables, etc.) that went into the creation of the golden image. The golden build may or may not become a release candidate later.
You are probably already doing this; since you didn't mention it, I added what I have observed.
This is pretty much the way we do it.
The DB of the testers themselves is only reset on demand. If we refreshed it automatically every week, then:
we would lose the references to bug symptoms; if a bug is found but a developer only looks at it a few weeks later (or simply after the weekend), then all evidence of that bug may have disappeared
testers might be in the middle of a big test case (taking more than one day, for instance)
We have tons of unit tests which run against a DB that is refreshed (automatically, of course) each time an integration build is executed.
regards,
Stijn
I think you have a good, comprehensive process, as long as it fits in with when your customers want to see updates. One possible gap I can see is that it looks like you wouldn't be able to get a critical customer bug fix into production in less than a week, since your test builds are weekly and then you'd need time for the testers to verify the fix.
If you fancy thinking about things a different way, have a look at this article on continuous deployment - it can be a bit hard to accept the concept at first, but it definitely has some potential.
Related
Goal: track unit/functional/integration test coverage history on a regular basis (weekly, every release, etc.) over a relatively long period (6 months).
The JUnit plugin in Jenkins does that out of the box for every build; the problem is that it does not allow tracking specific slices (milestones, releases, etc.), and history is kept only for a fixed number of builds. So we are dependent on the Jenkins workspace folder content, which is not reliable.
Currently, we are capturing metrics from Jenkins and manually moving them into a table in Confluence so that we can use the raw data to build graphs and trends. As you can imagine, this approach requires a lot of manual effort and does not scale to cases where we need to track different test types or multiple projects.
Is there any existing tool that provides the capability to track history and show the trend?
Atlassian Clover can track historical coverage; however, keep in mind that you will still have to gather the history-point files in some place:
https://confluence.atlassian.com/display/CLOVER/'Historical'+Report
https://confluence.atlassian.com/display/CLOVER/clover-historypoint
We have a Java web app and a number of developers working on it. Every developer works on his/her own feature in its own branch. When a feature is ready, we want to review it and visually test it (after all unit and integration tests have passed, of course). We want to automate this deployment process. Ideally, we would like our developers to click just one button somewhere to have the application deployed to http://example.com/staging/branches/foo (where branches/foo is the developer's path in the SVN repository).
Then, the deployment is reviewed (by project sponsors mostly), merged into /trunk, and removed from the staging server.
I think that I'm not the first one who needs to implement such a scenario. What are the tools and technologies that may help me?
Typically, I would use a stage environment to test the "trunk" (i.e., all the individual branches for a release merged together). Several reasons for this:
Stakeholders and sponsors usually don't have time to test individual branches. They want to test the entire release. It also tends to get very confusing for people outside the immediate team to keep track of different, changing URLs and to understand why feature X works on one URL and not the other. Always keep it simple for your sponsors.
It tends to become very messy and costly to maintain more than one instance of third-party dependencies (databases, service providers etc) for proper stage testing. Bear in mind that you want to maintain realistic test-data at all times.
Until you merge all individual branches together for a release, there will be collisions and integration bugs that will be missed. Assume that automated integration tests won't be perfect.
All that being said, there are lots of good tools for automatic build/deploy out there. Not knowing anything about your build setup and deployment environment, a standard setup could consist of a build server, Maven, and Tomcat. The build server would execute the build and deploy the resulting application to the test server. If you are using Maven and Tomcat, there is a plugin available for this task (http://mojo.codehaus.org/tomcat-maven-plugin/introduction.html). There are a number of good build servers out there as well, with good support for Maven. TeamCity is popular, as is Hudson CI.
Basically, you can use Hudson/Jenkins.
There are ways to manage multiple deployments on one machine with some plugins, as stated in the following post on Jenkins Users; you'll just have to manage those multiple deployments to be the branches the developers are working on.
As @pap said, Hudson and other CI software can build, test (if you have any tests), and deploy web apps; you'll just have to configure this procedure. Hope the link is helpful.
I have a couple of design/architectural questions that always come up in our shop. I say "our", as opposed to "me" personally. Some of the decisions were made when J2EE was first introduced, so there are some bad design choices and some good ones.
In a web environment, how do you work with filters? When should you use J2EE filters and when shouldn't you? Is it wise to have many filters, especially if they contain too much logic? For example, there is a lot of logic in our authentication process: if you are this user, go to this site; if not, go to another one. It is difficult to debug because one URL path could end up rendering different target pages.
Property resource bundle files for replacement values in JSP files: the consensus in the Java community seems to be to use bundle files that contain the labels and titles used when a JSP is rendered. I can see the benefit if you are developing for many different languages and switching label values based on locale. But what if you aren't working with multiple languages? Should every piece of static text in a JSP file or other template really have to be put into a property file? Once again, we run into debugging issues where text may not show up due to misspelled property keys or corrupt property files. Also, we have a process where graphic designers send us HTML templates and we then convert them to JSP. It seems more confusing to then remove the static text, add a key, add the key/value pair to a property file, and so on.
E.g., a labels.properties file may contain the Username: label. The static text is replaced by a key lookup, and the label is rendered to the user.
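For illustration, here is a tiny sketch of how such a bundle lookup behaves, using an inline stand-in for labels.properties (the key name is made up; in a real webapp the bundle would live on the classpath and be selected per Locale, e.g. via <fmt:message> in the JSP):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class LabelDemo {
    /** Look up a key in an inline stand-in for a labels.properties file. */
    static String label(String properties, String key) {
        try {
            ResourceBundle bundle = new PropertyResourceBundle(new StringReader(properties));
            return bundle.getString(key);
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // The JSP references the key; the bundle supplies the visible text.
        System.out.println(label("login.username=Username:", "login.username"));
    }
}
```

A misspelled key throws MissingResourceException at render time, which is exactly the debugging pain described above.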
Unit testing for all J2EE development: we don't encourage unit testing. Some people do it, but I have never worked at a shop that uses extensive unit testing. One place did, and then when crunch time hit, we stopped doing unit testing; after a while the unit tests were useless and wouldn't even compile. Most of the development I have done has been with servers, web application development, and database connectivity. I see where unit testing can be cumbersome, because you need an environment to unit test against. I think unit test manifestos encourage developers not to actually connect to external sources. But it seems like a major portion of the testing should be connecting to a database and running all of the code, not just a particular unit. So that is my question: for all types of development (like the CRUD-oriented J2EE development you see), should we write unit tests in all cases? And if we don't write unit tests, what other developer testing mechanisms could we use?
Edited: Here are some good resources on some of these topics.
http://www.ibm.com/developerworks/java/library/j-diag1105.html
Redirection is a simpler way to handle different pages depending on role. The filter could be used simply for authentication, to get the User object and any associated Roles into the session.
As James Black said, if you had a central controller you could obviate the need to put this logic in the filters. To do this you'd map the central controller to all urls (or all non-static urls). Then the filter passes a User and Roles to the central controller which decides where to send the user. If the user tries to access a URL he doesn't have permission for, this controller can decide what to do about it.
Most major MVC web frameworks follow this pattern, so just check them out for a better understanding of this.
I agree with James here, too - you don't have to move everything there, but it can make things simpler in the future. Personally, I think you often have to trade this one off in order to work efficiently with designers. I've often put the infrastructure and logic in to make it work, but then littered my templates with static text while working with designers. Finally, I went back and pulled all the static text out into the external files. Sure enough, I found some spelling mistakes that way!
Testing - this is the big one. In my experience, a highly disciplined test-first approach can eliminate 90% of the stress in developing these apps. But unit tests are not quite enough.
I use three kinds of tests, as indicated by the Agile community:
acceptance/functional tests - the customer defines these with each requirement, and we don't ship until they all pass (look at FitNesse, Selenium, Mercury)
integration tests - ensure that the logic is correct and that issues don't come up across tiers or with realistic data (look at Cactus, DBUnit, Canoo WebTest)
unit tests - these both define the usage and expectations of a class and provide assurance that breaking changes will be caught quickly (look at JUnit, TestNG)
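To make the third kind concrete, here is a unit-test sketch in plain Java so it runs standalone; with JUnit these checks would be @Test methods using assertEquals. The class under test (PriceCalculator) is hypothetical:

```java
/** A unit-test sketch: each check documents one expectation of the class. */
public class PriceCalculatorTest {
    // Hypothetical class under test
    static class PriceCalculator {
        double withTax(double net, double rate) {
            if (net < 0 || rate < 0) throw new IllegalArgumentException("negative input");
            return net * (1 + rate);
        }
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        // Documents the expected usage: 100 net at 20% tax is 120 gross.
        check(Math.abs(calc.withTax(100, 0.20) - 120.0) < 1e-9, "happy path");
        // Documents the contract: invalid input must fail fast.
        boolean threw = false;
        try { calc.withTax(-1, 0.20); } catch (IllegalArgumentException e) { threw = true; }
        check(threw, "negative input should throw");
        System.out.println("all checks passed");
    }
}
```

Notice that the test doubles as documentation: a new developer can read it to learn both the API and its edge-case behavior.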
So you see that unit testing is really for the benefit of the developers... if there are five of us working on the project, not writing unit tests leads to one of two things:
an explosion of necessary communication as developers try to figure out how to use (or how somebody broke) each other's classes
no communication and increased risk due to "silos" - areas where only one developer touches the code and in which the company is entirely reliant on that developer
Even if it's just me, it's too easy to forget why I put that little piece of special case logic in the class six months ago. Then I break my own code and have to figure out how... it's a big waste of time and does nothing to reduce my stress level! Also, if you force yourself to think through (and type) the test for each significant function in your class, and figure out how to isolate any external resources so you can pass in a mock version, your design improves immeasurably. So I tend to work test-first regardless.
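The "isolate any external resources so you can pass in a mock version" point can be sketched like this: hide the resource behind an interface so the test can pass in a hand-rolled fake. All the names here (UserDirectory, RegistrationService) are hypothetical:

```java
/** Sketch of isolating an external resource behind a seam for testing. */
public class MockingSketch {
    interface UserDirectory {               // seam for the external resource (LDAP, DB, ...)
        boolean exists(String username);
    }

    static class RegistrationService {
        private final UserDirectory directory;
        RegistrationService(UserDirectory directory) { this.directory = directory; }
        String register(String username) {
            return directory.exists(username) ? "TAKEN" : "CREATED";
        }
    }

    public static void main(String[] args) {
        // In production this would be backed by the real directory;
        // in a unit test we pass a fake and never touch the network.
        UserDirectory fake = username -> username.equals("alice");
        RegistrationService service = new RegistrationService(fake);
        System.out.println(service.register("alice"));
        System.out.println(service.register("bob"));
    }
}
```

The design benefit mentioned above falls out naturally: the service no longer hard-codes its dependency, so it can be wired to any directory implementation.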
Arguably the most useful, but least often done, is automated acceptance testing. This is what ensures that the developers have understood what the customer was asking for. Sometimes this is left to QA, and I think that's fine, but the ideal situation is one in which these are an integral part of the development process.
The way this works is: for each requirement, the test plan is turned into a script which is added to the test suite. Then you watch it fail. Then you write code to make it pass. Thus, if a coder is working on changes and is ready to check in, they have to do a clean build and run all the acceptance tests. If any fail, fix them before you can check in.
"Continuous integration" is simply the process of automating this step - when anyone checks code in, a separate server checks out the code and runs all the tests. If any are broken it spams the last developer to check in until they are fixed.
I once consulted with a team that had a single tester. This guy was working through the test plans manually, all day long. When a change took place, however minor, he would have to start over. I built them a spreadsheet indicating that there were over 16 million possible paths through just a single screen, and they ponied up the $10k for Mercury Test Director in a hurry! Now he makes spreadsheets and automates the test plans that use them, so they have pretty thorough regression testing without ever-increasing QA time demands.
Once you've begun automating tests at every layer of your app (especially if you work test-first) a remarkable thing happens. Worry disappears!
So, no, it's not necessary. But if you find yourself worrying about technical debt, about the big deployment this weekend, or about whether you're going to break things while trying to quickly change to meet the suddenly-urgent customer requirements, you may want to more deeply investigate test-first development.
Filters are useful for moving out logic such as "is the user authenticated?" and handling it properly in one place, since you don't want that logic in every page.
Since you don't have a central controller it sounds like your filters are serving this function, which is fine, but, as you mentioned, it does make debugging harder.
This is where unit tests can come in handy, as you can test different situations with each filter individually, then with all the filters in a chain, outside of your container, to ensure everything works properly.
Unit testing does require discipline, but if the rule is that nothing goes to QA without a unit test, then it may help, and there are many tools to help generate test scaffolding so you just have to write the test. Before you debug, write or update the unit test and show that it fails, so the problem is reproduced.
This will ensure that the error won't return, that you fixed it, and that you have an updated unit test.
For resource bundles: if you are certain you will never support another language, then as you refactor you can remove the need for the bundles. But I think it is easier to make spelling/grammar corrections when the text is all in one place.
Filters are generally expected to perform small units of functionality, with filter chaining used to apply them as needed. In your case, maybe a refactoring can help move some of the logic out into additional filters, and the redirect logic can be somewhat centralized through a controller to make it easier to debug and understand.
Resource bundles are necessary to maintain flexibility, but if you know absolutely that the site is going to be used in a single locale, then you might skip them. Maybe you can move some of the work of maintaining the bundles to the designers, i.e., let them have access to the resource bundles so that you get the HTML with the keys already in place.
Unit testing is much easier to implement at the beginning of a project, as opposed to building it into an existing product. For existing software, you can still implement unit tests for new features. However, it requires a certain amount of insistence from team leads, and the team needs to buy into the necessity of unit tests. Code review of unit tests helps, and a decision about which parts of the code absolutely need coverage can guide developers. Tools/plugins like Coverlipse can indicate unit test coverage, but they tend to look at every possible code path, some of which may be trivial.
At one of my earlier projects, unit tests were simply compulsory, and they were automatically kicked off after each check-in. However, this was not test-driven development, as the tests were mostly written after the small chunks of code. TDD can result in developers writing code just to satisfy the unit tests; as a result, developers can lose the big picture of the component they are developing.
In a web environment, how do you work with filters. When should you use J2EE filters and when shouldn't you?
Filters are meant to steer/modify/intercept the actual requests/responses/sessions. For example: setting the request encoding, determining the logged-in user, wrapping/replacing the request or response, determining which servlet it should forward the request to, and so on.
To control the actual user input (parameters) and output (the results and the destination) and to execute actual business logic, you should use a servlet.
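The filter/chain contract described above can be sketched with simplified stand-ins (this is not the real javax.servlet API; the interfaces below are deliberately minimal, and the two filters are hypothetical examples):

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/** Small single-purpose filters chained together, ending at the servlet. */
public class FilterChainSketch {
    interface Filter { void doFilter(Map<String, String> request, Chain chain); }

    static class Chain {
        private final Iterator<Filter> filters;
        private final Runnable servlet;
        Chain(List<Filter> filters, Runnable servlet) {
            this.filters = filters.iterator();
            this.servlet = servlet;
        }
        void proceed(Map<String, String> request) {
            if (filters.hasNext()) filters.next().doFilter(request, this);
            else servlet.run();  // end of the chain: hand off to the servlet/controller
        }
    }

    /** Run an encoding filter, then an auth filter, over a fake request. */
    static String handle(Map<String, String> request) {
        StringBuilder out = new StringBuilder();
        Filter encoding = (req, chain) -> { req.put("encoding", "UTF-8"); chain.proceed(req); };
        Filter auth = (req, chain) -> {
            if (req.containsKey("user")) chain.proceed(req);  // authenticated: continue
            else out.append("redirect to /login");            // otherwise stop the chain here
        };
        new Chain(List.of(encoding, auth),
                  () -> out.append("servlet handles request")).proceed(request);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(handle(new java.util.HashMap<>(Map.of("user", "alice"))));
        System.out.println(handle(new java.util.HashMap<>()));
    }
}
```

Each filter does one small job and either continues the chain or stops it; the business logic stays in the servlet at the end of the chain, which is the division of labor the answer describes.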
Property resource bundle files for replacement values in JSP files.
If you don't do i18n, just don't use them. But if you ever grow and the customers/users want i18n, then you'll be happy that you're already prepared. Not only that, it also simplifies using a CMS to edit the content, by just using the java.util.Properties API.
Unit Testing for all J2EE development
JUnit can take care of it. You could also consider "officially" doing user tests only: create several use cases and test them.
How does your team handle Builds?
We use CruiseControl, but (due to lack of knowledge) we are facing some problems: code freeze in SVN, and build management.
Specifically, how do you make available a particular release when code is constantly being checked in?
Generally, can you discuss what best practices you use in release management?
I'm positively astonished that this isn't a duplicate, but I can't find another one.
Okay, here's the deal. They are two separate, but related questions.
For build management, the essential point is that you should have an automatic, repeatable build that rebuilds the entire collection of software from scratch and goes all the way to your deliverable configuration. In other words, you should effectively build a release candidate every time. Many projects don't really do this, but I've seen it burn people (read: "been burned by it") too many times.
Continuous integration says that this build process should be repeated every time there is a significant change event to the code (like a check-in), if at all possible. I've done several projects in which this turned into a nightly build because the code was large enough that it took several hours to build, but the ideal is to set up your build process so that some automatic mechanism (like an Ant script or makefile) only rebuilds the pieces affected by a change.
You handle the issue of providing a specific release by in some fashion preserving the exact configuration of all affected artifacts for each build, so you can apply your repeatable build process to the exact configuration you had. (That's why it's called "configuration management.") The usual version control tools, like git or subversion, provide ways to identify and name configurations so they can be recovered; in svn, for example, you might construct a tag for a particular build. You simply need to keep a little bit of metadata around so you know which configuration you used.
You might want to read one of the "Pragmatic Version Control" books, and of course the material on CI and CruiseControl on Martin Fowler's site is essential.
Look at Continuous Integration: best practices, from Martin Fowler.
Well, I managed to find a related thread I participated in a year ago. You might find it useful as well.
And here is how we do it.
[Edited]
We are using CruiseControl as our integration tool. We just deal with the trunk, which is the main Subversion repository in our case. We seldom pull out a new branch for new story cards, only when there is a chance of complex conflicts. Normally, we pull out a branch for a version release, create the build from that, and deliver it to our test team. Meanwhile, we continue the work in trunk and wait for feedback from the test team. Once everything is tested, we create a tag from the branch, which is logically immutable in our case. So we can release any version at any time to any client if needed. In case of bugs in the release, we don't create the tag yet; we fix things there in the branch. After getting everything fixed and approved by the test team, we merge the changes back to trunk and create a new tag from the branch specific to that release.
So the idea is that our branches and tags do not really participate in continuous integration directly. Merging branch code back to the trunk automatically makes that code part of CI (Continuous Integration). We normally do just bugfixes for a specific release in branches, so they don't really participate in the CI process, I believe. Conversely, if we start doing new story cards in a branch for some reason, then we don't keep that branch apart too long; we try to merge it back to trunk as soon as possible.
Precisely:
We create branches manually when we plan the next release
We create a branch for the release and fix bugs in that branch as needed
After everything looks good, we make a tag from that branch, which is logically immutable
Finally, we merge the branch back to trunk if it has fixes/modifications
Release Management goes well beyond continuous integration.
In your case, you should use CruiseControl to automatically create a tag, which allows developers to go on coding while your incremental build takes place.
If your build is incremental, that means you can trigger it every x minutes (and not on every commit, because if commits are too frequent and your build is too long, it may not have time to finish before the next build tries to take place). The 'x' should be tailored to be longer than a compilation/unit-test cycle.
Continuous integration should include the automatic launch of unit tests as well.
Beyond that, a full release management process will involve:
a series of deployments on homologation servers
a full cycle of homologation / UAT (User Acceptance Test)
non-regression tests
performance / stress tests
pre-production (and parallel run tests)
before finally releasing into production.
Again "release management" is much more complex than just "continuous integration" ;)
Long story short: Create a branch copied from trunk and checkout/build your release on that branch on the build server.
However, to get to that point in a completely automated fashion using cc.net is not an easy task. I could go into details about our build process if you like, but it's probably too fine grained for this discussion.
I agree with Charlie about having an automatic, repeatable build from scratch. But we don't do everything for the "continuous" build, only for Nightly, Beta, Weekly, or Omega (GA/RTM/Gold) release builds. Simply because some things, like generating documentation, can take a long time, and for the continuous build you want to provide developers with rapid feedback on the build result.
I totally agree with preserving exact configuration, which is why branching a release or tagging is a must. If you have to maintain a release, i.e. you can't just release another copy of trunk, then a branch on release approach is the way to go, but you will need to get comfortable with merging.
You can use Team Foundation Server 2008 and Microsoft Visual Studio Team System to handle your source control, branching, and releases.
We're trying to separate a big code base into logical modules. I would like some recommendations for tools, as well as whatever experiences you might have had with this sort of thing.
The application consists of a server WAR and several rich clients distributed in JARs. The trouble is that it's all one big, hairy code base: a single source tree of more than 2k files. Each JAR has a dedicated class with a main method, but the tangle of dependencies ensnares you quickly. It's not all that bad: good practices were followed consistently, and there are components with specific tasks. It just needs some improvement to help our team scale as it grows.
The modules will each be in a Maven project, built by a parent POM. The process of moving each JAR/WAR into its own project has already started, but it's obvious that this will only scratch the surface: a few classes in each app JAR and a mammoth "legacy" project with everything else. There are also already some unit and integration tests.
Anyway, I'm interested in tools, techniques, and general advice for breaking up an overly large and entangled code base into something more manageable. Free/open source is preferred.
Have a look at Structure101. It is awesome for visualizing dependencies and showing which dependencies to break on your way to a cleaner structure.
We recently accomplished a similar task: a project that consisted of more than 1k source files with two main classes that had to be split up. We ended up with four separate projects: one for the base utility classes, one for the client database stuff, one for the server (the project is an RMI server-client application), and one for the client GUI stuff. Our project had to be separated because other applications were using the client from the command line only, and if you used any of the GUI classes by accident you got headless exceptions, which only occurred when starting on the headless deployment server.
Some things to keep in mind from our experience:
Use an entire sprint for separating the projects (don't let other tasks interfere with the split-up; you will need the whole sprint)
Use version control
Write unit tests before you move any functionality somewhere else
Use a continuous integration system (doesn't matter if home grown or out of the box)
Minimize the number of files in the current changeset (you will save yourself a lot of work when you have to undo some changes)
Use a dependency analysis tool before moving any classes (we have had good experiences with DependencyFinder)
Take the time to restructure the packages into reasonable per project package sets
Don't be afraid to change interfaces, but have all dependent projects in the workspace so that you get all the compilation errors
Two pieces of advice: first, you need test suites; second, work in small steps.
If you already have a strong test suite, then you're in a good position. Otherwise, I would write some good high-level tests (aka system tests).
The main advantage of high-level tests is that a relatively small number of tests can get you great coverage. They will not help you pinpoint a bug, but you won't really need that: if you work in small steps and make sure to run the tests after each change, you'll be able to quickly detect (accidentally introduced) bugs, because the root of the bug must be in the small portion of the code that has changed since the last time you ran the tests.
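The idea above can be sketched as a single coarse check through a hypothetical end-to-end entry point: one test exercises parsing, filtering, and formatting at once, so it covers a lot of code even though it cannot say which unit broke.

```java
/** Sketch of a high-level (system) test: exercise the whole pipeline through
 *  its entry point and compare against a known-good result. */
public class SystemTestSketch {
    // Hypothetical end-to-end entry point: parse a CSV line, apply a
    // business rule (ignore non-positive values), and format a report line.
    static String report(String csv) {
        int total = 0;
        for (String field : csv.split(",")) {
            int value = Integer.parseInt(field.trim());
            if (value > 0) total += value;
        }
        return "total=" + total;
    }

    public static void main(String[] args) {
        // One coarse check covers parsing, filtering, and formatting at once.
        String expected = "total=6";
        String actual = report("1, 2, -5, 3");
        if (!actual.equals(expected))
            throw new AssertionError("regression: got " + actual);
        System.out.println("system test passed");
    }
}
```

Run a check like this after every small refactoring step; if it fails, the culprit is confined to whatever you changed since the last run.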
I would start with the various tasks that you need to accomplish.
I was faced with a similar task recently, given a 15-year-old code base that had been built by a series of developers who had no communication with one another (one worked on the project, left, then another got hired, etc., with no crosstalk). The result is a total mishmash of very different styles and quality levels.
To make it work, we had to isolate the necessary functionality from the decorative fluff. For instance, there are a lot of different string classes in there, and one person spent what must have been a great deal of time writing a 2k-line conversion from COleDateTime to const char* and back again; that was fluff, code solving a task ancillary to the main goal (getting things into and out of a database).
What we ended up having to do was identify a large goal that this code accomplished, and then write the base logic for that. When there was a task we needed to accomplish that we knew had been done before, we found the code and wrapped it in library calls so that it could exist on its own. One code chunk, for instance, activates a USB device driver to create an image; that code is untouched by the current project, but called when necessary via library calls. Another code chunk works the security dongle, and still another queries remote servers for data. That's all necessary code that can be encapsulated. The drawing code, though, was built up over 15 years and was such an edifice to insanity that a rewrite in OpenGL over the course of a month was a better use of time than trying to figure out what someone else had done and then how to add to it.
I'm being a bit handwavy here, because our project was MFC C++ to .NET C#, but the basic principles apply:
find the major goal
identify all the little goals that make the major goal possible
isolate the already encapsulated portions of code, if any, to be used as library calls
figure out the logic to piece it all together
I hope that helps...
To continue Itay's answer, I suggest reading Michael Feathers' "Working Effectively with Legacy Code" (PDF). He also recommends that every step be backed by tests. There is also a book-length version.
Maven allows you to set up small projects as children of a larger one. If you want to extract a portion of your project as a separate library for other projects, Maven lets you do that as well.
Having said that, you definitely need to document your tasks and what each smaller project will accomplish, and then (as has been stated here multiple times) test, test, test. You need tests that exercise the whole project, then tests that work with the individual portions that will wind up as child projects.
When you start to pull out functionality, you need additional tests to make sure that your functionality is consistent, and that you can mock input into your various child projects.