I'm currently reading Continuous Delivery, and in the book the author says that it is crucial to build the binaries only once, and then use the same binaries for every deployment. What I'm having trouble understanding is how this can be done in practice. For example, in order to run the mocked unit tests, would there be a special build? What I'm referring to is the scope tag in Maven.
If you look at the Maven lifecycle you'll see that there is only one compile phase for your production sources. Your tests are compiled and executed right after that source compilation. With mocked unit tests it is the same: two separate compilations for two objectives.
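For instance, a mocked unit test typically lives under src/test/java and uses libraries declared with the test scope, so neither the test class nor the mocking library ends up in the artifact you deploy. A minimal sketch, assuming a hypothetical PriceService class and PriceRepository interface exist under src/main/java (all names here are invented for illustration):

    // src/test/java/PriceServiceTest.java - compiled during test-compile, never packaged.
    // JUnit and Mockito are test-scoped dependencies, so they stay out of the shipped binary.
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class PriceServiceTest {

        @Test
        public void appliesDiscountFromRepository() {
            // PriceRepository and PriceService are assumed production classes from src/main/java
            PriceRepository repository = mock(PriceRepository.class);
            when(repository.discountFor("ACME")).thenReturn(0.10);

            PriceService service = new PriceService(repository);

            assertEquals(90.0, service.finalPrice("ACME", 100.0), 0.001);
        }
    }

Maven compiles this during test-compile and runs it during test, while the packaged JAR/WAR contains only the production classes.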
I think that the author of your book is referring to a problem that may appear when you deploy automatically to several environments: it creates more environments to debug. It is essential to have only one final binary for all the environments. If you have several binaries split across your environments, you can be sure you will forget what the differences between two of them are, or which argument you gave to the first one and not to the other. For Continuous Delivery, it has to be the same binary everywhere.
Let's come back to Maven. Maven offers a lot of possibilities during its lifecycle. Sometimes you'll have to run several builds to complete everything (code coverage, for example). This may be useful in your continuous integration process and can be done through different build types (every hour for unit tests, every day for code coverage, quality analysis and integration tests).
But in the end, when you enter Continuous Delivery, you'll build one final binary: one unique binary copied across your environments.
I have some logic to check a Java file's content and verify that it has a comment which says who the author (not necessarily the creator) of that file is - a special requirement of the project.
I wrote a unit test using JUnit to check the logic, and it works fine.
I want all the .java files to adhere to that standard, and to make the build fail if at least one of them does not comply.
So far I have my JUnit test method doing the following (see the sketch after this list):
Read all the .java file contents in the application
For each file's content, check whether it contains a comment in the standard format
Fail the test case if at least one of them has no comment in that format (so that eventually the build will fail)
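Roughly, the test looks like the sketch below (the paths and the regular expression are simplified placeholders, not the real project standard):

    import static org.junit.Assert.assertTrue;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Pattern;
    import java.util.stream.Stream;

    import org.junit.Test;

    public class AuthorCommentTest {

        // Simplified placeholder for the required comment format
        private static final Pattern AUTHOR_COMMENT = Pattern.compile("@author\\s+\\w+");

        @Test
        public void everyJavaFileDeclaresAnAuthor() throws IOException {
            List<String> offenders = new ArrayList<>();
            try (Stream<Path> paths = Files.walk(Paths.get("src/main/java"))) {
                paths.filter(p -> p.toString().endsWith(".java"))
                     .forEach(p -> {
                         try {
                             String content = new String(Files.readAllBytes(p));
                             if (!AUTHOR_COMMENT.matcher(content).find()) {
                                 offenders.add(p.toString());
                             }
                         } catch (IOException e) {
                             offenders.add(p + " (unreadable: " + e.getMessage() + ")");
                         }
                     });
            }
            // Failing the test makes the Maven build fail
            assertTrue("Files without the author comment: " + offenders, offenders.isEmpty());
        }
    }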
Is that a correct approach? It will serve the purpose, but is it good practice to use a JUnit test to do this kind of verification work?
If not, what kind of approach should I use to analyze all the files at build time (using my logic - I have an Analyzer.java file with the logic) and have the build succeed iff all files comply with the required standard?
EDIT :
The code comment check is only one verification. There are several checks that need to be done (e.g. variable names should end with a given suffix, patterns for using some internal libraries, etc.). All those scenarios are handled in that logic (Analyzer.java). I just need to check all the Java file contents and use that logic to verify them.
It's safe to say that I have a Java library with a method that accepts a file name, check(fileName); when invoked, it analyzes the file and returns true if it passes some hidden logic. If it returns false, the build should fail. Since I need to fail the build if something is not right, I'm using it in a JUnit test to check all the .java files in the code base.
If this can be done by a static code analysis tool (but it needs to use the logic I have), that is also acceptable. But I have no idea whether this kind of custom verification is supported by existing static code analyzers.
Is the approach I'm using correct? ... is it a good practice to use a JUnit test to do some verification work?
No. Unit testing is for checking the integrity of a code unit, ensuring the unit's behavior works properly. You should not be checking comments/documentation in unit tests.
Since I need to fail the build if something is not right..
You need to add more steps to your build process, more specifically a static analysis step.
Unit testing is considered a build step, along with compilation, execution and deployment. Your project requires an extra step, which brings me to the following...
You could use a build tool, such as Apache Ant, to add steps to your project's build. Although static analysis doesn't come bundled (this is simply a build automation tool), it does allow you to ensure that the build fails if a custom build step fails.
With that said, you could add a step that triggers a static analysis program. This page contains an example of using Ant to create multiple build steps, including static code analysis and bug checking. You could even create your own analyzer to use.
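As a rough sketch of what such a step could drive (assuming your Analyzer exposes the check(fileName) method you mention as a static call returning boolean; everything else here is invented), a small runner class could scan the source tree and exit with a non-zero code on any violation, which a build step that checks the exit code, for example Ant's <java> task with failonerror="true", can turn into a build failure:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Hypothetical runner around your own Analyzer.check(fileName) logic.
    // A build step invokes this class and fails the build when the exit code is non-zero.
    public class AnalyzerRunner {

        public static void main(String[] args) throws IOException {
            Path sourceRoot = Paths.get(args.length > 0 ? args[0] : "src/main/java");

            List<Path> violations;
            try (Stream<Path> paths = Files.walk(sourceRoot)) {
                violations = paths
                        .filter(p -> p.toString().endsWith(".java"))
                        .filter(p -> !Analyzer.check(p.toString())) // your existing logic, assumed static
                        .collect(Collectors.toList());
            }

            if (!violations.isEmpty()) {
                violations.forEach(p -> System.err.println("Violates project standard: " + p));
                System.exit(1); // non-zero exit code fails the build step
            }
        }
    }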
For more information about build tools & automation:
StackOverflow: What is a Build Tool?
Wiki: Software Build > Build Tools
Wiki: Build Automation
You can use Checkstyle for that.
The build can be made to fail when a check is violated.
Checking comments this way is a form of static code analysis.
To define the format for an author tag or a version tag, set the authorFormat or versionFormat property, respectively, to a regular expression.
If that were my project, I would not place this kind of verification in the standard src/test/java and run it as part of the application's test suite.
The unit test suite should be testing production logic, not doing 'coding style' checks.
A place for this kind of verification would be, for example, a pre-commit hook on the git repository. All the scripts checking this (or style-checking tools) would be invoked there.
You can put everything in one place, but as far as I can see, separation of concerns in all areas of software development has been the leading trend for quite a while, and there is a good point to that.
First of all, I am not good at testing. I have an OSGi CRUD application, and I want to write tests that automatically test the business logic. I see two options here:
run tests at compilation time - at a certain Maven phase
make the tests a separate bundle and run them after starting the application, for example when I click somewhere in the main menu.
Which one is the right choice? Or both are possible?
The reasons why I am asking this question are the following:
I've never seen option 2, but seen a lot of option 1.
Option 2 seems better to me, because the business logic includes working with a database, an index system and a memory cache, and I've got no idea how to check that at compilation time.
Technically, option 1 is not compile-time testing. Maven runs the tests after compiling your code and before installing/deploying the bundle.
Option 1 is for unit testing. In detail: before installing or deploying any bundle, you need to make sure each and every unit of your code works as expected.
Option 2 is for functional testing. Testing begins by invoking the main gateway or main functionality, which will invoke multiple modules internally. Based on the input, some units may or may not execute. The primary focus of this kind of testing is to cover the different scenarios of the functionality.
A good developer should do both. Hope this helps!
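On the worry that the business logic talks to a database, an index system and a memory cache: in option 1 those collaborators are normally replaced with mocks, so the unit tests can still run in the Maven test phase without any running infrastructure. A minimal sketch with JUnit and Mockito (all class names below are invented stand-ins for your own bundle's classes):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class CustomerServiceTest {

        @Test
        public void createStoresCustomerAndUpdatesIndex() {
            // Invented collaborators standing in for the database and the index system
            CustomerDao dao = mock(CustomerDao.class);
            SearchIndex index = mock(SearchIndex.class);
            when(dao.save("Alice")).thenReturn(42L);

            CustomerService service = new CustomerService(dao, index);
            long id = service.create("Alice");

            assertEquals(42L, id);
            verify(index).add(42L, "Alice"); // the index was updated, no real index needed
        }
    }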
I am working on a number of projects and we are using Java, Spring, Maven and Jenkins for CI, but I am running into an issue: some of the programmers are not adding real JUnit test cases to the projects. I want Maven and Jenkins to run the tests before deploying to the server. Some of the programmers made a blank test, so it starts, stops, and passes.
Can someone please tell me how I can automate this check so Maven and Jenkins can see if the tests actually produce some output?
I have not found any good solution to this issue, other than reviewing the code.
Code coverage fails to detect the worst unit tests I ever saw
Counting the number of tests fails there too. Looking at the test names? You bet that fails as well.
If you have developers like the "Kevin" who writes tests like those, you'll only catch those tests by code review.
The summary of how "Kevin" defeats the checks:
Write a test called smokes. In this test you invoke every method of the class under test with differing combinations of parameters, each call wrapped in try { ... } catch (Throwable t) {/* ignore */}. This gives you great coverage, and the test never fails (see the sketch after this list).
Write a load of empty tests with names that sound like you have thought up fancy test scenarios, e.g. widgetsTurnRedWhenFlangeIsOff, widgetsCounterrotateIfFangeGreaterThan50. These are empty tests, so they will never fail, and a manager inspecting the CI system will see lots of detailed test cases.
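To make the first trick concrete, a "smokes" test in that spirit might look like the sketch below (the class under test is invented); coverage tools report it as excellent, yet it can never fail:

    import org.junit.Test;

    // The anti-pattern from the first bullet: every call is swallowed by a
    // catch-all, so coverage is high and the test cannot fail.
    public class WidgetServiceTest {

        @Test
        public void smokes() {
            WidgetService service = new WidgetService(); // invented class under test
            try { service.turnRed(null); } catch (Throwable t) { /* ignore */ }
            try { service.counterrotate(-1); } catch (Throwable t) { /* ignore */ }
            try { service.counterrotate(51); } catch (Throwable t) { /* ignore */ }
            try { service.flangeOff(); } catch (Throwable t) { /* ignore */ }
            // No assertions anywhere: nothing is actually verified.
        }
    }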
Code review is the only way to catch "Kevin".
Hope your developers are not that bad
Update
I had a shower moment this morning. There is a type of automated analysis that can catch "Kevin". Unfortunately it can still be cheated, so while it is not a solution to people writing bad tests, it does make writing bad tests harder.
Mutation Testing
Jester is an old project and won't work on recent code, and I am not suggesting you use it. But I am suggesting that it hints at a type of automated analysis that would stop "Kevin".
If I were implementing this, what I would do is write a "JestingClassLoader" that uses, e.g. ASM, to rewrite the bytecode with one little "jest" at a time. Then run the test suite against your classes when loaded with this classloader. If the tests don't fail, you are in "Kevin" land. The issue is that you need to run all the tests against every branch point in your code. You could use automatic coverage analysis and test time profiling to speed things up, though. In other words, you know what code paths each test executes, so when you make a "jest" against one specific path, you only run the tests that hit that path, and you start with the fastest test. If none of those tests fail, you have found a weakness in your test coverage.
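As a very rough illustration of one such "jest" (this is only a sketch of the idea using ASM, not a working Jester replacement; the class name is invented), a visitor could flip a single branch instruction and hand the mutated bytecode to such a classloader for the test run:

    import org.objectweb.asm.ClassReader;
    import org.objectweb.asm.ClassVisitor;
    import org.objectweb.asm.ClassWriter;
    import org.objectweb.asm.Label;
    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;

    // Sketch of a single "jest": invert the first IFEQ/IFNE branch found in a class.
    // A "JestingClassLoader" would load the mutated bytes and re-run the relevant tests;
    // if they all still pass, that branch is not really pinned down by any assertion.
    public class JestingMutator {

        public static byte[] mutate(byte[] originalClass) {
            ClassReader reader = new ClassReader(originalClass);
            ClassWriter writer = new ClassWriter(reader, 0);

            reader.accept(new ClassVisitor(Opcodes.ASM9, writer) {
                private boolean mutated = false;

                @Override
                public MethodVisitor visitMethod(int access, String name, String desc,
                                                 String signature, String[] exceptions) {
                    MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
                    return new MethodVisitor(Opcodes.ASM9, mv) {
                        @Override
                        public void visitJumpInsn(int opcode, Label label) {
                            if (!mutated && (opcode == Opcodes.IFEQ || opcode == Opcodes.IFNE)) {
                                mutated = true;
                                // Flip the condition: IFEQ becomes IFNE and vice versa
                                opcode = (opcode == Opcodes.IFEQ) ? Opcodes.IFNE : Opcodes.IFEQ;
                            }
                            super.visitJumpInsn(opcode, label);
                        }
                    };
                }
            }, 0);

            return writer.toByteArray();
        }
    }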
So if somebody were to "modernize" Jester, you'd have a way to find "Kevin" out.
But that will not stop people writing bad tests, because you can pass that check by writing tests that verify the code behaves as it currently behaves, bugs and all. Heck, there are even companies selling software that will "write the tests for you". I will not give them the Google PageRank by linking to them from here, but my point is that if your developers get their hands on such software, you will have loads of tests that straitjacket your codebase and don't find any bugs (because as soon as you change anything the "generated" tests will fail, so making a change now requires arguing over the change itself as well as over the changes to all the unit tests that the change broke, increasing the business cost of making a change, even if that change is fixing a real bug).
I would recommend using Sonar, which has a very useful Build Breaker plugin.
Within the Sonar quality profile you can set alerts on any combination of metrics, so, for example, you could mandate that your Java projects should have:
"Unit tests" > 1
"Coverage" > 20
This forces developers to have at least 1 unit test that covers a minimum of 20% of their codebase. (A pretty low quality bar, but I suppose that's your point!)
Setting up an additional server may seem like extra work, but the solution scales when you have multiple Maven projects. The Jenkins plugin for Sonar is all you'll need to configure.
JaCoCo is the default code coverage tool, and Sonar will also automatically run other tools like Checkstyle, PMD and FindBugs.
Finally, Stephen is completely correct about code review. Sonar has some basic, but useful, code review features.
You need to add a code coverage plugin such as JaCoCo, EMMA, Cobertura, or the like. Then you need to define in the plugin's configuration the percentage of code coverage (basically "code covered by the tests") that you would like to have in order for the build to pass. If it's below that number, you can have the build fail. And if the build fails, Jenkins (or whatever your CI is) won't deploy.
As others have pointed out, if your programmers are already out to cheat coding practices, better coverage tools won't solve your problem. They can be cheated as well.
You need to sit down with your team and have an honest talk with them about professionalism and what software engineering is supposed to be.
In my experience, code reviews are great but they need to happen before the code is committed. But for that to work in a project where people are 'cheating', you'll need to at least have a reviewer you can trust.
http://pitest.org/ is a good solution for so-called "mutation testing":
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
The quality of your tests can be gauged from the percentage of mutations killed.
The good thing is that you can easily use it in conjunction with Maven, Jenkins and ... SonarQube!
I was wondering if it's possible to run two projects at the same time in Eclipse, for example by using two different instances of the JVM (if that makes any sense).
A bit of the background: I have a project that executes relatively long experiments (6-8h). I have recently managed to come to a point in development where I could branch off to develop different strategies for improving/adding code to the project. However at the same time I need to get some experiments done, and as the experiments take a long while to finish I'd like to make use of the long waiting time, and work on the branch code.
In short my ideal scenario is: start an experiment on the trunk in Eclipse, switch to the branch and develop code/run shorter experiments on the branch when I need to test functionality. Is this possible, or do I need to come up with an alternative strategy?
Thanks in advance!
EDIT: I have realized that the word choice "test" was misleading, as it could be misunderstood. I mean executing the program as it's supposed to run, not testing with JUnit or anything like that. I apologize for the inconvenience.
I just check out different branches as different projects: MyProjectTrunk, MyProjectBranch1, MyProjectBranch2, etc. No problem. The projects will never run on the same JVM if you're using Run As Application.
Of course it is possible - you just need to have them configured as two separate projects with separate run configurations for each of them.
Unfortunately, as far as I remember, when you close a project all the associated running tasks (svn commits, debug sessions, runs, etc.) shut down as well, and having two separate branches of the same project open at the same time might get very confusing when using keyboard shortcuts for class browsing.
We're trying to separate a big code base into logical modules. I would like some recommendations for tools, as well as whatever experience you might have had with this sort of thing.
The application consists of a server WAR and several rich clients distributed in JARs. The trouble is that it's all in one big, hairy code base, one source tree of > 2k files. Each JAR has a dedicated class with a main method, but the tangle of dependencies ensnares you quickly. It's not all that bad - good practices were followed consistently and there are components with specific tasks. It just needs some improvement to help our team scale as it grows.
The modules will each be in a Maven project, built by a parent POM. The process of moving each JAR/WAR into its own project has already started, but it's obvious that this will only scratch the surface: a few classes in each app JAR and a mammoth "legacy" project with everything else. Also, there are already some unit and integration tests.
Anyway, I'm interested in tools, techniques, and general advice for breaking up an overly large and entangled code base into something more manageable. Free/open source is preferred.
Have a look at Structure 101. It is awesome for visualizing dependencies, and for showing which dependencies to break on your way to a cleaner structure.
We recently accomplished a similar task: a project that consisted of > 1k source files with two main classes had to be split up. We ended up with four separate projects: one for the base utility classes, one for the client database stuff, one for the server (the project is an RMI server-client application), and one for the client GUI stuff. Our project had to be separated because other applications were using the client as a command-line tool only, and if you used any of the GUI classes by accident you got headless exceptions, which only occurred when starting on the headless deployment server.
Some things to keep in mind from our experience:
Use an entire sprint for separating the projects (don't let other tasks interfere with the split-up, for you will need the whole time of a sprint)
Use version control
Write unit tests before you move any functionality somewhere else
Use a continuous integration system (doesn't matter if home grown or out of the box)
Minimize the number of files in the current changeset (you will save yourself a lot of work when you have to undo some changes)
Use a dependency analysis tool all along, before moving classes (we have had good experiences with DependencyFinder)
Take the time to restructure the packages into reasonable per project package sets
Don't be afraid to change interfaces, but have all dependent projects in the workspace so that you get all the compilation errors
Two pieces of advice: the first thing you need is test suites. The second is to work in small steps.
If you already have a strong test suite, then you're in a good position. Otherwise, I would write some good high-level tests (aka system tests).
The main advantage of high-level tests is that a relatively small number of tests can get you great coverage. They will not help you pinpoint a bug, but you won't really need that: if you work in small steps and make sure to run the tests after each change, you'll be able to quickly detect (accidentally introduced) bugs, because the root of the bug is in the small portion of the code that has changed since the last time you ran the tests.
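One cheap way to get that kind of high-level coverage over legacy code is a characterization (golden-master) test: run an end-to-end entry point on a fixed input and compare the result against a previously recorded output file. A sketch, with invented class and file names:

    import static org.junit.Assert.assertEquals;

    import java.nio.file.Files;
    import java.nio.file.Paths;

    import org.junit.Test;

    public class MonthlyReportCharacterizationTest {

        @Test
        public void reportForFixedInputMatchesRecordedOutput() throws Exception {
            // ReportGenerator is an invented stand-in for a real end-to-end entry point
            String actual = new ReportGenerator()
                    .generate(Paths.get("src/test/resources/sample-input.csv"));

            // The "golden master" was recorded once from the current, trusted-enough behaviour
            String expected = new String(Files.readAllBytes(
                    Paths.get("src/test/resources/expected-report.txt")));

            assertEquals(expected, actual);
        }
    }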
I would start with the various tasks that you need to accomplish.
I was faced with a similar task recently, given a 15-year-old code base that had been built by a series of developers who didn't have any communication with one another (one worked on the project, left, then another got hired, etc., with no crosstalk). The result was a total mishmash of very different styles and quality.
To make it work, we've had to isolate the necessary functionality from the decorative fluff. For instance, there are a lot of different string classes in there, and one person spent what must have been a great deal of time writing a 2k-line conversion from COleDateTime to const char* and back again; that was fluff: code to solve a task ancillary to the main goal (getting things into and out of a database).
What we ended up having to do was identify a large goal that this code accomplished, and then write the base logic for that. When there was a task we needed to accomplish that we knew had been done before, we found it and wrapped it in library calls, so that it could exist on its own. One code chunk, for instance, activates a USB device driver to create an image; that code is untouched by the current project, but called when necessary via library calls. Another code chunk works the security dongle, and still another queries remote servers for data. That's all necessary code that can be encapsulated. The drawing code, though, was built up over 15 years and was such an edifice of insanity that a rewrite in OpenGL over the course of a month was a better use of time than trying to figure out what someone else had done and then how to add to it.
I'm being a bit handwavy here, because our project was MFC C++ to .NET C#, but the basic principles apply:
Find the major goal
Identify all the little goals that make the major goal possible
Isolate the already encapsulated portions of code, if any, to be used as library calls
Figure out the logic to piece it all together
I hope that helps...
To continue Itay's answer, I suggest reading Michael Feathers' "Working Effectively with Legacy Code" (PDF). He also recommends that every step be backed by tests. There is also a book-length version.
Maven allows you to set up small projects as children of a larger one. If you want to extract a portion of your project as a separate library for other projects, then Maven lets you do that as well.
Having said that, you definitely need to document your tasks, what each smaller project will accomplish, and then (as has been stated here multiple times) test, test, test. You need tests which work through the whole project, then have tests that work with the individual portions of the project which will wind up as child projects.
When you start to pull out functionality, you need additional tests to make sure that your functionality is consistent, and that you can mock input into your various child projects.