I have a class of high school programming students and I would like an automated way to check the validity of their work. I go through their code and look for structure, efficiency and basic expected outcomes but I was hoping to take it to another level.
Would Unit Testing be a viable solution?
Is there an elegant way of checking a bunch of student programs at once?
We are using Eclipse and I've imported their project, containing all of their programs, from their local network drive. Works great. I'm just trying to give them more feedback on how they are doing, and even introduce them to unit testing, which is something I've never done.
Yes, sure, you can do that for checking the students' work:
Unit Testing reduces the level of bugs in production code.
Automated tests can be run as frequently as required.
In my university we had automated tests for exercises. We just mailed the class files or built jars to an email address. On the server side you put them on the classpath and start your tests. It's actually quite easy to implement. The important thing is to clearly document package structure and such in the given requirements, maybe even supply a project skeleton.
Nice plus: the students were given a smaller set of tests so they could verify their work before they submitted it.
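To make that concrete, here is a minimal sketch of what one server-side grading test could look like with JUnit 4, assuming (purely for illustration) that the assignment asked for a Fibonacci class with a static fib(int) method in the documented package structure:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Fibonacci is the (hypothetical) class the students were asked to write.
public class FibonacciGradingTest {

    @Test
    public void baseCasesAreCorrect() {
        assertEquals(0, Fibonacci.fib(0));
        assertEquals(1, Fibonacci.fib(1));
    }

    @Test
    public void tenthNumberIsCorrect() {
        assertEquals(55, Fibonacci.fib(10));
    }
}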
I think introducing them to unit testing and TDD is a great idea.
However, if YOU write the unit tests and give them to your students before they do the assignment, then they won't learn to write unit tests. Also, they will structure their code according to your test, which may or may not be what you want.
If they, on the other hand, write the unit tests, they will learn how to do that, but you won't know what their tests are testing.
Perhaps you could extract code test coverage and assign them to reach 80, 90 or 100% coverage or something like that.
I did a review of a programming test today, for an eventual new hire, and I feel that the reasoning behind programming choices is really important.
I'm new to the world of unit testing in Spring, and with some research I think I'll be using JUnit 5 with Mockito and the AssertJ library. I am just starting out and it does look really confusing.
1st Question: Do we write our units first, meaning our controller methods, service methods or any other unit? Do we write those first and then write tests for them, or do we first design our tests, thinking "OK, this method needs to do this, so I'll write my tests and then actually implement what I have to implement"? I know this is a bit of a silly question, but I want to know how people go about this in their work.
2nd Question: Suppose we write our tests and one fails for a certain unit. What should my first reaction be, i.e. should I assume the way I wrote the test was bad and rewrite the test so that it passes, or should I assume my unit is bad and change or refactor the code right away, then test it again after the refactoring? What is the 'thing' to do just after a test fails for some unit?
3rd Question/Help: Testing simple CRUD methods seems okayish and there are a lot of resources for it, although it's still confusing. Could someone recommend a GitHub repo, a link or any source where unit tests have been written for complex functions with some actual logic inside, and not only CRUD operations? I want to see how tests are written for the logic inside the code and how things are actually tested.
Any help would be highly valued. Thank you, have a good day.
You are asking if you should do "Test Driven Development" (TDD for short), where writing the test first is the basic idea. But since you have not written any tests yet, start small and write the tests afterwards; as soon as you feel comfortable writing tests, write all tests before implementing. And read more about TDD, Google is your friend.
This depends on whether you trust the test code more than the production code or vice versa. Of course, test code can also be buggy. But test code should actually be easier to understand, so it should be easier to decide how much you can trust it.
See https://medium.com/#sheikarbaz5/spring-boot-with-tdd-test-driven-development-part-i-be1b90da51e or https://livebook.manning.com/book/spring-boot-in-action/chapter-4/
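For the first question, here is a small sketch of the test-first flow described above, using JUnit 5 and AssertJ; PriceCalculator and priceWithDiscount are invented names. You would write this test before the class exists, watch it fail, then implement just enough code to make it pass:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAboveOneHundred() {
        // written before PriceCalculator exists; the test fails (or does not even compile) until you implement it
        PriceCalculator calculator = new PriceCalculator();
        assertThat(calculator.priceWithDiscount(200.0)).isEqualTo(180.0);
    }
}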
I'm a CS master's student. Throughout my studies I coded many course projects in Java, and soon I will graduate. When I explore GitHub projects I often find people organize their projects into /main and /test. I have never organized mine that way, i.e. I always have my source code files without any test directories. I think that folder often contains what is called 'test cases' or so.
Since I will be looking for a job soon, I would like to learn about production-quality code.
My questions:
Why people often have that folder? What does it contain?
Can you provide me with a link to a good tutorial about the practice of testing in Java, i.e. how to do it? In a nutshell, I want to understand the idea of that /test/ folder.
I often find people organize their projects as /main and /test
This is a matter of taste. Not 100% sure, but at least Maven projects have such an organization.
From Maven: Introduction to the Standard Directory Layout, this would be the project layout:
src
  main
    java        <-- your Java source code
    resources
    filters
    config
    scripts
    webapp
  test
    java        <-- your unit tests for Java
    resources
    filters
  it
  assembly
  site
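For illustration, the two roots usually mirror each other's package structure; Calculator here is an invented example class:

// src/test/java/com/example/CalculatorTest.java
// tests the class living at src/main/java/com/example/Calculator.java
package com.example;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    @Test
    public void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}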
Why people often have that folder? What does it contain?
Usually, people write test cases to check that the code works as expected; how much of the code the tests exercise is known as code coverage. The tests also serve as regression tests in case somebody later changes the code, for example when refactoring.
The test cases you will find there are usually unit tests. Depending on the type of the project, you could also find integration tests.
There is also Test Driven Development, or TDD, which is a practice whose basis is writing the test cases before writing the real code.
Can you provide me with a link to a good tutorial about the practice of testing in java?
This is off topic for the site. There are plenty of tutorials on the net about this.
I don't have a separate folder for mine but usually people keep their Unit Tests in that folder. A unit test generally sets up "fake" data to test a given class so that a developer can easily debug any issues.
The reason people provide a /test folder is to contain the unit tests for their project.
There are many ways of testing Java, but JUnit is a very commonly used testing framework.
It is good practice to write tests for your code. Begin with writing unit tests. I found this tutorial very useful. Writing tests ensures that your code behaves as expected, corner cases are covered, and adding new code in the future does not break existing functionality.
There are also mocking frameworks like JMock and Mockito that make writing stubs and drivers for your methods easy.
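As a rough sketch of what that looks like (UserService and UserRepository are invented names), Mockito lets the test stub the repository instead of touching a real database:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UserServiceTest {

    @Test
    public void looksUpDisplayNameThroughTheRepository() {
        UserRepository repository = mock(UserRepository.class);   // stub instead of a real database
        when(repository.findNameById(42L)).thenReturn("Alice");

        UserService service = new UserService(repository);

        assertEquals("Alice", service.displayNameFor(42L));
    }
}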
What is even more interesting is that some people prefer writing tests before they write the actual implementation. This approach is called Test Driven Development (TDD) and is a core practice of Extreme Programming. Writing tests first ensures one already has the intended behaviour of the methods in mind, like a piece of prep code or pseudocode.
I am working on a number of projects and we are using Java, Spring, Maven and Jenkins for CI, but I am running into an issue: some of the programmers are not adding real JUnit test cases to the projects. I want Maven and Jenkins to run the tests before deploying to the server. Some of the programmers wrote a blank test, so it starts, stops and passes.
Can someone please tell me how I can automate this check so Maven and Jenkins can tell whether the tests actually exercise anything?
I have not found any good solution to this issue, other than reviewing the code.
Code coverage fails to detect the worst unit tests I ever saw
Looking at the number of tests fails there too. Looking at the test names? You bet that fails.
If you have developers like the "Kevin" who writes tests like those, you'll only catch those tests by code review.
The summary of how "Kevin" defeats the checks:
Write a test called smokes. In this test you invoke every method of the class under test with differing combinations of parameters, each call wrapped in try { ... } catch (Throwable t) { /* ignore */ }. This gives you great coverage, and the test never fails (a sketch follows this list).
Write a load of empty tests with names that sound like you have thought up fancy test scenarios, e.g. widgetsTurnRedWhenFlangeIsOff, widgetsCounterrotateIfFlangeGreaterThan50. These are empty tests, so they will never fail, and a manager inspecting the CI system will see lots of detailed test cases.
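For concreteness, the two tricks might look roughly like this (WidgetService and its methods are invented):

import org.junit.Test;

public class WidgetServiceTest {

    // the "smokes" trick: every call is swallowed, so coverage is high and nothing can ever fail
    @Test
    public void smokes() {
        WidgetService service = new WidgetService();
        try { service.turnFlangeOff(); } catch (Throwable t) { /* ignore */ }
        try { service.turnWidgetsRed(); } catch (Throwable t) { /* ignore */ }
        try { service.counterrotateWidgets(51); } catch (Throwable t) { /* ignore */ }
        // no assertions anywhere
    }

    // the "fancy name, empty body" trick
    @Test
    public void widgetsTurnRedWhenFlangeIsOff() {
    }
}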
Code review is the only way to catch "Kevin".
Hope your developers are not that bad
Update
I had a shower moment this morning. There is a type of automated analysis that can catch "Kevin". Unfortunately it can still be cheated, so while it is not a solution to people writing bad tests, it does make it harder to write them.
Mutation Testing
Jester is an old project that won't work on recent code, and I am not suggesting you use it. But I am suggesting that it hints at a type of automated analysis that would stop "Kevin".
If I were implementing this, what I would do is write a "JestingClassLoader" that uses, e.g. ASM, to rewrite the bytecode with one little "jest" at a time. Then run the test suite against your classes when loaded with this classloader. If the tests don't fail, you are in "Kevin" land. The issue is that you need to run all the tests against every branch point in your code. You could use automatic coverage analysis and test time profiling to speed things up, though. In other words, you know what code paths each test executes, so when you make a "jest" against one specific path, you only run the tests that hit that path, and you start with the fastest test. If none of those tests fail, you have found a weakness in your test coverage.
So if somebody were to "modernize" Jester, you'd have a way to find "Kevin" out.
But that will not stop people writing bad tests, because you can pass that check by writing tests that verify the code behaves as it currently behaves, bugs and all. Heck, there are even companies selling software that will "write the tests for you". I will not give them the Google PageRank by linking to them from here, but my point is that if your developers get their hands on such software, you will have loads of tests that straitjacket your codebase and don't find any bugs: as soon as you change anything, the "generated" tests fail, so making a change requires arguing over the change itself as well as over all the unit tests the change broke, increasing the business cost of making a change, even if that change is fixing a real bug.
I would recommend using Sonar which has a very useful build breaker plugin.
Within the Sonar quality profile you can set alerts on any combination of metrics, so, for example, you could mandate that your Java projects should have:
"Unit tests" > 1
"Coverage" > 20
Forcing developers to have at least 1 unit test that covers a minimum of 20% of their codebase. (Pretty low quality bar, but I suppose that's your point!)
Setting up an additional server may appear like extra work, but the solution scales when you have multiple Maven projects. The Jenkins plugin for Sonar is all you'll need to configure.
Jacoco is the default code coverage tool, and Sonar will also automatically run other tools like Checkstyle, PMD and Findbugs.
Finally Stephen is completely correct about code review. Sonar has some basic, but useful, code review features.
You need to add a code coverage plugin such as JaCoCo, EMMA, Cobertura, or the like. Then you need to define in the plugin's configuration the percentage of code coverage (basically "code covered by the tests") that you would like to have in order for the build to pass. If it's below that number, the build fails. And, if the build is failing, Jenkins (or whatever your CI is) won't deploy.
As others have pointed out, if your programmers are already out to cheat coding practices, using better coverage tools won't solve your problem. Those can be cheated as well.
You need to sit down with your team and have an honest talk with them about professionalism and what software engineering is supposed to be.
In my experience, code reviews are great but they need to happen before the code is committed. But for that to work in a project where people are 'cheating', you'll need to at least have a reviewer you can trust.
http://pitest.org/ is a good solution for so-called "mutation testing":
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
The quality of your tests can be gauged from the percentage of mutations killed.
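A hand-wavy illustration of what that means in practice (Discount is an invented class whose isEligible is supposed to return true for amounts of 100 or more): the first test below gives full line coverage but lets every mutant survive, while the second kills them.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class DiscountTest {

    @Test
    public void weakTestThatLetsMutantsSurvive() {
        new Discount().isEligible(100);   // executed (so coverage looks fine) but the result is never checked
    }

    @Test
    public void strongerTestThatKillsMutants() {
        assertTrue(new Discount().isEligible(100));   // mutating the boundary or the return value now fails this test
    }
}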
The good thing is that you can easily use it in conjunction with Maven, Jenkins and ... SonarQube!
I am working on a web application with an existing code base that has probably been around for 10 years; there are ~1000 class files and ~100,000 lines of code. The good news is that the code is organized well, business logic is separate from the controller domain, and there is a high level of reusability. The bad news is there is only the very beginning of a test suite (JUnit); there are maybe 12 dozen tests at most.
The code is organized fairly typically for an enterprise Java project. There is a Struts-esque controller package, the model consists of almost purely data objects, there is a Hibernate-like database layer that is largely encapsulated within data access objects, and a handful of service packages that are simple, self-contained, and logical. The end goal of building this test suite is to move towards a continuous integration development process.
How would you go about building a test suite for such an application?
What tools would you use to make the process simpler?
Any suggestions welcome. Thanks!
Start by reading Working Effectively with Legacy Code (short version here). Next I would write a couple of end-to-end smoke tests to cover the most common use cases. Here are some ideas on how to approach it: http://simpleprogrammer.com/getting-up-to-bat-series/
Then when I need to change some part of the system, I would cover it with focused unit tests (refer to the aforementioned book) and then do the change. Little by little the system - or at least the parts which change the most often - would be better covered and working with it would become easier.
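One concrete form the "cover it, then change it" step can take is a characterization test: you pin down whatever the legacy code does today (the expected value below would simply be copied from the current output; the class name is invented) and then refactor under that safety net.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InvoiceNumberFormatterTest {

    @Test
    public void documentsCurrentFormattingBehaviour() {
        // the expected string is whatever the legacy code produces today,
        // recorded before any refactoring starts
        assertEquals("INV-0042/2011", new InvoiceNumberFormatter().format(42, 2011));
    }
}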
I would create a few integration tests. Since they touch a lot of code, you will probably get an error when you screw up big time.
I wouldn't 'build a test suite' as such, but rather, before changing some part, define a set of tests for it, and then go about changing it.
I would suggest looking into a test coverage tool (I don't code Java, so no clue which tool is best for Java). While it does not tell you when you've tested enough, it does tell you when you've tested too little ;)
Good luck!
If the project isn't already maven-ized I would do that. Also be sure to use a mocking framework like mockito. Hudson is a nice CI tool that integrates nicely with maven.
It looks like you are going to be writing both unit and functional tests, so JUnit might not be the best fit for this. Have you considered TestNG? Since you only have very few tests right now, you have the option to pick what's best for the job.
Right now my JUnit tests read like one long story:
I create 4 users
I delete 1 user
I try to login with the deleted user and make sure it fails
I log in with one of the 3 remaining users and verify I can log in
I send a message from one user to the other and verify that it appears in the outbox of the sender and in the inbox of the receiver.
I delete the message
...
...
Advantages:
The tests are quite effective (they are very good at detecting bugs) and are very stable, because they only use the API; if I refactor the code then the tests are refactored too. As I don't use "dirty tricks" such as saving and reloading the db in a given state, my tests are oblivious to schema changes and implementation changes.
Disadvantages:
The tests are getting difficult to maintain; any change in a test affects other tests. The tests run 8-9 minutes, which is fine for continuous integration but is a bit frustrating for developers. Tests cannot be run in isolation; the best you can do is to stop after the test you are interested in has run, but you absolutely must run all the tests that come before it.
How would you go about improving my tests?
First, understand that the tests you have are integration tests (they probably access external systems and hit a wide range of classes). Unit tests should be a lot more specific, which is a challenge on an already-built system. The main obstacle is usually the way the code is structured:
i.e. classes tightly coupled to external systems (or to other classes that are). To test them in isolation, you need to build the classes in such a way that you can avoid hitting external systems during the unit tests.
Update 1: Read the following, and consider that the resulting design will allow you to actually test the encryption logic without hitting files/databases - http://www.lostechies.com/blogs/gabrielschenker/archive/2009/01/30/the-dependency-inversion-principle.aspx (not in Java, but it illustrates the issue very well) ... also note that you can write really focused integration tests for the readers/writers, instead of having to test it all together.
I suggest:
Gradually include real unit tests on your system. You can do this when doing changes and developing new features, refactoring appropriately.
When doing the previous, include focused integration tests where appropriate. Make sure you are able to run the unit tests separately from the integration tests.
Consider that your tests come close to testing the system as a whole, and thus differ from automated acceptance tests only in that they operate at the border of the API. Given this, think about how important the API is for the product (for example, whether it will be used externally) and whether you have good coverage with automated acceptance tests. This can help you understand the value of having these tests in your system, and also why they naturally take so long. Then decide whether you will test the system as a whole at the interface level, or at both the interface and API levels.
Update 2: Based on other answers, I want to clear something up regarding doing TDD. Let's say you have to check whether some given logic sends an email, logs the info to a file, saves data to the database, and calls a web service (not all at once, I know, but you start adding tests for each of those). In each test you don't want to hit the external systems; what you really want to test is whether the logic makes the calls to those systems that you expect it to make. So when you write a test that checks that an email is sent when you create a user, what you test is whether the logic calls the dependency that does that. Notice that you can write these tests and the related logic without actually having to implement the code that sends the email (and then having to access the external system to know what was sent ...). This will help you focus on the task at hand and help you get a decoupled system. It will also make it simple to test what is being sent to those systems.
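A minimal Mockito sketch of the "an email is sent when a user is created" example above (EmailSender and UserRegistration are invented names); the test only verifies the interaction, no mail server is involved:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class UserRegistrationTest {

    @Test
    public void sendsAWelcomeEmailWhenAUserIsCreated() {
        EmailSender emailSender = mock(EmailSender.class);                 // the real mail system stays out of the test
        UserRegistration registration = new UserRegistration(emailSender);

        registration.createUser("alice@example.com");

        verify(emailSender).sendWelcomeEmail("alice@example.com");         // we only check that the call was made
    }
}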
Unit tests should, ideally, be independent and able to run in any order. So I would suggest that you:
break up your tests to be independent
consider using an in-memory database as the backend for your tests
consider wrapping each test or suite in a transaction that is rolled back at the end (see the sketch after this list)
profile the unit tests to see where the time is going, and concentrate on that
if it takes 8 minutes to create a few users and send a few messages, the performance problem may not be in the tests, rather this may be a symptom of performance problems with the system itself - only your profiler knows for sure!
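A sketch of the in-memory database and rollback suggestions, assuming H2 and plain JDBC are available on the test classpath (how you really obtain the connection will depend on your system):

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class UserDaoTest {

    private Connection connection;

    @Before
    public void openConnectionAndTransaction() throws Exception {
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb"); // fresh in-memory database
        connection.setAutoCommit(false);                                 // keep the test's changes in one transaction
        // create the schema here, e.g. by running your DDL script
    }

    @Test
    public void savesAndReloadsAUser() throws Exception {
        // exercise the DAO under test through 'connection' and assert on what comes back
    }

    @After
    public void rollBackAndClose() throws Exception {
        connection.rollback();   // undo the test's changes so tests stay independent
        connection.close();
    }
}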
[caveat: I do NOT consider these kinds of tests to be 'integration tests', though I may be in the minority; I consider these kinds of tests to be unit tests of features, a la TDD]
Right now you are testing many things in one method (a violation of One Assertion Per Test). This is a bad thing, because when any of those things changes, the whole test fails. It is then not immediately obvious why a test failed and what needs to be fixed. Also, when you intentionally change the behaviour of the system, you need to change many tests to correspond to the changed behaviour (i.e. the tests are fragile).
To know what kind of tests are good, it helps to read more on BDD: http://dannorth.net/introducing-bdd http://techblog.daveastels.com/2005/07/05/a-new-look-at-test-driven-development/ http://jonkruger.com/blog/2008/07/25/why-behavior-driven-development-is-good/
To improve the test that you mentioned, I would split it into the following three test classes with these context and test method names (a code sketch of one of them follows the outline):
Creating user accounts
Before a user is created
the user does not exist
When a user is created
the user exists
When a user is deleted
the user does not exist anymore
Logging in
When a user exists
the user can login with the right password
the user can not login with a wrong password
When a user does not exist
the user can not login
Sending messages
When a user sends a message
the message appears in the sender's outbox
the message appears in the receiver's inbox
the message does not appear in any other message boxes
When a message is deleted
the message does not exist anymore
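A sketch of how the "Logging in" class from that outline could look in JUnit; LoginService stands in for whatever your real API is:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

public class LoggingInTest {

    private LoginService loginService;

    @Before
    public void createAnExistingUser() {
        loginService = new LoginService();                 // each test builds its own small fixture
        loginService.register("alice", "right-password");
    }

    @Test
    public void userCanLoginWithTheRightPassword() {
        assertTrue(loginService.login("alice", "right-password"));
    }

    @Test
    public void userCanNotLoginWithAWrongPassword() {
        assertFalse(loginService.login("alice", "wrong-password"));
    }

    @Test
    public void nonExistingUserCanNotLogin() {
        assertFalse(loginService.login("bob", "whatever"));
    }
}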
You also need to improve the speed of the tests. You should have a unit test suite with good coverage which can run in a couple of seconds. If it takes longer than 10-20 seconds to run the tests, then you will hesitate to run them after every change, and you lose some of the quick feedback that running the tests gives you. (If it talks to the database, it's not a unit test but a system or integration test; those have their uses, but are not fast enough to be executed continually.) You need to break the dependencies of the classes under test by mocking or stubbing them. Also, from your description it appears that your tests are not isolated, but instead depend on the side effects caused by previous tests - this is a no-no. Good tests are FIRST.
Reduce dependencies between tests. This can be done by using Mocks. Martin Fowler speaks about it in Mocks aren't stubs, especially why mocking reduces dependencies between tests.
You can use JExample, an extension of JUnit that allows test methods to have return values that are reused by other tests. JExample tests run with the normal JUnit plugin in Eclipse, and also work side by side with normal JUnit tests. Thus migration should be no problem. JExample is used as follows
@RunWith(JExample.class)
public class MyTest {

    @Test
    public Object a() {
        return new Object();
    }

    @Test
    @Given("#a")
    public Object b(Object object) {
        // do something with object
        return object;
    }

    @Test
    @Given("#b")
    public void c(Object object) {
        // do some more things with object
    }
}
Disclaimer: I am among the JExample developers.
If you use TestNG you can annotate tests in a variety of ways. For example, you can annotate your tests above as long-running. Then you can configure your automated-build/continuous integration server to run these, but the standard "interactive" developer build would not (unless they explicitly choose to).
This approach depends on developers checking into your continuous build on a regular basis, so that the tests do get run!
Some tests will inevitably take a long time to run. The comments in this thread re. performance are all valid. However if your tests do take a long time, the pragmatic solution is to run them but not let their time-consuming nature impact the developers to the point that they avoid running them.
Note: you can do something similar with JUnit by (say) naming tests in different fashions and getting your continuous build to run a particular subset of test classes.
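For example, with TestNG the slow story tests can be tagged with a group (the group name is arbitrary) and then included or excluded per build through group selection in your suite configuration:

import org.testng.annotations.Test;

public class UserMessagingStoryTest {

    // included by the CI build, excluded from the quick local developer build via group selection
    @Test(groups = { "long-running" })
    public void fullUserAndMessagingStory() {
        // the slow end-to-end scenario goes here
    }
}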
By testing stories like the ones you describe, you get very brittle tests. If only one tiny bit of functionality changes, your whole test might break, and then you will likely have to change every test affected by that change.
In fact, the tests you are describing are more like functional tests or component tests than unit tests. So you are using a unit testing framework (JUnit) for non-unit tests. From my point of view there is nothing wrong with using a unit testing framework for non-unit tests, if (and only if) you are aware of it.
So you have the following options:
Choose another testing framework which supports a "story-telling" style of testing much better, as other users have already suggested. You will have to evaluate and find a suitable testing framework.
Make your tests more "unit test"-like. For that you will need to break up your tests and maybe change your current production code. Why? Because unit testing aims at testing small units of code (unit testing purists suggest only one class at a time). By doing this your unit tests become more independent: if you change the behavior of one class, you only need to change a relatively small amount of unit test code. This makes your unit tests more robust. During that process you might see that your current code does not support unit testing very well, mostly because of dependencies between classes; this is why you will also need to modify your production code.
If you are in a project and running out of time, neither option may help you any further. Then you will have to live with those tests, but you can try to ease your pain:
Remove code duplication in your tests: as in production code, eliminate duplication and move the shared code into helper methods or helper classes. If something changes, you might only need to change the helper method or class. This way you converge towards the next suggestion.
Add another layer of indirection to your tests: produce helper methods and helper classes which operate on a higher level of abstraction. They should act as an API for your tests. These helpers call your production code; your story tests should only call those helpers. If something changes, you only need to change one place in your API and don't need to touch all your tests.
Example signatures for your API:
createUserAndDelete(String[] usersForCreation, String[] usersForDeletion);
logonWithUser(String user);
sendAndCheckMessageBoxes(String fromUser, String toUser);
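A rough sketch of such a helper layer, following the signatures above; Api stands in for however your story tests currently reach the production code:

public class TestApi {

    private final Api api = new Api();   // invented stand-in for your real entry point

    public void createUserAndDelete(String[] usersForCreation, String[] usersForDeletion) {
        for (String user : usersForCreation) {
            api.createUser(user);
        }
        for (String user : usersForDeletion) {
            api.deleteUser(user);
        }
    }

    public void logonWithUser(String user) {
        api.login(user);
    }

    public void sendAndCheckMessageBoxes(String fromUser, String toUser) {
        api.sendMessage(fromUser, toUser, "hello");
        // assert on the sender's outbox and the receiver's inbox here, so the check lives in one place
    }
}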
For general unit testing I suggest having a look at xUnit Test Patterns by Gerard Meszaros.
For breaking dependencies in your production code, have a look at Working Effectively with Legacy Code by Michael Feathers.
In addition to the above, pick up a good book on TDD (I can recommend "TDD and Acceptance TDD for Java Developers"). Even though it approaches the subject from a TDD point of view, there is a lot of helpful information about writing the right kind of unit tests.
Find someone who has a lot of knowledge in the area and use them to figure out how you can improve your tests.
Join a mailing list to ask questions and just read the traffic coming through, such as the JUnit list at Yahoo (something like groups.yahoo.com/junit). Some of the movers and shakers in the JUnit world are on that list and actively participate.
Get a list of the golden rules of unit testing and stick them on your (and others') cubicle wall, something like:
Thou shalt never access an external system
Thou shalt only test the code under test
Thou shalt only test one thing at once
etc.
Since everyone else is talking about structure, I'll pick different points. This sounds like a good opportunity to profile the code to find bottlenecks and to run it through code coverage to see if you are missing anything (given the time it takes to run, the results could be interesting).
I personally use the Netbeans profiler, but there are ones in other IDEs and stand alone ones as well.
For code coverage I use Cobertura, but EMMA works too (EMMA had an annoyance that Cobertura didn't have... I forget what it was and it may not be an issue anymore). Those two are free, there are paid ones as well that are nice.