Multiple versions of a tested app tested by one JUnit test - Java

Our app is as follows:
[Frontend] <-restAPI-> [Backend]
The backend is supposed to always be at the latest version and can support multiple versions of the frontend, like Ver1, Ver2, etc. There can be minor changes in the REST API protocol or even in how the frontend reacts (more functions or different behavior).
This test project tests that the communication is correct, that the frontend behaves properly, and that the backend serves the right data.
We would like the same test project branch to be used for all supported versions. Right now there are really only minor differences, so our Java test code has
if ("ver1".equals(version)) {
    ....
} else if ("ver2".equals(version)) {
    ....
}
The question is: what is the most elegant way to split the versions? Right now it works, but as the number of versions grows it would become a mess.
I thought of having a parent @Test method and deciding which child test to run according to the version:
BasicTest.java
BasicVer1Test.java
BasicVer2Test.java
The question is whether my idea is good; maybe somebody has faced a similar problem.
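For illustration, the layout described above could look like the following minimal sketch (assuming JUnit 5; the ApiClient type is a made-up stand-in for the real REST test client):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertNotNull;

// Made-up stand-in for the real REST client used by the tests.
class ApiClient {
    private final String version;
    ApiClient(String version) { this.version = version; }
    String fetchData() { return "data for " + version; }
}

// Tests shared by all versions live in the abstract parent class.
abstract class BasicTest {
    protected abstract ApiClient createClient();

    @Test
    void backendServesData() {
        // An expectation that holds for every supported frontend version.
        assertNotNull(createClient().fetchData());
    }
}

class BasicVer1Test extends BasicTest {
    @Override protected ApiClient createClient() { return new ApiClient("ver1"); }
    // ver1-specific @Test methods go here.
}

class BasicVer2Test extends BasicTest {
    @Override protected ApiClient createClient() { return new ApiClient("ver2"); }
    // ver2-specific @Test methods go here.
}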

The responsibility of your "test system" is testing. Not version control.
In other words: the "correct" answer here is to use a source code management system!
Your code base contains
A) source code
B) associated tests
So, when your product has several distinct versions, that should be managed by keeping A) and B) together within the same branches.
Whereas in your setup those aspects are really separated: your "test code base" is not version controlled in the same way as your product code. That is the crucial point to address.
Anything else is just fighting symptoms!
[EDIT] To add an example:
Branch 1 - Version 1
    Source for Version 1
    Tests for Version 1
-------------------------------
Branch 2 - Version 2
    Source for Version 2
    Tests for Version 2
When a new version adds more functions or changes behavior, it should be tested separately, and its sources should be maintained separately!


How to ignore tests and log reason in Spock 1.x

I have been writing some tests using Groovy/Spock and need some decorator/logic that would only run some tests based on a variable's value.
I was reading about @IgnoreIf, which is the perfect option, as it allows me to write something along the lines of
@IgnoreIf(value = { !enabled }, reason = "Feature flag not enabled")
The problem I have is, the reason argument was only released in 2.1, and my company is using 1.3 due to some major issues whilst migrating to v2.
Is there any other option I could use to achieve the same result, bearing in mind that, at the end of the day, I want to see skipped tests in the pipeline together with a reason why?
Thanks
The problem I have is, the reason argument was only released in 2.1, and my company is using 1.3 due to some major issues whilst migrating to v2.
My recommendation is to fix those issues and upgrade instead of investing a lot of work to add a feature to a legacy Spock version. Solve the problem instead of evading it.
I want to see skipped tests in the pipeline but with a reason why.
You did not specify what exactly you need. Is this about the console log, or rather about generated test reports? In the former case, a cheap trick would be:
@IgnoreIf({ println 'reason: back-end server unavailable on Sundays'; true })
If you need a nice ignore message in the test report, it gets more complicated. You might need to develop your own annotation-based Spock extension which would react to your custom @MyIgnoreIf(value = { true }, reason = 'whatever') and also somehow hook into report generation, which might be possible, but I never tried.
Besides, the reason that Spock 2.x can offer users skip reasons more easily is that its engine runs on top of the JUnit 5 platform, which has a Node.SkipResult.skip(String reason) method out of the box that Spock can delegate the skip reason to.
Spock 1.x, however, runs on top of JUnit 4, where there is no such infrastructure. Instead, you could use JUnit 4 assumptions, which are basically the equivalent of dynamic skip conditions with reasons, but that would be code-based rather than annotation-based, e.g.:
def dummy() {
    org.junit.Assume.assumeFalse('back-end server unavailable on Sundays', true)

    expect:
    true
}
The output would be something like this, and the rest of your test method execution would be skipped:
org.junit.AssumptionViolatedException: back-end server unavailable on Sundays
at org.junit.Assume.assumeTrue(Assume.java:68)
at org.junit.Assume.assumeFalse(Assume.java:75)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:234)
at de.scrum_master.stackoverflow.q74575745.AuthRequestInterceptorTest.dummy(AuthRequestInterceptorTest.groovy:26)
In IntelliJ IDEA, the method is actually reported as skipped.
I think that JUnit 4 assumptions are good enough for your legacy code base until you upgrade to Spock 2.x and can enjoy all of the nice new syntax features and extensions.

Maven versioning for multiprofile project

I need to support two builds with some differences in library versions, so I made two build profiles, and this works fine, but now I have a versioning problem while preparing a release.
I use the major.minor.revision-qualifier versioning scheme, where:
major - a breaking change
minor - a backward-compatible change (new features)
revision - a bug fix
qualifier - the only qualifier I use now is SNAPSHOT, to mark unreleased versions.
But since I now have two builds, I need to add some qualifier to the release versions, e.g. 1.8.0-v1 and 1.8.0-v2, but then I won't be able to have two SNAPSHOT versions. Or I need to break the "rules" about major/minor version usage and make two "branches", e.g. release 1.8.0 and 1.9.0, and then increase only the last number, no matter whether I am fixing a bug or adding new features.
I have a feeling that I am doing something antipattern-like; could anyone please give me some advice?
P.S. I already have a heavily reworked 2.x version, so I can't have separate "branches" as 2.x and 1.x versions, unless I change this new version to 3.0.
Update:
I guess I can't make this story short, so here we go.
In my project I used to have the ojdbc6 and aqapi jars (Oracle libraries); my app was working on Java 7 and Apache ServiceMix 5 with an Oracle 11 database. But then some clients updated to Oracle 12 and I needed new libraries for that, but those only work on Java 8, while the ActiveMQ that is part of ServiceMix 5 doesn't work on Java 8. So I updated to ServiceMix 7, and after some changes it works fine. The rest of the differences between the build profiles are the versions of ServiceMix-provided libraries (a complete list is redundant here, I guess).
In the end, despite the fact that the new JDBC driver is fully compatible with the old database (I am not completely sure about aqapi and the client side of ActiveMQ, but they should also be compatible), I can't force every client to update and reinstall Java/ServiceMix at the same time, but I still want to be able to fix and add stuff for all of them.
So I need to support two builds for different versions of ServiceMix, at least for now (it's a temporary solution, but as the proverb says: there is nothing more permanent than the temporary, so I want to do it in the most correct way possible).
P.S.
I decided to use profiles instead of a separate branch in the VCS because it looked like the much easier solution, but it doesn't matter in terms of the versioning problem.
So, as @Software Engineer said, after thinking about the reasons and writing this update I realised it's not a multiprofile problem, it's purely a versioning problem; it would be absolutely the same if I made a branch in the VCS.
So in the end I decided to make 1.x.x and 2.x.x versions, despite the fact that the changes are not that "breaking": they are not fully backward-compatible (even though the new version can work with the old database, it still needs the new ServiceMix).
This multiprofile workaround doesn't look pretty, but I left it in place: it allows me to build both versions in one go (I use the mvn versions:set -DnewVersion command after the first build), and I don't need to maintain two branches this way, so it saves some time.

Shall I Make my REST Gateway a Library?

Suppose there is a REST service with several clients. Each client has to do roughly the same work to call the REST service: construct a URL, send a request, deserialize the response, map HTTP return codes to exceptions, etc.
This seems to be duplicate code so maybe it's a good idea to put the REST gateway code into a reusable library. In Java this could look like the following:
There is a jar file which depends on Jersey and contains the REST client code.
The REST client is a simple class or CDI bean.
All clients simply depend on this jar and call the members of the aforementioned class.
So no client has to worry about REST and can just call methods.
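For concreteness, a hedged sketch of such a gateway class using the JAX-RS client API (the gateway class, DTO, and endpoint path are made up for illustration):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

// Hypothetical DTO returned by the service.
class UserDto {
    public long id;
    public String name;
}

// Hypothetical shared gateway, shipped in the jar together with its Jersey
// dependency; callers just invoke methods and never touch HTTP details.
public class UserServiceGateway {
    private final Client client = ClientBuilder.newClient();
    private final String baseUri;

    public UserServiceGateway(String baseUri) {
        this.baseUri = baseUri;
    }

    public UserDto getUser(long id) {
        return client.target(baseUri)
                .path("users/{id}")
                .resolveTemplate("id", id)
                .request("application/json")
                .get(UserDto.class);
    }
}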
Is this a good idea?
(this question originates from this other question)
This can get you into trouble with dependencies. It is quite a typical problem with dependencies, not bound to REST or Jersey specifically. But let's have a look at the following scenario:
Dependencies
Suppose there are two servers (S1 and S2), two clients (C1 and C2), and two libraries containing the REST client code for accessing the servers (L1 and L2). Both clients need to query both servers, so the call structure looks like this:
C1 ---> L1 ---> S1
   \    ^
    \  /
     X
    /  \
   /    v
C2 ---> L2 ---> S2
Furthermore, both L1 and L2 depend on Jersey. Also, the container (maybe you are running your application on a GlassFish or WildFly application server) has a dependency on Jersey (or at least on the JAX-RS API [1]).
The simplified dependency structure of C1 looks like this:
            C1
          / |  \
         /  |   \
        <   v    >
Container   L1    L2
    |       |     |
    v       v     v
 Jersey   Jersey  Jersey
As long as all three versions of Jersey are the same, everything is fine. No problem whatsoever. But if the versions differ, you may run into nasty NoClassDefFoundErrors, NoSuchMethodErrors, and the like [2].
Rigid Structures
If the versions are similar enough you won't run into these errors directly. Maybe it will work for quite a while. But then the following may happen:
You want to update your container. This updates the container's version of Jersey, which may become incompatible. ==> Boom. So using L1 and L2 prevents you from updating. You are stuck with the old version as long as L1 and L2 are not updated.
L2 is updated to a new version of Jersey. You can only use it if L1 and your container are also updated. So you stick to the old version (you can do that because of the loose coupling of REST). But then new functionality is added to S2 which is only usable with a new version of L2, and again you are stuck.
Keep in mind that the errors may or may not occur. There is no guarantee that you get into trouble and there is no guarantee that it will work. It's a risk, a ticking bomb.
On the other hand this is a simple example with just two services and two clients. When there are more, the risk is increasing, of course.
Stable Dependencies Principle
Dependencies should adhere to the Stable Dependencies Principle (SDP), which reads "Depend in the direction of stability." Uncle Bob defines it using the number of afferent and efferent dependencies, but SDP can be understood in a broader sense.
The connection to SDP becomes obvious when we look at the given dependency structure and replace Jersey with Guava. This problem is really common with Guava, because Guava is typically not backwards compatible. It's a rather unstable library that can be used in an application (which is unstable), but you should never use it in a reusable library (which ought to be stable).
With Jersey it's not that obvious, because you may consider Jersey quite stable. In fact it is very stable with respect to Bob Martin's definition. But L2 may be quite stable, too. Maybe it's developed by another team, by some guy who left the company, and nobody dares to touch it because last year somebody tried to do so, which resulted in the C2 team having dependency issues. Maybe L2 is also stable because there is some ongoing quarrel between the managers of the teams developing C1 and L2, so the L2 manager claims there are no resources, no budget, or whatever, and updating L2 can only be done at the end of next year.
So depending on your code base, your organizational structure, etc., libraries can be more stable than you would wish them to be.
Exceptions and Remedies
There are some languages which have "isolated dependencies", which means you can have multiple versions of the same library in the same application.
You could repackage Jersey (copy the source, change the package name, and maintain it yourself). Then you can have the normal Jersey in version X and your repackaged version of Jersey in version Y. This works, but of course it has its own problems: it is a lot of work, and you have to maintain software which was not written by you.
The problem can be negligible if everything is developed by the same team. Then you have control over the whole thing and you don't depend on other teams. Nevertheless, as soon as your code base grows, updating to a new version can be a lot of work, as you cannot do it gradually anymore; you have to move many services to a new version at once.
Build tools like Maven give you some control over the version you depend on. You may consider marking the Jersey dependency optional in L1 and L2 and provided in C1 and C2. Then it is not included in your war file, so there is just one version of Jersey (the one in the container). As long as this version is interface compatible, it will work [3].
What to Do Instead
I would recommend just putting your representation classes, i.e. your DTOs, into a client jar and letting each client decide which library to use for making the REST call. Typically those REST gateways are quite simple, and sharing them between applications is normally not worth the risk. At least think about the issues mentioned here before running into them without even knowing.
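As a hedged sketch of that recommendation (the DTO and endpoint are made up): the shared jar contains only the representation class, while each client wires up its own HTTP stack, here Java's built-in HttpClient plus Jackson:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import com.fasterxml.jackson.databind.ObjectMapper;

// The shared client jar contains only this plain DTO, with no HTTP dependencies.
class UserDto {
    public long id;
    public String name;
}

// Each client writes its own small gateway with whatever HTTP library it
// already uses, so no Jersey version is forced on anybody.
class UserGateway {
    private final HttpClient http = HttpClient.newHttpClient();
    private final ObjectMapper mapper = new ObjectMapper();
    private final String baseUri;

    UserGateway(String baseUri) {
        this.baseUri = baseUri;
    }

    UserDto fetchUser(long id) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUri + "/users/" + id))
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            // Map HTTP errors to exceptions however the client prefers.
            throw new IllegalStateException("Unexpected status: " + response.statusCode());
        }
        return mapper.readValue(response.body(), UserDto.class);
    }
}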
[1] In order to keep the explanation simple, let's neglect the difference between Jersey and the JAX-RS API. The argument is the same.
[2] In reality it's a bit more complicated. Typically your build tool (Maven, etc.) will select one version of Jersey and put it into the war file. The other version of Jersey is "omitted for conflict" with the selected one. This is OK as long as the versions are compatible. So you end up with two versions: the one in your container and the one in your war file. Whereas interface compatibility should be enough in the former case, it won't be enough for these remaining two versions.
[3] As there is only one version, the problem mentioned in [2] cannot occur. The problem with interface compatibility stays of course.

Can builds from the same source code yield functionally different executables?

Recently, a colleague of mine said something along these lines: "consecutive APKs (executables) produced by a build server from the same source code might not be the same". The context for this discussion was whether QA performed on build X also applies to build Y, which was produced by the same build server (configured the same way) from the same source code.
I think that generated executables might not be identical due to various factors (e.g. different timestamp), but the question is whether they can be functionally different.
The only scenario that I can think of in which the same source code could produce different functionality is a multi-threading issue: in the case of incorrectly synchronized multi-threaded code, different re-ordering/optimization actions performed at compile time could affect the poorly synchronized code and change its functional behavior.
My questions are:
Is it true that consecutive builds performed by the same build server from the same source code can be functionally different?
If #1 is true, are these differences limited to incorrectly synchronized multi-threaded code?
If #2 is false, what are the other parts that can change?
Links to any related material will be appreciated.
It's certainly possible in a few cases. I'll assume you are using Gradle to build your Android app.
Case 1: You are using a 3rd party dependency that's included with a version wildcard, such as:
compile 'com.example:somelib:1.+'
It's possible for the dependency to change in this case, which is why it's highly recommended to use explicit dependency versions.
Case 2: You're injecting environment information into your app using Gradle's buildConfigField. These values are injected into your app's BuildConfig class. Depending on how you use those values, the app behavior could vary between consecutive builds.
Case 3: You update the JDK on your CI between consecutive builds. It's possible, though I'd assume highly unlikely, that your app's behavior could change depending on how it's compiled. For example, you might be hitting an edge case in the JDK that gets fixed in a later version, causing code that previously worked to act differently.
I think this answers your first question and second question.
edit: sorry, I think I missed some important info from your OP. My case 2 is an example of your "e.g. different timestamp", and case 3 violates your "configured the same way". I'll leave the answer here though.
I think that different functionality can only be caused by discrepancies in the environment, or maybe you are using a snapshot version of some 3rd-party library and it was updated in the meantime.
Some advice:
if it is possible to rebuild it, use the verbose mode of your build tool (-X in Maven, for example) and compare the output line by line with a diff program
If the same source code could produce different results on the same machine / configuration, programming as we know it would probably not be possible.
There is always a chance that things break when the language level, operating system, or some other dependency changes. If all that changes is the time of the build, you would have to be doing something fundamentally wrong.
With Android/Gradle, one possible cause of different behavior, or of errors in general, is using + for library versions in your build.gradle file. This is why you should avoid doing so: a consecutive build could fetch a newer/different version, hence you'd have different source code, and thus it could create a functionally different executable.
A good build should always be repeatable. This means that, given the same configuration, it should have the same results. If it weren't, you could never rely on anything and would have to do total regression testing on everything.
[...] consecutive builds performed by the same build server from the same source code can be functionally different
No. As described above, if you use the same versions and the same source code, the build should produce the same behavior. Unless you do something very wrong.
[...] are these differences limited to incorrectly synchronized multi-threaded code?
This would imply a bug in your compiler. While this is possible, it is extremely unlikely.
[...] what are the other parts that can change?
Besides the timestamp and the build number nothing else should change, given the same source code and configuration.
It is always a good idea to include unit (and other) tests in your build. This way you can test specific behavior to be the same with each build.
They should be identical, except when:
there are threading/optimization issues in the build system
there are hardware failures (CPU/RAM/HDD issues) in the build environment
there is time- or platform-dependent code in the build system itself or in the build scripts
So if you are building the exact same code on the exact same hardware, using the exact same version of the build system and the same OS version, and your code does not specifically depend on the build time, the result should be the same. The artifacts should even have the exact same checksums and size.
Also, the result is only the same if your code does not depend on external modules that are downloaded from the Internet at build time, as Gradle/Maven do: you can't guarantee these libraries stay the same, because they are not under your version control. Moreover, a dependency version may be specified inexactly (like 2.0.+), so if the maintainer updates that module, your build system will use the updated one, and your builds are then effectively generated from different source code.
As somebody mentioned, running unit tests on the build server is good practice to make sure your build is stable and doesn't contain obvious bugs.
While this question addresses Java/Android, Jon Skeet blogged about different C# parsers treating some Unicode characters differently, mostly due to changes in the Unicode character database.
In his examples, the Mongolian Vowel Separator (U+180E) is considered either a whitespace character or a character allowed within an identifier, yielding different results in variable assignments.
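If you want to see which side of that change your own JDK falls on, a quick check like the following should do; whether it prints true or false depends on the Unicode data bundled with your JDK version, which is an assumption worth verifying yourself:

public class MvsCheck {
    public static void main(String[] args) {
        // U+180E, the Mongolian Vowel Separator, was reclassified from a
        // space character to a format character in Unicode 6.3, so JDKs
        // built against different Unicode versions disagree about it.
        System.out.println(Character.isWhitespace('\u180E'));
    }
}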
It is definitely possible. You can construct an example program that behaves functionally differently every time you start it up.
Imagine a strategy design pattern that lets you choose between algorithms at runtime and loads one algorithm based on an RNG.
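A minimal sketch of that idea (all names are made up): the very same binary prints a different result depending on which strategy the RNG picks at startup.

import java.util.List;
import java.util.Random;
import java.util.function.IntBinaryOperator;

public class RandomStrategyDemo {
    public static void main(String[] args) {
        // Two interchangeable "strategies" behind a common interface.
        List<IntBinaryOperator> strategies = List.of(
                (a, b) -> a + b,   // strategy 1: addition
                (a, b) -> a * b);  // strategy 2: multiplication

        // Pick one at random at startup.
        IntBinaryOperator chosen =
                strategies.get(new Random().nextInt(strategies.size()));

        System.out.println(chosen.applyAsInt(6, 7)); // prints 13 or 42
    }
}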

Multiple versions in a web application: duplication or messy code?

I used to manage versions with a tag in Git. But that was a long time ago, for stand-alone applications. Now the problem is that I have a web application, and clients may connect to the same application expecting to communicate with different versions of it.
So, I added a path variable for the version to the input, in this way:
@PathParam("version") String version
And the client can specify the version in the URL:
https://whatever.com/v.2/show
Then across the code I added conditions like this:
if(version.equals("v.2") {
// Do something
}
else if(version.equals("v.3") {
// Do something else
}
else {
// Or something different
}
The problem is that my code is becoming very messy. So I decided to do it in a different way: I added this condition at only one point in the code, and from there I call different classes according to the version (a sketch of this dispatch follows the list):
MyClassVersion2.java
MyClassVersion3.java
MyClassVersion4.java
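For illustration, a hedged sketch of that single dispatch point, reusing the class names from the question (the handler interface and return values are made up):

import java.util.Map;

// Hypothetical interface implemented once per supported version.
interface ShowHandler {
    String show();
}

class MyClassVersion2 implements ShowHandler {
    @Override public String show() { return "v.2 behavior"; }
}

class MyClassVersion3 implements ShowHandler {
    @Override public String show() { return "v.3 behavior"; }
}

// The only place in the code base that inspects the version string.
class ShowDispatcher {
    private static final Map<String, ShowHandler> HANDLERS = Map.of(
            "v.2", new MyClassVersion2(),
            "v.3", new MyClassVersion3());

    String show(String version) {
        ShowHandler handler = HANDLERS.get(version);
        if (handler == null) {
            throw new IllegalArgumentException("Unsupported version: " + version);
        }
        return handler.show();
    }
}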
The problem now is that I have a lot of duplication.
And I want to solve this problem as well. How can I build a web application that:
1) Deals with multiple versions
2) Is not messy (does not scatter conditions everywhere)
3) Doesn't have much duplication
Normally, when we speak of an old version of an application, we mean that the behavior and appearance of that version are cast in stone and do not change. If you make even the slightest modification to the source files of that application, then its behavior and/or appearance may change (and according to Murphy's law, it will change), which is unacceptable.
So, if I were you, I would lock all the source files of the old version in the source code repository, so that nobody can commit to them, ever. This approach solves the problem and dictates how you have to go about everything else: Every version would have to have its own set of source files which would be completely unrelated to the source files of all other versions.
Now, if the old versions of the application must have something in common with the newest version, and this thing changes (say, the database), then we are not exactly talking about different versions of the application; we have something more akin to different skins: the core of the application evolves, but users who picked a skin some time ago are allowed to stick with that skin. In this case, the polymorphism solution which has already been suggested by others might be a better approach.
Your version number sits in the part of the URL called the 'context root'.
You could release multiple WAR files, each of which is configured to respond on a different context root.
So one WAR for version 1, one WAR for version 2, etc.
This leaves you with code duplication.
So what you are really asking is, "how do I efficiently modularise Java web applications?".
This is a big question, and leads you into "Enterprise Java".
Essentially you need to solve it by abstracting your common code into a separate application. Usually this is called an 'n-tier' design.
So you'd create an 'integration tier' application which your 'presentation' layer WAR files speak to.
The Integration tier contains all the common code so that it isn't repeated.
Your integration tier could be EJB or webservices etc.
Or you could investigate using OSGi.
