Some time ago, I implemented a remote test execution feature on JUnit 4 for the z2-environment (a Java server development and execution environment for large Java applications). Possibly similar to the Teleporter Rule of Apache Sling (for which I failed to find a JUnit 5 version).
This worked essentially like this:
A custom Runner (Z2UnitTestRunner) is declared on the test class using @RunWith
Z2UnitTestRunner passes a test invocation (actually a test description) to the remote side
On the remote side the test description is executed by a TestExecutor
A registered RunListener logs all test events back to the client side
On the client side any registered RunNotifier will be passed the test events received from the remote side
So it is rather simple actually: it just establishes a man-in-the-middle between Runner and TestExecutor. The cool thing is: while all test execution is performed in the "native" server environment of the application, tests can be triggered from the IDE or ANT/Jenkins as if running locally. We use that quite a lot.
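Schematically, the client-side runner behaves like the following sketch (not the real code; RemoteSession stands in for our actual transport, and the event replay is condensed):

    import org.junit.runner.Description;
    import org.junit.runner.Runner;
    import org.junit.runner.notification.RunNotifier;

    public class Z2UnitTestRunner extends Runner {
        private final Description description;

        public Z2UnitTestRunner(Class<?> testClass) {
            this.description = Description.createSuiteDescription(testClass);
        }

        @Override
        public Description getDescription() {
            return description;
        }

        @Override
        public void run(RunNotifier notifier) {
            // send the description to the server; replay every remote test
            // event (started, finished, failure, ...) on the local notifier
            try (RemoteSession session = RemoteSession.connect()) { // placeholder transport
                session.execute(description, event -> event.applyTo(notifier));
            }
        }
    }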
I am now trying to implement support for JUnit 5. I recently had a deeper look at the various extension and configuration tweaks supported by JUnit 5, but haven't really found a complete solution yet.
The most robust solution, I think, would be to integrate with the DefaultLauncher (as that is what IDEs and ANT use, as far as I can tell) or to provide a custom launcher. The altered behavior would make sure that selectors and filters are sent to the remote side, while all TestExecutionListener events would be conveyed to the client side.
Neither approach seems to be supported currently. At least, as far as I can tell, there is neither a way to provide a custom Launcher nor a way to change the behavior of the DefaultLauncher. But there is a LauncherFactory and a DefaultLauncher - which looks like there IS an intention to support custom launchers (is there?)!
I also looked into implementing a custom engine that would somehow take over test execution delegated to the remote side. But that seems to be the wrong level of interception. Plus I haven't found a way to "suppress" execution via the Jupiter Engine anyway.
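To illustrate the direction I have in mind: on the remote side I would run the received request through the platform launcher and forward all events back over the wire, along these lines (RemoteChannel is a placeholder for the wire protocol):

    import org.junit.platform.engine.TestExecutionResult;
    import org.junit.platform.launcher.Launcher;
    import org.junit.platform.launcher.LauncherDiscoveryRequest;
    import org.junit.platform.launcher.TestExecutionListener;
    import org.junit.platform.launcher.TestIdentifier;
    import org.junit.platform.launcher.core.LauncherFactory;

    public class ForwardingListener implements TestExecutionListener {
        private final RemoteChannel channel; // placeholder for the transport

        public ForwardingListener(RemoteChannel channel) {
            this.channel = channel;
        }

        @Override
        public void executionStarted(TestIdentifier id) {
            channel.send("started", id.getUniqueId());
        }

        @Override
        public void executionFinished(TestIdentifier id, TestExecutionResult result) {
            channel.send("finished", id.getUniqueId(), result.getStatus().name());
        }

        // remote side: execute the request received from the client
        static void runRemotely(LauncherDiscoveryRequest request, RemoteChannel channel) {
            Launcher launcher = LauncherFactory.create();
            launcher.execute(request, new ForwardingListener(channel));
        }
    }

The missing piece is the client side: getting the IDE's launcher to hand me its selectors and filters so I can ship them over - which is exactly what I see no hook for.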
Currently I am looking for any good idea or example that would help me move forward. Any suggestion welcome!
I have a Spring Boot application that receives an API instruction and then begins streaming in a file, hash totaling the file and then streaming it out somewhere else. In the real world this could take one second or it could take hours.
I'd like to add that using Postman and curl we have fully tested this app and it works as per its design.
We need to cover this with JUnit.
We are using JUnit 5. I am trying to run a test where the API is called on a very small file (probably a few seconds to process in total). However, the Spring Boot application shuts down too quickly, meaning that the test never actually completes.
The inbound/outbound streams are both performed using @Async methods, which I don't think helps, as these dive into separate threads.
I also whole-heartedly believe that this kind of processing should not be tested with JUnit. But we have a coverage target to hit. This is IST testing.
My question is...
Does anyone know of a way to keep the Spring Boot application running for a longer time, within the JUnit test?
Just long enough to see the file come out the other side.
I've not used any Mock Frameworks at this point in time. I'm open to this idea but some direction would be appreciated if this is a viable option.
You'll need to introduce some sort of blocking/polling to wait for the asynchronous task to complete before allowing the @Test method to complete.
Awaitility provides good support for testing scenarios like that.
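A minimal sketch, assuming a hypothetical client call that triggers your endpoint and an output location you can poll (adjust both to your app):

    import static org.awaitility.Awaitility.await;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.concurrent.TimeUnit;
    import org.junit.jupiter.api.Test;

    class StreamingIT {

        private MyApiClient api; // hypothetical client for calling your API

        @Test
        void smallFileIsStreamedThrough() {
            Path expectedOutput = Paths.get("/tmp/out/small-test-file.txt"); // illustrative

            api.process("small-test-file.txt"); // hypothetical call to your endpoint

            // keep the test (and thus the Spring context) alive until the
            // async pipeline has produced the output, or fail after 30s
            await().atMost(30, TimeUnit.SECONDS)
                   .pollInterval(500, TimeUnit.MILLISECONDS)
                   .until(() -> Files.exists(expectedOutput));
        }
    }

As long as the @Test method is still waiting, the Spring Boot context stays up, so the @Async threads get the time they need.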
I am writing integration tests for a Java EE Servlet using Arquillian + JUnit. I need to be able to execute code before the server launches.
So is it possible to execute code before @Deployment? I tried @BeforeClass with no luck.
The reason I need to do this is that the trust and keystores for SSL need to exist before the server starts. I am creating the stores programmatically and saving them to files afterwards.
I know a possible workaround would be to have static trust and keystores, but I prefer to create them programmatically before the test starts for full flexibility when writing tests.
There is no real need to have your own specialization of the Arquillian JUnit runner. Besides, that solution would only work for JUnit 4.x, which is what you are using for writing your tests.
Arquillian lets you hook into its runtime through its extension mechanism, and this way you can have custom logic executed before server startup to provide your keystores. I believe this is a more elegant and portable solution.
Please have a look at the sample extensions on GitHub (the lifecycle extension in particular is a good starting point). If you feel like implementing it this way, I'm more than happy to help you. The event you might want to observe is either BeforeSetup or BeforeStart.
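A rough sketch of such an extension (exact event and package names may differ slightly between Arquillian versions, so treat this as a starting point):

    // registered via META-INF/services/org.jboss.arquillian.core.spi.LoadableExtension
    import org.jboss.arquillian.container.spi.event.container.BeforeSetup;
    import org.jboss.arquillian.core.api.annotation.Observes;
    import org.jboss.arquillian.core.spi.LoadableExtension;

    public class KeystoreExtension implements LoadableExtension {
        @Override
        public void register(ExtensionBuilder builder) {
            builder.observer(KeystoreCreator.class);
        }
    }

    class KeystoreCreator {
        // runs before the container is set up; observing BeforeStart works too
        public void createKeystores(@Observes BeforeSetup event) {
            // generate the trust and keystores programmatically here and
            // write them to the files your server configuration points at
        }
    }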
You have two other options for executing code before and after your test:
Rules or ClassRules, which are executed around each test method and around the whole test class respectively
Using a custom test runner (extending the default Arquillian runner)
But as the static deployment method is not invoked by a rule, I assume you have to go for the test runner.
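If you go that route, a minimal sketch (creating the stores in a static initializer so they exist before Arquillian processes @Deployment; the helper name is illustrative):

    import org.jboss.arquillian.junit.Arquillian;
    import org.junit.runners.model.InitializationError;

    public class KeystoreAwareArquillian extends Arquillian {

        static {
            // hypothetical helper that generates the stores and writes them to disk
            KeystoreCreator.createKeystores();
        }

        public KeystoreAwareArquillian(Class<?> testClass) throws InitializationError {
            super(testClass);
        }
    }

    // used on the test class as @RunWith(KeystoreAwareArquillian.class)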
I want to test the effects of a library call of my program with a real device. This call starts a service that sends an HTTP request to a server whose URL is hard-coded in the resources.
I want to verify that the request is sent correctly. So I set up a local HTTP server, but to be able to use it I have to change/override/mock the resource so it points to http://127.0.0.1 instead.
I want to do "end-to-end" testing; in this case it's important that the service makes an actual network request, although locally.
I've tried to override the value by creating a string resource with the same name in androidTest/res/values/strings.xml, but that resource is only visible in the test package, not in the application package.
Using the Instrumentation class only allows me to obtain the Context reference, but there's no way to replace it (or the return value of getResources()) with a mock or something similar.
How can I change a resource value of an Application under test?
You have a few choices:
Dependency injection
Stubs/mocks
SharedPreferences
Scripts or gradle tasks
Dependency injection
Use a library like RoboGuice or Dapper. Inject an object that handles making the API requests. Then, in your test setup, you can replace the injection modules with testing versions instead. That way your test code runs instead of the original; that code can pass in different strings (either hard-coded or from the test strings.xml) instead.
DI libraries can be expensive to set up: there is a high learning curve, and there can be performance problems if they are not used correctly. They can even introduce hard-to-debug problems if the scope/lifetime of the objects isn't configured correctly. If testing is the only reason to use DI, it might not be worth it to you if you're not comfortable with a DI container.
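A minimal sketch of the idea, independent of any particular DI library (all names here are made up for illustration):

    // the abstraction the app depends on instead of a hard-coded URL
    public interface ApiEndpoint {
        String baseUrl();
    }

    // production binding: supplies the real URL (e.g. from res/values/strings.xml)
    public class ProductionEndpoint implements ApiEndpoint {
        public String baseUrl() { return "https://api.example.com"; }
    }

    // bound only in the test module, pointing at your local HTTP server
    public class LocalTestEndpoint implements ApiEndpoint {
        public String baseUrl() { return "http://127.0.0.1:8080"; }
    }

In your test setup you swap the module so that ApiEndpoint resolves to LocalTestEndpoint; the service code itself stays untouched.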
Stubs/mocks
Wrap up your calls in something that implements a custom interface you write. Your main implementation then fills in the host URL and calls the API. Then, in tests, use a combination of stubs or mocks on that interface to replace the code that fills in the host URL part.
This is less of an integration test, since the stubs or mocks will be replacing parts of the code. But it is simpler than setting up a dependency injection framework.
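For example, reusing the hypothetical ApiEndpoint interface from above with a mocking library such as Mockito (the service and its setter are illustrative):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    @Test
    public void sendsRequestToLocalServer() {
        ApiEndpoint endpoint = mock(ApiEndpoint.class);
        when(endpoint.baseUrl()).thenReturn("http://127.0.0.1:8080");

        // hypothetical setter on the service under test
        service.setEndpoint(endpoint);
        service.start(); // the real networking code now talks to the local server
    }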
SharedPreferences
Use the Android SharedPreferences system. Have it default to a certain endpoint (production), but allow the app to be started on the testing device and then use some dialog or settings screen to change the host URL. Run the tests again and now they point to a different API URL.
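A sketch of the read side, with an illustrative preference key and a production default:

    import android.content.Context;
    import android.content.SharedPreferences;

    public class HostSettings {
        public static String apiHost(Context context) {
            SharedPreferences prefs =
                context.getSharedPreferences("settings", Context.MODE_PRIVATE);
            // falls back to production unless a tester has overridden it
            return prefs.getString("api_host", "https://api.example.com");
        }
    }

A debug-only settings screen can then write http://127.0.0.1 into the api_host preference before the tests run.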
Scripts or gradle tasks
Write some script or gradle task to modify the source before it is compiled in certain scenarios.
This can be fairly complicated and might even be too platform- or system-dependent if not done right. It will probably be fairly brittle to changes in the build system. It might also introduce bugs if the wrong command is run to build the final packaged version and the wrong code goes out to the market.
Personal opinion
Which do I recommend? If you and/or your team are familiar with a DI library like RoboGuice or Dapper, I recommend that option. It is the most formal, type-safe and strict solution. It also maintains more of the integrity of the stack to test the whole solution.
If you're not familiar with a good DI library, stubs/mocks and interface wrappers are a good fall back solution. They partly have to be used in the DI solution anyway, and you can write enough tests around them to cover a good majority of the cases you need to test (and are in control of). It is close enough to the DI solution that I would recommend this to everyone who doesn't use DI in the project already.
The SharedPreferences solution works great for switching between staging and production environments for QA and support. However, I wouldn't recommend it for automated tests, since the app will most likely be reinstalled/reset so often during development that it would get annoying resetting that URL every time. Also, first runs of tests would probably fail; headless tests on a CI server would fail, etc. (You could default the URL to localhost, but then you run the risk of accidentally releasing that default to production sometime.)
I don't recommend scripts or the hacked-up gradle tasks. Too brittle, less clear to other developers that come behind you, and more complicated than they're worth, IMO.
In addition to Jon Adams's solutions, there's a further one:
Override resource in build type
By default, a library module is built in release mode when it's used by another module; the debug mode is only used for testing (unit tests and instrumented tests). Therefore, using resource overriding, it's possible to change the resource value for the library's instrumentation tests only, while the library's consumers keep the original value.
This has some caveats though:
Instrumented/integration tests must stay on the library itself, not on the main application package;
The same resource values have to be shared across all tests (unless using product flavors)
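In practice the override is just a second strings.xml in the library's debug source set, for example (the resource name api_host and both URLs are illustrative):

    library/src/main/res/values/strings.xml
        <string name="api_host">https://api.example.com</string>

    library/src/debug/res/values/strings.xml
        <string name="api_host">http://127.0.0.1:8080</string>

Instrumented tests of the library build against debug and therefore see the local URL; apps consuming the release artifact see the production one.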
For some testing purposes it would be great not having to restart my Jetty server for every test run.
With JRebel I can apply source changes directly.
Is it possible to run my Jetty server in a way that I could inject changes dynamically and then rerun the tests without having to restart the server?
It depends on the kind of changes that you want to inject.
That said, I believe there is a deeper issue here. Restarting Jetty is the right thing to do from a test-quality standpoint. It ensures that each test starts from a clean slate, thereby minimizing the risk of inter-test dependencies. On the other hand, this is costly (time-wise) and makes your suite run slower.
If I were you, I would address this as follows: I would refactor the code that I want to test (presumably: servlets) such that it does not depend on the Jetty infrastructure and can run stand-alone. For instance, if I have a servlet class SomeServlet with its doGet() method, I would refactor it such that it implements MyServlet, whose doGet() takes MyRequest and MyResponse parameters.
Once you do that, you can unit-test MyServlet without a Jetty server. This will allow you not only to test faster, but also to ease your debugging sessions and make your components more decoupled. Of course, you will need to add some plumbing code: a class that adapts the servlet interface to a MyServlet object (via delegation).
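A minimal sketch of that refactoring (the interfaces are reduced to the bare minimum for illustration):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // the container-independent abstractions
    interface MyRequest  { String parameter(String name); }
    interface MyResponse { void write(String body); }
    interface MyServlet  { void doGet(MyRequest req, MyResponse resp); }

    // the plumbing: adapts the real servlet API to MyServlet via delegation
    public class ServletAdapter extends HttpServlet {
        private final MyServlet delegate;

        public ServletAdapter(MyServlet delegate) {
            this.delegate = delegate;
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            delegate.doGet(
                req::getParameter,
                body -> {
                    try {
                        resp.getWriter().write(body);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
        }
    }

Your unit tests then instantiate the MyServlet implementation directly and pass in trivial fake MyRequest/MyResponse objects; no Jetty involved.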
We would like to have a set of tests as part of our web application. The tests will be used for analyzing the health status of the application, so a support person or a scheduler can run the test to see if the application itself and various required remote systems are available.
I have seen this being done using some kind of web-based JUnit frontend; it allowed running tests and reported the results as HTML. This would be great because the developers know JUnit, but I couldn't find the library in the intertubes.
Where can I find a library doing this?
You can use some free services to verify the availability of your system. Here are two that I've used:
mon.itor.us
pingdom
Another thing you can take a look at is JMeter, but it does not have a web UI.
Original answer:
Perhaps you mean functional tests (that can be run through JUnit). Take a look at
Selenium - it's a web functional testing tool.
(Note that these are not unit tests. They don't test individual units of the code. Furthermore unit tests are executed at build time, not at runtime.)
Bozho is correct, these are not unit tests, but I have done something similar. At my company I am not the one that ultimately deploys these things to our test environment or production environment. During development I create a couple of servlets that test things like whether the app can get a valid database connection, whether it can hit our AD server, etc. Each then basically prints out a message and indicates success or failure.
That way when I have the code deployed to one of our environments, I can have the person deploying it hit the URL and make sure everything comes back OK. When I get ready to do the final deployment I just remove the servlet config.
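A stripped-down sketch of such a servlet (the JNDI name and the set of checks are illustrative; yours will differ):

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    public class HealthCheckServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            PrintWriter out = resp.getWriter();
            try {
                DataSource ds = (DataSource)
                    new InitialContext().lookup("java:comp/env/jdbc/appDS"); // illustrative
                try (Connection ignored = ds.getConnection()) {
                    out.println("database: OK");
                }
            } catch (NamingException | SQLException e) {
                resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
                out.println("database: FAILED - " + e.getMessage());
            }
            // ...similar checks for the AD server and other remote systems
        }
    }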
If you already have a set of tests composed and ready to run, then Hudson can run those tests on a schedule and report on the results.
Update: If you're looking for a tool to check your servers and applications every few minutes for availability check out Nagios.
Maybe you mean some kind of acceptance test tool. If so, have a look at FitNesse.
What you're probably looking for is CruiseControl.NET - it combines with NUnit/JUnit etc. to make an automated testing framework with HTML reporting tools and a tray app for your desktop as well. I actually just downloaded it again an hour ago for a new role - it's really good.
It can be used to run anything from unit tests to getting files from source control, to kicking off compiler builds or rebooting servers (when used with NAnt - a .Net build tool).
You should look for a Continuous Integration tool like Jenkins.