I have a bunch of Java unit tests, and I'd like to integrate a continuous testing framework into my codebase. Ideally, I would like to write a Maven / Ant target or bash script which would start running tests whenever the files it's watching change. I've looked at a couple of options so far (Infinitest, JUnit Max) but both of them appear to want to run as IDE plugins.
My motivation for using a CLI-only tool is that my coworkers use a broad set of text editors and IDEs, but I want to ensure that anyone can run the tests constantly.
EDIT: I did not consider Jenkins or other more typical CI solutions for several reasons:
We already have a CI build tool for running unit and integration tests after every push.
They hide the runtime of the tests (because they run asynchronously), allowing tests to become slower and slower without people really noticing.
They usually only run tests if your repository is in some central location. I want unit tests to be running while I'm editing, not after I've already pushed the code somewhere. The sooner I run the tests, the sooner I can fix whatever mistake I made while editing. Our JavaScript team has loved a similar tool, quoting speedup of 3x for iterating on unit test development.
I use a solution that continuously polls a directory for changes. (General code: http://www.qualityontime.eu/articles/directory-watcher/groovy-poll-watcher/ — the article is in Hungarian, but the source code is in English.)
Below is a customized version for compiling a nanoc-based site. Review it and adapt it to your needs. (Groovy)
import groovy.transform.Canonical

// the job to run whenever a change is detected
def job = {
    String command = /java -jar jruby-nanoc2.jar -S nanoc compile/
    println "Executing " + command
    def proc = command.execute()
    proc.waitForProcessOutput(System.out, System.err)
}

params = [
    closure: job,
    sleepInterval: 1000,
    dirPath: /R:\java\dev\eclipse_workspaces\project\help\content/
]

@Canonical
class AutoRunner {
    def closure
    def sleepInterval = 3000
    // stop automatically after 8 hours when checking every 3 seconds
    def nrOfRepeat = 9600
    def dirPath = "."
    long lastModified = 0

    def autorun() {
        println "Press CTRL+C to stop..."
        println this
        def to_run = {
            while (nrOfRepeat--) {
                sleep(sleepInterval)
                if (anyChange()) {
                    closure()
                }
            }
        } as Runnable
        Thread runner = new Thread(to_run)
        runner.start()
    }

    // true if any watched file was modified since the last check
    boolean anyChange() {
        def max = lastModified
        new File(dirPath).eachFileRecurse {
            if (it.name.endsWith('txt') && it.lastModified() > max) {
                max = it.lastModified()
            }
        }
        if (max > lastModified) {
            lastModified = max
            return true
        }
        return false
    }
}

new AutoRunner(params).autorun()
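To use it, save everything above into a single script file (the name watch.groovy below is just a placeholder) and launch it from a terminal; it will keep polling until you stop it or nrOfRepeat runs out:

groovy watch.groovy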
Why not use a CI tool like Jenkins to run your tests on every code change, as part of your overall CI build? It's easy to get Jenkins to poll your source control system and run a build or separate test job when a file changes.
The usual way is to use a Continuous Integration server such as Jenkins.
Have Jenkins poll your version control system every 15 minutes, and it will build your project as it notices commits. You will never be far from knowing whether your source code works.
I would recommend using a CI tool such as Jenkins, which you can not only install on premise, but also run as a cloud instance through a PaaS, where you can easily test whether this solution meets your goal without spending too much time on the set-up process.
This PaaS provides some ClickStarts that you can use as a template for your own projects, on premise or in the cloud. It will generate a fully set-up, working Jenkins job for you.
Some articles that you can take a look at are:
Painless Maven Builds with Jenkins, where you can see the dashboard you can get. You will see the Maven tests that passed per build, and you can also get a graph showing the Maven tests passed, skipped and failed.
iOS dev: How to setup quality metrics on your Jenkins job? Although this article talks specifically about iOS, you can achieve the same goal for a standard Java Maven project: test coverage, test results, code duplication, code metrics (LOC), ...
Yeah, having a build server (like Bamboo, CruiseControl or TeamCity) and a build tool like Maven (along with the Surefire plugin for TestNG/JUnit and the Failsafe plugin for integration testing, perhaps using something like Selenium 2) is quite popular, because it's relatively trivial to set up (works almost out of the box). :)
Surprisingly, I was not able to find this question on this website, so here it comes:
How can I run my Spring tests before every git commit/push (CLI, GUI and IDE integration) and have the command fail when tests fail?
I am aware of the existence of git hooks, and I run my tests using mvnw test. How do I combine the two to get the described behavior?
You can use any (bash) script as a git pre-commit or pre-push hook. Git should abort if the script returns a non-zero return code.
So create a script named pre-commit or pre-push that looks roughly like this
#!/bin/bash
./mvnw test
and register the hook, e.g. by placing the script in .git/hooks.
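For example, assuming the script was saved as pre-commit at the repository root, registering it could look like this:

cp pre-commit .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit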
mvn test should already return a non-zero return code if tests fail.
If not, you would need to determine in your script whether the tests succeeded, for instance by piping the output to grep and looking for an ERROR entry or a more indicative line that signals either success or failure.
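A rough sketch of that fallback, assuming Maven's usual BUILD FAILURE output line:

#!/bin/bash
# capture the output, echo it for the developer, and fail on a reported build failure
output=$(./mvnw test)
echo "$output"
if echo "$output" | grep -q "BUILD FAILURE"; then
    exit 1
fi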
Note: If you happen to be working in a Windows/Mac environment you'd likely need to adapt this based on how you integrated git, i.e. whether you run in a bash-compatible console or not.
I have a single test automation project whose test scripts are integrated with VSTS and Jenkins. A VSTS build step runs a Jenkins job, after which the test scripts run on a remote machine. However, I have a hardcoded URL in my driver.get(url) method that points only at the test environment, and I need to run against the dev and prod environments too.
So my question is: how do I parameterize the driver.get(parameter) method so I can keep this one project and run the test scripts against many environments, not just the test environment?
For example: if a new build is queued on the QA branch, run the scripts against http://QAenv.app.com; if it is queued on the PROD branch, run them against http://PRODenv.app.com.
What about storing it in properties and reading it?
Example:
driver.get(System.getProperty("myPropertyKey", "http://myDefaultTestUrl"));
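The value can then be supplied per environment when the tests are launched, e.g. from the Maven command line (myPropertyKey is the placeholder key from the snippet above; Surefire normally passes such -D properties through to the test JVM):

mvn test -DmyPropertyKey=http://QAenv.app.com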
Regarding the Jenkins Queue Job step/task, you can specify job parameters.
For the example you provided, you can add a variable to the build definition and set its value based on the predefined variables (e.g. Build.SourceBranch), then pass that variable in the Jenkins Queue Job step/task.
To set a variable's value, you can use Write-Host "##vso[task.setvariable variable=testvar;]testvalue"; for more information, refer to: Logging Commands
I have an Android project that has grown with time, and with the size have grown the gradle build times.
It was bearable while it was under the 65k limit - around 14s.
Now with multidex it takes 36s.
So my question is - are there any ways to "turn off" parts of the code that are not being used so it's back under the 65k limit?
For example, turn off the Amazon S3 SDK, which is brought in via Gradle and contributes thousands of methods.
I know you can strip code with proguard, but that just bumps up the build time even higher.
I'm happy with it crashing at runtime when I open the parts that use it, just want to make testing quicker.
At the moment when I remove amazon from gradle imports, I obviously get this:
Error:(24, 26) error: package com.amazonaws.auth does not exist
Is there a way to somehow ignore the error? I know that Picasso has a runtime check to see if you have OkHttp available, and if you don't, it uses standard networking.
static Downloader createDefaultDownloader(Context context) {
    if (SDK_INT >= GINGERBREAD) {
        try {
            Class.forName("com.squareup.okhttp.OkHttpClient");
            return OkHttpLoaderCreator.create(context);
        } catch (ClassNotFoundException ignored) {}
    }
    return new UrlConnectionDownloader(context);
}
Is there something like this I could do? Or any other way?
The only realistic way of doing this (that I'm aware of) is to refactor your project so that your packages are split into separate modules. You would therefore have separate Gradle build files for each module, but would only have to recompile a module when it is touched. You could, for instance, have a data access package and a UI package. That seems like a pretty natural split.
I realize that this is a disappointing answer, but the issue you're complaining about is that your build dependencies pull in all those extra, unnecessary libraries and method calls: not that your code uses them.
The only other tip I can give you is that the Google Play API kit has tens of thousands of method calls. If you can depend on only the pieces you're actually using, you stand a much better chance of staying beneath the 65k limit.
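For example, with Google Play Services you can swap the monolithic artifact for just the modules you need (a sketch; the artifact names and version below are assumptions, so check the current list for your setup):

// build.gradle: instead of the all-in-one dependency...
// compile 'com.google.android.gms:play-services:8.4.0'
// ...depend only on the individual modules you actually use:
compile 'com.google.android.gms:play-services-maps:8.4.0'
compile 'com.google.android.gms:play-services-location:8.4.0'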
It is possible to specify compile-time dependencies for each build type independently. I use this method to include "production-only" dependencies in only the release builds, reducing the method count for debug builds.
For example, I only include Crashlytics in release builds. So in build.gradle I include the dependency for only my release build (and beta and alpha):
releaseCompile('com.crashlytics.sdk.android:crashlytics:2.5.5@aar') {
    transitive = true;
}
Then I abstract the functionality of Crashlytics into a class called CrashReportingService. In my debug source code, this class does nothing:
/app/src/debug/java/com/example/services/CrashReportingService.java:
package com.example.services;

import android.content.Context;

public class CrashReportingService {
    public static void initialise(Context context) {
        // intentionally a no-op in debug builds
    }
    public static void logException(Throwable throwable) {
        // intentionally a no-op in debug builds
    }
}
And I flesh out the implementation in my release source code:
/app/src/release/java/com/example/services/CrashReportingService.java
package com.example.services;

import android.content.Context;
import com.crashlytics.android.Crashlytics;
import io.fabric.sdk.android.Fabric;

public class CrashReportingService {
    public static void initialise(Context context) {
        Fabric.with(context, new Crashlytics());
    }
    public static void logException(Throwable throwable) {
        Crashlytics.getInstance().core.logException(throwable);
    }
}
Crashlytics is now only included in release builds and there is no reference to Crashlytics in my debug builds. Back under 65k methods, hooray!
I have another option. It also helps speed things up, though not exactly in the way you asked: use the Gradle daemon.
If you use the new Gradle build system with Android (or Android Studio) you might have realized that even the simplest Gradle call (e.g. gradle project or gradle tasks) is pretty slow. On my computer it takes around eight seconds for that kind of Gradle call. You can decrease this startup time (on my computer down to two seconds) if you tell Gradle to use a daemon to build. Just create a file named gradle.properties in the following directory:
/home/<username>/.gradle/ (Linux)
/Users/<username>/.gradle/ (Mac)
C:\Users\<username>\.gradle (Windows)
Add this line to the file:
org.gradle.daemon=true
From now on Gradle will use a daemon to build, whether you are using Gradle from the command line or building in Android Studio. You could also place the gradle.properties file in the root directory of your project and commit it to your SCM system. But you would have to do this for every project (if you want to use the daemon in every project).
Note: If you don’t build anything with Gradle for some time (currently
3 hours), it will stop the daemon, so that you will experience a long
start-up time at the next build.
How does the Gradle Daemon make builds faster?
The Gradle Daemon is a long lived build process. In between builds it waits idly for the next build. This has the obvious benefit of only requiring Gradle to be loaded into memory once for multiple builds, as opposed to once for each build. This in itself is a significant performance optimization, but that's not where it stops.
A significant part of the story for modern JVM performance is runtime code optimization. For example, HotSpot (the JVM implementation provided by Oracle and used as the basis of OpenJDK) applies optimization to code while it is running. The optimization is progressive and not instantaneous. That is, the code is progressively optimized during execution which means that subsequent builds can be faster purely due to this optimization process.
Experiments with HotSpot have shown that it takes somewhere between 5
and 10 builds for optimization to stabilize. The difference in
perceived build time between the first build and the 10th for a Daemon
can be quite dramatic.
The Daemon also allows more effective in memory caching across builds. For example, the classes needed by the build (e.g. plugins, build scripts) can be held in memory between builds. Similarly, Gradle can maintain in-memory caches of build data such as the hashes of task inputs and outputs, used for incremental building.
Some other ways to speed up the process
How to Speed Up Your Gradle Build From 90 to 8 Minutes?
How to optimize gradle build performance regarding build duration and RAM usage?
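For reference, a gradle.properties sketch combining the daemon with other widely used switches (verify each one against your Gradle version before relying on it):

# gradle.properties
org.gradle.daemon=true
# build decoupled subprojects in parallel
org.gradle.parallel=true
# only configure the projects the current build actually needs
org.gradle.configureondemand=true
# give the build JVM more headroom
org.gradle.jvmargs=-Xmx2048m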
I am currently working on a Maven project, using JUnit for defining tests and Jenkins for CI, and am looking into how I can group my tests.
Say I had a test class with 20 tests, but I don't want to run all 20; I want to be able to configure which tests to run. For example, in another standalone project using TestNG and Selenium you can create a test method with the following annotation:
@Test (groups = { "AllTest" })
public void myTestMethod()
{
    // .. do something
    // .. assert something
}
... and then I am able to call which group to run based on an XML configuration.
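For reference, the XML side of that looks roughly like this (the class name is a placeholder):

<suite name="MySuite">
  <test name="GroupedTests">
    <groups>
      <run>
        <include name="AllTest"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.MyTestClass"/>
    </classes>
  </test>
</suite>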
Is it possible to define this type of grouping using Jenkins? I have researched this and came across the "Tests Selector Plugin", but I can't understand how to get started once I've installed the plugin. There is a wiki page for it, but I can't understand what to do after installing.
I have copy-pasted the example property file but didn't really understand what I needed to change in it. When building, I simply get that the property file cannot be found or that Jenkins doesn't have permission; I can't find a way around this either :(
It's possible via Maven + maven-surefire-plugin:
http://maven.apache.org/surefire/maven-surefire-plugin/examples/single-test.html
You can run a single test, a set of tests, or tests matched by a regexp.
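For example (class, method and group names are placeholders; exact flag support depends on your Surefire version):

# run a single test class
mvn test -Dtest=MyTestClass
# run a single test method (Surefire 2.7.3+)
mvn test -Dtest=MyTestClass#myTestMethod
# run TestNG groups (or JUnit categories)
mvn test -Dgroups=AllTest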
Relatively new to Java and Gradle, and trying to do things "right". Prior to building my application (I've added the Gradle "application" plugin) I want to set up some environment and system things; for example, I'd like to create the log/ directory and log.txt file.
So I'm doing something like:
task setup {
    println 'Setup task executing ...'
    File d = new File('log');
    d.mkdir();
    f = new File(d.getPath() + '/log.txt');
    f.createNewFile();
}
This works, but I get a bunch of stdout warnings when running gradle setup:
Setup task executing ...
Creating properties on demand (a.k.a. dynamic properties) has been deprecated and is scheduled to be removed in Gradle 2.0. Please read http://gradle.org/docs/current/dsl/org.gradle.api.plugins.ExtraPropertiesExtension.html for information on the replacement for dynamic properties.
Deprecated dynamic property: "f" on "task ':setup'", value: "log/log.txt".
:setup UP-TO-DATE
So one question: What is the correct way to leverage Gradle to perform setup / installation tasks? (This should only really be executed once, when the application is deployed)
Ah, you are mixing task configuration and execution. This:
task foo {
// Stuff
}
is not the same as this:
task foo << {
// Stuff
}
In the first, "stuff" is run at configuration time, leading to the warnings that you're seeing (because f is interpreted as a project variable during this phase). In the second, it's run at execution time.
(Gradle is great, but this very subtle syntax distinction can be the source of many infuriating bugs!)
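Applied to the setup task from the question, a minimal sketch looks like this (note the <<, and that f is now declared with def, which also avoids the dynamic-property warning; in later Gradle versions doLast { ... } is the equivalent of <<):

task setup << {
    println 'Setup task executing ...'
    File d = new File('log')
    d.mkdir()
    def f = new File(d, 'log.txt')
    f.createNewFile()
}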
As for how to do setup properly, as you're using the Application plugin, you should look into Including other resources in the distribution.
(You should also consider moving the directory-creation logic into your application itself, as ideally you want it to be robust against someone deleting the log directory!)
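A sketch of what that could look like in application code (names are illustrative):

import java.io.File;
import java.io.IOException;

public class LogSetup {
    // creates the log directory and file on demand, so a deleted directory is recreated
    public static File ensureLogFile() throws IOException {
        File dir = new File("log");
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IOException("Could not create log directory: " + dir.getAbsolutePath());
        }
        File logFile = new File(dir, "log.txt");
        logFile.createNewFile(); // no-op if the file already exists
        return logFile;
    }
}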