I have a single test automation project with test scripts that is integrated with VSTS and Jenkins: a VSTS build step runs a Jenkins job, and the test scripts then run on a remote machine. However, the URL in my driver.get(url) call is hardcoded to the test environment, and I also need to run against the dev and prod environments.
So my question is: how can I parameterize the driver.get(parameter) call so that I can keep this single project but run the test scripts against many environments, not just the test one?
For example: if a new build is queued on the QA branch, run the scripts against http://QAenv.app.com; if it is queued on the PROD branch, run them against http://PRODenv.app.com.
What about storing the URL in a system property and reading it at runtime?
Example:
driver.get(System.getProperty("myPropertyKey", "http://myDefaultTestUrl"));
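If the project is built with Maven (an assumption here), that property can then be supplied per run, e.g. mvn test -DmyPropertyKey=http://QAenv.app.com, since -D system properties set on the command line are visible to the tests.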
Regarding the Jenkins Queue Job step/task, you can specify job parameters.
For the example you provided, you can add a variable to the build definition, set its value based on the predefined variables (e.g. Build.SourceBranch), and then pass that variable to the Jenkins Queue Job step/task.
To set a variable's value, you can use Write-Host "##vso[task.setvariable variable=testvar;]testvalue"; for more information, refer to Logging Commands.
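Putting the two together, here is a minimal sketch of the test side in Java. It assumes the pipeline passes the target environment to the test JVM as a system property named app.env (the property name is an assumption; the URLs are the ones from the question):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class EnvironmentUrlExample {

    // Maps the environment passed by the build (e.g. -Dapp.env=PROD, a hypothetical property)
    // to a base URL; defaults to the QA environment when nothing is passed.
    static String baseUrl() {
        String env = System.getProperty("app.env", "QA");
        return env.equalsIgnoreCase("PROD")
                ? "http://PRODenv.app.com"
                : "http://QAenv.app.com";
    }

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get(baseUrl());
        // ... run the scenario against the selected environment ...
        driver.quit();
    }
}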
Related
I am trying to pass environment variables locally with strictly a command-line command. At deploy time these variables get passed into the Docker container, but when running locally they are not present and need to be set locally.
They need to be removed before committing, though, because they are access keys and I don't want them exposed in the repo. That is why running tests locally (without an IDE) requires a command that passes these variables.
I have already tried this:
./gradlew clean build -Dspring.profiles.active=local -DMY_ENV_VAR1=xxxxxx -DMY_ENV_VAR2=xxxxxx
and it doesn't seem to be working. I can't find the docs for the build command's options, but I thought this was how you pass them. What am I doing wrong here? Or is it not possible?
Another reason environment variables may not be picked up is the Gradle daemon.
Run this to kill any old daemons:
./gradlew --stop
Then try again. I lost far too much time on that.
To pass environment variables:
MY_ENV_VAR1=xxxxxx MY_ENV_VAR2=xxxxxx ./gradlew bootRun
To pass arguments / override property values:
./gradlew bootRun --args='--spring.profiles.active=local --db.url=something --anotherprop=fafdf'
To do both, passing environment variables and overriding property values:
MY_ENV_VAR1=xxxxxx MY_ENV_VAR2=xxxxxx ./gradlew bootRun --args='--spring.profiles.active=local --db.url=something --anotherprop=fafdf'
This related post worked for me: https://stackoverflow.com/a/57890208/1441210
The solution was to use the --args option of gradlew to pass the value through to the Spring Boot app as a program argument:
./gradlew bootRun --args='--spring.profiles.active=local'
I just put the environment variable assignments before the command, the way a regular Unix shell does. This works with my Zsh.
MY_ENV_VAR1=xxxxxx MY_ENV_VAR2=xxxxxx gradlew clean test
If you want to pass values to the JVM that runs Gradle, you can use the -D switch. But I suppose you actually need to pass values to the Gradle build file from the command line; if that's the case, there are two options:
You can use the -P switch and specify the value there. For example:
gradle -PmySecretKey="This key is so secret" yourTask
If you are using Linux or a variant, you can set an environment variable as follows:
export ORG_GRADLE_PROJECT_mySecretKey="This key is so secret"
After this you can access the value in the Gradle build file as follows (I am using the Kotlin DSL):
val mySecretKey: String by project
println(mySecretKey)
To answer your question: as far as I know, there's no way to set environment variables manually through Gradle. What you're doing right now is just passing regular CLI arguments/parameters to your tests.
"when running locally, they are not present and need to be set locally."
"running tests locally (without an IDE) would require a command that passes these variables."
I see from your snippet that you are using Spring, likely Spring Boot. And since you're already specifying the profile as local, why not define these variables in a profile-specific configuration? Example:
application.yml -- base configuration
my-config-value: ${MY_ENV_VAR}
application-local.yml -- local profile configuration that overrides the base
my-config-value: some-dummy-value-for-local-development
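For completeness, this is roughly how the value would be consumed on the Java side; a minimal sketch assuming a Spring-managed bean and the my-config-value key from the YAML above:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class MyConfigConsumer {

    // Resolved from application-local.yml when the "local" profile is active,
    // otherwise from the MY_ENV_VAR environment variable via application.yml.
    @Value("${my-config-value}")
    private String myConfigValue;

    public String describe() {
        return "Using config value: " + myConfigValue;
    }
}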
Surprisingly, I was not able to find this question on this website, so here it comes:
How can I run my Spring tests before every git commit/push (CLI, GUI and IDE integration) and have the commit/push fail when the tests fail?
I am aware of git hooks, and I run my tests using mvnw test. How can I combine the two to get the described behavior?
You can use any (bash) script as a git pre-commit or pre-push hook. Git aborts if the script returns a non-zero exit code.
So create a script that looks roughly like this
#!/bin/bash
./mvnw test
and register the hook, e.g. by placing the script in .git/hooks under the exact name pre-commit (or pre-push) and making it executable.
mvn test should already return a non-zero exit code if tests fail.
If it does not, you would need to determine in your script whether the tests succeeded, for instance by piping the output to grep and looking for an ERROR entry, or for a line that more clearly indicates success or failure.
Note: if you happen to be working in a Windows/Mac environment, you'd likely need to adapt this based on how you integrated git, i.e. whether you run it in a bash-compatible console or not.
In a Jenkins job I'm doing a couple of actions in a pre-build step, such as executing a shell script.
Using the Jenkins plugin "EnvInject", I want to inject environment variables into my Maven build so that they can be used inside my Java unit tests.
Inside the shell script I'm doing something like:
echo "ip=$IP" >> unit-test.properties
While building, Jenkins outputs the following:
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'unit-test.properties'
[EnvInject] - Variables injected successfully.
But the "ip" variable is not available inside my Java code (unit test). When I do a full print of both System.getProperties() and System.getenv() I do not see the "ip" enlisted.
Do I need to perform any special actions for maven to pass the variable to my Java code? Is something else wrong?
I'm pretty much stuck from this point onward, I do want to inject a key=value from a pre-step into my Java code.
My solution:
Create a "Build a free-style software project":
Jenkins > New Item > Build a free-style software project
Add 1st step: Execute shell: build, and echo key=value pairs to a .properties file
Add 2nd step: Inject environment variables, using the .properties file written by the 1st step
Add 3rd step: Invoke top-level Maven targets
All custom environment variables are then accessible under the keys written to the .properties file.
This was the only way I found to inject environment variables from a shell step into Java.
I had a similar requirement in my project, which was also a Maven project. To use a variable value from Jenkins in my Java code, I put -DargLine="-DEnv=$Environment" under Build > Advanced > JVM Options. In my Java code I then fetched the value of "Env" using System.getProperty(). FYI: "Environment" is my Jenkins variable, and "Env" is the system property that receives the value from Jenkins and is read in the Java code via System.getProperty().
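As a rough illustration, the test side would then look something like this (the property name Env is taken from the answer above; the "qa" default is an assumption so the test can still run where Jenkins has not set anything):

import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public class EnvironmentAwareTest {

    @Test
    public void readsEnvironmentFromJenkins() {
        // Populated by Surefire's argLine (-DEnv=$Environment); defaults to "qa" locally.
        String env = System.getProperty("Env", "qa");
        // Variables injected by EnvInject as real environment variables would instead
        // be read with System.getenv("ip").
        assertNotNull(env);
    }
}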
I am currently working on a Maven project, using JUnit for defining tests and Jenkins for CI, and I am looking into how I can group my tests.
Say I have a test class with 20 tests, but I don't want to run all 20; I want to be able to configure which tests to run. For example, in another standalone project using TestNG and Selenium you can create a test method with the following annotation:
@Test(groups = { "AllTest" })
public void myTestMethod()
{
    // .. do something
    // .. assert something
}
... and then I am able to choose which group to run via an XML configuration.
Is it possible to define this kind of grouping using Jenkins? I have researched this and came across the "Tests Selector Plugin", but I can't figure out how to get started once I've installed it. There is a wiki page for it, but I can't understand what to do after installing.
I have copy-pasted the example property file, but I didn't really understand what I needed to change in it. When building, I simply get that the property file cannot be found or that Jenkins doesn't have permission; I can't find a way around this either :(
It's possible via Maven and the maven-surefire-plugin:
http://maven.apache.org/surefire/maven-surefire-plugin/examples/single-test.html
You can run a single test, a set of tests, or tests matched by a regexp.
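If you prefer real groups rather than listing individual tests, JUnit 4 categories also work with Surefire's groups parameter; here is a minimal sketch (the package, category interface, and class names are made up):

package com.example;

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface used as the category.
interface AllTest {}

public class MyFeatureTest {

    @Category(AllTest.class)
    @Test
    public void myTestMethod() {
        // .. do something
        // .. assert something
    }
}

Surefire can then select the group on the command line, or from a Jenkins-provided parameter, e.g. mvn test -Dgroups=com.example.AllTest.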
I have a bunch of Java unit tests, and I'd like to integrate a continuous testing framework into my codebase. Ideally, I would like to write a Maven / Ant target or bash script which would start running tests whenever the files it's watching change. I've looked at a couple of options so far (Infinitest, JUnit Max) but both of them appear to want to run as IDE plugins.
My motivation for using a CLI-only tool is that my coworkers use a broad set of text editors and IDEs, but I want to ensure that anyone can run the tests constantly.
EDIT: I did not consider Jenkins or other more typical CI solutions for several reasons:
We already have a CI build tool for running unit and integration tests after every push.
They hide the runtime of the tests (because they run asynchronously), allowing tests to become slower and slower without people really noticing.
They usually only run tests if your repository is in some central location. I want unit tests to be running while I'm editing, not after I've already pushed the code somewhere. The sooner I run the tests, the sooner I can fix whatever mistake I made while editing. Our JavaScript team has loved a similar tool, quoting speedup of 3x for iterating on unit test development.
I am using a solution that continuously polls a directory for changes. (General code: http://www.qualityontime.eu/articles/directory-watcher/groovy-poll-watcher/; the article is in Hungarian, but the source code is in English.)
Below is a customized solution for compiling a nanoc-based site. Review it and adapt it to your needs. (Groovy)
def job = {
String command = /java -jar jruby-nanoc2.jar -S nanoc compile/
println "Executing "+command
def proc = command.execute()
proc.waitForProcessOutput(System.out, System.err)
}
params = [
closure: job,
sleepInterval: 1000,
dirPath: /R:\java\dev\eclipse_workspaces\project\help\content/
]
import groovy.transform.Canonical;
@Canonical
class AutoRunner{
def closure
def sleepInterval = 3000
// running for 8 hours then stop automatically if checking every 3 seconds
def nrOfRepeat = 9600
def dirPath = "."
long lastModified = 0
def autorun(){
println "Press CTRL+C to stop..."
println this
def to_run = {
while(nrOfRepeat--){
sleep(sleepInterval)
if(anyChange()){
closure()
}
}
} as Runnable
Thread runner = new Thread(to_run)
runner.start()
}
def boolean anyChange(){
def max = lastModified
new File(dirPath).eachFileRecurse {
if(it.name.endsWith('txt') && it.lastModified() > max){
max = it.lastModified()
}
}
if(max > lastModified){
lastModified = max
return true
}
return false;
}
}
new AutoRunner(params).autorun()
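If Groovy is not an option, a comparable watcher can be put together in plain Java with java.nio's WatchService. A minimal sketch (it assumes a Maven wrapper in the project root and only watches the top-level src directory, so treat it as a starting point rather than a finished tool):

import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class TestWatcher {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path root = Paths.get("src");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        // Register the top-level directory; a real watcher would also register subdirectories.
        root.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_DELETE);
        while (true) {
            WatchKey key = watcher.take();   // blocks until something changes
            key.pollEvents();                // drain the events for this change
            new ProcessBuilder("./mvnw", "test")
                    .inheritIO()
                    .start()
                    .waitFor();
            key.reset();
        }
    }
}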
Why not use a CI tool like Jenkins to run your tests on every code change, as part of your overall CI build? It's easy to get Jenkins to poll your source control system and run a build or separate test job when a file changes.
The usual way is to use a Continuous Integration server such as Jenkins.
Have Jenkins poll your version control system every 15 minutes, and it will build your project as it notices commits. You will never be far from knowing that your source code works.
I would recommend using a CI tool such as Jenkins, which you can not only install on premises but also get as a cloud instance through a PaaS, so you can easily test whether this solution meets your goal without spending too much time on the set-up process.
This PaaS provides some ClickStarts that you can use as templates for your own projects, on premises or in the cloud. It generates a fully set-up, working Jenkins job for you.
Some articles that you can take a look at are:
Painless Maven Builds with Jenkins, where you can see the dashboard you get. You will see which Maven tests passed per build, and you can also get a graph showing the Maven tests passed, skipped, and failed.
iOS dev: How to set up quality metrics on your Jenkins job? Although this article talks specifically about iOS, you can achieve the same for a standard Java Maven project: test coverage, test results, code duplication, code metrics (LOC), ...
Yeah, having a build server (like Bamboo, CruiseControl or TeamCity) and a build tool like Maven (along with the surefire plugin for TestNG/JUnit and the failsafe plugin for integration testing, maybe using something like Selenium 2) is quite popular, because it's relatively trivial to set up (it works almost out of the box). :)