Can I export a history of builds, with their time/date and success status, in any conventional file format? And hopefully even the promotion status.
You can make use of the Jenkins REST API.
Start by traversing all jobs on your Jenkins server using:
http://JENKINS_URL/api/json?tree=jobs[name,url]
This will return a JSON response listing every job with its name and URL.
Then, for each job, access its builds using:
http://JENKINS_URL/job/JOB_NAME/api/json?tree=allBuilds[number,url]
This will return a JSON response with the number and URL of every build of JOB_NAME.
Now traverse each build using:
http://JENKINS_URL/job/JOB_NAME/BUILD_NUMBER/api/json
This will return everything related to the build as a JSON response: build status, how the build was triggered, timestamp, etc.
For automation, you can use bash, curl and jq to achieve this.
Here is a small bash script that retrieves the build status and timestamp for each job on a Jenkins server:
#!/bin/bash
JENKINS_URL=<YOUR JENKINS URL HERE>

# List all job names on the server
for job in $(curl -sg "$JENKINS_URL/api/json?tree=jobs[name,url]" | jq -r '.jobs[].name'); do
    echo "Job Name : $job"
    echo -e "Build Number\tBuild Status\tTimestamp"
    # List all build numbers of the job, then fetch each build's details
    for build in $(curl -sg "$JENKINS_URL/job/$job/api/json?tree=allBuilds[number]" | jq -r '.allBuilds[].number'); do
        curl -sg "$JENKINS_URL/job/$job/$build/api/json" | jq -r '(.number|tostring) + "\t\t" + .result + "\t\t" + (.timestamp|tostring)'
    done
    echo "================"
done
Note: the above script assumes that the Jenkins server does not require authentication. If it does, add the following parameter to each curl call:
-u username:API_TOKEN
where username and API_TOKEN are your Jenkins username and your password or API token.
In a similar way, you can export the full build history in any format you want.
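If you'd rather do the export from Java (since the goal is a conventional file format), here is a rough sketch that writes one job's build history to CSV. It assumes Jackson (com.fasterxml.jackson.databind) is on the classpath for JSON parsing and an unauthenticated Jenkins; jenkinsUrl and jobName are placeholders to adjust:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class ExportBuildHistory {
    public static void main(String[] args) throws Exception {
        String jenkinsUrl = "http://localhost:8080"; // placeholder: your Jenkins URL
        String jobName = "JOB_NAME";                 // placeholder: your job name

        // Ask the REST API only for the fields we need via the tree parameter
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jenkinsUrl + "/job/" + jobName
                        + "/api/json?tree=allBuilds[number,result,timestamp]"))
                .build();
        String json = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Convert each build entry into one CSV row
        List<String> csv = new ArrayList<>();
        csv.add("number,result,timestamp");
        for (JsonNode build : new ObjectMapper().readTree(json).get("allBuilds")) {
            csv.add(build.get("number").asText() + ","
                    + build.get("result").asText() + ","
                    + Instant.ofEpochMilli(build.get("timestamp").asLong()));
        }
        Files.write(Path.of(jobName + "-builds.csv"), csv);
    }
}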
Parvez's suggestion to use the REST API is perfectly fine.
However, the REST API is awkward to use when it does not directly provide the data you're looking for, leading to convoluted, repeated invocations. This is slow, and it makes you depend on the stability of that API.
For anything but the most basic queries, I usually prefer to run a small Groovy script that extracts the required data from Jenkins' internal structures. This is way faster, and often it's also simpler to use. Here's a small script that will fetch the data you're looking for:
import jenkins.model.*
import hudson.plugins.promoted_builds.*
import groovy.json.JsonOutput

def job = Jenkins.instance.getItemByFullName( 'TESTJOB' )
def buildInfos = []
for ( build in job.getBuilds() ) {
    def promotionList = []
    // getAction() returns null if the build carries no promotion data yet
    def promotedBuildAction = build.getAction( PromotedBuildAction.class )
    if ( promotedBuildAction != null ) {
        for ( promotion in promotedBuildAction.getPromotions() ) {
            promotionList += promotion.getName()
        }
    }
    buildInfos += [
        result    : build.getResult().toString(),
        number    : build.getNumber(),
        time      : build.getTime().toString(),
        promotions: promotionList
    ]
}
println( JsonOutput.toJson( buildInfos ) )
The script will produce the result in JSON format, like this (prettified):
[
    {
        "number": 2,
        "promotions": [
            "promotionA"
        ],
        "result": "SUCCESS",
        "time": "Thu Oct 18 11:50:37 EEST 2018"
    },
    {
        "number": 1,
        "promotions": [],
        "result": "SUCCESS",
        "time": "Thu Oct 18 11:50:34 EEST 2018"
    }
]
You can run such a script via the Jenkins "Script Console" GUI, via the REST API endpoint for running Groovy scripts, or via the Jenkins CLI.
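As an illustration, here is a minimal Java sketch that submits a Groovy script to the /scriptText endpoint (the REST route mentioned above) and prints the script's output. The URL and credentials are placeholders, and depending on your security configuration you may also need a CSRF crumb:
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RunGroovyScript {
    public static void main(String[] args) throws Exception {
        String jenkinsUrl = "http://localhost:8080"; // placeholder: your Jenkins URL
        String script = "println( 'hello from groovy' )"; // the Groovy script to run
        String auth = Base64.getEncoder()
                .encodeToString("username:API_TOKEN".getBytes(StandardCharsets.UTF_8));

        // /scriptText runs the posted Groovy script and returns its output as text
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jenkinsUrl + "/scriptText"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "script=" + URLEncoder.encode(script, StandardCharsets.UTF_8)))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}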
I have a JavaFX application with a list of script files. Once the application loads, it reads the list and checks which of the scripts are running.
To do that I use a ProcessHandle, as mentioned in various examples here on Stack Overflow and in other guides/tutorials on the internet.
The problem is, it never finds any of them. Therefore I programmatically started one, which I know for a fact will be running, via Process process = new ProcessBuilder("/path/to/file/my_script.sh").start(); - and it won't find this one either.
Contents of my_script.sh:
#!/bin/bash
echo "Wait for 5 seconds"
sleep 5
echo "Completed"
Java code:
// List of PIDs which correspond to the processes shown after "INFO COMMAND:"
System.out.println("ALL PROCESSES: " + ProcessHandle.allProcesses().toList());

Optional<ProcessHandle> scriptProcessHandle = ProcessHandle.allProcesses().filter(processHandle -> {
    System.out.println("INFO COMMAND: " + processHandle.info().command());
    Optional<String> processOptional = processHandle.info().command();
    return processOptional.isPresent() && processOptional.get().equals("my_script.sh");
}).findFirst();

System.out.println("Script process handle is present: " + scriptProcessHandle.isPresent());
if (scriptProcessHandle.isPresent()) { // Always false
    // Do stuff
}
Thanks to good old-fashioned System.out.println(), I noticed that I get this in my output console every time:
ALL PROCESSES: [1, 2, 28, 85, 128, 6944, 21174, 29029, 29071]
INFO COMMAND: Optional[/usr/bin/bwrap]
INFO COMMAND: Optional[/usr/bin/bash]
INFO COMMAND: Optional[/app/idea-IC/jbr/bin/java]
INFO COMMAND: Optional[/app/idea-IC/bin/fsnotifier]
INFO COMMAND: Optional[/home/username/.jdks/openjdk-17.0.2/bin/java]
INFO COMMAND: Optional[/usr/bin/bash]
INFO COMMAND: Optional[/home/username/.jdks/openjdk-17.0.2/bin/java]
INFO COMMAND: Optional[/home/username/.jdks/openjdk-17.0.2/bin/java]
INFO COMMAND: Optional[/usr/bin/bash]
Script process handle is present: false
The first line in the Javadoc of ProcessHandle.allProcesses() reads:
Returns a snapshot of all processes visible to the current process.
So how come I can't see the rest of the operating system's processes?
I'm looking for a non-OS-dependent solution, if possible. Why? For better portability and hopefully less maintenance in the future.
Notes:
A popular solution for GNU/Linux seems to be to check the /proc entries, but I don't know whether that would work for at least the majority of the most popular distributions - if it doesn't, adding support for them in a different way would create more testing and maintenance workload.
I'm aware of the ps, windir and tasklist.exe solutions (worst comes to worst).
I found the JavaSysMon library but it seems dead and unfortunately:
CPU speed on Linux only reports correct values for Intel CPUs
Edit 1:
I'm on Pop!_OS and installed IntelliJ via the Pop!_Shop as a flatpak.
In order to start it as root as suggested by mr mcwolf, I went to /home/username/.local/share/flatpak/app/com.jetbrains.IntelliJ-IDEA-Community/x86_64/stable/active/export/bin and found com.jetbrains.IntelliJ-IDEA-Community file.
When I run sudo ./com.jetbrains.IntelliJ-IDEA-Community or sudo /usr/bin/flatpak run --branch=stable --arch=x86_64 com.jetbrains.IntelliJ-IDEA-Community in my terminal, I get error: app/com.jetbrains.IntelliJ-IDEA-Community/x86_64/stable not installed
So I opened the file and ran its contents:
exec /usr/bin/flatpak run --branch=stable --arch=x86_64 com.jetbrains.IntelliJ-IDEA-Community "$@"
This opens IntelliJ, but not as root, so instead I ran:
exec sudo /usr/bin/flatpak run --branch=stable --arch=x86_64 com.jetbrains.IntelliJ-IDEA-Community "$@"
Which prompts for a password and when I write it in, the terminal crashes.
Edit 1.1:
(╯°□°)╯︵ ┻━┻ "flatpak run" is not intended to be run with sudo
Edit 2:
As mr mcwolf said, I downloaded IntelliJ from the official website, extracted it, and ran idea.sh as root.
Now a lot more processes are shown. 1/3 of them show up as INFO COMMAND: Optional.empty.
scriptProcessHandle.isPresent() still unfortunately returns false. I searched through the processes and my_script.sh is nowhere to be found. I also tried processOptional.isPresent() && processOptional.get().equals("/absolute/path/to/my_script.sh"), but I still get false from isPresent() and the script is not in the list of shown processes.
Though the last sentence might be a different problem. I'll do more digging.
Edit 3:
Combining .commandLine() and .contains() (instead of .equals()) solves the problem mentioned in "Edit 2".
Optional<ProcessHandle> scriptProcessHandle = ProcessHandle.allProcesses().filter(processHandle -> {
    System.out.println("INFO COMMAND LINE: " + processHandle.info().commandLine());
    Optional<String> processOptional = processHandle.info().commandLine();
    return processOptional.isPresent() && processOptional.get().contains("/absolute/path/to/my_script.sh");
}).findFirst();

System.out.println("Script process handle is present: " + scriptProcessHandle.isPresent());
if (scriptProcessHandle.isPresent()) { // Returns true
    // Do stuff
}
.commandLine() also shows script arguments, so that must be kept in mind.
I trained an SVC classifier in Python using scikit-learn and other libraries. I did it by building a pipeline (sklearn).
I am able to dump the trained model to a pickle file, and I made another Python script that loads the pickle file and takes input from the command line to do the prediction. I am able to call this Python script from Java and it works fine.
The only issue is that it takes a lot of time, as I have the nltk, numpy and pandas libraries imported in the Python script for preprocessing the input argument. I am calling this Python script multiple times, and that's increasing the time.
How can I work around this issue?
This is how my pipeline looks:
pipeline = Pipeline([
    # Use FeatureUnion to combine the features from the dataset
    ('union', FeatureUnion(
        transformer_list=[
            # Pipeline for getting POS
            ('ngrams', Pipeline([
                ('selector', ItemSelector(key='Sentence')),
                ('vect', CountVectorizer(analyzer='word')),
                ('tfidf', TfidfTransformer()),
            ])),
        ],
        # weight components in FeatureUnion
        transformer_weights={
            'ngrams': 0.7,
        },
    )),
    # Use a SVC classifier on the combined features
    ('clf', LinearSVC()),
])
Here's an example of setting up a simple Flask REST API to serve a scikit-learn model. The model is loaded once at startup, so repeated predictions don't pay the library-import and unpickling cost again.
import sys
import traceback

from flask import Flask, request, jsonify
import joblib  # on older scikit-learn versions: from sklearn.externals import joblib

app = Flask(__name__)

model_directory = 'model'
model_file_name = '%s/model.pkl' % model_directory

# Populated at startup, before the server begins handling requests
clf = None

@app.route('/predict', methods=['POST'])
def predict():
    if clf:
        try:
            json_ = request.json
            # Extract the payload from the JSON and feed it to your model;
            # adjust this to whatever input your pipeline expects
            query = json_
            prediction = list(clf.predict(query))
            return jsonify({'prediction': prediction})
        except Exception as e:
            return jsonify({'error': str(e), 'trace': traceback.format_exc()})
    else:
        return 'no model here'

if __name__ == '__main__':
    try:
        port = int(sys.argv[1])
    except Exception:
        port = 80
    clf = joblib.load(model_file_name)
    print('model loaded')
    app.run(host='0.0.0.0', port=port, debug=True)
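On the Java side, instead of starting the Python script for every prediction, you then just POST to the running service. A hypothetical minimal client (the payload shape must match whatever your predict() extracts from the JSON):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PredictionClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload; adjust the JSON to your pipeline's input
        String payload = "{\"Sentence\": \"some input text\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:80/predict"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"prediction": [...]}
    }
}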
Under http://[JENKINS_NAME]/job/[JOB_NAME]/[BUILD_NUMBER]/
I can see Started by user [USER_NAME].
I want to get that username from my Java application.
Any help is much appreciated.
You can make an HTTP call to get all these details. The URL to get them is:
http://<Jenkins URL>/job/<job name>/<build number>/api/json
The REST call will return JSON like this:
{
    "_class": "hudson.model.FreeStyleBuild",
    "actions": [
        {
            "_class": "hudson.model.CauseAction",
            "causes": [
                {
                    "_class": "hudson.model.Cause$UserIdCause",
                    "shortDescription": "Started by user XXXXXX",
                    "userId": "xxx@yyy.com",
                    "userName": "ZZZZZZZZ"
                }
            ]
        },
        {},
        {
            "_class": "jenkins.metrics.impl.TimeInQueueAction"
        },
        {},
        {}
    ],
    ...
}
So all you have to do is parse this JSON and read the value at ['actions'][0]['causes'][0]['userName']. Note that the position of the CauseAction entry in the actions array is not guaranteed, so don't rely on a hard-coded index; look for the action that contains a causes array. Hope this helps.
Almost every page in a Jenkins instance has a "REST API" link; click it to see the REST API URL and the output available for that page.
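If you want to do this from Java, here is a rough sketch (it assumes Jackson on the classpath; the URL is a placeholder) that scans the actions for the entry carrying the cause instead of hard-coding indexes:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BuildUserFetcher {
    public static void main(String[] args) throws Exception {
        String url = "http://JENKINS_URL/job/JOB_NAME/42/api/json"; // placeholder build URL
        String json = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder().uri(URI.create(url)).build(),
                      HttpResponse.BodyHandlers.ofString())
                .body();
        JsonNode root = new ObjectMapper().readTree(json);
        for (JsonNode action : root.get("actions")) {
            JsonNode causes = action.get("causes");
            if (causes == null) continue; // not the CauseAction entry
            for (JsonNode cause : causes) {
                JsonNode userName = cause.get("userName");
                if (userName != null) {
                    System.out.println("Started by: " + userName.asText());
                    return;
                }
            }
        }
    }
}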
You could get the build user from the Jenkins environment (i.e., as an env var). If you use a Jenkins 2 Pipeline, for example:
pipeline {
    // rest of the pipeline
    stages {
        stage('Build Info') {
            steps {
                wrap([$class: 'BuildUser']) {
                    sh 'java -jar <your_java_app>.jar'
                }
            }
        }
    }
}
In your Java app you should be able to get the environment variable using System.getenv("BUILD_USER"), or else you could pass it as a JVM arg, e.g. sh 'java -jar -DbuildUser=$BUILD_USER <your_java_app>.jar', and read the buildUser system property in the application.
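A tiny sketch of that lookup on the Java side (BUILD_USER is only set inside the wrap block above, and the buildUser property only exists if you passed the -D flag):
// Prefer the env var set by the BuildUser wrapper, fall back to the JVM arg
String buildUser = System.getenv("BUILD_USER");
if (buildUser == null) {
    buildUser = System.getProperty("buildUser"); // from -DbuildUser=$BUILD_USER
}
System.out.println("Triggered by: " + buildUser);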
On older versions of Jenkins, you may use the Build User Vars plugin or the Env Inject plugin, as in the answers to this question: how to get the BUILD_USER in Jenkins when job triggered by timer
I want to get the list of commit IDs shown in the Git Changelog Plugin output as a post-build action and iterate through it using Java. Which script/method should I use?
With a Pipeline, you can have the plugin return its context and then loop through it. This works with plugin version 2.0 and later. In this example I list all commit IDs between develop and master, but you can specify type: 'COMMIT' and a specific commit if that is what you want.
node {
    sh """
    git clone git@github.com:jenkinsci/git-changelog-plugin.git .
    """
    def changelogContext = gitChangelog returnType: 'CONTEXT',
        from: [type: 'REF', value: 'master'],
        to: [type: 'REF', value: 'develop']
    changelogContext.commits.each { commit ->
        println "Commit id: ${commit.hashFull}"
    }
}
If you want to do this in pure Java rather than a Pipeline, you can use the underlying git-changelog-lib directly.
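Alternatively, if all you need are the commit IDs between two refs, plain JGit (not the changelog lib itself, just an illustrative substitute) produces the same list as the Pipeline example above:
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.revwalk.RevCommit;
import java.io.File;

public class ListCommits {
    public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("."))) { // assumes the repo is already cloned here
            Repository repo = git.getRepository();
            ObjectId from = repo.resolve("refs/heads/master");
            ObjectId to = repo.resolve("refs/heads/develop");
            // All commits reachable from develop but not from master
            for (RevCommit commit : git.log().addRange(from, to).call()) {
                System.out.println("Commit id: " + commit.getName());
            }
        }
    }
}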
I'm using the net.neoremind.sshxcute SSH Java API library to connect to an SFTP server and execute a shell script present on that server.
My shell script does a simple job of moving files from that SFTP location to an HDFS location on some other machine.
Currently, there's no way to report that some of the files were not moved for reasons such as a connection failure, a file with an illegal name, an empty file, etc.
How can I report that information for each failed file move from the shell command back to the Java code?
This is my sample code:
// e.g. sftpScriptPath => /abc/xyz
//      sftpScriptCommand => sudo ./move.sh
//      arguments => set of arguments to the shell script
task = new ExecShellScript(sftpScriptPath, sftpScriptCommand, arguments);
result = m_SshExec.exec(task);
if (result.isSuccess && result.rc == 0) {
    isSuccessful = true;
    s_logger.info("Shell script executed successfully");
    s_logger.info("Return code : " + result.rc);
    s_logger.info("Sysout : " + result.sysout);
} else {
    isSuccessful = false;
    s_logger.info("Shell script execution failed");
    s_logger.info("Return code : " + result.rc);
    s_logger.info("Sysout : " + result.sysout);
}
The Result object returned from the exec method call includes:
exit status or return code (Result.rc),
standard output (stdout) (Result.sysout),
standard error (stderr) (Result.error_msg), and
an indication of success, based on return code and output (Result.isSuccess).
So, if you are committed to the current method of executing a shell script via the sshxcute framework, the simplest way would be to have the move.sh script itself report any failures while moving files, via a combination of return codes and standard output (stdout) and/or standard error (stderr) messages. Your Java code would then obtain this information from the returned Result object, as shown in the sketch below.
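For example, if you adopt a simple line protocol where move.sh prints one line per failed file (the FAILED:<file>:<reason> format here is purely hypothetical), the Java side could parse Result.sysout like this:
// Hypothetical convention: move.sh prints "FAILED:<filename>:<reason>"
// for every file it could not move, and exits non-zero if any failed
for (String line : result.sysout.split("\n")) {
    if (line.startsWith("FAILED:")) {
        String[] parts = line.split(":", 3);
        s_logger.info("Could not move '" + parts[1] + "' : " + parts[2]);
    }
}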