I'm running Jenkins locally in Docker, using the official image from Docker Hub (I also tried the Jenkins instance we have on Bluemix). I'm writing a program (test driven, for now) that triggers a job from Java and then gets the job id, using the Jenkins API.
Properties jenkinsProps = new Properties();
InputStream jenkinsPropsIs = Files.newInputStream(jenkinsPropsFilePath);
jenkinsProps.load(jenkinsPropsIs);
// for building url
String jenkinsServerUrl = jenkinsProps.getProperty(JenkinsPropertiesKeys.KEY_JENKINS_SERVER_URL);
String jobName = jenkinsProps.getProperty(JenkinsPropertiesKeys.KEY_JENKINS_JOB_NAME);
String jobRemoteAccessToken = jenkinsProps.getProperty(JenkinsPropertiesKeys.KEY_JENKINS_JOB_ACCESS_TOKEN);
// for headers
String jenkinsUser = jenkinsProps.getProperty(JenkinsPropertiesKeys.KEY_JENKINS_USERNAME);
String jenkinsUserApiToken = jenkinsProps.getProperty(JenkinsPropertiesKeys.KEY_JENKINS_API_TOKEN);
String jenkinsCrumb = jenkinsProps.getProperty(JenkinsPropertiesKeys.KEY_JENKINS_CSRF_CRUMB);
// build parameters
Map<String, String> params = new LinkedHashMap<>();
params.put("param1", "test1");
params.put("param2", "test2");
params.put("param3", "test3");
// Jenkins cause - used to identify which process triggered this job
String procID = UUID.randomUUID().toString();
params.put("cause", procID);
String url = getJenkinsBuildWithParametersUrl(jenkinsServerUrl, jobName, jobRemoteAccessToken, params);
WebRequest request = new WebRequest(); // own HttpConnection based client
// setup Jenkins crumb to avoid CSRF
request.setHeader(HEADER_NAME_JENKINS_CRUMB, jenkinsCrumb);
// user authentication (Basic + base64-encoded user:apiToken)
setupAuthenticationHeader(request, jenkinsUser, jenkinsUserApiToken);
// execute POST request
request = request.post(url);
// asserts
assertNotNull(request);
assertEquals(201, request.getResponseCode());
/* GET JOB ID */
Thread.sleep(8000); // !!! if less than 8 sec, Jenkins returns the old job number
request.reset();
setupAuthenticationHeader(request, jenkinsUser, jenkinsUserApiToken);
url = getJenkinsLastBuildUrl(jenkinsServerUrl, jobName);
// execute get request to /api/json
request = request.get(url);
assertTrue(request.isOK());
// get note & compare with proc id, to match job
String jenkinsJobProcId = null;
JsonObject jenkinsLastBuildJson = request.getResponseAsJson();
JsonArray jenkinsActions = jenkinsLastBuildJson.get("actions").getAsJsonArray();
for (JsonElement action : jenkinsActions) {
JsonObject actionJson = action.getAsJsonObject();
if (actionJson.get("_class").getAsString().equals("hudson.model.CauseAction")) {
JsonArray causeActionJsonArray = actionJson.get("causes").getAsJsonArray();
for (JsonElement cause : causeActionJsonArray) {
JsonObject causeJson = cause.getAsJsonObject();
if (causeJson.get("_class").getAsString().equals("hudson.model.Cause$RemoteCause")) {
jenkinsJobProcId = causeJson.get("note").getAsString();
break;
}
}
if (jenkinsJobProcId != null && !jenkinsJobProcId.isEmpty()) {
break;
}
}
}
System.out.println("LastBuild prodId : " + jenkinsJobProcId);
assertEquals(procID, jenkinsJobProcId);
// get jenkins build number
int lastBuildNumber = jenkinsLastBuildJson.get("number").getAsInt();
System.out.println("LastBuild buildNumber : " + lastBuildNumber);
assertTrue(lastBuildNumber > 0);
Once I trigger the job, it takes about 8 seconds for it to appear in /api/json.
Do you know what the problem could be?
How can I tune this?
Please check whether you still need a delay between the two executions.
Trigger the job:
curl -X POST http://${JENKINS_HOST}:${JENKINS_PORT}/job/${JOB_NAME}/build \
--user ${USER}:${PASSWORD} \
--data-urlencode json='{"parameter": [{"name":"delay", "value":"0sec"}]}'
Get the job info:
curl http://${JENKINS_HOST}:${JENKINS_PORT}/job/${JOB_NAME}/api/json \
--user ${USER}:${PASSWORD}
If you still need to wait around 8 seconds, check the quiet period setting of the job. If it's not yet enabled, enable it and set the period to 0 seconds. This should remove the delay between the executions.
Depending on the workload of the Jenkins instance, it might still be necessary to wait a short period, even with a quiet period of zero seconds.
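If you want to avoid polling lastBuild altogether, note that Jenkins answers the trigger POST with a Location header pointing at the queue item of exactly the build you started. A minimal sketch, assuming the plain JDK HttpURLConnection (buildUrl and basicAuth are placeholders for the values built above):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

static String triggerJob(String buildUrl, String basicAuth) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) new URL(buildUrl).openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Authorization", basicAuth); // "Basic " + base64(user:apiToken)
    conn.connect(); // Jenkins answers 201 Created
    // Location is the queue item URL, e.g. http://host:port/queue/item/123/
    return conn.getHeaderField("Location");
}

Poll <location>api/json until its "executable" field appears; executable.number is then the build number of the build you triggered, so there is no need to match causes against lastBuild.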
Related
We are creating a Kubernetes Job using the Java Kubernetes client API (v5.12.2), as below.
I am stuck at two places. Could someone please help with these?
podList.getItems().size() in the code snippet below sometimes returns zero, even though I can see the pod getting created, along with pods of other existing jobs.
How do I specify a particular label for the job's pod?
KubernetesClient kubernetesClient = new DefaultKubernetesClient();
String namespace = System.getenv(POD_NAMESPACE);
String jobName = TextUtils.concatenateToString("flatten" + Constants.HYPHEN + flattenId);
Job jobRequest = createJob(flattenId, authValue);
var jobResult = kubernetesClient.batch().v1().jobs().inNamespace(namespace)
.create(jobRequest);
PodList podList = kubernetesClient.pods().inNamespace(namespace)
.withLabel("job-name", jobName).list();
// Wait for pod to complete
var pods = podList.getItems().size();
var terminalPodStatus = List.of("succeeded", "failed");
_LOGGER.info("pods created size:" + pods);
if (pods > 0) {
// returns zero some times.
var k8sPod = podList.getItems().get(0);
var podName = k8sPod.getMetadata().getName();
kubernetesClient.pods().inNamespace(namespace).withName(podName)
.waitUntilCondition(pod -> {
var podPhase = pod.getStatus().getPhase();
//some logic
return terminalPodStatus.contains(podPhase.toLowerCase());
}, JOB_TIMEOUT, TimeUnit.MINUTES);
kubernetesClient.close();
}
private Job createJob(String flattenId, String authValue) {
return new JobBuilder()
.withApiVersion(API_VERSION)
.withNewMetadata().withName(jobName)
.withLabels(labels)
.endMetadata()
.withNewSpec()
.withTtlSecondsAfterFinished(300)
.withBackoffLimit(0)
.withNewTemplate()
.withNewMetadata().withAnnotations(LINKERD_INJECT_ANNOTATIONS)
.endMetadata()
.withNewSpec()
.withServiceAccount(Constants.TEST_SERVICEACCOUNT)
.addNewContainer()
.addAllToEnv(envVars)
.withImage(System.getenv(BUILD_JOB_IMAGE))
.withName("test")
.withCommand("/bin/bash", "-c", "java -jar test.jar")
.endContainer()
.withRestartPolicy(RESTART_POLICY_NEVER)
.endSpec()
.endTemplate()
.endSpec()
.build();
}
Pods are not instantly created as a consequence of creating a Job: the Job controller needs to become active and create the pods accordingly. Depending on the load on your control plane and the number of Job instances, you may need to wait more or less time.
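So rather than listing the pods once, right after creating the Job, retry until they show up. A rough sketch using the same client and label selector as in the question (the 60-second deadline and 500 ms backoff are arbitrary choices, and Thread.sleep needs InterruptedException handled or declared):

PodList podList;
long deadline = System.currentTimeMillis() + 60_000;
do {
    podList = kubernetesClient.pods().inNamespace(namespace)
            .withLabel("job-name", jobName).list();
    if (!podList.getItems().isEmpty()) {
        break; // the Job controller has created the pod(s)
    }
    Thread.sleep(500); // back off briefly before asking again
} while (System.currentTimeMillis() < deadline);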
I changed my config-test.json, but the application did not print "new config:..."; the handler registered with setBeforeScanHandler did print.
JsonObject jsonConfig = new JsonObject();
jsonConfig.put("path", "test.json");
ConfigStoreOptions config = new ConfigStoreOptions();
config.setType("file").setOptional(true).setConfig(jsonConfig);
ConfigRetrieverOptions options =
new ConfigRetrieverOptions().addStore(config).setScanPeriod(5000);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, options);
configRetriever.setBeforeScanHandler(h -> {
System.out.println("config:" + configRetriever.getCachedConfig());
});
configRetriever.listen(change -> {
JsonObject newConfiguration = change.getNewConfiguration();
System.out.println("new config:" + newConfiguration);
JsonObject old = change.getPreviousConfiguration();
System.out.println("old config:" + old);
});
The Javadoc of setBeforeScanHandler says:
Registers a handler called before every scan. This method is mostly used for logging purpose.
This means that in your case the program will check for changes in the JSON file every five seconds, as specified by .setScanPeriod(5000). If the program finds a change during one of those checks, the handler registered with setBeforeScanHandler triggers first, followed by the listener registered with listen.
The order of messages when there is a change will be:
config:{...}
new config:{...}
old config:{...}
The program does not register the config change immediately, but only on the regularly scheduled interval checks, as is stated in the setScanPeriod JavaDoc:
Configures the scan period, in ms. This is the time amount between two
checks of the configuration updates.
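If five seconds is too slow for your use case, the same option accepts a shorter period (the value is in milliseconds), for example:

ConfigRetrieverOptions options = new ConfigRetrieverOptions()
        .addStore(config)
        .setScanPeriod(1000); // check the file for changes every second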
I have a Java application that uses the Google Cloud Print API (https://github.com/jittagornp/GoogleCloudPrint) to print PDFs. In most cases it works like a charm, but every now and then (approx. 1 out of 100) it returns an error that reads "Det går inte att göra ändringar i ett avslutat jobb".
Roughly translated to English: "Can't make changes to a finished job".
The kicker is that the pages print perfectly every time, with or without the error.
Code looks like this for every call to GCP:
public static SubmitJobResponse submitPrintJob(String filename, String type){
String jsonTicket;
String printerId;
SubmitJobResponse response = null;
String jobtitle;
jobtitle = filename.substring(filename.lastIndexOf('/') + 1);
// Login to GCP
GoogleCloudPrint cloudPrint = gcpLogin();
jsonTicket = "{'version':'1.0','print':{'vendor_ticket_item':[],'color':{'type':1},'copies':{'copies':4}}}";
printerId = session("printerlabel");
try {
// Get file as byte array
Path path = Paths.get(filename);
byte[] content = Files.readAllBytes(path);
Gson gson = new Gson();
Ticket ticket = gson.fromJson(jsonTicket, Ticket.class);
String json = gson.toJson(ticket);
//create job
SubmitJob submitJob = new SubmitJob();
submitJob.setContent(content);
submitJob.setContentType("application/pdf");
submitJob.setPrinterId(printerId);
submitJob.setTicket(ticket);
submitJob.setTitle(jobtitle);
//send job
response = cloudPrint.submitJob(submitJob);
Logger.debug("PrinterID => {}", printerId);
Logger.debug("submit job response => {}", response.isSuccess() + "," + response.getMessage());
if(response.isSuccess()) {
Logger.debug("submit job id => {}", response.getJob().getId());
}
cloudPrint.disconnect();
} catch (Exception ex) {
Logger.warn(null, ex);
}
return response;
}
The API uses a call to "/submit?output=json&printerid=" at GCP, so why does it say that I'm trying to change a finished job? I'm not submitting a job id, only a printer id.
How do I get rid of these false errors?
I want to use the Elasticsearch bulk API from Java and am wondering how I can set the batch size.
Currently I am using it like this:
BulkRequestBuilder bulkRequest = getClient().prepareBulk();
while(hasMore) {
bulkRequest.add(getClient().prepareIndex(indexName, indexType, artist.getDocId()).setSource(json));
hasMore = checkHasMore();
}
BulkResponse bResp = bulkRequest.execute().actionGet();
//To check failures
log.info("Has failures? {}", bResp.hasFailures());
Any idea how I can set the bulk/batch size?
It mainly depends on the size of your documents, available resources on the client and the type of client (transport client or node client).
The node client is aware of the shards in the cluster and sends the documents directly to the nodes that hold the shards where they are supposed to be indexed. The transport client, on the other hand, is a normal client that sends its requests to a list of nodes in round-robin fashion; the bulk request would then be sent to one node, which becomes your gateway for indexing.
Since you're using the Java API, I suggest you have a look at the BulkProcessor, which makes it much easier and more flexible to index in bulk. You can define a maximum number of actions, a maximum size, and a maximum time interval since the last bulk execution. It executes the bulk automatically for you when needed. You can also set a maximum number of concurrent bulk requests.
After you created the BulkProcessor like this:
BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        logger.info("Going to execute new bulk composed of {} actions", request.numberOfActions());
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        logger.info("Executed bulk composed of {} actions", request.numberOfActions());
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        logger.warn("Error executing bulk", failure);
    }
}).setBulkActions(bulkSize).setConcurrentRequests(maxConcurrentBulk).build();
You just have to add your requests to it:
bulkProcessor.add(indexRequest);
and close it at the end to flush any eventual requests that might have not been executed yet:
bulkProcessor.close();
To finally answer your question: the nice thing about the BulkProcessor is also that it has sensible defaults: 5 MB of size, 1000 actions, 1 concurrent request, no flush interval (which might be useful to set).
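For completeness, a sketch that sets all four thresholds explicitly (listener stands for the BulkProcessor.Listener shown above; the concrete values are just examples):

BulkProcessor bulkProcessor = BulkProcessor.builder(client, listener)
        .setBulkActions(1000)                               // flush after 1000 actions
        .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)) // ...or after 5 MB of payload
        .setFlushInterval(TimeValue.timeValueSeconds(5))    // ...or after 5 seconds
        .setConcurrentRequests(1)                           // allow one bulk in flight
        .build();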
You need to count the requests in your bulk request builder; when the count hits your batch size limit, execute the bulk and start a fresh builder. Here is example code:
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "MyClusterName").build();
TransportClient client = new TransportClient(settings);
String hostname = "myhost ip";
int port = 9300;
client.addTransportAddress(new InetSocketTransportAddress(hostname, port));

BulkRequestBuilder bulkBuilder = client.prepareBulk();
BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream("my_file_path")));
long bulkBuilderLength = 0;
String readLine = "";
String index = "my_index_name";
String type = "my_type_name";
String id = "";
ObjectMapper mapper = new ObjectMapper(); // reuse a single mapper instead of creating one per line

while ((readLine = br.readLine()) != null) {
    id = somefunction(readLine);
    String json = mapper.writeValueAsString(readLine);
    bulkBuilder.add(client.prepareIndex(index, type, id).setSource(json));
    bulkBuilderLength++;
    if (bulkBuilderLength % 1000 == 0) {
        logger.info("##### " + bulkBuilderLength + " data indexed.");
        BulkResponse bulkRes = bulkBuilder.execute().actionGet();
        if (bulkRes.hasFailures()) {
            logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
        }
        bulkBuilder = client.prepareBulk(); // start a fresh builder for the next batch
    }
}
br.close();

// flush whatever is left over from the last partial batch
if (bulkBuilder.numberOfActions() > 0) {
    logger.info("##### " + bulkBuilderLength + " data indexed.");
    BulkResponse bulkRes = bulkBuilder.execute().actionGet();
    if (bulkRes.hasFailures()) {
        logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
    }
}
Hope this helps.
I'm trying to convert a VBScript to Java using JACOB, the Java COM bridge library.
The Create method in the VBScript accepts an [out] param and sets it during method execution, and I couldn't figure out how to retrieve it via JACOB.
VBScript in question:
Function CreateProcess(strComputer, strCommand)
Dim objWMIService, objProcess
Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set objProcess = objWMIService.Get("Win32_Process")
errReturn = objProcess.Create (strCommand, Null, Null, intProcessID)
Set objWMIService = Nothing
Set objProcess = Nothing
CreateProcess = intProcessID
End Function
intProcessID is an [out] param set after method execution (see the Create API contract).
Converted Java code (incomplete and modified slightly for demonstration):
public static void createProcess() {
String host = "localhost";
String connectStr = String
.format("winmgmts:{impersonationLevel=impersonate}!\\\\%s\\root\\CIMV2",
host);
ActiveXComponent axWMI = new ActiveXComponent(connectStr);
Variant vCollection = axWMI.invoke("get", new Variant("Win32_Process"));
Dispatch d = vCollection.toDispatch();
Integer processId = null;
int result = Dispatch.call(d, "Create", "notepad.exe", null, null, processId)
.toInt();
System.out.println("Result:" + result);
// WORKS FINE until here, i.e. notepad launches properly; however processId still seems to be null.
// The following commented code is wrong - it doesn't work:
//Variant v = Dispatch.get(d, "processId"); // even ProcessId doesn't work
//int pId = v.getInt();
//System.out.println("process id:"
// + pId);
// what is the right way to get the process ID set by 'Create' method?
}
It would be great if you could provide some pointers or relevant code. Ask me more if needed. Thanks in advance.
Replacing
Integer processId = null;
with
Variant processId = new Variant(0, true);
should solve the problem. You should then have the process ID of the notepad.exe process in the processId variant, and it can be fetched with
processId.getIntRef()
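Putting it together, a minimal sketch of the corrected tail of createProcess (same d as in the question):

// by-ref Variant that the COM method fills in as its [out] parameter
Variant processId = new Variant(0, true);
int result = Dispatch.call(d, "Create", "notepad.exe", null, null, processId).toInt();
System.out.println("Result: " + result);
System.out.println("Process id: " + processId.getIntRef());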