I have a simple Eureka Server, Config Server, Zuul Gateway, and a test service (named aService below) registered in Eureka.
In addition, an implementation of FallbackProvider is registered, and timeoutInMilliseconds for the default command is 10000.
I send a request to aService, which sleeps for 15 seconds and prints a tick every second. After 10 seconds a HystrixTimeoutException occurs and my custom fallbackResponse is invoked, but the ticks keep going until the full 15 seconds are up.
My question is: why is the request not interrupted? Could someone please explain what Hystrix and Zuul do after the Hystrix timeout?
Dependency version:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-commons-dependencies</artifactId>
<version>Edgware.SR2</version>
</dependency>
<dependency>
<groupId>com.netflix.zuul</groupId>
<artifactId>zuul-core</artifactId>
<version>1.3.0</version>
</dependency>
Some of my hystrix configurations:
zuul.servletPath=/
hystrix.command.default.execution.isolation.strategy=THREAD
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=10000
hystrix.command.aService.execution.isolation.strategy=THREAD
ribbon.ReadTimeout=60000
ribbon.ConnectTimeout=3000
Part of my FallbackProvider:
@Component
public class ServerFallback implements FallbackProvider {
    @Override
    public String getRoute() {
        return "*";
    }

    @Override
    public ClientHttpResponse fallbackResponse() {
        // some logs
        return simpleClientHttpResponse();
    }

    @Override
    public ClientHttpResponse fallbackResponse(Throwable cause) {
        // some logs
        return simpleClientHttpResponse();
    }
}
When using Zuul with Ribbon (the default), the executionIsolationStrategy in HystrixCommandProperties is overridden by AbstractRibbonCommand, which uses SEMAPHORE by default. With semaphore isolation the request is not interrupted when the timeout fires; Hystrix only returns the fallback to the caller while the underlying request runs to completion. See "ZuulProxy fails with 'RibbonCommand timed-out and no fallback available' when it should do failover".
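If you need the underlying request to actually be interrupted on timeout, the Ribbon command has to run with thread isolation. A minimal sketch in application.properties, assuming your Spring Cloud Netflix version supports the zuul.ribbonIsolationStrategy property (it is not part of the configuration from the question):

zuul.ribbonIsolationStrategy=THREAD
# interruptOnTimeout defaults to true; shown here only for clarity
hystrix.command.default.execution.isolation.thread.interruptOnTimeout=true

Even with THREAD isolation, the remote call is only cancelled if the blocking I/O in use responds to thread interruption.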
Problem:
When the further processing of a message fails, the message is not requeued but is lost for good. Otherwise everything works fine.
Environment:
Quarkus application on AKS with camel-quarkus-amqp, queue = Azure service bus
We have similar applications (service bus queue with the same properties, similar Camel route setup) running on a JBoss server that do not show this behaviour.
Code:
The application was originally built on Quarkus 1.11.3.Final, but even after an update to 2.7.6.Final the behaviour is the same.
from pom.xml (standard, as generated):
<properties>
...
<quarkus.platform.version>2.7.6.Final</quarkus.platform.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.camel.quarkus</groupId>
<artifactId>camel-quarkus-amqp</artifactId>
</dependency>
<dependency>
<groupId>org.apache.camel.quarkus</groupId>
<artifactId>camel-quarkus-bean</artifactId>
</dependency>
</dependencies>
application.properties:
quarkus.qpid-jms.url=failover:(amqps://xyz.servicebus.windows.net)
quarkus.qpid-jms.username=xxx
quarkus.qpid-jms.password=xxx
Implementation - Consumer:
public class MessageConsumer extends RouteBuilder {
    @Inject private UpdateService updateService;

    @Override
    public void configure() {
        from("amqp:queue:" + "queueName")
            .routeId(MessageConsumer.class.getName() + ".consumeQueue")
            .process(exchange -> exchange.getIn().setBody(UUID.fromString(exchange.getIn().getBody(String.class))))
            .bean(updateService);
    }
}
Implementation - Producer:
import javax.inject.Inject;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import java.util.UUID;

public class MessageRepository {
    @Inject ConnectionFactory connectionFactory;

    public void sendMessage(UUID uuid) {
        try (JMSContext context = connectionFactory.createContext(JMSContext.AUTO_ACKNOWLEDGE)) {
            context.createProducer().send(context.createQueue("queueName"), uuid.toString());
        } catch (Exception e) {
            throw new ApplicationException("Invalid message");
        }
    }
}
Question:
What do I need to change or add so that messages are requeued (not acknowledged) when an exception is thrown inside the bean processing of updateService in the Camel route?
The likely issue here is that you need to enable transactions on the route, so that a failure while consuming a message triggers a rollback of the transaction when an error is thrown.
There is considerable documentation about this and how transactions are managed in the Camel docs here.
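For illustration, a minimal sketch of what the transacted route could look like. It assumes a JMS transaction manager is available to Camel and uses the transacted endpoint option that camel-amqp inherits from camel-jms; queueName and updateService are the names from the question:

from("amqp:queue:queueName?transacted=true")
    .transacted() // route-level transaction: an exception triggers a rollback
    .process(exchange -> exchange.getIn().setBody(
            UUID.fromString(exchange.getIn().getBody(String.class))))
    .bean(updateService); // if this throws, the message is rolled back and redelivered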
Since camel-quarkus versions below 3.4.0 lack transactional support for AMQP, I needed to do this in plain JMS without Camel. Here is my implementation as a reference for others who may face the same or a similar problem:
// Note: log (e.g. Lombok @Slf4j) and messagingProperties (holder of the configured
// queue name) are used below; their declarations were omitted in the original post.
public class MessageConsumer implements Runnable {
    private static final long WAIT_UNTIL_RECONNECT = 30000; // 30 seconds

    @Inject private UpdateService updateService;
    @Inject ConnectionFactory connectionFactory;

    private final ExecutorService scheduler = Executors.newSingleThreadExecutor();

    void onStart(@Observes StartupEvent ev) { scheduler.submit(this); }
    void onStop(@Observes ShutdownEvent ev) { scheduler.shutdown(); }

    @SneakyThrows
    @Override
    public void run() {
        // loop here to make sure that it will reconnect to the queue in case of disconnect
        while (true) {
            try {
                openConnectionAndConsume();
            } catch (Exception e) {
                log.error("Exception in connection: ", e);
                Thread.sleep(WAIT_UNTIL_RECONNECT);
            }
        }
    }

    private void openConnectionAndConsume() {
        // CLIENT_ACKNOWLEDGE: a message is only removed from the queue once acknowledge() is called
        try (JMSContext context = connectionFactory.createContext(JMSContext.CLIENT_ACKNOWLEDGE);
             JMSConsumer consumer = context.createConsumer(context.createQueue(messagingProperties.getQueue()))) {
            while (true) {
                Message message = consumer.receive();
                handleMessage(message);
            }
        }
    }

    private void handleMessage(Message message) {
        if (message == null) return;
        try {
            updateService.update(UUID.fromString(message.getBody(String.class)));
            message.acknowledge();
        } catch (Exception e) {
            // not acknowledging here, so the broker redelivers the message
            log.error("Error in route: ", e);
        }
    }
}
My Spring-Boot Application is running on Kubernetes.
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.5.12</version>
</parent>
According to the Spring Boot reference, livenessProbe and readinessProbe should be enabled automatically:
https://docs.spring.io/spring-boot/docs/2.5.14/reference/htmlsingle/#actuator.endpoints.kubernetes-probes
These health groups are only enabled automatically if the application is running in a Kubernetes environment.
They will also be exposed as separate HTTP Probes using Health Groups: /actuator/health/liveness and /actuator/health/readiness.
But the endpoints (/actuator/health/liveness and /actuator/health/readiness) in my Spring Boot application return 404. What's wrong here?
The livenessState and readinessState health components are enabled, but the health groups are not.
I debugged the code and found that the method org.springframework.boot.actuate.autoconfigure.health.HealthEndpointConfiguration.HealthEndpointGroupsBeanPostProcessor#postProcessAfterInitialization never applies, because no bean is an instance of HealthEndpointGroups. Why?
/**
 * {@link BeanPostProcessor} to invoke {@link HealthEndpointGroupsPostProcessor}
 * beans.
 */
static class HealthEndpointGroupsBeanPostProcessor implements BeanPostProcessor {

    private final ObjectProvider<HealthEndpointGroupsPostProcessor> postProcessors;

    HealthEndpointGroupsBeanPostProcessor(ObjectProvider<HealthEndpointGroupsPostProcessor> postProcessors) {
        this.postProcessors = postProcessors;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        // there's no bean instanceof HealthEndpointGroups here, why?
        if (bean instanceof HealthEndpointGroups) {
            return applyPostProcessors((HealthEndpointGroups) bean);
        }
        return bean;
    }

    private Object applyPostProcessors(HealthEndpointGroups bean) {
        for (HealthEndpointGroupsPostProcessor postProcessor : this.postProcessors.orderedStream()
                .toArray(HealthEndpointGroupsPostProcessor[]::new)) {
            bean = postProcessor.postProcessHealthEndpointGroups(bean);
        }
        return bean;
    }
}
I tried to run the application locally, adding the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables to enable the probes, but that didn't work either.
/**
 * {@link SpringBootCondition} to enable or disable probes.
 * <p>
 * Probes are enabled if the dedicated configuration property is enabled or if the
 * Kubernetes cloud environment is detected/enforced.
 */
static class ProbesCondition extends SpringBootCondition {

    private static final String ENABLED_PROPERTY = "management.endpoint.health.probes.enabled";

    private static final String DEPRECATED_ENABLED_PROPERTY = "management.health.probes.enabled";

    @Override
    public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
        Environment environment = context.getEnvironment();
        ConditionMessage.Builder message = ConditionMessage.forCondition("Probes availability");
        ConditionOutcome outcome = onProperty(environment, message, ENABLED_PROPERTY);
        if (outcome != null) {
            return outcome;
        }
        outcome = onProperty(environment, message, DEPRECATED_ENABLED_PROPERTY);
        if (outcome != null) {
            return outcome;
        }
        if (CloudPlatform.getActive(environment) == CloudPlatform.KUBERNETES) {
            return ConditionOutcome.match(message.because("running on Kubernetes"));
        }
        return ConditionOutcome.noMatch(message.because("not running on a supported cloud platform"));
    }

    private ConditionOutcome onProperty(Environment environment, ConditionMessage.Builder message,
            String propertyName) {
        String enabled = environment.getProperty(propertyName);
        if (enabled != null) {
            boolean match = !"false".equalsIgnoreCase(enabled);
            return new ConditionOutcome(match, message.because("'" + propertyName + "' set to '" + enabled + "'"));
        }
        return null;
    }
}
I run the application on Kubernetes, but it doesn't work for me. I found that after removing the Spring Cloud Sleuth dependencies, it runs normally. Is there a conflict between Spring Boot and Spring Cloud Sleuth?
Removing the following dependencies:
<!--traceId -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
yaml:
management:
server:
port: 9235
endpoints:
web:
exposure:
include: ${MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE:*}
k8s:
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: {{ .Values.deployment.managementPort }}
failureThreshold: 3
initialDelaySeconds: 150
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: {{ .Values.deployment.managementPort }}
failureThreshold: 3
initialDelaySeconds: 180
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
AFAIK, health groups are enabled automatically only if the application meets either of the following two conditions:
1) It runs in a Kubernetes deployment environment.
Spring Boot auto-detects Kubernetes deployment environments by checking the environment for "*_SERVICE_HOST" and "*_SERVICE_PORT" variables. You can override this detection with the spring.main.cloud-platform configuration property. Spring Boot helps you manage the state of your application and export it with HTTP Kubernetes probes using Actuator.
2) The management.endpoint.health.probes.enabled property is set to true (or its deprecated predecessor management.health.probes.enabled).
If the application meets either of these conditions, the auto-configuration registers the liveness and readiness health groups.
Refer to Baeldung's Liveness and Readiness Probes in Spring Boot and Auto-Configurations by Mona Mohamadinia for more information.
Also see the GitHub issue Kubernetes readiness probe endpoint returning 404 #22562 for more information.
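If you just need the endpoints while investigating the Sleuth conflict, the condition shown above can be satisfied explicitly. A minimal sketch in application.properties, using only properties that appear in the ProbesCondition code and the answer above:

# enable the probes regardless of platform detection
management.endpoint.health.probes.enabled=true
# or force Kubernetes platform detection instead of relying on *_SERVICE_* variables
spring.main.cloud-platform=kubernetes

With either property set, /actuator/health/liveness and /actuator/health/readiness should be registered even outside Kubernetes.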
I am triggering a Lambda function from an SQS event with the following code:
@Override
public Void handleRequest(SQSEvent sqsEvent, Context context) {
    for (SQSMessage sqsMessage : sqsEvent.getRecords()) {
        final String body = sqsMessage.getBody();
        try {
            // do stuff here
        } catch (Exception ex) {
            // send to DLQ
        }
    }
    return null;
}
The "do stuff" is calling another Lambda function with the following code:
private final AWSLambda client;
private final String functionName;

public LambdaService(AWSLambdaAsync client, String functionName) {
    this.client = client;
    this.functionName = functionName;
}

public void runWithPayload(String payload) {
    logger.info("Invoking lambda {} with payload {}", functionName, payload);
    final InvokeRequest request = new InvokeRequest();
    request.withFunctionName(functionName).withPayload(payload);
    final InvokeResult invokeResult = client.invoke(request);
    final Integer statusCode = invokeResult.getStatusCode();
    logger.info("Invoked lambda {} with payload {}. Got status code {} and response payload {}",
            functionName,
            payload,
            statusCode,
            StandardCharsets.UTF_8.decode(invokeResult.getPayload()).toString());
    if (!statusCode.equals(200)) {
        throw new IllegalStateException(String.format("There was an error executing the lambda function %s with payload %s", functionName, payload));
    }
}
I am using the following libraries:
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-core</artifactId>
<version>1.2.0</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-events</artifactId>
<version>2.2.6</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-sqs</artifactId>
<version>1.11.505</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-lambda</artifactId>
<version>1.11.505</version>
</dependency>
The problem is that the SQS message is not removed from the queue and gets reprocessed over and over, every 30 seconds, which is exactly the Default Visibility Timeout. As far as I know, if the Lambda consuming the SQS messages terminates properly, the message should automatically be deleted from the queue, but this is not happening.
I don't think any error is happening in the Lambda: nothing arrives in the DLQ (and I have a catch-all block) and there is no stack trace in the CloudWatch logs. I am a bit confused about what's happening here; does anyone have an idea?
Unless something changed recently, I don't think the AWS SDK for Java automatically deletes the message from the queue. You need to write the code to do that yourself.
I would love to be proven wrong on this one; please share the doc excerpt I missed.
Code samples:
https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-sqs-messages.html
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues-getting-started-java.html
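For illustration, a minimal sketch of deleting each message explicitly after successful processing, adapted from the handler in the question. The SQS client and the QUEUE_URL environment variable are assumptions, not part of the original code:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
private final String queueUrl = System.getenv("QUEUE_URL"); // assumed to be configured

@Override
public Void handleRequest(SQSEvent sqsEvent, Context context) {
    for (SQSMessage sqsMessage : sqsEvent.getRecords()) {
        try {
            // do stuff here
            // delete only after processing succeeded; otherwise the message
            // becomes visible again once the visibility timeout expires
            sqs.deleteMessage(queueUrl, sqsMessage.getReceiptHandle());
        } catch (Exception ex) {
            // send to DLQ
        }
    }
    return null;
}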
I am working in a microservices architecture that works as follows:
I have two service web applications (REST services) that register themselves correctly in a Eureka server. Then I have a client application that fetches the Eureka registry and, using Ribbon as a client-side load balancer, determines which service application to call (at the moment a simple round robin is used).
My problem is that when I stop one of the service applications (they currently run in Docker containers, btw), Eureka does not remove it from the registry immediately (it seems to take a few minutes), so Ribbon still thinks there are two available services, making around 50% of the calls fail.
Unfortunately I am not using Spring Cloud (for reasons out of my control). So my config for Eureka is as follows.
For the service applications:
eureka.registration.enabled=true
eureka.name=skeleton-service
eureka.vipAddress=skeleton-service
eureka.statusPageUrlPath=/health/ping
eureka.healthCheckUrlPath=/health/check
eureka.port.enabled=8042
eureka.port=8042
eureka.appinfo.replicate.interval=5
## configuration related to reaching the eureka servers
eureka.preferSameZone=true
eureka.shouldUseDns=false
eureka.serviceUrl.default=http://eureka-container:8080/eureka/v2/
eureka.decoderName=JacksonJson
For the client application (Eureka + Ribbon):
###Eureka Client configuration for Sample Eureka Client
eureka.registration.enabled=false
eureka.name=skeleton-web
eureka.vipAddress=skeleton-web
eureka.statusPageUrlPath=/health/ping
eureka.healthCheckUrlPath=/health/check
eureka.port.enabled=8043
eureka.port=8043
## configuration related to reaching the eureka servers
eureka.preferSameZone=true
eureka.shouldUseDns=false
eureka.serviceUrl.default=http://eureka-container:8080/eureka/v2/
eureka.decoderName=JacksonJson
eureka.renewalThresholdUpdateIntervalMs=3000
#####################
# RIBBON STUFF HERE #
#####################
sample-client.ribbon.NIWSServerListClassName=com.netflix.niws.loadbalancer.DiscoveryEnabledNIWSServerList
# expressed in milliseconds
sample-client.ribbon.ServerListRefreshInterval=3000
# movieservice is the virtual address that the target server(s) uses to register with Eureka server
sample-client.ribbon.DeploymentContextBasedVipAddresses=skeleton-service
I faced a similar issue in my development. There are multiple things I tried that worked for me.
1) Instead of relying on the Eureka registry, use only the underlying Ribbon and adapt its health-check mechanism to your needs by providing your own IPing implementation:
public class PingUrl implements com.netflix.loadbalancer.IPing {

    private final boolean isSecure;
    private final String pingAppendString; // path appended to the server address, e.g. "/ping"

    public PingUrl(String pingAppendString) {
        this(false, pingAppendString);
    }

    public PingUrl(boolean isSecure, String pingAppendString) {
        this.isSecure = isSecure;
        this.pingAppendString = pingAppendString;
    }

    @Override
    public boolean isAlive(Server server) {
        String urlStr = isSecure ? "https://" : "http://";
        urlStr += server.getId();
        urlStr += pingAppendString;
        boolean isAlive = false;
        try {
            ResponseEntity<String> response = new RestTemplate().getForEntity(urlStr, String.class);
            isAlive = (response.getStatusCode().value() == 200);
        } catch (Exception e) {
            // any failure to reach the ping URL marks the server as down
        }
        return isAlive;
    }
}
Override the load-balancing behaviour:
@SpringBootApplication
@EnableZuulProxy
@RibbonClients(defaultConfiguration = LoadBalancer.class)
@ComponentScan(basePackages = {"com.test"})
public class APIGateway {
    public static void main(String[] args) throws Exception {
        SpringApplication.run(APIGateway.class, args);
    }
}
public class LoadBalancer {
    @Autowired
    IClientConfig ribbonClientConfig;

    @Bean
    public IPing ribbonPing() {
        return new PingUrl(getRoute() + "/ping");
    }

    private String getRoute() {
        return RequestContext.getCurrentContext().getRequest().getServletPath();
    }
}
Provide an availability-filtering rule:
public class AvailabilityBasedServerSelectionRule extends AvailabilityFilteringRule {
    @Override
    public Server choose(Object key) {
        Server chosenServer = super.choose(key);
        int count = 1;
        List<Server> reachableServers = this.getLoadBalancer().getReachableServers();
        List<Server> allServers = this.getLoadBalancer().getAllServers();
        if (reachableServers.size() > 0) {
            while (!reachableServers.contains(chosenServer) && count++ < allServers.size()) {
                chosenServer = reachableServers.get(0);
            }
        }
        return chosenServer;
    }
}
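Since you are not using Spring Cloud, the same classes can alternatively be plugged in through Ribbon's client configuration properties, matching the style of your existing config. A sketch, assuming the implementations are on the classpath and have no-arg constructors that Ribbon can instantiate reflectively (the key names are standard Ribbon CommonClientConfigKey entries; com.test is the package used in this answer's example):

sample-client.ribbon.NFLoadBalancerPingClassName=com.test.PingUrl
sample-client.ribbon.NFLoadBalancerRuleClassName=com.test.AvailabilityBasedServerSelectionRule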
2) You can specify the time interval for refreshing the server list:
ribbon.eureka.ServerListRefreshInterval={time in ms}
I wrote a Job of two Steps, one of the two being a partitioning step.
The partition step uses a TaskExecutorPartitionHandler and runs 5 slave steps in threads.
The job is started in the main() method, but it does not stop after every slave ItemReader has returned null (the finish signal). Even after the program has run past the last line of code in main() (which is System.out.println("Finished")), the process won't exit; it hangs in memory doing nothing, and I have to press the stop button on Eclipse's panel to kill it.
The following is the content of the JobExecution returned by JobLauncher.run(), signaling a successful job run:
JobExecution: id=0, version=2, startTime=Fri Nov 27 06:05:23 CST 2015, endTime=Fri Nov 27 06:05:39 CST 2015, lastUpdated=Fri Nov 27 06:05:39 CST 2015, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=0, version=0, Job=[jobCensoredPages]], jobParameters=[{}]
7217
Finished
Why does a Spring Batch program with a successful job run still hang?
Please point me to where I can work this out. I suspect the multithreading part managed by Spring Batch does not stop.
Simple job run code:
Job job = (Job) context.getBean("jobPages");
try {
    JobParameters p = new JobParametersBuilder().toJobParameters();
    JobExecution result = launcher.run(job, p);
    System.out.println(result.toString());
} catch (Exception e) {
    e.printStackTrace();
}
context.getBean("idSet");
AtomicInteger n = (AtomicInteger) context.getBean("pageCount");
System.out.println(n.get());
System.out.println("Finished");
Configuration for the Partitioner and PartitionHandler:
@Bean @Autowired
public PartitionHandler beanPartitionHandler(
        TaskExecutor beanTaskExecutor,
        @Qualifier("beanStepSlave") Step beanStepSlave
) throws Exception {
    TaskExecutorPartitionHandler h = new TaskExecutorPartitionHandler();
    h.setGridSize(5);
    h.setTaskExecutor(beanTaskExecutor);
    h.setStep(beanStepSlave);
    h.afterPropertiesSet();
    return h;
}

@Bean public TaskExecutor beanTaskExecutor() {
    ThreadPoolTaskExecutor e = new ThreadPoolTaskExecutor();
    e.setMaxPoolSize(5);
    e.setCorePoolSize(5);
    e.afterPropertiesSet();
    return e;
}
The only step and its slave step:
@Bean public Step beanStepMaster(
        Step beanStepSlave,
        Partitioner beanPartitioner,
        PartitionHandler beanPartitionHandler
) throws Exception {
    return stepBuilderFactory().get("stepMaster")
            .partitioner(beanStepSlave)
            .partitioner("stepSlave", beanPartitioner)
            .partitionHandler(beanPartitionHandler)
            .build();
}

@Bean @Autowired
public Step beanStepSlave(
        ItemReader<String> beanReaderTest,
        ItemProcessor<String, String> beanProcessorTest,
        ItemWriter<String> beanWriterTest) throws Exception {
    return stepBuilderFactory().get("stepSlave")
            .<String, String>chunk(1)
            .reader(beanReaderTest)
            .processor(beanProcessorTest)
            .writer(beanWriterTest)
            .build();
}
My pom.xml file
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>RELEASE</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>4.2.3.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>4.2.3.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-tx</artifactId>
<version>4.2.3.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.batch</groupId>
<artifactId>spring-batch-core</artifactId>
<version>RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.retry</groupId>
<artifactId>spring-retry</artifactId>
<version>1.1.2.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
<version>RELEASE</version>
</dependency>
I also had difficulty with my partitioned Spring batch application hanging on completion when I used a ThreadPoolTaskExecutor. In addition, I saw that the executor was not allowing the work of all the partitions to finish.
I found two ways of solving those issues.
The first solution is using a SimpleAsyncTaskExecutor instead of a ThreadPoolTaskExecutor. If you do not mind the extra overhead in re-creating threads, this is a simple fix.
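For reference, a minimal sketch of that first fix, swapping the executor bean from the question over to a SimpleAsyncTaskExecutor; the concurrency limit of 5 mirrors the original pool size:

@Bean public TaskExecutor beanTaskExecutor() {
    // creates a fresh thread per task instead of keeping a pool alive,
    // so no lingering pool threads outlive the job and block JVM exit
    SimpleAsyncTaskExecutor e = new SimpleAsyncTaskExecutor("partition-");
    e.setConcurrencyLimit(5);
    return e;
}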
The second solution is creating a JobExecutionListener that calls shutdown on the ThreadPoolTaskExecutor.
I created a JobExecutionListener like this:
@Bean
public JobExecutionListener jobExecutionListener(ThreadPoolTaskExecutor executor) {
    return new JobExecutionListener() {
        private ThreadPoolTaskExecutor taskExecutor = executor;

        @Override
        public void beforeJob(JobExecution jobExecution) {
        }

        @Override
        public void afterJob(JobExecution jobExecution) {
            taskExecutor.shutdown();
        }
    };
}
and added it to my Job definition like this:
@Bean
public Job partitionedJob() {
    return jobBuilders.get("partitionedJob")
            .listener(jobExecutionListener(taskExecutor()))
            .start(partitionedStep())
            .build();
}
All of the above answers are hacks/workarounds.
The root cause of the issue posted in the question is that the ThreadPoolTaskExecutor doesn't share the lifecycle of the step. Hence, when the step/job context is destroyed, the thread pool is not destroyed automatically and keeps running forever.
Bringing the ThreadPoolTaskExecutor within the step context with @StepScope should do the trick; Spring then takes care of destroying it:
@Bean
@StepScope
public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
    // step-scoped, so Spring destroys (and shuts down) the pool when the step ends
    ThreadPoolTaskExecutor e = new ThreadPoolTaskExecutor();
    e.setCorePoolSize(5);
    e.setMaxPoolSize(5);
    return e;
}
There are two solutions to your problem, although I don't know the cause.
First, you can use a CommandLineJobRunner to launch the Job (see the documentation here). This class automatically exits the program at the end of the job and converts the ExitStatus to a return code (COMPLETED = 0, FAILED = 1, ...). The default return codes are provided by a SimpleJvmExitCodeMapper.
The second solution is to call System.exit() manually after your JobLauncher.run(). You can also convert the ExitStatus of the Job manually and use it in your manual exit:
// Create Job
JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
Job job = (Job) context.getBean(jobName);
// Create return codes mapper
SimpleJvmExitCodeMapper mapper = new SimpleJvmExitCodeMapper();
// Start Job
JobExecution execution = jobLauncher.run(job, new JobParameters());
// Close context
context.close();
// Map codes and exit
String status = execution.getExitStatus().getExitCode();
Integer returnCode = mapper.intValue(status);
System.exit(returnCode);