Want to persist entries in MapState in Flink - Java

I am using MapState in a CoProcessFunction in Flink.
Goal:
In the CoProcessFunction, I want to create a MapState such that any entry I insert into it persists.
Current Situation:
There is a stream of DTOs coming from a Kafka stream.
A DTO arrives at time x and a key "abc" is stored in the MapState.
Then at any time > x, another DTO arrives with the same key "abc", and I want to check whether this key has already been stored in the MapState or not.
Problem:
The problem is that on every element of the stream a new process function seems to be triggered and a new map generated, so I can't access the previous elements.
Possible but not applicable solution:
I could use a WindowProcessFunction, but it doesn't have the concept of timers. Since I also need to set a timer for every element, this method is not applicable.
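For reference, here is a minimal sketch, assuming a stream keyed by a String and reusing the DTORetryMetadata type from the code below, of how keyed MapState is normally scoped: the state handle is registered once in open(), and Flink partitions its entries by the current key, so a value stored for key "abc" is visible again when a later element with the same key arrives.
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class DedupSketch extends KeyedProcessFunction<String, DTORetryMetadata, DTORetryMetadata> {
    private transient MapState<String, DTORetryMetadata> seen;

    @Override
    public void open(Configuration parameters) {
        // Registered once per operator instance; the contents are scoped per key.
        seen = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("seen", String.class, DTORetryMetadata.class));
    }

    @Override
    public void processElement(DTORetryMetadata dto, Context ctx, Collector<DTORetryMetadata> out) throws Exception {
        // For the same key, entries written here survive across invocations,
        // including for elements arriving at different times.
        if (!seen.contains(ctx.getCurrentKey())) {
            seen.put(ctx.getCurrentKey(), dto);
            out.collect(dto);
        }
    }
}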
Main.java
dataStream1.connect(dataStream2)
.keyBy((KeySelector<DTORetryMetadata, Object>) PrimaryKey::getPrimaryKey,
(KeySelector<DTORetryMetadata, Object>) PrimaryKey::getPrimaryKey)
.process(new FlinkStateStorageCoProcess());
FlinkStateStorageCoProcess.java
public class FlinkStateStorageCoProcess extends KeyedCoProcessFunction<Object, DTORetryMetadata, DTORetryMetadata, DTORetryMetadata> {
private static final long serialVersionUID = 1L;
private transient MapState<String, DTORetryMetadata> updatedMap;
private transient MapState<String, DTORetryMetadata> retriesMaxedOutMap;
@Override
public void processElement1(DTORetryMetadata dtoRetryMetadata, KeyedCoProcessFunction<Object, DTORetryMetadata, DTORetryMetadata, DTORetryMetadata>.Context context, Collector<DTORetryMetadata> collector) throws Exception {
if(Objects.nonNull(dtoRetryMetadata)) {
collector.collect(dtoRetryMetadata);
}
else {
throw new Exception("DtoRetryMetaData is null");
}
String dtoRetryMetaDataId = (String) context.getCurrentKey();
if(updatedMap.contains(dtoRetryMetaDataId)) {
updatedMap.remove(dtoRetryMetaDataId);
} else {
updatedMap.put(dtoRetryMetaDataId, dtoRetryMetadata);
log.info("{} added to Flink Memory", dtoRetryMetaDataId);
try {
TriggerTimeHelper triggerTimeHelper = new TriggerTimeHelper();
Long triggerTime = triggerTimeHelper.getTriggerTime(dtoRetryMetadata.getRetryNo());
log.info("Trigger timer set at {} from {}", triggerTime, context.timestamp());
context.timerService().registerProcessingTimeTimer(
context.timestamp() + triggerTime);
} catch (RuntimeException e) {
e.printStackTrace();
throw new RuntimeException("Failed to add timer to DTO,Retry Mechanism Failure. ");
}
}
}
@Override
public void processElement2(DTORetryMetadata dtoRetryMetadata, KeyedCoProcessFunction<Object, DTORetryMetadata, DTORetryMetadata, DTORetryMetadata>.Context context, Collector<DTORetryMetadata> collector) throws Exception {
if(Objects.nonNull(dtoRetryMetadata)) {
collector.collect(dtoRetryMetadata);
}
else {
throw new Exception("DtoRetryMetaData is null");
}
String dtoRetryMetaDataId = (String) context.getCurrentKey();
System.out.println("BEFORE INSERT IN KAFKA");
for(DTORetryMetadata dtoRetryMetadata1: updatedMap.values()) {
System.out.println("ENTRY : " + dtoRetryMetadata1);
}
if(updatedMap.contains(dtoRetryMetaDataId)) {
updatedMap.remove(dtoRetryMetaDataId);
} else {
updatedMap.put(dtoRetryMetaDataId, dtoRetryMetadata);
try {
TriggerTimeHelper triggerTimeHelper = new TriggerTimeHelper();
Long triggerTime = triggerTimeHelper.getTriggerTime(dtoRetryMetadata.getRetryNo());
context.timerService().registerProcessingTimeTimer(
context.timestamp() + triggerTime);
} catch (RuntimeException e) {
e.printStackTrace();
throw new RuntimeException("Failed to add timer to DTO,Retry Mechanism Failure. ");
}
}
}
@Override
public void onTimer(long timestamp, KeyedCoProcessFunction<Object, DTORetryMetadata, DTORetryMetadata, DTORetryMetadata>.OnTimerContext context, Collector<DTORetryMetadata> out) throws Exception {
String key = (String) context.getCurrentKey();
if (updatedMap.contains(key)) {
DTORetryMetadata dtoRetryMetadata = updatedMap.get(key);
if (dtoRetryMetadata.getRetryNo() >= dtoRetryMetadata.getMaxRetryNo()) {
updatedMap.remove(key);
} else {
RetryKafkaProducer retryKafkaProducer = new RetryKafkaProducer();
if (retryKafkaProducer.sendMessageWithHeader("", dtoRetryMetadata)) {
updatedMap.remove(key);
} else {
updatedMap.put(key, dtoRetryMetadata);
try {
TriggerTimeHelper triggerTimeHelper = new TriggerTimeHelper();
Long triggerTime = triggerTimeHelper.getTriggerTime(dtoRetryMetadata.getRetryNo());
context.timerService().registerProcessingTimeTimer(
context.timestamp() + triggerTime);
} catch (RuntimeException e) {
e.printStackTrace();
throw new RuntimeException("Failed to add timer to DTO, Retry Mechanism Failure. ");
}
log.info("ACK Not Received, Updating state {}:{}", key, updatedMap.get(key));
}
}
}
}
@Override
public void open(Configuration parameters) throws Exception {
try {
MapStateDescriptor<String, DTORetryMetadata> updatedMapDescriptor = new MapStateDescriptor<>(
"updatedMapState", String.class, DTORetryMetadata.class);
updatedMap = getRuntimeContext().getMapState(updatedMapDescriptor);
// retriesMaxedOutMap is declared above but was never initialized here;
// registering its descriptor as well avoids a NullPointerException on first use.
MapStateDescriptor<String, DTORetryMetadata> retriesMaxedOutDescriptor = new MapStateDescriptor<>(
"retriesMaxedOutMapState", String.class, DTORetryMetadata.class);
retriesMaxedOutMap = getRuntimeContext().getMapState(retriesMaxedOutDescriptor);
} catch (RuntimeException e) {
e.printStackTrace();
throw new RuntimeException("Flink state initialisation failed. " + e.getMessage());
}
}
}

Related

CompletableFuture.supplyAsync() without Lambda

I'm struggling with the functional style of Supplier<U> etc. and with writing testable code.
So I have an InputStream that is split into chunks which are processed asynchronously, and I want to know when they are all done. To write testable code, I outsource the processing logic to its own Runnable:
public class StreamProcessor {
public CompletableFuture<Void> process(InputStream in) {
List<CompletableFuture<Void>> futures = new ArrayList<>();
while (true) {
try (SizeLimitInputStream chunkStream = new SizeLimitInputStream(in, 100)) {
byte[] data = IOUtils.toByteArray(chunkStream);
CompletableFuture<Void> f = CompletableFuture.runAsync(createTask(data));
futures.add(f);
} catch (EOFException ex) {
// end of stream reached
break;
} catch (IOException ex) {
return CompletableFuture.failedFuture(ex);
}
}
return CompletableFuture.allOf(futures.toArray(CompletableFuture<?>[]::new));
}
ChunkTask createTask(byte[] data) {
return new ChunkTask(data);
}
public class ChunkTask implements Runnable {
final byte[] data;
ChunkTask(byte[] data) {
this.data = data;
}
@Override
public void run() {
try {
// do something
} catch (Exception ex) {
// checked exceptions must be wrapped
throw new RuntimeException(ex);
}
}
}
}
This works well, but poses two problems:
The processing code cannot return anything; it's a Runnable after all.
Any checked exceptions caught inside ChunkTask.run() must be wrapped into a RuntimeException. Unwrapping the failed combined CompletableFuture returns the RuntimeException which needs to be unwrapped again to reach the original cause - in contrast to the IOException.
So I'm looking for a way to do this with CompletableFuture.supplyAsync(), but I can't figure out how to do it without lambdas (hard to test) or how to return a CompletableFuture.failedFuture() from the processing logic.
I can think of two approaches:
1. With supplyAsync:
When using CompletableFuture.supplyAsync, you need a Supplier instead of a Runnable:
public static class ChunkTask implements Supplier<Object> {
final byte[] data;
ChunkTask(byte[] data) {
this.data = data;
}
@Override
public Object get() {
Object result = ...;
// Do something or throw an exception
return result;
}
}
and then:
CompletableFuture
.supplyAsync(new ChunkTask(data))
.whenComplete((result, throwable) -> ...);
If an exception happens in Supplier.get(), it will be propagated, and you can see it in CompletableFuture.whenComplete, CompletableFuture.handle or CompletableFuture.exceptionally.
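To make that concrete, here is a hedged sketch (this is standard CompletableFuture behavior, not code from the question): when the Supplier throws, the callback receives the exception wrapped in a java.util.concurrent.CompletionException, so one unwrap is needed to reach the original cause:
CompletableFuture
    .supplyAsync(new ChunkTask(data))
    .whenComplete((result, throwable) -> {
        if (throwable != null) {
            // supplyAsync wraps an exception thrown inside get()
            // in a CompletionException
            Throwable cause = throwable instanceof CompletionException
                    ? throwable.getCause()
                    : throwable;
            System.err.println("Chunk failed: " + cause);
        }
    });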
2. Passing a CompletableFuture to the thread
You can pass a CompletableFuture to ChunkTask:
public class ChunkTask implements Runnable {
final byte[] data;
private final CompletableFuture<Object> future;
ChunkTask(byte[] data, CompletableFuture<Object> future) {
this.data = data;
this.future = future;
}
@Override
public void run() {
try {
Object result = null;
// do something
future.complete(result);
} catch (Throwable ex) {
future.completeExceptionally(ex);
}
}
}
Then the logic becomes:
while (true) {
CompletableFuture<Object> f = new CompletableFuture<>();
try (SizeLimitInputStream chunkStream = new SizeLimitInputStream(in, 100)) {
byte[] data = IOUtils.toByteArray(chunkStream);
startThread(new ChunkTask(data, f));
futures.add(f);
} catch (EOFException ex) {
// end of stream reached
break;
} catch (IOException ex) {
f.completeExceptionally(ex);
return f;
}
}
Number 2 is probably the one that gives you more flexibility in how you manage the exception.
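One practical difference worth noting (a sketch reusing the futures list from the question): because the task completes the future with the original exception itself, joining the combined future exposes that exception as the direct cause, with no extra RuntimeException layer to peel off:
try {
    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
} catch (CompletionException ex) {
    // With completeExceptionally(ex), the original (possibly checked)
    // exception is the direct cause, e.g. an IOException thrown inside
    // ChunkTask.run() -- no second unwrapping needed.
    Throwable original = ex.getCause();
}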

Preventing threads from duplicate processing in Java

Problem statement
I have a JMS listener running as a thread listening to a topic. As soon as a message comes in, I spawn a new thread to process the inbound message, so for each incoming message I spawn a new thread.
I have a scenario where a duplicate message is also processed when it is injected immediately afterwards in sequential order. I need to prevent it from being processed twice. I tried using a ConcurrentHashMap to hold the in-process messages, adding an entry as soon as a thread is spawned and removing it as soon as the thread completes its execution. But it did not help when I tested the scenario of passing the same message one after the other in a concurrent fashion.
General Outline of my issue before you plunge into the actual code base
onMessage() {
processIncomingMessage() {
ExecutorService executorService = Executors.newFixedThreadPool(1000);
// The map is used to make an entry before I spawn a new thread to process the incoming message.
// The map contains the incoming message as the key and a boolean as the value.
// Check the map for duplicates.
// The check below is failing and allowing duplicate messages to be processed in parallel.
if (entryIsPresentInMap) {
// return, doing nothing
} else {
// spawn a new thread for each incoming message;
// also ensure a duplicate message is not processed while an active thread is working on it
executorService.execute(new Runnable() {
@Override
public void run() {
try {
// actual business logic
} finally {
// remove the entry from the map once the message has been processed
}
}
});
}
}
}
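For context on why this outline fails: the map lookup and the subsequent insert are two separate operations. Two threads handling the same message can both perform the lookup before either has inserted the entry, so both see the key as absent and both proceed. A ConcurrentHashMap makes each individual call thread-safe, but not the check-then-act combination; see the atomic alternative shown after the accepted approach below.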
Standalone example to mimic the scenario
public class DuplicateCheck {
private static Map<String,Boolean> duplicateCheckMap =
new ConcurrentHashMap<String,Boolean>(1000);
private static String[] nameArray = {"Peter", "Peter", "Adam"};
public static void processMessage(String message){
System.out.println("Processed message =" +message);
}
public static void main(String[] args) {
for (int i = 0; i < nameArray.length; i++) {
// capture the current name in a final local variable so each thread sees its own value
final String name = nameArray[i];
if (duplicateCheckMap.get(name) != null && duplicateCheckMap.get(name)) {
System.out.println("Thread detected for processing your name = " + name);
continue; // skip this duplicate instead of aborting the whole loop
}
addNameIntoMap(name);
new Thread(new Runnable() {
@Override
public void run() {
try {
processMessage(name);
} catch (Exception e) {
System.out.println(e.getMessage());
} finally {
freeNameFromMap(name);
}
}
}).start();
}
}
private static synchronized void addNameIntoMap(String name) {
if (name != null) {
duplicateCheckMap.put(name, true);
System.out.println("Thread processing the "+name+" is added to the status map");
}
}
private static synchronized void freeNameFromMap(String name) {
if (name != null) {
duplicateCheckMap.remove(name);
System.out.println("Thread processing the " + name + " is released from the status map");
}
}
}
A snippet of the actual code is below:
public void processControlMessage(final Message message) {
RDPWorkflowControlMessage rdpWorkflowControlMessage= unmarshallControlMessage(message);
final String workflowName = rdpWorkflowControlMessage.getWorkflowName();
final String controlMessageEvent=rdpWorkflowControlMessage.getControlMessage().value();
if(controlMessageStateMap.get(workflowName)!=null && controlMessageStateMap.get(workflowName)){
log.info("Cache cleanup for the workflow :"+workflowName+" is already in progress");
return;
}else {
log.info("doing nothing");
}
Semaphore controlMessageLock = new Semaphore(1);
try{
controlMessageLock.acquire();
synchronized(this){
new Thread(new Runnable(){
@Override
public void run() {
try {
lock.lock();
log.info("Processing Workflow Control Message for the workflow :"+workflowName);
if (message instanceof TextMessage) {
if ("REFRESH".equalsIgnoreCase(controlMessageEvent)) {
clearControlMessageBuffer();
enableControlMessageStatus(workflowName);
List<String> matchingValues=new ArrayList<String>();
matchingValues.add(workflowName);
ConcreteSetDAO tasksSetDAO=taskEventListener.getConcreteSetDAO();
ConcreteSetDAO workflowSetDAO=workflowEventListener.getConcreteSetDAO();
tasksSetDAO.deleteMatchingRecords(matchingValues);
workflowSetDAO.deleteMatchingRecords(matchingValues);
fetchNewWorkflowItems();
addShutdownHook(workflowName);
}
}
} catch (Exception e) {
log.error("Error extracting item of type RDPWorkflowControlMessage from message "
+ message);
} finally {
disableControlMessageStatus(workflowName);
lock.unlock();
}
}
}).start();
}
} catch (InterruptedException ie) {
log.info("Interrupted Exception during control message lock acquisition"+ie);
}finally{
controlMessageLock.release();
}
}
private void addShutdownHook(final String workflowName) {
Runtime.getRuntime().addShutdownHook(new Thread() {
public void run() {
disableControlMessageStatus(workflowName);
}
});
log.info("Shut Down Hook Attached for the thread processing the workflow :"+workflowName);
}
private RDPWorkflowControlMessage unmarshallControlMessage(Message message) {
RDPWorkflowControlMessage rdpWorkflowControlMessage = null;
try {
TextMessage textMessage = (TextMessage) message;
rdpWorkflowControlMessage = marshaller.unmarshalItem(textMessage.getText(), RDPWorkflowControlMessage.class);
} catch (Exception e) {
log.error("Error extracting item of type RDPWorkflowTask from message "
+ message);
}
return rdpWorkflowControlMessage;
}
private void fetchNewWorkflowItems() {
initSSL();
List<RDPWorkflowTask> allTasks=initAllTasks();
taskEventListener.addRDPWorkflowTasks(allTasks);
workflowEventListener.updateWorkflowStatus(allTasks);
}
private void clearControlMessageBuffer() {
taskEventListener.getRecordsForUpdate().clear();
workflowEventListener.getRecordsForUpdate().clear();
}
private synchronized void enableControlMessageStatus(String workflowName) {
if (workflowName != null) {
controlMessageStateMap.put(workflowName, true);
log.info("Thread processing the "+workflowName+" is added to the status map");
}
}
private synchronized void disableControlMessageStatus(String workflowName) {
if (workflowName != null) {
controlMessageStateMap.remove(workflowName);
log.info("Thread processing the "+workflowName+" is released from the status map");
}
}
I have modified my code to incorporate the suggestions provided below, but it is still not working:
public void processControlMessage(final Message message) {
ExecutorService executorService = Executors.newFixedThreadPool(1000);
try{
lock.lock();
RDPWorkflowControlMessage rdpWorkflowControlMessage= unmarshallControlMessage(message);
final String workflowName = rdpWorkflowControlMessage.getWorkflowName();
final String controlMessageEvent=rdpWorkflowControlMessage.getControlMessage().value();
if(controlMessageStateMap.get(workflowName)!=null && controlMessageStateMap.get(workflowName)){
log.info("Cache cleanup for the workflow :"+workflowName+" is already in progress");
return;
}else {
log.info("doing nothing");
}
enableControlMessageStatus(workflowName);
executorService.execute(new Runnable() {
@Override
public void run() {
try {
// actual code
fetchNewWorkflowItems();
addShutdownHook(workflowName);
} catch (Exception e) {
log.error("Error extracting item of type RDPWorkflowControlMessage from message "
+ message);
} finally {
disableControlMessageStatus(workflowName);
}
}
});
} finally {
executorService.shutdown();
lock.unlock();
}
}
private void addShutdownHook(final String workflowName) {
Runtime.getRuntime().addShutdownHook(new Thread() {
public void run() {
disableControlMessageStatus(workflowName);
}
});
log.info("Shut Down Hook Attached for the thread processing the workflow :"+workflowName);
}
private synchronized void enableControlMessageStatus(String workflowName) {
if (workflowName != null) {
controlMessageStateMap.put(workflowName, true);
log.info("Thread processing the "+workflowName+" is added to the status map");
}
}
private synchronized void disableControlMessageStatus(String workflowName) {
if (workflowName != null) {
controlMessageStateMap.remove(workflowName);
log.info("Thread processing the "+workflowName+" is released from the status map");
}
}
This is how you should add a value to the map. The double-checked locking below makes sure that only one thread adds a value to the map at any particular moment, and you can control access afterwards. Remove all the other locking logic; it is as simple as that:
public void processControlMessage(final String workflowName) throws InterruptedException {
if (!tryAddingMessageInProcessingMap(workflowName)) {
Thread.sleep(1000); // sleep 1 sec and try again
processControlMessage(workflowName);
return;
}
System.out.println(workflowName);
try{
// your code goes here
} finally{
controlMessageStateMap.remove(workflowName);
}
}
private boolean tryAddingMessageInProcessingMap(final String workflowName) {
if (controlMessageStateMap.get(workflowName) == null) {
synchronized (this) {
if (controlMessageStateMap.get(workflowName) == null) {
controlMessageStateMap.put(workflowName, true);
return true;
}
}
}
return false;
}
Read more here: https://en.wikipedia.org/wiki/Double-checked_locking
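As a side note (an alternative sketch, not part of the answer above): ConcurrentHashMap already offers this check-and-insert as a single atomic operation, putIfAbsent, which makes the hand-rolled double-checked locking unnecessary:
private boolean tryAddingMessageInProcessingMap(final String workflowName) {
    // putIfAbsent is atomic: it returns null only for the single caller
    // that actually inserted the entry; every concurrent duplicate gets
    // the existing value back instead.
    return controlMessageStateMap.putIfAbsent(workflowName, Boolean.TRUE) == null;
}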
The issue is fixed now. Many thanks to @awsome for the approach. It avoids the duplicates when a thread is already processing an incoming duplicate message; if no thread is processing it, it gets picked up.
public void processControlMessage(final Message message) {
try {
lock.lock();
RDPWorkflowControlMessage rdpWorkflowControlMessage = unmarshallControlMessage(message);
final String workflowName = rdpWorkflowControlMessage.getWorkflowName();
final String controlMessageEvent = rdpWorkflowControlMessage.getControlMessage().value();
new Thread(new Runnable() {
@Override
public void run() {
try {
if (message instanceof TextMessage) {
if ("REFRESH".equalsIgnoreCase(controlMessageEvent)) {
if (tryAddingWorkflowNameInStatusMap(workflowName)) {
log.info("Processing Workflow Control Message for the workflow :"+ workflowName);
addShutdownHook(workflowName);
clearControlMessageBuffer();
List<String> matchingValues = new ArrayList<String>();
matchingValues.add(workflowName);
ConcreteSetDAO tasksSetDAO = taskEventListener.getConcreteSetDAO();
ConcreteSetDAO workflowSetDAO = workflowEventListener.getConcreteSetDAO();
tasksSetDAO.deleteMatchingRecords(matchingValues);
workflowSetDAO.deleteMatchingRecords(matchingValues);
List<RDPWorkflowTask> allTasks=fetchNewWorkflowItems(workflowName);
updateTasksAndWorkflowSet(allTasks);
removeWorkflowNameFromProcessingMap(workflowName);
} else {
log.info("Cache clean up is already in progress for the workflow ="+ workflowName);
return;
}
}
}
} catch (Exception e) {
log.error("Error extracting item of type RDPWorkflowControlMessage from message "
+ message);
}
}
}).start();
} finally {
lock.unlock();
}
}
private boolean tryAddingWorkflowNameInStatusMap(final String workflowName) {
if(controlMessageStateMap.get(workflowName)==null){
synchronized (this) {
if(controlMessageStateMap.get(workflowName)==null){
log.info("Adding an entry in to the map for the workflow ="+workflowName);
controlMessageStateMap.put(workflowName, true);
return true;
}
}
}
return false;
}
private synchronized void removeWorkflowNameFromProcessingMap(String workflowName) {
if (workflowName != null
&& (controlMessageStateMap.get(workflowName) != null && controlMessageStateMap
.get(workflowName))) {
controlMessageStateMap.remove(workflowName);
log.info("Thread processing the " + workflowName+ " is released from the status map");
}
}

MVEL executeExpression function cannot be concurrent

Run the main function in File2. The problem is that the threads get stuck at "rval = MVEL.executeExpression(compiledExpression, vars);": the 10 threads run in sequential order, not in parallel. I want to know why this happens.
PS: I'm using MVEL 2.2, the latest version.
File1: MVELHelper.java
public class MVELHelper {
private static ParserContext _ctx = new ParserContext(false);
//public static Object execute(String expression, Map<String, Object> vars, Databus databus) throws Exception {
public static Object execute(String expression, Map<String, Object> vars) throws Exception {
Object rval = null;
try {
if(vars == null) {
rval = MVEL.eval(expression, new HashMap<String,Object>());
}
else {
rval = MVEL.eval(expression, vars);
}
return rval;
}
catch(Exception e) {
throw new Exception("MVEL FAILED:"+expression,e);
}
}
public static Serializable compile(String text, ParserContext ctx)
throws Exception {
if(ctx == null) {
//ctx = _ctx;
ctx=new ParserContext(false);
}
Serializable exp = null;
try {
exp = MVEL.compileExpression(text, ctx);
//exp = MVEL.compileExpression(text);
}
catch (Exception e) {
throw new Exception("failed to compile expression.", e);
}
return exp;
}
public static Object compileAndExecute(String expression, Map<String, Object> vars) throws Exception {
Object rval = null;
try {
Serializable compiledExpression=compile(expression,null);
System.out.println("[COMPILE OVER, Thread Id="+Thread.currentThread().getId()+"] ");
if(vars == null) {
rval=MVEL.executeExpression(compiledExpression, new HashMap<String,Object>());
//rval = MVEL.eval(exp, new HashMap<String,Object>());
}
else {
//rval=MVEL.executeExpression(compiledExpression, vars,(VariableResolverFactory)null);
rval=MVEL.executeExpression(compiledExpression, vars);
//rval = MVEL.eval(expression, vars);
}
return rval;
}
catch(Exception e) {
throw new Exception("MVEL FAILED:"+expression,e);
}
}
}
File2: ExecThread3.java
public class ExecThread3 implements Runnable{
Map dataMap=null;
public Map getDataMap() {
return dataMap;
}
public void setDataMap(Map dataMap) {
this.dataMap = dataMap;
}
@Override
public void run() {
Map varsMap = new HashMap();
Map dataMap=new HashMap();
dataMap.put("count",100);
varsMap.put("dataMap", dataMap);
String expression="System.out.println(\"[BEFORE Thread Id=\"+Thread.currentThread().getId()+\"] \"+dataMap.get(\"count\"));"+
"Thread.sleep(3000);"+
"System.err.println(\"[AFTER Thread Id=\"+Thread.currentThread().getId()+\"] \"+dataMap.get(\"count\"));";
try {
//MVEL.compileExpression(expression);
MVELHelper.compileAndExecute(expression, varsMap);
}
catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
public static void main(String[] args) {
for(int k=0;k<10;k++){
ExecThread3 execThread=new ExecThread3();
new Thread(execThread).start();
}
}
}

Guice doesn't initialize property

I'm new to Guice.
I want to use Guice to initialize objects without writing new directly.
Here is my main():
public class VelocityParserTest {
public static void main(String[] args) throws IOException {
try {
PoenaRequestService poenaService = new PoenaRequestService();
System.out.println(poenaService.sendRequest("kbkCode"));
} catch (PoenaServiceException e) {
e.printStackTrace();
}
}
}
PoenaRequestService:
public class PoenaRequestService {
private static final String TEMPLATE_PATH = "resources/xml_messages/bp12/message01.xml";
public static final org.apache.log4j.Logger LOG = org.apache.log4j.Logger.getLogger(PoenaRequestService.class);
@Inject
@Named("poena_service")
private HttpService poenaService;
public String sendRequest(/*TaxPayer taxPayer,*/ String kbk) throws PoenaServiceException {
LOG.info(String.format("Generating poena message request for string: %s", kbk));
Map<String, String> replaceValues = new HashMap<>();
replaceValues.put("guid", "guid");
replaceValues.put("iinbin", "iinbin");
replaceValues.put("rnn", "rnn");
replaceValues.put("taxOrgCode", "taxOrgCode");
replaceValues.put("kbk", "kbk");
replaceValues.put("dateMessage", "dateMessage");
replaceValues.put("applyDate", "applyDate");
ServiceResponseMessage result;
try {
String template = IOUtils.readFileIntoString(TEMPLATE_PATH);
Document rq = XmlUtil.parseDocument(StringUtils.replaceValues(template, replaceValues));
result = poenaService.execute(HttpMethod.POST, null, rq);
} catch (IOException e) {
throw new PoenaServiceException("Unable to read template file: " + TEMPLATE_PATH, e);
} catch (SAXException e) {
throw new PoenaServiceException("Unable to parse result document, please check template file: " + TEMPLATE_PATH, e);
} catch (HttpServiceException e) {
throw new PoenaServiceException(e);
}
if (result.isSuccess()) {
return (String) result.getResult();
}
throw new PoenaServiceException("HTTP service error code '" + result.getStatusCode() + "', message: " + result.getStatusMessage());
}
}
When I tried to debug this, I saw that the injected field was still null (the original post included a debugger screenshot here).
As a result I got a NullPointerException.
I couldn't figure out this behavior. Why exactly does this happen?
Any suggestions?
It's not working because you're not actually using Guice: PoenaRequestService is created with new, so its fields are never injected. You need to create an injector and bind your dependencies to something, akin to this:
public class VelocityParserTest {
public static void main(String[] args) throws IOException {
Injector injector = Guice.createInjector(new AbstractModule() {
@Override
protected void configure() {
bind(PoenaRequestService.class).asEagerSingleton();
bind(HttpService.class)
.annotatedWith(Names.named("poena_service"))
.toInstance(...);
}
});
try {
PoenaRequestService poenaService = injector.getInstance(PoenaRequestService.class);
System.out.println(poenaService.sendRequest("kbkCode"));
} catch (PoenaServiceException e) {
e.printStackTrace();
}
}
}
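As a follow-up note (a sketch assuming the same HttpService binding as above): constructor injection makes the dependency explicit and prevents exactly this mistake, because a bare new PoenaRequestService() no longer compiles without supplying the dependency:
import com.google.inject.Inject;
import com.google.inject.name.Named;

public class PoenaRequestService {
    private final HttpService poenaService;

    @Inject
    public PoenaRequestService(@Named("poena_service") HttpService poenaService) {
        // Guice supplies this argument when the instance is created through
        // the injector; constructing the service manually forces the caller
        // to provide the dependency, so it can never silently stay null.
        this.poenaService = poenaService;
    }
}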

How to throw an exception from a Spring Batch processor's process() method back to the method that started the job?

I have a web-service method that starts a Spring Batch job. If any exception occurs during Spring Batch processing, control only comes back as far as the processor's process() method. But I need control to come back to the web-service method, where I have to catch the exception and write the code to email it.
Web-service method:
public void processInputFiles() throws ServiceFault {
String[] springConfig = { CONTEXT_FILE_NAME };
ApplicationContext context = new ClassPathXmlApplicationContext(springConfig);
try {
setClientInfo();
JobLauncher jobLauncher = (JobLauncher) context.getBean(JOB_LAUNCHER);
Job job = (Job) context.getBean(REMITTANCE_JOB);
jobLauncher.run(job, new JobParameters());
} catch (Exception e) {
String errorMessage = "LockboxService exception:: Could not process Remittance(CSV) files";
final Message message = MessageFactory.createErrorMessage(MyService.class, errorMessage, e);
ErrorSenderFactory.getInstance().send(message, new Instruction[] { Instruction.ERROR_EMAIL });
}
}
Processor process method:
@Override
public Transmission process(InputDetail remrow) throws ServiceException {
try {
// business logic here
} catch (Exception e) {
throw new ServiceException("Unable to process row having the int number:", e);
}
}
Here is startJob, which I use to start the job in the web application. Try to throw a specific exception:
public boolean startJob() throws MyException {
boolean result = false;
try {
final JobParameters jobParameters = new JobParametersBuilder()
.addLong("time", System.nanoTime())
.addString("file", jobInputFolder.getAbsolutePath())
.toJobParameters();
final JobExecution execution = jobLauncher.run(job, jobParameters);
final ExitStatus status = execution.getExitStatus();
if (ExitStatus.COMPLETED.getExitCode().equals(status.getExitCode())) {
result = true;
} else {
final List<Throwable> exceptions = execution.getAllFailureExceptions();
for (final Throwable throwable : exceptions) {
if (throwable instanceof MyException) {
throw (MyException) throwable;
}
if (throwable instanceof FlatFileParseException) {
Throwable rootException = throwable.getCause();
if (rootException instanceof IncorrectTokenCountException) {
throw new MyException(logMessage, errorCode);
}
if (rootException instanceof BindException) {
BindException bindException = (BindException) rootException;
final FieldError fieldError = bindException.getFieldError();
final String field = fieldError.getField();
throw new MyException(logMessage, errorCode);
}
}
}
}
} catch (JobExecutionAlreadyRunningException ex) {
} catch (JobRestartException ex) {
} catch (JobInstanceAlreadyCompleteException ex) {
} catch (JobParametersInvalidException ex) {
} catch (IOException ex) {
} finally {
}
return result;
}
If the item processor is as below:
@Override
public KPData process(InputDetail inputData) throws MyException {
try {
// business logic here
} catch (Exception e) {
throw new MyException("Some issue");
}
}
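To close the loop, a hedged sketch of the web-service side, reusing startJob and the error-email helpers from the question (MyService, MessageFactory and ErrorSenderFactory are as they appear there): the processor's MyException is re-thrown by startJob via getAllFailureExceptions, so it can be caught and emailed in one place:
public void processInputFiles() throws ServiceFault {
    try {
        setClientInfo();
        startJob(); // re-throws MyException when the processor failed
    } catch (MyException e) {
        // The processor's exception surfaced through
        // execution.getAllFailureExceptions(), so the email/error
        // handling can live here in the web-service layer.
        final Message message = MessageFactory.createErrorMessage(MyService.class, e.getMessage(), e);
        ErrorSenderFactory.getInstance().send(message, new Instruction[] { Instruction.ERROR_EMAIL });
    }
}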
