Async task in activiti not executed - java

I have a problem with a simple workflow implemented with the Activiti engine and Java: I'm not able to execute a task asynchronously. The workflow is very simple.
The upper part of the workflow starts a process, executes the "Start Service Task", emits a signal, then executes the "Loop Service Task", repeating ten times.
The bottom part is triggered by the signal emitted in the upper part and should execute asynchronously with respect to the loop, but in reality it blocks the "Loop Service Task".
I tried setting the async attribute on the bottom flow, but in that case the bottom part is not executed at all.
Here is the link to the GitHub project:
https://github.com/giane88/testActiviti
package configuration;
import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngineConfiguration;
import org.activiti.engine.RepositoryService;
import org.activiti.engine.impl.asyncexecutor.AsyncExecutor;
import org.activiti.engine.impl.asyncexecutor.ManagedAsyncJobExecutor;
import org.activiti.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration;
import org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration;
public class WorkflowConfiguration {
final ProcessEngine processEngine;
public WorkflowConfiguration(final String workFlowName) {
processEngine = setUpProcessEngine(workFlowName);
}
public ProcessEngine getProcessEngine() {
return processEngine;
}
private ProcessEngine setUpProcessEngine(String workFlowName) {
ProcessEngineConfiguration cfg = null;
cfg = new StandaloneProcessEngineConfiguration()
.setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
.setJdbcUsername("sa")
.setJdbcPassword("")
.setJdbcDriver("org.h2.Driver")
.setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE);
final ProcessEngine processEngine = cfg.buildProcessEngine();
RepositoryService repositoryService = processEngine.getRepositoryService();
repositoryService.createDeployment().addClasspathResource("activiti/" + workFlowName)
.deploy();
return processEngine;
}
}
package configuration;
import org.activiti.engine.ProcessEngine;
import org.activiti.engine.runtime.ProcessInstance;
import java.util.HashMap;
import java.util.Map;
public class WorkflowManipulator {
private final Map<String, Object> nextDelegateVariables;
private final String wfName;
private final ProcessEngine engine;
public WorkflowManipulator(String wfName, ProcessEngine engine) {
this.nextDelegateVariables = new HashMap<>();
this.wfName = wfName;
this.engine = engine;
}
public ProcessInstance startProcess() {
if (nextDelegateVariables.size() > 0) {
return engine.getRuntimeService().startProcessInstanceByKey(wfName, nextDelegateVariables);
} else {
return engine.getRuntimeService().startProcessInstanceByKey(wfName);
}
}
}
import configuration.WorkflowConfiguration;
import configuration.WorkflowManipulator;
import lombok.extern.log4j.Log4j2;
import org.activiti.engine.runtime.ProcessInstance;
import java.io.IOException;
@Log4j2
public class TestWorkFlowMain {
public static void main(String[] args) throws IOException {
WorkflowConfiguration workflowConfiguration = new WorkflowConfiguration("test.bpmn");
WorkflowManipulator workflowManipulator = new WorkflowManipulator("testProcess", workflowConfiguration.getProcessEngine());
ProcessInstance processInstance = workflowManipulator.startProcess();
}
}
package delegates;
import lombok.extern.log4j.Log4j2;
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
@Log4j2
public class AsyncServiceTask implements JavaDelegate {
@Override
public void execute(DelegateExecution execution) throws Exception {
log.info("Sleeping for 3 second");
Thread.sleep(3000);
log.warn("AsyncCompleted");
}
}

As usual, I found the solution a few minutes after posting the question.
The problem was in the process engine configuration: you need to set up an AsyncExecutor and both enable and activate it, as in the example below.
ProcessEngineConfiguration cfg;
AsyncExecutor asyncExecutor = new ManagedAsyncJobExecutor();
cfg = new StandaloneInMemProcessEngineConfiguration()
.setAsyncExecutor(asyncExecutor)
.setAsyncExecutorEnabled(true)
.setAsyncExecutorActivate(true);
final ProcessEngine processEngine = cfg.buildProcessEngine();
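For context, here is a sketch of how those settings could be merged into the setUpProcessEngine method from the question (illustrative only; the JDBC values are the ones from the original code, and the async setters are the ones from the fix above):

private ProcessEngine setUpProcessEngine(String workFlowName) {
    // Assumption: the async setters are chained first (as in the fix above),
    // then the JDBC setters exposed on ProcessEngineConfiguration.
    AsyncExecutor asyncExecutor = new ManagedAsyncJobExecutor();
    ProcessEngineConfiguration cfg = new StandaloneInMemProcessEngineConfiguration()
            .setAsyncExecutor(asyncExecutor)
            .setAsyncExecutorEnabled(true)
            .setAsyncExecutorActivate(true)
            .setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
            .setJdbcUsername("sa")
            .setJdbcPassword("")
            .setJdbcDriver("org.h2.Driver")
            .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE);
    final ProcessEngine processEngine = cfg.buildProcessEngine();
    processEngine.getRepositoryService()
            .createDeployment()
            .addClasspathResource("activiti/" + workFlowName)
            .deploy();
    return processEngine;
}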

Related

MongoDB Reactive Streams hangs when performing a query

I'm using the MongoDB Reactive Streams Java API, which I implemented following this example, but I'm encountering a serious problem: sometimes, when I try to query a collection, the await method doesn't work and it hangs until the timeout is reached.
The onSubscribe method gets called correctly, but then neither onNext, nor onError, nor onComplete gets called.
There doesn't seem to be a specific circumstance causing this issue.
This is my code
MongoDatabase database = MongoDBConnector.getClient().getDatabase("myDb");
MongoCollection<Document> collection = database.getCollection("myCollection");
FindPublisher<Document> finder = collection.find(Filters.exists("myField"));
SettingSubscriber tagSub = new SettingSubscriber(finder);
//SettingSubscriber is a subclass of ObservableSubscriber which calls publisher.subscribe(this)
tagSub.await(); //this is where it hangs
return tagSub.getWrappedData();
I wrote a simple implementation of what I assumed the SettingSubscriber looked like and tried to recreate the problem using a Groovy script. I couldn't: my code runs without hanging, prints each output record, and exits. Code for reference below:
@Grab(group = 'org.mongodb', module = 'mongodb-driver-reactivestreams', version = '4.3.3')
@Grab(group = 'org.slf4j', module = 'slf4j-api', version = '1.7.32')
@Grab(group = 'ch.qos.logback', module = 'logback-classic', version = '1.2.6')
import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.reactivestreams.client.MongoClients;
import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoDatabase;
import com.mongodb.reactivestreams.client.MongoCollection;
import com.mongodb.reactivestreams.client.FindPublisher;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.CountDownLatch;
MongoClientSettings.Builder clientSettingsBuilder = MongoClientSettings.builder()
.applyToClusterSettings { clusterSettingsBuilder ->
clusterSettingsBuilder.hosts( Arrays.asList(new ServerAddress("localhost", 27017)))
};
MongoClient mongoClient = MongoClients.create(clientSettingsBuilder.build());
MongoDatabase database = mongoClient.getDatabase("myDb");
MongoCollection<Document> collection = database.getCollection("myCollection");
FindPublisher<Document> finder = collection.find(Filters.exists("myField"));
SettingSubscriber tagSub = new SettingSubscriber(finder);
tagSub.await();
class SettingSubscriber implements Subscriber<Document> {
private final CountDownLatch latch = new CountDownLatch(1);
private Subscription subscription;
private List<Document> data = new ArrayList<>();
public SettingSubscriber(FindPublisher<Document> finder) {
finder.subscribe(this);
}
@Override
public void onSubscribe(final Subscription subscription) {
this.subscription = subscription;
subscription.request(1);
}
@Override
public void onNext(final Document document) {
System.out.println("Received: " + document);
data.add(document);
subscription.request(1);
}
@Override
public void onError(final Throwable throwable) {
throwable.printStackTrace();
latch.countDown();
}
@Override
public void onComplete() {
System.out.println("Completed");
latch.countDown();
}
public List<Document> getWrappedData() {
return data;
}
public void await() throws Throwable {
await(Long.MAX_VALUE, TimeUnit.MILLISECONDS);
}
public void await(final long timeout, final TimeUnit unit) throws Throwable {
if (!latch.await(timeout, unit)) {
System.out.println("Publish timed out");
}
}
}
Can you compare this implementation of the SettingSubscriber with yours to see if something is missing?
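As a general note for anyone comparing: a Reactive Streams publisher only emits items that have been requested, so a subscriber whose onSubscribe never calls subscription.request(...) will never see onNext or onComplete, and a latch-based await() will hang until the timeout. A minimal, hypothetical subscriber highlighting just that point:

import java.util.concurrent.CountDownLatch;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

class CountingSubscriber<T> implements Subscriber<T> {
    private final CountDownLatch latch = new CountDownLatch(1);
    private Subscription subscription;

    @Override
    public void onSubscribe(final Subscription s) {
        this.subscription = s;
        // Without this call the publisher emits nothing and await() hangs:
        s.request(Long.MAX_VALUE);
    }

    @Override
    public void onNext(final T item) { /* consume item */ }

    @Override
    public void onError(final Throwable t) { latch.countDown(); }

    @Override
    public void onComplete() { latch.countDown(); }

    public void await() throws InterruptedException { latch.await(); }
}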

EJB MDB listening through JCA adapter failing on starting

I am writing an MDB that will listen for messages on IBM MQ queues; in onMessage it calls an IlrSession to run JRules. The JCA adapter
and activation config are configured in the WAS console.
While starting, this MDB throws the following error. Is the static block the reason it is failing?
I am posting it here in case someone can review the code and provide some suggestions.
Here is the exception I get while starting the MDB.
An operation in the enterprise bean constructor failed. It is recommended that component initialization logic be placed in a PostConstruct method instead of the bean class no-arg constructor.; nested exception is:
java.lang.NullPointerException
at com.ibm.ws.ejbcontainer.runtime.SharedEJBRuntimeImpl.startBean(SharedEJBRuntimeImpl.java:620)
at com.ibm.ws.runtime.component.WASEJBRuntimeImpl.startBean(WASEJBRuntimeImpl.java:586)
at com.ibm.ws.ejbcontainer.runtime.AbstractEJBRuntime.fireMetaDataCreatedAndStartBean(AbstractEJBRuntime.java:1715)
at com.ibm.ws.ejbcontainer.runtime.AbstractEJBRuntime.startModule(AbstractEJBRuntime.java:667)
... 52 more
Caused by: java.lang.NullPointerException
Here is the MDB Code.
package com.abc.integration.ejb;
import ilog.rules.res.model.IlrPath;
import ilog.rules.res.session.IlrEJB3SessionFactory;
import ilog.rules.res.session.IlrSessionException;
import ilog.rules.res.session.IlrSessionRequest;
import ilog.rules.res.session.IlrSessionResponse;
import ilog.rules.res.session.IlrStatelessSession;
import java.net.URL;
import java.util.Date;
import java.util.List;
import java.util.Map;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.apache.log4j.Logger;
import org.apache.log4j.xml.DOMConfigurator;
/**
* Message-Driven Bean implementation class for: DecisionServiceMDB
*
*/
@MessageDriven(activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") })
public class DecisionServiceMDB implements MessageListener
{
/**
* default serial version id
*/
private static final long serialVersionUID = 2300836924029589692L;
private static final Logger responseTimeLogger = Logger.getLogger(PropertyManager.getResponseTimeLogger());
private static final Logger errorLogger = Logger.getLogger(PropertyManager.getErrorLogger());
private static final Logger ruleExceptionLogger = Logger.getLogger(PropertyManager.getRuleExceptionLogger());
private static final String RULEAPP_NAME = PropertyManager.getRuleAppName();
private static final String RULESET_NAME = PropertyManager.getRuleSetName();
private static InitialContext ic;
private static ConnectionFactory cf;
private static Destination destination;
private static String qcfLookup = PropertyManager.getQueueFactoryJndiName();
private static String qLookup = PropertyManager.getQueueDestinationJndiName();
private Connection c = null;
private Session s = null;
private MessageProducer mp = null;
private boolean isInitializedOkay = true;
private static IlrEJB3SessionFactory factory;
private static IlrStatelessSession ruleSession;
private static IlrPath path;
private IlrSessionRequest sessionRequest;
static {
URL url = Thread.currentThread().getContextClassLoader().getResource("log4j.xml");
DOMConfigurator.configure(url);
errorLogger.info("log4j xml initialized::::::::::::::::");
}
public DecisionServiceMDB() throws NamingException, JMSException
{
try
{
if (ic == null)
{
ic = new InitialContext();
}
if (cf == null)
{
cf = (ConnectionFactory) ic.lookup(qcfLookup);
}
if (destination == null)
{
destination = (Destination) ic.lookup(qLookup);
}
} catch (NamingException e)
{
isInitializedOkay = false;
errorLogger.error("FATAL:NamingException Occurred: " + e.getMessage());
errorLogger.error(e.getMessage(), e);
e.printStackTrace();
//throw e;
}
// 1. Get a POJO Session Factory
if (factory == null)
{
//factory = new IlrJ2SESessionFactory();
//to log rule execution start time by using bre logger
//transactionLogger.setRuleExecutionStartTime(new Date());
factory = new IlrEJB3SessionFactory();
// As the EJBS are embedded within the ear file, we need to prepend
// the ear file name to the JNDI.
factory.setStatelessLocalJndiName("ejblocal:ilog.rules.res.session.ejb3.IlrStatelessSessionLocal");
}
// 2. Create a stateless rule session using this factory
try
{
if (ruleSession == null)
{
ruleSession = factory.createStatelessSession();
}
} catch (Exception e)
{
e.printStackTrace();
return;
}
// 3. Create a session request to invoke the RES (defining the ruleset
// path and the input ruleset parameters)
if (path == null)
{
path = new IlrPath(RULEAPP_NAME, RULESET_NAME);
}
sessionRequest = factory.createRequest();
sessionRequest.setRulesetPath(path);
}
public void onMessage(Message receivedMsg)
{
// onMessage code goes here.
}
}
You are violating programming restrictions (see EJB spec 16.2.2):
An enterprise bean must not use read/write static fields. Using read-only static fields is allowed. Therefore, it is recommended that all static fields in the enterprise bean class be declared as final.
Remove your (non-final) static fields, your static initializer and the constructor. Initialisation is done within the PostConstruct lifecycle callback method.
Your EJB could look like this:
@MessageDriven(activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") })
public class DecisionServiceMDB implements MessageListener
{
private Logger responseTimeLogger;
private Logger errorLogger;
private Logger ruleExceptionLogger;
private String RULEAPP_NAME;
private String RULESET_NAME;
private InitialContext ic;
private ConnectionFactory cf;
private Destination destination;
private String qcfLookup;
private String qLookup;
private Connection c;
private Session s;
private MessageProducer mp;
private boolean isInitializedOkay = true;
private IlrEJB3SessionFactory factory;
private IlrStatelessSession ruleSession;
private IlrPath path;
private IlrSessionRequest sessionRequest;
@PostConstruct
public void init() {
responseTimeLogger = Logger.getLogger(PropertyManager.getResponseTimeLogger());
errorLogger = Logger.getLogger(PropertyManager.getErrorLogger());
ruleExceptionLogger = Logger.getLogger(PropertyManager.getRuleExceptionLogger());
RULEAPP_NAME = PropertyManager.getRuleAppName();
RULESET_NAME = PropertyManager.getRuleSetName();
qcfLookup = PropertyManager.getQueueFactoryJndiName();
qLookup = PropertyManager.getQueueDestinationJndiName();
try {
ic = new InitialContext();
cf = (ConnectionFactory) ic.lookup(qcfLookup);
destination = (Destination) ic.lookup(qLookup);
factory = new IlrEJB3SessionFactory();
factory.setStatelessLocalJndiName("ejblocal:ilog.rules.res.session.ejb3.IlrStatelessSessionLocal");
ruleSession = factory.createStatelessSession();
path = new IlrPath(RULEAPP_NAME, RULESET_NAME);
sessionRequest = factory.createRequest();
sessionRequest.setRulesetPath(path);
} catch (Exception e) {
// a lifecycle callback must not throw checked exceptions; record the failure instead
isInitializedOkay = false;
errorLogger.error(e.getMessage(), e);
}
}
public void onMessage(Message receivedMsg)
{
// onMessage code goes here.
}
}
I removed the static initializer. It looks like you were trying to configure logging there, which should not be done within your EJB. Refer to the documentation of your application server for how to do this properly.
This is just some example code based on the original code.
The implementation you provided looks like plain Java. Be aware of the requirements of an enterprise environment (e.g. the EJB lifecycle, the EJB provider's responsibilities, ...). Read the EJB spec and some Java EE tutorials before proceeding.
With this in mind, you should also take a look at your PropertyManager afterwards. It might not be implemented in a suitable manner.
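As a further illustration of letting the container do the wiring (a sketch only; the JNDI reference names "jms/decisionQCF" and "jms/decisionQueue" are hypothetical and would need to match your WAS configuration), the JMS objects can be injected instead of being looked up manually:

import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") })
public class DecisionServiceMDB implements MessageListener {

    // Container-injected JMS resources; the reference names are illustrative.
    @Resource(name = "jms/decisionQCF")
    private ConnectionFactory cf;

    @Resource(name = "jms/decisionQueue")
    private Destination destination;

    public void onMessage(Message receivedMsg) {
        // onMessage code goes here.
    }
}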

kafka streams exception Could not find a public no-argument constructor for org.apache.kafka.common.serialization.Serdes$WrapperSerde

I am getting the error stack trace below while working with Kafka Streams.
UPDATE: as per @matthias-j-sax, I have implemented my own Serdes with a default constructor for WrapperSerde, but I am still getting the following exceptions:
org.apache.kafka.streams.errors.StreamsException: stream-thread [streams-request-count-4c239508-6abe-4901-bd56-d53987494770-StreamThread-1] Failed to rebalance.
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests (StreamThread.java:836)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce (StreamThread.java:784)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop (StreamThread.java:750)
at org.apache.kafka.streams.processor.internals.StreamThread.run (StreamThread.java:720)
Caused by: org.apache.kafka.streams.errors.StreamsException: Failed to configure value serde class myapps.serializer.Serdes$WrapperSerde
at org.apache.kafka.streams.StreamsConfig.defaultValueSerde (StreamsConfig.java:972)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init> (AbstractProcessorContext.java:59)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init> (ProcessorContextImpl.java:42)
at org.apache.kafka.streams.processor.internals.StreamTask.<init> (StreamTask.java:136)
at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask (StreamThread.java:405)
at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask (StreamThread.java:369)
at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks (StreamThread.java:354)
at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks (TaskManager.java:148)
at org.apache.kafka.streams.processor.internals.TaskManager.createTasks (TaskManager.java:107)
at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned (StreamThread.java:260)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete (ConsumerCoordinator.java:259)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded (AbstractCoordinator.java:367)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup (AbstractCoordinator.java:316)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll (ConsumerCoordinator.java:290)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce (KafkaConsumer.java:1149)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll (KafkaConsumer.java:1115)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests (StreamThread.java:827)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce (StreamThread.java:784)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop (StreamThread.java:750)
at org.apache.kafka.streams.processor.internals.StreamThread.run (StreamThread.java:720)
Caused by: java.lang.NullPointerException
at myapps.serializer.Serdes$WrapperSerde.configure (Serdes.java:30)
at org.apache.kafka.streams.StreamsConfig.defaultValueSerde (StreamsConfig.java:968)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init> (AbstractProcessorContext.java:59)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init> (ProcessorContextImpl.java:42)
at org.apache.kafka.streams.processor.internals.StreamTask.<init> (StreamTask.java:136)
at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask (StreamThread.java:405)
at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask (StreamThread.java:369)
at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks (StreamThread.java:354)
at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks (TaskManager.java:148)
at org.apache.kafka.streams.processor.internals.TaskManager.createTasks (TaskManager.java:107)
at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned (StreamThread.java:260)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete (ConsumerCoordinator.java:259)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded (AbstractCoordinator.java:367)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup (AbstractCoordinator.java:316)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll (ConsumerCoordinator.java:290)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce (KafkaConsumer.java:1149)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll (KafkaConsumer.java:1115)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests (StreamThread.java:827)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce (StreamThread.java:784)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop (StreamThread.java:750)
at org.apache.kafka.streams.processor.internals.StreamThread.run (StreamThread.java:720)
Here's my use case:
I will be getting JSON responses as input to the stream, and I want to count the requests whose status codes are not 200. I first went through the Kafka Streams documentation (both the official docs and Confluent's) and implemented WordCountDemo, which works fine. Then I tried to write this code, but I get the exception above. I am very new to Kafka Streams; I went through the stack trace but couldn't understand the context, hence I came here for help.
Here's my code
LogCount.java
package myapps;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import myapps.serializer.JsonDeserializer;
import myapps.serializer.JsonSerializer;
import myapps.Request;
public class LogCount {
public static void main(String[] args) {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-request-count");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
JsonSerializer<Request> requestJsonSerializer = new JsonSerializer<>();
JsonDeserializer<Request> requestJsonDeserializer = new JsonDeserializer<>(Request.class);
Serde<Request> requestSerde = Serdes.serdeFrom(requestJsonSerializer, requestJsonDeserializer);
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, requestSerde.getClass().getName());
final StreamsBuilder builder = new StreamsBuilder();
KStream<String, Request> source = builder.stream("streams-requests-input");
source.filter((k, v) -> v.getHttpStatusCode() != 200)
.groupByKey()
.count()
.toStream()
.to("streams-requests-output", Produced.with(Serdes.String(), Serdes.Long()));
final Topology topology = builder.build();
final KafkaStreams streams = new KafkaStreams(topology, props);
final CountDownLatch latch = new CountDownLatch(1);
System.out.println(topology.describe());
// attach shutdown handler to catch control-c
Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
@Override
public void run() {
streams.close();
latch.countDown();
}
});
try {
streams.cleanUp();
streams.start();
latch.await();
} catch (Throwable e) {
System.exit(1);
}
System.exit(0);
}
}
JsonDeserializer.java
package myapps.serializer;
import com.google.gson.Gson;
import org.apache.kafka.common.serialization.Deserializer;
import java.util.Map;
public class JsonDeserializer<T> implements Deserializer<T> {
private Gson gson = new Gson();
private Class<T> deserializedClass;
public JsonDeserializer(Class<T> deserializedClass) {
this.deserializedClass = deserializedClass;
}
public JsonDeserializer() {
}
@Override
@SuppressWarnings("unchecked")
public void configure(Map<String, ?> map, boolean b) {
if(deserializedClass == null) {
deserializedClass = (Class<T>) map.get("serializedClass");
}
}
@Override
public T deserialize(String s, byte[] bytes) {
if(bytes == null){
return null;
}
return gson.fromJson(new String(bytes),deserializedClass);
}
@Override
public void close() {
}
}
JsonSerializer.java
package myapps.serializer;
import com.google.gson.Gson;
import org.apache.kafka.common.serialization.Serializer;
import java.nio.charset.Charset;
import java.util.Map;
public class JsonSerializer<T> implements Serializer<T> {
private Gson gson = new Gson();
@Override
public void configure(Map<String, ?> map, boolean b) {
}
@Override
public byte[] serialize(String topic, T t) {
return gson.toJson(t).getBytes(Charset.forName("UTF-8"));
}
@Override
public void close() {
}
}
As I mentioned, I will be getting JSON as input; the structure is like this:
{
"RequestID":"1f6b2409",
"Protocol":"http",
"Host":"abc.com",
"Method":"GET",
"HTTPStatusCode":"200",
"User-Agent":"curl%2f7.54.0",
}
The corresponding Request.java file looks like this
package myapps;
public final class Request {
private String requestID;
private String protocol;
private String host;
private String method;
private int httpStatusCode;
private String userAgent;
public String getRequestID() {
return requestID;
}
public void setRequestID(String requestID) {
this.requestID = requestID;
}
public String getProtocol() {
return protocol;
}
public void setProtocol(String protocol) {
this.protocol = protocol;
}
public String getHost() {
return host;
}
public void setHost(String host) {
this.host = host;
}
public String getMethod() {
return method;
}
public void setMethod(String method) {
this.method = method;
}
public int getHttpStatusCode() {
return httpStatusCode;
}
public void setHttpStatusCode(int httpStatusCode) {
this.httpStatusCode = httpStatusCode;
}
public String getUserAgent() {
return userAgent;
}
public void setUserAgent(String userAgent) {
this.userAgent = userAgent;
}
}
EDIT: when I exit from kafka-console-consumer.sh, it's saying Processed a total of 0 messages.
As the error indicates, Serdes$WrapperSerde is missing a public no-argument constructor:
Could not find a public no-argument constructor
The issue is this construct:
Serde<Request> requestSerde = Serdes.serdeFrom(requestJsonSerializer, requestJsonDeserializer);
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, requestSerde.getClass().getName());
Serdes.serdeFrom returns a WrapperSerde that does not have an empty default constructor. Thus, you cannot pass it into the StreamsConfig. You can use a Serde generated like this only if you pass the object into the corresponding API calls (i.e., to overwrite the default Serde for certain operators).
To make it work (i.e., to be able to set the Serde in the config), you would need to implement a proper class that implements the Serde interface.
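For illustration, such a class could look like the following sketch (the class name RequestSerde is hypothetical; it reuses the JsonSerializer/JsonDeserializer classes from the question and has the public no-argument constructor Kafka needs when instantiating the Serde from the config):

package myapps.serializer;

import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import myapps.Request;

public class RequestSerde implements Serde<Request> {
    private final JsonSerializer<Request> serializer = new JsonSerializer<>();
    private final JsonDeserializer<Request> deserializer = new JsonDeserializer<>(Request.class);

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        serializer.configure(configs, isKey);
        deserializer.configure(configs, isKey);
    }

    @Override
    public void close() {
        serializer.close();
        deserializer.close();
    }

    @Override
    public Serializer<Request> serializer() {
        return serializer;
    }

    @Override
    public Deserializer<Request> deserializer() {
        return deserializer;
    }
}

It would then be registered with props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, RequestSerde.class.getName());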
The requestSerde.getClass().getName() did not work for me. I needed to provide my own WrapperSerde implementation in an inner class. You probably need to do the same, with something like:
public class MySerde extends WrapperSerde<Request> {
public MySerde() {
super(new JsonSerializer<>(), new JsonDeserializer<>(Request.class));
}
}
Instead of specifying it in the properties, add the custom serde when creating the stream:
KStream<String, Request> source = builder.stream("streams-requests-input",Consumed.with(Serdes.String(), requestSerde));

Spring Integration File reading

I am a newbie to Spring Integration. I am working on a solution, but I am stuck on a specific issue while using the inbound file adapter (FileReadingMessageSource).
I have to read files from different directories, process them, and save the files in different directories. As I understand it, the directory name is fixed at the start of the flow.
Can someone help me with changing the directory name for different requests?
I attempted the following. First of all, I am not sure whether this is the correct way to go about it, and it worked for only one directory. I think the Poller was waiting for more files and never came back to read another directory.
@SpringBootApplication
@EnableIntegration
@IntegrationComponentScan
public class SiSampleFileProcessor {
@Autowired
MyFileProcessor myFileProcessor;
@Value("${si.outdir}")
String outDir;
@Autowired
Environment env;
public static void main(String[] args) throws IOException {
ConfigurableApplicationContext ctx = new SpringApplication(SiSampleFileProcessor.class).run(args);
FileProcessingService gateway = ctx.getBean(FileProcessingService.class);
boolean process = true;
while (process) {
System.out.println("Please enter the input Directory: ");
String inDir = new Scanner(System.in).nextLine();
if ( inDir.isEmpty() || inDir.equals("exit") ) {
process=false;
} else {
System.out.println("Processing... " + inDir);
gateway.processFilesin(inDir);
}
}
ctx.close();
}
@MessagingGateway(defaultRequestChannel="requestChannel")
public interface FileProcessingService {
String processFilesin( String inputDir );
}
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() {
return Pollers.fixedDelay(1000).get();
}
@Bean
public MessageChannel requestChannel() {
return new DirectChannel();
}
@ServiceActivator(inputChannel = "requestChannel")
@Bean
GenericHandler<String> fileReader() {
return new GenericHandler<String>() {
@Override
public Object handle(String p, Map<String, Object> map) {
FileReadingMessageSource fileSource = new FileReadingMessageSource();
fileSource.setDirectory(new File(p));
Message<File> msg;
while( (msg = fileSource.receive()) != null ) {
fileInChannel().send(msg);
}
return null; // Not sure what to return!
}
};
}
@Bean
public MessageChannel fileInChannel() {
return MessageChannels.queue("fileIn").get();
}
@Bean
public IntegrationFlow fileProcessingFlow() {
return IntegrationFlows.from(fileInChannel())
.handle(myFileProcessor)
.handle(Files.outboundAdapter(new File(outDir)).autoCreateDirectory(true).get())
.get();
}
}
EDIT: Based on Gary's response, I replaced some methods as follows:
@MessagingGateway(defaultRequestChannel="requestChannel")
public interface FileProcessingService {
boolean processFilesin( String inputDir );
}
@ServiceActivator(inputChannel = "requestChannel")
public boolean fileReader(String inDir) {
FileReadingMessageSource fileSource = new FileReadingMessageSource();
fileSource.setDirectory(new File(inDir));
fileSource.afterPropertiesSet();
fileSource.start();
Message<File> msg;
while ((msg = fileSource.receive()) != null) {
fileInChannel().send(msg);
}
fileSource.stop();
System.out.println("Sent all files in directory: " + inDir);
return true;
}
Now it is working as expected.
You can use this code
FileProcessor.java
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;
@Component
public class FileProcessor {
private static final String HEADER_FILE_NAME = "file_name";
private static final String MSG = "%s received. Content: %s";
public void process(Message<String> msg) {
String fileName = (String) msg.getHeaders().get(HEADER_FILE_NAME);
String content = msg.getPayload();
//System.out.println(String.format(MSG, fileName, content));
System.out.println(content);
}
}
LastModifiedFileFilter.java
package com.example.demo;
import org.springframework.integration.file.filters.AbstractFileListFilter;
import java.io.File;
import java.util.HashMap;
import java.util.Map;
public class LastModifiedFileFilter extends AbstractFileListFilter<File> {
private final Map<String, Long> files = new HashMap<>();
private final Object monitor = new Object();
@Override
protected boolean accept(File file) {
synchronized (this.monitor) {
Long previousModifiedTime = files.put(file.getName(), file.lastModified());
return previousModifiedTime == null || previousModifiedTime != file.lastModified();
}
}
}
Main Class= DemoApplication.java
package com.example.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration;
import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;
import org.apache.commons.io.FileUtils;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.Aggregator;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.channel.MessageChannels;
import org.springframework.integration.dsl.core.Pollers;
import org.springframework.integration.file.FileReadingMessageSource;
import org.springframework.integration.file.filters.CompositeFileListFilter;
import org.springframework.integration.file.filters.SimplePatternFileListFilter;
import org.springframework.integration.file.transformer.FileToStringTransformer;
import org.springframework.integration.scheduling.PollerMetadata;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.PollableChannel;
import org.springframework.stereotype.Component;
@SpringBootApplication
@Configuration
public class DemoApplication {
private static final String DIRECTORY = "E:/usmandata/logs/input/";
public static void main(String[] args) throws IOException, InterruptedException {
SpringApplication.run(DemoApplication.class, args);
}
@Bean
public IntegrationFlow processFileFlow() {
return IntegrationFlows
.from("fileInputChannel")
.transform(fileToStringTransformer())
.handle("fileProcessor", "process").get();
}
@Bean
public MessageChannel fileInputChannel() {
return new DirectChannel();
}
@Bean
@InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "1000"))
public MessageSource<File> fileReadingMessageSource() {
CompositeFileListFilter<File> filters =new CompositeFileListFilter<>();
filters.addFilter(new SimplePatternFileListFilter("*.log"));
filters.addFilter(new LastModifiedFileFilter());
FileReadingMessageSource source = new FileReadingMessageSource();
source.setAutoCreateDirectory(true);
source.setDirectory(new File(DIRECTORY));
source.setFilter(filters);
return source;
}
@Bean
public FileToStringTransformer fileToStringTransformer() {
return new FileToStringTransformer();
}
@Bean
public FileProcessor fileProcessor() {
return new FileProcessor();
}
}
The FileReadingMessageSource uses a DirectoryScanner internally; it is normally set up by Spring after the properties are injected. Since you are managing the object outside of Spring, you need to call the Spring bean initialization and lifecycle methods afterPropertiesSet(), start() and stop().
Call stop() when the receive returns null.
> return null; // Not sure what to return!
If you return nothing, your calling thread will hang in the gateway waiting for a response. You could change the gateway to return void or, since your gateway is expecting a String, just return some value.
However, your calling code is not looking at the result anyway.
> gateway.processFilesin(inDir);
Also, remove the @Bean from the @ServiceActivator; with that style, the bean type must be MessageHandler.
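For illustration, if you did want to keep the @Bean style, the bean itself would have to be a MessageHandler, roughly like the sketch below inside the same configuration class (fileReaderHandler is a hypothetical name; the body is only a placeholder; it needs an import of org.springframework.messaging.MessageHandler):

@Bean
@ServiceActivator(inputChannel = "requestChannel")
public MessageHandler fileReaderHandler() {
    // MessageHandler is a functional interface, so a lambda is enough here.
    return message -> {
        String inDir = (String) message.getPayload();
        // read the directory and send the files onwards, as in the edited gateway method
    };
}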

Start elasticsearch within gradle build for integration tests

Is there a way to start elasticsearch within a gradle build before running integration tests and afterwards stop elasticsearch?
My approach so far is the following, but this blocks the further execution of the gradle build.
task runES(type: JavaExec) {
main = 'org.elasticsearch.bootstrap.Elasticsearch'
classpath = sourceSets.main.runtimeClasspath
systemProperties = ["es.path.home":"$buildDir/elastichome",
"es.path.data":"$buildDir/elastichome/data"]
}
For my purposes I decided to start Elasticsearch within my integration test, in Java code.
I tried ElasticsearchIntegrationTest, but that didn't work with Spring because it doesn't play well with SpringJUnit4ClassRunner.
I found it easier to start Elasticsearch in the before method:
My test class testing some 'dummy' productive code (indexing a document):
import static org.hamcrest.CoreMatchers.notNullValue;
import static org.junit.Assert.assertThat;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.ImmutableSettings.Builder;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.indices.IndexAlreadyExistsException;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
public class MyIntegrationTest {
private Node node;
private Client client;
@Before
public void before() {
createElasticsearchClient();
createIndex();
}
@After
public void after() {
this.client.close();
this.node.close();
}
@Test
public void testSomething() throws Exception {
// do something with elasticsearch
final String json = "{\"mytype\":\"bla\"}";
final String type = "mytype";
final String id = index(json, type);
assertThat(id, notNullValue());
}
/**
* some productive code
*/
private String index(final String json, final String type) {
// create Client
final Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "mycluster").build();
final TransportClient tc = new TransportClient(settings).addTransportAddress(new InetSocketTransportAddress(
"localhost", 9300));
// index a document
final IndexResponse response = tc.prepareIndex("myindex", type).setSource(json).execute().actionGet();
return response.getId();
}
private void createElasticsearchClient() {
final NodeBuilder nodeBuilder = NodeBuilder.nodeBuilder();
final Builder settingsBuilder = nodeBuilder.settings();
settingsBuilder.put("network.publish_host", "localhost");
settingsBuilder.put("network.bind_host", "localhost");
final Settings settings = settingsBuilder.build();
this.node = nodeBuilder.clusterName("mycluster").local(false).data(true).settings(settings).node();
this.client = this.node.client();
}
private void createIndex() {
try {
this.client.admin().indices().prepareCreate("myindex").execute().actionGet();
} catch (final IndexAlreadyExistsException e) {
// index already exists => we ignore this exception
}
}
}
It is also very important to use elasticsearch version 1.3.3 or higher. See Issue 5401.
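If starting a node for every test method turns out to be too slow, the same embedded-node setup can be hoisted into JUnit class-level callbacks. A sketch (reusing the NodeBuilder calls from the test above; the class name is illustrative, and the cluster name and settings should be adjusted to your environment):

import org.elasticsearch.client.Client;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MyIntegrationTestBase {
    protected static Node node;
    protected static Client client;

    @BeforeClass
    public static void startElasticsearch() {
        // One embedded node per test class instead of per test method.
        node = NodeBuilder.nodeBuilder().clusterName("mycluster").local(false).data(true).node();
        client = node.client();
    }

    @AfterClass
    public static void stopElasticsearch() {
        client.close();
        node.close();
    }
}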
