The idea is to read an EDI file from the file system and transform it to XML. I tried the example downloaded from Smooks and it works fine. But when I run the same code (and EDI file) from a Camel Processor, I get a NullPointerException.
Code
public class MyRouteBuilder extends RouteBuilder
{
    @Override
    public void configure()
    {
        from("file://C:/Users/Owner/Desktop/BPMN").process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception
            {
                System.err.println("We just downloaded: " + exchange.getIn().getHeader("CamelFileName"));
                Locale defaultLocale = Locale.getDefault();
                Locale.setDefault(new Locale("en", "IE"));
                // Instantiate Smooks with the config...
                Smooks smooks = new Smooks("smooks-config.xml");
                // smooks.setReaderConfig(new
                // UNEdifactReaderConfigurator("urn:org.milyn.edi.unedifact:d03b-mapping:v1.4"));
                System.err.println("Loaded smooks cfg");
                try
                {
                    // Create an exec context - no profiles....
                    ExecutionContext executionContext = smooks.createExecutionContext();
                    System.err.println("created execution context");
                    DOMResult domResult = new DOMResult();
                    // Configure the execution context to generate a report...
                    // executionContext.setEventListener(new HtmlReportGenerator("target/report/report.html"));
                    // Filter the input message to the outputWriter, using the execution context...
                    byte[] body = exchange.getIn().getBody(String.class).getBytes();
                    System.err.println("Retrieved the body " + new String(body));
                    smooks.filterSource(executionContext, new StreamSource(new ByteArrayInputStream(body)), domResult);
                    Locale.setDefault(defaultLocale);
                    System.err.println(domResult.getNode());
                    // System.err.println
                    System.err.println(XmlUtil.serialize(domResult.getNode().getChildNodes(), true));
                }
                finally
                {
                    smooks.close();
                }
            }
        }).to("file:C:/ws-juno");
    }
}
Log
[ Thread-1] FakeFtpServer INFO Starting the server on port 0
[ Thread-1] FakeFtpServer INFO Actual server port is 49852
[ main] MainSupport INFO Apache Camel 2.9.0 starting
[ main] DefaultCamelContext INFO Apache Camel 2.9.0 (CamelContext: camel-1) is starting
[ main] ManagementStrategyFactory INFO JMX enabled. Using ManagedManagementStrategy.
[ main] ultManagementLifecycleStrategy INFO StatisticsLevel at All so enabling load performance statistics
[ main] AnnotationTypeConverterLoader INFO Found 3 packages with 15 #Converter classes to load
[ main] DefaultTypeConverter INFO Loaded 168 core type converters (total 168 type converters)
[ main] DefaultTypeConverter INFO Loaded additional 0 type converters (total 168 type converters) in 0.004 seconds
[ main] rFileExclusiveReadLockStrategy WARN Deleting orphaned lock file: C:\Users\Owner\Desktop\BPMN\input-message.edi.camelLock
[ main] DefaultCamelContext INFO Route: route1 started and consuming from: Endpoint[file://C:/Users/Owner/Desktop/BPMN]
[ main] DefaultCamelContext INFO Total 1 routes, of which 1 is started.
[ main] DefaultCamelContext INFO Apache Camel 2.9.0 (CamelContext: camel-1) started in 4.508 seconds
We just downloaded: input-message.edi
Loaded smooks cfg
created execution context
Retrieved the body HDR*1*0*59.97*64.92*4.95*Wed Nov 15 13:45:28 EST 2006
CUS*user1*Harry^Fletcher*SD
ORD*1*1*364*The 40-Year-Old Virgin*29.98
ORD*2*1*299*Pulp Fiction*29.99
null
[://C:/Users/Owner/Desktop/BPMN] DefaultErrorHandler ERROR Failed delivery for exchangeId: ID-Owner-PC-49853-1329098945139-0-1. Exhausted after delivery attempt: 1 caught: java.lang.NullPointerException
java.lang.NullPointerException
at com.xcg.routes.MyRouteBuilder$1.process(MyRouteBuilder.java:69)[file:/C:/ws-juno/routes/target/classes/:]
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:71)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:91)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.RedeliveryErrorHandler.processErrorHandler(RedeliveryErrorHandler.java:322)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:213)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.RouteContextProcessor.processNext(RouteContextProcessor.java:45)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.interceptor.DefaultChannel.process(DefaultChannel.java:303)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:117)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.RouteContextProcessor.processNext(RouteContextProcessor.java:45)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.UnitOfWorkProcessor.processAsync(UnitOfWorkProcessor.java:150)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.UnitOfWorkProcessor.process(UnitOfWorkProcessor.java:117)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:71)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:352)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:175)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:136)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:140)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:92)[camel-core-2.9.0.jar:2.9.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)[:1.6.0_26]
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(Unknown Source)[:1.6.0_26]
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)[:1.6.0_26]
at java.lang.Thread.run(Unknown Source)[:1.6.0_26]
Answering my own question for the benefit of others.
The problem with my posted code is that the exceptions were being swallowed (Camel, by default, logs only the outermost exception). Once I caught the exception and printed the full stack trace, I found that the root cause was an incorrect mapping in the Smooks edi-message-mapping XML.
Also, Smooks has a website hosted on GAE (http://edi-to-xml.appspot.com/) that allows you to parse and convert an EDI message to XML.
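For anyone hitting the same wall, here is a minimal sketch of how the hidden error can be surfaced inside the processor; the catch block and rethrow are additions for illustration, not part of the original Smooks example:
try
{
    ExecutionContext executionContext = smooks.createExecutionContext();
    DOMResult domResult = new DOMResult();
    byte[] body = exchange.getIn().getBody(String.class).getBytes();
    smooks.filterSource(executionContext, new StreamSource(new ByteArrayInputStream(body)), domResult);
    System.err.println(XmlUtil.serialize(domResult.getNode().getChildNodes(), true));
}
catch (Exception e)
{
    // Camel's DefaultErrorHandler only reports the outermost NullPointerException;
    // printing here exposes the underlying Smooks mapping error as the nested cause.
    e.printStackTrace();
    throw e;
}
finally
{
    smooks.close();
}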
I'm having the same problem as this question, with RabbitMQ dropping the connection the same second it has started. I have a minimal project on GitHub.
It is a Spring Boot project with two Docker containers running: one with RabbitMQ and one with PostgreSQL. Using the containers when running the project is no problem; it is just the RabbitMQ Testcontainer that seems to be unstable. I also tried wrapping the container in a GenericContainer instead, with the same result.
The first test, which checks that the containers are up and running, passes, so it seems to be an issue with the AMQP connection only.
Config class:
@SpringBootTest(classes = TestContainersDemoApplication.class)
@Testcontainers
@AutoConfigureMockMvc
@ExtendWith(SpringExtension.class)
public abstract class TestContainersConfig {

    @Autowired
    public MockMvc mockMvc;

    @Container
    public static final RabbitMQContainer rabbitMQContainer = new RabbitMQContainer("rabbitmq:3.8-management-alpine");

    @Container
    public static PostgreSQLContainer sqlContainer = new PostgreSQLContainer("postgres:latest")
            .withDatabaseName("demo")
            .withUsername("postgres")
            .withPassword("postgres");

    @DynamicPropertySource
    static void registerProperties(DynamicPropertyRegistry dynamicPropertyRegistry) {
        dynamicPropertyRegistry.add("spring.datasource.url", () -> sqlContainer.getJdbcUrl());
        dynamicPropertyRegistry.add("spring.datasource.username", () -> sqlContainer.getUsername());
        dynamicPropertyRegistry.add("spring.datasource.password", () -> sqlContainer.getPassword());
        dynamicPropertyRegistry.add("spring.rabbitmq.host", rabbitMQContainer::getHost);
        dynamicPropertyRegistry.add("spring.rabbitmq.port", rabbitMQContainer::getAmqpPort);
    }

    static {
        Startables.deepStart(Stream.of(rabbitMQContainer, sqlContainer)).join();
    }
}
The stack trace seems to be identical to the one in the linked question:
2022-11-18 21:00:23.555 INFO 441007 --- [ main] o.s.a.r.l.SimpleMessageListenerContainer : Broker not available; cannot force queue declarations during start: java.io.IOException
2022-11-18 21:00:23.559 INFO 441007 --- [ntContainer#0-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:49596]
2022-11-18 21:00:23.770 WARN 441007 --- [127.0.0.1:49596] c.r.c.impl.ForgivingExceptionHandler : An unexpected connection driver error occurred (Exception message: Socket closed)
2022-11-18 21:00:23.773 ERROR 441007 --- [ntContainer#0-1] o.s.a.r.l.SimpleMessageListenerContainer : Failed to check/redeclare auto-delete queue(s).
org.springframework.amqp.AmqpIOException: java.io.IOException
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:70) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:602) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:725) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.createConnection(ConnectionFactoryUtils.java:252) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:2180) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2153) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2133) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueInfo(RabbitAdmin.java:463) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.core.RabbitAdmin.getQueueProperties(RabbitAdmin.java:447) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.attemptDeclarations(AbstractMessageListenerContainer.java:1930) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.redeclareElementsIfNecessary(AbstractMessageListenerContainer.java:1911) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.initialize(SimpleMessageListenerContainer.java:1377) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1223) ~[spring-rabbit-2.4.7.jar:2.4.7]
at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
Caused by: java.io.IOException: null
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:129) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:125) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:396) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1225) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1173) ~[amqp-client-5.14.2.jar:5.14.2]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.connectAddresses(AbstractConnectionFactory.java:640) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.connect(AbstractConnectionFactory.java:615) ~[spring-rabbit-2.4.7.jar:2.4.7]
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:565) ~[spring-rabbit-2.4.7.jar:2.4.7]
... 12 common frames omitted
Caused by: com.rabbitmq.client.ShutdownSignalException: connection error
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:66) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:36) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:502) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:326) ~[amqp-client-5.14.2.jar:5.14.2]
... 17 common frames omitted
Caused by: java.io.EOFException: null
at java.base/java.io.DataInputStream.readUnsignedByte(DataInputStream.java:296) ~[na:na]
at com.rabbitmq.client.impl.Frame.readFrom(Frame.java:91) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.client.impl.SocketFrameHandler.readFrame(SocketFrameHandler.java:184) ~[amqp-client-5.14.2.jar:5.14.2]
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:665) ~[amqp-client-5.14.2.jar:5.14.2]
... 1 common frames omitted
2022-11-18 21:00:23.774 INFO 441007 --- [ntContainer#0-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:49596]
2022-11-18 21:00:23.978 WARN 441007 --- [127.0.0.1:49596] c.r.c.impl.ForgivingExceptionHandler : An unexpected connection driver error occurred (Exception message: Socket closed)
2022-11-18 21:00:24.022 INFO 441007 --- [ main] message.MessageControllerTest : Started MessageControllerTest in 4.155 seconds (JVM running for 14.327)
2022-11-18 21:00:24.378 INFO 441007 --- [ main] message.MessageControllerTest : sqlcontianers are working
2022-11-18 21:00:24.390 INFO 441007 --- [ main] message.MessageControllerTest : InsertNewMessage
2022-11-18 21:00:24.447 INFO 441007 --- [ main] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:49596]
You are manually managing the container lifecycle, which is a good approach:
Startables.deepStart(Stream.of(rabbitMQContainer, sqlContainer)).join();
In this case, remove the @Container and @Testcontainers annotations, which interfere with the test lifecycle.
Your test fails with:
Caused by: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
The default admin username and password in RabbitMQContainer should both be guest. Changing them accordingly in application.yml makes the test pass. Once changed, it will also work when still using the @Testcontainers extension (although that is not recommended in this case).
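If you prefer not to hard-code the credentials in application.yml, a sketch of registering them from the container itself is shown below; it assumes the getAdminUsername()/getAdminPassword() accessors on RabbitMQContainer, which return the container's configured (default guest/guest) credentials:
@DynamicPropertySource
static void registerRabbitProperties(DynamicPropertyRegistry registry) {
    registry.add("spring.rabbitmq.host", rabbitMQContainer::getHost);
    registry.add("spring.rabbitmq.port", rabbitMQContainer::getAmqpPort);
    // Take the broker credentials from the container instead of application.yml
    registry.add("spring.rabbitmq.username", rabbitMQContainer::getAdminUsername);
    registry.add("spring.rabbitmq.password", rabbitMQContainer::getAdminPassword);
}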
Btw, thanks for sharing the reproducer; it made debugging very easy.
I can't really understand what's going on. I'm studying Spring Batch and, for various reasons, I'd like to execute two steps, one after the other.
Now please don't mind what the steps are currently doing, just keep in mind that I would like to perform two steps sequentially.
This is the code:
@Configuration
@EnableBatchProcessing
public class JobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    private List<Employee> employeesToSave = new ArrayList<Employee>();

    public JsonItemReader<Employee> jsonReader() {
        System.out.println("Try to read JSON");
        final ObjectMapper mapper = new ObjectMapper();
        final JacksonJsonObjectReader<Employee> jsonObjectReader = new JacksonJsonObjectReader<>(
                Employee.class);
        jsonObjectReader.setMapper(mapper);
        return new JsonItemReaderBuilder<Employee>().jsonObjectReader(jsonObjectReader)
                .resource(new ClassPathResource("input.json"))
                .name("myReader")
                .build();
    }

    public ListItemReader<Employee> listReader() {
        System.out.println("Read from list");
        return new ListItemReader<Employee>(employeesToSave);
    }

    public ItemProcessor<Employee, Employee> filterProcessor() {
        return employee -> {
            System.out.println("Processing JSON");
            return employee;
        };
    }

    public ItemWriter<Employee> filterWriter() {
        return listEmployee -> {
            employeesToSave.addAll(listEmployee);
            System.out.println("Save on list " + listEmployee.toString());
        };
    }

    public ItemWriter<Employee> insertToDBWriter() {
        System.out.println("Try to save on DB");
        return listEmployee -> {
            System.out.println("Save on DB " + listEmployee.toString());
        };
    }

    public Step filterStep() {
        StepBuilder stepBuilder = stepBuilderFactory.get("filterStep");
        SimpleStepBuilder<Employee, Employee> simpleStepBuilder = stepBuilder.chunk(5);
        return simpleStepBuilder.reader(jsonReader()).processor(filterProcessor()).writer(filterWriter()).build();
    }

    public Step insertToDBStep() {
        StepBuilder stepBuilder = stepBuilderFactory.get("insertToDBStep");
        SimpleStepBuilder<Employee, Employee> simpleStepBuilder = stepBuilder.chunk(5);
        return simpleStepBuilder.reader(listReader()).writer(insertToDBWriter()).build();
    }

    @Bean
    public Job myJob(JobRepository jobRepository, PlatformTransactionManager platformTransactionManager) {
        return jobBuilderFactory.get("myJob").incrementer(new RunIdIncrementer())
                .start(filterStep())
                .next(insertToDBStep())
                .build();
    }
}
Why doesn't insertToDBStep start at the end of filterStep? It actually looks like the filter is running at the same time. And why does it look like the job starts after the initialization of the Root WebApplicationContext?
This is the output.
2022-05-23 15:40:49.418 INFO 14008 --- [ restartedMain] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1024 ms
Try to read JSON
Read from list
Try to save on DB
2022-05-23 15:40:49.882 INFO 14008 --- [ restartedMain] o.s.b.d.a.OptionalLiveReloadServer : LiveReload server is running on port 35729
2022-05-23 15:40:49.917 INFO 14008 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2022-05-23 15:40:49.926 INFO 14008 --- [ restartedMain] c.marco.firstbatch.TestBatchApplication : Started TestBatchApplication in 1.985 seconds (JVM running for 2.789)
2022-05-23 15:40:49.927 INFO 14008 --- [ restartedMain] o.s.b.a.b.JobLauncherApplicationRunner : Running default command line with: []
2022-05-23 15:40:49.928 WARN 14008 --- [ restartedMain] o.s.b.c.c.a.DefaultBatchConfigurer : No datasource was provided...using a Map based JobRepository
2022-05-23 15:40:49.928 WARN 14008 --- [ restartedMain] o.s.b.c.c.a.DefaultBatchConfigurer : No transaction manager was provided, using a ResourcelessTransactionManager
2022-05-23 15:40:49.943 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : No TaskExecutor has been set, defaulting to synchronous executor.
2022-05-23 15:40:49.972 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=myJob]] launched with the following parameters: [{run.id=1}]
2022-05-23 15:40:50.003 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [filterStep]
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee#958d6e7, com.marco.firstbatch.Employee#464d17f8, com.marco.firstbatch.Employee#705520ac, com.marco.firstbatch.Employee#1a9f8e93, com.marco.firstbatch.Employee#55bf8cc9]
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee#55d706c0, com.marco.firstbatch.Employee#1bc46dd4]
2022-05-23 15:40:50.074 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [filterStep] executed in 70ms
2022-05-23 15:40:50.081 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [insertToDBStep]
2022-05-23 15:40:50.084 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [insertToDBStep] executed in 3ms
2022-05-23 15:40:50.088 INFO 14008 --- [ restartedMain] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{run.id=1}] and the following status: [COMPLETED] in 96ms
Thanks in advance.
The steps are executed correctly in sequence. You are putting System.out.println statements in two kinds of places:
In the bean definition methods executed by Spring Framework when configuring the application context
In the code of batch artefacts (item processor, item writer) which are called by Spring Batch when running your job
In your case, Spring Framework will call the following bean definition methods in order to define the first step, filterStep():
jsonReader(): prints Try to read JSON. The file is not read at this time, only the json reader bean is defined. A more accurate log message would be: json reader bean created.
listReader(): prints Read from list. Same here, no reading has started yet. A more accurate log message would be: list reader bean created.
filterProcessor(): prints nothing. The log statement is in the ItemProcessor#process method. This will be called by Spring Batch at runtime, not at this point in time, which is configuration time.
filterWriter(): same here, the print statement is in the write method called at runtime and not at configuration time
This results in the following output for filterStep():
Try to read JSON
Read from list
Now Spring Framework moves to defining the next step, insertToDBStep(). For this, it will call the following methods in order, according to your step definition:
listReader(): this bean has already been defined, so Spring will reuse the same instance (by default, Spring beans are singletons). Hence, there is no output from this method.
insertToDBWriter(): prints Try to save on DB. Same here, there is no actual save to DB here. A more accurate log message would be insertToDBWriter bean created (or even more accurate, attempting to create insertToDBWriter bean, in case the code that follows throws an exception).
You now have the following cumulative output:
Try to read JSON
Read from list
Try to save on DB
At this point, Spring Framework has finished its job of configuring the application context; Spring Batch takes over and starts the job. The actual processing of filterStep() begins:
The reader (ListItemReader) does not have any output in the read method.
The processor prints Processing JSON
The writer prints Save on list ...
You seem to have two chunks (the first with 5 items and the second with 2 items), which leads to the following output:
2022-05-23 15:40:50.003 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [filterStep]
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee#958d6e7, com.marco.firstbatch.Employee#464d17f8, com.marco.firstbatch.Employee#705520ac, com.marco.firstbatch.Employee#1a9f8e93, com.marco.firstbatch.Employee#55bf8cc9]
Processing JSON
Processing JSON
Save on list [com.marco.firstbatch.Employee#55d706c0, com.marco.firstbatch.Employee#1bc46dd4]
2022-05-23 15:40:50.074 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [filterStep] executed in 70ms
Then, the next step starts its execution and you get the following output:
2022-05-23 15:40:50.081 INFO 14008 --- [ restartedMain] o.s.batch.core.job.SimpleStepHandler : Executing step: [insertToDBStep]
2022-05-23 15:40:50.084 INFO 14008 --- [ restartedMain] o.s.batch.core.step.AbstractStep : Step: [insertToDBStep] executed in 3ms
Here you might be asking why there are no items written by insertToDBWriter() (i.e. why there are no Save on DB .. logs). This is because listReader() is a singleton bean and you are using it in both steps, so when the second step calls its read method, it will still return null, because the same instance is used and it has already exhausted the list of items in step 1. Hence, this step ends immediately since there are no items to process. If you want to re-read the items from the list in the second step, you can annotate the reader method with @StepScope. This will create a distinct instance of the reader for each step, as sketched below.
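A rough sketch of that suggestion, assuming the reader method is promoted to a Spring bean with @Bean:
@Bean
@StepScope
public ListItemReader<Employee> listReader() {
    // With @StepScope, a fresh reader instance is created for each step execution,
    // so the second step reads whatever is in employeesToSave at that point.
    return new ListItemReader<>(employeesToSave);
}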
The route loadfile is started automatically when I start the main class.
On exception, when the process should finish, it starts loadfile again and again.
It should be started from the timer, which should then call the loadfile route, but loadfile is starting independently as well as from the timer.
CamelContext context = new DefaultCamelContext(sr);
try {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            onException(Exception.class)
                .log(LoggingLevel.INFO, "Extype:${exception.message}")
                .stop();
            from("timer://alertstrigtimer?period=60s&repeatCount=1")
                .startupOrder(1)
                .log(LoggingLevel.INFO, "*******************************Job-Alert-System: Started: alertstrigtimer******************************")
                .to("direct:loadFile").stop();
            from("direct:loadFile").routeId("loadfile")
                .log(LoggingLevel.INFO, "*******************************Job-Alert-System: Started: direct:loadFile******************************")
                .from(getTriggerFileURI(getWorkFilePath(), getWorkFileName())).choice()
                .
                .
    });
    context.start();
    Thread.sleep(40000);
Following is the log:
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) is starting
[main] INFO org.apache.camel.management.ManagedManagementStrategy - JMX is enabled
[main] INFO org.apache.camel.impl.converter.DefaultTypeConverter - Type converters loaded (core: 194, classpath: 14)
[main] INFO org.apache.camel.impl.DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route1 started and consuming from: timer://alertstrigtimer?period=60s&repeatCount=1
[main] INFO org.apache.camel.impl.DefaultCamelContext - Skipping starting of route loadfile as its configured with autoStartup=false
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: loadDataAndAlerts started and consuming from: direct://loadDataAndAlerts
[main] INFO org.apache.camel.impl.DefaultCamelContext - Total 4 routes, of which 2 are started
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) started in 0.761 seconds
[Camel (camel-1) thread #1 - timer://alertstrigtimer] INFO route1 - *******************************Job-Alert-System: Started: alertstrigtimer******************************
[Camel (camel-1) thread #2 - timer://alertstrigtimer] INFO loadfile - *******************************Job-Alert-System: Started: direct:loadFile******************************
[Camel (camel-1) thread #1 - file://null] INFO loadfile - *******************************Job-Alert-System: Started: direct:loadFile******************************
The problem could be caused by the line .from(getTriggerFileURI(getWorkFilePath(), getWorkFileName())) in the loadfile route. A route with multiple from endpoints is known as Multiple Inputs, and this pattern was removed in Camel 3.x.
From Red Hat:
from("URI1").from("URI2").from("URI3").to("DestinationUri");
..., exchanges from each of the input endpoints,
URI1, URI2, and URI3, are processed independently of each other and in
separate threads. In fact, you can think of the preceding route as
being equivalent to the following three separate routes:
from("URI1").to("DestinationUri");
from("URI2").to("DestinationUri");
from("URI3").to("DestinationUri");
Rather than using multiple from endpoints (an extra independent input), try the content enricher pattern (pollEnrich for the file component).
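A rough sketch of what that could look like; the file URI and timeout below are placeholders, not values from the original route:
from("timer://alertstrigtimer?period=60s&repeatCount=1")
    .routeId("alertstrigtimer")
    .log(LoggingLevel.INFO, "Timer fired, polling for the trigger file")
    .to("direct:loadFile");

from("direct:loadFile").routeId("loadfile")
    // pollEnrich pulls a single file in as the message body instead of acting
    // as a second, independently started consumer on the route.
    .pollEnrich("file:/path/to/work?fileName=trigger.dat", 10000)
    .log(LoggingLevel.INFO, "Processing ${header.CamelFileName}");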
I have a Spring Boot app using a GlobalKTable. It worked fine until the update to kafka-streams-5.5.0-css (the Confluent Platform version compatible with Apache Kafka 2.5.0) from 5.3.2-css (Apache Kafka 2.3.1).
So this is my configuration:
@Configuration
@EnableKafkaStreams
public class GlobalTableConfiguration {

    public GlobalTableConfiguration() {
    }

    @Bean
    public GlobalKTable<String, String> table(StreamsBuilder kStreamsBuilder) {
        return kStreamsBuilder.globalTable("topic1", Consumed.with(null, null),
                Materialized.as("topic1-store"));
    }
}
I'm getting the store like this:
streamsBuilderFactoryBean.getKafkaStreams()
        .store("topic1-store", QueryableStoreTypes.keyValueStore());
This fails with:
Request processing failed; nested exception is java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR.
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR.
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
Caused by: java.lang.IllegalStateException: KafkaStreams is not running. State is ERROR.
at org.apache.kafka.streams.KafkaStreams.validateIsRunningOrRebalancing(KafkaStreams.java:316)
at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1182)
at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1169)
I can see in the log that the stream thread is shutting down before this:
2020-06-16 13:22:46.943 INFO 72423 --- [ Test worker] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.5.0
2020-06-16 13:22:46.944 INFO 72423 --- [ Test worker] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 66563e712b0b9f84
2020-06-16 13:22:46.944 INFO 72423 --- [ Test worker] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1592299366943
2020-06-16 13:22:46.946 INFO 72423 --- [ad | producer-2] org.apache.kafka.clients.Metadata : [Producer clientId=producer-2] Cluster ID: aKrIp_7wQcqF9OlSUoBgSQ
2020-06-16 13:22:47.496 INFO 72423 --- [ Test worker] org.apache.kafka.streams.KafkaStreams : stream-client [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4] State transition from ERROR to PENDING_SHUTDOWN
2020-06-16 13:22:47.497 INFO 72423 --- [ms-close-thread] o.a.k.s.p.internals.StreamThread : stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-StreamThread-1] Informed to shut down
2020-06-16 13:22:47.497 INFO 72423 --- [ms-close-thread] o.a.k.s.p.internals.GlobalStreamThread : global-stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-GlobalStreamThread] State transition from RUNNING to PENDING_SHUTDOWN
2020-06-16 13:22:47.557 INFO 72423 --- [balStreamThread] o.a.k.s.p.internals.GlobalStreamThread : global-stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-GlobalStreamThread] Shutting down
2020-06-16 13:22:47.571 INFO 72423 --- [balStreamThread] o.a.k.s.p.internals.GlobalStreamThread : global-stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-GlobalStreamThread] State transition from PENDING_SHUTDOWN to DEAD
2020-06-16 13:22:47.571 INFO 72423 --- [balStreamThread] o.a.k.s.p.internals.GlobalStreamThread : global-stream-thread [app-d09c3f52-8d77-4814-944b-ba08b79ed8a4-GlobalStreamThread] Shutdown complete
After some experiments, I made it work by adding this to my configuration:
@Bean
public KStream kStream(StreamsBuilder kStreamsBuilder) {
    return kStreamsBuilder.stream("some-topic", Consumed.with(null, null));
}
So basically, when I have any KStream defined (consuming from any topic), the stream thread stays alive and everything works as it did before the upgrade.
My question is: what would be the correct way to do this without this useless bean (and topic)?
EDIT
There was a similar issue discussed here: Kafka Streams 2.5.0 requires input topic
It looks like this will be fixed in kafka-streams 2.5.1, and until then, setting num.stream.threads: 0 is a nicer workaround than declaring a dummy stream.
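For reference, a sketch of how that property could be applied through the Spring for Apache Kafka streams configuration bean; the application id and broker address here are placeholders:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration kStreamsConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "app");                // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
    // Workaround until kafka-streams 2.5.1: a GlobalKTable only needs the
    // GlobalStreamThread, so no regular stream threads are required.
    props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 0);
    return new KafkaStreamsConfiguration(props);
}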
This appears to have nothing to do with Spring and is caused by some internal changes in the kafka-streams classes.
This works fine with Boot 2.2.x (Kafka-streams 2.3.x).
@SpringBootApplication
@EnableKafkaStreams
public class So62406117Application {

    public static void main(String[] args) {
        SpringApplication.run(So62406117Application.class, args);
    }

    @Bean
    public GlobalKTable<String, String> table(StreamsBuilder kStreamsBuilder) {
        return kStreamsBuilder.globalTable("topic1", Consumed.with(null, null),
                Materialized.as("topic1-store"));
    }

    @Bean
    public ApplicationRunner runner(StreamsBuilderFactoryBean fb) {
        return args -> {
            ReadOnlyKeyValueStore<Object, Object> store =
                    fb.getKafkaStreams().store("topic1-store", QueryableStoreTypes.keyValueStore());
            System.out.println(store);
        };
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("topic1").partitions(1).replicas(1).build();
    }
}
But it fails with Boot 2.3 (Kafka Streams 2.5.0).
We are definitely starting the KafkaStreams instance (in the factory bean's start() method), but during that start() we get:
java.lang.IllegalStateException: Consumer is not subscribed to any topics or assigned any partitions
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1228) ~[kafka-clients-2.5.0.jar:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216) ~[kafka-clients-2.5.0.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:853) ~[kafka-streams-2.5.0.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:753) ~[kafka-streams-2.5.0.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:697) ~[kafka-streams-2.5.0.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:670) ~[kafka-streams-2.5.0.jar:na]
2020-06-16 17:44:02.700 INFO 10635 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1] State transition from STARTING to PENDING_SHUTDOWN
2020-06-16 17:44:02.700 INFO 10635 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1] Shutting down
2020-06-16 17:44:02.700 INFO 10635 --- [-StreamThread-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2020-06-16 17:44:02.700 INFO 10635 --- [-StreamThread-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2020-06-16 17:44:02.704 INFO 10635 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
2020-06-16 17:44:02.704 INFO 10635 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [foo-235af8e6-6618-4e73-86ad-75307130004b] State transition from REBALANCING to ERROR
2020-06-16 17:44:02.704 ERROR 10635 --- [-StreamThread-1] org.apache.kafka.streams.KafkaStreams : stream-client [foo-235af8e6-6618-4e73-86ad-75307130004b] All stream threads have died. The instance will be in error state and should be closed.
2020-06-16 17:44:02.704 INFO 10635 --- [-StreamThread-1] o.a.k.s.p.internals.StreamThread : stream-thread [foo-235af8e6-6618-4e73-86ad-75307130004b-StreamThread-1] Shutdown complete
I have barely started with Camel and I am trying out the file component. I want to process each file in a directory recursively. I tried to chain from("file:") to a processor, and I don't understand why it isn't working. In the trace, I can see that the Camel context started the route.
public static void main(String[] args) throws Exception {
    System.out.println("Starting camel");
    final CamelContext camelContext = new DefaultCamelContext();
    camelContext.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("file://Users/abc123/Documents?recursive=true&noop=true&idempotent=true")
                .process(new Processor() {
                    public void process(Exchange exchange)
                            throws Exception {
                        System.out.println("exchange=" + exchange);
                    }
                });
        }
    });
    camelContext.setTracing(true);
    camelContext.start();
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            try {
                camelContext.stop();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    });
    Thread.currentThread().join();
}
Here's the log trace:
Starting camel
2015-06-03 00:00:59 INFO DefaultCamelContext - Apache Camel 2.15.2 (CamelContext: camel-1) is starting
2015-06-03 00:00:59 INFO DefaultCamelContext - Tracing is enabled on CamelContext: camel-1
2015-06-03 00:00:59 INFO ManagedManagementStrategy - JMX is enabled
2015-06-03 00:00:59 INFO DefaultTypeConverter - Loaded 182 type converters
2015-06-03 00:00:59 INFO DefaultCamelContext - AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
2015-06-03 00:00:59 INFO DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
2015-06-03 00:00:59 INFO FileEndpoint - Using default memory based idempotent repository with cache max size: 1000
2015-06-03 00:00:59 INFO DefaultCamelContext - Route: route1 started and consuming from: Endpoint[file://Users/abc123/Documents?idempotent=true&noop=true&recursive=true]
2015-06-03 00:00:59 INFO DefaultCamelContext - Total 1 routes, of which 1 is started.
2015-06-03 00:00:59 INFO DefaultCamelContext - Apache Camel 2.15.2 (CamelContext: camel-1) started in 0.422 seconds
Be sure to specify the correct path to the folder. The best way is to specify the absolute path:
"file:/Users/abc123/Documents?recursive=true&noop=true&idempotent=true"
You can use a relative path, but then make sure that it really is relative to the current working directory of the application.
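For illustration, a small sketch contrasting the two forms; the paths are placeholders:
// Absolute path: note the single leading slash right after "file:".
from("file:/Users/abc123/Documents?recursive=true&noop=true&idempotent=true")
    .log("absolute: ${header.CamelFileName}");

// Relative path: resolved against the JVM's current working directory.
from("file:data/inbox?recursive=true&noop=true")
    .log("relative: ${header.CamelFileName}");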