I have just started with Camel and I am trying out the file component. I want to process each file in a directory recursively. I tried to chain from("file:...") to a processor, and I don't understand why it isn't working. In the trace, I can see that the Camel context started the route.
public static void main(String[] args) throws Exception {
    System.out.println("Starting camel");
    final CamelContext camelContext = new DefaultCamelContext();
    camelContext.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("file://Users/abc123/Documents?recursive=true&noop=true&idempotent=true")
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        System.out.println("exchange=" + exchange);
                    }
                });
        }
    });
    camelContext.setTracing(true);
    camelContext.start();
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            try {
                camelContext.stop();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    });
    Thread.currentThread().join();
}
Here's the log trace:
Starting camel
2015-06-03 00:00:59 INFO DefaultCamelContext - Apache Camel 2.15.2 (CamelContext: camel-1) is starting
2015-06-03 00:00:59 INFO DefaultCamelContext - Tracing is enabled on CamelContext: camel-1
2015-06-03 00:00:59 INFO ManagedManagementStrategy - JMX is enabled
2015-06-03 00:00:59 INFO DefaultTypeConverter - Loaded 182 type converters
2015-06-03 00:00:59 INFO DefaultCamelContext - AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
2015-06-03 00:00:59 INFO DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
2015-06-03 00:00:59 INFO FileEndpoint - Using default memory based idempotent repository with cache max size: 1000
2015-06-03 00:00:59 INFO DefaultCamelContext - Route: route1 started and consuming from: Endpoint[file://Users/abc123/Documents?idempotent=true&noop=true&recursive=true]
2015-06-03 00:00:59 INFO DefaultCamelContext - Total 1 routes, of which 1 is started.
2015-06-03 00:00:59 INFO DefaultCamelContext - Apache Camel 2.15.2 (CamelContext: camel-1) started in 0.422 seconds
Be sure to specify the correct path to the folder. The best way is to use the absolute path:
"file:/Users/abc123/Documents?recursive=true&noop=true&idempotent=true"
You can use a relative path, but then make sure that it really is relative to the current working directory of the application. (As written, file://Users/abc123/Documents points Camel at the directory Users/abc123/Documents relative to the working directory, not at /Users/abc123/Documents.)
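For reference, a corrected version of the route (a minimal sketch, assuming the files really live under /Users/abc123/Documents):
from("file:/Users/abc123/Documents?recursive=true&noop=true&idempotent=true")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            // CamelFileName holds the name of the file currently being processed
            System.out.println("exchange=" + exchange.getIn().getHeader("CamelFileName"));
        }
    });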
Related
Route loadfile starts automatically when I start the main class.
On exception, when the process should finish, it starts loadfile again and again.
It should be started by the timer, which then calls the loadfile route, but loadfile starts independently as well as from the timer.
CamelContext context = new DefaultCamelContext(sr);
try {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            onException(Exception.class)
                .log(LoggingLevel.INFO, "Extype:${exception.message}")
                .stop();

            from("timer://alertstrigtimer?period=60s&repeatCount=1")
                .startupOrder(1)
                .log(LoggingLevel.INFO, "*******************************Job-Alert-System: Started: alertstrigtimer******************************")
                .to("direct:loadFile").stop();

            from("direct:loadFile").routeId("loadfile")
                .log(LoggingLevel.INFO, "*******************************Job-Alert-System: Started: direct:loadFile******************************")
                .from(getTriggerFileURI(getWorkFilePath(), getWorkFileName())).choice()
                // ...
                // ...
    });
    context.start();
    Thread.sleep(40000);
The following is the log:
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) is starting
[main] INFO org.apache.camel.management.ManagedManagementStrategy - JMX is enabled
[main] INFO org.apache.camel.impl.converter.DefaultTypeConverter - Type converters loaded (core: 194, classpath: 14)
[main] INFO org.apache.camel.impl.DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: route1 started and consuming from: timer://alertstrigtimer?period=60s&repeatCount=1
[main] INFO org.apache.camel.impl.DefaultCamelContext - Skipping starting of route loadfile as its configured with autoStartup=false
[main] INFO org.apache.camel.impl.DefaultCamelContext - Route: loadDataAndAlerts started and consuming from: direct://loadDataAndAlerts
[main] INFO org.apache.camel.impl.DefaultCamelContext - Total 4 routes, of which 2 are started
[main] INFO org.apache.camel.impl.DefaultCamelContext - Apache Camel 2.21.1 (CamelContext: camel-1) started in 0.761 seconds
[Camel (camel-1) thread #1 - timer://alertstrigtimer] INFO route1 - *******************************Job-Alert-System: Started: alertstrigtimer******************************
[Camel (camel-1) thread #2 - timer://alertstrigtimer] INFO loadfile - *******************************Job-Alert-System: Started: direct:loadFile******************************
[Camel (camel-1) thread #1 - file://null] INFO loadfile - *******************************Job-Alert-System: Started: direct:loadFile******************************
The problem is likely caused by this line in the loadfile route: .from(getTriggerFileURI(getWorkFilePath(), getWorkFileName())). A route with multiple from endpoints is known as Multiple Input, and this pattern was removed in Camel 3.x.
From the Red Hat documentation:
from("URI1").from("URI2").from("URI3").to("DestinationUri");
..., exchanges from each of the input endpoints, URI1, URI2, and URI3, are processed independently of each other and in separate threads. In fact, you can think of the preceding route as being equivalent to the following three separate routes:
from("URI1").to("DestinationUri");
from("URI2").to("DestinationUri");
from("URI3").to("DestinationUri");
Rather than using multiple from endpoints (an extra, independent input), try the content enricher pattern (pollEnrich for the file component), as sketched below.
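A hedged sketch of that suggestion, as the body of configure() (the trigger-file URI and the 10-second timeout are placeholders, not values from the question):
from("direct:loadFile").routeId("loadfile")
    .log(LoggingLevel.INFO, "Started: direct:loadFile")
    // Poll a single file as enrichment input instead of adding a second consumer.
    // Waits up to 10s; the body stays null if no file shows up in time.
    .pollEnrich("file://path/to/work?fileName=trigger.txt&noop=true", 10000)
    .choice()
        .when(body().isNotNull())
            .log(LoggingLevel.INFO, "Trigger file picked up: ${header.CamelFileName}")
        .otherwise()
            .log(LoggingLevel.WARN, "No trigger file arrived within the timeout")
    .end();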
I'm trying to create Kafka topics using Java, but I get Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_SSL_PRINCIPAL_MAPPING_RULES and I can't fix it.
My goal is to create a topic so that when I run my Kafka server and display my topics with the command bin/kafka-topics.sh --list --bootstrap-server localhost:9092, I can actually see my topic in the list.
(The command is from Kafka's official website https://kafka.apache.org/quickstart.)
I looked at this question, How to create a Topic in Kafka through Java, which inspired my code, but not only does it not really help, it also seems to use deprecated classes and methods.
I tried to use what I believed to be more recent classes such as ZooKeeperClient, KafkaZkClient and AdminZkClient, but from what I understand, the call adminZkClient.createTopic(topic, noOfPartitions, noOfReplication, topicConfiguration, RackAwareMode.Disabled$.MODULE$) is what throws the exception. I don't know what it is about that method that causes the exception, or whether I forgot something in my application.properties file.
Here's my code:
import java.util.Properties;

import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;
import org.apache.kafka.common.utils.Time;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;

import kafka.admin.AdminUtils;
import kafka.admin.RackAwareMode;
import kafka.utils.ZkUtils;
import kafka.zk.AdminZkClient;
import kafka.zk.KafkaZkClient;
import kafka.zookeeper.ZooKeeperClient;
import kafka.utils.*;

@SpringBootApplication
@EnableJpaAuditing
public class ChatApplication {

    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        try {
            String zookeeperHost = "127.0.0.1:2181";
            int sessionTimeOutInMs = 15 * 1000;
            int connectionTimeOutInMs = 10 * 1000;

            ZooKeeperClient zooKeeperClient = new ZooKeeperClient(zookeeperHost, sessionTimeOutInMs,
                    connectionTimeOutInMs, 2, Time.SYSTEM, "BytesInPerSec", "BytesOutPerSec");
            KafkaZkClient kafkaZkClient = new KafkaZkClient(zooKeeperClient, true, Time.SYSTEM);

            String topic = "superTopic";
            int noOfPartitions = 2;
            int noOfReplication = 1;
            Properties topicConfiguration = new Properties();

            AdminZkClient adminZkClient = new AdminZkClient(kafkaZkClient);
            adminZkClient.createTopic(topic, noOfPartitions, noOfReplication, topicConfiguration, RackAwareMode.Disabled$.MODULE$);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
    // SpringApplication.run(ChatApplication.class, args);
}
Here's the output I get:
06:50:06.796 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean
06:50:07.101 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Initializing a new session to 127.0.0.1:2181.
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=kunta
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.3
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/home/robscientist/STS-WORKSPACE/poc_chat/target/classes:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-web/2.1.3.RELEASE/spring-boot-starter-web-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter/2.1.3.RELEASE/spring-boot-starter-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-logging/2.1.3.RELEASE/spring-boot-starter-logging-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/robscientist/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/robscientist/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.11.2/log4j-to-slf4j-2.11.2.jar:/home/robscientist/.m2/repository/org/apache/logging/log4j/log4j-api/2.11.2/log4j-api-2.11.2.jar:/home/robscientist/.m2/repository/org/slf4j/jul-to-slf4j/1.7.25/jul-to-slf4j-1.7.25.jar:/home/robscientist/.m2/repository/javax/annotation/javax.annotation-api/1.3.2/javax.annotation-api-1.3.2.jar:/home/robscientist/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-json/2.1.3.RELEASE/spring-boot-starter-json-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jsr310/2.9.8/jackson-datatype-jsr310-2.9.8.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/module/jackson-module-parameter-names/2.9.8/jackson-module-parameter-names-2.9.8.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-tomcat/2.1.3.RELEASE/spring-boot-starter-tomcat-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.16/tomcat-embed-core-9.0.16.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-el/9.0.16/tomcat-embed-el-9.0.16.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-websocket/9.0.16/tomcat-embed-websocket-9.0.16.jar:/home/robscientist/.m2/repository/org/hibernate/validator/hibernate-validator/6.0.14.Final/hibernate-validator-6.0.14.Final.jar:/home/robscientist/.m2/repository/javax/validation/validation-api/2.0.1.Final/validation-api-2.0.1.Final.jar:/home/robscientist/.m2/repository/org/jboss/logging/jboss-logging/3.3.2.Final/jboss-logging-3.3.2.Final.jar:/home/robscientist/.m2/repository/com/fasterxml/classmate/1.4.0/classmate-1.4.0.jar:/home/robscientist/.m2/repository/org/springframework/spring-web/5.1.5.RELEASE/spring-web-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-webmvc/5.1.5.RELEASE/spring-webmvc-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-aop/5.1.5.RELEASE/spring-aop-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-expression/5.1.5.RELEASE/spring-expression-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/io/debezium/debezium-core/0.9.5.Final/debezium-core-0.9.5.Final.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.8/jackson-core-2.9.8.jar:/home/robscientist/.m2/repository/io/debezium/debezium-core/0.9.5.Final/debezium-core-0.9.5.Final-tests.jar:/home/robscientist/.m2/repository/org/springframework/spring-jdbc/5.1.5.RELEASE/spring-jdbc-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-beans/5.1.5.RELEASE/spring-beans-5.1.5.RELEASE.jar:/home/ro
bscientist/.m2/repository/org/springframework/spring-core/5.1.5.RELEASE/spring-core-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-jcl/5.1.5.RELEASE/spring-jcl-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-tx/5.1.5.RELEASE/spring-tx-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-websocket/2.1.3.RELEASE/spring-boot-starter-websocket-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-messaging/5.1.5.RELEASE/spring-messaging-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-websocket/5.1.5.RELEASE/spring-websocket-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-data-jpa/2.1.3.RELEASE/spring-boot-starter-data-jpa-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-aop/2.1.3.RELEASE/spring-boot-starter-aop-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/aspectj/aspectjweaver/1.9.2/aspectjweaver-1.9.2.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-jdbc/2.1.3.RELEASE/spring-boot-starter-jdbc-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/com/zaxxer/HikariCP/3.2.0/HikariCP-3.2.0.jar:/home/robscientist/.m2/repository/javax/transaction/javax.transaction-api/1.3/javax.transaction-api-1.3.jar:/home/robscientist/.m2/repository/javax/xml/bind/jaxb-api/2.3.1/jaxb-api-2.3.1.jar:/home/robscientist/.m2/repository/javax/activation/javax.activation-api/1.2.0/javax.activation-api-1.2.0.jar:/home/robscientist/.m2/repository/org/hibernate/hibernate-core/5.3.7.Final/hibernate-core-5.3.7.Final.jar:/home/robscientist/.m2/repository/javax/persistence/javax.persistence-api/2.2/javax.persistence-api-2.2.jar:/home/robscientist/.m2/repository/org/javassist/javassist/3.23.1-GA/javassist-3.23.1-GA.jar:/home/robscientist/.m2/repository/net/bytebuddy/byte-buddy/1.9.10/byte-buddy-1.9.10.jar:/home/robscientist/.m2/repository/antlr/antlr/2.7.7/antlr-2.7.7.jar:/home/robscientist/.m2/repository/org/jboss/jandex/2.0.5.Final/jandex-2.0.5.Final.jar:/home/robscientist/.m2/repository/org/dom4j/dom4j/2.1.1/dom4j-2.1.1.jar:/home/robscientist/.m2/repository/org/hibernate/common/hibernate-commons-annotations/5.0.4.Final/hibernate-commons-annotations-5.0.4.Final.jar:/home/robscientist/.m2/repository/org/springframework/data/spring-data-jpa/2.1.5.RELEASE/spring-data-jpa-2.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/data/spring-data-commons/2.1.5.RELEASE/spring-data-commons-2.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-orm/5.1.5.RELEASE/spring-orm-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-aspects/5.1.5.RELEASE/spring-aspects-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/com/h2database/h2/1.4.197/h2-1.4.197.jar:/home/robscientist/.m2/repository/org/webjars/stomp-websocket/2.3.3/stomp-websocket-2.3.3.jar:/home/robscientist/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar:/home/robscientist/.m2/repository/org/webjars/jquery/1.11.1/jquery-1.11.1.jar:/home/robscientist/.m2/repository/org/springframework/kafka/spring-kafka/2.2.4.RELEASE/spring-kafka-2.2.4.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-context/5.1.5.RELEASE/spring-context-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/retry/spring-retry/1.2.4.RELEASE/spring-retry-1.2.4.RELEASE.jar:/home/robscientist/.m2/repos
itory/org/apache/kafka/kafka-clients/2.0.1/kafka-clients-2.0.1.jar:/home/robscientist/.m2/repository/org/lz4/lz4-java/1.4.1/lz4-java-1.4.1.jar:/home/robscientist/.m2/repository/org/xerial/snappy/snappy-java/1.1.7.1/snappy-java-1.1.7.1.jar:/home/robscientist/.m2/repository/org/apache/kafka/kafka_2.12/2.2.0/kafka_2.12-2.2.0.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.0/jackson-annotations-2.9.0.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jdk8/2.9.8/jackson-datatype-jdk8-2.9.8.jar:/home/robscientist/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/robscientist/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/robscientist/.m2/repository/org/scala-lang/scala-library/2.12.8/scala-library-2.12.8.jar:/home/robscientist/.m2/repository/org/scala-lang/scala-reflect/2.12.8/scala-reflect-2.12.8.jar:/home/robscientist/.m2/repository/com/typesafe/scala-logging/scala-logging_2.12/3.9.0/scala-logging_2.12-3.9.0.jar:/home/robscientist/.m2/repository/org/slf4j/slf4j-api/1.7.25/slf4j-api-1.7.25.jar:/home/robscientist/.m2/repository/com/101tec/zkclient/0.11/zkclient-0.11.jar:/home/robscientist/.m2/repository/org/apache/zookeeper/zookeeper/3.4.13/zookeeper-3.4.13.jar:/home/robscientist/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar:/home/robscientist/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/robscientist/.m2/repository/jline/jline/0.9.94/jline-0.9.94.jar:/home/robscientist/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/robscientist/.m2/repository/io/netty/netty/3.10.6.Final/netty-3.10.6.Final.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-reactor-netty/2.1.3.RELEASE/spring-boot-starter-reactor-netty-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/io/projectreactor/netty/reactor-netty/0.8.5.RELEASE/reactor-netty-0.8.5.RELEASE.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-http/4.1.33.Final/netty-codec-http-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-common/4.1.33.Final/netty-common-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-buffer/4.1.33.Final/netty-buffer-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-transport/4.1.33.Final/netty-transport-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-resolver/4.1.33.Final/netty-resolver-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec/4.1.33.Final/netty-codec-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-http2/4.1.33.Final/netty-codec-http2-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-handler/4.1.33.Final/netty-handler-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-handler-proxy/4.1.33.Final/netty-handler-proxy-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-socks/4.1.33.Final/netty-codec-socks-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-transport-native-epoll/4.1.33.Final/netty-transport-native-epoll-4.1.33.Final-linux-x86_64.jar:/home/robscientist/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.33.Final/netty-transport-native-unix-common-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/projectreactor/reactor-core/3.2.6.RELEASE/reactor-core-3.2.6.R
ELEASE.jar:/home/robscientist/.m2/repository/org/reactivestreams/reactive-streams/1.0.2/reactive-streams-1.0.2.jar:/home/robscientist/.m2/repository/org/postgresql/postgresql/42.2.5/postgresql-42.2.5.jar:/home/robscientist/.m2/repository/org/projectlombok/lombok/1.18.6/lombok-1.18.6.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-devtools/2.0.0.RELEASE/spring-boot-devtools-2.0.0.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot/2.1.3.RELEASE/spring-boot-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-autoconfigure/2.1.3.RELEASE/spring-boot-autoconfigure-2.1.3.RELEASE.jar
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-50-generic
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=robscientist
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/robscientist
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/home/robscientist/STS-WORKSPACE/poc_chat
06:50:07.118 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=15000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#5bda8e08
06:50:07.120 [main] DEBUG org.apache.zookeeper.ClientCnxn - zookeeper.disableAutoWatchReset is false
06:50:07.129 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler.
06:50:07.131 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Waiting until connected.
06:50:07.138 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
06:50:07.144 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
06:50:07.145 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:2181
06:50:07.403 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000016739e0016, negotiated timeout = 15000
06:50:07.409 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Received event: WatchedEvent state:SyncConnected type:None path:null
06:50:07.415 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Connected.
06:50:07.469 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 1,12 replyHeader:: 1,237,0 request:: '/brokers/ids,F response:: v{'0},s{5,5,1559484903664,1559484903664,0,5,0,0,0,1,191}
06:50:07.492 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/ids/0 serverPath:/brokers/ids/0 finished:false header:: 2,4 replyHeader:: 2,237,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f6b756e74613a39303932225d2c226a6d785f706f7274223a2d312c22686f7374223a226b756e7461222c2274696d657374616d70223a2231353539353332323533363131222c22706f7274223a393039322c2276657273696f6e223a347d,s{191,191,1559532253652,1559532253652,1,0,0,72057690466942976,180,0,191}
06:50:07.690 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/topics/superTopic serverPath:/brokers/topics/superTopic finished:false header:: 3,3 replyHeader:: 3,237,-101 request:: '/brokers/topics/superTopic,F response::
Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_SSL_PRINCIPAL_MAPPING_RULES
at kafka.server.Defaults$.<init>(KafkaConfig.scala:224)
at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
at kafka.log.Defaults$.<init>(LogConfig.scala:36)
at kafka.log.Defaults$.<clinit>(LogConfig.scala)
at kafka.log.LogConfig$.<init>(LogConfig.scala:219)
at kafka.log.LogConfig$.<clinit>(LogConfig.scala)
at kafka.zk.AdminZkClient.validateTopicCreate(AdminZkClient.scala:128)
at kafka.zk.AdminZkClient.createTopicWithAssignment(AdminZkClient.scala:86)
at kafka.zk.AdminZkClient.createTopic(AdminZkClient.scala:56)
at com.predisurge.kafka.KafkaTopicCreator.createTopic(KafkaTopicCreator.java:41)
at com.predisurge.ChatApplication.main(ChatApplication.java:32)
After running the command to display my topics, I don't see the one I want to add.
IDE: STS 4,
Kafka version: 2.2.0,
ZK version: 3.5.5
I'm not sure why you get this exception (though a NoSuchFieldError at runtime usually indicates mismatched library versions on the classpath; your classpath lists kafka-clients 2.0.1 next to kafka_2.12 2.2.0). Here's a solution that works for me and just uses the KafkaAdminClient:
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;

@SpringBootApplication
@EnableJpaAuditing
public class TopicCreator {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        AdminClient kafkaAdminClient = KafkaAdminClient.create(properties);
        CreateTopicsResult result = kafkaAdminClient.createTopics(
                Stream.of("foo", "bar", "baz")
                      .map(name -> new NewTopic(name, 3, (short) 1))
                      .collect(Collectors.toList())
        );
        result.all().get();
    }
}
As the code above shows, for newer versions of Kafka you can create topics directly through the AdminClient API. Does this help?
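To verify from Java as well, here is a minimal sketch using the same AdminClient API (the bootstrap address localhost:9092 is taken from the question; the class and variable names are mine):
import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicLister {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // try-with-resources closes the client and its worker threads cleanly
        try (AdminClient client = AdminClient.create(props)) {
            // names() returns a KafkaFuture<Set<String>>; get() blocks until done
            Set<String> topics = client.listTopics().names().get();
            topics.forEach(System.out::println);
        }
    }
}
This should print the same list as bin/kafka-topics.sh --list --bootstrap-server localhost:9092.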
I am using log4j to log events in Java code. I have it set up so that each log line starts on a new line, with the timestamp, thread, log level and the class where the log call runs. So the configuration looks like this:
LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
logger = loggerContext.getLogger("com.asdf");
logger.setAdditive(true);

PatternLayoutEncoder encoder = new PatternLayoutEncoder();
encoder.setContext(loggerContext);
encoder.setPattern("%-5level %d [%thread:%M:%caller{1}]: %message%n");
encoder.start();

cucumberAppender = new CucumberAppender();
cucumberAppender.setName("cucumber-appender");
cucumberAppender.setContext(loggerContext);
cucumberAppender.setScenario(scenario);
cucumberAppender.setEncoder(encoder);
cucumberAppender.start();

logger.addAppender(cucumberAppender);
loggerContext.start();

logger().info("*********************************************");
logger().info("* Starting Scenario - {}", scenario.getName());
logger().info("*********************************************\n");
}

@After
public void showScenarioResult(Scenario scenario) throws InterruptedException {
    logger().info("**************************************************************");
    logger().info("* {} Scenario - {} ", scenario.getStatus(), scenario.getName());
    logger().info("**************************************************************\n");
    cucumberAppender.writeToScenario();
    cucumberAppender.stop();
    logger.detachAppender(cucumberAppender);
    logger.detachAndStopAllAppenders();
}
which most of the time outputs the log correctly, like so:
15:59:25.448 [main] INFO com.asdf.runner.steps.StepHooks -
********************************************* 15:59:25.449 [main] INFO com.asdf.runner.steps.StepHooks - * Starting Scenario - Check Cache 15:59:25.450 [main] INFO com.asdf.runner.steps.StepHooks -
********************************************* 15:59:25.558 [main] DEBUG org.cache2k.core.util.Log - New instance, using SLF4J logging 15:59:25.575 [main] INFO org.cache2k.core.Cache2kCoreProviderImpl - cache2k starting. version=1.0.1.Final, build=undefined, defaultImplementation=HeapCache 15:59:25.629 [main] DEBUG org.cache2k.CacheManager:default - open name=default, id=wvl973, classloaderId=6us14y
However, sometimes the next logger line is written onto the previous one, without the newline, like below:
15:59:27.353 [main] INFO com.asdf.cache.CacheService - Creating a cache for [Kafka] service with specific types.15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - **************************************************************
15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - * PASSED Scenario - Check Cache
15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - **************************************************************
As you can see, the first StepHooks line continues on the same line as the CacheService line, which is unaesthetic.
What can I change so that the log always starts on a new line, without exceptions like this?
I have this example of MapReduce [1], and I want to print info to stdout and to a log file [3]. It seems that the logs aren't printing anything. How can I make my map class print output?
I have also configured yarn-site.xml to retain logs [2]. Although the logs are retained in the /app-logs dir, the userlogs dir that contains the output of the job execution is deleted at the end of the job. How can I make MapReduce not delete files in the userlogs dir?
I am using YARN.
Thanks,
[1] Wordcount example with just the map part.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Mapper;

public class MyWordCount {

    public static class MyMap extends Mapper {
        Log log = LogFactory.getLog(MyWordCount.class);
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            System.out.println("HERRE");
            log.info("HERRRRRE");
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, one);
            }
        }

        public void run(Context context) throws IOException, InterruptedException {
            setup(context);
            try {
                while (context.nextKeyValue()) {
                    System.out.println("Key: " + context.getCurrentKey() + " Value: " + context.getCurrentValue());
                    map(context.getCurrentKey(), context.getCurrentValue(), context);
                }
            } finally {
                cleanup(context);
            }
        }

        public void cleanup(Mapper.Context context) {}
    }
}
[2] yarn-site.xml
<!-- job history -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>900000</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
</property>
[3] log output
Log Type: stderr
Log Upload Time: 24-Sep-2015 12:45:19
Log Length: 317
Java HotSpot(TM) Client VM warning: You have loaded library /home/xubuntu/Programs/hadoop-2.6.0/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Log Type: stdout
Log Upload Time: 24-Sep-2015 12:45:19
Log Length: 0
Log Type: syslog
Log Upload Time: 24-Sep-2015 12:45:19
Log Length: 2604
2015-09-24 12:45:04,569 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-09-24 12:45:05,139 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-09-24 12:45:05,412 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-09-24 12:45:05,413 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2015-09-24 12:45:05,462 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2015-09-24 12:45:05,463 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1443113036547_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier#1b5a082)
2015-09-24 12:45:05,847 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2015-09-24 12:45:06,915 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /tmp/hadoop-temp/nm-local-dir/usercache/xubuntu/appcache/application_1443113036547_0001
2015-09-24 12:45:07,604 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2015-09-24 12:45:09,402 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2015-09-24 12:45:10,187 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://hadoop-coc-1:9000/input1/b.txt:0+21
2015-09-24 12:45:10,812 INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1443113036547_0001_m_000000_0 is done. And is in the process of committing
2015-09-24 12:45:10,969 INFO [main] org.apache.hadoop.mapred.Task: Task attempt_1443113036547_0001_m_000000_0 is allowed to commit now
2015-09-24 12:45:10,993 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_1443113036547_0001_m_000000_0' to hdfs://192.168.10.110:9000/output1-1442847968/_temporary/1/task_1443113036547_0001_m_000000
2015-09-24 12:45:11,135 INFO [main] org.apache.hadoop.mapred.Task: Task 'attempt_1443113036547_0001_m_000000_0' done.
2015-09-24 12:45:11,135 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system...
2015-09-24 12:45:11,136 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped.
2015-09-24 12:45:11,136 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete.
I have found the error. It is a bug in my code.
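The answer doesn't say what the bug was, but one plausible explanation with this exact code is that map(LongWritable, Text, OutputCollector, Reporter) mixes the old mapred API into a new-API (mapreduce) Mapper, so it never overrides Mapper.map and the inherited identity map runs instead, which would explain the empty stdout. A sketch of the map part written purely against the new API:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMap extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    // @Override now compiles, proving this really replaces Mapper.map
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        System.out.println("map called with: " + value); // lands in the task's stdout log
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}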
The idea is to read an EDI file from the file system and transform it to XML. I tried the example downloaded from Smooks and it works fine. But when I run the same code (and EDI file) from a Camel Processor, I get a NullPointerException.
Code
public class MyRouteBuilder extends RouteBuilder
{
    @Override
    public void configure()
    {
        from("file://C:/Users/Owner/Desktop/BPMN").process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception
            {
                System.err.println("We just downloaded: " + exchange.getIn().getHeader("CamelFileName"));
                Locale defaultLocale = Locale.getDefault();
                Locale.setDefault(new Locale("en", "IE"));

                // Instantiate Smooks with the config...
                Smooks smooks = new Smooks("smooks-config.xml");
                // smooks.setReaderConfig(new UNEdifactReaderConfigurator("urn:org.milyn.edi.unedifact:d03b-mapping:v1.4"));
                System.err.println("Loaded smooks cfg");
                try
                {
                    // Create an exec context - no profiles....
                    ExecutionContext executionContext = smooks.createExecutionContext();
                    System.err.println("created execution context");
                    DOMResult domResult = new DOMResult();
                    // Configure the execution context to generate a report...
                    // executionContext.setEventListener(new HtmlReportGenerator("target/report/report.html"));
                    // Filter the input message to the outputWriter, using the execution context...
                    byte[] body = exchange.getIn().getBody(String.class).getBytes();
                    System.err.println("Retrieved the body " + new String(body));
                    smooks.filterSource(executionContext, new StreamSource(new ByteArrayInputStream(body)), domResult);
                    Locale.setDefault(defaultLocale);
                    System.err.println(domResult.getNode());
                    System.err.println(XmlUtil.serialize(domResult.getNode().getChildNodes(), true));
                }
                finally
                {
                    smooks.close();
                }
            }
        }).to("file:C:/ws-juno");
    }
}
Log
[ Thread-1] FakeFtpServer INFO Starting the server on port 0
[ Thread-1] FakeFtpServer INFO Actual server port is 49852
[ main] MainSupport INFO Apache Camel 2.9.0 starting
[ main] DefaultCamelContext INFO Apache Camel 2.9.0 (CamelContext: camel-1) is starting
[ main] ManagementStrategyFactory INFO JMX enabled. Using ManagedManagementStrategy.
[ main] ultManagementLifecycleStrategy INFO StatisticsLevel at All so enabling load performance statistics
[ main] AnnotationTypeConverterLoader INFO Found 3 packages with 15 #Converter classes to load
[ main] DefaultTypeConverter INFO Loaded 168 core type converters (total 168 type converters)
[ main] DefaultTypeConverter INFO Loaded additional 0 type converters (total 168 type converters) in 0.004 seconds
[ main] rFileExclusiveReadLockStrategy WARN Deleting orphaned lock file: C:\Users\Owner\Desktop\BPMN\input-message.edi.camelLock
[ main] DefaultCamelContext INFO Route: route1 started and consuming from: Endpoint[file://C:/Users/Owner/Desktop/BPMN]
[ main] DefaultCamelContext INFO Total 1 routes, of which 1 is started.
[ main] DefaultCamelContext INFO Apache Camel 2.9.0 (CamelContext: camel-1) started in 4.508 seconds
We just downloaded: input-message.edi
Loaded smooks cfg
created execution context
Retrieved the body HDR*1*0*59.97*64.92*4.95*Wed Nov 15 13:45:28 EST 2006
CUS*user1*Harry^Fletcher*SD
ORD*1*1*364*The 40-Year-Old Virgin*29.98
ORD*2*1*299*Pulp Fiction*29.99
null
[://C:/Users/Owner/Desktop/BPMN] DefaultErrorHandler ERROR Failed delivery for exchangeId: ID-Owner-PC-49853-1329098945139-0-1. Exhausted after delivery attempt: 1 caught: java.lang.NullPointerException
java.lang.NullPointerException
at com.xcg.routes.MyRouteBuilder$1.process(MyRouteBuilder.java:69)[file:/C:/ws-juno/routes/target/classes/:]
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:71)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:91)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.RedeliveryErrorHandler.processErrorHandler(RedeliveryErrorHandler.java:322)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:213)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.RouteContextProcessor.processNext(RouteContextProcessor.java:45)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.interceptor.DefaultChannel.process(DefaultChannel.java:303)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:117)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.RouteContextProcessor.processNext(RouteContextProcessor.java:45)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.UnitOfWorkProcessor.processAsync(UnitOfWorkProcessor.java:150)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.UnitOfWorkProcessor.process(UnitOfWorkProcessor.java:117)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:71)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:352)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:175)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:136)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:140)[camel-core-2.9.0.jar:2.9.0]
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:92)[camel-core-2.9.0.jar:2.9.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)[:1.6.0_26]
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(Unknown Source)[:1.6.0_26]
at java.util.concurrent.FutureTask.runAndReset(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)[:1.6.0_26]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)[:1.6.0_26]
at java.lang.Thread.run(Unknown Source)[:1.6.0_26]
Answering my own question for the benefit of others.
The problem with my posted code is that I was swallowing the exceptions (Camel, by default, logs only the outermost exception). Once I caught the exception and printed the stack trace, I found that the root cause was an incorrect mapping in the Smooks edi-message-mapping XML.
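A minimal sketch of that change (the same calls as in the processor above, with a catch block added so the real cause is printed rather than only Camel's outermost exception):
try {
    ExecutionContext executionContext = smooks.createExecutionContext();
    DOMResult domResult = new DOMResult();
    byte[] body = exchange.getIn().getBody(String.class).getBytes();
    smooks.filterSource(executionContext, new StreamSource(new ByteArrayInputStream(body)), domResult);
    System.err.println(XmlUtil.serialize(domResult.getNode().getChildNodes(), true));
} catch (Exception e) {
    // Print the full stack trace, including the root cause from Smooks,
    // instead of letting Camel log only the outermost exception.
    e.printStackTrace();
    throw e;
} finally {
    smooks.close();
}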
Also, Smooks has a website hosted on GAE (http://edi-to-xml.appspot.com/) that allows you to parse and convert an EDI message to XML.