Netty 4.0.30.Final channel write issue - java

I have written an encoder that encodes the messages I send on the wire. In my handler I call ctx.writeAndFlush(), but nothing seems to be written to the remote endpoint. These are my code snippets:
Encoder
@ChannelHandler.Sharable
public class FreeSwitchEncoder extends MessageToByteEncoder<BaseCommand> {
    /**
     * Logger property
     */
    private final Logger _log = LoggerFactory.getLogger(this.getClass());
    /**
     * Character sequence delimiting the end of each FreeSwitch message part
     */
    private final String MESSAGE_END_STRING = "\n\n";
    private final Charset _encoding = StandardCharsets.UTF_8;

    @Override
    protected void encode(ChannelHandlerContext ctx, BaseCommand msg, ByteBuf out) throws Exception {
        // Get the string representation of the BaseCommand
        String toSend = msg.toString().trim();
        // Make sure the command ends with \n\n
        if (!StringUtils.isEmpty(toSend)) {
            if (!toSend.endsWith(MESSAGE_END_STRING)) toSend = toSend + MESSAGE_END_STRING;
            _log.debug("Encoded message sent [{}]", toSend);
            // Encode with the declared charset rather than the platform default
            ByteBuf encoded = Unpooled.copiedBuffer(toSend.getBytes(_encoding));
            // encoded = encoded.capacity(encoded.readableBytes());
            out.writeBytes(encoded);
        }
    }
}
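For reference, the pipeline is wired up roughly like this (a simplified sketch, not my exact initializer; the handler order matches the classes that show up in the log below):
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

Bootstrap bootstrap = new Bootstrap();
bootstrap.group(new NioEventLoopGroup())
        .channel(NioSocketChannel.class)
        .handler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                // Inbound decoder first, then the outbound encoder, then the handler
                ch.pipeline().addLast(new FreeSwitchDecoder());
                ch.pipeline().addLast(new FreeSwitchEncoder());
                ch.pipeline().addLast(new OutboundHandler());
            }
        });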
The code that sends the message
public FreeSwitchMessage sendCommand(ChannelHandlerContext ctx,
                                     final BaseCommand command) {
    ManuelResetEvent manuelResetEvent = new ManuelResetEvent();
    syncLock.lock();
    try {
        manuelResetEvents.add(manuelResetEvent);
        _log.debug("Command sent to freeSwitch [{}]", command.toString());
        ChannelFuture future = ctx.writeAndFlush(command);
        future.addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    future.cause().printStackTrace();
                }
            }
        });
    } finally {
        syncLock.unlock();
    }
    // Block until the response is available
    return manuelResetEvent.get();
}
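The ManuelResetEvent above is just a latch-style holder whose get() blocks the calling thread until the decoder hands over the response. Condensed, it looks roughly like this (a sketch; the real class carries more state):
import java.util.concurrent.CountDownLatch;

public class ManuelResetEvent {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile FreeSwitchMessage response;

    // Called when the matching response arrives from the decoder
    public void set(FreeSwitchMessage message) {
        response = message;
        latch.countDown();
    }

    // Blocks the calling thread until set(...) has been invoked
    public FreeSwitchMessage get() {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return response;
    }
}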
I do not know what I am doing wrong. The code hangs when writing to the wire. Please assist. This is the log:
06:53:44.417 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
06:53:44.421 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 16
06:53:44.432 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
06:53:44.432 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
06:53:44.433 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
06:53:44.433 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: true
06:53:44.433 [main] DEBUG i.n.util.internal.PlatformDependent - Platform: Windows
06:53:44.434 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8
06:53:44.434 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
06:53:44.434 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
06:53:44.434 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
06:53:44.435 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: unavailable
06:53:44.435 [main] DEBUG i.n.util.internal.PlatformDependent - You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
06:53:44.435 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: C:\Users\smsgh\AppData\Local\Temp (java.io.tmpdir)
06:53:44.436 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
06:53:44.436 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
06:53:44.458 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
06:53:44.459 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
06:53:44.487 [main] INFO io.freeswitch.OutboundTest - Client connecting ..
06:53:44.517 [main] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0xf77d8a3de122380b (took 9 ms)
06:53:44.559 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: unpooled
06:53:44.559 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
06:53:44.591 [nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetectionLevel: simple
06:53:44.596 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacity.default: 262144
06:53:44.599 [nioEventLoopGroup-2-1] DEBUG i.f.codecs.FreeSwitchDecoder - read header line [Content-Type: auth/request]
06:53:44.601 [nioEventLoopGroup-2-1] DEBUG i.f.message.FreeSwitchMessage - adding header [CONTENT_TYPE] [auth/request]
06:53:44.601 [nioEventLoopGroup-2-1] DEBUG i.f.codecs.FreeSwitchDecoder - read header line []
06:53:44.602 [nioEventLoopGroup-2-1] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available
06:53:44.602 [nioEventLoopGroup-2-1] DEBUG i.n.channel.DefaultChannelPipeline - Discarded inbound message FreeSwitchMessage: contentType=[auth/request] headers=1, body=0 lines. that reached at the tail of the pipeline. Please check your pipeline configuration.
06:53:44.603 [nioEventLoopGroup-2-1] DEBUG i.f.outbound.OutboundHandler - Received message: [FreeSwitchMessage: contentType=[auth/request] headers=1, body=0 lines.]
06:53:44.607 [nioEventLoopGroup-2-1] DEBUG i.f.outbound.OutboundHandler - Auth requested, sending [auth *****]
06:53:44.683 [nioEventLoopGroup-2-1] DEBUG i.f.outbound.OutboundHandler - Command sent to freeSwitch [auth ClueCon]
06:53:44.683 [nioEventLoopGroup-2-1] DEBUG i.f.codecs.FreeSwitchEncoder - Encoded message sent [auth ClueCon
]

Related

Creating kafka topics with Java doesn't work

I'm trying to create Kafka topics using Java. But I get an Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_SSL_PRINCIPAL_MAPPING_RULES and I can't fix it.
My goal is to create a topic so that when I run my Kafka server and list the topics with bin/kafka-topics.sh --list --bootstrap-server localhost:9092, I can actually see my topic in the list.
(the command is from Kafka's official website https://kafka.apache.org/quickstart)
I looked at this question, How to create a Topic in Kafka through Java, which inspired my code, but not only does it not really help, it also seems to use deprecated classes and methods.
I tried to use what I believed to be more recent classes such as ZooKeeperClient, KafkaZkClient and AdminZkClient, but from what I understand, the call adminZkClient.createTopic(topic, noOfPartitions, noOfReplication, topicConfiguration, RackAwareMode.Disabled$.MODULE$) is what throws the exception. I don't know what it is about that method that causes the exception, or whether I forgot something in my application.properties file.
Here's my code:
import java.util.Properties;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;
import org.apache.kafka.common.utils.Time;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;
import kafka.admin.AdminUtils;
import kafka.admin.RackAwareMode;
import kafka.utils.ZkUtils;
import kafka.zk.AdminZkClient;
import kafka.zk.KafkaZkClient;
import kafka.zookeeper.ZooKeeperClient;
import kafka.utils.*;

@SpringBootApplication
@EnableJpaAuditing
public class ChatApplication {

    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        try {
            String zookeeperHost = "127.0.0.1:2181";
            int sessionTimeOutInMs = 15 * 1000;
            int connectionTimeOutInMs = 10 * 1000;
            ZooKeeperClient zooKeeperClient = new ZooKeeperClient(zookeeperHost, sessionTimeOutInMs, connectionTimeOutInMs, 2, Time.SYSTEM, "BytesInPerSec", "BytesOutPerSec");
            KafkaZkClient kafkaZkClient = new KafkaZkClient(zooKeeperClient, true, Time.SYSTEM);
            String topic = "superTopic";
            int noOfPartitions = 2;
            int noOfReplication = 1;
            Properties topicConfiguration = new Properties();
            AdminZkClient adminZkClient = new AdminZkClient(kafkaZkClient);
            adminZkClient.createTopic(topic, noOfPartitions, noOfReplication, topicConfiguration, RackAwareMode.Disabled$.MODULE$);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
    // SpringApplication.run(ChatApplication.class, args);
}
Here's the output I get:
06:50:06.796 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean
06:50:07.101 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Initializing a new session to 127.0.0.1:2181.
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=kunta
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.3
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/home/robscientist/STS-WORKSPACE/poc_chat/target/classes:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-web/2.1.3.RELEASE/spring-boot-starter-web-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter/2.1.3.RELEASE/spring-boot-starter-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-logging/2.1.3.RELEASE/spring-boot-starter-logging-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/robscientist/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/robscientist/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.11.2/log4j-to-slf4j-2.11.2.jar:/home/robscientist/.m2/repository/org/apache/logging/log4j/log4j-api/2.11.2/log4j-api-2.11.2.jar:/home/robscientist/.m2/repository/org/slf4j/jul-to-slf4j/1.7.25/jul-to-slf4j-1.7.25.jar:/home/robscientist/.m2/repository/javax/annotation/javax.annotation-api/1.3.2/javax.annotation-api-1.3.2.jar:/home/robscientist/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-json/2.1.3.RELEASE/spring-boot-starter-json-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jsr310/2.9.8/jackson-datatype-jsr310-2.9.8.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/module/jackson-module-parameter-names/2.9.8/jackson-module-parameter-names-2.9.8.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-tomcat/2.1.3.RELEASE/spring-boot-starter-tomcat-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.16/tomcat-embed-core-9.0.16.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-el/9.0.16/tomcat-embed-el-9.0.16.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-websocket/9.0.16/tomcat-embed-websocket-9.0.16.jar:/home/robscientist/.m2/repository/org/hibernate/validator/hibernate-validator/6.0.14.Final/hibernate-validator-6.0.14.Final.jar:/home/robscientist/.m2/repository/javax/validation/validation-api/2.0.1.Final/validation-api-2.0.1.Final.jar:/home/robscientist/.m2/repository/org/jboss/logging/jboss-logging/3.3.2.Final/jboss-logging-3.3.2.Final.jar:/home/robscientist/.m2/repository/com/fasterxml/classmate/1.4.0/classmate-1.4.0.jar:/home/robscientist/.m2/repository/org/springframework/spring-web/5.1.5.RELEASE/spring-web-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-webmvc/5.1.5.RELEASE/spring-webmvc-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-aop/5.1.5.RELEASE/spring-aop-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-expression/5.1.5.RELEASE/spring-expression-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/io/debezium/debezium-core/0.9.5.Final/debezium-core-0.9.5.Final.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.8/jackson-core-2.9.8.jar:/home/robscientist/.m2/repository/io/debezium/debezium-core/0.9.5.Final/debezium-core-0.9.5.Final-tests.jar:/home/robscientist/.m2/repository/org/springframework/spring-jdbc/5.1.5.RELEASE/spring-jdbc-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-beans/5.1.5.RELEASE/spring-beans-5.1.5.RELEASE.jar:/home/ro
bscientist/.m2/repository/org/springframework/spring-core/5.1.5.RELEASE/spring-core-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-jcl/5.1.5.RELEASE/spring-jcl-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-tx/5.1.5.RELEASE/spring-tx-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-websocket/2.1.3.RELEASE/spring-boot-starter-websocket-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-messaging/5.1.5.RELEASE/spring-messaging-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-websocket/5.1.5.RELEASE/spring-websocket-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-data-jpa/2.1.3.RELEASE/spring-boot-starter-data-jpa-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-aop/2.1.3.RELEASE/spring-boot-starter-aop-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/aspectj/aspectjweaver/1.9.2/aspectjweaver-1.9.2.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-jdbc/2.1.3.RELEASE/spring-boot-starter-jdbc-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/com/zaxxer/HikariCP/3.2.0/HikariCP-3.2.0.jar:/home/robscientist/.m2/repository/javax/transaction/javax.transaction-api/1.3/javax.transaction-api-1.3.jar:/home/robscientist/.m2/repository/javax/xml/bind/jaxb-api/2.3.1/jaxb-api-2.3.1.jar:/home/robscientist/.m2/repository/javax/activation/javax.activation-api/1.2.0/javax.activation-api-1.2.0.jar:/home/robscientist/.m2/repository/org/hibernate/hibernate-core/5.3.7.Final/hibernate-core-5.3.7.Final.jar:/home/robscientist/.m2/repository/javax/persistence/javax.persistence-api/2.2/javax.persistence-api-2.2.jar:/home/robscientist/.m2/repository/org/javassist/javassist/3.23.1-GA/javassist-3.23.1-GA.jar:/home/robscientist/.m2/repository/net/bytebuddy/byte-buddy/1.9.10/byte-buddy-1.9.10.jar:/home/robscientist/.m2/repository/antlr/antlr/2.7.7/antlr-2.7.7.jar:/home/robscientist/.m2/repository/org/jboss/jandex/2.0.5.Final/jandex-2.0.5.Final.jar:/home/robscientist/.m2/repository/org/dom4j/dom4j/2.1.1/dom4j-2.1.1.jar:/home/robscientist/.m2/repository/org/hibernate/common/hibernate-commons-annotations/5.0.4.Final/hibernate-commons-annotations-5.0.4.Final.jar:/home/robscientist/.m2/repository/org/springframework/data/spring-data-jpa/2.1.5.RELEASE/spring-data-jpa-2.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/data/spring-data-commons/2.1.5.RELEASE/spring-data-commons-2.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-orm/5.1.5.RELEASE/spring-orm-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-aspects/5.1.5.RELEASE/spring-aspects-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/com/h2database/h2/1.4.197/h2-1.4.197.jar:/home/robscientist/.m2/repository/org/webjars/stomp-websocket/2.3.3/stomp-websocket-2.3.3.jar:/home/robscientist/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar:/home/robscientist/.m2/repository/org/webjars/jquery/1.11.1/jquery-1.11.1.jar:/home/robscientist/.m2/repository/org/springframework/kafka/spring-kafka/2.2.4.RELEASE/spring-kafka-2.2.4.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-context/5.1.5.RELEASE/spring-context-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/retry/spring-retry/1.2.4.RELEASE/spring-retry-1.2.4.RELEASE.jar:/home/robscientist/.m2/repos
itory/org/apache/kafka/kafka-clients/2.0.1/kafka-clients-2.0.1.jar:/home/robscientist/.m2/repository/org/lz4/lz4-java/1.4.1/lz4-java-1.4.1.jar:/home/robscientist/.m2/repository/org/xerial/snappy/snappy-java/1.1.7.1/snappy-java-1.1.7.1.jar:/home/robscientist/.m2/repository/org/apache/kafka/kafka_2.12/2.2.0/kafka_2.12-2.2.0.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.0/jackson-annotations-2.9.0.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jdk8/2.9.8/jackson-datatype-jdk8-2.9.8.jar:/home/robscientist/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/robscientist/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/robscientist/.m2/repository/org/scala-lang/scala-library/2.12.8/scala-library-2.12.8.jar:/home/robscientist/.m2/repository/org/scala-lang/scala-reflect/2.12.8/scala-reflect-2.12.8.jar:/home/robscientist/.m2/repository/com/typesafe/scala-logging/scala-logging_2.12/3.9.0/scala-logging_2.12-3.9.0.jar:/home/robscientist/.m2/repository/org/slf4j/slf4j-api/1.7.25/slf4j-api-1.7.25.jar:/home/robscientist/.m2/repository/com/101tec/zkclient/0.11/zkclient-0.11.jar:/home/robscientist/.m2/repository/org/apache/zookeeper/zookeeper/3.4.13/zookeeper-3.4.13.jar:/home/robscientist/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar:/home/robscientist/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/robscientist/.m2/repository/jline/jline/0.9.94/jline-0.9.94.jar:/home/robscientist/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/robscientist/.m2/repository/io/netty/netty/3.10.6.Final/netty-3.10.6.Final.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-reactor-netty/2.1.3.RELEASE/spring-boot-starter-reactor-netty-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/io/projectreactor/netty/reactor-netty/0.8.5.RELEASE/reactor-netty-0.8.5.RELEASE.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-http/4.1.33.Final/netty-codec-http-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-common/4.1.33.Final/netty-common-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-buffer/4.1.33.Final/netty-buffer-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-transport/4.1.33.Final/netty-transport-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-resolver/4.1.33.Final/netty-resolver-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec/4.1.33.Final/netty-codec-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-http2/4.1.33.Final/netty-codec-http2-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-handler/4.1.33.Final/netty-handler-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-handler-proxy/4.1.33.Final/netty-handler-proxy-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-socks/4.1.33.Final/netty-codec-socks-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-transport-native-epoll/4.1.33.Final/netty-transport-native-epoll-4.1.33.Final-linux-x86_64.jar:/home/robscientist/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.33.Final/netty-transport-native-unix-common-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/projectreactor/reactor-core/3.2.6.RELEASE/reactor-core-3.2.6.R
ELEASE.jar:/home/robscientist/.m2/repository/org/reactivestreams/reactive-streams/1.0.2/reactive-streams-1.0.2.jar:/home/robscientist/.m2/repository/org/postgresql/postgresql/42.2.5/postgresql-42.2.5.jar:/home/robscientist/.m2/repository/org/projectlombok/lombok/1.18.6/lombok-1.18.6.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-devtools/2.0.0.RELEASE/spring-boot-devtools-2.0.0.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot/2.1.3.RELEASE/spring-boot-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-autoconfigure/2.1.3.RELEASE/spring-boot-autoconfigure-2.1.3.RELEASE.jar
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-50-generic
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=robscientist
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/robscientist
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/home/robscientist/STS-WORKSPACE/poc_chat
06:50:07.118 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=15000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#5bda8e08
06:50:07.120 [main] DEBUG org.apache.zookeeper.ClientCnxn - zookeeper.disableAutoWatchReset is false
06:50:07.129 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler.
06:50:07.131 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Waiting until connected.
06:50:07.138 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
06:50:07.144 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
06:50:07.145 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:2181
06:50:07.403 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000016739e0016, negotiated timeout = 15000
06:50:07.409 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Received event: WatchedEvent state:SyncConnected type:None path:null
06:50:07.415 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Connected.
06:50:07.469 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 1,12 replyHeader:: 1,237,0 request:: '/brokers/ids,F response:: v{'0},s{5,5,1559484903664,1559484903664,0,5,0,0,0,1,191}
06:50:07.492 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/ids/0 serverPath:/brokers/ids/0 finished:false header:: 2,4 replyHeader:: 2,237,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f6b756e74613a39303932225d2c226a6d785f706f7274223a2d312c22686f7374223a226b756e7461222c2274696d657374616d70223a2231353539353332323533363131222c22706f7274223a393039322c2276657273696f6e223a347d,s{191,191,1559532253652,1559532253652,1,0,0,72057690466942976,180,0,191}
06:50:07.690 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/topics/superTopic serverPath:/brokers/topics/superTopic finished:false header:: 3,3 replyHeader:: 3,237,-101 request:: '/brokers/topics/superTopic,F response::
Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_SSL_PRINCIPAL_MAPPING_RULES
at kafka.server.Defaults$.<init>(KafkaConfig.scala:224)
at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
at kafka.log.Defaults$.<init>(LogConfig.scala:36)
at kafka.log.Defaults$.<clinit>(LogConfig.scala)
at kafka.log.LogConfig$.<init>(LogConfig.scala:219)
at kafka.log.LogConfig$.<clinit>(LogConfig.scala)
at kafka.zk.AdminZkClient.validateTopicCreate(AdminZkClient.scala:128)
at kafka.zk.AdminZkClient.createTopicWithAssignment(AdminZkClient.scala:86)
at kafka.zk.AdminZkClient.createTopic(AdminZkClient.scala:56)
at com.predisurge.kafka.KafkaTopicCreator.createTopic(KafkaTopicCreator.java:41)
at com.predisurge.ChatApplication.main(ChatApplication.java:32)
After running the command to display my topics, I don't see the one I want to add.
IDE: STS 4,
Kafka version: 2.2.0,
ZK version: 3.5.5
Not sure why you get this exception. Here's a solution that works for me; it just uses the KafkaAdminClient:
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.ProducerConfig;

@SpringBootApplication
@EnableJpaAuditing
public class TopicCreator {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        AdminClient kafkaAdminClient = KafkaAdminClient.create(properties);
        CreateTopicsResult result = kafkaAdminClient.createTopics(
                Stream.of("foo", "bar", "baz").map(
                        name -> new NewTopic(name, 3, (short) 1)
                ).collect(Collectors.toList())
        );
        // Wait for topic creation to complete (throws if any creation failed)
        result.all().get();
    }
}
As the above code shows, for newer versions of Kafka you can create topics directly through the Kafka admin client. Does this help?
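To verify the result you can list the topic names with the same client (a quick sketch; listTopics() is part of the same admin API, import org.apache.kafka.clients.admin.ListTopicsResult):
// Confirm the topics were created before closing the client.
ListTopicsResult topics = kafkaAdminClient.listTopics();
System.out.println(topics.names().get());
kafkaAdminClient.close();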

How to fix logger randomly logging on the same line, even though it is set to start each log with a new line

I am using logback (through SLF4J) to log events in my Java code. I have it set to start each log line on a new line, with timestamp, thread, log level and the class where the log runs. The configuration, inside my Cucumber @Before hook, looks like this:
@Before
public void startScenario(Scenario scenario) { // method name illustrative; the original snippet began mid-method
    LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
    logger = loggerContext.getLogger("com.asdf");
    logger.setAdditive(true);

    PatternLayoutEncoder encoder = new PatternLayoutEncoder();
    encoder.setContext(loggerContext);
    encoder.setPattern("%-5level %d [%thread:%M:%caller{1}]: %message%n");
    encoder.start();

    cucumberAppender = new CucumberAppender();
    cucumberAppender.setName("cucumber-appender");
    cucumberAppender.setContext(loggerContext);
    cucumberAppender.setScenario(scenario);
    cucumberAppender.setEncoder(encoder);
    cucumberAppender.start();

    logger.addAppender(cucumberAppender);
    loggerContext.start();

    logger().info("*********************************************");
    logger().info("* Starting Scenario - {}", scenario.getName());
    logger().info("*********************************************\n");
}

@After
public void showScenarioResult(Scenario scenario) throws InterruptedException {
    logger().info("**************************************************************");
    logger().info("* {} Scenario - {} ", scenario.getStatus(), scenario.getName());
    logger().info("**************************************************************\n");
    cucumberAppender.writeToScenario();
    cucumberAppender.stop();
    logger.detachAppender(cucumberAppender);
    logger.detachAndStopAllAppenders();
}
This outputs the log correctly most of the time, like so:
15:59:25.448 [main] INFO com.asdf.runner.steps.StepHooks - *********************************************
15:59:25.449 [main] INFO com.asdf.runner.steps.StepHooks - * Starting Scenario - Check Cache
15:59:25.450 [main] INFO com.asdf.runner.steps.StepHooks - *********************************************
15:59:25.558 [main] DEBUG org.cache2k.core.util.Log - New instance, using SLF4J logging
15:59:25.575 [main] INFO org.cache2k.core.Cache2kCoreProviderImpl - cache2k starting. version=1.0.1.Final, build=undefined, defaultImplementation=HeapCache
15:59:25.629 [main] DEBUG org.cache2k.CacheManager:default - open name=default, id=wvl973, classloaderId=6us14y
However, sometimes the next log line is appended to the previous one, without a new line, like below:
15:59:27.353 [main] INFO com.asdf.cache.CacheService - Creating a cache for [Kafka] service with specific types.15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - **************************************************************
15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - * PASSED Scenario - Check Cache
15:59:27.354 [main] INFO com.asdf.runner.steps.StepHooks - **************************************************************
As you can see, the first StepHooks line continues on the same line as the CacheService line, which is unaesthetic.
What can I change so that the logger always starts each entry on a new line, without exceptions like this?

Citrus Example TCP send and receive fails

I'm trying to send and receive a TCP message via the Citrus framework, using this thread as a reference:
Receiving messages from tcp server with Citrus framework and Spring Integration times out
I'm using a Python message repeater that echoes the received message back: the payload arrives on the Python side and is returned, but Citrus times out. I tried all the serializers (except the SingleTerminator one, which causes a context error), but none of them solve my problem; Citrus always times out.
15:03:58,013 DEBUG t.TestContextFactory| Created new test context - using global variables: '{}'
15:03:58,024 INFO port.LoggingReporter|
15:03:58,024 INFO port.LoggingReporter|------------------------------------------------------------------------
15:03:58,024 DEBUG port.LoggingReporter| STARTING TEST sendSpringIntegrationMessageTest <com.consol.citrus.samples>
15:03:58,025 INFO port.LoggingReporter|
15:03:58,025 DEBUG citrus.TestCase| Initializing test case
15:03:58,026 DEBUG context.TestContext| Setting variable: citrus.test.name with value: 'sendSpringIntegrationMessageTest'
15:03:58,027 DEBUG context.TestContext| Setting variable: citrus.test.package with value: 'com.consol.citrus.samples'
15:03:58,028 DEBUG citrus.TestCase| Test variables:
15:03:58,028 DEBUG citrus.TestCase| citrus.test.name = sendSpringIntegrationMessageTest
15:03:58,028 DEBUG citrus.TestCase| citrus.test.package = com.consol.citrus.samples
15:03:58,029 INFO port.LoggingReporter|
15:03:58,030 DEBUG port.LoggingReporter| TEST STEP 1/2: send
15:03:58,049 DEBUG nnel.ChannelProducer| Sending message to channel: 'input'
15:03:58,055 DEBUG nnel.ChannelProducer| Message to send is:
DEFAULTMESSAGE [id: c5c61991-f567-42bd-9302-1f8e1fa16225, payload: Req][headers: {citrus_message_type=XML, citrus_message_id=c5c61991-f567-42bd-9302-1f8e1fa16225, citrus_message_timestamp=1539090238031}]
15:03:58,164 INFO nnel.ChannelProducer| Message was sent to channel: 'input'
15:03:58,165 INFO port.LoggingReporter|
15:03:58,166 DEBUG port.LoggingReporter| TEST STEP 1/2 SUCCESS
15:03:58,166 INFO port.LoggingReporter|
15:03:58,166 DEBUG port.LoggingReporter| TEST STEP 2/2: receive
15:03:58,168 DEBUG nnel.ChannelConsumer| Receiving message from: replies
15:04:03,171 INFO port.LoggingReporter|
15:04:03,172 ERROR port.LoggingReporter| TEST FAILED sendSpringIntegrationMessageTest <com.consol.citrus.samples> Nested exception is:
at com.consol.citrus.exceptions.ActionTimeoutException: Action timeout while receiving message from channel 'replies'
at com.consol.citrus.channel.ChannelConsumer.receive(ChannelConsumer.java:97)
at com.consol.citrus.messaging.AbstractSelectiveMessageConsumer.receive(AbstractSelectiveMessageConsumer.java:50)
at com.consol.citrus.actions.ReceiveMessageAction.receive(ReceiveMessageAction.java:141)
at com.consol.citrus.actions.ReceiveMessageAction.doExecute(ReceiveMessageAction.java:120)
at com.consol.citrus.actions.AbstractTestAction.execute(AbstractTestAction.java:46)
at com.consol.citrus.dsl.actions.DelegatingTestAction.doExecute(DelegatingTestAction.java:54)
at com.consol.citrus.actions.AbstractTestAction.execute(AbstractTestAction.java:46)
at com.consol.citrus.TestCase.executeAction(TestCase.java:234)
at com.consol.citrus.TestCase.doExecute(TestCase.java:153)
at com.consol.citrus.actions.AbstractTestAction.execute(AbstractTestAction.java:46)
at com.consol.citrus.Citrus.run(Citrus.java:403)
at com.consol.citrus.dsl.testng.TestNGCitrusTest.invokeTestMethod(TestNGCitrusTest.java:125)
at com.consol.citrus.dsl.testng.TestNGCitrusTestDesigner.invokeTestMethod(TestNGCitrusTestDesigner.java:73)
at com.consol.citrus.dsl.testng.TestNGCitrusTest.run(TestNGCitrusTest.java:110)
...
My context seems to be right (I'm using spring-integration-ip 5.0.8.RELEASE); there is no exception when executing the test (except with the SingleTerminator serializer):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       ... >

    <citrus:channel-endpoint id="citrusServiceInputEndpoint"
        channel-name="input" />
    <citrus:channel-endpoint id="citrusServiceRepliesEndpoint"
        channel-name="replies" />

    <int-ip:tcp-connection-factory id="client"
        type="client" host="127.0.0.1"
        port="33500" single-use="false"
        so-timeout="10000" using-nio="true"
        deserializer="javaSerializer"
        serializer="javaSerializer" />

    <bean id="javaSerializer"
        class="org.springframework.integration.ip.tcp.serializer.ByteArrayLfSerializer" />

    <int:channel id="input" />
    <int:channel id="replies">
        <int:queue />
    </int:channel>

    <int-ip:tcp-outbound-channel-adapter
        id="outboundClient" channel="input" connection-factory="client" />
    <int-ip:tcp-inbound-channel-adapter
        id="inboundClient" channel="replies" connection-factory="client" />
</beans>
I appreciate any kind of help. Thanks!
Because I'm new to Spring, here is the dependency I added on my part:
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-ip</artifactId>
    <version>5.0.8.RELEASE</version>
</dependency>
This is the debug output of Citrus:
22:47:41,071 DEBUG port.LoggingReporter| TEST STEP 1/2: send
22:47:41,085 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'citrusServiceInputEndpoint'
22:47:41,086 DEBUG nnel.ChannelProducer| Sending message to channel: 'input'
22:47:41,086 DEBUG nnel.ChannelProducer| Message to send is:
DEFAULTMESSAGE [id: 7d4f4c7a-92d0-462c-b695-c32fc7e697ae, payload: Req][headers: {citrus_message_type=XML, citrus_message_id=7d4f4c7a-92d0-462c-b695-c32fc7e697ae, citrus_message_timestamp=1539118061073}]
22:47:41,087 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'input'
22:47:41,091 DEBUG hannel.DirectChannel| preSend on channel 'input', message:
GenericMessage [payload=Req, headers={citrus_message_timestamp=1539118061073, citrus_message_type=XML, id=33295f63-948f-6bcf-289f-d5e1df8dc98b, citrus_message_id=7d4f4c7a-92d0-462c-b695-c32fc7e697ae, timestamp=1539118061091}]
22:47:41,092 DEBUG endingMessageHandler| org.springframework.integration.ip.tcp.TcpSendingMessageHandler#0 received message:
GenericMessage [payload=Req, headers={citrus_message_timestamp=1539118061073, citrus_message_type=XML, id=33295f63-948f-6bcf-289f-d5e1df8dc98b, citrus_message_id=7d4f4c7a-92d0-462c-b695-c32fc7e697ae, timestamp=1539118061091}]
22:47:41,092 DEBUG entConnectionFactory| Opening new socket connection to 127.0.0.1:33500
22:47:41,106 DEBUG ion.TcpNioConnection| New connection localhost:33500:55108:33be3b24-f5ae-4594-83e9-c7eb0f104b1f
22:47:41,110 DEBUG entConnectionFactory| client: Added new connection: localhost:33500:55108:33be3b24-f5ae-4594-83e9-c7eb0f104b1f
22:47:41,113 DEBUG endingMessageHandler| Got Connection localhost:33500:55108:33be3b24-f5ae-4594-83e9-c7eb0f104b1f
22:47:41,114 DEBUG ion.TcpNioConnection| localhost:33500:55108:33be3b24-f5ae-4594-83e9-c7eb0f104b1f writing 4
22:47:41,116 DEBUG ion.TcpNioConnection| localhost:33500:55108:33be3b24-f5ae-4594-83e9-c7eb0f104b1f Message sent GenericMessage [payload=Req, headers={citrus_message_timestamp=1539118061073, citrus_message_type=XML, id=33295f63-948f-6bcf-289f-d5e1df8dc98b, citrus_message_id=7d4f4c7a-92d0-462c-b695-c32fc7e697ae, timestamp=1539118061091}]
22:47:41,117 DEBUG channel.DirectChannel| postSend (sent=true) on channel 'input', message: GenericMessage [payload=Req, headers={citrus_message_timestamp=1539118061073, citrus_message_type=XML, id=33295f63-948f-6bcf-289f-d5e1df8dc98b, citrus_message_id=7d4f4c7a-92d0-462c-b695-c32fc7e697ae, timestamp=1539118061091}]
22:47:41,118 INFO nnel.ChannelProducer| Message was sent to channel: 'input'
22:47:41,118 INFO port.LoggingReporter|
22:47:41,119 DEBUG port.LoggingReporter| TEST STEP 1/2 SUCCESS
22:47:41,119 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'citrusServiceRepliesEndpoint'
22:47:41,120 INFO port.LoggingReporter|
22:47:41,121 DEBUG port.LoggingReporter| TEST STEP 2/2: receive
22:47:41,122 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'citrusServiceRepliesEndpoint'
22:47:41,124 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'response'
22:47:41,125 DEBUG nnel.ChannelConsumer| Receiving message from: response
22:47:46,130 INFO port.LoggingReporter|
22:47:46,131 ERROR port.LoggingReporter| TEST FAILED
sendSpringIntegrationMessageTest <com.consol.citrus.samples> Nested exception is:
com.consol.citrus.exceptions.ActionTimeoutException: Action timeout while receiving message from channel 'response'
...
I checked Wireshark and the payload ("Req") including the linefeed was returned, as can be seen in the screenshot. Maybe I should mention that I'm running this in an Ubuntu VM.
Wireshark screenshot
@CitrusTest(name = "sendSpringIntegrationMessageTest")
public void sendSpringIntegrationMessageTest() throws Exception {
    send("citrusServiceInputEndpoint").payload("Req");
    receive("citrusServiceRepliesEndpoint").payload("Req");
}
EDIT
OK, there seems to be an issue in my Python server. I changed the code a bit to print the received bytes to the console (before, I was just counting the bytes) and suddenly the output of Citrus changed.
11:40:54,924 DEBUG port.LoggingReporter| TEST STEP 2/2: receive
11:40:54,925 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'citrusServiceRepliesEndpoint'
11:40:54,925 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'response'
11:40:54,925 DEBUG nnel.ChannelConsumer| Receiving message from: response
11:40:54,928 DEBUG ion.TcpNioConnection| localhost:33500:55890:7537f5ed-f447-4b76-ba66-2ec59c7619a9 Reading...
11:40:54,929 DEBUG ion.TcpNioConnection| localhost:33500:55890:7537f5ed-f447-4b76-ba66-2ec59c7619a9 Running an assembler
11:40:54,929 DEBUG ion.TcpNioConnection| Read 4 into raw buffer
Exception in thread "pool-1-thread-3" java.lang.AbstractMethodError: org.springframework.integration.ip.tcp.connection.TcpMessageMapper.toMessage(Ljava/lang/Object;)Lorg/springframework/messaging/Message;
at org.springframework.integration.ip.tcp.connection.TcpNioConnection.convert(TcpNioConnection.java:358)
at org.springframework.integration.ip.tcp.connection.TcpNioConnection.run(TcpNioConnection.java:235)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
11:40:54,937 DEBUG ion.TcpNioConnection| localhost:33500:55890:7537f5ed-f447-4b76-ba66-2ec59c7619a9 Reading...
11:40:54,938 DEBUG ion.TcpNioConnection| Read 0 into raw buffer
11:40:59,930 INFO port.LoggingReporter|
11:40:59,931 ERROR port.LoggingReporter| TEST FAILED sendSpringIntegrationMessageTest <com.consol.citrus.samples> Nested exception is:
com.consol.citrus.exceptions.ActionTimeoutException: Action timeout while receiving message from channel 'response'
EDIT2
Because of the following post:
How to use Spring Integration 5 with Spring Boot 1.5.x
I switched from spring-integration-ip 5.0.8 to 4.3.9, and the output changed once more. Now my problem seems to have moved away from TCP issues to actual Spring know-how.
15:04:12,318 DEBUG port.LoggingReporter| TEST STEP 2/2: receive
15:04:12,319 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'citrusServiceRepliesEndpoint'
15:04:12,319 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'replies'
15:04:12,319 DEBUG nnel.ChannelConsumer| Receiving message from: replies
15:04:12,327 DEBUG ion.TcpNioConnection| localhost:33500:56004:1174f4c6-608c-44c6-aeec-33da4074e195 Reading...
15:04:12,328 DEBUG ion.TcpNioConnection| localhost:33500:56004:1174f4c6-608c-44c6-aeec-33da4074e195 Running an assembler
15:04:12,329 DEBUG ion.TcpNioConnection| Read 4 into raw buffer
15:04:12,329 DEBUG ion.TcpNioConnection| localhost:33500:56004:1174f4c6-608c-44c6-aeec-33da4074e195 Reading...
15:04:12,330 DEBUG ion.TcpNioConnection| Read 0 into raw buffer
15:04:12,332 DEBUG yteArrayLfSerializer| Available to read:4
15:04:12,333 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'messageBuilderFactory'
15:04:12,336 DEBUG channel.QueueChannel| preSend on channel 'replies', message: GenericMessage [payload=byte[3], headers={ip_tcp_remotePort=33500, ip_connectionId=localhost:33500:56004:1174f4c6-608c-44c6-aeec-33da4074e195, ip_localInetAddress=0.0.0.0/0.0.0.0, ip_address=127.0.0.1, id=9bd90a54-cc2b-d3d0-ca63-6355f53dee7c, ip_hostname=localhost, timestamp=1539176652336}]
15:04:12,339 DEBUG channel.QueueChannel| postReceive on channel 'replies', message: GenericMessage [payload=byte[3], headers={ip_tcp_remotePort=33500, ip_connectionId=localhost:33500:56004:1174f4c6-608c-44c6-aeec-33da4074e195, ip_localInetAddress=0.0.0.0/0.0.0.0, ip_address=127.0.0.1, id=9bd90a54-cc2b-d3d0-ca63-6355f53dee7c, ip_hostname=localhost, timestamp=1539176652336}]
15:04:12,340 DEBUG nnel.ChannelConsumer| Received message from: replies
15:04:12,340 DEBUG tListableBeanFactory| Returning cached instance of singleton bean 'citrusServiceRepliesEndpoint'
15:04:12,354 DEBUG channel.QueueChannel| postSend (sent=true) on channel 'replies', message: GenericMessage [payload=byte[3], headers={ip_tcp_remotePort=33500, ip_connectionId=localhost:33500:56004:1174f4c6-608c-44c6-aeec-33da4074e195, ip_localInetAddress=0.0.0.0/0.0.0.0, ip_address=127.0.0.1, id=9bd90a54-cc2b-d3d0-ca63-6355f53dee7c, ip_hostname=localhost, timestamp=1539176652336}]
15:04:12,379 INFO port.LoggingReporter|
15:04:12,381 ERROR port.LoggingReporter| TEST FAILED sendSpringIntegrationMessageTest <com.consol.citrus.samples> Nested exception is:
com.consol.citrus.exceptions.CitrusRuntimeException: Could not find proper message validator for message type 'XML', please define a capable message validator for this message type
EDIT3
It seems that after adjusting my environment a lot for testing, the one thing I was still missing was this line in my context file:
<int:object-to-string-transformer id="transformer" input-channel="replies" output-channel="response" />
Thanks for nudging me in the right direction.
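For anyone else who lands here: an alternative I came across but have not tested (a sketch, assuming the standard Citrus Java DSL and its com.consol.citrus.message.MessageType enum) would be to declare the expected message type on the receive step, so that Citrus picks its plain-text validator instead of the XML one:
// Untested alternative to the object-to-string transformer:
// validate the reply as plain text rather than XML.
receive("citrusServiceRepliesEndpoint")
        .messageType(MessageType.PLAINTEXT)
        .payload("Req");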

Quickfixj internal debug logs

I'm using QuickFIX/J version 1.5. I set the logging level to INFO in my log4j.properties file like this:
log4j.logger.quickfixj.msg.incoming=INFO
log4j.logger.quickfixj.msg.outgoing=INFO
log4j.logger.quickfixj.event=INFO
But in the application logs I still see DEBUG output from QuickFIX/J.
27-11-2014 10:20:34.172 8372 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:157) detected header: pos=0,lim=1695,rem=1695,offset=0,state=1
27-11-2014 10:20:34.172 8372 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:176) body length = 1671: pos=0,lim=1695,rem=1695,offset=17,state=3
27-11-2014 10:20:34.172 8372 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:200) message body found: pos=0,lim=1695,rem=1695,offset=1688,state=4
27-11-2014 10:20:34.172 8372 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:207) found checksum: pos=0,lim=1695,rem=1695,offset=1688,state=4
27-11-2014 10:20:34.173 8373 [SocketConnectorIoProcessor-0.0] DEBUG (FIXMessageDecoder.java:226) parsed message: pos=1695,lim=1695,rem=0,offset=1695,state=4
Why does it ignore the level setting in the log4j.properties?
Thanks
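One thing I noticed while digging: the DEBUG lines come from the decoder's class logger (the log points at FIXMessageDecoder.java, which lives under the quickfix package), not from the quickfixj.msg.* event categories configured above, so presumably that package logger needs its level raised as well. A sketch of the programmatic equivalent using the log4j 1.x API:
// Hypothetical fix, equivalent to adding "log4j.logger.quickfix=INFO"
// to log4j.properties: raise the level for class loggers under quickfix.
org.apache.log4j.Logger.getLogger("quickfix").setLevel(org.apache.log4j.Level.INFO);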

algebraic error when running "aggregate" function on dataset

I'm learning Hadoop/Pig/Hive by running through the tutorials on hortonworks.com.
I have tried to find a link to the tutorial, but unfortunately it only ships with the ISO image that they provide to you; it's not actually hosted on their website.
batting = load 'Batting.csv' using PigStorage(',');
runs = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
grp_data = GROUP runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp,MAX(runs.runs) as max_runs;
join_max_run = JOIN max_runs by ($0, max_runs), runs by (year,runs);
join_data = FOREACH join_max_run GENERATE $0 as year, $2 as playerID, $1 as runs;
dump join_data;
I've copied their code exactly as it was stated in the tutorial and I'm getting this output:
2013-06-14 14:34:37,969 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1.1.3.0.0-107 (rexported) compiled May 20 2013, 03:04:35
2013-06-14 14:34:37,977 [main] INFO org.apache.pig.Main - Logging error messages to: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0020/attempt_201306140401_0020_m_000000_0/work/pig_1371245677965.log
2013-06-14 14:34:38,412 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /usr/lib/hadoop/.pigbootup not found
2013-06-14 14:34:38,598 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox:8020
2013-06-14 14:34:38,998 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: sandbox:50300
2013-06-14 14:34:40,819 [main] WARN org.apache.pig.PigServer - Encountered Warning IMPLICIT_CAST_TO_DOUBLE 1 time(s).
2013-06-14 14:34:40,827 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: HASH_JOIN,GROUP_BY
2013-06-14 14:34:41,115 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2013-06-14 14:34:41,160 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.CombinerOptimizer - Choosing to move algebraic foreach to combiner
2013-06-14 14:34:41,201 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler$LastInputStreamingOptimizer - Rewrite: POPackage->POForEach to POJoinPackage
2013-06-14 14:34:41,213 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 3
2013-06-14 14:34:41,213 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 map-reduce splittees.
2013-06-14 14:34:41,214 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 out of total 3 MR operators.
2013-06-14 14:34:41,214 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 2
2013-06-14 14:34:41,488 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2013-06-14 14:34:41,551 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2013-06-14 14:34:41,555 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2013-06-14 14:34:41,559 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=6398990
2013-06-14 14:34:41,559 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2013-06-14 14:34:44,244 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job5371236206169131677.jar
2013-06-14 14:34:49,495 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job5371236206169131677.jar created
2013-06-14 14:34:49,517 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up multi store job
2013-06-14 14:34:49,529 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2013-06-14 14:34:49,530 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2013-06-14 14:34:49,530 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2013-06-14 14:34:49,755 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2013-06-14 14:34:50,144 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2013-06-14 14:34:50,145 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2013-06-14 14:34:50,256 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2013-06-14 14:34:50,316 [JobControl] INFO com.hadoop.compression.lzo.GPLNativeCodeLoader - Loaded native gpl library
2013-06-14 14:34:50,444 [JobControl] INFO com.hadoop.compression.lzo.LzoCodec - Successfully loaded & initialized native-lzo library [hadoop-lzo rev cf4e7cbf8ed0f0622504d008101c2729dc0c9ff3]
2013-06-14 14:34:50,665 [JobControl] WARN org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library is available
2013-06-14 14:34:50,666 [JobControl] INFO org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
2013-06-14 14:34:50,666 [JobControl] INFO org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library loaded
2013-06-14 14:34:50,680 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_201306140401_0021
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases batting,grp_data,max_runs,runs
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: batting[1,10],runs[2,7],max_runs[4,11],grp_data[3,11] C: max_runs[4,11],grp_data[3,11] R: max_runs[4,11]
2013-06-14 14:34:52,796 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://sandbox:50030/jobdetails.jsp?jobid=job_201306140401_0021
2013-06-14 14:36:01,993 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
2013-06-14 14:36:04,767 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2013-06-14 14:36:04,768 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201306140401_0021 has failed! Stop running all dependent jobs
2013-06-14 14:36:04,768 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-06-14 14:36:05,029 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2106: Error executing an algebraic function
2013-06-14 14:36:05,030 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2013-06-14 14:36:05,042 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
1.2.0.1.3.0.0-107 0.11.1.1.3.0.0-107 mapred 2013-06-14 14:34:41 2013-06-14 14:36:05 HASH_JOIN,GROUP_BY
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_201306140401_0021 batting,grp_data,max_runs,runs MULTI_QUERY,COMBINER Message: Job failed! Error - # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201306140401_0021_m_000000
Input(s):
Failed to read data from "hdfs://sandbox:8020/user/hue/batting.csv"
Output(s):
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201306140401_0021 -> null,
null
2013-06-14 14:36:05,042 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2013-06-14 14:36:05,043 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias join_data
Details at logfile: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0020/attempt_201306140401_0020_m_000000_0/work/pig_1371245677965.log
When I switch MAX(runs.runs) to avg(runs.runs), I get a completely different issue:
2013-06-14 14:38:25,694 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1.1.3.0.0-107 (rexported) compiled May 20 2013, 03:04:35
2013-06-14 14:38:25,695 [main] INFO org.apache.pig.Main - Logging error messages to: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0022/attempt_201306140401_0022_m_000000_0/work/pig_1371245905690.log
2013-06-14 14:38:26,198 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /usr/lib/hadoop/.pigbootup not found
2013-06-14 14:38:26,438 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox:8020
2013-06-14 14:38:26,824 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: sandbox:50300
2013-06-14 14:38:28,238 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve avg using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /hadoop/mapred/taskTracker/hue/jobcache/job_201306140401_0022/attempt_201306140401_0022_m_000000_0/work/pig_1371245905690.log
Anybody know what the issue might be?
I am sure a lot of people have figured this out already. I combined Eugene's solution with the original code from Hortonworks so that we get the exact output specified in the tutorial.
The following code works and produces the exact output specified in the tutorial:
batting = LOAD 'Batting.csv' using PigStorage(',');
runs_raw = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
runs = FILTER runs_raw BY runs > 0;
grp_data = group runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp, MAX(runs.runs) as max_runs;
join_max_run = JOIN max_runs by ($0, max_runs), runs by (year,runs);
join_data = FOREACH join_max_run GENERATE $0 as year, $2 as playerID, $1 as runs;
dump join_data;
Note: the line "runs = FILTER runs_raw BY runs > 0;" is an addition to what Hortonworks provided. Thanks to Eugene for sharing the working code, which I used to modify the original Hortonworks code and make it work.
UDFs are case-sensitive, so to answer at least the second part of your question: you'll need to use AVG(runs.runs) instead of avg(runs.runs).
It's likely that once you correct your syntax you'll get the original error you reported...
I am having the exact same issue with the exact same log output, but this solution doesn't work for me, because I believe changing MAX to AVG here defeats the whole purpose of this hortonworks.com tutorial, which was to get the MAX runs by playerID for each year.
UPDATE
Finally I got it resolved: you have to either remove the first line of Batting.csv (the column names) or edit your Pig Latin code like this:
batting = LOAD 'Batting.csv' using PigStorage(',');
runs_raw = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;
runs = FILTER runs_raw BY runs > 0;
grp_data = group runs by (year);
max_runs = FOREACH grp_data GENERATE group as grp, MAX(runs.runs) as max_runs;
dump max_runs;
After that you should be able to complete the tutorial correctly and get the proper result.
It also looks like this is due to a "bug" in the older version of Pig which was used in the tutorial.
Please specify appropriate data types for playerID, year & runs, like below:
runs = FOREACH batting GENERATE $0 as playerID:int, $1 as year:chararray, $8 as runs:int;
Now it should work.
