Following up on my previous question, I am still trying to figure out what the issue with my code is.
I have the most basic topic possible: keys and values are of type Long, and this is my producer code:
public class DemoProducer {
public static void main(String... args) {
Producer<Long, Long> producer = new KafkaProducer<>(createProperties());
LongStream.range(1, 100)
.forEach(
i ->
LongStream.range(100, 115)
.forEach(j -> producer.send(new ProducerRecord<>("test", i, j))));
producer.close();
}
private static final Properties createProperties() {
final Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
props.put(ProducerConfig.ACKS_CONFIG, "all");
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
return props;
}
}
I'd like to group records by key and collect the values into an ArrayList using the Kafka Streams API.
This is my Streams app that is supposed to do the transformation and write the results to a new topic, test-aggregated:
public class DemoStreams {
public static void main(String... args) {
final Serde<Long> longSerde = Serdes.Long();
KStreamBuilder builder = new KStreamBuilder();
builder
.stream(longSerde, longSerde, "test")
.groupByKey(longSerde, longSerde)
.aggregate(
ArrayList::new,
(subscriberId, reportId, queue) -> {
queue.add(reportId);
return queue;
},
new ArrayListSerde<>(longSerde))
.to(longSerde, new ArrayListSerde<>(longSerde), "test-aggregated");
final KafkaStreams streams = new KafkaStreams(builder, createProperties());
streams.cleanUp();
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
}
private static Properties createProperties() {
final Properties properties = new Properties();
String longSerdes = Serdes.Long().getClass().getName();
properties.put(StreamsConfig.APPLICATION_ID_CONFIG, "aggregation-app");
properties.put(StreamsConfig.CLIENT_ID_CONFIG, "aggregation-app-client");
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, longSerdes);
properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, ArrayListSerde.class);
properties.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10 * 1000);
properties.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
return properties;
}
}
I have implemented my Serde as follows:
ArrayListSerde
public class ArrayListSerde<T> implements Serde<ArrayList<T>> {
private final Serde<ArrayList<T>> inner;
public ArrayListSerde(Serde<T> serde) {
inner =
Serdes.serdeFrom(
new ArrayListSerializer<>(serde.serializer()),
new ArrayListDeserializer<>(serde.deserializer()));
}
@Override
public Serializer<ArrayList<T>> serializer() {
return inner.serializer();
}
@Override
public Deserializer<ArrayList<T>> deserializer() {
return inner.deserializer();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
inner.serializer().configure(configs, isKey);
inner.deserializer().configure(configs, isKey);
}
@Override
public void close() {
inner.serializer().close();
inner.deserializer().close();
}
}
ArrayListSerializer
public class ArrayListSerializer<T> implements Serializer<ArrayList<T>> {
private Serializer<T> inner;
public ArrayListSerializer(Serializer<T> inner) {
this.inner = inner;
}
// Default constructor needed by Kafka
public ArrayListSerializer() {}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
// do nothing
}
@Override
public byte[] serialize(String topic, ArrayList<T> queue) {
final int size = queue.size();
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
final DataOutputStream dos = new DataOutputStream(baos);
final Iterator<T> iterator = queue.iterator();
try {
dos.writeInt(size);
while (iterator.hasNext()) {
final byte[] bytes = inner.serialize(topic, iterator.next());
dos.writeInt(bytes.length);
dos.write(bytes);
}
} catch (IOException e) {
throw new RuntimeException("Unable to serialize ArrayList", e);
}
return baos.toByteArray();
}
@Override
public void close() {
inner.close();
}
}
ArrayListDeserializer
public class ArrayListDeserializer<T> implements Deserializer<ArrayList<T>> {
private final Deserializer<T> valueDeserializer;
public ArrayListDeserializer(final Deserializer<T> valueDeserializer) {
this.valueDeserializer = valueDeserializer;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
// do nothing
}
@Override
public ArrayList<T> deserialize(String topic, byte[] bytes) {
if (bytes == null || bytes.length == 0) {
return null;
}
final ArrayList<T> arrayList = new ArrayList<>();
final DataInputStream dataInputStream = new DataInputStream(new ByteArrayInputStream(bytes));
try {
final int records = dataInputStream.readInt();
for (int i = 0; i < records; i++) {
final byte[] valueBytes = new byte[dataInputStream.readInt()];
dataInputStream.read(valueBytes);
arrayList.add(valueDeserializer.deserialize(topic, valueBytes));
}
} catch (IOException e) {
throw new RuntimeException("Unable to deserialize ArrayList", e);
}
return arrayList;
}
@Override
public void close() {
// do nothing
}
}
However, I end up getting this exception:
Exception in thread "permission-agg4-client-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: stream-thread [aggregation-app-client-StreamThread-1] Failed to rebalance.
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:543)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:490)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
Caused by: org.apache.kafka.streams.errors.StreamsException: Failed to configure value serde class utils.ArrayListSerde
at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:770)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init>(AbstractProcessorContext.java:59)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init>(ProcessorContextImpl.java:40)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:138)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:1078)
at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:255)
at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:245)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:1147)
at org.apache.kafka.streams.processor.internals.StreamThread.access$800(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:184)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:265)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:367)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:536)
... 3 more
Caused by: org.apache.kafka.common.KafkaException: Could not instantiate class utils.ArrayListSerde Does it have a public no-argument constructor?
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:286)
at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstance(AbstractConfig.java:246)
at org.apache.kafka.streams.StreamsConfig.defaultValueSerde(StreamsConfig.java:764)
... 19 more
Caused by: java.lang.InstantiationException: utils.ArrayListSerde
at java.lang.Class.newInstance(Class.java:427)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:282)
... 21 more
Caused by: java.lang.NoSuchMethodException: utils.ArrayListSerde.<init>()
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.newInstance(Class.java:412)
... 22 more
I was trying to implement the Serde based on the PriorityQueue example found in Confluent's GitHub repository: https://github.com/confluentinc/kafka-streams-examples/tree/3.3.0-post/src/main/java/io/confluent/examples/streams/utils
As the error indicates, all Serdes need to have a no-argument constructor:
Caused by: org.apache.kafka.common.KafkaException: Could not instantiate class utils.ArrayListSerde Does it have a public no-argument constructor?
Your class ArrayListSerde only has this constructor:
public ArrayListSerde(Serde<T> serde) { ... }
Thus, we get this error.
Compare ArrayListSerializer, which does provide one:
// Default constructor needed by Kafka
public ArrayListSerializer() {}
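One way to fix it is to add a public no-argument constructor to ArrayListSerde as well, so Kafka can instantiate it reflectively when it is configured by class name. A minimal sketch, assuming the aggregated values are always Long (the inner serde is hardcoded here purely for illustration):
public class ArrayListSerde<T> implements Serde<ArrayList<T>> {
    private final Serde<ArrayList<T>> inner;
    // Default constructor needed by Kafka when the Serde is instantiated by class name.
    // The inner serde is fixed to Long only for this example; adapt it to your value type.
    @SuppressWarnings("unchecked")
    public ArrayListSerde() {
        this((Serde<T>) Serdes.Long());
    }
    public ArrayListSerde(Serde<T> serde) {
        inner =
            Serdes.serdeFrom(
                new ArrayListSerializer<>(serde.serializer()),
                new ArrayListDeserializer<>(serde.deserializer()));
    }
    // serializer(), deserializer(), configure() and close() stay exactly as shown above
}
Alternatively, leave the class as it is, drop the DEFAULT_VALUE_SERDE_CLASS_CONFIG entry from the Streams config, and keep passing new ArrayListSerde<>(longSerde) explicitly to aggregate() and to(), as the topology already does.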
Update:
A standard ListSerde implementation is work in progress and should be included in a future release, making a custom List serde obsolete: https://issues.apache.org/jira/browse/KAFKA-8326
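Once that change ships, the custom classes above can be replaced with the built-in list serde. A sketch of the intended usage, assuming a Kafka version that already includes KAFKA-8326 (the method below is the one introduced there):
// One serde instance for the aggregated List<Long> values:
final Serde<List<Long>> aggregateSerde = Serdes.ListSerde(ArrayList.class, Serdes.Long());
// ...then use aggregateSerde wherever new ArrayListSerde<>(longSerde) is used today.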
Related
I have to write a simple "Word Count" topology in Java and Storm. In particular, I have an external data source generating comma-separated (CSV) strings like
Daniel, 0.5654, 144543, user, 899898, Comment,,,
These strings are inserted into a RabbitMQ queue called "input". This data source works well, and I can see the strings in the queue.
Now, I have modified the classic topology by adding the RabbitMQSpout. The goal is to do a word count on the first field of every CSV line and publish the results to a new queue called "output". The problem is that I cannot see any tuples inside the new queue, even though the topology was submitted and is RUNNING.
So, summing up:
external data source puts items into the input queue
RabbitMQSpout takes items from the input queue and inserts them into the topology
the classic word-count topology is performed
the last bolt puts results into the output queue
Problem:
I can see items inside the input queue, but nothing in the output queue, even though I use the same method to send items to the queue in both the external data source (where it works) and the RabbitMQExporter (where it does not work).
Some code is shown below.
RabbitMQSpout
public class RabbitMQSpout extends BaseRichSpout {
public static final String DATA = "data";
private SpoutOutputCollector _collector;
private RabbitMQManager rabbitMQManager;
@Override
public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
_collector = _collector;
rabbitMQManager = new RabbitMQManager("localhost", "rabbitmq", "rabbitmq", "test");
}
@Override
public void nextTuple() {
Utils.sleep(1000);
String data = rabbitMQManager.receive("input");
if (data != null) {
_collector.emit(new Values(data));
}
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
outputFieldsDeclarer.declare(new Fields(DATA));
}
}
SplitBolt
public class SplitBolt extends BaseRichBolt {
private OutputCollector _collector;
public SplitSentenceBolt() { }
@Override
public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
this._collector = collector;
this.SPACE = Pattern.compile(",");
}
@Override
public void execute(Tuple input) {
String sentence = input.getStringByField(RabbitMQSpout.DATA);
String[] words = sentence.split(",");
if (words.length > 0)
_collector.emit(new Values(words[0]));
}
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("word"));
}
@Override
public Map<String, Object> getComponentConfiguration() {
return null;
}
}
WordCountBolt
public class WordCountBolt extends BaseBasicBolt {
Map<String, Integer> counts = new HashMap<String, Integer>();
@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
String word = tuple.getString(0);
Integer count = counts.get(word);
if (count == null)
count = 0;
count++;
counts.put(word, count);
System.out.println(count);
collector.emit(new Values(word, count));
}
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("word", "count"));
}
}
RabbitMQExporterBolt
public class RabbitMQExporterBolt extends BaseRichBolt {
// NOTE: the class declaration and fields were missing from the pasted snippet;
// the declarations below are assumed from how they are used further down.
private String rabbitMqHost;
private String rabbitMqUsername;
private String rabbitMqPassword;
private String defaultQueue;
private OutputCollector collector;
private RabbitMQManager rabbitmq;
public RabbitMQExporterBolt(String rabbitMqHost, String rabbitMqUsername, String rabbitMqPassword,
String defaultQueue) {
super();
this.rabbitMqHost = rabbitMqHost;
this.rabbitMqUsername = rabbitMqUsername;
this.rabbitMqPassword = rabbitMqPassword;
this.defaultQueue = defaultQueue;
}
@Override
public void prepare(@SuppressWarnings("rawtypes") Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
this.collector=outputCollector;
this.rabbitmq = new RabbitMQManager(rabbitMqHost, rabbitMqUsername, rabbitMqPassword, defaultQueue);
}
@Override
public void execute(Tuple tuple) {
String word = tuple.getString(0);
Integer count = tuple.getInteger(1);
String output = word + " " + count;
rabbitmq.send(output);
}
@Override
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
outputFieldsDeclarer.declare(new Fields("word"));
}
}
Topology
public class WordCountTopology {
private static final String RABBITMQ_HOST = "rabbitmq";
private static final String RABBITMQ_USER = "rabbitmq";
private static final String RABBITMQ_PASS = "rabbitmq";
private static final String RABBITMQ_QUEUE = "output";
public static void main(String[] args) throws Exception {
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new RabbitMQSpout(), 1);
builder.setBolt("split", new SplitSentenceBolt(), 1)
.shuffleGrouping("spout");
builder.setBolt("count", new WordCountBolt(), 1)
.fieldsGrouping("split", new Fields("word"));
Config conf = new Config();
conf.setDebug(true);
if (args != null && args.length > 0) {
builder.setBolt("exporter",
new RabbitMQExporterBolt(
RABBITMQ_HOST, RABBITMQ_USER,
RABBITMQ_PASS, RABBITMQ_QUEUE ),
1)
.shuffleGrouping("count");
conf.setNumWorkers(3);
StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
} else {
conf.setMaxTaskParallelism(3);
LocalCluster cluster = new LocalCluster();
cluster.submitTopology("word-count", conf, builder.createTopology());
Thread.sleep(10000);
cluster.shutdown();
}
}
}
RabbitMQManager
public class RabbitMQManager {
private String host;
private String username;
private String password;
private ConnectionFactory factory;
private Connection connection;
private String defaultQueue;
public RabbitMQManager(String host, String username, String password, String queue) {
super();
this.host = host;
this.username = username;
this.password = password;
this.factory = null;
this.connection = null;
this.defaultQueue = queue;
this.initialize();
this.initializeQueue(defaultQueue);
}
private void initializeQueue(String queue){
ConnectionFactory factory = new ConnectionFactory();
factory.setHost(host);
factory.setUsername(username);
factory.setPassword(password);
Connection connection;
try {
connection = factory.newConnection();
Channel channel = connection.createChannel();
boolean durable = false;
boolean exclusive = false;
boolean autoDelete = false;
channel.queueDeclare(queue, durable, exclusive, autoDelete, null);
channel.close();
connection.close();
} catch (IOException | TimeoutException e) {
e.printStackTrace();
}
}
private void initialize(){
factory = new ConnectionFactory();
factory.setHost(host);
factory.setUsername(username);
factory.setPassword(password);
try {
connection = factory.newConnection();
} catch (IOException | TimeoutException e) {
e.printStackTrace();
}
}
public void terminate(){
if (connection != null && connection.isOpen()){
try {
connection.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
private boolean reopenConnectionIfNeeded(){
try {
if (connection == null){
connection = factory.newConnection();
return true;
}
if (!connection.isOpen()){
connection = factory.newConnection();
}
} catch (IOException | TimeoutException e) {
e.printStackTrace();
return false;
}
return true;
}
public boolean send(String message){
return this.send(defaultQueue, message);
}
public boolean send(String queue, String message){
try {
reopenConnectionIfNeeded();
Channel channel = connection.createChannel();
channel.basicPublish("", queue, null, message.getBytes());
channel.close();
return true;
} catch (IOException | TimeoutException e) {
e.printStackTrace();
}
return false;
}
public String receive(String queue) {
try {
reopenConnectionIfNeeded();
Channel channel = connection.createChannel();
Consumer consumer = new DefaultConsumer(channel);
return channel.basicConsume(queue, true, consumer);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}
}
I am new to Micronaut. I am trying to port a project to Micronaut (v1.1.1) and I have found a problem with Redis.
I am just trying to save a simple POJO in Redis, but when I try to "save" it, the following error is raised:
io.lettuce.core.RedisException: io.netty.handler.codec.EncoderException: Cannot encode command. Please close the connection as the connection state may be out of sync.
The code is very simple (HERE you can find a complete test):
class DummyTest {
@Test
public void testIssue() throws Exception {
final Date now = Date.from(Instant.now());
CatalogContent expectedContentOne = CatalogContent.builder()
.contentId(1)
.status(ContentStatus.AVAILABLE)
.title("uno")
.streamId(1)
.available(now)
.tags(Set.of("tag1", "tag2"))
.build();
repository.save(expectedContentOne);
}
}
/.../
class CatalogContentRepository {
private StatefulRedisConnection<String, CatalogContent> connection;
public CatalogContentRepository(StatefulRedisConnection<String, CatalogContent> connection) {
this.connection = connection;
}
public void save(CatalogContent content) {
RedisCommands<String, CatalogContent> redisApi = connection.sync();
redisApi.set(String.valueOf(content.getContentId()),content); //Error here!
}
}
Any idea will be welcomed.
Thanks in advance.
For the record, I will answer my own question:
Right now (2019-05-14) Micronaut only generates a StatefulRedisConnection<String, String> with a hardcoded UTF-8 String codec.
To change this you have to replace the DefaultRedisClientFactory and define a method returning the StatefulRedisConnection you need, with your preferred codec.
In my case:
@Requires(beans = DefaultRedisConfiguration.class)
@Singleton
@Factory
@Replaces(factory = DefaultRedisClientFactory.class)
public class RedisClientFactory extends AbstractRedisClientFactory {
@Bean(preDestroy = "shutdown")
@Singleton
@Primary
@Override
public RedisClient redisClient(@Primary AbstractRedisConfiguration config) {
return super.redisClient(config);
}
@Bean(preDestroy = "close")
@Singleton
@Primary
public StatefulRedisConnection<String, Object> myRedisConnection(@Primary RedisClient redisClient) {
return redisClient.connect(new SerializedObjectCodec());
}
@Bean(preDestroy = "close")
@Singleton
@Primary
@Override
public StatefulRedisConnection<String, String> redisConnection(@Primary RedisClient redisClient) {
throw new RuntimeException("The String-codec connection is not supported; use the Object-codec connection instead");
}
@Override
@Bean(preDestroy = "close")
@Singleton
public StatefulRedisPubSubConnection<String, String> redisPubSubConnection(@Primary RedisClient redisClient) {
return super.redisPubSubConnection(redisClient);
}
}
The codec has been taken from the Redis Lettuce wiki:
public class SerializedObjectCodec implements RedisCodec<String, Object> {
private Charset charset = Charset.forName("UTF-8");
@Override
public String decodeKey(ByteBuffer bytes) {
return charset.decode(bytes).toString();
}
@Override
public Object decodeValue(ByteBuffer bytes) {
try {
byte[] array = new byte[bytes.remaining()];
bytes.get(array);
ObjectInputStream is = new ObjectInputStream(new ByteArrayInputStream(array));
return is.readObject();
} catch (Exception e) {
return null;
}
}
@Override
public ByteBuffer encodeKey(String key) {
return charset.encode(key);
}
@Override
public ByteBuffer encodeValue(Object value) {
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
ObjectOutputStream os = new ObjectOutputStream(bytes);
os.writeObject(value);
return ByteBuffer.wrap(bytes.toByteArray());
} catch (IOException e) {
return null;
}
}
}
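With this factory in place, the repository injects the StatefulRedisConnection<String, Object> bean instead of the String-typed one. A minimal sketch of how the repository from the test changes, assuming CatalogContent implements Serializable (which the JDK-serialization codec above requires):
@Singleton
public class CatalogContentRepository {
    private final StatefulRedisConnection<String, Object> connection;
    public CatalogContentRepository(StatefulRedisConnection<String, Object> connection) {
        this.connection = connection;
    }
    public void save(CatalogContent content) {
        // The SerializedObjectCodec writes the value as a JDK-serialized byte stream.
        connection.sync().set(String.valueOf(content.getContentId()), content);
    }
}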
On the server, when receiving a login request from each client, I cache the data
in a ConcurrentHashMap<String, String> inside the channelRead method. Can I read the values of that ConcurrentHashMap<String, String> from anywhere by defining a public method such as public synchronized static Map<String, String> getClientMap()? I also want to define a scheduled task that clears the data cached in that ConcurrentHashMap<String, String>; how can I do that in Netty? I have tried many times but failed.
This is my code in ChannelHandler:
public class LoginAuthRespHandler extends ChannelInboundHandlerAdapter {
private static final Logger LOGGER = LoggerFactory.getLogger(LoginAuthRespHandler.class);
private static final Map<String, String> nodeCheck = new ConcurrentHashMap<String, String>();
private String[] whiteList = { "127.0.0.1", "192.168.56.1" };
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
AlarmMessage message = (AlarmMessage) msg;
// check the message's type
if (message.getHeader() != null && message.getHeader().getType() == MessageType.LOGIN_REQ.value()) {
String nodeIndex = ctx.channel().remoteAddress().toString();
AlarmMessage loginResp = null;
if (nodeCheck.containsKey(nodeIndex)) {
loginResp = buildResponse(ResultType.FAIL);
} else {
InetSocketAddress address = (InetSocketAddress) ctx.channel().remoteAddress();
String ip = address.getAddress().getHostAddress();
boolean isOK = false;
for (String WIP : whiteList) {
if (WIP.equals(ip)) {
isOK = true;
break;
}
}
loginResp = isOK ? buildResponse(ResultType.SUCCESS) : buildResponse(ResultType.FAIL);
if (isOK)
//add a value
nodeCheck.put(nodeIndex, message.getBody().toString());
System.out.println(nodeCheck.get(nodeIndex));
}
ctx.writeAndFlush(loginResp);
} else {
ctx.fireChannelRead(msg);
}
}
private AlarmMessage buildResponse(ResultType result) {
AlarmMessage message = new AlarmMessage();
Header header = new Header();
header.setType(MessageType.LOGIN_RESP.value());
message.setHeader(header);
message.setBody(result.value());
return message;
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
String nodeIndex = ctx.channel().remoteAddress().toString();
ctx.close();
if(nodeCheck.containsKey(nodeIndex)){
nodeCheck.remove(nodeIndex);
}
}
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
cause.printStackTrace();
ctx.close();
ctx.fireExceptionCaught(cause);
}
public static Map<String, String> getNodeCheck() {
return nodeCheck;
}
}
And this is my code in the main thread:
ScheduledFuture<?> sf = executor.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
HashSet<String> clients = new HashSet<>();
Map<String,String> map = LoginAuthRespHandler.getNodeCheck();
System.out.println(map.size());
for (String key:map.keySet()) {
clients.add(map.get(key));
}
//update data
try{
MySQLDB.updateClientStatus(clients);
}catch (Exception e){
e.printStackTrace();
}
//clear map
map.clear();
clients.clear();
}
},10,10,TimeUnit.SECONDS);
}
I have some issues developing a Kafka source connector using the Kafka Connect API.
I get data from a REST API using Retrofit and GSON and then try to insert it into Kafka.
Here is my source task class:
public class BitfinexSourceTask extends SourceTask implements BitfinexTickerGetter.OnTickerReadyListener {
private static final String DATETIME_FIELD = "datetime";
private BitfinexService service;
private ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
private BlockingQueue<SourceRecord> queue = null;
private BitfinexTickerGetter tickerGetter;
private final Runnable runnable = new Runnable() {
@Override
public void run() {
try {
tickerGetter.get();
} catch (IOException e) {
e.printStackTrace();
}
}
};
private ScheduledFuture<?> scheduledFuture;
@Override
public String version() {
return VersionUtil.getVersion();
}
@Override
public void start(Map<String, String> map) {
service = BitfinexServiceFactory.create();
queue = new LinkedBlockingQueue<>();
tickerGetter = new BitfinexTickerGetter(service, this);
scheduledFuture = scheduler.scheduleAtFixedRate(runnable, 0, 5, TimeUnit.MINUTES);
}
@Override
public List<SourceRecord> poll() throws InterruptedException {
List<SourceRecord> result = new LinkedList<>();
if (queue.isEmpty()) result.add(queue.take());
queue.drainTo(result);
return result;
}
@Override
public void stop() {
scheduledFuture.cancel(true);
scheduler.shutdown();
}
@Override
public void onTickerReady(Ticker ticker) {
Map<String, ?> srcOffset = Collections.singletonMap(DATETIME_FIELD, ticker.getDatetime());
Map<String, ?> srcPartition = Collections.singletonMap("from", "bitfinex");
SourceRecord record = new SourceRecord(srcPartition, srcOffset, ticker.getSymbol(), Schema.STRING_SCHEMA, ticker.getDatetime(), Ticker.SCHEMA, ticker);
queue.offer(record);
}
}
I was actually able to build and add the connector. It runs without any errors, but the topic was not created. I decided to create the topic manually and then re-run the connector, but the topic remained empty. Ticker is my POJO containing String and double fields.
Can someone help me with this?
I want to implement a simple application which sends a Notification object from a Kafka producer to a Kafka consumer. So far I have successfully sent String messages from the producer to the consumer, but when I try to send a Notification object, the Kafka consumer doesn't receive any objects. This is the code I have used.
public class Notification implements Serializable{
private String name;
private String message;
private long currentTimeStamp;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
public long getCurrentTimeStamp() {
return currentTimeStamp;
}
public void setCurrentTimeStamp(long currentTimeStamp) {
this.currentTimeStamp = currentTimeStamp;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Notification that = (Notification) o;
if (currentTimeStamp != that.currentTimeStamp) return false;
if (message != null ? !message.equals(that.message) : that.message != null) return false;
if (name != null ? !name.equals(that.name) : that.name != null) return false;
return true;
}
@Override
public int hashCode() {
int result = name != null ? name.hashCode() : 0;
result = 31 * result + (message != null ? message.hashCode() : 0);
result = 31 * result + (int) (currentTimeStamp ^ (currentTimeStamp >>> 32));
return result;
}
@Override
public String toString() {
return "Notification{" +
"name='" + name + '\'' +
", message='" + message + '\'' +
", currentTimeStamp=" + currentTimeStamp +
'}';
}
}
And this is the producer:
public class KafkaProducer {
static String topic = "kafka-tutorial";
public static void main(String[] args) {
System.out.println("Start Kafka producer");
Properties properties = new Properties();
properties.put("metadata.broker.list", "localhost:9092");
properties.put("serializer.class", "dev.innova.kafka.tutorial.producer.CustomSerializer");
ProducerConfig producerConfig = new ProducerConfig(properties);
kafka.javaapi.producer.Producer<String, Notification> producer = new kafka.javaapi.producer.Producer<String, Notification>(producerConfig);
KeyedMessage<String, Notification> message = new KeyedMessage<String, Notification>(topic, createNotification());
System.out.println("send Message to broker");
producer.send(message);
producer.close();
}
private static Notification createNotification(){
Notification notification = new Notification();
notification.setMessage("Sample Message");
notification.setName("Sajith");
notification.setCurrentTimeStamp(System.currentTimeMillis());
return notification;
}
}
And this is the consumer:
public class KafkaConcumer extends Thread {
final static String clientId = "SimpleConsumerDemoClient";
final static String TOPIC = "kafka-tutorial";
ConsumerConnector consumerConnector;
public KafkaConcumer() {
Properties properties = new Properties();
properties.put("zookeeper.connect","localhost:2181");
properties.put("group.id","test-group");
properties.put("serializer.class", "dev.innova.kafka.tutorial.producer.CustomSerializer");
properties.put("zookeeper.session.timeout.ms", "400");
properties.put("zookeeper.sync.time.ms", "200");
properties.put("auto.commit.interval.ms", "1000");
ConsumerConfig consumerConfig = new ConsumerConfig(properties);
consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);
}
@Override
public void run() {
Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put(TOPIC, new Integer(1));
Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumerConnector.createMessageStreams(topicCountMap);
KafkaStream<byte[], byte[]> stream = consumerMap.get(TOPIC).get(0);
ConsumerIterator<byte[], byte[]> it = stream.iterator();
System.out.println("It :" + it.size());
while(it.hasNext()){
System.out.println(new String(it.next().message()));
}
}
private static void printMessages(ByteBufferMessageSet messageSet) throws UnsupportedEncodingException {
for(MessageAndOffset messageAndOffset: messageSet) {
ByteBuffer payload = messageAndOffset.message().payload();
byte[] bytes = new byte[payload.limit()];
payload.get(bytes);
System.out.println(new String(bytes, "UTF-8"));
}
}
}
And finally, I have used a custom serializer to serialize and deserialize the object.
public class CustomSerializer implements Encoder<Notification>, Decoder<Notification> {
public CustomSerializer(VerifiableProperties verifiableProperties) {
/* This constructor must be present for successful compile. */
}
@Override
public byte[] toBytes(Notification o) {
return new byte[0];
}
@Override
public Notification fromBytes(byte[] bytes) {
return null;
}
}
Can someone tell me what the issue is? Is this the right way to do it?
You have two problems.
First, your CustomSerializer doesn't have any real logic. It returns an empty byte array for every object it serializes and returns null whenever it's asked to deserialize one. You need to put code there that actually serializes and deserializes your objects.
Second, if you plan to use the JVM's native serialization and deserialization logic, you'll need to add a serialVersionUID to the beans that will be transported. Something like this:
private static final long serialVersionUID = 123L;
You can use any value you like. When an object is deserialized by the JVM, the serialVersionUID in the serialized object is compared to the value specified in the loaded class definition. If the two are different, the JVM assumes that even though you have a class definition loaded, it is not the correct version, and deserialization will fail. If you don't specify a value for serialVersionUID in your class definition, the JVM will make one up for you, and two different JVMs (the one with the producer and the one with the consumer) will almost certainly make up different values.
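For illustration, the transported bean would simply declare the field (the value is arbitrary; it only has to be identical on the producer's and the consumer's classpath):
public class Notification implements Serializable {
    // Must match on both sides; otherwise deserialization fails with an InvalidClassException.
    private static final long serialVersionUID = 123L;
    // ... fields, getters/setters, equals(), hashCode() and toString() as above
}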
EDIT
You'd need to make your serializer look something like this if you want to leverage the default Java serialization:
public class CustomSerializer implements Encoder<Notification>, Decoder<Notification> {
public CustomSerializer(VerifiableProperties verifiableProperties) {
/* This constructor must be present for successful compile. */
}
@Override
public byte[] toBytes(Notification o) {
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(baos);
oos.writeObject(o);
oos.close();
byte[] b = baos.toByteArray();
return b;
} catch (IOException e) {
return new byte[0];
}
}
@Override
public Notification fromBytes(byte[] bytes) {
try {
return (Notification) new ObjectInputStream(new ByteArrayInputStream(bytes)).readObject();
} catch (Exception e) {
return null;
}
}
}
Create a custom serializer and deserializer: Kafka needs a way to serialize and deserialize your object, so we have to provide both implementations.
You need to add a library to get the ObjectMapper class:
FasterXML Jackson 2.8.6
Example - serializer
public class PayloadSerializer implements org.apache.kafka.common.serialization.Serializer {
@Override
public byte[] serialize(String arg0, Object arg1) {
byte[] retVal = null;
ObjectMapper objectMapper = new ObjectMapper();
TestModel model =(TestModel) arg1;
try {
retVal = objectMapper.writeValueAsString(model).getBytes();
} catch (Exception e) {
e.printStackTrace();
}
return retVal;
}
@Override
public void close() {
}
@Override
public void configure(Map map, boolean bln) {
}
}
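On the producer side the serializer is registered through the standard value.serializer setting, for example (a sketch using the modern producer API; broker address is a placeholder):
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// Register the custom JSON serializer for the record values:
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, PayloadSerializer.class.getName());
Producer<String, TestModel> producer = new KafkaProducer<>(props);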
Deserializer
public class PayloadDeserializer implements Deserializer {
@Override
public void close() {
}
@Override
public TestModel deserialize(String arg0, byte[] arg1) {
ObjectMapper mapper = new ObjectMapper();
TestModel testModel = null;
try {
testModel = mapper.readValue(arg1, TestModel.class);
} catch (Exception e) {
e.printStackTrace();
}
return testModel;
}
@Override
public void configure(Map map, boolean bln) {
}
}
Finally, we have to pass the deserializer class to the consumer:
ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG - PayloadDeserializer.class
or
deserializer.class - classpath.PayloadDeserializer
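In code that looks roughly like this for the modern consumer API (broker address and group id are placeholders):
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// Register the custom JSON deserializer for the record values:
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, PayloadDeserializer.class.getName());
KafkaConsumer<String, TestModel> consumer = new KafkaConsumer<>(props);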
I strongly suggest converting your object to an Avro record before sending it.
It is not that difficult, and it is the idiomatic Kafka way of transmitting objects.
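If you go that route, the producer mostly needs the Confluent Avro serializer and a schema registry. A rough sketch, assuming Notification is regenerated from an Avro schema as a SpecificRecord class and that the registry URL and topic are placeholders:
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// From Confluent's kafka-avro-serializer artifact:
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
props.put("schema.registry.url", "http://localhost:8081");
// Here Notification is assumed to be the Avro-generated class rather than the plain POJO.
Producer<String, Notification> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("kafka-tutorial", createNotification()));
producer.close();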