I have a Kafka consumer that performs some migrations. It's a pretty simple flow:
@KafkaListener(topics = "client-migration-blah", groupId = "migration-group",
        containerFactory = "kafkaListnerContainerFactory")
public void consume(ConsumerRecord<String, Object> payload) {
    // Declared outside the try block so it is still in scope for the DLQ path
    Client client = null;
    try {
        client = (Client) payload.value();
        if (migrationClient.clientExists(client)) {
            updateClient(ClientEvent.UPDATE, client);
        } else {
            migrationClient.importClient(ClientEvent.CREATE, client);
        }
    } catch (Exception ex) {
        log.error("yada yada yada", ex);
        sendToDLQ(ClientEvent.ERROR, client);
    }
}
I need a breakdown of the three use cases: create, update, and error (DLQ). Short of building a full streaming solution to collect these aggregates, what would be a simple way to gather these events and extract the breakdown?
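One lightweight option, sketched below, is to increment a tagged counter in each branch. This assumes Micrometer is on the classpath (Spring Boot auto-configures a MeterRegistry); the meter name migration.events and the MigrationMetrics helper are made up for illustration:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

// Hypothetical helper: one counter per outcome, distinguished by a "type" tag.
// The counts can then be read via /actuator/metrics or any registry backend.
public class MigrationMetrics {
    private final Counter created;
    private final Counter updated;
    private final Counter errored;

    public MigrationMetrics(MeterRegistry registry) {
        this.created = Counter.builder("migration.events").tag("type", "create").register(registry);
        this.updated = Counter.builder("migration.events").tag("type", "update").register(registry);
        this.errored = Counter.builder("migration.events").tag("type", "error").register(registry);
    }

    public void record(ClientEvent event) {
        switch (event) {
            case CREATE: created.increment(); break;
            case UPDATE: updated.increment(); break;
            case ERROR:  errored.increment(); break;
            default: break;
        }
    }
}

Calling record(...) next to each branch of the listener gives you the create/update/DLQ split without any extra infrastructure.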
I am new to MQTT and I have some questions that I hope you guys could help me with. I'm working on a project that requires the MQTT protocol, and the program needs to be written in Java (just some background info).
Can an MQTT client subscribe for a particular time interval? I need to read MQTT messages using the Eclipse Paho client (mqttv3), subscribing to a particular topic for a certain duration (e.g. 15 minutes) and reading the messages that arrive in that window.
Below is the code I have tried:
private void initializeConnectionOptions() {
    try {
        mqttConnectOptions.setCleanSession(false);
        mqttConnectOptions.setAutomaticReconnect(false);
        mqttConnectOptions.setSocketFactory(SslUtil.getSocketFactory(this.caCrt, this.clientCrt, this.clientKey));
        mqttConnectOptions.setKeepAliveInterval(300);
        mqttConnectOptions.setConnectionTimeout(300);
        mqttClient = new MqttClient("ssl://IP:port", "clientID", memoryPersistence);
        mqttClient.setCallback(new MqttCallback() {
            @Override
            public void connectionLost(Throwable cause) {
            }

            @Override
            public void messageArrived(String topic, MqttMessage message) throws Exception {
                String attribute = "Attribute";
                JSONObject json = new JSONObject(message.toString());
                LOGGER.info("json value is {}", json);
                // Check for the key itself rather than a substring match on the whole payload
                if (json.has(attribute)) {
                    int value = json.getInt(attribute);
                    long sourceTimestamp = json.getLong("sourceTimestamp");
                    String deviceName = json.getString("deviceName");
                    String deviceType = json.getString("deviceType");
                    // Collect the distinct values and timestamps seen per device
                    List<Integer> values = nodeValueWithDevice.computeIfAbsent(deviceName, k -> new ArrayList<>());
                    if (!values.contains(value)) {
                        values.add(value);
                    }
                    List<Long> timestamps = sourceTimestampWithDevice.computeIfAbsent(deviceName, k -> new ArrayList<>());
                    if (!timestamps.contains(sourceTimestamp)) {
                        timestamps.add(sourceTimestamp);
                    }
                    LOGGER.info("map of source timestamps is {}", sourceTimestampWithDevice);
                    LOGGER.info("map of values is {}", nodeValueWithDevice);
                }
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) {
            }
        });
    } catch (MqttException | NoSuchAlgorithmException me) {
        LOGGER.error("Error while connecting to MQTT broker", me);
    }
}

public void subscription(String inputTopic) {
    try {
        connectToBroker();
        mqttClient.subscribe(getOutputTopic(inputTopic), 1);
        LOGGER.info("subscription is done");
    } catch (Exception e) {
        LOGGER.error("Error while subscribing to topic", e);
    }
}
public void subscription(String inputTopic) {
try {
connectToBroker();
mqttClient.subscribe(getOutputTopic(inputTopic), 1);
LOGGER.info("subscription is done::::");
} catch (Exception e) {
LOGGER.error("Error while subscribing message to broker", e.getMessage());
e.printStackTrace();
}
}
No, the clients are all designed to receive all messages for the lifetime of the client connection.
If you only want to be subscribed for a given duration, it's up to you to find a way to be notified when that time has passed and explicitly disconnect the client.
According to the MQTT specification for both v5.0 and v3.1.1, there is no specified way to only subscribe to a topic for a fixed interval. However, this could be done through your application logic.
In your case, assuming you have full control of the client, you can subscribe to a topic, keep track of the time connected, and then after 15 minutes (or whatever interval you specify) send an UNSUBSCRIBE packet for that topic, as sketched below.
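For illustration, here is a minimal sketch of that idea with the Paho v3 client, scheduling the unsubscribe on a timer; the client, topic, and 15-minute window are assumptions, not part of the original code:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;

public final class TimedSubscription {

    // Subscribe immediately, then unsubscribe after the given number of minutes.
    public static void subscribeFor(MqttClient client, String topic, long minutes) throws MqttException {
        client.subscribe(topic, 1);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.schedule(() -> {
            try {
                client.unsubscribe(topic); // messages stop arriving after this
            } catch (MqttException e) {
                // log and decide whether to retry the unsubscribe or disconnect instead
            } finally {
                scheduler.shutdown();
            }
        }, minutes, TimeUnit.MINUTES);
    }
}

Messages keep flowing into messageArrived() during the window; after the unsubscribe the connection stays up, so you can still publish or subscribe again later.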
I'm new to Vert.x and would like to know if it's possible to configure the event bus somehow to make it send requests sequentially.
I mean I need to send requests one by one using Vert.x.
At the moment I have the code below, which follows the event-loop principle: it fires all requests concurrently and waits until all handlers have finished. But I don't need it done that fast; the idea is to spare the server from lots of requests at the same time. Here eb_send() uses the default EventBus.send() method. In other words, I want to execute the requests with blocking, waiting for each answer before sending the next request.
List<Future> queue = new ArrayList<>();
files.forEach(fileInfo -> {
    Future<JsonObject> trashStatusHandler = Future.future();
    queue.add(trashStatusHandler);
    eb_send(segment, StorageType.getAddress(StorageType.getStorageType(fileInfo.getString("storageType"))) + ".getTrashStatus", fileInfo, reply -> {
        Entity dummy = createDummySegment();
        try {
            if (reply.succeeded()) {
                // succeeded
            }
        } catch (Exception ex) {
            log.error(ex);
        }
        trashStatusHandler.complete();
    });
});
The basic idea is to extract this into a function, which you would invoke recursively.
public void sendFile(List<JsonObject> files, AtomicInteger c) {
    JsonObject fileInfo = files.get(c.get());
    eb_send(segment, StorageType.getAddress(StorageType.getStorageType(fileInfo.getString("storageType"))) + ".getTrashStatus", fileInfo, reply -> {
        Entity dummy = createDummySegment();
        try {
            if (reply.succeeded()) {
                // succeeded
            }
        } catch (Exception ex) {
            log.error(ex);
        } finally {
            // Recursion: only send the next request once this reply has arrived,
            // even if processing the current reply threw an exception
            if (c.incrementAndGet() < files.size()) {
                sendFile(files, c);
            }
        }
    });
}
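To start the chain, it would be invoked once with a zero-initialized counter (assuming files is already populated):

// Each subsequent request is only sent after the previous reply arrives
sendFile(files, new AtomicInteger(0));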
I have a Vert.x service for all message-broker related operations. For example, an exchange-creation function looks like this:
@Override
public BrokerService createExchange(String exchange,
        Handler<AsyncResult<JsonArray>> resultHandler) {
    try {
        getAdminChannel(exchange).exchangeDeclare(exchange, "topic", true);
        resultHandler.handle(Future.succeededFuture());
    } catch (Exception e) {
        e.printStackTrace();
        // Pass the exception itself; e.getCause() may be null
        resultHandler.handle(Future.failedFuture(e));
    }
    return this;
}
I am in the process of converting my entire codebase to RxJava, and I would like to convert functions like these into Completables. Something like:
try {
getAdminChannel(exchange).exchangeDeclare(exchange, "topic", true);
Completable.complete();
} catch(Exception e) {
Completable.error(new BrokerErrorThrowable("Exchange creation failed"));
}
Furthermore, I would also like to be able to throw custom errors like Completable.error(new BrokerErrorThrowable("Exchange creation failed")) when things go wrong. This is so that I'll be able to catch these errors and respond with appropriate HTTP responses.
I saw that Completable.fromCallable() is one way to do it, but I haven't found a way to throw these custom exceptions. How do I go about this? Thanks in advance!
I was able to figure it out. All I had to do was this:
@Override
public BrokerService createExchange(String exchange, Handler<AsyncResult<Void>> resultHandler) {
    Completable.fromCallable(
        () -> {
            try {
                getAdminChannel(exchange).exchangeDeclare(exchange, "topic", true);
                return null; // the callable's return value is ignored; completing normally signals success
            } catch (Exception e) {
                // Throwing from inside the callable is what makes the Completable
                // terminate with onError; returning Completable.error(...) here
                // would be silently discarded
                throw new InternalErrorThrowable("Create exchange failed");
            }
        })
        .subscribe(CompletableHelper.toObserver(resultHandler));
    return this;
}
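A slightly tighter alternative sketch (not from the original post) is Completable.fromAction, mapping any failure onto the custom throwable with onErrorResumeNext:

Completable.fromAction(() -> getAdminChannel(exchange).exchangeDeclare(exchange, "topic", true))
    .onErrorResumeNext(e -> Completable.error(new InternalErrorThrowable("Create exchange failed")))
    .subscribe(CompletableHelper.toObserver(resultHandler));

Here the try/catch disappears because fromAction already turns a thrown exception into an onError signal.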
In my KafkaConsumer app I want to read a batch of messages with poll() and process them. Processing may fail, though. In that case I want to retry until I succeed, but only while the consumer still owns its partitions. I don't want to constantly call poll() because I don't want to read more data.
This is a code snippet:
consumer = new KafkaConsumer<>(consumerConfig);
try {
    consumer.subscribe(config.topics()); // rebalance callback does not help here, as I do not call poll() in between
    while (true) {
        ConsumerRecords<byte[], Value> values = consumer.poll(10000);
        while (/* I am still owner of partitions */) {
            try {
                process(values);
            } catch (Exception e) {
                log.error("I don't care, just retry while I own the partitions", e);
            }
        }
    }
} catch (WakeupException e) {
    // shutting down
} finally {
    consumer.close();
}
There is a callback method that tells you when your consumer's partition assignments are about to be revoked. Keep processing messages until you get an onPartitionsRevoked() event.
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.html#onPartitionsRevoked(java.util.Collection)
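A minimal sketch of wiring that listener, using an illustrative owned flag; note that the callbacks only run inside poll(), which is exactly the limitation the comment in the question points out:

import java.util.Collection;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

final AtomicBoolean owned = new AtomicBoolean(false);
consumer.subscribe(config.topics(), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        owned.set(false); // ownership is about to move; stop retrying
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        owned.set(true);
    }
});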
What about simply calling assignment()?
http://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#assignment()
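For illustration, the retry guard could then look like this, with the caveat that assignment() reflects local state that is only updated during poll() or a rebalance:

// Hedged sketch: retry only while this consumer still has partitions assigned.
while (!consumer.assignment().isEmpty()) {
    try {
        process(values);
        break; // success, stop retrying
    } catch (Exception e) {
        log.error("Retrying while partitions are still assigned", e);
    }
}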
I came to the conclusion that it is impossible to call poll() without reading messages with the current Kafka consumer (0.10.2.x). However, it is possible to rewind the offsets after a processing failure, so I reset them as if the messages had never been read:
while (!stopped) {
    ConsumerRecords<byte[], Value> values = consumer.poll(timeout);
    try {
        process(values);
    } catch (Exception e) {
        rewind(values);
        // Ensure a delay after errors to let dependencies recover
        Thread.sleep(delay);
    }
}
and the rewind method is:
private void rewind(ConsumerRecords<byte[], Value> records) {
    // For each partition in the batch, seek back to the first record's offset
    // so the next poll() re-reads the same messages
    records.partitions().forEach(partition -> {
        long offset = records.records(partition).get(0).offset();
        consumer.seek(partition, offset);
    });
}
This solves the initial problem.
How can I use Spring Data to connect to Google Cloud Datastore? I currently use com.google.api.services.datastore.DatastoreV1.
But my lead manager wants to use Spring Data with Datastore; how can I do that?
For example, to insert an Entity I currently use:
public void insert(Entity entity) {
Datastore datastore = this.datastoreFactory.getInstance();
CommitRequest request =
CommitRequest.newBuilder().setMode(CommitRequest.Mode.NON_TRANSACTIONAL)
.setMutation(Mutation.newBuilder().addInsertAutoId(entity)).build();
try {
CommitResponse response = datastore.commit(request);
} catch (DatastoreException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
@Override
@SuppressWarnings("deprecation")
public Datastore getInstance() {
    if (datastore != null)
        return datastore;
    try {
        // Set up the connection to Google Cloud Datastore and infer
        // credentials from the environment. The environment variables
        // DATASTORE_SERVICE_ACCOUNT and DATASTORE_PRIVATE_KEY_FILE must be set.
        datastore = DatastoreFactory.get().create(
                DatastoreHelper.getOptionsfromEnv().dataset(Constant.ProjectId)
                        .build());
    } catch (GeneralSecurityException exception) {
        System.err.println("Security error connecting to the datastore: "
                + exception.getMessage());
        return null;
    } catch (IOException exception) {
        System.err.println("I/O error connecting to the datastore: "
                + exception.getMessage());
        return null;
    }
    return datastore;
}
Any help will be appreciated.
To use Spring Data with a specific storage backend you need to implement a set of interfaces from Spring Data Commons. Take a look at the GCP Spanner Spring Data implementation as an example: https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-data-spanner
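For a feel of the end state, here is a hypothetical sketch of what application code could look like once such an implementation exists for Datastore; the Client type, its fields, and ClientRepository are all made-up names:

import org.springframework.data.annotation.Id;
import org.springframework.data.repository.CrudRepository;

// Hypothetical domain type mapped to a Datastore kind.
class Client {
    @Id
    String id;
    String name;
}

// With the Spring Data plumbing in place, save(), findById(), delete(), etc.
// are provided by CrudRepository without any DatastoreV1 boilerplate.
interface ClientRepository extends CrudRepository<Client, String> {
}

The repository factory, mapping context, and query infrastructure behind this are exactly the Spring Data Commons interfaces that the Spanner module implements.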