cloud function doesn't capture pubsub message, even though it is triggered - java

In my code I have two Cloud Functions, cf1 and cf2. cf1 is triggered via Pub/Sub topic t1 by a Cloud Scheduler cron job every 10 minutes; it builds a list and publishes it to topic t2, which triggers cf2. When I use Google's example code for cf2 I can see my message and it works. However, when I deploy my own code and log the message, this is what I see:
```
cf2.accept:81) - data
.accept:83) - ms {"data_":{"bytes":[],"hash":0},"messageId_":"","orderingKey_":"","memoizedIsInitialized":-1,"unknownFields":{"fields":{},"fieldsDescending":{}},"memoizedSize":-1,"memoizedHashCode":0}
```
My code is:
```
public class cf2 implements BackgroundFunction<PubsubMessage> {

  @Override
  public void accept(PubsubMessage message, Context context) throws Exception {
    if (message.getData() == null) {
      logger.info("No message provided");
      return;
    }
    String messageString = new String(
        Base64.getDecoder().decode(message.getData().toStringUtf8()),
        StandardCharsets.UTF_8);
    logger.info(messageString);
    logger.info("Starting the job");
    String data = message.getData().toStringUtf8();
    logger.info("data " + data);
    String ms = new Gson().toJson(message);
    logger.info("ms " + ms);
  }
}
```
But when I use Google's example code:
package com.example;
import com.example.Example.PubSubMessage;
import com.google.cloud.functions.BackgroundFunction;
import com.google.cloud.functions.Context;
import java.util.Base64;
import java.util.Map;
import java.util.logging.Logger;
public class Example implements BackgroundFunction<PubSubMessage> {
private static final Logger logger = Logger.getLogger(Example.class.getName());
@Override
public void accept(PubSubMessage message, Context context) {
String data = message.data != null
? new String(Base64.getDecoder().decode(message.data))
: "empty message";
logger.info(data);
}
public static class PubSubMessage {
String data;
Map<String, String> attributes;
String messageId;
String publishTime;
}
}
I see my message body very neatly in the logs. Can someone help me with what is wrong with my code?
Here's how I deploy my function:
gcloud --project=${PROJECT_ID} functions deploy \
cf2 \
--entry-point=path.to.cf2 \
--runtime=java11 \
--trigger-topic=t2 \
--timeout=540 \
--source=folder \
--set-env-vars="PROJECT_ID=${PROJECT_ID}" \
--vpc-connector=projects/${PROJECT_ID}/locations/us-central1/connectors/appengine-default-connect
When I log message.getData() I get <ByteString@37c278a2 size=0 contents=""> even though I know the message is not empty (I created another test subscription on the topic, and I can see the message there).

You need to define what a Pub/Sub message is. That part is missing from your code, and it isn't clear which PubsubMessage type you are using — judging by your log output (data_, messageId_, memoizedIsInitialized), it looks like the protobuf com.google.pubsub.v1.PubsubMessage, which the Functions Framework can't fill in from the trigger payload. Define your own simple class instead:
public static class PubSubMessage {
String data;
Map<String, String> attributes;
String messageId;
String publishTime;
}
It should solve your issue. Let me know.
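For reference, a minimal sketch of how cf2 could look once it deserializes into its own POJO instead of the protobuf class (the class name, logger, and layout are placeholders; the decoding mirrors Google's example, since the trigger payload carries data base64-encoded):
```
import com.google.cloud.functions.BackgroundFunction;
import com.google.cloud.functions.Context;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.logging.Logger;

public class Cf2 implements BackgroundFunction<Cf2.PubSubMessage> {
  private static final Logger logger = Logger.getLogger(Cf2.class.getName());

  @Override
  public void accept(PubSubMessage message, Context context) {
    if (message.data == null) {
      logger.info("No message provided");
      return;
    }
    // The data field arrives base64-encoded, so decode it exactly once.
    String data = new String(
        Base64.getDecoder().decode(message.data), StandardCharsets.UTF_8);
    logger.info("data " + data);
  }

  // Plain holder matching the Pub/Sub event JSON (data, attributes, messageId, publishTime).
  public static class PubSubMessage {
    String data;
    Map<String, String> attributes;
    String messageId;
    String publishTime;
  }
}
```
Deploy it the same way as before, with --entry-point pointing at this class.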

Related

InvalidParameterException on SNS topic using java

While trying to send a message to an AWS SNS topic using the com.amazonaws.services.sns Java module, I am stuck on the following error:
shaded.com.amazonaws.services.sns.model.InvalidParameterException: Invalid parameter: Message too long (Service: AmazonSNS; Status Code: 400; Error Code: InvalidParameter; Request ID: 3b01ce49-a37d-5aba-bec2-9ab9d5446aea)
at shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1587)
at shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1257)
at shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1029)
at shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:741)
at shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:715)
at shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:697)
at shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:665)
at shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:647)
at shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:511)
at shaded.com.amazonaws.services.sns.AmazonSNSClient.doInvoke(AmazonSNSClient.java:2270)
at shaded.com.amazonaws.services.sns.AmazonSNSClient.invoke(AmazonSNSClient.java:2246)
at shaded.com.amazonaws.services.sns.AmazonSNSClient.executePublish(AmazonSNSClient.java:1698)
at shaded.com.amazonaws.services.sns.AmazonSNSClient.publish(AmazonSNSClient.java:1675)
Following is the AmazonSNS helper class. It manages client creation and publishes messages to the SNS topic.
import java.io.Serializable;
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishRequest;
import com.amazonaws.services.sns.model.PublishResult;
public class AWSSNS implements Serializable {
private static final long serialVersionUID = -4175291946259141176L;
protected AmazonSNS client;
public AWSSNS(){
this.client=AmazonSNSClientBuilder.standard().withRegion("us-west-2").build();
}
public AWSSNS(AmazonSNS client) {
this.client=client;
}
public AmazonSNS getSnsClient(){
return this.client;
}
public void setSqsClient(AmazonSNS client){
this.client = client;
}
public boolean sendMessages(String topicArn, String messageBody){
PublishRequest publishRequest = new PublishRequest(topicArn, messageBody);
PublishResult publishResult = this.client.publish(publishRequest);
if(publishResult != null && publishResult.getMessageId() != null){
return true;
}
else{
return false;
}
}
}
Following is the code snippet from where the AmazonSNS helper class is called. It does nothing but create a message of type String and send it forward along with the topic ARN.
HashMap<String, String> variable_a = new HashMap<String, String>();
Gson gson = new Gson();
for (Object_a revoke : Object_a) {
Object_a operation = someMethod1(revoke);
String serializedOperation = gson.toJson(operation);
variable_a.put(revoke.someMethod2(), serializedOperation);
String message = gson.toJson(variable_a);
LOG.info(String.format("SNS message: %s", message));
this.awsSNS.sendMessages(topicARN, message);
}
So basically the error is thrown from inside sendMessages.
Found the solution to the problem.
An SNS message has a fixed maximum size (256 KB), so publishing a message larger than that results in an InvalidParameterException with the message "Message too long".
My message exceeded that limit, and that was the cause of the error. I cut the message down until its size came under the maximum.
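A minimal sketch of a pre-publish guard based on that limit — the 256 KB figure is SNS's per-message maximum; the helper name and how you split or trim the payload are up to you:
```
import java.nio.charset.StandardCharsets;

public class SnsSizeGuard {

    // SNS rejects payloads larger than 256 KB with "Message too long".
    private static final int MAX_SNS_MESSAGE_BYTES = 256 * 1024;

    /** Returns true if the body is small enough to publish as a single SNS message. */
    public static boolean fitsInSingleMessage(String messageBody) {
        return messageBody.getBytes(StandardCharsets.UTF_8).length <= MAX_SNS_MESSAGE_BYTES;
    }
}
```
Calling fitsInSingleMessage(message) before awsSNS.sendMessages(topicArn, message) lets you split or trim the payload yourself instead of hitting the InvalidParameterException at publish time.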

Sending PubSub message manually in Dataflow

How can I send a PubSub message manually (that is to say, without using a PubsubIO) in Dataflow ?
Importing (via Maven) google-cloud-dataflow-java-sdk-all 2.5.0 already brings in a version of com.google.pubsub.v1 for which I was unable to find an easy way to send messages to a Pub/Sub topic (this version doesn't, for instance, allow manipulating Publisher instances, which is the approach described in the official documentation).
Would you consider using PubsubUnboundedSink? Quick example:
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider.StaticValueProvider;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubClient;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubJsonClient;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubUnboundedSink;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubClient.TopicPath;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage;
public class PubsubTest {
public static void main(String[] args) {
DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
.as(DataflowPipelineOptions.class);
// writes message to "output_topic"
TopicPath topic = PubsubClient.topicPathFromName(options.getProject(), "output_topic");
Pipeline p = Pipeline.create(options);
p
.apply("input string", Create.of("This is just a message"))
.apply("convert to Pub/Sub message", ParDo.of(new DoFn<String, PubsubMessage>() {
@ProcessElement
public void processElement(ProcessContext c) {
c.output(new PubsubMessage(c.element().getBytes(), null));
}
}))
.apply("write to topic", new PubsubUnboundedSink(
PubsubJsonClient.FACTORY,
StaticValueProvider.of(topic), // topic
"timestamp", // timestamp attribute
"id", // ID attribute
5 // number of shards
));
p.run();
}
}
Here's a way I found browsing https://github.com/GoogleCloudPlatform/cloud-pubsub-samples-java/blob/master/dataflow/src/main/java/com/google/cloud/dataflow/examples/StockInjector.java:
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.model.PublishRequest;
import com.google.api.services.pubsub.model.PubsubMessage;
public class PubsubManager {
private static final Logger logger = LoggerFactory.getLogger(PubsubManager.class);
private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
private static final Pubsub pubsub = createPubsubClient();
public static class RetryHttpInitializerWrapper implements HttpRequestInitializer {
// Intercepts the request for filling in the "Authorization"
// header field, as well as recovering from certain unsuccessful
// error codes wherein the Credential must refresh its token for a
// retry.
private final GoogleCredential wrappedCredential;
// A sleeper; you can replace it with a mock in your test.
private final Sleeper sleeper;
private RetryHttpInitializerWrapper(GoogleCredential wrappedCredential) {
this(wrappedCredential, Sleeper.DEFAULT);
}
// Use only for testing.
RetryHttpInitializerWrapper(
GoogleCredential wrappedCredential, Sleeper sleeper) {
this.wrappedCredential = Preconditions.checkNotNull(wrappedCredential);
this.sleeper = sleeper;
}
@Override
public void initialize(HttpRequest request) {
final HttpUnsuccessfulResponseHandler backoffHandler =
new HttpBackOffUnsuccessfulResponseHandler(
new ExponentialBackOff())
.setSleeper(sleeper);
request.setInterceptor(wrappedCredential);
request.setUnsuccessfulResponseHandler(
new HttpUnsuccessfulResponseHandler() {
@Override
public boolean handleResponse(HttpRequest request,
HttpResponse response,
boolean supportsRetry)
throws IOException {
if (wrappedCredential.handleResponse(request,
response,
supportsRetry)) {
// If credential decides it can handle it, the
// return code or message indicated something
// specific to authentication, and no backoff is
// desired.
return true;
} else if (backoffHandler.handleResponse(request,
response,
supportsRetry)) {
// Otherwise, we defer to the judgement of our
// internal backoff handler.
logger.info("Retrying " + request.getUrl());
return true;
} else {
return false;
}
}
});
request.setIOExceptionHandler(new HttpBackOffIOExceptionHandler(
new ExponentialBackOff()).setSleeper(sleeper));
}
}
/**
* Creates a Cloud Pub/Sub client.
*/
private static Pubsub createPubsubClient() {
try {
HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
GoogleCredential credential = GoogleCredential.getApplicationDefault();
HttpRequestInitializer initializer =
new RetryHttpInitializerWrapper(credential);
return new Pubsub.Builder(transport, JSON_FACTORY, initializer).build();
} catch (IOException | GeneralSecurityException e) {
logger.error("Could not create Pubsub client: " + e);
}
return null;
}
/**
* Publishes the given message to a Cloud Pub/Sub topic.
*/
public static void publishMessage(String message, String outputTopic) {
int maxLogMessageLength = 200;
if (message.length() < maxLogMessageLength) {
maxLogMessageLength = message.length();
}
logger.info("Received ...." + message.substring(0, maxLogMessageLength));
// Publish message to Pubsub.
PubsubMessage pubsubMessage = new PubsubMessage();
pubsubMessage.encodeData(message.getBytes());
PublishRequest publishRequest = new PublishRequest();
publishRequest.setMessages(Collections.singletonList(pubsubMessage));
try {
pubsub.projects().topics().publish(outputTopic, publishRequest).execute();
} catch (java.io.IOException e) {
logger.error("Stuff happened in pubsub: " + e);
}
}
}
You can send a Pub/Sub message using the PubsubIO.writeMessages method.
Dataflow pipeline steps:
Pipeline p = Pipeline.create(options);
p.apply("Transformer1", ParDo.of(new Fn.method1()))
.apply("Transformer2", ParDo.of(new Fn.method2()))
.apply("PubsubMessageSend", PubsubIO.writeMessages().to(PubSubConfig.getTopic(options.getProject(), options.getpubsubTopic ())));
Define the project name and the pubsubTopic you want to send the Pub/Sub message to in the PipelineOptions.
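For completeness, a sketch of what such an options interface could look like — getpubsubTopic and PubSubConfig come from the snippet above and are the poster's own names, so treat the details here as assumptions:
```
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.Validation;

// Custom pipeline options carrying the target Pub/Sub topic; getProject()
// is inherited from the Dataflow options.
public interface MyPipelineOptions extends DataflowPipelineOptions {

    @Description("Pub/Sub topic name to publish messages to")
    @Validation.Required
    String getpubsubTopic();

    void setpubsubTopic(String value);
}
```
Creating the pipeline with PipelineOptionsFactory.fromArgs(args).withValidation().as(MyPipelineOptions.class) should then let you pass the topic as --pubsubTopic=my-topic on the command line.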

Junit integration testing

I need help writing a unit test for the class NotificationHandler. I made a NotificationHandlerTest (using JUnit 4), but I don't know how to determine what I should expect as a result versus what the actual result is, so one or more simple tests (for some of its methods) would help me a lot!
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;
import java.util.List;
import java.util.stream.Collectors;
@Component
class NotificationHandler {
private static Logger LOG = LoggerFactory.getLogger(NotificationHandler.class);
@Autowired
private NotificationRoutingRepository routingRepository;
@Autowired
private SendNotificationGateway gateway;
@Autowired
private AccessService accessService;
@Autowired
private EndpointService endpointService;
@ServiceActivator(inputChannel = Channels.ASSET_MODIFIED_CHANNEL, poller = @Poller("assetModifiedPoller"), outputChannel = Channels.NULL_CHANNEL)
public Message<?> handle(Message<EventMessage> message) {
final EventMessage event = message.getPayload();
LOG.debug("Generate notification messages: {}, {}", event.getOriginType(), event.getType());
routingRepository.findByOriginTypeAndEventType(event.getOriginType(), event.getType()).stream()
.filter(routing -> routing.getOriginId() == null || routing.getOriginId() == event.getOriginId())
.map(routing -> getNotificationMessages(event, routing))
.flatMap(List::stream)
.forEach(notificationMessage -> {
LOG.debug("Sending message {}", notificationMessage);
gateway.send(notificationMessage);
});
return message;
}
private List<NotificationMessage> getNotificationMessages(EventMessage event, NotificationRouting routing) {
switch (routing.getDestinationType()) {
case "USERS":
LOG.trace("Getting endpoints for users");
return getEndpointsByUsers(routing, event.getOrigin(), event.getOriginType()).stream()
.map(endpoint -> new NotificationMessage(event.getOriginType(), event.getOrigin(), endpoint))
.collect(Collectors.toList());
default:
LOG.trace("Getting default endpoints");
return getEndpoints(routing, event.getOrigin(), event.getOriginType()).stream()
.map(endpoint -> new NotificationMessage(event.getOriginType(), event.getOrigin(), endpoint))
.collect(Collectors.toList());
}
}
private List<Endpoint> getEndpoints(NotificationRouting routing, Object origin, String originType) {
final Asset asset = getAssetForObject(origin, originType);
final List<Long> userIds = accessService.list(asset).stream()
.map(ResourceAccess::getUser)
.map(AbstractEntity::getId)
.collect(Collectors.toList());
userIds.add(asset.getCreatorId());
LOG.trace("getEndpoints usersIds {}", userIds);
final List<Endpoint> endpoints = endpointService.getEndpoints(userIds, routing.getEndpointType());
LOG.trace("Endpoints {}", endpoints.stream().map(Endpoint::getId).collect(Collectors.toList()));
return endpoints;
}
private List<Endpoint> getEndpointsByUsers(NotificationRouting routing, Object origin, String originType) {
final Asset asset = getAssetForObject(origin, originType);
final List<Long> userIds = accessService.list(asset).stream()
.map(ResourceAccess::getUser)
.map(AbstractEntity::getId)
.filter(routing.getDestinations()::contains)
.collect(Collectors.toList());
routing.setDestinations(userIds);
routingRepository.save(routing);
LOG.trace("getEndpointsByUsers usersIds {}", userIds);
final List<Endpoint> endpoints = endpointService.getEndpoints(userIds, routing.getEndpointType());
LOG.trace("Endpoints {}", endpoints.stream().map(Endpoint::getId).collect(Collectors.toList()));
return endpoints;
}
private Asset getAssetForObject(Object origin, String originType) {
switch (originType) {
case EventMessage.POINT:
return (Point) origin;
case EventMessage.FEED:
return ((Feed) origin).getPoint();
case EventMessage.ACTUATOR:
return ((Actuator)origin).getPoint();
case EventMessage.DEVICE:
return (Device) origin;
case EventMessage.ALARM:
return ((Alarm) origin).getPoint();
default:
throw new IllegalArgumentException("Unsupported type: " + originType);
}
}
}
I'd say start with a simple test if you're not sure what to test: one that verifies you don't get any exception if you send null as an argument.
E.g.
@Test
public void shouldNotThrowAnyExceptionIfArgumentIsNull() {
// given
NotificationHandler handler = new NotificationHandler();
// when
handler.handle(null);
// then no exception is thrown.
}
After that, you can analyze line by line what the handle method does and write tests that verify its behavior.
You can, for example, verify whether gateway.send(...) was executed, depending on what you passed in as the parameter.
For dependency mocking and behavior verification, I'd recommend you use Mockito or a similar tool.
You can follow this tutorial to learn how to do it.
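As a concrete sketch of that Mockito approach — NotificationRoutingRepository, SendNotificationGateway, EventMessage and the rest are the poster's own classes, so the exact types and runner version below are assumptions — a test verifying that nothing is sent when no routing matches could look like this (it must live in the same package, since NotificationHandler is package-private):
```
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Collections;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner; // on Mockito 1.x: org.mockito.runners.MockitoJUnitRunner
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

@RunWith(MockitoJUnitRunner.class)
public class NotificationHandlerTest {

    @Mock
    private NotificationRoutingRepository routingRepository;
    @Mock
    private SendNotificationGateway gateway;
    @Mock
    private AccessService accessService;
    @Mock
    private EndpointService endpointService;

    @InjectMocks
    private NotificationHandler handler;

    @Test
    public void shouldNotSendAnythingWhenNoRoutingMatches() {
        // given: an event for which the repository finds no routings
        EventMessage event = mock(EventMessage.class);
        when(routingRepository.findByOriginTypeAndEventType(event.getOriginType(), event.getType()))
                .thenReturn(Collections.emptyList());
        Message<EventMessage> message = MessageBuilder.withPayload(event).build();

        // when
        handler.handle(message);

        // then: the gateway is never invoked
        verify(gateway, never()).send(any(NotificationMessage.class));
    }
}
```
From there you can add tests that stub routingRepository to return a routing and verify that gateway.send(...) is called with the expected NotificationMessage.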

My Kafka sink connector for Neo4j fails to load

Introduction:
Let me start by apologizing for any vagueness in my question. I will try to provide as much information on this topic as I can (hopefully not too much); please let me know if I should provide more. Also, I am quite new to Kafka and will probably stumble over terminology.
So, from my understanding of how the sink and source work, I can use the FileStreamSourceConnector provided by the Kafka Quickstart guide to write data (Neo4j commands) to a topic held in a Kafka cluster. Then I can write my own Neo4j sink connector and task to read those commands and send them to one or more Neo4j servers. To keep the project as simple as possible, for now, I based the sink connector and task on the Kafka Quickstart guide's FileStreamSinkConnector and FileStreamSinkTask.
Kafka's FileStream:
FileStreamSourceConnector
FileStreamSourceTask
FileStreamSinkConnector
FileStreamSinkTask
My Neo4j Sink Connector:
package neo4k.sink;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;
import org.apache.kafka.common.utils.AppInfoParser;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.sink.SinkConnector;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class Neo4jSinkConnector extends SinkConnector {
public enum Keys {
;
static final String URI = "uri";
static final String USER = "user";
static final String PASS = "pass";
static final String LOG = "log";
}
private static final ConfigDef CONFIG_DEF = new ConfigDef()
.define(Keys.URI, Type.STRING, "", Importance.HIGH, "Neo4j URI")
.define(Keys.USER, Type.STRING, "", Importance.MEDIUM, "User Auth")
.define(Keys.PASS, Type.STRING, "", Importance.MEDIUM, "Pass Auth")
.define(Keys.LOG, Type.STRING, "./neoj4sinkconnecterlog.txt", Importance.LOW, "Log File");
private String uri;
private String user;
private String pass;
private String logFile;
@Override
public String version() {
return AppInfoParser.getVersion();
}
@Override
public void start(Map<String, String> props) {
uri = props.get(Keys.URI);
user = props.get(Keys.USER);
pass = props.get(Keys.PASS);
logFile = props.get(Keys.LOG);
}
@Override
public Class<? extends Task> taskClass() {
return Neo4jSinkTask.class;
}
@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
ArrayList<Map<String, String>> configs = new ArrayList<>();
for (int i = 0; i < maxTasks; i++) {
Map<String, String> config = new HashMap<>();
if (uri != null)
config.put(Keys.URI, uri);
if (user != null)
config.put(Keys.USER, user);
if (pass != null)
config.put(Keys.PASS, pass);
if (logFile != null)
config.put(Keys.LOG, logFile);
configs.add(config);
}
return configs;
}
@Override
public void stop() {
}
@Override
public ConfigDef config() {
return CONFIG_DEF;
}
}
My Neo4j Sink Task:
package neo4k.sink;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;
import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.StatementResult;
import org.neo4j.driver.v1.exceptions.Neo4jException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Collection;
import java.util.Map;
public class Neo4jSinkTask extends SinkTask {
private static final Logger log = LoggerFactory.getLogger(Neo4jSinkTask.class);
private String uri;
private String user;
private String pass;
private String logFile;
private Driver driver;
private Session session;
public Neo4jSinkTask() {
}
@Override
public String version() {
return new Neo4jSinkConnector().version();
}
@Override
public void start(Map<String, String> props) {
uri = props.get(Neo4jSinkConnector.Keys.URI);
user = props.get(Neo4jSinkConnector.Keys.USER);
pass = props.get(Neo4jSinkConnector.Keys.PASS);
logFile = props.get(Neo4jSinkConnector.Keys.LOG);
driver = null;
session = null;
try {
driver = GraphDatabase.driver(uri, AuthTokens.basic(user, pass));
session = driver.session();
} catch (Neo4jException ex) {
log.trace(ex.getMessage(), logFilename());
}
}
@Override
public void put(Collection<SinkRecord> sinkRecords) {
StatementResult result;
for (SinkRecord record : sinkRecords) {
result = session.run(record.value().toString());
log.trace(result.toString(), logFilename());
}
}
@Override
public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
}
@Override
public void stop() {
if (session != null)
session.close();
if (driver != null)
driver.close();
}
private String logFilename() {
return logFile == null ? "stdout" : logFile;
}
}
The Issue:
After writing that, I built it, along with any dependencies it had (excluding the Kafka dependencies), into a jar (or uber jar? It was one file). Then I edited the plugin path in connect-standalone.properties to include that artifact and wrote a properties file for my Neo4j sink connector. I did all of this in an attempt to follow these guidelines.
My Neo4j sink connector properties file:
name=neo4k-sink
connector.class=neo4k.sink.Neo4jSinkConnector
tasks.max=1
uri=bolt://localhost:7687
user=neo4j
pass=Hunter2
topics=connect-test
But upon running the standalone, I get this error in the output that shuts down the stream (Error on line 5):
[2017-08-14 12:59:00,150] INFO Kafka version : 0.11.0.0 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-08-14 12:59:00,150] INFO Kafka commitId : cb8625948210849f (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-08-14 12:59:00,153] INFO Source task WorkerSourceTask{id=local-file-source-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:143)
[2017-08-14 12:59:00,153] INFO Created connector local-file-source (org.apache.kafka.connect.cli.ConnectStandalone:91)
[2017-08-14 12:59:00,153] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:100)
java.lang.IllegalArgumentException: Malformed \uxxxx encoding.
at java.util.Properties.loadConvert(Properties.java:574)
at java.util.Properties.load0(Properties.java:390)
at java.util.Properties.load(Properties.java:341)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:429)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:84)
[2017-08-14 12:59:00,156] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2017-08-14 12:59:00,156] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:154)
[2017-08-14 12:59:00,168] INFO Stopped ServerConnector@540accf4{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2017-08-14 12:59:00,173] INFO Stopped o.e.j.s.ServletContextHandler@6d548d27{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
Edit: I should mention that during the part of connector loading where the output declares which plugins have been added, I do not see any mention of the jar that I built earlier and added to the plugin path in connect-standalone.properties. Here's a snippet for context:
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,970] INFO Added plugin 'org.apache.kafka.connect.tools.MockConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
Conclusion:
I am at a loss. I've been testing and researching for a couple of hours, and I'm not exactly sure what question to ask, so thank you for reading if you've gotten this far. If you notice anything glaring that I may have done wrong in code or in method (e.g. packaging the jar), or think I should provide more context or console logs or anything else, please let me know. Thank you again.
As pointed out by @Randall Hauch, my properties file had hidden characters in it because it was a rich-text document. I fixed this by duplicating the connect-file-sink.properties file provided with Kafka, which is a plain text document, and then renaming and editing that duplicate for my Neo4j sink properties.
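For context on why that matters: java.util.Properties treats a backslash followed by u as the start of a \uxxxx escape and throws exactly this IllegalArgumentException when the next four characters are not hex digits — which is the kind of sequence stray rich-text control characters can produce. A minimal, self-contained reproduction (the property value is made up purely to trigger the error):
```
import java.io.StringReader;
import java.util.Properties;

public class MalformedEscapeDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The value contains "\u" followed by non-hex characters, so load()
        // fails with: java.lang.IllegalArgumentException: Malformed \uxxxx encoding.
        props.load(new StringReader("uri=bolt:\\usr\\local"));
    }
}
```
Saving the connector config as plain text (as the duplicated connect-file-sink.properties is) avoids those hidden sequences.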

VertX3 verticle deployment: CompletionHandler not triggered

When I run the following piece of code:
import io.vertx.core.*;
import io.vertx.core.eventbus.MessageConsumer;
import io.vertx.core.json.JsonObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
class TestVerticle extends AbstractVerticle {
final Logger logger = LoggerFactory.getLogger(TestVerticle.class.getName());
public static final String ADDRESS = "oot.test";
public void start(Future<Void> startFuture) {
logger.info("starting test verticle");
MessageConsumer<JsonObject> consumer = vertx.eventBus().consumer(ADDRESS);
consumer.handler(message -> {
final JsonObject body = message.body();
logger.info("received: " + body);
JsonObject replyMessage = body.copy();
replyMessage.put("status", "processed");
message.reply(replyMessage);
});
logger.info("started test verticle");
}
}
public class Scratchpad {
private final static Logger logger = LoggerFactory.getLogger(Scratchpad.class);
public static void main(String[] args) throws InterruptedException {
Vertx vertx = Vertx.vertx();
logger.info("deploying test verticle");
Handler<AsyncResult<String>> completionHandler = result -> {
System.out.println("done");
if (result.succeeded()) {
logger.info("deployment result: " + result.result());
} else {
logger.error("failed to deploy: " + result);
}
};
TestVerticle testVerticle = new TestVerticle();
vertx.deployVerticle(testVerticle, completionHandler);
logger.info("deployment completed");
}
}
I expect the CompletionHandler content to be executed and therefore I should get something sent to stdout (at least "done", although the logging should be working as well) but nothing happens. All the other logging information shows up correctly on my screen. What am I doing wrong?
After:
logger.info("started test verticle");
Add:
startFuture.complete();
See the Asynchronous Verticle start and stop section of the docs about the asynchronous start method:
This version of the method takes a Future as a parameter. When the method returns the verticle will not be considered deployed.
Some time later, after you've done everything you need to do (e.g. start other verticles), you can call complete on the Future (or fail) to signal that you're done.
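Putting the answer together, the start method of TestVerticle ends up like this (only the last line is new; everything else is as posted):
```
public void start(Future<Void> startFuture) {
    logger.info("starting test verticle");
    MessageConsumer<JsonObject> consumer = vertx.eventBus().consumer(ADDRESS);
    consumer.handler(message -> {
        final JsonObject body = message.body();
        logger.info("received: " + body);
        JsonObject replyMessage = body.copy();
        replyMessage.put("status", "processed");
        message.reply(replyMessage);
    });
    logger.info("started test verticle");
    // Signal that the verticle is fully started so the deployment's
    // CompletionHandler gets invoked.
    startFuture.complete();
}
```
Without the complete() call, the Future passed to start is never completed, so Vert.x never considers the verticle deployed and never invokes the completion handler, even though the rest of the logging runs.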
