Caused by: io.grpc.StatusRuntimeException: NOT_FOUND: Resource not found

I am trying to write a test for Pub/Sub:
@Test
public void sendTopic() throws Exception {
    CustomSubscriber customSubscriber = new CustomSubscriber();
    customSubscriber.startAndWait();

    CustomPublisher customPublisher = new CustomPublisher();
    customPublisher.publish("123");
}
and:
public CustomSubscriber() {
    this.subscriptionName = SubscriptionName.create(SdkServiceConfig.s.GCP_PROJECT_ID, SdkServiceConfig.s.TOPIC_ID);
    this.receiveMsgAction = (message, consumer) -> {
        // handle incoming message, then ack/nack the received message
        System.out.println("Id : " + message.getMessageId());
        System.out.println("Data : " + message.getData().toStringUtf8());
        consumer.ack();
    };
    this.afterStopAction = new ApiFutureEmpty();
}
// [TARGET startAsync()]
public void startAndWait() throws Exception {
    Subscriber subscriber = createSubscriberWithCustomCredentials();
    subscriber.startAsync();

    // Wait for a stop signal.
    afterStopAction.get();
    subscriber.stopAsync().awaitTerminated();
}
and:
public ApiFuture<String> publish(String message) throws Exception {
    ByteString data = ByteString.copyFromUtf8(message);
    PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(data).build();
    ApiFuture<String> messageIdFuture = publisher.publish(pubsubMessage);
    ApiFutures.addCallback(messageIdFuture, new ApiFutureCallback<String>() {
        public void onSuccess(String messageId) {
            System.out.println("published with message id: " + messageId);
        }

        public void onFailure(Throwable t) {
            System.out.println("failed to publish: " + t);
        }
    });
    return messageIdFuture;
}
/**
 * Example of creating a {@code Publisher}.
 */
// [TARGET newBuilder(TopicName)]
// [VARIABLE "my_project"]
// [VARIABLE "my_topic"]
public void createPublisher(String projectId, String topicId) throws Exception {
    TopicName topic = TopicName.create(projectId, topicId);
    try {
        publisher = createPublisherWithCustomCredentials(topic);
    } finally {
        // When finished with the publisher, make sure to shutdown to free up resources.
        publisher.shutdown();
    }
}
When I run the code I get this error:
Caused by: io.grpc.StatusRuntimeException: NOT_FOUND: Resource not found (resource=add-partner-request).
What am I missing?

Whatever entity is named "add-partner-request" was not successfully created or does not belong to the project. If "add-partner-request" is the topic, then you actually need to create the topic; the line TopicName.create(projectId, topicId) is not sufficient for creating the topic itself. Typically, one would create the topic in the Cloud Pub/Sub portion of the Cloud console or via a gcloud command, e.g.,
gcloud pubsub topics create add-partner-request
Ensure that the project you are logged into in the console is the one used in the code. You should also set the project explicitly when creating the topic via the --project flag or verify that the default project is the correct one:
gcloud config list --format='text(core.project)'
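For example, to create the topic under an explicit project (replace my-project-id with your own project ID):
gcloud pubsub topics create add-partner-request --project=my-project-id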
For tests, it is typical to create and delete resources in code. For example, to create a topic:
Topic topic = null;
ProjectTopicName topicName = ProjectTopicName.of(projectId, topicId);
TopicAdminClient topicAdminClient = TopicAdminClient.create();
try {
    topic = topicAdminClient.createTopic(topicName);
} catch (ApiException e) {
    System.out.println("Issue creating topic!");
}
If "add-partner-request" is the subscription name, then the same things apply. The gcloud command would change a bit:
gcloud pubsub subscriptions create add-partner-request --topic=<topic name>
The command to create the subscription in Java would be as follows:
Subscription subscription = null;
ProjectTopicName topicName = ProjectTopicName.of(projectId, topicId);
ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(projectId, subscriptionId);
SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create();
try {
    subscription = subscriptionAdminClient.createSubscription(
        subscriptionName, topicName, PushConfig.newBuilder().build(), 600);
} catch (ApiException e) {
    System.out.println("Issue creating subscription!");
}

I'm assuming TOPIC_ID is the name of your topic; you actually need to reference a subscription. You can easily create a subscription from the GCP console, then reference that name in SubscriptionName.create(project, yourSubscriptionName).
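A minimal sketch of that change, assuming a subscription named "my-subscription" already exists and using the same client version as the question (the one that provides SubscriptionName.create and Subscriber.defaultBuilder):
this.subscriptionName = SubscriptionName.create(SdkServiceConfig.s.GCP_PROJECT_ID, "my-subscription"); // a subscription name, not the topic
Subscriber subscriber = Subscriber.defaultBuilder(subscriptionName, receiveMsgAction).build();
subscriber.startAsync();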

I think that you forgot to create a topic inside your project with the name "add-partner-request".
You can create it using the following code:
try (TopicAdminClient topicAdminClient = TopicAdminClient.create()) {
    // projectId <= unique project identifier, e.g. "my-project-id"
    TopicName topicName = TopicName.create(projectId, "add-partner-request");
    Topic topic = topicAdminClient.createTopic(topicName);
    return topic;
}

Related

Akka actorSelection(String) does not work on TCP

I'm trying to experiment with Akka and to use actors on different PCs. To start, I'm trying to connect to actors in the same JVM and in the same ActorSystem, but using a remote selection. However, I'm failing even at this simple task. The following is the minimized code that shows my problem. I believe I'm programmatically adding all the needed configuration. When I run the code as it is, using the line marked with /*works*/, I get B received dd; if I swap /*works*/ with /*fails*/, I get [INFO]..[akka://N1/deadLetters]..was not delivered.
What am I doing wrong? How can I access B using the remote selector?
class A extends AbstractActor {
    public Receive createReceive() {
        return receiveBuilder()
            .match(String.class, s -> {
                ActorSelection selection = context().actorSelection(
                    /*fails*/ //"akka.tcp://N1@127.0.0.1:2500/user/B"
                    /*works*/ "akka://N1/user/B"
                );
                selection.tell("dd", self());
            })
            .build();
    }
}
class B extends AbstractActor {
    public Receive createReceive() {
        return receiveBuilder()
            .match(String.class, s -> {
                System.out.println("B received " + s);
            })
            .build();
    }
}
public class AkkaS1 {
    public static void main(String[] args) {
        Config config =
            ConfigFactory
                .parseString("akka.remote.netty.tcp.port = 2500")
                .withFallback(
                    ConfigFactory.parseString("akka.remote.netty.hostname = 127.0.0.1"))
                .withFallback(ConfigFactory.load());
        ActorSystem s = ActorSystem.create("N1", config);
        ActorRef a = s.actorOf(Props.create(A.class, () -> new A()), "A");
        s.actorOf(Props.create(B.class, () -> new B()), "B");
        a.tell("Please discover b", ActorRef.noSender());
        System.out.println(">>> Press ENTER to exit <<<");
        try {
            System.in.read();
        } catch (IOException ioe) {
        } finally {
            s.terminate();
        }
    }
}
I believe I'm programmatically adding all the needed configuration.
You appear to be missing a couple of settings: akka.actor.provider = remote and akka.remote.enabled-transports = ["akka.remote.netty.tcp"]. Also, change akka.remote.netty.hostname to akka.remote.netty.tcp.hostname.
According to the documentation, the minimum configuration is the following:
akka {
  actor {
    provider = remote
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2500
    }
  }
}
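Applied to your programmatic setup, the fix would look something like this (a sketch; the property keys simply mirror the config above):
Config config = ConfigFactory
    .parseString("akka.actor.provider = remote")
    .withFallback(ConfigFactory.parseString("akka.remote.enabled-transports = [\"akka.remote.netty.tcp\"]"))
    .withFallback(ConfigFactory.parseString("akka.remote.netty.tcp.hostname = 127.0.0.1"))
    .withFallback(ConfigFactory.parseString("akka.remote.netty.tcp.port = 2500"))
    .withFallback(ConfigFactory.load());
ActorSystem s = ActorSystem.create("N1", config);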

Sending PubSub message manually in Dataflow

How can I send a Pub/Sub message manually (that is to say, without using a PubsubIO) in Dataflow?
Importing (via Maven) google-cloud-dataflow-java-sdk-all 2.5.0 already imports a version of com.google.pubsub.v1 for which I was unable to find an easy way to send messages to a Pub/Sub topic (this version doesn't, for instance, allow one to manipulate Publisher instances, which is the way described in the official documentation).
Would you consider using PubsubUnboundedSink? Quick example:
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider.StaticValueProvider;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubClient;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubJsonClient;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubUnboundedSink;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubClient.TopicPath;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage;

public class PubsubTest {
    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
            .as(DataflowPipelineOptions.class);

        // writes message to "output_topic"
        TopicPath topic = PubsubClient.topicPathFromName(options.getProject(), "output_topic");

        Pipeline p = Pipeline.create(options);
        p
            .apply("input string", Create.of("This is just a message"))
            .apply("convert to Pub/Sub message", ParDo.of(new DoFn<String, PubsubMessage>() {
                @ProcessElement
                public void processElement(ProcessContext c) {
                    c.output(new PubsubMessage(c.element().getBytes(), null));
                }
            }))
            .apply("write to topic", new PubsubUnboundedSink(
                PubsubJsonClient.FACTORY,
                StaticValueProvider.of(topic), // topic
                "timestamp",                   // timestamp attribute
                "id",                          // ID attribute
                5                              // number of shards
            ));
        p.run();
    }
}
Here's a way I found browsing https://github.com/GoogleCloudPlatform/cloud-pubsub-samples-java/blob/master/dataflow/src/main/java/com/google/cloud/dataflow/examples/StockInjector.java:
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpBackOffIOExceptionHandler;
import com.google.api.client.http.HttpBackOffUnsuccessfulResponseHandler;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.HttpUnsuccessfulResponseHandler;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.client.util.ExponentialBackOff;
import com.google.api.client.util.Sleeper;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.model.PublishRequest;
import com.google.api.services.pubsub.model.PubsubMessage;
import com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;

public class PubsubManager {
    private static final Logger logger = LoggerFactory.getLogger(PubsubManager.class);
    private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
    private static final Pubsub pubsub = createPubsubClient();

    public static class RetryHttpInitializerWrapper implements HttpRequestInitializer {
        // Intercepts the request for filling in the "Authorization"
        // header field, as well as recovering from certain unsuccessful
        // error codes wherein the Credential must refresh its token for a
        // retry.
        private final GoogleCredential wrappedCredential;

        // A sleeper; you can replace it with a mock in your test.
        private final Sleeper sleeper;

        private RetryHttpInitializerWrapper(GoogleCredential wrappedCredential) {
            this(wrappedCredential, Sleeper.DEFAULT);
        }

        // Use only for testing.
        RetryHttpInitializerWrapper(
                GoogleCredential wrappedCredential, Sleeper sleeper) {
            this.wrappedCredential = Preconditions.checkNotNull(wrappedCredential);
            this.sleeper = sleeper;
        }

        @Override
        public void initialize(HttpRequest request) {
            final HttpUnsuccessfulResponseHandler backoffHandler =
                new HttpBackOffUnsuccessfulResponseHandler(
                    new ExponentialBackOff())
                    .setSleeper(sleeper);
            request.setInterceptor(wrappedCredential);
            request.setUnsuccessfulResponseHandler(
                new HttpUnsuccessfulResponseHandler() {
                    @Override
                    public boolean handleResponse(HttpRequest request,
                                                  HttpResponse response,
                                                  boolean supportsRetry)
                            throws IOException {
                        if (wrappedCredential.handleResponse(request,
                                                             response,
                                                             supportsRetry)) {
                            // If credential decides it can handle it, the
                            // return code or message indicated something
                            // specific to authentication, and no backoff is
                            // desired.
                            return true;
                        } else if (backoffHandler.handleResponse(request,
                                                                 response,
                                                                 supportsRetry)) {
                            // Otherwise, we defer to the judgement of our
                            // internal backoff handler.
                            logger.info("Retrying " + request.getUrl());
                            return true;
                        } else {
                            return false;
                        }
                    }
                });
            request.setIOExceptionHandler(new HttpBackOffIOExceptionHandler(
                new ExponentialBackOff()).setSleeper(sleeper));
        }
    }

    /**
     * Creates a Cloud Pub/Sub client.
     */
    private static Pubsub createPubsubClient() {
        try {
            HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
            GoogleCredential credential = GoogleCredential.getApplicationDefault();
            HttpRequestInitializer initializer =
                new RetryHttpInitializerWrapper(credential);
            return new Pubsub.Builder(transport, JSON_FACTORY, initializer).build();
        } catch (IOException | GeneralSecurityException e) {
            logger.error("Could not create Pubsub client: " + e);
        }
        return null;
    }

    /**
     * Publishes the given message to a Cloud Pub/Sub topic.
     */
    public static void publishMessage(String message, String outputTopic) {
        int maxLogMessageLength = 200;
        if (message.length() < maxLogMessageLength) {
            maxLogMessageLength = message.length();
        }
        logger.info("Received ...." + message.substring(0, maxLogMessageLength));

        // Publish message to Pubsub.
        PubsubMessage pubsubMessage = new PubsubMessage();
        pubsubMessage.encodeData(message.getBytes());
        PublishRequest publishRequest = new PublishRequest();
        publishRequest.setMessages(Collections.singletonList(pubsubMessage));
        try {
            pubsub.projects().topics().publish(outputTopic, publishRequest).execute();
        } catch (java.io.IOException e) {
            logger.error("Stuff happened in pubsub: " + e);
        }
    }
}
You can send a Pub/Sub message using the PubsubIO writeMessages method.
Dataflow pipeline steps:
Pipeline p = Pipeline.create(options);
p.apply("Transformer1", ParDo.of(new Fn.method1()))
    .apply("Transformer2", ParDo.of(new Fn.method2()))
    .apply("PubsubMessageSend", PubsubIO.writeMessages().to(PubSubConfig.getTopic(options.getProject(), options.getpubsubTopic())));
Define the project name and the Pub/Sub topic to which you want to send messages in the PipelineOptions.
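For reference, a sketch of the custom options interface that snippet assumes (the getpubsubTopic accessor is hypothetical and only mirrors the call above):
public interface MyPipelineOptions extends DataflowPipelineOptions {
    @Description("Pub/Sub topic name to publish to")
    String getpubsubTopic();
    void setpubsubTopic(String topic);
}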

Custom DNS Server using Netty

I have been trying to write a custom DNS server using Netty. I have used the DatagramDnsQueryDecoder to parse the incoming DNS query UDP packets, but I cannot figure out how to send a response to resolve a domain name. I have received the DatagramDnsQuery object from the handler but cannot find a way to initialize a DatagramDnsResponse, add a test DNS record, and send it back to the client through the DatagramDnsResponseEncoder.
Here is what I have done so far:
public class DNSListen {
    public static void main(String[] args) {
        final NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(group)
                .channel(NioDatagramChannel.class)
                .handler(new ChannelInitializer<NioDatagramChannel>() {
                    @Override
                    protected void initChannel(NioDatagramChannel nioDatagramChannel) throws Exception {
                        nioDatagramChannel.pipeline().addLast(new DatagramDnsQueryDecoder());
                        nioDatagramChannel.pipeline().addLast(new DatagramDnsResponseEncoder());
                        nioDatagramChannel.pipeline().addLast(new DNSMessageHandler());
                    }
                })
                .option(ChannelOption.SO_BROADCAST, true);
            ChannelFuture future = bootstrap.bind(53).sync();
            future.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            group.shutdownGracefully();
        }
    }
}
Here is the handler for DNS messages:
public class DNSMessageHandler extends SimpleChannelInboundHandler<DatagramDnsQuery> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, DatagramDnsQuery dnsMessage) throws Exception {
        System.out.println(dnsMessage.content());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
    }
}
I get output for multiple DNS requests when I run this and set the system DNS to 127.0.0.1.
But I cannot find a way to send a dummy DNS response so that the IP address is resolved for the requested domain. (I think I should initialize a DatagramDnsResponse object and write it back to the user, but is that correct? If so, how do I initialize it with a dummy IP as the resolved IP?)
Am I on the wrong path? Can someone please direct me to the correct one. Thanks.
It's pretty easy actually: you just need an instance of DatagramDnsResponse, then add a DNS record with the addRecord method; that's all.
Here is an example for an SRV answer:
DatagramDnsResponse response = new DatagramDnsResponse(msg.recipient(), msg.sender(), msg.id());
ByteBuf buf = Unpooled.buffer();
buf.writeShort(0);   // priority
buf.writeShort(0);   // weight
buf.writeShort(993); // port
encodeName("my.domain.tld", buf); // target (special encoding: https://tools.ietf.org/html/rfc1101)
response.addRecord(DnsSection.ANSWER, new DefaultDnsRawRecord(
    "_myprotocol._tcp.domain.tld." /* requested domain */, DnsRecordType.SRV, 30 /* ttl */, buf));
ctx.writeAndFlush(response);
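Since the question asks for a dummy IP, here is a sketch of an A-record answer along the same lines (this part is an assumption, not from the original answer; the rdata of an A record is just the four bytes of the IPv4 address):
// Answer an A query with a dummy IPv4 address.
DatagramDnsResponse response = new DatagramDnsResponse(msg.recipient(), msg.sender(), msg.id());
DnsQuestion question = msg.recordAt(DnsSection.QUESTION);
response.addRecord(DnsSection.QUESTION, question); // echo the question back
ByteBuf ip = Unpooled.wrappedBuffer(new byte[] {127, 0, 0, 1}); // dummy IP 127.0.0.1
response.addRecord(DnsSection.ANSWER,
    new DefaultDnsRawRecord(question.name(), DnsRecordType.A, 60 /* ttl */, ip));
ctx.writeAndFlush(response);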
Edit
To encode a name use this function (I extracted it from netty):
public void encodeName(String name, ByteBuf buf) throws Exception {
    if (".".equals(name)) {
        buf.writeByte(0);
        return;
    }
    final String[] labels = name.split("\\.");
    for (String label : labels) {
        final int labelLen = label.length();
        if (labelLen == 0)
            break;
        buf.writeByte(labelLen);
        ByteBufUtil.writeAscii(buf, label);
    }
    buf.writeByte(0);
}
To decode a SRV record you could use this snippet:
DnsRawRecord record = msg.recordAt(DnsSection.ANSWER, 0); // msg is a DnsResponse
if (record.type() == DnsRecordType.SRV) {
    ByteBuf buf = record.content();
    System.out.println("\tPriority: " + buf.readUnsignedShort());
    System.out.println("\tWeight: " + buf.readUnsignedShort());
    System.out.println("\tPort: " + buf.readUnsignedShort());
    System.out.println("\tTarget:" + DefaultDnsRecordDecoder.decodeName(buf));
}

Application using Java SDK Client for Hyperledger Fabric V1.0 is waiting indefinitely when invoking chaincode

I have my Hyperledger Fabric V1.0 network up and running by following the steps Building Your First Network.
And now I am able to create a channel, install/instantiate/invoke/query chaincode, etc.
Now I am trying to create some assets and query the same using Java SDK Client.
I have created the following methods to invoke and query the chaincode from my java application.
void createChannel() throws InvalidArgumentException, TransactionException, IOException, ProposalException {
    Properties ordererProperties = getOrdererProperties("orderer.example.com");
    ordererProperties.put("grpc.NettyChannelBuilderOption.keepAliveTime", new Object[] {5L, TimeUnit.MINUTES});
    ordererProperties.put("grpc.NettyChannelBuilderOption.keepAliveTimeout", new Object[] {8L, TimeUnit.SECONDS});
    Orderer orderer = client.newOrderer("orderer.example.com", "grpcs://192.168.99.100:7050", ordererProperties);

    Properties peerProperties = getPeerProperties("peer0.org1.example.com"); // test properties for peer.. if any.
    if (peerProperties == null) {
        peerProperties = new Properties();
    }
    peerProperties.put("grpc.NettyChannelBuilderOption.maxInboundMessageSize", 9000000);
    Peer peer = client.newPeer("peer0.org1.example.com", "grpcs://192.168.99.100:7051", peerProperties);

    channel = client.newChannel("testchannel");
    channel.addOrderer(orderer);
    channel.addPeer(peer);
    channel.initialize();
}
void createTransactionalProposal() {
    proposalRequest = client.newTransactionProposalRequest();
    final ChaincodeID chaincodeID = ChaincodeID.newBuilder()
        .setName("asset_test")
        .setVersion("1.0")
        .setPath("github.com/myuser/myfabricrepo/asset_chain")
        .build();
    proposalRequest.setChaincodeID(chaincodeID);
    proposalRequest.setFcn("set");
    proposalRequest.setProposalWaitTime(TimeUnit.SECONDS.toMillis(1));
    proposalRequest.setArgs(new String[]{"a1", "a1_val"});
}
void sendProposal() throws ProposalException, InvalidArgumentException, InterruptedException, ExecutionException {
    final Collection<ProposalResponse> responses = channel.sendTransactionProposal(proposalRequest);
    CompletableFuture<BlockEvent.TransactionEvent> txFuture = channel.sendTransaction(responses, client.getUserContext());
    BlockEvent.TransactionEvent event = txFuture.get(); // waiting indefinitely
    System.out.println(event.toString());
    //query();
}
void query() throws InvalidArgumentException, ProposalException {
    final ChaincodeID chaincodeID = ChaincodeID.newBuilder()
        .setName("asset_test")
        .setVersion("1.0")
        .setPath("github.com/myuser/myfabricrepo/asset_chain")
        .build();
    QueryByChaincodeRequest queryByChaincodeRequest = client.newQueryProposalRequest();
    queryByChaincodeRequest.setArgs(new String[] {"a1"});
    queryByChaincodeRequest.setFcn("get");
    queryByChaincodeRequest.setChaincodeID(chaincodeID);

    Map<String, byte[]> tm2 = new HashMap<>();
    tm2.put("HyperLedgerFabric", "QueryByChaincodeRequest:JavaSDK".getBytes(UTF_8));
    tm2.put("method", "QueryByChaincodeRequest".getBytes(UTF_8));
    queryByChaincodeRequest.setTransientMap(tm2);

    Collection<ProposalResponse> queryProposals = channel.queryByChaincode(queryByChaincodeRequest, channel.getPeers());
    for (ProposalResponse proposalResponse : queryProposals) {
        if (!proposalResponse.isVerified()
                || proposalResponse.getStatus() != ProposalResponse.Status.SUCCESS) {
            System.out.println("Failed query proposal from peer " + proposalResponse.getPeer().getName() + " status: "
                + proposalResponse.getStatus() + ". Messages: " + proposalResponse.getMessage()
                + ". Was verified : " + proposalResponse.isVerified());
        } else {
            String payload = proposalResponse.getProposalResponse().getResponse().getPayload()
                .toStringUtf8();
            System.out.printf("\nQuery payload of b from peer %s returned %s", proposalResponse.getPeer().getName(),
                payload);
            //assertEquals(payload, expect);
        }
    }
}
I am able to create an asset by calling:
t.createTransactionalProposal();
t.sendProposal();
But the line BlockEvent.TransactionEvent event = txFuture.get(); leaves the application waiting indefinitely, even after the transaction has been committed to the ledger. Why does it behave like this?
Once I force exit and run the query() method, it lists the asset.
I ran into a similar issue to this, and many of the answers around the net are missing a key part of the code: assigning the EventHub to the channel. I added this before initializing the channel (which in this case would be in the createChannel method), and my transactions were then processed successfully:
channel.addEventHub(client.newEventHub("eventhub0", "grpc://localhost:7053"));
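For instance, placed in the question's createChannel() method, it would look roughly like this (the event hub name and address are assumptions; match them to your network's configuration):
channel = client.newChannel("testchannel");
channel.addOrderer(orderer);
channel.addPeer(peer);
channel.addEventHub(client.newEventHub("eventhub0", "grpc://192.168.99.100:7053")); // before initialize()
channel.initialize();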

error: message from Actor to Actor was not delivered. [1] dead letters encountered. Distributed pub-sub across clusters not working

I'm trying to make distributed pub-sub work across different cluster systems, but it's not working no matter what I try.
All I'm trying to do is create a simple example where:
1) I create a topic, say "content".
2) One node, say in JVM A, creates the topic, subscribes to it, and has a publisher that publishes to it.
3) In a different node, say JVM B on a different port, I create a subscriber.
4) When I send a message to the topic from JVM A, I want the subscriber on JVM B to receive it too, as it is subscribed to the same topic.
Any help would be greatly appreciated, or a simple working example of distributed pub-sub with subscribers and publishers in different cluster systems on different ports, in Java.
Here is the code for app1 and its config file.
public class App1 {
    public static void main(String[] args) {
        System.setProperty("akka.remote.netty.tcp.port", "2551");
        ActorSystem clusterSystem = ActorSystem.create("ClusterSystem");
        ClusterClientReceptionist clusterClientReceptionist1 = ClusterClientReceptionist.get(clusterSystem);
        ActorRef subscriber1 = clusterSystem.actorOf(Props.create(Subscriber.class), "subscriber1");
        clusterClientReceptionist1.registerSubscriber("content", subscriber1);
        ActorRef publisher1 = clusterSystem.actorOf(Props.create(Publisher.class), "publisher1");
        clusterClientReceptionist1.registerSubscriber("content", publisher1);
        publisher1.tell("testMessage1", ActorRef.noSender());
    }
}
app1.conf
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  stdout-loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }

  remote {
    log-remote-lifecycle-events = off
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2551
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem@127.0.0.1:2551"
    ]
    auto-down-unreachable-after = 10s
  }

  akka.extensions = ["akka.cluster.pubsub.DistributedPubSub",
    "akka.contrib.pattern.ClusterReceptionistExtension"]

  akka.cluster.pub-sub {
    name = distributedPubSubMediator
    role = ""
    routing-logic = random
    gossip-interval = 1s
    removed-time-to-live = 120s
    max-delta-elements = 3000
    use-dispatcher = ""
  }

  akka.cluster.client.receptionist {
    name = receptionist
    role = ""
    number-of-contacts = 3
    response-tunnel-receive-timeout = 30s
    use-dispatcher = ""
    heartbeat-interval = 2s
    acceptable-heartbeat-pause = 13s
    failure-detection-interval = 2s
  }
}
Code for app2 and its config file:
public class App {
    public static Set<ActorPath> initialContacts() {
        return new HashSet<ActorPath>(Arrays.asList(
            ActorPaths.fromString("akka.tcp://ClusterSystem@127.0.0.1:2551/system/receptionist")));
    }

    public static void main(String[] args) {
        System.setProperty("akka.remote.netty.tcp.port", "2553");
        ActorSystem clusterSystem = ActorSystem.create("ClusterSystem2");
        ClusterClientReceptionist clusterClientReceptionist2 = ClusterClientReceptionist.get(clusterSystem);
        final ActorRef clusterClient = clusterSystem.actorOf(ClusterClient.props(ClusterClientSettings.create(
            clusterSystem).withInitialContacts(initialContacts())), "client");
        ActorRef subscriber2 = clusterSystem.actorOf(Props.create(Subscriber.class), "subscriber2");
        clusterClientReceptionist2.registerSubscriber("content", subscriber2);
        ActorRef publisher2 = clusterSystem.actorOf(Props.create(Publisher.class), "publisher2");
        publisher2.tell("testMessage2", ActorRef.noSender());
        clusterClient.tell(new ClusterClient.Send("/user/publisher1", "hello", true), null);
    }
}
app2.conf
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  stdout-loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }

  remote {
    log-remote-lifecycle-events = off
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2553
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem@127.0.0.1:2553"
    ]
    auto-down-unreachable-after = 10s
  }

  akka.extensions = ["akka.cluster.pubsub.DistributedPubSub",
    "akka.contrib.pattern.ClusterReceptionistExtension"]

  akka.cluster.pub-sub {
    name = distributedPubSubMediator
    role = ""
    routing-logic = random
    gossip-interval = 1s
    removed-time-to-live = 120s
    max-delta-elements = 3000
    use-dispatcher = ""
  }

  akka.cluster.client.receptionist {
    name = receptionist
    role = ""
    number-of-contacts = 3
    response-tunnel-receive-timeout = 30s
    use-dispatcher = ""
    heartbeat-interval = 2s
    acceptable-heartbeat-pause = 13s
    failure-detection-interval = 2s
  }
}
The Publisher and Subscriber classes, given below, are the same for both applications.
Publisher:
public class Publisher extends UntypedActor {
    private final ActorRef mediator =
        DistributedPubSub.get(getContext().system()).mediator();

    @Override
    public void onReceive(Object msg) throws Exception {
        if (msg instanceof String) {
            mediator.tell(new DistributedPubSubMediator.Publish("events", msg), getSelf());
        } else {
            unhandled(msg);
        }
    }
}
Subscriber:
public class Subscriber extends UntypedActor {
    private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    public Subscriber() {
        ActorRef mediator = DistributedPubSub.get(getContext().system()).mediator();
        mediator.tell(new DistributedPubSubMediator.Subscribe("events", getSelf()), getSelf());
    }

    public void onReceive(Object msg) throws Throwable {
        if (msg instanceof String) {
            log.info("Got: {}", msg);
        } else if (msg instanceof DistributedPubSubMediator.SubscribeAck) {
            log.info("subscribing");
        } else {
            unhandled(msg);
        }
    }
}
I got this error in the receiver-side app while running both apps (dead letters encountered):
[ClusterSystem-akka.actor.default-dispatcher-21] INFO akka.actor.RepointableActorRef - Message [java.lang.String] from Actor[akka://ClusterSystem/system/receptionist/akka.tcp%3A%2F%2FClusterSystem2%40127.0.0.1%3A2553%2FdeadLetters#188707926] to Actor[akka://ClusterSystem/system/distributedPubSubMediator#1119990682] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
In the sender-side app, a successful send is displayed in the log:
[ClusterSystem2-akka.actor.default-dispatcher-22] DEBUG akka.cluster.client.ClusterClient - Sending buffered messages to receptionist
Using the ClusterClient in that way does not really make sense and does not have anything to do with distributed pub-sub. Since both of your nodes are part of the cluster, you can just use the distributed pub-sub API directly.
Here is a simple main, including config, that creates a two-node cluster using your exact Publisher and Subscriber actors and works as expected:
public static void main(String[] args) throws Exception {
    final Config config = ConfigFactory.parseString(
        "akka.actor.provider=cluster\n" +
        "akka.remote.netty.tcp.port=2551\n" +
        "akka.cluster.seed-nodes = [ \"akka.tcp://ClusterSystem@127.0.0.1:2551\"]\n");

    ActorSystem node1 = ActorSystem.create("ClusterSystem", config);
    ActorSystem node2 = ActorSystem.create("ClusterSystem",
        ConfigFactory.parseString("akka.remote.netty.tcp.port=2552")
            .withFallback(config));

    // wait a bit for the cluster to form
    Thread.sleep(3000);

    ActorRef subscriber = node1.actorOf(
        Props.create(Subscriber.class),
        "subscriber");

    ActorRef publisher = node2.actorOf(
        Props.create(Publisher.class),
        "publisher");

    // wait a bit for the subscription to be gossiped
    Thread.sleep(3000);

    publisher.tell("testMessage1", ActorRef.noSender());
}
Note that distributed pub-sub does not give any guarantees of delivery, so if you send a message before the mediators have gotten in contact with each other, the message will simply be lost (hence the Thread.sleep statements, which are of course not something you should do in actual code).
I think the issue is that your actor systems have different names, ClusterSystem and ClusterSystem2. At least, I was having the same issue because I had two different services in the cluster but had named the system in each service differently.
