Samples with JmDNS - java

I've been able to get the samples that come with JmDNS to compile and run, however I can't get any of the classes to discover my services.
I'm running a Windows environment with multiple PCs running VNC, SSH and Apache, and I've been trying to get JmDNS to discover at least one of these...
What I ideally want is to be able to detect all running VNC servers on my network. Is there some sort of client and server pairing where I can only discover a service if I've registered it using JmDNS?
Any help getting some results out of the samples would be appreciated; the documentation isn't much help.
import java.io.IOException;
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.jmdns.JmDNS;
import javax.jmdns.ServiceEvent;
import javax.jmdns.ServiceListener;

/**
 * Sample Code for Service Discovery using JmDNS and a ServiceListener.
 * <p>
 * Run the main method of this class. It listens for HTTP services and lists all changes on System.out.
 *
 * @author Werner Randelshofer
 */
public class DiscoverServices {

    static class SampleListener implements ServiceListener {
        @Override
        public void serviceAdded(ServiceEvent event) {
            System.out.println("Service added : " + event.getName() + "." + event.getType());
        }

        @Override
        public void serviceRemoved(ServiceEvent event) {
            System.out.println("Service removed : " + event.getName() + "." + event.getType());
        }

        @Override
        public void serviceResolved(ServiceEvent event) {
            System.out.println("Service resolved: " + event.getInfo());
        }
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        try {
            // Activate these lines to see log messages of JmDNS
            boolean log = false;
            if (log) {
                Logger logger = Logger.getLogger(JmDNS.class.getName());
                ConsoleHandler handler = new ConsoleHandler();
                logger.addHandler(handler);
                logger.setLevel(Level.FINER);
                handler.setLevel(Level.FINER);
            }

            final JmDNS jmdns = JmDNS.create();
            String type = "_http._tcp.local.";
            if (args.length > 0) {
                type = args[0];
            }
            jmdns.addServiceListener(type, new SampleListener());

            System.out.println("Press q and Enter, to quit");
            int b;
            while ((b = System.in.read()) != -1 && (char) b != 'q') {
                /* Stub */
            }
            jmdns.close();
            System.out.println("Done");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

To discover a specific type of service, you need to know the correct service type name; check out the DNS SRV (RFC 2782) Service Types registry:
String bonjourServiceType = "_http._tcp.local.";
JmDNS bonjourService = JmDNS.create();
// bonjourServiceListener is your ServiceListener implementation (e.g. the SampleListener above)
bonjourService.addServiceListener(bonjourServiceType, bonjourServiceListener);
ServiceInfo[] serviceInfos = bonjourService.list(bonjourServiceType);
for (ServiceInfo info : serviceInfos) {
    System.out.println("## resolve service " + info.getName() + " : " + info.getURL());
}
bonjourService.close();
For VNC, use _rfb._tcp.local.
For SSH, use _ssh._tcp.local.
For Apache, use _http._tcp.local.
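Bear in mind that mDNS/DNS-SD can only discover services that actually advertise themselves (via Bonjour, Avahi, or a JmDNS registration on another machine); a stock VNC, SSH or Apache install on Windows announces nothing, which is the most likely reason the samples stay silent. As an illustration, here is a minimal sketch for listing advertised VNC servers, assuming a JmDNS version that provides list(type, timeout); the 6-second timeout is an arbitrary choice:
import javax.jmdns.JmDNS;
import javax.jmdns.ServiceInfo;

public class DiscoverVncServers {
    public static void main(String[] args) throws Exception {
        JmDNS jmdns = JmDNS.create();
        // Blocks while mDNS answers arrive; only servers that actually
        // advertise _rfb._tcp will ever show up here.
        ServiceInfo[] vncServers = jmdns.list("_rfb._tcp.local.", 6000);
        for (ServiceInfo info : vncServers) {
            System.out.println(info.getName() + " -> " + info.getURL());
        }
        jmdns.close();
    }
}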

Using kinesis async client AWS SDK 2 with Java

I am trying to use the KinesisAsyncClient as described in https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-kinesis-src-main-java-com-example-kinesis-KinesisStreamRxJavaEx.java.html
I am on macOS and I have configured the following dependencies for the async HTTP client:
'software.amazon.awssdk:netty-nio-client:2.16.101'
'software.amazon.awssdk:kinesis:2.16.99'
When I run the async example it fails with:
Caused by: java.lang.NoClassDefFoundError: io/netty/internal/tcnative/SSLPrivateKeyMethod
at software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.newPool(AwaitCloseChannelPoolMap.java:119)
at software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.newPool(AwaitCloseChannelPoolMap.java:49)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
at software.amazon.awssdk.http.nio.netty.internal.SdkChannelPoolMap.get(SdkChannelPoolMap.java:44)
at software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.createRequestContext(NettyNioAsyncHttpClient.java:140)
at software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.execute(NettyNioAsyncHttpClient.java:121)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder$NonManagedSdkAsyncHttpClient.execute(SdkDefaultClientBuilder.java:463)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.doExecuteHttpRequest(MakeAsyncHttpRequestStage.java:219)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.executeHttpRequest(MakeAsyncHttpRequestStage.java:191)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$execute$1(MakeAsyncHttpRequestStage.java:100)
at java.base/java.util.concurrent.CompletableFuture.uniAcceptNow(CompletableFuture.java:753)
at java.base/java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:731)
at java.base/java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:2108)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:96)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:61)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:55)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.attemptExecute(AsyncRetryableStage.java:110)
In the same example folder, the sync client works fine and connects to Kinesis on AWS. Does anyone know what can be done to fix this?
I just tested this code:
package com.example.kinesis.asny;
import java.util.concurrent.CompletableFuture;
import io.reactivex.Flowable;
import software.amazon.awssdk.core.async.SdkPublisher;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;
import software.amazon.awssdk.services.kinesis.model.StartingPosition;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardEvent;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardRequest;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardResponseHandler;
import software.amazon.awssdk.utils.AttributeMap;
public class KinesisStreamRxJavaEx {
private static final String CONSUMER_ARN = "arn:aws:kinesis:us-east-1:814548xxxxxx:stream/LamDataStream/consumer/MyConsumer:162645xxxx";
public static void main(String[] args) {
KinesisAsyncClient client = KinesisAsyncClient.create();
SubscribeToShardRequest request = SubscribeToShardRequest.builder()
.consumerARN(CONSUMER_ARN)
.shardId("shardId-000000000000")
.startingPosition(StartingPosition.builder().type(ShardIteratorType.LATEST).build())
.build();
responseHandlerBuilder_RxJava(client, request).join();
System.out.println("Done");
client.close();
}
/**
* Uses RxJava via the onEventStream lifecycle method. This gives you full access to the publisher, which can be used
* to create an Rx Flowable.
*/
private static CompletableFuture<Void> responseHandlerBuilder_RxJava(KinesisAsyncClient client, SubscribeToShardRequest request) {
// snippet-start:[kinesis.java2.stream_rx_example.event_stream]
SubscribeToShardResponseHandler responseHandler = SubscribeToShardResponseHandler
.builder()
.onError(t -> System.err.println("Error during stream - " + t.getMessage()))
.onEventStream(p -> Flowable.fromPublisher(p)
.ofType(SubscribeToShardEvent.class)
.flatMapIterable(SubscribeToShardEvent::records)
.limit(1000)
.buffer(25)
.subscribe(e -> System.out.println("Record batch = " + e)))
.build();
// snippet-end:[kinesis.java2.stream_rx_example.event_stream]
return client.subscribeToShard(request, responseHandler);
}
/**
* Because a Flowable is also a publisher, the publisherTransformer method integrates nicely with RxJava. Notice that
* you must adapt to an SdkPublisher.
*/
private static CompletableFuture<Void> responseHandlerBuilder_OnEventStream_RxJava(KinesisAsyncClient client, SubscribeToShardRequest request) {
// snippet-start:[kinesis.java2.stream_rx_example.publish_transform]
SubscribeToShardResponseHandler responseHandler = SubscribeToShardResponseHandler
.builder()
.onError(t -> System.err.println("Error during stream - " + t.getMessage()))
.publisherTransformer(p -> SdkPublisher.adapt(Flowable.fromPublisher(p).limit(100)))
.build();
// snippet-end:[kinesis.java2.stream_rx_example.publish_transform]
return client.subscribeToShard(request, responseHandler);
}
}
It completed successfully.
Make sure that you specify a valid consumer ARN value; otherwise the code does not work.
You can get a valid consumer ARN using this code:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.KinesisException;
import software.amazon.awssdk.services.kinesis.model.RegisterStreamConsumerRequest;
import software.amazon.awssdk.services.kinesis.model.RegisterStreamConsumerResponse;
public class RegisterStreamConsumer {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" RegisterStreamConsumer <streamName>\n\n" +
"Where:\n" +
" streamName - The Amazon Kinesis data stream (for example, StockTradeStream)\n\n" +
"Example:\n" +
" RegisterStreamConsumer StockTradeStream\n";
// if (args.length != 1) {
// System.out.println(USAGE);
// System.exit(1);
// }
String streamARN = "arn:aws:kinesis:us-east-1:8145480xxxxx:stream/LamDataStream" ; //args[0];
Region region = Region.US_EAST_1;
KinesisClient kinesisClient = KinesisClient.builder()
.region(region)
.build();
String arnValue = regConsumer(kinesisClient, streamARN);
System.out.println(arnValue);
kinesisClient.close();
}
public static String regConsumer(KinesisClient kinesisClient, String streamARN) {
try {
RegisterStreamConsumerRequest regCon = RegisterStreamConsumerRequest.builder()
.consumerName("MyConsumer")
.streamARN(streamARN)
.build();
RegisterStreamConsumerResponse resp = kinesisClient.registerStreamConsumer(regCon);
return resp.consumer().consumerARN();
} catch (KinesisException e) {
System.err.println(e.getMessage());
System.exit(1);
}
return "";
}
}
This worked for me after adding a couple of dependencies for Netty:
implementation 'io.netty:netty-tcnative:2.0.40.Final'
implementation 'io.netty:netty-tcnative-boringssl-static:2.0.40.Final'
I followed https://docs.hazelcast.com/imdg/4.2/security/integrating-openssl.html to understand what was going on and why the Netty NIO client was having issues.
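For reference, the combined Gradle dependency block then looks something like this (the version numbers are simply the ones quoted in this post; newer releases may differ):
dependencies {
    implementation 'software.amazon.awssdk:kinesis:2.16.99'
    implementation 'software.amazon.awssdk:netty-nio-client:2.16.101'
    // The tcnative artifacts supply the native SSL classes (such as
    // io.netty.internal.tcnative.SSLPrivateKeyMethod) that were missing.
    implementation 'io.netty:netty-tcnative:2.0.40.Final'
    implementation 'io.netty:netty-tcnative-boringssl-static:2.0.40.Final'
}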

WARNING: RPC failed: Status{code=NOT_FOUND, description=Not found, cause=null} when trying to use GRPC in java and Nodejs

I am currently trying to send a request to a Node.js server from a Java client that I created, but I am getting the error shown above. I've been doing some research on it but can't seem to figure out why it is happening. The server I created in Node.js:
var grpc = require('grpc');
const protoLoader = require('@grpc/proto-loader')
const packageDefinition = protoLoader.loadSync('AirConditioningDevice.proto')
var AirConditioningDeviceproto = grpc.loadPackageDefinition(packageDefinition);
var AirConditioningDevice = [{
device_id: 1,
name: 'Device1',
location: 'room1',
status: 'On',
new_tempature: 11
}];
var server = new grpc.Server();
server.addService(AirConditioningDeviceproto.AirConditioningDevice.Airconditioning_service.service,{
currentDetails: function(call, callback){
console.log(call.request.device_id);
for(var i =0; i <AirConditioningDevice.length; i++){
console.log(call.request.device_id);
if(AirConditioningDevice[i].device_id == call.request.device_id){
console.log(call.request.device_id);
return callback(null, AirConditioningDevice [i]);
}
console.log(call.request.device_id);
}
console.log(call.request.device_id);
callback({
code: grpc.status.NOT_FOUND,
details: 'Not found'
});
},
setTemp: function(call, callback){
for(var i =0; i <AirConditioningDevice.length; i++){
if(AirConditioningDevice[i].device_id == call.request.device_id){
AirConditioningDevice[i].new_tempature == call.request.new_tempature;
return callback(null, AirConditioningDevice[i]);
}
}
callback({
code: grpc.status.NOT_FOUND,
details: 'Not found'
});
},
setOff: function(call, callback){
for(var i =0; i <AirConditioningDevice.length; i++){
if(AirConditioningDevice[i].device_id == call.request.device_id && AirConditioningDevice[i].status == 'on'){
AirConditioningDevice[i].status == 'off';
return callback(null, AirConditioningDevice[i]);
}else{
AirConditioningDevice[i].status == 'on';
return callback(null, AirConditioningDevice[i]);
}
}
callback({
code: grpc.status.NOT_FOUND,
details: 'Not found'
});
}
});
server.bind('localhost:3000', grpc.ServerCredentials.createInsecure());
server.start();
This is the client that I have created in Java:
package com.air.grpc;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import com.air.grpc.Airconditioning_serviceGrpc;
import com.air.grpc.GrpcClient;
import com.air.grpc.deviceIDRequest;
import com.air.grpc.ACResponse;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.StatusRuntimeException;
public class GrpcClient {
private static final Logger logger = Logger.getLogger(GrpcClient.class.getName());
private final ManagedChannel channel;
private final Airconditioning_serviceGrpc.Airconditioning_serviceBlockingStub blockingStub;
private final Airconditioning_serviceGrpc.Airconditioning_serviceStub asyncStub;
public GrpcClient(String host, int port) {
this(ManagedChannelBuilder.forAddress(host, port)
// Channels are secure by default (via SSL/TLS). For the example we disable TLS to avoid
// needing certificates.
.usePlaintext()
.build());
}
GrpcClient(ManagedChannel channel) {
this.channel = channel;
blockingStub = Airconditioning_serviceGrpc.newBlockingStub(channel);
asyncStub = Airconditioning_serviceGrpc.newStub(channel);
}
public void shutdown() throws InterruptedException {
channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
}
public void currentDetails(int id) {
logger.info("Will try to get device " + id + " ...");
deviceIDRequest deviceid = deviceIDRequest.newBuilder().setDeviceId(id).build();
ACResponse response;
try {
response =blockingStub.currentDetails(deviceid);
}catch(StatusRuntimeException e) {
logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
return;
}
logger.info("Device: " + response.getAirConditioning ());
}
public static void main(String[] args) throws Exception {
GrpcClient client = new GrpcClient("localhost", 3000);
try {
client.currentDetails(1);
}finally {
client.shutdown();
}
}
}
Right now the only method I have tested, because it's the most basic one, is currentDetails. As you can see, I have created an AirConditioningDevice object. I am trying to get its details by typing 1 into a textbox, which is the id, but as I said, when I send it I get the error in the title. This is the proto file that I have created:
syntax = "proto3";
package AirConditioningDevice;
option java_package = "AircondioningDevice.proto.ac";
service Airconditioning_service{
rpc currentDetails(deviceIDRequest) returns (ACResponse) {};
rpc setTemp( TempRequest ) returns (ACResponse) {};
rpc setOff(deviceIDRequest) returns (ACResponse) {};
}
message AirConditioning{
int32 device_id =1;
string name = 2;
string location = 3;
string status = 4;
int32 new_tempature = 5;
}
message deviceIDRequest{
int32 device_id =1;
}
message TempRequest {
int32 device_id = 1;
int32 new_temp = 2;
}
message ACResponse {
AirConditioning airConditioning = 1;
}
Lastly, this is everything I get back in the console:
Apr 02, 2020 4:23:29 PM AircondioningDevice.proto.ac.AirConClient currentDetails
INFO: Will try to get device 1 ...
Apr 02, 2020 4:23:30 PM AircondioningDevice.proto.ac.AirConClient currentDetails
WARNING: RPC failed: Status{code=NOT_FOUND, description=Not found, cause=null}
I don't know whether I am completely off or if the error is small. Any suggestions? One other thing: I use the same proto file in the Java client and the Node server; I don't know if that matters. Lastly, I also get this warning when I run my server: DeprecationWarning: grpc.load: Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead. I don't know if that has anything to do with it.
In your .proto file you declare deviceIDRequest with a field named device_id, but @grpc/proto-loader converts field names to camelCase by default (keepCase defaults to false). The request object your handler receives therefore exposes call.request.deviceId, and call.request.device_id is probably undefined, so every comparison fails and you fall through to the NOT_FOUND callback. Either read call.request.deviceId or pass { keepCase: true } in the options to protoLoader.loadSync.
Also note that the success callback replies with callback(null, AirConditioningDevice[i]), while the RPC is declared to return an ACResponse whose only field is airConditioning, so the matched device needs to be wrapped accordingly, as sketched below.
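Putting both fixes together, a minimal sketch of the corrected handler (this assumes the default proto-loader options, which camel-case field names; with keepCase: true you would read device_id instead):
currentDetails: function(call, callback){
    // With default @grpc/proto-loader options the proto field device_id
    // arrives camel-cased on the request object.
    var id = call.request.deviceId;
    for(var i = 0; i < AirConditioningDevice.length; i++){
        if(AirConditioningDevice[i].device_id == id){
            // ACResponse wraps the device in its airConditioning field.
            return callback(null, { airConditioning: AirConditioningDevice[i] });
        }
    }
    callback({
        code: grpc.status.NOT_FOUND,
        details: 'Not found'
    });
}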

com.ibm.msg.client.wmq.WMQConstants cannot be resolved

I am implementing an IBM MQ client using a Java class as follows:
import javax.jms.JMSException;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;
import javax.jms.JMSContext;
import javax.jms.Topic;
import javax.jms.Queue;
import javax.jms.JMSConsumer;
import javax.jms.Message;
import javax.jms.JMSProducer;
/*
* Implements both Subscriber and Publisher
*/
class SharedNonDurableSubscriberAndPublisher implements Runnable {
private Thread t;
private String threadName;
SharedNonDurableSubscriberAndPublisher( String name){
threadName = name;
System.out.println("Creating Thread:" + threadName );
}
/*
* Demonstrates shared non-durable subscription in JMS 2.0
*/
private void sharedNonDurableSubscriptionDemo(){
JmsConnectionFactory cf = null;
JMSContext msgContext = null;
try {
// Create Factory for WMQ JMS provider
JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
// Create connection factory
cf = ff.createConnectionFactory();
// Set MQ properties
cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM3");
cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_BINDINGS);
// Create message context
msgContext = cf.createContext();
// Create a topic destination
Topic fifaScores = msgContext.createTopic("/FIFA2014/UPDATES");
// Create a consumer. Subscription name specified, required for sharing of subscription.
JMSConsumer msgCons = msgContext.createSharedConsumer(fifaScores, "FIFA2014SUBID");
// Loop around to receive publications
while(true){
String msgBody=null;
// Use JMS 2.0 receiveBody method as we are interested in message body only.
msgBody = msgCons.receiveBody(String.class);
if(msgBody != null){
System.out.println(threadName + " : " + msgBody);
}
}
}catch(JMSException jmsEx){
System.out.println(jmsEx);
}
}
/*
* Publisher publishes match updates like current attendance in the stadium, goal score and ball possession by teams.
*/
private void matchUpdatePublisher(){
JmsConnectionFactory cf = null;
JMSContext msgContext = null;
int nederlandsGoals = 0;
int chileGoals = 0;
int stadiumAttendence = 23231;
int switchIndex = 0;
String msgBody = "";
int nederlandsHolding = 60;
int chileHolding = 40;
try {
// Create Factory for WMQ JMS provider
JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
// Create connection factory
cf = ff.createConnectionFactory();
// Set MQ properties
cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM3");
cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_BINDINGS);
// Create message context
msgContext = cf.createContext();
// Create a topic destination
Topic fifaScores = msgContext.createTopic("/FIFA2014/UPDATES");
// Create publisher to publish updates from stadium
JMSProducer msgProducer = msgContext.createProducer();
while(true){
// Send match updates
switch(switchIndex){
// Attendance
case 0:
msgBody ="Stadium Attendence " + stadiumAttendence;
stadiumAttendence += 314;
break;
// Goals
case 1:
msgBody ="SCORE: The Netherlands: " + nederlandsGoals + " - Chile:" + chileGoals;
break;
// Ball possession percentage
case 2:
msgBody ="Ball possession: The Netherlands: " + nederlandsHolding + "% - Chile: " + chileHolding + "%";
if((nederlandsHolding > 60) && (nederlandsHolding < 70)){
nederlandsHolding -= 2;
chileHolding += 2;
}else{
nederlandsHolding += 2;
chileHolding -= 2;
}
break;
}
// Publish and wait for two seconds to publish next update
msgProducer.send (fifaScores, msgBody);
try{
Thread.sleep(2000);
}catch(InterruptedException iex){
}
// Increment and reset the index if greater than 2
switchIndex++;
if(switchIndex > 2)
switchIndex = 0;
}
}catch(JMSException jmsEx){
System.out.println(jmsEx);
}
}
/*
* (non-Javadoc)
* @see java.lang.Runnable#run()
*/
public void run() {
// If this is a publisher thread
if(threadName == "PUBLISHER"){
matchUpdatePublisher();
}else{
// Create subscription and start receiving publications
sharedNonDurableSubscriptionDemo();
}
}
// Start thread
public void start (){
System.out.println("Starting " + threadName );
if (t == null)
{
t = new Thread (this, threadName);
t.start ();
}
}
}
I am new to IBM MQ and can't understand how to resolve the following imports.
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;
I resolved the other dependencies through jars. Please help with this.
You want com.ibm.mq.allclient.jar. You can find it on disk under INSTALL_DIR/java/lib if you have a queue manager installed or the MQC8 support pac.
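If the project is built with Maven or Gradle rather than jars on disk, the same client is also published to Maven Central under the com.ibm.mq group id; for example, in Gradle (substitute a version that matches your queue manager):
implementation 'com.ibm.mq:com.ibm.mq.allclient:<version>'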
I recently used IBM MQ classes for JMS to place messages onto queues and was successful. Please ensure you have the following jar files in your build path before compilation. You can either find these online and download them, or, if you have installed WebSphere MQ on your PC, look in (WebSphere MQ installation folder)/Java/lib for the jar files:
com.ibm.mq.jmqi.jar
com.ibm.mqjms.jar
jms.jar

How to register an agent from one platform with a different, remotely located platform in JADE?

I have two PCs on which I am running agents. Both are connected by LAN (or Wi-Fi). I want these agents to communicate. One of the ways I found is by giving the agent's full address. Below is the code snippet.
AID a = new AID("A#192.168.14.51:1099/JADE",AID.ISGUID);
a.addAddresses("http://192.168.14.51:7778/acc");
msg.addReceiver(a);
send(msg);
However, once I start agents on one platform, I want the agents on the other platform to be able to register services in its yellow pages so that I can search for the appropriate agent from a list of them. I looked but could not find anything about this. Please give me suggestions on how I can achieve it.
Well, you are looking for DF federation. As far as I understand, it is nothing but 'connecting' DFs.
There is an example in the yellowPages package in the JADE examples folder. It creates register, subscriber, searcher and sub-DF agents. The register agent registers an agent with some property, and the other agents do their jobs. SubDF creates a child DF, which involves DF federation. For you, I modified the code as follows:
The next three agents run on port 1099:
1)
package examples.yellowPages;
import jade.core.Agent;
import jade.core.AID;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPANames;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;
import jade.domain.FIPAAgentManagement.Property;
/**
This example shows how to register an application specific service in the Yellow Pages
catalogue managed by the DF Agent so that other agents can dynamically discover it.
In this case in particular we register a "Weather-forecast" service for
Italy. The name of this service is specified as a command line argument.
@author Giovanni Caire - TILAB
*/
public class DFRegisterAgent extends Agent {
protected void setup() {
String serviceName = "unknown";
// Read the name of the service to register as an argument
Object[] args = getArguments();
if (args != null && args.length > 0) {
serviceName = (String) args[0];
}
// Register the service
System.out.println("Agent "+getLocalName()+" registering service \""+serviceName+"\" of type \"weather-forecast\"");
try {
DFAgentDescription dfd = new DFAgentDescription();
dfd.setName(getAID());
ServiceDescription sd = new ServiceDescription();
sd.setName(serviceName);
sd.setType("weather-forecast");
// Agents that want to use this service need to "know" the weather-forecast-ontology
sd.addOntologies("weather-forecast-ontology");
// Agents that want to use this service need to "speak" the FIPA-SL language
sd.addLanguages(FIPANames.ContentLanguage.FIPA_SL);
sd.addProperties(new Property("country", "Italy"));
dfd.addServices(sd);
DFService.register(this, dfd);
}
catch (FIPAException fe) {
fe.printStackTrace();
}
}
}
2)
package examples.yellowPages;
import jade.core.Agent;
import jade.core.AID;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPANames;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;
import jade.domain.FIPAAgentManagement.SearchConstraints;
import jade.util.leap.Iterator;
/**
This example shows how to search for services provided by other agents
and advertised in the Yellow Pages catalogue managed by the DF agent.
In this case in particular we search for agents providing a
"Weather-forecast" service.
@author Giovanni Caire - TILAB
*/
public class DFSearchAgent extends Agent {
protected void setup() {
// Search for services of type "weather-forecast"
System.out.println("Agent "+getLocalName()+" searching for services of type \"weather-forecast\"");
try {
// Build the description used as template for the search
DFAgentDescription template = new DFAgentDescription();
ServiceDescription templateSd = new ServiceDescription();
templateSd.setType("weather-forecast");
template.addServices(templateSd);
SearchConstraints sc = new SearchConstraints();
// We want to receive 10 results at most
sc.setMaxResults(new Long(10));
DFAgentDescription[] results = DFService.search(this, template, sc);
if (results.length > 0) {
System.out.println("Agent "+getLocalName()+" found the following weather-forecast services:");
for (int i = 0; i < results.length; ++i) {
DFAgentDescription dfd = results[i];
AID provider = dfd.getName();
// The same agent may provide several services; we are only interested
// in the weather-forcast one
Iterator it = dfd.getAllServices();
while (it.hasNext()) {
ServiceDescription sd = (ServiceDescription) it.next();
if (sd.getType().equals("weather-forecast")) {
System.out.println("- Service \""+sd.getName()+"\" provided by agent "+provider.getName());
}
}
}
}
else {
System.out.println("Agent "+getLocalName()+" did not find any weather-forecast service");
}
}
catch (FIPAException fe) {
fe.printStackTrace();
}
}
}
3)
package examples.yellowPages;
import jade.core.Agent;
import jade.core.AID;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPANames;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;
import jade.domain.FIPAAgentManagement.Property;
import jade.domain.FIPAAgentManagement.SearchConstraints;
import jade.proto.SubscriptionInitiator;
import jade.lang.acl.ACLMessage;
import jade.util.leap.Iterator;
/**
This example shows how to subscribe to the DF agent in order to be notified
each time a given service is published in the yellow pages catalogue.
In this case in particular we want to be informed whenever a service of type
"Weather-forecast" for Italy becomes available.
@author Giovanni Caire - TILAB
*/
public class DFSubscribeAgent extends Agent {
protected void setup() {
// Build the description used as template for the subscription
DFAgentDescription template = new DFAgentDescription();
ServiceDescription templateSd = new ServiceDescription();
templateSd.setType("weather-forecast");
templateSd.addProperties(new Property("country", "Italy"));
template.addServices(templateSd);
SearchConstraints sc = new SearchConstraints();
// We want to receive 10 results at most
sc.setMaxResults(new Long(10));
addBehaviour(new SubscriptionInitiator(this, DFService.createSubscriptionMessage(this, getDefaultDF(), template, sc)) {
protected void handleInform(ACLMessage inform) {
System.out.println("Agent "+getLocalName()+": Notification received from DF");
try {
DFAgentDescription[] results = DFService.decodeNotification(inform.getContent());
if (results.length > 0) {
for (int i = 0; i < results.length; ++i) {
DFAgentDescription dfd = results[i];
AID provider = dfd.getName();
// The same agent may provide several services; we are only interested
// in the weather-forcast one
Iterator it = dfd.getAllServices();
while (it.hasNext()) {
ServiceDescription sd = (ServiceDescription) it.next();
if (sd.getType().equals("weather-forecast")) {
System.out.println("Weather-forecast service for Italy found:");
System.out.println("- Service \""+sd.getName()+"\" provided by agent "+provider.getName());
}
}
}
}
System.out.println();
}
catch (FIPAException fe) {
fe.printStackTrace();
}
}
} );
}
}
4) This is the last one. It creates a DF and registers it with the parent DF, i.e. DF federation is done. I ran this on port 1331. Remember to change the IP addresses. (You can run an agent on a different port by using -local-port 1331.)
Remember to run the previous agents before this one.
You can put it in a different Eclipse project and run it.
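For example, an illustrative launch command, assuming jade.jar and the compiled SubDF2 class are on the classpath (use ; as the path separator on Windows):
java -cp jade.jar:. jade.Boot -local-port 1331 subdf:SubDF2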
import jade.core.*;
import jade.core.behaviours.*;
import jade.domain.FIPAAgentManagement.*;
import jade.domain.FIPAException;
import jade.domain.DFService;
import jade.domain.FIPANames;
import jade.util.leap.Iterator;
/**
This is an example of an agent that plays the role of a sub-df by
automatically registering with a parent DF.
Notice that exactly the same might be done by using the GUI of the DF.
<p>
This SUBDF inherits all the functionalities of the default DF, including
its GUI.
@author Giovanni Rimassa - Universita` di Parma
@version $Date: 2003-12-03 17:57:03 +0100 (mer, 03 dic 2003) $ $Revision: 4638 $
*/
public class SubDF2 extends jade.domain.df {
public void setup() {
// Input df name
int len = 0;
byte[] buffer = new byte[1024];
try {
// AID parentName = getDefaultDF();
AID parentName = new AID("df#10.251.216.135:1099/JADE");
parentName.addAddresses("http://NikhilChilwant:7778/acc");
//Execute the setup of jade.domain.df which includes all the default behaviours of a df
//(i.e. register, unregister,modify, and search).
super.setup();
//Use this method to modify the current description of this df.
setDescriptionOfThisDF(getDescription());
//Show the default Gui of a df.
super.showGui();
DFService.register(this,parentName,getDescription());
addParent(parentName,getDescription());
System.out.println("Agent: " + getName() + " federated with default df.");
DFAgentDescription template = new DFAgentDescription();
ServiceDescription templateSd = new ServiceDescription();
templateSd.setType("weather-forecast");
templateSd.addProperties(new Property("country", "Italy"));
template.addServices(templateSd);
SearchConstraints sc = new SearchConstraints();
// We want to receive 10 results at most
sc.setMaxResults(new Long(10));
DFAgentDescription[] results = DFService.search(this,parentName, template, sc);
/* if (results.length > 0) {*/
System.out.println("SUB DF ***Agent "+getLocalName()+" found the following weather-forecast services:");
for (int i = 0; i < results.length; ++i) {
DFAgentDescription dfd = results[i];
AID provider = dfd.getName();
// The same agent may provide several services; we are only interested
// in the weather-forcast one
Iterator it = dfd.getAllServices();
while (it.hasNext()) {
ServiceDescription sd = (ServiceDescription) it.next();
if (sd.getType().equals("weather-forecast")) {
System.out.println("- Service \""+sd.getName()+"\" provided by agent "+provider.getName());
}
}
}/*}*/
String serviceName = "unknown2";
DFAgentDescription dfd = new DFAgentDescription();
dfd.setName(getAID());
ServiceDescription sd = new ServiceDescription();
sd.setName(serviceName);
sd.setType("weather-forecast2");
// Agents that want to use this service need to "know" the weather-forecast-ontology
sd.addOntologies("weather-forecast-ontology2");
// Agents that want to use this service need to "speak" the FIPA-SL language
sd.addLanguages(FIPANames.ContentLanguage.FIPA_SL);
sd.addProperties(new Property("country2", "Italy2"));
dfd.addServices(sd);
DFService.register(this, parentName,dfd);
}catch(FIPAException fe){fe.printStackTrace();}
}
private DFAgentDescription getDescription()
{
DFAgentDescription dfd = new DFAgentDescription();
dfd.setName(getAID());
ServiceDescription sd = new ServiceDescription();
sd.setName(getLocalName() + "-sub-df");
sd.setType("fipa-df");
sd.addProtocols(FIPANames.InteractionProtocol.FIPA_REQUEST);
sd.addOntologies("fipa-agent-management");
sd.setOwnership("JADE");
dfd.addServices(sd);
return dfd;
}
}
After running the code, you can see that the sub-DF agent is able to find an agent registered with its federated DF.
You can download complete code here also: http://tinyurl.com/Agent-on-different-platforms

Is there a useDirtyFlag option for Tomcat 6 cluster configuration?

In Tomcat 5.0.x you had the ability to set useDirtyFlag="false" to force replication of the session after every request rather than checking for set/removeAttribute calls.
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
managerClassName="org.apache.catalina.cluster.session.SimpleTcpReplicationManager"
expireSessionsOnShutdown="false"
**useDirtyFlag="false"**
doClusterLog="true"
clusterLogName="clusterLog"> ...
The comments in the server.xml stated this may be used to make the following work:
<%
HashMap map = (HashMap)session.getAttribute("map");
map.put("key","value");
%>
i.e. change the state of an object that has already been put in the session, and you can be sure that this object will still be replicated to the other nodes in the cluster.
According to the Tomcat 6 documentation you only have two "Manager" options - DeltaManager & BackupManager ... neither of these seems to allow this option or anything like it. In my testing the default setup:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
where you get the DeltaManager by default, it definitely behaves as useDirtyFlag="true" (as I'd expect).
So my question is - is there an equivalent in Tomcat 6?
Looking at the source I can see a manager implementation "org.apache.catalina.ha.session.SimpleTcpReplicationManager" which does have the useDirtyFlag, but the javadoc comments state it's "Tomcat Session Replication for Tomcat 4.0" ... I don't know if this is OK to use - I'm guessing not, as it's not mentioned in the main cluster configuration documentation.
I posted essentially the same question on the tomcat-users mailing list, and the responses, along with some information in the Tomcat Bugzilla ([43866]), led me to the following conclusions:
There is no equivalent to the useDirtyFlag, if you're putting mutable (ie changing) objects in the session you need a custom coded solution.
A Tomcat ClusterValve seems to be an effective place for this solution - plug into the cluster mechanism, manipulate attributes to make it appear to the DeltaManager that all attributes in the session have changed. This forces replication of the entire session.
Step 1: Write the ForceReplicationValve (extends ValveBase implements ClusterValve)
I won't include the whole class but the key bit of logic (taking out the logging and instanceof checking):
@Override
public void invoke(Request request, Response response)
        throws IOException, ServletException {
    getNext().invoke(request, response);

    Session session = request.getSessionInternal();
    HttpSession deltaSession = (HttpSession) session;
    for (Enumeration<String> names = deltaSession.getAttributeNames();
            names.hasMoreElements(); ) {
        String name = names.nextElement();
        deltaSession.setAttribute(name, deltaSession.getAttribute(name));
    }
}
Step 2: Alter the cluster config (in conf/server.xml)
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Valve className="org.apache.catalina.ha.tcp.ForceReplicationValve"/>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=".*\.gif;.*\.jpg;.*\.png;.*\.js;.*\.htm;.*\.html;.*\.txt;.*\.css;"/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Replication of the session to all cluster nodes will now happen after every request.
Aside: Note the channelSendOptions setting. This replaces the replicationMode=asynchronous/synchronous/pooled setting from Tomcat 5.0.x; 8 means asynchronous. See the cluster documentation for the possible int values.
Appendix: Full Valve source as requested
package org.apache.catalina.ha.tcp;
import java.io.IOException;
import java.util.Enumeration;
import java.util.LinkedList;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpSession;
import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleException;
import org.apache.catalina.LifecycleListener;
import org.apache.catalina.Session;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.ha.CatalinaCluster;
import org.apache.catalina.ha.ClusterValve;
import org.apache.catalina.ha.session.ReplicatedSession;
import org.apache.catalina.ha.session.SimpleTcpReplicationManager;
import org.apache.catalina.util.LifecycleSupport;
//import org.apache.catalina.util.StringManager;
import org.apache.catalina.valves.ValveBase;
/**
* <p>With the {@link SimpleTcpReplicationManager} effectively deprecated, this allows
* mutable objects to be replicated in the cluster by forcing the "dirty" status on
* every request.</p>
*
* @author Jon Brisbin (via post on tomcat-users http://markmail.org/thread/rdo3drcir75dzzrq)
* @author Kevin Jansz
*/
public class ForceReplicationValve extends ValveBase implements Lifecycle, ClusterValve {
private static org.apache.juli.logging.Log log =
org.apache.juli.logging.LogFactory.getLog( ForceReplicationValve.class );
#SuppressWarnings("hiding")
protected static final String info = "org.apache.catalina.ha.tcp.ForceReplicationValve/1.0";
// this could be used if ForceReplicationValve messages were setup
// in org/apache/catalina/ha/tcp/LocalStrings.properties
//
// /**
// * The StringManager for this package.
// */
// #SuppressWarnings("hiding")
// protected static StringManager sm =
// StringManager.getManager(Constants.Package);
/**
* Not actually required but this must implement {@link ClusterValve} to
* be allowed to be added to the Cluster.
*/
private CatalinaCluster cluster = null ;
/**
* Also not really required, implementing {@link Lifecycle} to allow
* initialisation and shutdown to be logged.
*/
protected LifecycleSupport lifecycle = new LifecycleSupport(this);
/**
* Default constructor
*/
public ForceReplicationValve() {
super();
if (log.isInfoEnabled()) {
log.info(getInfo() + ": created");
}
}
@Override
public String getInfo() {
return info;
}
@Override
public void invoke(Request request, Response response) throws IOException,
ServletException {
getNext().invoke(request, response);
Session session = null;
try {
session = request.getSessionInternal();
} catch (Throwable e) {
log.error(getInfo() + ": Unable to perform replication request.", e);
}
String context = request.getContext().getName();
String task = request.getPathInfo();
if(task == null) {
task = request.getRequestURI();
}
if (session != null) {
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", instanceof=" + session.getClass().getName() + ", context=" + context + ", request=" + task + "]");
}
if (session instanceof ReplicatedSession) {
// it's a SimpleTcpReplicationManager - can just set to dirty
((ReplicatedSession) session).setIsDirty(true);
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", context=" + context + ", request=" + task + "] marked DIRTY");
}
} else {
// for everything else - cycle all attributes
List cycledNames = new LinkedList();
// in a cluster where the app is <distributable/> this should be
// org.apache.catalina.ha.session.DeltaSession - implements HttpSession
HttpSession deltaSession = (HttpSession) session;
for (Enumeration<String> names = deltaSession.getAttributeNames(); names.hasMoreElements(); ) {
String name = names.nextElement();
deltaSession.setAttribute(name, deltaSession.getAttribute(name));
cycledNames.add(name);
}
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", context=" + context + ", request=" + task + "] cycled attributes=" + cycledNames);
}
}
} else {
String id = request.getRequestedSessionId();
log.warn(getInfo() + ": [session=" + id + ", context=" + context + ", request=" + task + "] Session not available, unable to send session over cluster.");
}
}
/*
* ClusterValve methods - implemented to ensure this valve is not ignored by Cluster
*/
public CatalinaCluster getCluster() {
return cluster;
}
public void setCluster(CatalinaCluster cluster) {
this.cluster = cluster;
}
/*
* Lifecycle methods - currently implemented just for logging startup
*/
/**
* Add a lifecycle event listener to this component.
*
* @param listener The listener to add
*/
public void addLifecycleListener(LifecycleListener listener) {
lifecycle.addLifecycleListener(listener);
}
/**
* Get the lifecycle listeners associated with this lifecycle. If this
* Lifecycle has no listeners registered, a zero-length array is returned.
*/
public LifecycleListener[] findLifecycleListeners() {
return lifecycle.findLifecycleListeners();
}
/**
* Remove a lifecycle event listener from this component.
*
* @param listener The listener to remove
*/
public void removeLifecycleListener(LifecycleListener listener) {
lifecycle.removeLifecycleListener(listener);
}
public void start() throws LifecycleException {
lifecycle.fireLifecycleEvent(START_EVENT, null);
if (log.isInfoEnabled()) {
log.info(getInfo() + ": started");
}
}
public void stop() throws LifecycleException {
lifecycle.fireLifecycleEvent(STOP_EVENT, null);
if (log.isInfoEnabled()) {
log.info(getInfo() + ": stopped");
}
}
}
Many thanks to kevinjansz for providing the source for ForceReplicationValve.
I adjusted it for Tomcat 7; here it is if anyone needs it:
package org.apache.catalina.ha.tcp;
import java.io.IOException;
import java.util.Enumeration;
import java.util.LinkedList;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpSession;
import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleException;
import org.apache.catalina.LifecycleListener;
import org.apache.catalina.Session;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.ha.CatalinaCluster;
import org.apache.catalina.ha.ClusterValve;
import org.apache.catalina.util.LifecycleSupport;
import org.apache.catalina.valves.ValveBase;
import org.apache.catalina.LifecycleState;
// import org.apache.tomcat.util.res.StringManager;
/**
* <p>With the {@link SimpleTcpReplicationManager} effectively deprecated, this allows
* mutable objects to be replicated in the cluster by forcing the "dirty" status on
* every request.</p>
*
* @author Jon Brisbin (via post on tomcat-users http://markmail.org/thread/rdo3drcir75dzzrq)
* @author Kevin Jansz
*/
public class ForceReplicationValve extends ValveBase implements Lifecycle, ClusterValve {
private static org.apache.juli.logging.Log log =
org.apache.juli.logging.LogFactory.getLog( ForceReplicationValve.class );
#SuppressWarnings("hiding")
protected static final String info = "org.apache.catalina.ha.tcp.ForceReplicationValve/1.0";
// this could be used if ForceReplicationValve messages were setup
// in org/apache/catalina/ha/tcp/LocalStrings.properties
//
// /**
// * The StringManager for this package.
// */
// #SuppressWarnings("hiding")
// protected static StringManager sm =
// StringManager.getManager(Constants.Package);
/**
* Not actually required but this must implement {@link ClusterValve} to
* be allowed to be added to the Cluster.
*/
private CatalinaCluster cluster = null;
/**
* Also not really required, implementing {@link Lifecycle} to allow
* initialisation and shutdown to be logged.
*/
protected LifecycleSupport lifecycle = new LifecycleSupport(this);
/**
* Default constructor
*/
public ForceReplicationValve() {
super();
if (log.isInfoEnabled()) {
log.info(getInfo() + ": created");
}
}
@Override
public String getInfo() {
return info;
}
@Override
public void invoke(Request request, Response response) throws IOException,
ServletException {
getNext().invoke(request, response);
Session session = null;
try {
session = request.getSessionInternal();
} catch (Throwable e) {
log.error(getInfo() + ": Unable to perform replication request.", e);
}
String context = request.getContext().getName();
String task = request.getPathInfo();
if(task == null) {
task = request.getRequestURI();
}
if (session != null) {
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", instanceof=" + session.getClass().getName() + ", context=" + context + ", request=" + task + "]");
}
//cycle all attributes
List<String> cycledNames = new LinkedList<String>();
// in a cluster where the app is <distributable/> this should be
// org.apache.catalina.ha.session.DeltaSession - implements HttpSession
HttpSession deltaSession = (HttpSession) session;
for (Enumeration<String> names = deltaSession.getAttributeNames(); names.hasMoreElements(); ) {
String name = names.nextElement();
deltaSession.setAttribute(name, deltaSession.getAttribute(name));
cycledNames.add(name);
}
if (log.isDebugEnabled()) {
log.debug(getInfo() + ": [session=" + session.getId() + ", context=" + context + ", request=" + task + "] cycled attributes=" + cycledNames);
}
} else {
String id = request.getRequestedSessionId();
log.warn(getInfo() + ": [session=" + id + ", context=" + context + ", request=" + task + "] Session not available, unable to send session over cluster.");
}
}
/*
* ClusterValve methods - implemented to ensure this valve is not ignored by Cluster
*/
public CatalinaCluster getCluster() {
return cluster;
}
public void setCluster(CatalinaCluster cluster) {
this.cluster = cluster;
}
/*
* Lifecycle methods - currently implemented just for logging startup
*/
/**
* Add a lifecycle event listener to this component.
*
* @param listener The listener to add
*/
public void addLifecycleListener(LifecycleListener listener) {
lifecycle.addLifecycleListener(listener);
}
/**
* Get the lifecycle listeners associated with this lifecycle. If this
* Lifecycle has no listeners registered, a zero-length array is returned.
*/
public LifecycleListener[] findLifecycleListeners() {
return lifecycle.findLifecycleListeners();
}
/**
* Remove a lifecycle event listener from this component.
*
* @param listener The listener to remove
*/
public void removeLifecycleListener(LifecycleListener listener) {
lifecycle.removeLifecycleListener(listener);
}
protected synchronized void startInternal() throws LifecycleException {
setState(LifecycleState.STARTING);
if (log.isInfoEnabled()) {
log.info(getInfo() + ": started");
}
}
protected synchronized void stopInternal() throws LifecycleException {
setState(LifecycleState.STOPPING);
if (log.isInfoEnabled()) {
log.info(getInfo() + ": stopped");
}
}
}
