JADE Messaging Queue - Eclipse - Java

How would I change these parameters from Eclipse (not using the command line):
jade_core_messaging_MessageManager_poolsize
jade_core_messaging_MessageManager_maxqueuesize
jade_core_messaging_MessageManager_deliverytimethreshold
The first sets the number of threads that handle message delivery, the second sets the maximum queue size for received ACL messages, and the last sets the delivery-time threshold beyond which a warning is printed.
Best regards,

If you start your container and agents programmatically, then something like this should work:
jade.core.Runtime rt = jade.core.Runtime.instance();
// Note: ProfileImpl expects jade.util.leap.Properties, not java.util.Properties
Properties properties = new Properties();
properties.put("local-port", "8858");
properties.put("port", "8858");
properties.put("host", "127.0.0.1");
properties.put("local-host", "127.0.0.1");
// ... other parameters
properties.put("jade_core_messaging_MessageManager_poolsize", "100");
ProfileImpl p = new ProfileImpl(properties);
rt.setCloseVM(true);
AgentContainer agentContainer = rt.createMainContainer(p);
AgentController ac = agentContainer.createNewAgent("YourAgent", YourAgent.class.getName(), new Object[]{});
ac.start();
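The other two parameters from the question can be set the same way on the same Properties object before building the profile; the values below are illustrative, not recommended settings:
// Illustrative values only - tune for your own message volume.
properties.put("jade_core_messaging_MessageManager_maxqueuesize", "100000000");
properties.put("jade_core_messaging_MessageManager_deliverytimethreshold", "1000");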


Having trouble listening for vert.x config changes

I changed my config-test.json, but the application did not print "new config:..."; the beforeScan handler did print.
JsonObject jsonConfig = new JsonObject();
jsonConfig.put("path", "test.json");
ConfigStoreOptions config = new ConfigStoreOptions();
config.setType("file").setOptional(true).setConfig(jsonConfig);
ConfigRetrieverOptions options =
        new ConfigRetrieverOptions().addStore(config).setScanPeriod(5000);
ConfigRetriever configRetriever = ConfigRetriever.create(vertx, options);
configRetriever.setBeforeScanHandler(h -> {
    System.out.println("config:" + configRetriever.getCachedConfig());
});
configRetriever.listen(change -> {
    JsonObject newConfiguration = change.getNewConfiguration();
    System.out.println("new config:" + newConfiguration);
    JsonObject old = change.getPreviousConfiguration();
    System.out.println("old config:" + old);
});
The Javadoc of setBeforeScanHandler says:
Registers a handler called before every scan. This method is mostly used for logging purpose.
This means that in your case the program checks for changes in the JSON file every five seconds, as specified in .setScanPeriod(5000). If the program finds a change during one of those checks, the configRetriever.setBeforeScanHandler handler fires first, followed by the configRetriever.listen handler.
The order of messages when there is a change will be:
config : {...}
new config:{...}
old config:{...}
The program does not register the config change immediately, but only at the regularly scheduled interval checks, as stated in the setScanPeriod JavaDoc:
Configures the scan period, in ms. This is the time amount between two
checks of the configuration updates.
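If five seconds is too slow for your test, you can shorten the scan period, or fetch the configuration on demand with getConfig. A minimal sketch, reusing the config store from above (the one-second period is just an example):
ConfigRetrieverOptions options = new ConfigRetrieverOptions()
        .addStore(config)
        .setScanPeriod(1000); // check every second instead of every five
ConfigRetriever retriever = ConfigRetriever.create(vertx, options);
// On-demand read, independent of the periodic scan:
retriever.getConfig(ar -> {
    if (ar.succeeded()) {
        System.out.println("current config: " + ar.result());
    } else {
        ar.cause().printStackTrace();
    }
});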

Java: Kafka AdminClient not establishing connection (or so it seems)

Hi everyone. This is my first post here, so please pardon my Stack Overflow question-writing skills.
I am having trouble using AdminClient from org.apache.kafka.clients.admin.AdminClient.
The issue at hand is this:
I initiate a secure connection to our broker server (running Kafka 1.0.0) using SASL_SSL.
It works just fine when I run a consumer against that same broker with the same security settings. However, when I do AdminClient operations, the calls seem to have worked, but I see no traffic going from my machine to the broker server whatsoever in Wireshark, and what I am trying to do does not happen on the broker side.
Here is my code:
public class AclProvisioner {
    // set up variables
    private static Properties props = new Properties();
    private static ClassLoader classloader = Thread.currentThread().getContextClassLoader();
    static String mid = null;
    static String topic = null;

    public static void main(String... args) {
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafkabroker.mydomain.com:9094");
        props.put("security.protocol", "SASL_SSL");
        props.put("ssl.truststore.location", "C:\\Temp\\mydomain.root.jks");
        props.put("ssl.truststore.password", "my_truststore_password");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka_admin_username");
        AdminClient adminClient = AdminClient.create(props);
        // generate ACLs
        AclBinding newTopicReadAcl = new AclBinding(new Resource(ResourceType.TOPIC, "TestTopic"),
                new AccessControlEntry("MY_TESTID", "*", AclOperation.READ, AclPermissionType.ALLOW));
        AclBinding newTopicDescribeAcl = new AclBinding(new Resource(ResourceType.TOPIC, "TestTopic"),
                new AccessControlEntry("MY_TESTID", "*", AclOperation.DESCRIBE, AclPermissionType.ALLOW));
        AclBinding newGroupReadAcl = new AclBinding(new Resource(ResourceType.GROUP, "TestGroup"),
                new AccessControlEntry("MY_TESTID", "*", AclOperation.READ, AclPermissionType.ALLOW));
        Collection<AclBinding> aclList = Arrays.asList(newTopicReadAcl, newTopicDescribeAcl, newGroupReadAcl);
        adminClient.createAcls(aclList);
        // create topic
        int numPartitions = 6;
        short replicasFactor = 2;
        NewTopic newTopic = new NewTopic("Demo.JavaAdminClientTest", numPartitions, replicasFactor);
        Map<String, String> configMap = new HashMap<>();
        configMap.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT);
        configMap.put(TopicConfig.COMPRESSION_TYPE_CONFIG, "gzip");
        newTopic.configs(configMap);
        List<NewTopic> topics = Arrays.asList(newTopic);
        adminClient.createTopics(topics);
    }
}
If I ssh to the server itself, export my keytab, and kinit, I am able to generate ACLs just fine using the CLI. I am also able to run a consumer using the exact same properties (as far as security goes).
Another thing I have discovered is that if I point the client at a server that does not exist or cannot be reached, the program does fail, telling me that it could not resolve the bootstrap server.
The exact same behavior happens if I attempt to create topics instead of ACLs. Once again, that works just fine from the CLI.
I appreciate any pointers!
Cheers
All AdminClient methods are asynchronous and only return Future objects.
So if you don't explicitly wait on the futures to complete, your program just terminates before the AdminClient has time to send anything over the network.
You can use all() or values() on the CreateAclsResult [0] and CreateTopicsResult [1] to retrieve KafkaFuture [2] objects, then call get() on them to wait; see the sketch after the links below.
[0] http://kafka.apache.org/11/javadoc/org/apache/kafka/clients/admin/CreateAclsResult.html
[1] http://kafka.apache.org/11/javadoc/org/apache/kafka/clients/admin/CreateTopicsResult.html
[2] http://kafka.apache.org/11/javadoc/org/apache/kafka/common/KafkaFuture.html
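A minimal sketch of the end of main() with that waiting added (assumes main is declared with throws Exception; error handling omitted):
// Block until the broker has acknowledged both requests before exiting.
adminClient.createAcls(aclList).all().get();   // wait for ACL creation
adminClient.createTopics(topics).all().get();  // wait for topic creation
adminClient.close();                           // flush and release the client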

How to check whether Kafka Server is running?

I want to verify that the Kafka server is running before starting production and consumption jobs. It is in a Windows environment, and here is my Kafka server code in Eclipse...
Properties properties = new Properties();
properties.setProperty("broker.id", "1");
properties.setProperty("port", "9092");
properties.setProperty("log.dirs", "D://workspace//");
properties.setProperty("zookeeper.connect", "localhost:2181");
Option<String> option = Option.empty();
KafkaConfig config = new KafkaConfig(properties);
KafkaServer kafka = new KafkaServer(config, new CurrentTime(), option);
kafka.startup();
In this case if (kafka != null) is not enough, because it is always true. So is there any way to know that my Kafka server is running and ready for a producer? I need to check this because otherwise I lose some of the first data packets.
All Kafka brokers must be assigned a broker.id. On startup a broker will create an ephemeral node in ZooKeeper with a path of /brokers/ids/$id. As the node is ephemeral, it will be removed as soon as the broker disconnects, e.g. by shutting down.
You can view the list of the ephemeral broker nodes like so:
echo dump | nc localhost 2181 | grep brokers
The ZooKeeper client interface exposes a number of commands; dump lists all the sessions and ephemeral nodes for the cluster.
Note, the above assumes:
You're running ZooKeeper on the default port (2181) on localhost, and that localhost is the leader for the cluster
Your zookeeper.connect Kafka config doesn't specify a chroot env for your Kafka cluster i.e. it's just host:port and not host:port/path
You can install the kafkacat tool on your machine. For example, on Ubuntu you can install it using
apt-get install kafkacat
Once kafkacat is installed, you can use the following command to connect:
kafkacat -b <your-ip-address>:<kafka-port> -t test-topic
Replace <your-ip-address> with your machine's IP and <kafka-port> with the port Kafka is running on (normally 9092).
If kafkacat is able to make the connection, it means Kafka is up and running.
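You can also ask kafkacat for the cluster metadata instead of attaching to a topic; the listing succeeds only if a broker responds:
kafkacat -L -b <your-ip-address>:<kafka-port>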
I used the AdminClient API.
Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");
properties.put("connections.max.idle.ms", 10000);
properties.put("request.timeout.ms", 5000);
try (AdminClient client = KafkaAdminClient.create(properties)) {
    ListTopicsResult topics = client.listTopics();
    Set<String> names = topics.names().get();
    if (names.isEmpty()) {
        // case: no topics found
    }
    return true;
} catch (InterruptedException | ExecutionException e) {
    // Kafka is not available
    return false;
}
For Linux, run "ps aux | grep kafka" and see if the Kafka properties are shown in the results, e.g. /path/to/kafka/server.properties.
Paul's answer is very good, and it is actually how Kafka and ZooKeeper work together from a broker's point of view.
Another easy option to check whether a Kafka server is running is to create a simple KafkaConsumer pointing to the cluster and try some action, for example listTopics(). If the Kafka server is not running, you will get a TimeoutException, so you can wrap the call in a try-catch block.
def validateKafkaConnection(kafkaParams: mutable.Map[String, Object]): Unit = {
  val props = new Properties()
  props.put("bootstrap.servers", kafkaParams.get("bootstrap.servers").get.toString)
  props.put("group.id", kafkaParams.get("group.id").get.toString)
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  val simpleConsumer = new KafkaConsumer[String, String](props)
  try {
    simpleConsumer.listTopics()
  } finally {
    simpleConsumer.close() // release the consumer's resources either way
  }
}
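If you are on the Java side, an equivalent sketch (the address and group id are placeholders):
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Java analogue of the Scala check above: construct a consumer and call
// listTopics(); a TimeoutException means no broker answered in time.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder address
props.put("group.id", "connection-check");        // placeholder group id
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.listTopics();
}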
A good option is to use AdminClient as below before starting to produce or consume messages:
private static final int ADMIN_CLIENT_TIMEOUT_MS = 5000;

try (AdminClient client = AdminClient.create(properties)) {
    client.listTopics(new ListTopicsOptions().timeoutMs(ADMIN_CLIENT_TIMEOUT_MS)).listings().get();
} catch (ExecutionException ex) {
    LOG.error("Kafka is not available, timed out after {} ms", ADMIN_CLIENT_TIMEOUT_MS);
    return;
}
First, you need to create an AdminClient bean:
@Bean
public AdminClient adminClient() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(new Object[]{"your bootstrap server address"}));
    return AdminClient.create(configs);
}
Then, you can use this script:
while (true) {
    Map<String, ConsumerGroupDescription> groupDescriptionMap =
            adminClient.describeConsumerGroups(Collections.singletonList(groupId))
                    .all()
                    .get(10, TimeUnit.SECONDS);
    ConsumerGroupDescription consumerGroupDescription = groupDescriptionMap.get(groupId);
    log.debug("Kafka consumer group ({}) state: {}",
            groupId,
            consumerGroupDescription.state());
    if (consumerGroupDescription.state().equals(ConsumerGroupState.STABLE)) {
        boolean isReady = true;
        for (MemberDescription member : consumerGroupDescription.members()) {
            if (member.assignment() == null || member.assignment().topicPartitions().isEmpty()) {
                isReady = false;
            }
        }
        if (isReady) {
            break;
        }
    }
    log.debug("Kafka consumer group ({}) is not ready. Waiting...", groupId);
    TimeUnit.SECONDS.sleep(1);
}
This script checks the state of the consumer group every second until the state is STABLE. Because all consumers are then assigned to topic partitions, you can conclude that the server is running and ready.
You can use the code below to check whether any brokers are registered in ZooKeeper, i.e. whether the server is running.
import org.I0Itec.zkclient.ZkClient;
import kafka.utils.ZkUtils;

public static boolean isBrokerRunning() {
    boolean flag = false;
    ZkClient zkClient = new ZkClient(endpoint.getZookeeperConnect(), 10000); //, kafka.utils.ZKStringSerializer$.MODULE$);
    if (zkClient != null) {
        int brokersCount = zkClient.countChildren(ZkUtils.BrokerIdsPath());
        if (brokersCount > 0) {
            logger.info("Following broker(s) {} is/are available on ZooKeeper.",
                    zkClient.getChildren(ZkUtils.BrokerIdsPath()));
            flag = true;
        } else {
            logger.error("ERROR: No broker is available on ZooKeeper.");
        }
        zkClient.close();
    }
    return flag;
}
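A usage sketch (the call site is hypothetical and exception handling is omitted): gate your producer startup on this check so the first packets are not lost.
// Hypothetical call site: poll until at least one broker has registered.
while (!isBrokerRunning()) {
    Thread.sleep(1000); // retry interval is an arbitrary choice
}
startProducerJob(); // placeholder for your own producer startup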
I found an OnError event in the Confluent Kafka .NET client:
consumer.OnError += Consumer_OnError;

private void Consumer_OnError(object sender, Error e)
{
    Debug.Log("connection error: " + e.Reason);
    ConsumerConnectionError(e);
}
And its in-code documentation:
//
// Summary:
// Raised on critical errors, e.g. connection failures or all brokers down. Note
// that the client will try to automatically recover from errors - these errors
// should be seen as informational rather than catastrophic
//
// Remarks:
// Executes on the same thread as every other Consumer event handler (except OnLog
// which may be called from an arbitrary thread).
public event EventHandler<Error> OnError;

Gradle P4Java java.net.SocketTimeoutException: Read timed out

I am using the P4Java library in my build.gradle file to sync a large zip file (>200MB) residing in a remote Perforce repository, but I am encountering a "java.net.SocketTimeoutException: Read timed out" error either during the sync process or (mostly) while deleting the temporary client created for the sync operation. I am referring to http://razgulyaev.blogspot.in/2011/08/p4-java-api-how-to-work-with-temporary.html for working with temporary clients using the P4Java API.
I tried increasing the socket read timeout from the default 30 sec as suggested in http://answers.perforce.com/articles/KB/8044 and also introducing sleeps, but neither approach solved the problem. Probing the server to verify the connection using getServerInfo() right before performing the sync or delete operations results in a successful connection check. Can someone please point me to where I should look for answers?
Thank you.
Providing the code snippet:
void perforceSync(String srcPath, String destPath, String server) {
    // Generating the file(s) to sync-up
    String[] pathUnderDepot = [
        srcPath + "*"
    ]
    // Increasing timeout from default 30 sec to 60 sec
    Properties defaultProps = new Properties()
    defaultProps.put(PropertyDefs.PROG_NAME_KEY, "CustomBuildApp")
    defaultProps.put(PropertyDefs.PROG_VERSION_KEY, "tv_1.0")
    defaultProps.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000")
    // Instantiating the server
    IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
    p4Server.connect()
    // Authorizing
    p4Server.setUserName("perforceUserName")
    p4Server.login("perforcePassword")
    // Just check if connected successfully
    IServerInfo serverInfo = p4Server.getServerInfo()
    println 'Server info: ' + serverInfo.getServerLicense()
    // Creating new client
    IClient tempClient = new Client()
    // Setting up the name and the root folder
    tempClient.setName("tempClient" + UUID.randomUUID().toString().replace("-", ""))
    tempClient.setRoot(destPath)
    tempClient.setServer(p4Server)
    // Setting the client as the current one for the server
    p4Server.setCurrentClient(tempClient)
    // Creating Client View entry
    ClientViewMapping tempMappingEntry = new ClientViewMapping()
    // Setting up the mapping properties
    tempMappingEntry.setLeft(srcPath + "...")
    tempMappingEntry.setRight("//" + tempClient.getName() + "/...")
    tempMappingEntry.setType(EntryType.INCLUDE)
    // Creating Client view
    ClientView tempClientView = new ClientView()
    // Attaching client view entry to client view
    tempClientView.addEntry(tempMappingEntry)
    tempClient.setClientView(tempClientView)
    // Registering the new client on the server
    println p4Server.createClient(tempClient)
    // Forming the FileSpec collection to be synced-up
    // (declared before the try so the catch block can see it)
    List<IFileSpec> fileSpecsSet = FileSpecBuilder.makeFileSpecList(pathUnderDepot)
    // Surrounding the underlying block with try as we want some action
    // (namely client removing) to be performed in any way
    try {
        // Syncing up the client
        println "Syncing..."
        tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
    }
    catch (Exception e) {
        println "Sync failed. Trying again..."
        sleep(60 * 1000)
        tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
    }
    finally {
        println "Done syncing."
        try {
            p4Server.connect()
            IServerInfo serverInfo2 = p4Server.getServerInfo()
            println '\nServer info: ' + serverInfo2.getServerLicense()
            // Removing the temporary client from the server
            println p4Server.deleteClient(tempClient.getName(), false)
        }
        catch (Exception e) {
            println 'Ignoring exception caught while deleting tempClient!'
            /*sleep(60 * 1000)
            p4Server.connect()
            IServerInfo serverInfo3 = p4Server.getServerInfo()
            println '\nServer info: ' + serverInfo3.getServerLicense()
            sleep(60 * 1000)
            println p4Server.deleteClient(tempClient.getName(), false)*/
        }
    }
}
One unusual thing I observed while deleting tempClient: it actually deleted the client but still threw "java.net.SocketTimeoutException: Read timed out", which is why I ended up commenting out the second delete attempt in the second catch block.
Which version of P4Java are you using? Have you tried this with the newest P4Java? There are notable fixes dealing with RPC sockets since the 2013.2 version, as can be seen in the release notes:
http://www.perforce.com/perforce/doc.current/user/p4javanotes.txt
Here are some variations you can try in the code where you increase the timeout and instantiate the server:
a] Have you tried passing the props as their own argument? For example:
Properties prop = new Properties();
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "300000");
UsageOptions uop = new UsageOptions(prop);
server = ServerFactory.getOptionsServer(ServerFactory.DEFAULT_PROTOCOL_NAME + "://" + serverPort, prop, uop);
Or something like the following:
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
You can also set the timeout to "0" to give it no timeout.
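For example (the same property as in the snippets above; zero disables the socket read timeout):
// 0 means no read timeout - calls will wait indefinitely.
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "0");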
b]
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
props.put(RpcPropertyDefs.RPC_SOCKET_POOL_SIZE_NICK, "5");
c]
Properties props = System.getProperties();
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
IOptionsServer server =
ServerFactory.getOptionsServer("p4java://perforce:1666", props, null);
d] In case you have Eclipse users using our P4Eclipse plugin, the property can be set in the plugin preferences (Team->Perforce->Advanced) under the Custom P4Java Properties.
"sockSoTimeout" : "3000000"
REFERENCES
Class RpcPropertyDefs
http://perforce.com/perforce/doc.current/manuals/p4java-javadoc/com/perforce/p4java/impl/mapbased/rpc/RpcPropertyDefs.html
P4Eclipse or P4Java: SocketTimeoutException: Read timed out
http://answers.perforce.com/articles/KB/8044

Communication of separate processes through Esper events

I am trying to let multiple Java processes exchange events using Esper. One process should send events; the other prepares a query and reacts according to the reported events.
When both operations are done within the same Java process, everything works fine. But when I use two different processes, they just don't see each other.
I am wondering what the key to this communication is. I used the same name for the provider; this is all I could do so far.
The Producer:
String aType = espertest.dummy.A.class.getName();
Configuration cepConfig = new Configuration();
cepConfig.addEventType("A",aType);
EPServiceProvider epService = EPServiceProviderManager.getProvider("DummyProvider", cepConfig);
Object o = new A();
epService.getEPRuntime().sendEvent(o);
The Consumer:
String aType = A.class.getName();
String expression = "select count(*) from " + aType;
System.out.println("Our Query: " + expression);
Configuration cepConfig = new Configuration();
cepConfig.addEventType("A", aType);
EPServiceProvider epService = EPServiceProviderManager.getProvider("DummyProvider", cepConfig);
EPStatement statement = epService.getEPAdministrator().createEPL(expression);
DummyListener listener = new DummyListener();
statement.addListener(listener);
System.out.println("Anything");
try {
    A a = new A();
    epService.getEPRuntime().sendEvent(a);
    Thread.sleep(60000);
} catch (Exception e) {
    System.out.println("Exception ");
}
The consumer tries to count the events of type A. It also sends an instance of A as a test, and this works fine. The listener is called as expected.
The code above is just an excerpt.
You need to configure middleware (message queue, distributed cache, networked file system, socket connection, etc.) to get the events from the producer JVM to the consumer JVM. If you can deploy the producer and consumer to a container that supports Apache Camel (e.g. ServiceMix), then it should be trivial to stand up a prototype that uses ActiveMQ to transport your objects into Esper, as Camel has support for both products. The flow would look like this:
JVM 1
    From Data Source
    To CEP Engine 1
    To Message Queue
JVM 2 (could also host the MQ broker)
    From Message Queue
    To CEP Engine 2
    To Destination
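For a rough idea of what that wiring looks like without Camel, here is a hand-rolled sketch using ActiveMQ over plain JMS. The broker URL, queue name, and class name are assumptions for illustration, and the event class A is assumed to implement java.io.Serializable:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import com.espertech.esper.client.EPServiceProvider;

public class EsperJmsBridge {

    // Producer JVM: publish the event to a queue instead of a local engine.
    public static void publish(A event) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("esper.events.A"));
            producer.send(session.createObjectMessage(event));
        } finally {
            connection.close();
        }
    }

    // Consumer JVM: receive events from the queue and re-inject them into
    // the local Esper engine, where the EPL statements are registered.
    public static void subscribe(EPServiceProvider epService) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("esper.events.A"));
        consumer.setMessageListener((Message message) -> {
            try {
                A event = (A) ((ObjectMessage) message).getObject();
                epService.getEPRuntime().sendEvent(event);
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }
}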
Update:
If the producer and consumer can be threads in the same JVM, then the issue may be in the consumer. I cannot see where the consumer does anything with the event from the producer. Try something like this instead (the Esper reference is provided to the producer/consumer, and the consumer is reworked with an update method to handle results of the select statement).
Test Driver:
public Driver() {
    String aType = espertest.dummy.A.class.getName();
    Configuration cepConfig = new Configuration();
    cepConfig.addEventType("A", aType);
    EPServiceProvider epService = EPServiceProviderManager.getProvider("DummyProvider", cepConfig);
    Consumer c = new Consumer(epService);
    Producer p = new Producer(epService);
}
Producer:
public Producer(EPServiceProvider epsp) {
    Object o = new A();
    epsp.getEPRuntime().sendEvent(o);
}
Consumer:
public Consumer(EPServiceProvider epsp) {
    // "input" was undefined in the original excerpt; any EPL statement over A
    // works here, e.g. selecting every A event:
    EPStatement statement = epsp.getEPAdministrator().createEPL("select * from A");
    statement.setSubscriber(this);
}

public void update(A event) {
    System.out.println("Consumer received event!");
}
