How to store multi-column data in Apache Geode? - java

I am new to Apache Geode and I am trying a sample program to store data like:
empid:col1:col2
1:10:15
I have written a sample program, but at runtime it gives an error: "Error registering instantiator on pool:". Going through the logs I can see the record has been inserted into the region, but at query time I also get the following error:
Result : false
startCount : 0
endCount : 20
Message : A ClassNotFoundException was thrown while trying to deserialize cached value.
Sharing the complete code.
DataEntry.java
package com.apache.geode;
import java.util.Map.Entry;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;
import com.gemstone.gemfire.cache.query.FunctionDomainException;
import com.gemstone.gemfire.cache.query.NameResolutionException;
import com.gemstone.gemfire.cache.query.QueryInvocationTargetException;
import com.gemstone.gemfire.cache.query.TypeMismatchException;
public class DataEntry {
public static void main(String[] args) throws FunctionDomainException,TypeMismatchException, NameResolutionException, QueryInvocationTargetException {
ClientCache cache = new ClientCacheFactory().addPoolLocator(
"10.77.17.17", 10334).create();
Region<String, CustomerValue> customer = cache
.<String, CustomerValue> createClientRegionFactory(
ClientRegionShortcut.CACHING_PROXY)
.setValueConstraint(CustomerValue.class)
.setKeyConstraint(String.class).create("custRegion");
CustomerValue customerValue = new CustomerValue(10, 15);
customer.put("1", customerValue);
System.out.println("successfully Put customer object into the cache");
for (Entry<String, CustomerValue> entry : customer.entrySet()) {
System.out.format("key = %s, value = %s\n", entry.getKey(),
entry.getValue());
}
cache.close();
}
}
CustomerValue.java
package com.apache.geode;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import com.gemstone.gemfire.DataSerializable;
import com.gemstone.gemfire.Instantiator;
public class CustomerValue implements DataSerializable{
private static final long serialVersionUID = -5524295054253565345L;
private int points_5A;
private int points_10A;
static {
Instantiator.register(new Instantiator(CustomerValue.class, 45) {
public DataSerializable newInstance() {
return new CustomerValue();
}
});
}
public CustomerValue()
{
}
public CustomerValue(int points_5A,int points_10A)
{
this.points_10A=points_10A;
this.points_5A=points_5A;
}
public int getPoints_5A() {
return points_5A;
}
public void setPoints_5A(int points_5a) {
points_5A = points_5a;
}
public int getPoints_10A() {
return points_10A;
}
public void setPoints_10A(int points_10a) {
points_10A = points_10a;
}
@Override
public String toString()
{
return "customer [ 5Apoints=" + points_5A +",10Apoints=" + points_10A +"]";
}
public void fromData(DataInput in) throws IOException {
this.points_5A=in.readInt();
this.points_10A=in.readInt();
}
public void toData(DataOutput io) throws IOException {
io.writeInt(points_5A);
io.writeInt(points_10A);
}
}
output logs:
[info 2015/08/13 14:28:23.452 UTC <main> tid=0x1] Running in local mode since mcast-port was 0 and locators was empty.
[info 2015/08/13 14:28:23.635 UTC <Thread-0 StatSampler> tid=0x9] Disabling statistic archival.
[config 2015/08/13 14:28:23.881 UTC <main> tid=0x1] Pool DEFAULT started with multiuser-authentication=false
[config 2015/08/13 14:28:23.938 UTC <poolTimer-DEFAULT-3> tid=0x13] Updating membership port. Port changed from 0 to 59,982.
[warning 2015/08/13 14:28:24.176 UTC <main> tid=0x1] Error registering instantiator on pool:
com.gemstone.gemfire.cache.client.ServerOperationException: : While performing a remote registerInstantiators
at com.gemstone.gemfire.cache.client.internal.AbstractOp.processAck(AbstractOp.java:257)
at com.gemstone.gemfire.cache.client.internal.RegisterInstantiatorsOp$RegisterInstantiatorsOpImpl.processResponse(RegisterInstantiatorsOp.java:140)
at com.gemstone.gemfire.cache.client.internal.AbstractOp.processResponse(AbstractOp.java:219)
at com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:167)
at com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:373)
at com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:261)
at com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:323)
at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:932)
at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:162)
at com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:660)
at com.gemstone.gemfire.cache.client.internal.RegisterInstantiatorsOp.execute(RegisterInstantiatorsOp.java:42)
at com.gemstone.gemfire.internal.cache.PoolManagerImpl.allPoolsRegisterInstantiator(PoolManagerImpl.java:219)
at com.gemstone.gemfire.internal.InternalInstantiator.sendRegistrationMessageToServers(InternalInstantiator.java:206)
at com.gemstone.gemfire.internal.InternalInstantiator._register(InternalInstantiator.java:161)
at com.gemstone.gemfire.internal.InternalInstantiator.register(InternalInstantiator.java:89)
at com.gemstone.gemfire.Instantiator.register(Instantiator.java:175)
at CustomerValue.<clinit>(CustomerValue.java:16)
at DataEntry.main(DataEntry.java:22)
Caused by: java.lang.ClassNotFoundException: CustomerValue$1
at com.gemstone.gemfire.internal.ClassPathLoader.forName (ClassPathLoader.java:422)
at com.gemstone.gemfire.internal.InternalDataSerializer.getCachedClass (InternalDataSerializer.java:4066)
at com.gemstone.gemfire.internal.cache.tier.sockets.command.RegisterInstantiators.cmdExecute(RegisterInstantiators.java:89)
at com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:182)
at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:787)
at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:914)
at com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1159)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at com.gemstone.gemfire.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:580)
at java.lang.Thread.run(Thread.java:745)
successfully Put customer object into the cache
key = 1, value = customer [ 5Apoints=10,10Apoints=15]
[info 2015/08/13 14:28:24.225 UTC <main> tid=0x1] GemFireCache[id = 712610161; isClosing = true; isShutDownAll = false; closingGatewayHubsByShutdownAll = false; created = Thu Aug 13 14:28:23 UTC 2015; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]: Now closing.
[info 2015/08/13 14:28:24.277 UTC <main> tid=0x1] Resetting original MemoryPoolMXBean heap threshold bytes 0 on pool PS Old Gen
[config 2015/08/13 14:28:24.329 UTC <main> tid=0x1] Destroying connection pool DEFAULT

Your CustomerValue class needs to be on the server's classpath. Please refer to the Geode documentation on how to deploy jars to the server.
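For example, with gfsh it is roughly this (the locator address comes from the code above; the jar path is illustrative):
gfsh> connect --locator=10.77.17.17[10334]
gfsh> deploy --jar=/path/to/customer-value.jar
Note that the stack trace complains about CustomerValue$1, the anonymous Instantiator subclass created in the static initializer, so the deployed jar must include the inner classes as well.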

Related

WARNING: RPC failed: Status{code=NOT_FOUND, description=Not found, cause=null} when trying to use GRPC in java and Nodejs

I am currently trying to send a request to a Node.js server from a Java client that I created, but I am getting the error shown above. I've been doing some research on it but can't seem to figure out why it is happening. The server I created in Node.js:
var grpc = require('grpc');
const protoLoader = require('@grpc/proto-loader')
const packageDefinition = protoLoader.loadSync('AirConditioningDevice.proto')
var AirConditioningDeviceproto = grpc.loadPackageDefinition(packageDefinition);
var AirConditioningDevice = [{
device_id: 1,
name: 'Device1',
location: 'room1',
status: 'On',
new_tempature: 11
}];
var server = new grpc.Server();
server.addService(AirConditioningDeviceproto.AirConditioningDevice.Airconditioning_service.service,{
currentDetails: function(call, callback){
console.log(call.request.device_id);
for(var i =0; i <AirConditioningDevice.length; i++){
console.log(call.request.device_id);
if(AirConditioningDevice[i].device_id == call.request.device_id){
console.log(call.request.device_id);
return callback(null, AirConditioningDevice [i]);
}
console.log(call.request.device_id);
}
console.log(call.request.device_id);
callback({
code: grpc.status.NOT_FOUND,
details: 'Not found'
});
},
setTemp: function(call, callback){
for(var i =0; i <AirConditioningDevice.length; i++){
if(AirConditioningDevice[i].device_id == call.request.device_id){
AirConditioningDevice[i].new_tempature == call.request.new_tempature;
return callback(null, AirConditioningDevice[i]);
}
}
callback({
code: grpc.status.NOT_FOUND,
details: 'Not found'
});
},
setOff: function(call, callback){
for(var i =0; i <AirConditioningDevice.length; i++){
if(AirConditioningDevice[i].device_id == call.request.device_id && AirConditioningDevice[i].status == 'on'){
AirConditioningDevice[i].status == 'off';
return callback(null, AirConditioningDevice[i]);
}else{
AirConditioningDevice[i].status == 'on';
return callback(null, AirConditioningDevice[i]);
}
}
callback({
code: grpc.status.NOT_FOUND,
details: 'Not found'
});
}
});
server.bind('localhost:3000', grpc.ServerCredentials.createInsecure());
server.start();
This is the client that I have created in Java:
package com.air.grpc;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import com.air.grpc.Airconditioning_serviceGrpc;
import com.air.grpc.GrpcClient;
import com.air.grpc.deviceIDRequest;
import com.air.grpc.ACResponse;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.StatusRuntimeException;
public class GrpcClient {
private static final Logger logger = Logger.getLogger(GrpcClient.class.getName());
private final ManagedChannel channel;
private final Airconditioning_serviceGrpc.Airconditioning_serviceBlockingStub blockingStub;
private final Airconditioning_serviceGrpc.Airconditioning_serviceStub asyncStub;
public GrpcClient(String host, int port) {
this(ManagedChannelBuilder.forAddress(host, port)
// Channels are secure by default (via SSL/TLS). For the example we disable TLS to avoid
// needing certificates.
.usePlaintext()
.build());
}
GrpcClient(ManagedChannel channel) {
this.channel = channel;
blockingStub = Airconditioning_serviceGrpc.newBlockingStub(channel);
asyncStub = Airconditioning_serviceGrpc.newStub(channel);
}
public void shutdown() throws InterruptedException {
channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
}
public void currentDetails(int id) {
logger.info("Will try to get device " + id + " ...");
deviceIDRequest deviceid = deviceIDRequest.newBuilder().setDeviceId(id).build();
ACResponse response;
try {
response =blockingStub.currentDetails(deviceid);
}catch(StatusRuntimeException e) {
logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
return;
}
logger.info("Device: " + response.getAirConditioning ());
}
public static void main(String[] args) throws Exception {
GrpcClient client = new GrpcClient("localhost", 3000);
try {
client.currentDetails(1);
}finally {
client.shutdown();
}
}
}
Right now the only one that I have tested, because it's the most basic one, is currentDetails. As you can see, I have created an AirConditioningDevice object. I am trying to get its details by typing 1 (the id) into a textbox, but as I said, when I send it I get the error in the title. This is the proto file that I have created:
syntax = "proto3";
package AirConditioningDevice;
option java_package = "AircondioningDevice.proto.ac";
service Airconditioning_service{
rpc currentDetails(deviceIDRequest) returns (ACResponse) {};
rpc setTemp( TempRequest ) returns (ACResponse) {};
rpc setOff(deviceIDRequest) returns (ACResponse) {};
}
message AirConditioning{
int32 device_id =1;
string name = 2;
string location = 3;
string status = 4;
int32 new_tempature = 5;
}
message deviceIDRequest{
int32 device_id =1;
}
message TempRequest {
int32 device_id = 1;
int32 new_temp = 2;
}
message ACResponse {
AirConditioning airConditioning = 1;
}
Lastly, this is everything I get back in the console:
Apr 02, 2020 4:23:29 PM AircondioningDevice.proto.ac.AirConClient currentDetails
INFO: Will try to get device 1 ...
Apr 02, 2020 4:23:30 PM AircondioningDevice.proto.ac.AirConClient currentDetails
WARNING: RPC failed: Status{code=NOT_FOUND, description=Not found, cause=null}
I don't know whether I am completely off or if the error is small. Any suggestions? One other thing: I use the same proto file in the Java client and the Node server; I don't know if that matters. One last thing: I also get this when I run my server: DeprecationWarning: grpc.load: Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead. I don't know if that has anything to do with it.
In your .proto file, you declare deviceIDRequest with a field device_id, but you are checking call.request.id in the currentDetails handler. If you look at call.request.id directly, it's probably undefined.
You also aren't getting to this bit yet, but the success callback is using the books array instead of the AirConditioningDevice array.

My Kafka sink connector for Neo4j fails to load

Introduction:
Let me start by apologizing for any vagueness in my question. I will try to provide as much information on this topic as I can (hopefully not too much); please let me know if I should provide more. As well, I am quite new to Kafka and will probably stumble on terminology.
So, from my understanding of how the sink and source work, I can use the FileStreamSourceConnector provided by the Kafka Quickstart guide to write data (Neo4j commands) to a topic held in a Kafka cluster. Then I can write my own Neo4j sink connector and task to read those commands and send them to one or more Neo4j servers. To keep the project as simple as possible for now, I based the sink connector and task on the Kafka Quickstart guide's FileStreamSinkConnector and FileStreamSinkTask.
Kafka's FileStream:
FileStreamSourceConnector
FileStreamSourceTask
FileStreamSinkConnector
FileStreamSinkTask
My Neo4j Sink Connector:
package neo4k.sink;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;
import org.apache.kafka.common.utils.AppInfoParser;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.sink.SinkConnector;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class Neo4jSinkConnector extends SinkConnector {
public enum Keys {
;
static final String URI = "uri";
static final String USER = "user";
static final String PASS = "pass";
static final String LOG = "log";
}
private static final ConfigDef CONFIG_DEF = new ConfigDef()
.define(Keys.URI, Type.STRING, "", Importance.HIGH, "Neo4j URI")
.define(Keys.USER, Type.STRING, "", Importance.MEDIUM, "User Auth")
.define(Keys.PASS, Type.STRING, "", Importance.MEDIUM, "Pass Auth")
.define(Keys.LOG, Type.STRING, "./neoj4sinkconnecterlog.txt", Importance.LOW, "Log File");
private String uri;
private String user;
private String pass;
private String logFile;
@Override
public String version() {
return AppInfoParser.getVersion();
}
@Override
public void start(Map<String, String> props) {
uri = props.get(Keys.URI);
user = props.get(Keys.USER);
pass = props.get(Keys.PASS);
logFile = props.get(Keys.LOG);
}
@Override
public Class<? extends Task> taskClass() {
return Neo4jSinkTask.class;
}
@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
ArrayList<Map<String, String>> configs = new ArrayList<>();
for (int i = 0; i < maxTasks; i++) {
Map<String, String> config = new HashMap<>();
if (uri != null)
config.put(Keys.URI, uri);
if (user != null)
config.put(Keys.USER, user);
if (pass != null)
config.put(Keys.PASS, pass);
if (logFile != null)
config.put(Keys.LOG, logFile);
configs.add(config);
}
return configs;
}
@Override
public void stop() {
}
@Override
public ConfigDef config() {
return CONFIG_DEF;
}
}
My Neo4j Sink Task:
package neo4k.sink;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;
import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.StatementResult;
import org.neo4j.driver.v1.exceptions.Neo4jException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Collection;
import java.util.Map;
public class Neo4jSinkTask extends SinkTask {
private static final Logger log = LoggerFactory.getLogger(Neo4jSinkTask.class);
private String uri;
private String user;
private String pass;
private String logFile;
private Driver driver;
private Session session;
public Neo4jSinkTask() {
}
@Override
public String version() {
return new Neo4jSinkConnector().version();
}
@Override
public void start(Map<String, String> props) {
uri = props.get(Neo4jSinkConnector.Keys.URI);
user = props.get(Neo4jSinkConnector.Keys.USER);
pass = props.get(Neo4jSinkConnector.Keys.PASS);
logFile = props.get(Neo4jSinkConnector.Keys.LOG);
driver = null;
session = null;
try {
driver = GraphDatabase.driver(uri, AuthTokens.basic(user, pass));
session = driver.session();
} catch (Neo4jException ex) {
log.trace(ex.getMessage(), logFilename());
}
}
@Override
public void put(Collection<SinkRecord> sinkRecords) {
StatementResult result;
for (SinkRecord record : sinkRecords) {
result = session.run(record.value().toString());
log.trace(result.toString(), logFilename());
}
}
@Override
public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) {
}
@Override
public void stop() {
if (session != null)
session.close();
if (driver != null)
driver.close();
}
private String logFilename() {
return logFile == null ? "stdout" : logFile;
}
}
The Issue:
After writing that, I built it, including any dependencies it had (excluding any Kafka dependencies), into a jar (or uber jar? It was one file). Then I edited the plugin path in connect-standalone.properties to include that artifact and wrote a properties file for my Neo4j sink connector. I did all this in an attempt to follow these guidelines.
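For reference, the plugin path entry in connect-standalone.properties looks something like this (the directory is illustrative; it should contain the connector jar):
plugin.path=/opt/connectors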
My Neo4j sink connector properties file:
name=neo4k-sink
connector.class=neo4k.sink.Neo4jSinkConnector
tasks.max=1
uri=bolt://localhost:7687
user=neo4j
pass=Hunter2
topics=connect-test
But upon running the standalone, I get this error in the output that shuts down the stream (Error on line 5):
[2017-08-14 12:59:00,150] INFO Kafka version : 0.11.0.0 (org.apache.kafka.common.utils.AppInfoParser:83)
[2017-08-14 12:59:00,150] INFO Kafka commitId : cb8625948210849f (org.apache.kafka.common.utils.AppInfoParser:84)
[2017-08-14 12:59:00,153] INFO Source task WorkerSourceTask{id=local-file-source-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:143)
[2017-08-14 12:59:00,153] INFO Created connector local-file-source (org.apache.kafka.connect.cli.ConnectStandalone:91)
[2017-08-14 12:59:00,153] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:100)
java.lang.IllegalArgumentException: Malformed \uxxxx encoding.
at java.util.Properties.loadConvert(Properties.java:574)
at java.util.Properties.load0(Properties.java:390)
at java.util.Properties.load(Properties.java:341)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:429)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:84)
[2017-08-14 12:59:00,156] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2017-08-14 12:59:00,156] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:154)
[2017-08-14 12:59:00,168] INFO Stopped ServerConnector#540accf4{HTTP/1.1}{0.0.0.0:8083} (org.eclipse.jetty.server.ServerConnector:306)
[2017-08-14 12:59:00,173] INFO Stopped o.e.j.s.ServletContextHandler#6d548d27{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:865)
Edit: I should mention that during the part of connector loading where the output declares which plugins have been added, I do not see any mention of the jar that I built earlier and added to the path in connect-standalone.properties. Here's a snippet for context:
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.file.FileStreamSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.MockSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSourceConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,969] INFO Added plugin 'org.apache.kafka.connect.tools.VerifiableSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
[2017-08-14 12:58:58,970] INFO Added plugin 'org.apache.kafka.connect.tools.MockConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)
Conclusion:
I am at a loss. I've been testing and researching for about a couple of hours and I'm not sure exactly what question to ask. So I'll say thank you for reading if you've gotten this far. If you noticed anything glaring that I may have done wrong in code or in method (e.g. packaging the jar), or think I should provide more context or console logs or anything really, let me know. Thank you, again.
As pointed out by @Randall Hauch, my properties file had hidden characters within it because it was a rich-text document. I fixed this by duplicating the connect-file-sink.properties file provided with Kafka, which I believe is just a regular text document, then renaming and editing that duplicate for my Neo4j sink properties.
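A quick way to spot such hidden bytes is a one-liner like this (a sketch assuming GNU grep with PCRE support; the file name is illustrative):
LC_ALL=C grep -nP '[^\x00-\x7F]' neo4j-sink.properties
Any line it prints contains a non-ASCII byte that java.util.Properties can choke on.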

Catch user exception in remote service at caller level

I am running multiple services in an Ignite cluster which depend on each other.
I'd like to catch (user-defined) exceptions at the caller level when I call a remote service function. See the example below, based on the Service example in the docs for 1.7.
MyUserException.java
package com.example.testing;
public class MyUserException extends Throwable {}
MyCounterService.java
package com.example.testing;
public interface MyCounterService {
int increment() throws MyUserException;
}
MyCounterServiceImpl.java (Error condition is ignite.cluster().forYoungest())
package com.example.testing;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteServices;
import org.apache.ignite.Ignition;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;
public class MyCounterServiceImpl implements MyCounterService, Service {
@IgniteInstanceResource
private Ignite ignite;
private int value = 0;
public int increment() throws MyUserException {
if ((value % 2) == 0) {
throw new MyUserException();
} else {
value++;
}
return value;
}
public static void main(String [] args) {
Ignite ignite = Ignition.start();
IgniteServices svcs = ignite.services(ignite.cluster().forYoungest());
svcs.deployNodeSingleton("MyCounterService", new MyCounterServiceImpl());
}
@Override
public void cancel(ServiceContext ctx) {
System.out.println("Service cancelled");
}
@Override
public void init(ServiceContext ctx) throws Exception {
System.out.println("Service initialized");
}
@Override
public void execute(ServiceContext ctx) throws Exception {
System.out.println("Service running");
}
}
MyCallerService.java
package com.example.testing;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;
public class MyCallerService implements Service {
@IgniteInstanceResource
private Ignite ignite;
private Boolean stopped;
public void run() {
stopped = false;
MyCounterService service = ignite.services().serviceProxy("MyCounterService", MyCounterService.class, false);
while (!stopped)
{
try {
Thread.sleep(500);
service.increment();
} catch (MyUserException e) {
System.out.println("Got exception");
//e.printStackTrace();
} catch (InterruptedException e) {
//e.printStackTrace();
}
catch (IgniteException e) {
System.out.println("Got critial exception");
// would print the actual user exception
//e.getCause().getCause().getCause().printStackTrace();
break;
}
}
}
public static void main(String [] args) {
Ignite ignite = Ignition.start();
ignite.services(ignite.cluster().forYoungest()).deployNodeSingleton("MyCallerService", new MyCallerService());
}
@Override
public void cancel(ServiceContext ctx) {
stopped = true;
}
@Override
public void init(ServiceContext ctx) throws Exception {
}
@Override
public void execute(ServiceContext ctx) throws Exception {
run();
}
}
The exception is not caught at the caller level; instead, these exceptions show up in the console. How do I catch and handle the exceptions properly when a service function is called?
Output of MyCounterServiceImpl
[18:23:23] Ignite node started OK (id=c82df19c)
[18:23:23] Topology snapshot [ver=1, servers=1, clients=0, CPUs=4, heap=3.5GB]
Service initialized
Service running
[18:23:27] Topology snapshot [ver=2, servers=2, clients=0, CPUs=4, heap=7.0GB]
Nov 17, 2016 6:23:28 PM org.apache.ignite.logger.java.JavaLogger error
SCHWERWIEGEND: Failed to execute job [jobId=82580537851-3c0a354f-69b5-496c-af10-ee789a5387c3, ses=GridJobSessionImpl [ses=GridTaskSessionImpl [taskName=o.a.i.i.processors.service.GridServiceProxy$ServiceProxyCallable, dep=LocalDeployment [super=GridDeployment [ts=1479403401422, depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader#1d44bcfa, clsLdrId=4fe60537851-c82df19c-cdff-43ef-b7b6-e8485231629a, userVer=0, loc=true, sampleClsName=java.lang.String, pendingUndeploy=false, undeployed=false, usage=0]], taskClsName=o.a.i.i.processors.service.GridServiceProxy$ServiceProxyCallable, sesId=72580537851-3c0a354f-69b5-496c-af10-ee789a5387c3, startTime=1479403408961, endTime=9223372036854775807, taskNodeId=3c0a354f-69b5-496c-af10-ee789a5387c3, clsLdr=sun.misc.Launcher$AppClassLoader#1d44bcfa, closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false, subjId=3c0a354f-69b5-496c-af10-ee789a5387c3, mapFut=IgniteFuture [orig=GridFutureAdapter [resFlag=0, res=null, startTime=1479403408960, endTime=0, ignoreInterrupts=false, state=INIT]]], jobId=82580537851-3c0a354f-69b5-496c-af10-ee789a5387c3]]
class org.apache.ignite.IgniteException: null
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2V2.execute(GridClosureProcessor.java:2009)
at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1161)
at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1766)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.ignite.internal.processors.service.GridServiceProxy$ServiceProxyCallable.call(GridServiceProxy.java:392)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2V2.execute(GridClosureProcessor.java:2006)
... 14 more
Caused by: com.example.testing.MyUserException
at com.example.testing.MyCounterServiceImpl.increment(MyCounterServiceImpl.java:19)
... 20 more
Output of MyCallerService
[18:23:28] Ignite node started OK (id=3c0a354f)
[18:23:28] Topology snapshot [ver=2, servers=2, clients=0, CPUs=4, heap=7.0GB]
Nov 17, 2016 6:23:28 PM org.apache.ignite.logger.java.JavaLogger error
SCHWERWIEGEND: Failed to obtain remote job result policy for result from ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl [job=C2V2 [c=ServiceProxyCallable [mtdName=increment, svcName=MyCounterService, ignite=null]], sib=GridJobSiblingImpl [sesId=72580537851-3c0a354f-69b5-496c-af10-ee789a5387c3, jobId=82580537851-3c0a354f-69b5-496c-af10-ee789a5387c3, nodeId=c82df19c-cdff-43ef-b7b6-e8485231629a, isJobDone=false], jobCtx=GridJobContextImpl [jobId=82580537851-3c0a354f-69b5-496c-af10-ee789a5387c3, timeoutObj=null, attrs={}], node=TcpDiscoveryNode [id=c82df19c-cdff-43ef-b7b6-e8485231629a, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.18.22.52], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, /172.18.22.52:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1479403407847, loc=false, ver=1.7.0#20160801-sha1:383273e3, isClient=false], ex=class o.a.i.IgniteException: null, hasRes=true, isCancelled=false, isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw user exception (override or implement ComputeTask.result(..) method if you would like to have automatic failover for this exception).
at org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
at org.apache.ignite.internal.processors.task.GridTaskWorker$4.apply(GridTaskWorker.java:946)
at org.apache.ignite.internal.processors.task.GridTaskWorker$4.apply(GridTaskWorker.java:939)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6553)
at org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:939)
at org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:810)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:995)
at org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1220)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteException: null
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2V2.execute(GridClosureProcessor.java:2009)
at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1161)
at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1766)
... 7 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.ignite.internal.processors.service.GridServiceProxy$ServiceProxyCallable.call(GridServiceProxy.java:392)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2V2.execute(GridClosureProcessor.java:2006)
... 14 more
Caused by: com.example.testing.MyUserException
at com.example.testing.MyCounterServiceImpl.increment(MyCounterServiceImpl.java:19)
... 20 more
Got critical exception
Apparently this is a bug that is to be resolved:
https://issues.apache.org/jira/browse/IGNITE-4298
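Until that is fixed, a workaround at the caller is to search the IgniteException's cause chain for the user exception, instead of relying on a fixed e.getCause().getCause().getCause() depth as in the commented-out line above. A minimal sketch (the helper name is hypothetical, not an Ignite API):
private static MyUserException findUserException(Throwable t) {
    // Walk the cause chain until the user exception is found or the chain runs out.
    while (t != null) {
        if (t instanceof MyUserException) {
            return (MyUserException) t;
        }
        t = t.getCause();
    }
    return null;
}
In the catch (IgniteException e) block, call findUserException(e); if it returns non-null, handle it as the user error, otherwise treat the IgniteException as critical.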
I think the exception should be thrown over to the caller node. Could you please provide a full code example? Also, it is strange that the node which hosts the service got an exception with a null value.
UPD:
Could you please also add the log, because for me everything works as expected: I caught MyUserException and got the "Got exception" message in the log.

I want to send a file from one agent to another agent?

I want to send a file from one agent to another agent using JADE on the same PC.
Here are the errors which occur during execution:
*** Uncaught Exception for agent a ***
ERROR: Agent a died without being properly terminated !!!
java.lang.RuntimeException: Uncompilable source code - incompatible types: java.lang.String cannot be converted to byte[]
State was 2
ERROR: Agent b died without being properly terminated !!!
State was 2
at sendmessage.A.sendMessage(A.java:36)
at sendmessage.A.setup(A.java:25)
at jade.core.Agent$ActiveLifeCycle.init(Agent.java:1490)
at jade.core.Agent.run(Agent.java:1436)
at java.lang.Thread.run(Thread.java:745)
Nov 20, 2015 4:21:34 PM jade.core.messaging.MessagingService removeLocalAliases
INFO: Removing all local alias entries for agent a
Nov 20, 2015 4:21:34 PM jade.core.messaging.MessagingService removeGlobalAliases
INFO: Removing all global alias entries for agent a
*** Uncaught Exception for agent b ***
java.lang.RuntimeException: Uncompilable source code - Erroneous sym type: sendmessage.B.MyBehaviour.receive
at sendmessage.B$MyBehaviour.action(B.java:40)
at jade.core.behaviours.Behaviour.actionWrapper(Behaviour.java:344)
at jade.core.Agent$ActiveLifeCycle.execute(Agent.java:1500)
at jade.core.Agent.run(Agent.java:1439)
at java.lang.Thread.run(Thread.java:745)
Nov 20, 2015 4:21:34 PM jade.core.messaging.MessagingService removeLocalAliases
INFO: Removing all local alias entries for agent b
Nov 20, 2015 4:21:34 PM jade.core.messaging.MessagingService removeGlobalAliases
INFO: Removing all global alias entries for agent b
Nov 20, 2015 4:21:42 PM jade.core.messaging.MessagingService removeLocalAliases
INFO: Removing all local alias entries for agent rma
Nov 20, 2015 4:21:42 PM jade.core.messaging.MessagingService removeGlobalAliases
INFO: Removing all global alias entries for agent rma
Sender: the agent which sends the file to another agent using JADE.
package sendmessage;
import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.Behaviour;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.MessageTemplate;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
/**
*
* @author Administrator
*/
public class A extends Agent {
protected void setup() {
sendMessage();
this.addBehaviour(new MyBehaviour(this));
}
private void sendMessage() {
AID r = new AID("b", AID.ISLOCALNAME);
// ACLMessage acl = new ACLMessage(ACLMessage.REQUEST);
// acl.addReceiver(r);
// acl.setContent("hello, my name is sender!");
// this.send(acl);
String fileName = "a.txt";// get file name
byte[] fileContent = "f://a.txt";// read file content
ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
msg.addReceiver(r);
msg.setByteSequenceContent(fileContent);
msg.addUserDefinedParameter("file-name", fileName);
send(msg);
}
private static class MyBehaviour extends Behaviour {
MessageTemplate mt = MessageTemplate.MatchPerformative(ACLMessage.INFORM);
private static int finish;
public MyBehaviour(A aThis) {
}
@Override
public void action() {
ACLMessage acl = myAgent.receive(mt);
if (acl != null) {
System.out.println(myAgent.getLocalName() + " received a reply: " + acl.getContent() + "from " + acl.getSender());
finish = 1;
} else {
this.block();
}
}
@Override
public boolean done() {
return finish == 1;
}
}
}
Receiver: the agent which receives the file from the sender agent using JADE.
package sendmessage;
import jade.core.Agent;
import jade.core.behaviours.Behaviour;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.MessageTemplate;
import java.io.File;
/**
*
* @author Administrator
*/
public class B extends Agent {
protected void setup() {
this.addBehaviour(new MyBehaviour(this));
}
private static class MyBehaviour extends Behaviour {
MessageTemplate mt = MessageTemplate.MatchPerformative(ACLMessage.REQUEST);
public MyBehaviour(B aThis) {
}
@Override
public void action() {
// ACLMessage acl = myAgent.receive(mt);
// if (acl!= null) {
// System.out.println(myAgent.getLocalName()+ " received a message: "+acl.getContent());
// ACLMessage rep = acl.createReply();
// rep.setPerformative(ACLMessage.INFORM);
// rep.setContent("ok, i received a message!");
// myAgent.send(rep);
ACLMessage msg = receive("Yes Received your file");
if (msg != null) {
String fileName = msg.getUserDefinedParameter("a.txt");
File f = "a.txt"; // create file called fileName
byte[] fileContent = msg.getByteSequenceContent();
// write fileContent to f
}
else {
this.block();
}
}
@Override
public boolean done() {
return false;
}
}
}
It is probably a good idea to create an Ontology for your agent system if you are going to be exchanging files around a lot. JADE offers the ability to use OntologyBean classes as representations of the agent communication methods (which really simplifies things).
The BeanOntology class adds custom language in the form of predicates, concepts, etc. Inside these custom language descriptions you can place an object and any other information you may want to specify alongside it. It is actually quite simple if you follow the documentation provided by JADE --> tutorial here
Good luck!
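That said, the immediate compile errors in the question come from assigning a String where a byte[] (sender) and a File (receiver) are expected, and from calling receive with a String argument. A minimal sketch of the direct fix, reusing the JADE calls and imports already shown in the question (the file path is illustrative; IOException handling is left to the caller):
// Sender side: actually read the file's bytes instead of assigning the path String to byte[].
byte[] fileContent = Files.readAllBytes(Paths.get("f:/a.txt"));
ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
msg.addReceiver(new AID("b", AID.ISLOCALNAME));
msg.setByteSequenceContent(fileContent);
msg.addUserDefinedParameter("file-name", "a.txt");
send(msg);
// Receiver side (inside the behaviour's action()): match the INFORM performative
// the sender used, then write the bytes to disk under the transmitted file name.
ACLMessage received = myAgent.receive(MessageTemplate.MatchPerformative(ACLMessage.INFORM));
if (received != null) {
    Files.write(Paths.get(received.getUserDefinedParameter("file-name")), received.getByteSequenceContent());
} else {
    block();
}
(Files and Paths here are java.nio.file.Files and java.nio.file.Paths.)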

How can we remove extra messages in log files

I have a simple logging program, i.e.:
import java.util.logging.*;
public class LoggingExample1 {
public static void main(String args[]) {
try {
LogManager lm = LogManager.getLogManager();
Logger logger;
FileHandler fh = new FileHandler("log_test.txt");
logger = Logger.getLogger("LoggingExample1");
lm.addLogger(logger);
logger.setLevel(Level.INFO);
fh.setFormatter(new SimpleFormatter());
logger.addHandler(fh);
// root logger defaults to SimpleFormatter. We don't want messages
// logged twice.
//logger.setUseParentHandlers(false);
logger.log(Level.INFO, "test 1");
logger.log(Level.INFO, "test 2");
logger.log(Level.INFO, "test 3");
fh.close();
} catch (Exception e) {
System.out.println("Exception thrown: " + e);
e.printStackTrace();
}
}
}
I get this log:
Aug 1, 2011 5:36:37 PM LoggingExample1 main
INFO: test 1
Aug 1, 2011 5:36:37 PM LoggingExample1 main
INFO: test 2
Aug 1, 2011 5:36:37 PM LoggingExample1 main
INFO: test 3
but I want to remove the messages like "LoggingExample1 main" and "INFO"
and only keep the data logged by the code.
What can I do?
This is because you are using a SimpleFormatter which always logs the class name, method name etc. If you don't want this information, you can write your own formatter. Here is a simple example of a formatter which just outputs the log level and the log message:
import java.util.logging.*;
class MyFormatter extends Formatter{
/* (non-Javadoc)
* @see java.util.logging.Formatter#format(java.util.logging.LogRecord)
*/
@Override
public String format(LogRecord record) {
StringBuilder sb = new StringBuilder();
sb.append(record.getLevel()).append(':');
sb.append(record.getMessage()).append('\n');
return sb.toString();
}
}
Use this formatter in your file handler:
fh.setFormatter(new MyFormatter());
This will output:
INFO:test 1
INFO:test 2
INFO:test 3
Extend the SimpleFormatter class as below:
public class MyFormatter extends SimpleFormatter{
@Override
public synchronized String format(LogRecord record){
record.setSourceClassName(MyFormatter.class.getName());
return String.format(
" [%1$s] :%2$s\n",
record.getLevel().getName(), formatMessage(record));
}
}
Then in your main class, set the console handler's and the file handler's formatter to an instance of your formatter class.
In my case, it is:
consoleHandler.setFormatter(new MyFormatter());
fileHandler.setFormatter(new MyFormatter());
The most important thing is that you have to set
logger.setUseParentHandlers(false);
Now you are good to go!
If I understand you correctly, you want to remove INFO and only have test 1 and so on?
If that is the case, I don't think it's possible. The reason is that you need to know what type of message this is:
ERROR
WARNING
INFO
DEBUG
TRACE
You can change the level to only display WARNING and up if you want.
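For example, with the logger and file handler from the question:
// Only WARNING and SEVERE records are written; INFO and below are dropped.
logger.setLevel(Level.WARNING);
fh.setLevel(Level.WARNING);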
Here is how I do this, a simplified version of SimpleFormatter:
package org.nibor.git_merge_repos;
import java.util.Date;
import java.util.logging.Formatter;
import java.util.logging.LogRecord;
public class CustomLogFormatter extends Formatter {
private final Date dat = new Date();
private static final String separator = " : ";
public synchronized String format(LogRecord record) {
dat.setTime(record.getMillis());
return dat + separator + record.getLevel().getName() + separator + record.getMessage() + "\n";
}
}
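Hook it up the same way as the other formatters above, e.g. fh.setFormatter(new CustomLogFormatter());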
