Java, matlabcontrol: using eval and feval

I am having problems with the matlabcontrol library, especially with the eval and feval functions.
This is my program:
package matlab;
import java.io.IOException;
import java.util.Arrays;
import java.util.logging.Level;
import java.util.logging.Logger;
import matlabcontrol.MatlabProxy;
import matlabcontrol.MatlabProxyFactory;
import matlabcontrol.MatlabConnectionException;
import matlabcontrol.MatlabInvocationException;
import matlabcontrol.extensions.MatlabNumericArray;
import matlabcontrol.extensions.MatlabTypeConverter;
public class Matlab {
public static void main(String[] args) {
String address = "D:\\database.mat";
int hidden = 10;
int epoch = 100;
int mu = 1;
int lr =1;
int fail =1;
try {
MatlabProxyFactory factory = new MatlabProxyFactory();
MatlabProxy proxy = factory.getProxy();
proxy.eval("load " + address);
//the problem is here: unclosed string literal (a corrected sketch follows below the question)
proxy.eval("net=newff(input,target,["+hidden+"),(logsig, tansig),trainlm);
proxy.eval("net.trainparam.showwindow=true;");
proxy.eval("net.divideRand='trainlm';");
proxy.eval("net.trainparam.epochs="+epoch+";");
proxy.eval("net.trainparam.mu="+mu+";");
proxy.eval("net.trainparam.lr="+lr+";");
proxy.eval("net.trainparam.goal=0;");
proxy.eval("net.trainparam.max_fail="+fail+";");
proxy.eval("net=train(net,input,target);");
proxy.eval("class_Matlab=sim(net,target);");
proxy.eval("save net net class_Matlab");
proxy.disconnect();
} catch (MatlabConnectionException ex) {
Logger.getLogger(Matlab.class.getName()).log(Level.SEVERE, null, ex);
} catch (MatlabInvocationException ex) {
Logger.getLogger(Matlab.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
I still don't know how to use eval and feval properly. Can someone explain how to use them?
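For reference, a corrected version of the flagged line might look like the following. This is only a sketch, assuming the intent was one hidden layer of size hidden with logsig/tansig transfer functions and trainlm training; the key point is that MATLAB's quotes and braces must be embedded inside the Java string literal, and the literal must be closed:
//assumes a connected MatlabProxy named "proxy", as in the code above;
//newff takes the transfer functions as a cell array of quoted strings
//and the training function as a quoted string
proxy.eval("net=newff(input,target,[" + hidden + "],{'logsig','tansig'},'trainlm');");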
I tested the library using an example from MathWorks and it worked fine:
package matlab;
import java.io.IOException;
import java.util.Arrays;
import java.util.logging.Level;
import java.util.logging.Logger;
import matlabcontrol.MatlabProxy;
import matlabcontrol.MatlabProxyFactory;
import matlabcontrol.MatlabConnectionException;
import matlabcontrol.MatlabInvocationException;
import matlabcontrol.extensions.MatlabNumericArray;
import matlabcontrol.extensions.MatlabTypeConverter;
public class Matlab {
public static void main(String[] args) {
try {
MatlabProxyFactory factory = new MatlabProxyFactory();
MatlabProxy proxy = factory.getProxy();
proxy.eval("[x,t] = simplefit_dataset;");
proxy.eval("net= feedforwardnet(10);");
proxy.eval("net = train(net,x,t);");
proxy.eval("view(net)");
proxy.disconnect();
} catch (MatlabConnectionException ex) {
Logger.getLogger(Matlab.class.getName()).log(Level.SEVERE, null, ex);
} catch (MatlabInvocationException ex) {
Logger.getLogger(Matlab.class.getName()).log(Level.SEVERE, null, ex);
}
}
}

public static void main(String[] args) throws MatlabConnectionException, MatlabInvocationException
{
MatlabProxyFactoryOptions options =
new MatlabProxyFactoryOptions.Builder()
.setUsePreviouslyControlledSession(true)
.build();
MatlabProxyFactory factory = new MatlabProxyFactory(options);
MatlabProxy proxy = factory.getProxy();
proxy.feval("matlab_filename");
proxy.disconnect();
}
and
public static void main(String[] args) throws MatlabConnectionException, MatlabInvocationException {
//Create a proxy, which we will use to control MATLAB
MatlabProxyFactory factory = new MatlabProxyFactory();
MatlabProxy proxy = factory.getProxy();
//Set a variable, add to it, retrieve it, and print the result
proxy.setVariable("a", 5);
proxy.eval("a = a + 6");
Object result = proxy.getVariable("a");
System.out.println("Result: " + result);
//Disconnect the proxy from MATLAB
proxy.disconnect();
}
and
public static void main(String[] args) throws MatlabConnectionException, MatlabInvocationException {
//Create a proxy, which we will use to control MATLAB
MatlabProxyFactory factory = new MatlabProxyFactory();
MatlabProxy proxy = factory.getProxy();
//Display 'hello world' like before, but this time using feval
proxy.feval("disp", "hello world");
//Disconnect the proxy from MATLAB
proxy.disconnect();
}
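If you need a result back in Java, matlabcontrol also has returning variants of eval and feval. A minimal sketch, assuming a connected proxy (the nargout argument tells MATLAB how many outputs to marshal back as an Object[]):
//evaluate an expression and fetch one output
Object[] sum = proxy.returningEval("1 + 2", 1);
System.out.println(((double[]) sum[0])[0]); //prints 3.0
//call a function with arguments and fetch one output
Object[] m = proxy.returningFeval("max", 1, new double[]{1.0, 5.0, 3.0});
System.out.println(((double[]) m[0])[0]); //prints 5.0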
For more help, please work through the examples in the walkthrough first: https://code.google.com/archive/p/matlabcontrol/wikis/Walkthrough.wiki
Hope it helps.

Related

How can the PCA algorithm be implemented in Java for feature extraction from captured live packets (using the jpcap library)?

As in several research papers [1], [2], [3], it seems appropriate to use the Principal Component Analysis (PCA) algorithm for feature extraction. To attempt this, I have written a traffic sniffer program in Java using the jpcap library (snippet below), which is able to extract various features from live network traffic.
But I have not found any way to actually implement this, as existing approaches (snippets below) all seem to operate on non-live, pre-organised datasets such as ARFF files.
I have seen a number of research papers [4], [5], [6] that reference the use of the PCA algorithm in combination with the jpcap library; that said, I have not been able to find an explanation of how this was accomplished.
To be clear:
My question is: how can I implement PCA (using any approach) to extract features from live packets (obtained via the jpcap library referenced above) in Java, as others have accomplished (some examples are referenced above)?
Sniffer Example (using Jpcap Library)
Sniffer.java
import jpcap.JpcapCaptor;
import jpcap.NetworkInterface;
import jpcap.packet.Packet;
import jpcap.*;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.xml.bind.DatatypeConverter;
import java.util.List;
public class Sniffer {
public static NetworkInterface[] NETWORK_INTERFACES;
public static JpcapCaptor CAP;
static jpcap_thread THREAD;
public static int INDEX = 0;
public static int flag = 0;
public static int COUNTER = 0;
static boolean CaptureState = false;
public static int No = 0;
static JpcapWriter writer = null;
static List<Packet> packetList = new ArrayList<>();
public static ArrayList<Object[]> packetInfo = new ArrayList<>();
public static void CapturePackets() {
THREAD = new jpcap_thread() {
public Object construct() {
try {
CAP = JpcapCaptor.openDevice(NETWORK_INTERFACES[INDEX], 65535, false, 20);
// writer = JpcapWriter.openDumpFile(CAP, "captureddata");
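// filter_options is presumably a GUI combo box from the full application (not shown in this snippet)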
if ("UDP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("udp", true);
} else if ("TCP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("tcp", true);
} else if ("ICMP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("icmp", true);
}
while (CaptureState) {
CAP.processPacket(1, new PacketContents());
packetList.add(CAP.getPacket());
}
CAP.close();
} catch (Exception e) {
System.out.print(e);
}
return 0;
}
public void finished() {
this.interrupt();
}
};
THREAD.start();
}
public static void main(String[] args) {
CaptureState = true;
CapturePackets();
}
public void saveToFile() {
THREAD = new jpcap_thread() {
public Object construct() {
writer = null;
try {
CAP = JpcapCaptor.openDevice(NETWORK_INTERFACES[INDEX], 65535, false, 20);
writer = JpcapWriter.openDumpFile(CAP, "captured_data.txt");
} catch (IOException ex) {
Logger.getLogger(Sniffer.class.getName()).log(Level.SEVERE, null, ex);
}
for (int i = 0; i < No; i++) {
writer.writePacket(packetList.get(i));
}
return 0;
}
public void finished() {
this.interrupt();
}
};
THREAD.start();
}
}
PacketContents.java
import jpcap.PacketReceiver;
import jpcap.packet.Packet;
import javax.swing.table.DefaultTableModel;
import jpcap.packet.TCPPacket;
import jpcap.packet.UDPPacket;
import java.util.ArrayList;
import java.util.List;
import jpcap.packet.ICMPPacket;
public class PacketContents implements PacketReceiver {
public static TCPPacket tcp;
public static UDPPacket udp;
public static ICMPPacket icmp;
@Override
public void receivePacket(Packet packet) {
if (packet instanceof TCPPacket) {
tcp = (TCPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, tcp.length, tcp.src_ip, tcp.dst_ip, "TCP", tcp.src_port,
tcp.dst_port, tcp.ack, tcp.ack_num, tcp.data, tcp.sequence, tcp.offset, tcp.header });
Sniffer.No++;
} else if (packet instanceof UDPPacket) {
udp = (UDPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, udp.length, udp.src_ip, udp.dst_ip, "UDP", udp.src_port,
udp.dst_port, udp.data, udp.offset, udp.header });
Sniffer.No++;
} else if (packet instanceof ICMPPacket) {
icmp = (ICMPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, icmp.length, icmp.src_ip, icmp.dst_ip, "ICMP",
icmp.checksum, icmp.header, icmp.offset, icmp.orig_timestamp, icmp.recv_timestamp,
icmp.trans_timestamp, icmp.data });
Sniffer.No++;
}
}
}
Using the Weka Library to execute the PCA algorithm on an ARFF file
WekaPCA.java
package project;
import weka.core.Instances;
import weka.core.converters.ArffLoader;
import weka.core.converters.ConverterUtils;
import weka.core.converters.TextDirectoryLoader;
import java.io.File;
import org.math.plot.FrameView;
import org.math.plot.Plot2DPanel;
import org.math.plot.PlotPanel;
import org.math.plot.plots.ScatterPlot;
import weka.attributeSelection.PrincipalComponents;
import weka.attributeSelection.Ranker;
public class PCA {
public static void main(String[] args) {
try {
// Load data
String InputFilename = "kdd99.arff";
ArffLoader loader = new ArffLoader();
loader.setSource(new File(InputFilename));
Instances data = loader.getDataSet();
// Perform PCA
PrincipalComponents pca = new PrincipalComponents();
pca.setVarianceCovered(1.0);
pca.setTransformBackToOriginal(false);
pca.buildEvaluator(data);
// Show transformed data
Instances transformedData = pca.transformedData();
System.out.println(transformedData);
}
catch (Exception e) {
e.printStackTrace();
}
}
}
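To run PCA on live captures without writing an ARFF file first, one option is to build the Weka Instances in memory from the numeric fields collected per packet. A minimal sketch, assuming Weka 3.7+ (where DenseInstance exists and transformedData takes the dataset); the three-feature layout and its names are illustrative assumptions, not part of the original code:
package project;
import java.util.ArrayList;
import weka.attributeSelection.PrincipalComponents;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instances;
public class LivePCA {
//builds an in-memory dataset from per-packet numeric features and runs PCA on it
public static Instances pcaOnPackets(ArrayList<double[]> features) throws Exception {
ArrayList<Attribute> attrs = new ArrayList<>();
attrs.add(new Attribute("length"));
attrs.add(new Attribute("src_port"));
attrs.add(new Attribute("dst_port"));
Instances data = new Instances("live_packets", attrs, features.size());
for (double[] row : features) {
data.add(new DenseInstance(1.0, row)); //weight 1.0, one row per packet
}
PrincipalComponents pca = new PrincipalComponents();
pca.setVarianceCovered(0.95); //keep components covering 95% of the variance
pca.buildEvaluator(data);
return pca.transformedData(data); //projected (transformed) data
}
}
Each Object[] row appended to Sniffer.packetInfo would need to be reduced to such a numeric double[] before calling this.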

An example in Java using Embedded Cassandra Server to test a Cassandra-Spark job

I am new to Cassandra and Spark. I am trying to set up a test for my Spark job, which does the following:
Loads data from table A into DataFrames
Does some filtering, grouping and aggregating on these DataFrames
Loads the result into table B
I want to use Embedded Cassandra Server to run the test rather than having it connecting to a local instance of the Cassandra database. Has anyone done this before? If so, could someone point me to a good example please? Thanks for your help in advance!
This code does it:
package cassspark.clt;
import java.io.*;
import javafx.application.Application;
import java.util.concurrent.Executors ;
import java.util.concurrent.ExecutorService;
import org.apache.cassandra.service.CassandraDaemon;
import com.datastax.driver.core.exceptions.ConnectionException;
import java.util.Properties;
import org.apache.log4j.PropertyConfigurator;
import org.apache.spark.sql.SparkSession;
public class EmbeddedCassandraDemo extends Application {
private ExecutorService executor = Executors.newSingleThreadExecutor();
private CassandraDaemon cassandraDaemon;
public EmbeddedCassandraDemo() {
}
public static void main(String[] args) {
try {
new EmbeddedCassandraDemo().run();
}
catch(java.lang.InterruptedException e)
{
// nothing to do if start-up is interrupted
}
}
@Override public void start(javafx.stage.Stage stage) throws Exception
{
stage.show();
}
private void run() throws InterruptedException, ConnectionException {
setProperties();
activateDaemon();
}
private void activateDaemon() {
executor.execute( new Runnable() {
@Override
public void run() {
cassandraDaemon = new CassandraDaemon();
cassandraDaemon.activate();
SparkSession spark = SparkSession .builder().master("local").appName("ASH").getOrCreate();
}
});
}
private void setProperties() {
final String yaml = System.getProperty("user.dir") + File.separator +"conf"+File.separator+"cassandra.yaml";
final String storage = System.getProperty("user.dir") + File.separator +"storage" + File.separator +"data";
System.setProperty("cassandra.config", "file:"+ yaml );
System.setProperty("cassandra.storagedir", storage );
System.setProperty("cassandra-foreground", "true");
String log4JPropertyFile = "./conf/log4j.properties";
Properties p = new Properties();
try {
p.load(new FileInputStream(log4JPropertyFile));
PropertyConfigurator.configure(p);
} catch (IOException e) {
System.err.println("./conf/log4j.properties not found");
System.exit(1);
}
}
}
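Once the daemon is up, a test can talk to it through the regular DataStax driver on localhost before handing the session over to Spark. A minimal smoke-test sketch (driver 3.x, the default native port 9042, and the keyspace/table names are assumptions for illustration):
package cassspark.clt;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
public class EmbeddedCassandraSmokeTest {
public static void main(String[] args) {
//connect to the embedded daemon started by EmbeddedCassandraDemo
try (Cluster cluster = Cluster.builder()
.addContactPoint("127.0.0.1")
.withPort(9042) //default native transport port
.build();
Session session = cluster.connect()) {
session.execute("CREATE KEYSPACE IF NOT EXISTS test WITH replication = "
+ "{'class':'SimpleStrategy', 'replication_factor':1}");
session.execute("CREATE TABLE IF NOT EXISTS test.a (id int PRIMARY KEY, v text)");
session.execute("INSERT INTO test.a (id, v) VALUES (1, 'hello')");
System.out.println(session.execute("SELECT * FROM test.a").one());
}
}
}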

Zeppelin: How to create DataFrame from within custom interpreter?

I am developing a custom interpreter for a domain specific language. Based on the example given in the Apache Zeppelin documentation (https://zeppelin.incubator.apache.org/docs/latest/development/writingzeppelininterpreter.html), the interpreter works pretty well. Now I want to store some results in a new DataFrame.
I found code for creating DataFrames (http://spark.apache.org/docs/latest/sql-programming-guide.html), but I can't use it in my interpreter because I basically can't find a way to access a valid runtime SparkContext (often called "sc") from within my custom interpreter.
I tried (static) SparkContext.getOrCreate() but this even led to a ClassNotFoundException. Then I added the whole zeppelin-spark-dependencies...jar to my interpreter folder, which solved the class loading issue but now I am getting a SparkException ("master url must be set...").
Any idea how I could get access to my Notebook's SparkContext from within the custom interpreter? Thanks a lot!
UPDATE
Thanks to Kangrok Lee's comment below, my code now looks as shown below. It runs and seems to create a DataFrame (at least it doesn't throw any exception any more). But I cannot consume the created DataFrame in a subsequent SQL paragraph (the first paragraph uses my "%opl" interpreter, as given below, which should create the "result" DataFrame):
%opl
1 2 3
> 1
> 2
> 3
%sql
select * from result
> Table not found: result; line 1 pos 14
So probably there is still something wrong with my way of dealing with the SparkContext. Any ideas? Thanks a lot!
package opl;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.spark.SparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import org.apache.zeppelin.interpreter.Interpreter;
import org.apache.zeppelin.interpreter.InterpreterContext;
import org.apache.zeppelin.interpreter.InterpreterPropertyBuilder;
import org.apache.zeppelin.interpreter.InterpreterResult;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class OplInterpreter2 extends Interpreter {
static {
Interpreter.register("opl","opl",OplInterpreter2.class.getName(),
new InterpreterPropertyBuilder()
.add("spark.master", "local[4]", "spark.master")
.add("spark.app.name", "Opl Interpreter", "spark.app.name")
.add("spark.serializer", "org.apache.spark.serializer.KryoSerializer", "spark.serializer")
.build());
}
private Logger logger = LoggerFactory.getLogger(OplInterpreter2.class);
private void log(Object o) {
if (logger != null)
logger.warn("OplInterpreter2 "+o);
}
public OplInterpreter2(Properties properties) {
super(properties);
log("CONSTRUCTOR");
}
@Override
public void open() {
log("open()");
}
@Override
public void cancel(InterpreterContext arg0) {
log("cancel()");
}
@Override
public void close() {
log("close()");
}
@Override
public List<String> completion(String arg0, int arg1) {
log("completion()");
return new ArrayList<String>();
}
@Override
public FormType getFormType() {
log("getFormType()");
return FormType.SIMPLE;
}
@Override
public int getProgress(InterpreterContext arg0) {
log("getProgress()");
return 100;
}
@Override
public InterpreterResult interpret(String string, InterpreterContext context) {
log("interpret() "+string);
PrintStream oldSys = System.out;
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PrintStream ps = new PrintStream(baos);
System.setOut(ps);
execute(string);
System.out.flush();
System.setOut(oldSys);
return new InterpreterResult(
InterpreterResult.Code.SUCCESS,
InterpreterResult.Type.TEXT,
baos.toString());
} catch (Exception ex) {
System.out.flush();
System.setOut(oldSys);
return new InterpreterResult(
InterpreterResult.Code.ERROR,
InterpreterResult.Type.TEXT,
ex.toString());
}
}
private void execute(String code) throws Exception {
SparkContext sc = SparkContext.getOrCreate();
SQLContext sqlc = SQLContext.getOrCreate(sc);
StructType structType = new StructType().add("value",DataTypes.IntegerType);
ArrayList<Row> list = new ArrayList<Row>();
for (String s : code.trim().split("\\s+")) {
int value = Integer.parseInt(s);
System.out.println(value);
list.add(RowFactory.create(value));
}
DataFrame df = sqlc.createDataFrame(list,structType);
df.registerTempTable("result");
}
}
Finally I found a solution, although I don't think it is a very nice one. In the code below, I am using a function getSparkInterpreter() that I found in org.apache.zeppelin.spark.PySparkInterpreter.java.
This requires that I put my packaged code (jar) into the Spark interpreter folder instead of into its own interpreter folder, which I believe would otherwise be the preferred way (according to https://zeppelin.incubator.apache.org/docs/latest/development/writingzeppelininterpreter.html). Also, my interpreter does not show up on Zeppelin's interpreter configuration page as an interpreter of its own, but it can nevertheless be used in a Zeppelin paragraph.
And: in the code I can create a DataFrame that is also consumable outside my paragraph, which is what I wanted to achieve.
package opl;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import org.apache.zeppelin.interpreter.Interpreter;
import org.apache.zeppelin.interpreter.InterpreterContext;
import org.apache.zeppelin.interpreter.InterpreterPropertyBuilder;
import org.apache.zeppelin.interpreter.InterpreterResult;
import org.apache.zeppelin.interpreter.LazyOpenInterpreter;
import org.apache.zeppelin.interpreter.WrappedInterpreter;
import org.apache.zeppelin.spark.SparkInterpreter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class OplInterpreter2 extends Interpreter {
static {
Interpreter.register(
"opl",
"spark",//"opl",
OplInterpreter2.class.getName(),
new InterpreterPropertyBuilder()
.add("sth", "defaultSth", "some thing")
.build());
}
private Logger logger = LoggerFactory.getLogger(OplInterpreter2.class);
private void log(Object o) {
if (logger != null)
logger.warn("OplInterpreter2 "+o);
}
public OplInterpreter2(Properties properties) {
super(properties);
log("CONSTRUCTOR");
}
@Override
public void open() {
log("open()");
}
@Override
public void cancel(InterpreterContext arg0) {
log("cancel()");
}
@Override
public void close() {
log("close()");
}
@Override
public List<String> completion(String arg0, int arg1) {
log("completion()");
return new ArrayList<String>();
}
@Override
public FormType getFormType() {
log("getFormType()");
return FormType.SIMPLE;
}
@Override
public int getProgress(InterpreterContext arg0) {
log("getProgress()");
return 100;
}
@Override
public InterpreterResult interpret(String string, InterpreterContext context) {
log("interpret() "+string);
PrintStream oldSys = System.out;
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PrintStream ps = new PrintStream(baos);
System.setOut(ps);
execute(string);
System.out.flush();
System.setOut(oldSys);
return new InterpreterResult(
InterpreterResult.Code.SUCCESS,
InterpreterResult.Type.TEXT,
baos.toString());
} catch (Exception ex) {
System.out.flush();
System.setOut(oldSys);
return new InterpreterResult(
InterpreterResult.Code.ERROR,
InterpreterResult.Type.TEXT,
ex.toString());
}
}
private void execute(String code) throws Exception {
SparkInterpreter sintp = getSparkInterpreter();
SQLContext sqlc = sintp.getSQLContext();
StructType structType = new StructType().add("value",DataTypes.IntegerType);
ArrayList<Row> list = new ArrayList<Row>();
for (String s : code.trim().split("\\s+")) {
int value = Integer.parseInt(s);
System.out.println(value);
list.add(RowFactory.create(value));
}
DataFrame df = sqlc.createDataFrame(list,structType);
df.registerTempTable("result");
}
private SparkInterpreter getSparkInterpreter() {
LazyOpenInterpreter lazy = null;
SparkInterpreter spark = null;
Interpreter p = getInterpreterInTheSameSessionByClassName(SparkInterpreter.class.getName());
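// unwrap the LazyOpenInterpreter/WrappedInterpreter layers until the actual SparkInterpreter is reached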
while (p instanceof WrappedInterpreter) {
if (p instanceof LazyOpenInterpreter) {
lazy = (LazyOpenInterpreter) p;
}
p = ((WrappedInterpreter) p).getInnerInterpreter();
}
spark = (SparkInterpreter) p;
if (lazy != null) {
lazy.open();
}
return spark;
}
}
I think you should configure the Spark settings as in the statements below.
spark.master = "local[4]"
spark.app.name = "My Spark App"
spark.serializer = "org.apache.spark.serializer.KryoSerializer"
Using SparkContext.getOrCreate() looks good to me.
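A sketch of how those settings could be applied from Java; passing a SparkConf to getOrCreate() avoids the "master url must be set" error (the class and method names here are illustrative):
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
public class SparkContextSetup {
static SparkContext getContext() {
//set master/app explicitly so getOrCreate() does not fail with
//"A master URL must be set in your configuration"
SparkConf conf = new SparkConf()
.setMaster("local[4]")
.setAppName("My Spark App")
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
return SparkContext.getOrCreate(conf);
}
}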
Thanks,
Kangrok Lee

Different results from DBpedia Spotlight when using the code vs. the DBpedia Spotlight endpoint

This is the main class from which the query is fired:
package extractKeyword;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import org.apache.commons.httpclient.Header;
import org.apache.commons.httpclient.methods.GetMethod;
import org.dbpedia.spotlight.exceptions.AnnotationException;
import org.dbpedia.spotlight.model.DBpediaResource;
import org.dbpedia.spotlight.model.Text;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import java.io.File;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedList;
import java.util.List;
public class db extends AnnotationClient {
//private final static String API_URL = "http://jodaiber.dyndns.org:2222/";
private static String API_URL = "http://spotlight.dbpedia.org/";
private static double CONFIDENCE = 0.0;
private static int SUPPORT = 0;
// private static String powered_by ="non";
// private static String spotter ="CoOccurrenceBasedSelector";//"LingPipeSpotter"=Annotate all spots
//AtLeastOneNounSelector"=No verbs and adjs.
//"CoOccurrenceBasedSelector" =No 'common words'
//"NESpotter"=Only Per.,Org.,Loc.
//private static String disambiguator ="Default";//Default ;Occurrences=Occurrence-centric;Document=Document-centric
//private static String showScores ="yes";
@SuppressWarnings("static-access")
public void configuration(double CONFIDENCE, int SUPPORT)
//, String powered_by,String spotter,String disambiguator,String showScores)
{
this.CONFIDENCE=CONFIDENCE;
this.SUPPORT=SUPPORT;
// this.powered_by=powered_by;
//this.spotter=spotter;
//this.disambiguator=disambiguator;
//showScores=showScores;
}
public List<DBpediaResource> extract(Text text) throws AnnotationException {
// LOG.info("Querying API.");
String spotlightResponse;
try {
String Query=API_URL + "rest/annotate/?" +
"confidence=" + CONFIDENCE
+ "&support=" + SUPPORT
// + "&spotter=" + spotter
// + "&disambiguator=" + disambiguator
// + "&showScores=" + showScores
// + "&powered_by=" + powered_by
+ "&text=" + URLEncoder.encode(text.text(), "utf-8");
//LOG.info(Query);
GetMethod getMethod = new GetMethod(Query);
getMethod.addRequestHeader(new Header("Accept", "application/json"));
spotlightResponse = request(getMethod);
} catch (UnsupportedEncodingException e) {
throw new AnnotationException("Could not encode text.", e);
}
assert spotlightResponse != null;
JSONObject resultJSON = null;
JSONArray entities = null;
try {
resultJSON = new JSONObject(spotlightResponse);
entities = resultJSON.getJSONArray("Resources");
} catch (JSONException e) {
//throw new AnnotationException("Received invalid response from DBpedia Spotlight API.");
}
LinkedList<DBpediaResource> resources = new LinkedList<DBpediaResource>();
if(entities!=null)
for(int i = 0; i < entities.length(); i++) {
try {
JSONObject entity = entities.getJSONObject(i);
resources.add(
new DBpediaResource(entity.getString("@URI"),
Integer.parseInt(entity.getString("@support"))));
} catch (JSONException e) {
//((Object) LOG).error("JSON exception "+e);
}
}
return resources;
}
}
The extended class
package extractKeyword;
import org.apache.commons.httpclient.DefaultHttpMethodRetryHandler;
import org.apache.commons.httpclient.Header;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.HttpMethodBase;
import org.apache.commons.httpclient.HttpStatus;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.params.HttpMethodParams;
import org.dbpedia.spotlight.exceptions.AnnotationException;
import org.dbpedia.spotlight.model.DBpediaResource;
import org.dbpedia.spotlight.model.Text;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.logging.Logger;
import javax.ws.rs.HttpMethod;
/**
* @author pablomendes
*/
public abstract class AnnotationClient {
//public Logger LOG = Logger.getLogger(this.getClass());
private List<String> RES = new ArrayList<String>();
// Create an instance of HttpClient.
private static HttpClient client = new HttpClient();
public List<String> getResu(){
return RES;
}
public String request(GetMethod getMethod) throws AnnotationException {
String response = null;
// Provide a custom retry handler if necessary
getMethod.getParams().setParameter(HttpMethodParams.RETRY_HANDLER,
new DefaultHttpMethodRetryHandler(3, false));
try {
// Execute the method.
int statusCode = client.executeMethod((org.apache.commons.httpclient.HttpMethod) getMethod);
if (statusCode != HttpStatus.SC_OK) {
// LOG.error("Method failed: " + ((HttpMethodBase) method).getStatusLine());
}
// Read the response body.
byte[] responseBody = ((HttpMethodBase) getMethod).getResponseBody(); //TODO Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
// Deal with the response.
// Use caution: ensure correct character encoding and is not binary data
response = new String(responseBody);
} catch (HttpException e) {
// LOG.error("Fatal protocol violation: " + e.getMessage());
throw new AnnotationException("Protocol error executing HTTP request.",e);
} catch (IOException e) {
//((Object) LOG).error("Fatal transport error: " + e.getMessage());
//((Object) LOG).error(((HttpMethodBase) method).getQueryString());
throw new AnnotationException("Transport error executing HTTP request.",e);
} finally {
// Release the connection.
((HttpMethodBase) getMethod).releaseConnection();
}
return response;
}
protected static String readFileAsString(String filePath) throws java.io.IOException{
return readFileAsString(new File(filePath));
}
protected static String readFileAsString(File file) throws IOException {
byte[] buffer = new byte[(int) file.length()];
@SuppressWarnings("resource")
BufferedInputStream f = new BufferedInputStream(new FileInputStream(file));
f.read(buffer);
return new String(buffer);
}
static abstract class LineParser {
public abstract String parse(String s) throws ParseException;
static class ManualDatasetLineParser extends LineParser {
public String parse(String s) throws ParseException {
return s.trim();
}
}
static class OccTSVLineParser extends LineParser {
public String parse(String s) throws ParseException {
String result = s;
try {
result = s.trim().split("\t")[3];
} catch (ArrayIndexOutOfBoundsException e) {
throw new ParseException(e.getMessage(), 3);
}
return result;
}
}
}
public void saveExtractedEntitiesSet(String Question, LineParser parser, int restartFrom) throws Exception {
String text = Question;
int i=0;
//int correct =0 ; int error = 0;int sum = 0;
for (String snippet: text.split("\n")) {
String s = parser.parse(snippet);
if (s!= null && !s.equals("")) {
i++;
if (i<restartFrom) continue;
List<DBpediaResource> entities = new ArrayList<DBpediaResource>();
try {
entities = extract(new Text(snippet.replaceAll("\\s+"," ")));
System.out.println(entities.get(0).getFullUri());
} catch (AnnotationException e) {
// error++;
//LOG.error(e);
e.printStackTrace();
}
for (DBpediaResource e: entities) {
RES.add(e.uri());
}
}
}
}
public abstract List<DBpediaResource> extract(Text text) throws AnnotationException;
public void evaluate(String Question) throws Exception {
evaluateManual(Question,0);
}
public void evaluateManual(String Question, int restartFrom) throws Exception {
saveExtractedEntitiesSet(Question,new LineParser.ManualDatasetLineParser(), restartFrom);
}
}
The Main Class
package extractKeyword;
public class startAnnonation {
public static void main(String[] args) throws Exception {
String question = "What is the winning chances of BJP in New Delhi elections?";
db c = new db ();
c.configuration(0.25,0);
//, 0, "non", "AtLeastOneNounSelector", "Default", "yes");
c.evaluate(question);
System.out.println("resource : "+c.getResu());
}
}
The main problem is that when I use DBpedia Spotlight via the Spotlight jar (code above), I get a different result than from the DBpedia Spotlight endpoint (dbpedia-spotlight.github.io/demo/).
Result using the above code:
Text: What is the winning chances of BJP in New Delhi elections?
Confidence level: 0.35
resource : [Election]
Result on the DBpedia Spotlight endpoint (dbpedia-spotlight.github.io/demo/):
Text: What is the winning chances of BJP in New Delhi elections?
Confidence level: 0.35
resource : [Bharatiya_Janata_Party, New_Delhi, Election]
Also, why does Spotlight no longer have support as a parameter?
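One possible explanation (an assumption, not something confirmed in the post): the demo page and the old spotlight.dbpedia.org service run different Spotlight versions and models, so the same confidence value can produce different annotations, and newer deployments no longer document the support parameter. A sketch for querying the newer public endpoint directly, to compare against the jar (the URL is an assumption based on the current public API):
package extractKeyword;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
public class SpotlightEndpointCheck {
public static void main(String[] args) throws Exception {
String text = "What is the winning chances of BJP in New Delhi elections?";
//assumed current public endpoint; the old spotlight.dbpedia.org host may be retired
String query = "https://api.dbpedia-spotlight.org/en/annotate?confidence=0.35&text="
+ URLEncoder.encode(text, "UTF-8");
HttpURLConnection conn = (HttpURLConnection) new URL(query).openConnection();
conn.setRequestProperty("Accept", "application/json");
try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
String line;
while ((line = in.readLine()) != null) {
System.out.println(line); //raw JSON; entities are under the "Resources" key
}
}
}
}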

java urlclassloader that calls a class that has an import dependency

After some help in another thread on URLClassLoaders (understanding urlclassloader, how to access a loaded jar's classes),
I have a follow-on question, as I don't think I am approaching the problem correctly.
myPackageA.start has a URLClassLoader calling myPackageB.comms.
myPackageB.comms has a dependency on org.jgroups.JChannel
from /home/myJars/jgroups-3.4.2.Final.jar, with the following code:
package myPackageB;
import org.jgroups.JChannel;
public class SimpleChat {
JChannel channel;
String user_name=System.getProperty("user.name", "n/a");
private void start() throws Exception {
channel=new JChannel();
channel.connect("ChatCluster");
channel.getState(null, 10000);
channel.close();
}
public static void main(String[] args) throws Exception {
new SimpleChat().start();
}
}
Normally I would run the above code with java -cp /home/myJars/jgroups-3.4.2.Final.jar:myPackageB myPackageB.SimpleChat and it runs as expected.
My question is: how can I set the classpath within the code, so that the import works when the code below calls myPackageB.SimpleChat via java -cp myPackageA.jar myPackageA.start?
package myPackageA;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
public class start
{
Class<?> clazz;
private void start() throws Exception
{
if (this.clazz == null)
throw new Exception("The class was not loaded properly");
Object mySc = this.clazz.newInstance();
Method sC = this.clazz.getDeclaredMethod("main", String[].class);
String[] params = null;
sC.invoke(mySc, (Object) params);
}
public void loadSc() throws Exception
{
URL classUrl;
classUrl = new URL("file:///home/myJars/myPackageB.jar");
URL[] classUrls = { classUrl };
URLClassLoader ucl = new URLClassLoader(classUrls);
Class<?> c = ucl.loadClass("myPackageB.SimpleChat");
this.clazz = c;
}
public static void main(String[] args) throws Exception
{
start startnow = new start();
startnow.loadSc();
startnow.start();
}
}
thanks
Art
Just add the URL for jgroups-3.4.2.Final.jar to the URLClassLoader's array of URLs.
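A sketch of that change applied to loadSc() from the question (same class, only the URL array grows):
public void loadSc() throws Exception
{
URL[] classUrls = {
new URL("file:///home/myJars/myPackageB.jar"),
new URL("file:///home/myJars/jgroups-3.4.2.Final.jar") // dependency needed by SimpleChat
};
URLClassLoader ucl = new URLClassLoader(classUrls);
this.clazz = ucl.loadClass("myPackageB.SimpleChat");
}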
