I have written this Java code for appending data with ObjectOutputStream, but it throws java.io.StreamCorruptedException. Please help: if this code cannot work properly, then please give an alternative for appending data in ObjectOutputStream.
import java.awt.Toolkit;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Vector;
import javax.swing.JOptionPane;
class Data implements Serializable {
private static final long serialVersionUID = 1L;
private String time;
private String note;
public Data(String time, String note) {
this.time=time;
this.note=note;
}
public String getTime() {
return time;
}
public String getNote() {
return note;
}
}
public class S extends ObjectOutputStream {
String t, n;
public S(FileOutputStream w, String time, String note) throws Exception {
super(w);
t=time;
n=note;
writeStreamHeader();
}
protected void writeStreamHeader() throws IOException {
writeObject(new Data(t,n));
reset();
}
public static void rd() {
Vector v = new Vector();
Data d;
try
{
ObjectInputStream r = new ObjectInputStream(new FileInputStream("file.cer"));
for(int i=1; i<=100; i++) {
try { v.add(r.readObject()); }
catch(EOFException exp){
r.close();
break;
}
}
for(int i=0; i<v.size(); i++) {
d = (Data)v.elementAt(i);
System.out.println(d.getNote()+" "+d.getTime());
}
}
catch(Exception exp) {
Toolkit.getDefaultToolkit().beep();
JOptionPane.showMessageDialog(null, String.format("ERROR = %s\nCLASS = S", exp.getClass()));
System.out.println(exp.getClass());
System.exit(0);
}
}
public static void main(String arg[]) throws Exception {
FileOutputStream w = new FileOutputStream("file.cer",true);
new S(w,"99:59:59:99","Maxima");
new S(w,"00:00:00:00","Minima");
rd();
}
}
"if this code cannot work properly"
You are correct. It cannot work properly.
"then please give an alternative for appending data in ObjectOutputStream."
writeStreamHeader() must do nothing if the file is being appended to, but it must call super.writeStreamHeader() if the file is newly created. It certainly should not call writeObject() at any time.
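In code, that usually looks like the following minimal sketch (the class name AppendingObjectOutputStream is mine; a widely used variant has writeStreamHeader() call reset() rather than literally nothing, so the reader sees a reset marker instead of a second header). It reuses the Data class from the question:
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

class AppendingObjectOutputStream extends ObjectOutputStream {
    public AppendingObjectOutputStream(OutputStream out) throws IOException {
        super(out);
    }

    @Override
    protected void writeStreamHeader() throws IOException {
        // Appending: do not write a second stream header; emit a reset
        // marker instead so ObjectInputStream's state stays consistent.
        reset();
    }
}

class AppendDemo {
    public static void main(String[] args) throws IOException {
        File file = new File("file.cer");
        boolean append = file.exists();
        try (FileOutputStream fos = new FileOutputStream(file, append);
             ObjectOutputStream out = append
                     ? new AppendingObjectOutputStream(fos)
                     : new ObjectOutputStream(fos)) { // new file: normal header
            out.writeObject(new Data("99:59:59:99", "Maxima"));
        }
    }
}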
As in several research papers [1], [2], [3], it seems appropriate to use the Principal Component Analysis (PCA) algorithm for feature extraction. To attempt this, I have written a traffic-sniffer program in Java using the jpcap library (snippet below), which is able to extract various features from live network traffic.
But I have not found any way to actually implement the PCA step, as existing approaches (snippets below) all seem to operate on non-live, pre-organised datasets such as ARFF files.
I have seen a number of research papers [4], [5], [6] that reference the use of the PCA algorithm in combination with the jpcap library; that said, I have not been able to find an explanation of how this was accomplished.
To be clear:
My question is: how can I implement PCA (using any approach) to extract features from live packets (obtained via the jpcap library referenced above) in Java, as others have accomplished (some examples referenced above)?
Sniffer Example (using Jpcap Library)
Sniffer.java
import jpcap.JpcapCaptor;
import jpcap.NetworkInterface;
import jpcap.packet.Packet;
import jpcap.*;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.xml.bind.DatatypeConverter;
import java.util.List;
public class Sniffer {
public static NetworkInterface[] NETWORK_INTERFACES;
public static JpcapCaptor CAP;
static jpcap_thread THREAD; // jpcap_thread is a helper thread class not shown in this snippet
public static int INDEX = 0;
public static int flag = 0;
public static int COUNTER = 0;
static boolean CaptureState = false;
public static int No = 0;
static JpcapWriter writer = null;
static List<Packet> packetList = new ArrayList<>();
public static ArrayList<Object[]> packetInfo = new ArrayList<>();
public static void CapturePackets() {
THREAD = new jpcap_thread() {
public Object construct() {
try {
CAP = JpcapCaptor.openDevice(NETWORK_INTERFACES[INDEX], 65535, false, 20);
// writer = JpcapWriter.openDumpFile(CAP, "captureddata");
if ("UDP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("udp", true);
} else if ("TCP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("tcp", true);
} else if ("ICMP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("icmp", true);
}
while (CaptureState) {
CAP.processPacket(1, new PacketContents());
packetList.add(CAP.getPacket());
}
CAP.close();
} catch (Exception e) {
System.out.print(e);
}
return 0;
}
public void finished() {
this.interrupt();
}
};
THREAD.start();
}
public static void main(String[] args) {
CaptureState = true;
CapturePackets();
}
public void saveToFile() {
THREAD = new jpcap_thread() {
public Object construct() {
writer = null;
try {
CAP = JpcapCaptor.openDevice(NETWORK_INTERFACES[INDEX], 65535, false, 20);
writer = JpcapWriter.openDumpFile(CAP, "captured_data.txt");
} catch (IOException ex) {
Logger.getLogger(Sniffer.class.getName()).log(Level.SEVERE, null, ex);
}
for (int i = 0; i < No; i++) {
writer.writePacket(packetList.get(i));
}
return 0;
}
public void finished() {
this.interrupt();
}
};
THREAD.start();
}
}
PacketContents.java
import jpcap.PacketReceiver;
import jpcap.packet.Packet;
import javax.swing.table.DefaultTableModel;
import jpcap.packet.TCPPacket;
import jpcap.packet.UDPPacket;
import java.util.ArrayList;
import java.util.List;
import jpcap.packet.ICMPPacket;
public class PacketContents implements PacketReceiver {
public static TCPPacket tcp;
public static UDPPacket udp;
public static ICMPPacket icmp;
@Override
public void receivePacket(Packet packet) {
if (packet instanceof TCPPacket) {
tcp = (TCPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, tcp.length, tcp.src_ip, tcp.dst_ip, "TCP", tcp.src_port,
tcp.dst_port, tcp.ack, tcp.ack_num, tcp.data, tcp.sequence, tcp.offset, tcp.header });
Sniffer.No++;
} else if (packet instanceof UDPPacket) {
udp = (UDPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, udp.length, udp.src_ip, udp.dst_ip, "UDP", udp.src_port,
udp.dst_port, udp.data, udp.offset, udp.header });
Sniffer.No++;
} else if (packet instanceof ICMPPacket) {
icmp = (ICMPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, icmp.length, icmp.src_ip, icmp.dst_ip, "ICMP",
icmp.checksum, icmp.header, icmp.offset, icmp.orig_timestamp, icmp.recv_timestamp,
icmp.trans_timestamp, icmp.data });
Sniffer.No++;
}
}
}
}
Using the Weka Library to execute the PCA algorithm on an ARFF file
WekaPCA.java
package project;
import weka.core.Instances;
import weka.core.converters.ArffLoader;
import weka.core.converters.ConverterUtils;
import weka.core.converters.TextDirectoryLoader;
import java.io.File;
import org.math.plot.FrameView;
import org.math.plot.Plot2DPanel;
import org.math.plot.PlotPanel;
import org.math.plot.plots.ScatterPlot;
import weka.attributeSelection.PrincipalComponents;
import weka.attributeSelection.Ranker;
public class WekaPCA {
public static void main(String[] args) {
try {
// Load data
String InputFilename = "kdd99.arff";
ArffLoader loader = new ArffLoader();
loader.setSource(new File(InputFilename));
Instances data = loader.getDataSet();
// Perform PCA
PrincipalComponents pca = new PrincipalComponents();
pca.setVarianceCovered(1.0);
pca.setTransformBackToOriginal(false);
pca.buildEvaluator(data);
// Show transformed data
Instances transformedData = pca.transformedData();
System.out.println(transformedData);
}
catch (Exception e) {
e.printStackTrace();
}
}
}
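One possible bridge between the two snippets, sketched below on the assumption of Weka 3.7+ (where Instances can be built from an ArrayList of Attribute and PrincipalComponents.transformedData takes the dataset as an argument; older 3.6 releases used the no-argument transformedData seen above): convert the numeric fields collected in Sniffer.packetInfo into an in-memory weka.core.Instances object and run the same PrincipalComponents evaluator on it, with no ARFF file involved. The attribute names and feature choice here are placeholders:
import java.util.ArrayList;
import java.util.List;
import weka.attributeSelection.PrincipalComponents;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instances;

public class LivePCA {
    // Builds an in-memory dataset from per-packet numeric features and
    // returns its principal-component representation.
    public static Instances pcaOnLiveFeatures(List<double[]> rows) throws Exception {
        ArrayList<Attribute> attrs = new ArrayList<>();
        attrs.add(new Attribute("length"));   // placeholder feature names
        attrs.add(new Attribute("src_port"));
        attrs.add(new Attribute("dst_port"));
        Instances data = new Instances("live_packets", attrs, rows.size());
        for (double[] row : rows) {
            data.add(new DenseInstance(1.0, row)); // weight 1.0, one packet per row
        }
        PrincipalComponents pca = new PrincipalComponents();
        pca.setVarianceCovered(0.95);          // keep 95% of the variance
        pca.buildEvaluator(data);
        return pca.transformedData(data);
    }
}
The caller would extract the numeric entries (e.g. length and ports) from each Object[] in Sniffer.packetInfo into a double[] before handing them over.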
I would like to convert this code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.page.PageReadStore;
import org.apache.parquet.example.data.simple.SimpleGroup;
import org.apache.parquet.example.data.simple.convert.GroupRecordConverter;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.io.ColumnIOFactory;
import org.apache.parquet.io.MessageColumnIO;
import org.apache.parquet.io.RecordReader;
import org.apache.parquet.schema.MessageType;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
public class ParquetReaderUtils {
public static Parquet getParquetData(String filePath) throws IOException {
List<SimpleGroup> simpleGroups = new ArrayList<>();
ParquetFileReader reader = ParquetFileReader.open(HadoopInputFile.fromPath(new Path(filePath), new Configuration()));
MessageType schema = reader.getFooter().getFileMetaData().getSchema();
//List<Type> fields = schema.getFields();
PageReadStore pages;
while ((pages = reader.readNextRowGroup()) != null) {
long rows = pages.getRowCount();
MessageColumnIO columnIO = new ColumnIOFactory().getColumnIO(schema);
RecordReader recordReader = columnIO.getRecordReader(pages, new GroupRecordConverter(schema));
for (int i = 0; i < rows; i++) {
SimpleGroup simpleGroup = (SimpleGroup) recordReader.read();
simpleGroups.add(simpleGroup);
}
}
reader.close();
return new Parquet(simpleGroups, schema);
}
}
(which is from https://www.arm64.ca/post/reading-parquet-files-java/)
to take a ByteArrayOutputStream parameter instead of a filePath.
Is this possible? I don't see a ParquetStreamReader in org.apache.parquet.hadoop.
Any help is appreciated. I am trying to write a test app for Parquet data coming from Kafka, and writing each of many messages out to a file is rather slow.
Without deeper testing, I would try this class (provided the content of the output stream is Parquet-compatible). I added a streamId to make identifying the processed byte array easier (ParquetFileReader prints the instance's toString() if something goes wrong).
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.parquet.io.DelegatingSeekableInputStream;
import org.apache.parquet.io.InputFile;
import org.apache.parquet.io.SeekableInputStream;
public class ParquetStream implements InputFile {
private final String streamId;
private final byte[] data;
private static class SeekableByteArrayInputStream extends ByteArrayInputStream {
public SeekableByteArrayInputStream(byte[] buf) {
super(buf);
}
public void setPos(int pos) {
this.pos = pos;
}
public int getPos() {
return this.pos;
}
}
public ParquetStream(String streamId, ByteArrayOutputStream stream) {
this.streamId = streamId;
this.data = stream.toByteArray();
}
@Override
public long getLength() throws IOException {
return this.data.length;
}
@Override
public SeekableInputStream newStream() throws IOException {
return new DelegatingSeekableInputStream(new SeekableByteArrayInputStream(this.data)) {
@Override
public void seek(long newPos) throws IOException {
((SeekableByteArrayInputStream) this.getStream()).setPos((int) newPos);
}
@Override
public long getPos() throws IOException {
return ((SeekableByteArrayInputStream) this.getStream()).getPos();
}
};
}
@Override
public String toString() {
return "ParquetStream[" + streamId + "]";
}
}
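With that class in place, the reader from the question can be opened straight from memory, for example as in this sketch (the streamId "kafka-test" is arbitrary, and ParquetFileReader.open(InputFile) requires a reasonably recent parquet-hadoop):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.schema.MessageType;

public class ParquetStreamExample {
    // Reads the schema of Parquet-encoded bytes held in memory.
    public static MessageType readSchema(ByteArrayOutputStream baos) throws IOException {
        try (ParquetFileReader reader =
                ParquetFileReader.open(new ParquetStream("kafka-test", baos))) {
            return reader.getFooter().getFileMetaData().getSchema();
        }
    }
}
The row-group loop from getParquetData can then be reused unchanged against this reader.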
This is my first post, so sorry if I mess something up or am not clear enough. I have been looking through online forums for several hours and have spent more time trying to figure it out myself.
I am reading information from a file and I need a loop that creates an ArrayList every time it goes through.
static ArrayList<String> fileToArrayList(String infoFromFile)
{
ArrayList<String> smallerArray = new ArrayList<String>();
//This ArrayList needs to be different every time so that I can add them
//all to the same ArrayList
if (infoFromFile != null)
{
String[] splitData = infoFromFile.split(":");
for (int i = 0; i < splitData.length; i++)
{
if (splitData[i] != null && splitData[i].length() != 0)
{
smallerArray.add(splitData[i].trim());
}
}
}
return smallerArray;
}
The reason I need to do this is that I am creating an app for a school project that reads questions from a delimited text file. I have a loop earlier that reads one line at a time from the text. I will insert that string into this program.
How do I make the ArrayList smallerArray a separate ArrayList every time it goes through this method?
I need this so I can have an ArrayList of each of these ArrayLists.
Here is sample code for what you intend to do:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;
public class SimpleFileReader {
private static final String DELIMITER = ":";
private String filename = null;
public SimpleFileReader() {
super();
}
public SimpleFileReader(String filename) {
super();
setFilename(filename);
}
public String getFilename() {
return filename;
}
public void setFilename(String filename) {
this.filename = filename;
}
public List<List<String>> getRowSet() throws IOException {
List<List<String>> rows = new ArrayList<>();
try (Stream<String> stream = Files.lines(Paths.get(filename))) {
stream.forEach(row -> rows.add(Arrays.asList(row.split(DELIMITER))));
}
return rows;
}
}
And here is the JUnit test for the above code:
import static org.junit.jupiter.api.Assertions.fail;
import java.io.IOException;
import java.util.List;
import org.junit.jupiter.api.Test;
public class SimpleFileReaderTest {
public SimpleFileReaderTest() {
super();
}
@Test
public void testFileReader() {
try {
SimpleFileReader reader = new SimpleFileReader("c:/temp/sample-input.txt");
List<List<String>> rows = reader.getRowSet();
int expectedValue = 3; // number of actual lines in the sample file
int actualValue = rows.size(); // number of rows in the list
if (actualValue != expectedValue) {
fail(String.format("Expected value for the row count is %d, whereas obtained value is %d", expectedValue, actualValue));
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
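Alternatively, if you want to keep the fileToArrayList method from your question: each call already creates a fresh ArrayList, so simply collecting the returned lists gives you the list-of-lists you describe. A minimal sketch (the file-reading loop here is my stand-in for your existing reading code, and a compact equivalent of your method is included so the sketch is self-contained):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;

public class QuestionCollector {
    public static ArrayList<ArrayList<String>> readAll(String filename) throws IOException {
        ArrayList<ArrayList<String>> all = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(filename))) {
            String line;
            while ((line = reader.readLine()) != null) {
                all.add(fileToArrayList(line)); // a brand-new list per call
            }
        }
        return all;
    }

    // Compact equivalent of the question's method.
    static ArrayList<String> fileToArrayList(String infoFromFile) {
        ArrayList<String> smallerArray = new ArrayList<>();
        if (infoFromFile != null) {
            for (String part : infoFromFile.split(":")) {
                if (part != null && part.length() != 0) {
                    smallerArray.add(part.trim());
                }
            }
        }
        return smallerArray;
    }
}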
My question is: how can I send one line of a file to another agent every 2 seconds using a TickerBehaviour?
More specifically, in the first iteration the agent sends the first line; in the second, the second line, and so on.
My code below:
package pack1;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.LineNumberReader;
import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.TickerBehaviour;
import jade.lang.acl.ACLMessage;
import jade.wrapper.ControllerException;
public class Agent2 extends Agent {
private static final long serialVersionUID = 1L;
int nombre_ligne = 0;
BufferedReader lecteurAvecBuffer = null;
@Override
protected void setup() {
FileInputStream fis;
try {
fis = new FileInputStream("/home/hduser/Bureau/word.txt");
@SuppressWarnings("resource")
LineNumberReader l = new LineNumberReader(new BufferedReader(
new InputStreamReader(fis)));
while ((l.readLine()) != null) {
nombre_ligne = l.getLineNumber();
}
lecteurAvecBuffer = new BufferedReader(new FileReader(
"/home/hduser/Bureau/esclave1/abc.txt"));
int a = 1;
while (a <= ((int) nombre_ligne) / 3) {
a++;
String word = lecteurAvecBuffer.readLine();
addBehaviour(new TickerBehaviour(this, 2000) {
private static final long serialVersionUID = 1L;
@Override
protected void onTick() {
ACLMessage message = new ACLMessage(ACLMessage.INFORM);
message.addReceiver(new AID("agent1", AID.ISLOCALNAME));
message.setContent(word);
send(message);
}
});
a++;
}
lecteurAvecBuffer.close();
} catch (FileNotFoundException exc) {
System.out.println("Erreur d'ouverture");
} catch (IOException e) {
e.printStackTrace();
}
}
protected void takeDown() {
System.out.println("Destruction de l'agent");
}
@Override
protected void afterMove() {
try {
System.out.println(" La Destination : "
+ this.getContainerController().getContainerName());
} catch (ControllerException e) {
e.printStackTrace();
}
}
}
You don't say anything about what the problem with your code is. I guess you get at least a compiler error about:
message.setContent(word);
As you access a local variable from an inner class, you must declare the variable final in the enclosing method, like:
final String word = lecteurAvecBuffer.readLine();
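Beyond that compile error, note that the loop in setup() creates many TickerBehaviours that each resend their own line forever. For one line per tick, a single TickerBehaviour that holds the reader and advances it on each tick seems closer to what you describe. A minimal sketch (untested, my own; it would replace the while loop in setup() and relies on the imports already present in your file):
addBehaviour(new TickerBehaviour(this, 2000) {
    private static final long serialVersionUID = 1L;
    private BufferedReader reader;

    @Override
    public void onStart() {
        super.onStart();
        try {
            reader = new BufferedReader(new FileReader("/home/hduser/Bureau/esclave1/abc.txt"));
        } catch (FileNotFoundException e) {
            e.printStackTrace();
            stop();
        }
    }

    @Override
    protected void onTick() {
        try {
            String line = reader.readLine();
            if (line == null) { // end of file: stop ticking
                reader.close();
                stop();
                return;
            }
            ACLMessage message = new ACLMessage(ACLMessage.INFORM);
            message.addReceiver(new AID("agent1", AID.ISLOCALNAME));
            message.setContent(line);
            myAgent.send(message);
        } catch (IOException e) {
            e.printStackTrace();
            stop();
        }
    }
});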
This is the main class from which the query is fired:
package extractKeyword;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import org.apache.commons.httpclient.Header;
import org.apache.commons.httpclient.methods.GetMethod;
import org.dbpedia.spotlight.exceptions.AnnotationException;
import org.dbpedia.spotlight.model.DBpediaResource;
import org.dbpedia.spotlight.model.Text;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import java.io.File;
import java.util.LinkedList;
import java.util.List;
public class db extends AnnotationClient {
//private final static String API_URL = "http://jodaiber.dyndns.org:2222/";
private static String API_URL = "http://spotlight.dbpedia.org/";
private static double CONFIDENCE = 0.0;
private static int SUPPORT = 0;
// private static String powered_by ="non";
// private static String spotter ="CoOccurrenceBasedSelector";//"LingPipeSpotter"=Annotate all spots
//AtLeastOneNounSelector"=No verbs and adjs.
//"CoOccurrenceBasedSelector" =No 'common words'
//"NESpotter"=Only Per.,Org.,Loc.
//private static String disambiguator ="Default";//Default ;Occurrences=Occurrence-centric;Document=Document-centric
//private static String showScores ="yes";
@SuppressWarnings("static-access")
public void configuration(double CONFIDENCE, int SUPPORT)
//, String powered_by,String spotter,String disambiguator,String showScores)
{
this.CONFIDENCE=CONFIDENCE;
this.SUPPORT=SUPPORT;
// this.powered_by=powered_by;
//this.spotter=spotter;
//this.disambiguator=disambiguator;
//showScores=showScores;
}
public List<DBpediaResource> extract(Text text) throws AnnotationException {
// LOG.info("Querying API.");
String spotlightResponse;
try {
String Query=API_URL + "rest/annotate/?" +
"confidence=" + CONFIDENCE
+ "&support=" + SUPPORT
// + "&spotter=" + spotter
// + "&disambiguator=" + disambiguator
// + "&showScores=" + showScores
// + "&powered_by=" + powered_by
+ "&text=" + URLEncoder.encode(text.text(), "utf-8");
//LOG.info(Query);
GetMethod getMethod = new GetMethod(Query);
getMethod.addRequestHeader(new Header("Accept", "application/json"));
spotlightResponse = request(getMethod);
} catch (UnsupportedEncodingException e) {
throw new AnnotationException("Could not encode text.", e);
}
assert spotlightResponse != null;
JSONObject resultJSON = null;
JSONArray entities = null;
try {
resultJSON = new JSONObject(spotlightResponse);
entities = resultJSON.getJSONArray("Resources");
} catch (JSONException e) {
//throw new AnnotationException("Received invalid response from DBpedia Spotlight API.");
}
LinkedList<DBpediaResource> resources = new LinkedList<DBpediaResource>();
if(entities!=null)
for(int i = 0; i < entities.length(); i++) {
try {
JSONObject entity = entities.getJSONObject(i);
resources.add(
new DBpediaResource(entity.getString("#URI"),
Integer.parseInt(entity.getString("#support"))));
} catch (JSONException e) {
//((Object) LOG).error("JSON exception "+e);
}
}
return resources;
}
}
The extended class
package extractKeyword;
import org.apache.commons.httpclient.DefaultHttpMethodRetryHandler;
import org.apache.commons.httpclient.Header;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.HttpMethodBase;
import org.apache.commons.httpclient.HttpStatus;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.params.HttpMethodParams;
import org.dbpedia.spotlight.exceptions.AnnotationException;
import org.dbpedia.spotlight.model.DBpediaResource;
import org.dbpedia.spotlight.model.Text;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.logging.Logger;
/**
* @author pablomendes
*/
public abstract class AnnotationClient {
//public Logger LOG = Logger.getLogger(this.getClass());
private List<String> RES = new ArrayList<String>();
// Create an instance of HttpClient.
private static HttpClient client = new HttpClient();
public List<String> getResu(){
return RES;
}
public String request(GetMethod getMethod) throws AnnotationException {
String response = null;
// Provide a custom retry handler if necessary
getMethod.getParams().setParameter(HttpMethodParams.RETRY_HANDLER,
new DefaultHttpMethodRetryHandler(3, false));
try {
// Execute the method.
int statusCode = client.executeMethod(getMethod);
if (statusCode != HttpStatus.SC_OK) {
// LOG.error("Method failed: " + ((HttpMethodBase) method).getStatusLine());
}
// Read the response body.
byte[] responseBody = ((HttpMethodBase) getMethod).getResponseBody(); //TODO Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
// Deal with the response.
// Use caution: ensure correct character encoding and is not binary data
response = new String(responseBody);
} catch (HttpException e) {
// LOG.error("Fatal protocol violation: " + e.getMessage());
throw new AnnotationException("Protocol error executing HTTP request.",e);
} catch (IOException e) {
//((Object) LOG).error("Fatal transport error: " + e.getMessage());
//((Object) LOG).error(((HttpMethodBase) method).getQueryString());
throw new AnnotationException("Transport error executing HTTP request.",e);
} finally {
// Release the connection.
((HttpMethodBase) getMethod).releaseConnection();
}
return response;
}
protected static String readFileAsString(String filePath) throws java.io.IOException{
return readFileAsString(new File(filePath));
}
protected static String readFileAsString(File file) throws IOException {
byte[] buffer = new byte[(int) file.length()];
@SuppressWarnings("resource")
BufferedInputStream f = new BufferedInputStream(new FileInputStream(file));
f.read(buffer);
return new String(buffer);
}
static abstract class LineParser {
public abstract String parse(String s) throws ParseException;
static class ManualDatasetLineParser extends LineParser {
public String parse(String s) throws ParseException {
return s.trim();
}
}
static class OccTSVLineParser extends LineParser {
public String parse(String s) throws ParseException {
String result = s;
try {
result = s.trim().split("\t")[3];
} catch (ArrayIndexOutOfBoundsException e) {
throw new ParseException(e.getMessage(), 3);
}
return result;
}
}
}
public void saveExtractedEntitiesSet(String Question, LineParser parser, int restartFrom) throws Exception {
String text = Question;
int i=0;
//int correct =0 ; int error = 0;int sum = 0;
for (String snippet: text.split("\n")) {
String s = parser.parse(snippet);
if (s!= null && !s.equals("")) {
i++;
if (i<restartFrom) continue;
List<DBpediaResource> entities = new ArrayList<DBpediaResource>();
try {
entities = extract(new Text(snippet.replaceAll("\\s+"," ")));
System.out.println(entities.get(0).getFullUri());
} catch (AnnotationException e) {
// error++;
//LOG.error(e);
e.printStackTrace();
}
for (DBpediaResource e: entities) {
RES.add(e.uri());
}
}
}
}
public abstract List<DBpediaResource> extract(Text text) throws AnnotationException;
public void evaluate(String Question) throws Exception {
evaluateManual(Question,0);
}
public void evaluateManual(String Question, int restartFrom) throws Exception {
saveExtractedEntitiesSet(Question,new LineParser.ManualDatasetLineParser(), restartFrom);
}
}
The Main Class
package extractKeyword;
public class StartAnnotation {
public static void main(String[] args) throws Exception {
String question = "What is the winning chances of BJP in New Delhi elections?";
db c = new db();
c.configuration(0.25, 0);
//, 0, "non", "AtLeastOneNounSelector", "Default", "yes");
c.evaluate(question);
System.out.println("resource : "+c.getResu());
}
}
The main problem: when I use DBpedia Spotlight via the Spotlight jar (the code above), I get a different result than the DBpedia Spotlight endpoint (dbpedia-spotlight.github.io/demo/).
Result using the above code:
Text: What is the winning chances of BJP in New Delhi elections?
Confidence level: 0.35
resource : [Election]
Result on the DBpedia Spotlight endpoint (dbpedia-spotlight.github.io/demo/):
Text: What is the winning chances of BJP in New Delhi elections?
Confidence level: 0.35
resource : [Bharatiya_Janata_Party, New_Delhi, Election]
Also, why does Spotlight no longer have support as a parameter?
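One way to investigate the discrepancy is to query the hosted service directly and diff its raw JSON against what your code receives. A minimal sketch using only the JDK (the api.dbpedia-spotlight.org host and its parameters are my assumption of what the current demo uses; verify them before relying on this):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SpotlightDemoQuery {
    public static void main(String[] args) throws Exception {
        String text = "What is the winning chances of BJP in New Delhi elections?";
        // NOTE: host and path are assumptions based on the public demo.
        String query = "https://api.dbpedia-spotlight.org/en/annotate"
                + "?confidence=0.35"
                + "&text=" + URLEncoder.encode(text, "UTF-8");
        HttpURLConnection conn = (HttpURLConnection) new URL(query).openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw JSON for comparison
            }
        }
    }
}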