I am developing a custom interpreter for a domain-specific language. Based on the example given in the Apache Zeppelin documentation (https://zeppelin.incubator.apache.org/docs/latest/development/writingzeppelininterpreter.html), the interpreter works pretty well. Now I want to store some results in a new DataFrame.
I found code to create DataFrames (http://spark.apache.org/docs/latest/sql-programming-guide.html), but I can't use it in my interpreter because I simply can't find a way to access a valid runtime SparkContext (often called "sc") from within my custom interpreter.
I tried the (static) SparkContext.getOrCreate(), but that led to a ClassNotFoundException. Then I added the whole zeppelin-spark-dependencies...jar to my interpreter folder, which solved the class loading issue, but now I am getting a SparkException ("master url must be set...").
Any idea how I could get access to my Notebook's SparkContext from within the custom interpreter? Thanks a lot!
UPDATE
Thanks to Kangrok Lee's comment below, my code now looks as shown below. It runs and seems to create a DataFrame (at least it doesn't throw any exception any more). But I cannot consume the created DataFrame in a subsequent SQL paragraph (the first paragraph uses my "%opl" interpreter, given below, which should create the "result" DataFrame):
%opl
1 2 3
> 1
> 2
> 3
%sql
select * from result
> Table not found: result; line 1 pos 14
So probably there is still something wrong with my way of dealing with the SparkContext. Any ideas? Thanks a lot!
package opl;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.spark.SparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import org.apache.zeppelin.interpreter.Interpreter;
import org.apache.zeppelin.interpreter.InterpreterContext;
import org.apache.zeppelin.interpreter.InterpreterPropertyBuilder;
import org.apache.zeppelin.interpreter.InterpreterResult;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class OplInterpreter2 extends Interpreter {
static {
Interpreter.register("opl","opl",OplInterpreter2.class.getName(),
new InterpreterPropertyBuilder()
.add("spark.master", "local[4]", "spark.master")
.add("spark.app.name", "Opl Interpreter", "spark.app.name")
.add("spark.serializer", "org.apache.spark.serializer.KryoSerializer", "spark.serializer")
.build());
}
private Logger logger = LoggerFactory.getLogger(OplInterpreter2.class);
private void log(Object o) {
if (logger != null)
logger.warn("OplInterpreter2 "+o);
}
public OplInterpreter2(Properties properties) {
super(properties);
log("CONSTRUCTOR");
}
@Override
public void open() {
log("open()");
}
@Override
public void cancel(InterpreterContext arg0) {
log("cancel()");
}
@Override
public void close() {
log("close()");
}
@Override
public List<String> completion(String arg0, int arg1) {
log("completion()");
return new ArrayList<String>();
}
@Override
public FormType getFormType() {
log("getFormType()");
return FormType.SIMPLE;
}
@Override
public int getProgress(InterpreterContext arg0) {
log("getProgress()");
return 100;
}
@Override
public InterpreterResult interpret(String string, InterpreterContext context) {
log("interpret() "+string);
PrintStream oldSys = System.out;
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PrintStream ps = new PrintStream(baos);
System.setOut(ps);
execute(string);
System.out.flush();
System.setOut(oldSys);
return new InterpreterResult(
InterpreterResult.Code.SUCCESS,
InterpreterResult.Type.TEXT,
baos.toString());
} catch (Exception ex) {
System.out.flush();
System.setOut(oldSys);
return new InterpreterResult(
InterpreterResult.Code.ERROR,
InterpreterResult.Type.TEXT,
ex.toString());
}
}
private void execute(String code) throws Exception {
SparkContext sc = SparkContext.getOrCreate();
SQLContext sqlc = SQLContext.getOrCreate(sc);
StructType structType = new StructType().add("value",DataTypes.IntegerType);
ArrayList<Row> list = new ArrayList<Row>();
for (String s : code.trim().split("\\s+")) {
int value = Integer.parseInt(s);
System.out.println(value);
list.add(RowFactory.create(value));
}
DataFrame df = sqlc.createDataFrame(list,structType);
df.registerTempTable("result");
}
}
Finally I found a solution, although I don't think it is a very nice one. In the code below, I am using a function getSparkInterpreter() that I found in org.apache.zeppelin.spark.PySparkInterpreter.java.
This requires that I put my packaged code (jar) into the Spark interpreter's folder rather than into its own interpreter folder, which (according to https://zeppelin.incubator.apache.org/docs/latest/development/writingzeppelininterpreter.html) would be the preferred way. Also, my interpreter does not show up in Zeppelin's interpreter configuration page as an interpreter of its own. But it can be used in a Zeppelin paragraph nevertheless.
And: in the code I can create a DataFrame that is also consumable outside my paragraph -- which is what I wanted to achieve.
package opl;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import org.apache.zeppelin.interpreter.Interpreter;
import org.apache.zeppelin.interpreter.InterpreterContext;
import org.apache.zeppelin.interpreter.InterpreterPropertyBuilder;
import org.apache.zeppelin.interpreter.InterpreterResult;
import org.apache.zeppelin.interpreter.LazyOpenInterpreter;
import org.apache.zeppelin.interpreter.WrappedInterpreter;
import org.apache.zeppelin.spark.SparkInterpreter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class OplInterpreter2 extends Interpreter {
static {
Interpreter.register(
"opl",
"spark",//"opl",
OplInterpreter2.class.getName(),
new InterpreterPropertyBuilder()
.add("sth", "defaultSth", "some thing")
.build());
}
private Logger logger = LoggerFactory.getLogger(OplInterpreter2.class);
private void log(Object o) {
if (logger != null)
logger.warn("OplInterpreter2 "+o);
}
public OplInterpreter2(Properties properties) {
super(properties);
log("CONSTRUCTOR");
}
@Override
public void open() {
log("open()");
}
@Override
public void cancel(InterpreterContext arg0) {
log("cancel()");
}
@Override
public void close() {
log("close()");
}
@Override
public List<String> completion(String arg0, int arg1) {
log("completion()");
return new ArrayList<String>();
}
@Override
public FormType getFormType() {
log("getFormType()");
return FormType.SIMPLE;
}
@Override
public int getProgress(InterpreterContext arg0) {
log("getProgress()");
return 100;
}
@Override
public InterpreterResult interpret(String string, InterpreterContext context) {
log("interpret() "+string);
PrintStream oldSys = System.out;
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PrintStream ps = new PrintStream(baos);
System.setOut(ps);
execute(string);
System.out.flush();
System.setOut(oldSys);
return new InterpreterResult(
InterpreterResult.Code.SUCCESS,
InterpreterResult.Type.TEXT,
baos.toString());
} catch (Exception ex) {
System.out.flush();
System.setOut(oldSys);
return new InterpreterResult(
InterpreterResult.Code.ERROR,
InterpreterResult.Type.TEXT,
ex.toString());
}
}
private void execute(String code) throws Exception {
SparkInterpreter sintp = getSparkInterpreter();
SQLContext sqlc = sintp.getSQLContext();
StructType structType = new StructType().add("value",DataTypes.IntegerType);
ArrayList<Row> list = new ArrayList<Row>();
for (String s : code.trim().split("\\s+")) {
int value = Integer.parseInt(s);
System.out.println(value);
list.add(RowFactory.create(value));
}
DataFrame df = sqlc.createDataFrame(list,structType);
df.registerTempTable("result");
}
private SparkInterpreter getSparkInterpreter() {
LazyOpenInterpreter lazy = null;
SparkInterpreter spark = null;
Interpreter p = getInterpreterInTheSameSessionByClassName(SparkInterpreter.class.getName());
while (p instanceof WrappedInterpreter) {
if (p instanceof LazyOpenInterpreter) {
lazy = (LazyOpenInterpreter) p;
}
p = ((WrappedInterpreter) p).getInnerInterpreter();
}
spark = (SparkInterpreter) p;
if (lazy != null) {
lazy.open();
}
return spark;
}
}
I think you should configure the Spark cluster with settings such as the ones below.
spark.master = "local[4]"
spark.app.name = "My Spark App"
spark.serializer = "org.apache.spark.serializer.KryoSerializer"
Using SparkContext.getOrCreate() looks good to me.
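For illustration, a minimal sketch of passing those settings explicitly to getOrCreate() (assuming the Spark 1.x Java API; the class name here is made up):
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;

public class SparkContextSketch {
    // Sets the master and app name explicitly before getOrCreate(), which
    // avoids the "master url must be set" SparkException.
    public static SparkContext getOrCreateContext() {
        SparkConf conf = new SparkConf()
            .setMaster("local[4]")
            .setAppName("Opl Interpreter")
            .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
        return SparkContext.getOrCreate(conf);
    }
}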
Thanks,
Kangrok Lee
Related
I have a program which does the following:
Stores a file name in Main method
Passes that file to the below method(StreamParser)from Main
Method StreamParser reads that file as Stream
StreamParser should return Stream
In main method when I call forEach on purchaseEventStream it gives an error in line
purchaseEventStream.forEach(purchaseEvent -> {
Exception in thread "main" java.lang.IllegalStateException: source already consumed or
closed
at java.base/java.util.stream.AbstractPipeline.sourceSpliterator(AbstractPipeline.java:409)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
at com.cognitree.internship.streamprocessing.Main.main(Main.java:22)
StreamParser Class
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;
public class StreamParser {
public Stream<PurchaseEvent> parser(String fileName) throws IOException {
Stream<PurchaseEvent> purchaseEventStream;
try (Stream<String> lines = Files.lines(Paths.get(fileName))) {
purchaseEventStream= lines.map(line -> {
String[] fields = line.split(",");
PurchaseEvent finalPurchaseEvent = new PurchaseEvent();
finalPurchaseEvent.setSessionId(fields[0]);
finalPurchaseEvent.setTimeStamp(fields[1]);
finalPurchaseEvent.setItemId(fields[2]);
finalPurchaseEvent.setPrice(fields[3]);
finalPurchaseEvent.setQuantity(fields[4]);
return finalPurchaseEvent;
});
return purchaseEventStream;
}
}
}
Main Class
import java.io.*;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;
public class Main {
public static void main(String[] args) throws IOException {
OutputStreamWriter outputStream = new OutputStreamWriter(new
FileOutputStream("output1.txt"));
String file = "/Users/mohit/intern-mohit/yoochoose-buys.dat";
StreamParser streamParser = new StreamParser();
List<ReportGenerator> reports = new ArrayList<>();
PurchaseEventCount purchaseEventCount = new PurchaseEventCount();
QuantityPerSession quantityPerSession = new QuantityPerSession();
SessionCount sessionCount = new SessionCount();
reports.add(purchaseEventCount);
reports.add(sessionCount);
reports.add(quantityPerSession);
Stream<PurchaseEvent> purchaseEventStream = streamParser.parser(file);
purchaseEventStream.forEach(purchaseEvent -> {
for (ReportGenerator report : reports) {
report.generateReports(purchaseEvent);
}
});
reports.forEach(report -> {
try {
report.printReports(outputStream);
} catch (IOException e) {
e.printStackTrace();
}
});
}
}
Why am I getting this error?
A stream in Java is not a collection; it does not store data. In your parser() method, the try-with-resources block closes the Files.lines source as soon as the method returns, so the stream you hand back can no longer be consumed. You should create and return a collection from method parser() in class StreamParser and then, if needed, create a new stream from the returned collection.
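Here is a toy sketch (unrelated to your data) that reproduces the same failure: the source is closed by try-with-resources before the terminal operation runs.
import java.util.stream.Stream;

public class ClosedStreamDemo {
    // Returns a stream whose source has already been closed by the
    // try-with-resources block, just like parser() does with Files.lines.
    static Stream<String> leak() {
        try (Stream<String> source = Stream.of("a", "b", "c")) {
            return source.map(String::toUpperCase);
        } // source is closed here, before the caller ever consumes the stream
    }

    public static void main(String[] args) {
        // Throws IllegalStateException: source already consumed or closed
        leak().forEach(System.out::println);
    }
}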
I rewrote your StreamParser class to return a List.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
public class StreamParser {
public List<PurchaseEvent> parser(String fileName) throws IOException {
List<PurchaseEvent> purchaseEventStream = Files.lines(Paths.get(fileName))
.map(line -> {
String[] fields = line.split(",");
PurchaseEvent finalPurchaseEvent = new PurchaseEvent();
finalPurchaseEvent.setSessionId(fields[0]);
finalPurchaseEvent.setTimeStamp(fields[1]);
finalPurchaseEvent.setItemId(fields[2]);
finalPurchaseEvent.setPrice(fields[3]);
finalPurchaseEvent.setQuantity(fields[4]);
return finalPurchaseEvent;
})
.collect(Collectors.toList());
return purchaseEventStream;
}
}
And I changed your Main class accordingly.
import java.io.*;
import java.util.ArrayList;
import java.util.List;
public class Main {
public static void main(String[] args) throws IOException {
OutputStreamWriter outputStream = new OutputStreamWriter(new FileOutputStream("output1.txt"));
String file = "/Users/mohit/intern-mohit/yoochoose-buys.dat";
StreamParser streamParser = new StreamParser();
List<ReportGenerator> reports = new ArrayList<>();
PurchaseEventCount purchaseEventCount = new PurchaseEventCount();
QuantityPerSession quantityPerSession = new QuantityPerSession();
SessionCount sessionCount = new SessionCount();
reports.add(purchaseEventCount);
reports.add(sessionCount);
reports.add(quantityPerSession);
List<PurchaseEvent> purchaseEventStream = streamParser.parser(file);
purchaseEventStream.forEach(purchaseEvent -> {
for (ReportGenerator report : reports) {
report.generateReports(purchaseEvent);
}
});
reports.forEach(report -> {
try {
report.printReports(outputStream);
}
catch (IOException e) {
e.printStackTrace();
}
});
}
}
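Alternatively, if you prefer to keep parser() returning a lazy Stream (a variation I am only sketching, not what I used above), remove the try-with-resources from parser() and let the caller own and close the stream:
// Sketch: parser() would simply return Files.lines(Paths.get(fileName)).map(...)
// without closing it, and the caller closes it after the terminal operation.
// (Requires the java.util.stream.Stream import and the Stream-returning parser().)
try (Stream<PurchaseEvent> purchaseEventStream = streamParser.parser(file)) {
    purchaseEventStream.forEach(purchaseEvent -> {
        for (ReportGenerator report : reports) {
            report.generateReports(purchaseEvent);
        }
    });
}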
As suggested in several research papers [1], [2], [3], it seems appropriate to use the Principal Component Analysis (PCA) algorithm for feature extraction. To attempt this, I have written a traffic sniffer program in Java using the jpcap library (snippet below), which is able to extract various features from live network traffic.
But I have not found any way to actually implement the PCA step, as existing approaches (snippets below) all seem to operate on non-live, pre-organised datasets such as ARFF files.
I have seen a number of research papers [4], [5], [6] that reference the use of the PCA algorithm in combination with the jpcap library; that said, I have not been able to find an explanation of how this was accomplished.
To be clear:
My question is: how can I implement PCA (using any approach) in Java to extract features from live packets (captured via the jpcap library referenced above), as others have accomplished (some examples are referenced above)?
Sniffer Example (using Jpcap Library)
Sniffer.java
import jpcap.JpcapCaptor;
import jpcap.NetworkInterface;
import jpcap.packet.Packet;
import jpcap.*;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.xml.bind.DatatypeConverter;
import java.util.List;
public class Sniffer {
public static NetworkInterface[] NETWORK_INTERFACES;
public static JpcapCaptor CAP;
jpcap_thread THREAD;
public static int INDEX = 0;
public static int flag = 0;
public static int COUNTER = 0;
static boolean CaptureState = false;
public static int No = 0;
JpcapWriter writer = null;
List<Packet> packetList = new ArrayList<>();
public static ArrayList<Object[]> packetInfo = new ArrayList<>();
public static void CapturePackets() {
THREAD = new jpcap_thread() {
public Object construct() {
try {
CAP = JpcapCaptor.openDevice(NETWORK_INTERFACES[INDEX], 65535, false, 20);
// writer = JpcapWriter.openDumpFile(CAP, "captureddata");
if ("UDP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("udp", true);
} else if ("TCP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("tcp", true);
} else if ("ICMP".equals(filter_options.getSelectedItem().toString())) {
CAP.setFilter("icmp", true);
}
while (CaptureState) {
CAP.processPacket(1, new PacketContents());
packetList.add(CAP.getPacket());
}
CAP.close();
} catch (Exception e) {
System.out.print(e);
}
return 0;
}
public void finished() {
this.interrupt();
}
};
THREAD.start();
}
public static void main(String[] args) {
CaptureState = true;
CapturePackets();
}
public void saveToFile() {
THREAD = new jpcap_thread() {
public Object construct() {
writer = null;
try {
CAP = JpcapCaptor.openDevice(NETWORK_INTERFACES[INDEX], 65535, false, 20);
writer = JpcapWriter.openDumpFile(CAP, "captured_data.txt");
} catch (IOException ex) {
Logger.getLogger(Sniffer.class.getName()).log(Level.SEVERE, null, ex);
}
for (int i = 0; i < No; i++) {
writer.writePacket(packetList.get(i));
}
return 0;
}
public void finished() {
this.interrupt();
}
};
THREAD.start();
}
}
PacketContents.java
import jpcap.PacketReceiver;
import jpcap.packet.Packet;
import javax.swing.table.DefaultTableModel;
import jpcap.packet.TCPPacket;
import jpcap.packet.UDPPacket;
import java.util.ArrayList;
import java.util.List;
import jpcap.packet.ICMPPacket;
public class PacketContents implements PacketReceiver {
public static TCPPacket tcp;
public static UDPPacket udp;
public static ICMPPacket icmp;
public void recievePacket(Packet packet) {
}
@Override
public void receivePacket(Packet packet) {
if (packet instanceof TCPPacket) {
tcp = (TCPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, tcp.length, tcp.src_ip, tcp.dst_ip, "TCP", tcp.src_port,
tcp.dst_port, tcp.ack, tcp.ack_num, tcp.data, tcp.sequence, tcp.offset, tcp.header });
Sniffer.No++;
} else if (packet instanceof UDPPacket) {
udp = (UDPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, udp.length, udp.src_ip, udp.dst_ip, "UDP", udp.src_port,
udp.dst_port, udp.data, udp.offset, udp.header });
Sniffer.No++;
} else if (packet instanceof ICMPPacket) {
icmp = (ICMPPacket) packet;
Sniffer.packetInfo.add(new Object[] { Sniffer.No, icmp.length, icmp.src_ip, icmp.dst_ip, "ICMP",
icmp.checksum, icmp.header, icmp.offset, icmp.orig_timestamp, icmp.recv_timestamp,
icmp.trans_timestamp, icmp.data });
Sniffer.No++;
}
}
}
Using the Weka Library to execute the PCA algorithm on an ARFF file
WekaPCA.java
package project;
import weka.core.Instances;
import weka.core.converters.ArffLoader;
import weka.core.converters.ConverterUtils;
import weka.core.converters.TextDirectoryLoader;
import java.io.File;
import org.math.plot.FrameView;
import org.math.plot.Plot2DPanel;
import org.math.plot.PlotPanel;
import org.math.plot.plots.ScatterPlot;
import weka.attributeSelection.PrincipalComponents;
import weka.attributeSelection.Ranker;
public class WekaPCA {
public static void main(String[] args) {
try {
// Load data
String InputFilename = "kdd99.arff";
ArffLoader loader = new ArffLoader();
loader.setSource(new File(InputFilename));
Instances data = loader.getDataSet();
// Perform PCA
PrincipalComponents pca = new PrincipalComponents();
pca.setVarianceCovered(1.0);
pca.setTransformBackToOriginal(false);
pca.buildEvaluator(data);
// Show transformed data
Instances transformedData = pca.transformedData();
System.out.println(transformedData);
}
catch (Exception e) {
e.printStackTrace();
}
}
}
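Presumably the captured numeric fields could be fed straight into Weka as in-memory Instances instead of an ARFF file, though I have not verified this. A rough sketch of what I have in mind (assuming Weka 3.7+; the class name LivePcaSketch and the indices into packetInfo are placeholders of mine):
import java.util.ArrayList;
import weka.attributeSelection.PrincipalComponents;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instances;

public class LivePcaSketch {
    // Builds an in-memory Instances object from numeric packet features
    // collected by the sniffer and runs PCA on it.
    public static Instances runPca(ArrayList<Object[]> packetInfo) throws Exception {
        ArrayList<Attribute> attrs = new ArrayList<>();
        attrs.add(new Attribute("length"));
        attrs.add(new Attribute("src_port"));
        attrs.add(new Attribute("dst_port"));
        Instances data = new Instances("live_packets", attrs, packetInfo.size());
        for (Object[] row : packetInfo) {
            // The indices below are hypothetical; they must match the order
            // in which PacketContents fills each Object[].
            double[] vals = {
                ((Number) row[1]).doubleValue(),
                ((Number) row[5]).doubleValue(),
                ((Number) row[6]).doubleValue()
            };
            data.add(new DenseInstance(1.0, vals));
        }
        PrincipalComponents pca = new PrincipalComponents();
        pca.setVarianceCovered(0.95);
        pca.buildEvaluator(data);
        return pca.transformedData(data);
    }
}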
So I'm relatively new to Java and I'm trying to use a method from a different class inside my main.
The method I'm calling doesn't contain any data initially but pulls the data from a text doc.
I've included the code that calls the other class method that loads the data from the file. It still doesn't work, so where is my mistake?
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;
public class FinalRobert {
public static void main(String[] args) {
//output of animalList class here
}
}
Here is the class I'm trying to pull from:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.io.IOException;
public class animalList {
public void animalDetails () {
int i = 0;
String animalInfo = "C:/Users/Robert/Documents/animals.txt";
String animalHabitat = "C:/Users/Robert/Documents/habitats.txt";
try {
File animalFile = new File(animalInfo);
FileReader animalReader = new FileReader(animalFile);
BufferedReader animalList = new BufferedReader (animalReader);
StringBuilder animalDetailList = new StringBuilder();
String line;
while ((line = animalList.readLine()) != null) {
for (i = 0; i <4 ; i++) {
System.out.println(line);
animalList.readLine();
}
}
animalReader.close();
System.out.println(animalDetailList.toString());
}
catch (IOException e) {
}
}
}
So I want to have the output of the animalList class in my main, but I don't know how to bring it over because I'm not necessarily bring over variable, but a process. The full thing should bring the first line and four past it (so a total of the first five lines in the doc). Hopefully that makes things easier to see my problem.
This is an MCVE of AnimalList:
public class AnimalList {//use java naming convention
public void animalDetails () {
//mcve should be runnable. The problem you ask help with is not
//reading from file, so remove file reading functionality to make it mcve
StringBuilder animalDetailList = new StringBuilder();
animalDetailList.append("Family: Cats").append("\n")
.append("Type : Panther").append("\n")
.append("Weight: 250kg").append("\n")
.append("Color : Pink");
System.out.println(animalDetailList.toString());
}
}
Invoke its method from another class:
public class FinalRobert {
public static void main(String[] args) {
//to invoke animalDetails() method use
AnimalList aList = new AnimalList();
aList.animalDetails();
//if you do not need the aList reference you could use
//new AnimalList().animalDetails();
}
}
Output
Family: Cats
Type : Panther
Weight: 250kg
Color : Pink
I hope this might help you.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;
public class FinalRobert {
public static void main(String[] args) {
animalList list = new animalList();
list.animalDetails();
}
}
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileNotFoundException;
import java.io.IOException;
public class animalList {
public String animalDetails () {
int i = 0;
String output="";
String animalInfo = "C:/Users/Robert/Documents/animals.txt";
String animalHabitat = "C:/Users/Robert/Documents/habitats.txt";
try {
File animalFile = new File(animalInfo);
FileReader animalReader = new FileReader(animalFile);
BufferedReader animalList = new BufferedReader (animalReader);
String line;
while ((line = animalList.readLine()) != null && i < 4) {
System.out.println(line);
output = output + "\n"+ line;
i++;
}
animalReader.close();
System.out.println(output);
}
catch (IOException e) {
}
return output;
}
}
I am new to Cassandra and Spark. I am trying to set up a test for my Spark job, which does the following:
Loads data from table A into DataFrames
Does some filtering, grouping and aggregating on these DataFrames
Loads the result into table B
I want to use an embedded Cassandra server to run the test rather than having it connect to a local instance of the Cassandra database. Has anyone done this before? If so, could someone point me to a good example, please? Thanks in advance for your help!
The code below starts an embedded Cassandra daemon and a local SparkSession, which you can build your test on:
package cassspark.clt;
import java.io.*;
import javafx.application.Application;
import java.util.concurrent.Executors ;
import java.util.concurrent.ExecutorService;
import org.apache.cassandra.service.CassandraDaemon;
import com.datastax.driver.core.exceptions.ConnectionException;
import java.util.Properties;
import org.apache.log4j.PropertyConfigurator;
import org.apache.spark.sql.SparkSession;
public class EmbeddedCassandraDemo extends Application {
private ExecutorService executor = Executors.newSingleThreadExecutor();
private CassandraDaemon cassandraDaemon;
public EmbeddedCassandraDemo() {
}
public static void main(String[] args) {
try {
new EmbeddedCassandraDemo().run();
}
catch(java.lang.InterruptedException e)
{
;
}
}
@Override public void start(javafx.stage.Stage stage) throws Exception
{
stage.show();
}
private void run() throws InterruptedException, ConnectionException {
setProperties();
activateDeamon();
}
private void activateDeamon() {
executor.execute( new Runnable() {
@Override
public void run() {
cassandraDaemon = new CassandraDaemon();
cassandraDaemon.activate();
SparkSession spark = SparkSession .builder().master("local").appName("ASH").getOrCreate();
}
});
}
private void setProperties() {
final String yaml = System.getProperty("user.dir") + File.separator +"conf"+File.separator+"cassandra.yaml";
final String storage = System.getProperty("user.dir") + File.separator +"storage" + File.separator +"data";
System.setProperty("cassandra.config", "file:"+ yaml );
System.setProperty("cassandra.storagedir", storage );
System.setProperty("cassandra-foreground", "true");
String log4JPropertyFile = "./conf/log4j.properties";
Properties p = new Properties();
try {
p.load(new FileInputStream(log4JPropertyFile));
PropertyConfigurator.configure(p);
} catch (IOException e) {
System.err.println("./conf/log4j.properties not found ");
System.exit(1);
;
}
}
}
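Once the daemon is up, the SparkSession can be pointed at the embedded node through the connector's host setting. A rough sketch (assuming the spark-cassandra-connector is on the classpath, the embedded node listens on 127.0.0.1:9042, and the keyspace/table names are placeholders):
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class EmbeddedReadSketch {
    // Reads "table A" from the embedded Cassandra node into a DataFrame.
    public static Dataset<Row> readTableA() {
        SparkSession spark = SparkSession.builder()
            .master("local")
            .appName("ASH")
            .config("spark.cassandra.connection.host", "127.0.0.1")
            .getOrCreate();
        return spark.read()
            .format("org.apache.spark.sql.cassandra")
            .option("keyspace", "test_ks")   // placeholder keyspace
            .option("table", "table_a")      // placeholder table
            .load();
    }
}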
I have written this Java code for appending data to an ObjectOutputStream, but it throws a java.io.StreamCorruptedException. Please help: if this code cannot work properly, then please give an alternative way to append data to an ObjectOutputStream.
import java.awt.Toolkit;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Vector;
import javax.swing.JOptionPane;
class Data implements Serializable {
private static final long serialVersionUID = 1L;
private String time;
private String note;
public Data(String time, String note) {
this.time=time;
this.note=note;
}
public String getTime() {
return time;
}
public String getNote() {
return note;
}
}
public class S extends ObjectOutputStream {
String t, n;
public S(FileOutputStream w, String time, String note) throws Exception {
super(w);
t=time;
n=note;
writeStreamHeader();
}
protected void writeStreamHeader() throws IOException {
writeObject(new Data(t,n));
reset();
}
public static void rd() {
Vector v = new Vector();
Data d;
try
{
ObjectInputStream r = new ObjectInputStream(new FileInputStream("file.cer"));
for(int i=1; i<=100; i++) {
try { v.add(r.readObject()); }
catch(EOFException exp){
r.close();
break;
}
}
for(int i=0; i<v.size(); i++) {
d = (Data)v.elementAt(i);
System.out.println(d.getNote()+" "+d.getTime());
}
}
catch(Exception exp) {
Toolkit.getDefaultToolkit().beep();
JOptionPane.showMessageDialog(null, String.format("ERROR = %s\nCLASS = S", exp.getClass()));
System.out.println(exp.getClass());
System.exit(0);
}
}
public static void main(String arg[]) throws Exception {
FileOutputStream w = new FileOutputStream("file.cer",true);
new S(w,"99:59:59:99","Maxima");
new S(w,"00:00:00:00","Minima");
rd();
}
}
if this code cannot work properly
You are correct. It cannot work properly.
then please give an alternative way to append data to an ObjectOutputStream
writeStreamHeader() must do nothing if the file is being appended to, but it must call super.writeStreamHeader() if the file is newly created. It certainly should not call writeObject() at any time.
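As an illustration of that rule, a common sketch looks like the following (class and method names are mine, not from your code):
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

// Used only when the file already contains a stream header: suppress the
// second header and just reset the handle table instead.
class AppendingObjectOutputStream extends ObjectOutputStream {
    AppendingObjectOutputStream(OutputStream out) throws IOException {
        super(out);
    }

    @Override
    protected void writeStreamHeader() throws IOException {
        reset();
    }
}

class AppendDemo {
    // Picks the right stream type depending on whether the file is new or
    // already has data (and therefore already has a header).
    static ObjectOutputStream openForAppend(File f) throws IOException {
        boolean appending = f.exists() && f.length() > 0;
        FileOutputStream fos = new FileOutputStream(f, true);
        return appending ? new AppendingObjectOutputStream(fos)
                         : new ObjectOutputStream(fos);
    }
}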