Writing an object to a file. Best way - Java

I have a problem with time.
I am currently developing an app in Java in which I have to build a network analyzer.
For that I use JPCAP to capture all the packets and write them to a file, and from there I will bulk-insert them into the DB.
The problem is that when I write the entire object to the file, like this,
UDPPacket udpPacket = (UDPPacket)packet;
wtf.writeToFile("packets.txt", udpPacket + "\n");
everything works nice and smooth, but when I try to write like this
String str = "" + udpPacket.src_ip + " " + udpPacket.dst_ip + ""
        + udpPacket.src_port + " " + udpPacket.dst_port + " " + udpPacket.protocol
        + " Wi-fi " + udpPacket.dst_ip.getCanonicalHostName() + "\n";
wtf.writeToFile("packets.txt", str + "\n");
writing to the file takes a lot more time.
The function that writes to the file is this:
public void writeToFile(String name, String str) {
    try {
        PrintWriter writer = new PrintWriter(new FileOutputStream(new File(name), this.restart));
        if (!str.equalsIgnoreCase("0")) {
            writer.append(str);
            this.restart = true;
        } else {
            this.restart = false;
            writer.print("");
        }
        writer.close();
    } catch (IOException e) {
        System.out.println(e);
    }
}
Can anyone give me a hint, what's the best way to do this?
Thanks a lot
EDIT:
7354.120266 ns - packet print
241471.110451 ns - with StringBuilder

Keep the PrintWriter open. Don't open and close it for every line you want to write to the file. And don't flush it either: just close it when you exit. Basically you should remove your writeToFile() method and just call PrintWriter.write() or whatever directly when necessary.
NB You are writing text, not objects.
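A minimal sketch of that approach, keeping one PrintWriter open for the whole capture session (the class and method names here are illustrative, not from the question's code):

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class PacketLogger implements AutoCloseable {
    private final PrintWriter writer;

    public PacketLogger(String name) throws IOException {
        // Open once, in append mode; reuse the same writer for every packet.
        this.writer = new PrintWriter(new FileWriter(name, true));
    }

    public void log(String line) {
        writer.println(line);   // no per-line open/close, no explicit flush
    }

    @Override
    public void close() {
        writer.close();         // flushes any remaining data once, at shutdown
    }
}

Each captured packet then just calls log(...), and close() is called once when the capture stops.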

I found the problem:
as @KevinO said, getCanonicalHostName() was the problem.
Thanks a lot.
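For reference, a hedged sketch of the faster formatting: getCanonicalHostName() performs a reverse-DNS lookup for every packet, whereas getHostAddress() only formats the address that was already captured (the field names follow the JPCAP UDPPacket fields used above):

// Same JPCAP fields as in the question; getHostAddress() avoids the
// per-packet reverse-DNS lookup that getCanonicalHostName() triggers.
String str = udpPacket.src_ip.getHostAddress() + " "
        + udpPacket.dst_ip.getHostAddress() + " "
        + udpPacket.src_port + " " + udpPacket.dst_port + " "
        + udpPacket.protocol + " Wi-fi\n";
wtf.writeToFile("packets.txt", str);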

Related

Best way to continuously write to file (50 times per second)

I am building an Android app which records Accelerometer and Gyroscope data to a text file. In most of the tutorials they use a method which involves creating two text files, and opening and closing them 50 times per second each, i.e.:
private static void writeToFile(File file, String data) {
    FileOutputStream stream = null;
    try {
        stream = new FileOutputStream(file, true);
        stream.write(data.getBytes());
    } catch (FileNotFoundException e) {
        Log.e("History", "In catch");
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (stream != null) {
            try {
                stream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
I.e., on every SensorEvent you open the file, write the values, then close the file, then open it again 20 milliseconds later.
It all seems to be working fine, I was just wondering if there was a better way of going about doing it. I tried some changes using a boolean flag to say whether the stream is already open or not, and a different writeToFile if the flag is set to true, but apparently the FileOutputStream can sometimes close itself within the 20 millisecond time frame, and the app crashes.
So I guess my question is: how many system resources does it take to open, write and close a file that many times? Is it fine and not something I should worry about, or is there a better way of doing things? Bear in mind that continuous sensor logging already takes a toll on battery life, so I would like to do things as efficiently as possible.
Thanks
It's not a good way of doing it. A better way would be to create the FileOutputStream once, save it as an instance member of whatever class this is, and just write to it (possibly with an occasional call to flush to make sure it writes to disk).
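A minimal sketch of that idea, assuming the logging lives in its own small class (all names here are illustrative):

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SensorLog implements AutoCloseable {
    private final OutputStream out;
    private int linesSinceFlush = 0;

    public SensorLog(File file) throws IOException {
        // Open once in append mode and keep the stream for the whole session.
        this.out = new BufferedOutputStream(new FileOutputStream(file, true));
    }

    public void append(String data) throws IOException {
        out.write(data.getBytes());
        // Flush occasionally so data survives a crash without paying
        // the cost of hitting the disk 50 times per second.
        if (++linesSinceFlush >= 100) {
            out.flush();
            linesSinceFlush = 0;
        }
    }

    @Override
    public void close() throws IOException {
        out.close();   // flushes any remaining buffered bytes
    }
}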

Cannot access file because being used

I'm writing a logger for my Java program (in CSV format).
The logger works fine, but I ran into one problem.
It sounds pretty logical that the program crashes when I try to write to the file while it is open in another program at the same time.
When I do that, I get this exception: "The process cannot access the file because it is being used by another process".
My question is whether there is any way to continue writing even if someone has the file open.
Thanks.
UPDATE:
I think I solved the problem.
Every time after I write to the file (with BufferedWriter and FileWriter), I call a close() function that closes the BufferedWriter and FileWriter.
I changed the close() function:
1. Added a FileChannel and FileLock.
2. Ignored the line bw.close();
Is it OK not to close the BufferedWriter (bw)? Can there be any problems later on?
private void close() throws IOException {
    RandomAccessFile rf;
    rf = new RandomAccessFile(file, "rw");
    fileChannel = rf.getChannel();
    lock = fileChannel.lock();
    try {
        if (bw != null) {
            // bw.close(); The line I ignored.
            bw = null;
        }
        if (fw != null) {
            fw.close();
            fw = null;
        }
    } catch (IOException ex) {
        ex.printStackTrace();
    }
    lock.release();
}
UPDATE 2:
Now I found that if I change the function to this (close changed to flush), it works:
private void close() {
    try {
        if (bw != null) {
            bw.flush();
            bw = null;
        }
        if (fw != null) {
            fw.flush();
            fw = null;
        }
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}
What is the best option?
Reverse the problem: try to open the file for reading while you continue writing:
if you want a fixed snapshot of the data, you can copy the file (by shell) and then read the copy;
if you also want future written data, you must keep the same output: try to redirect the normal output to something you can both store and read.
Perhaps a library exists for this; it is similar to tee and tpipe.
See for example:
Could I duplicate or intercept an output stream in Java?
For redirecting log4j output to whatever you want, see this for example:
How do I redirect log4j output to my HttpServletResponse output stream?
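If you go the redirection route, a tee-style stream is straightforward to sketch yourself: every byte is forwarded to two underlying streams, e.g. the real log file and a second copy that other programs may open freely. The class below is an illustration, not an existing library class:

import java.io.IOException;
import java.io.OutputStream;

// Forwards every byte to two targets, e.g. the live log and a readable copy.
public class TeeOutputStream extends OutputStream {
    private final OutputStream first;
    private final OutputStream second;

    public TeeOutputStream(OutputStream first, OutputStream second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public void write(int b) throws IOException {
        first.write(b);
        second.write(b);
    }

    @Override
    public void flush() throws IOException {
        first.flush();
        second.flush();
    }

    @Override
    public void close() throws IOException {
        try {
            first.close();
        } finally {
            second.close();
        }
    }
}

Apache Commons IO ships a similar TeeOutputStream if you would rather not roll your own.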
Is there any way to continue writing even if something else has opened the file?
Not in Java.
To write a file, you must first open it. If you cannot open it because the OS won't permit it ... because something else has opened it ... then you cannot get to the point where you can write it.
In this scenario, you should consider opening a different log file.
Note that this scenario happens in Windows because Java is following normal Window practice and opening the file with an exclusive (mandatory) lock by default. Short of changing Java ... and every other Windows application that opens files like this ... you are stuck.
UPDATE
It turns out that there may be a way.
Read this Q&A: https://stackoverflow.com/a/22648514/139985
Use FileChannel.open as described, but use flags that allow you to write without forbidding other writers. For example
FileChannel.open(path, WRITE)
or
FileChannel.open(path, WRITE, APPEND)
The trick is that you don't want any of the NOSHARE_* options.
CAVEAT: I haven't tried this.
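An untested sketch of that suggestion, using only standard NIO calls (CREATE is added here so the snippet runs standalone; the file name is made up):

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;

import static java.nio.file.StandardOpenOption.APPEND;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.WRITE;

public class SharedLogWriter {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("app.log");   // illustrative file name
        // WRITE + APPEND without any NOSHARE_* option, so the channel
        // does not request an exclusive lock on Windows.
        try (FileChannel channel = FileChannel.open(path, CREATE, WRITE, APPEND)) {
            channel.write(ByteBuffer.wrap("log line\n".getBytes(StandardCharsets.UTF_8)));
        }
    }
}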
As @guillaume said, you can use a library like log4j.
But if you want to implement your own solution in Java, you can use the observer pattern and write your logs asynchronously.

About the close() method used for closing a stream

Today, when I was working on a servlet that writes some information to a file on my hard disk, I used the following code to perform the write operation:
File f=new File("c:/users/dell/desktop/ja/MyLOgs.txt");
PrintWriter out=new PrintWriter(new FileWriter(f,true));
out.println("the name of the user is "+name+"\n");
out.println("the email of the user is "+ email+"\n");
out.close(); //**my question is about this statement**
When I did not use that statement, the servlet compiled fine, but it did not write anything to the file; when I included it, the write operation was performed successfully. My questions are:
Why was the data not written to the file when I left that statement out (even though my servlet compiled without any errors)?
To what extent is the close operation necessary for streams?
Calling close() causes all the data to be flushed. You have constructed a PrintWriter without enabling auto-flush (a second argument to one of the constructors), which would mean you would have to manually call flush(), which close() does for you.
Closing also frees up any system resources used by having the file open. Although the VM and Operating System will eventually close the file, it is good practice to close it when you are finished with it to save memory on the computer.
You may also wish to put the close() inside a finally block to ensure it always gets called. Such as:
PrintWriter out = null;
try {
    File f = new File("c:/users/dell/desktop/ja/MyLOgs.txt");
    out = new PrintWriter(new FileWriter(f, true));
    out.println("the name of the user is " + name + "\n");
    out.println("the email of the user is " + email + "\n");
} finally {
    if (out != null) {
        out.close();
    }
}
See: PrintWriter
Sanchit also makes a good point about using Java 7's try-with-resources to close your streams automatically the moment you no longer need them.
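For completeness, a short sketch of the auto-flush constructor mentioned in the first paragraph: the second boolean argument makes println() flush after every call, so the data reaches the file even before close() (f and name are the variables from the question's code).

// Auto-flush variant: println(), printf() and format() flush automatically.
PrintWriter out = new PrintWriter(new FileWriter(f, true), true);
out.println("the name of the user is " + name);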
When you close a PrintWriter, it will flush all of its data out to wherever you want the data to go. It doesn't automatically do this on every write because that would be very inefficient: writing to disk is an expensive operation.
You could achieve the same effect with flush(), but you should always close streams; see here: http://www.javapractices.com/topic/TopicAction.do?Id=8 and here: http://docs.oracle.com/javase/tutorial/jndi/ldap/close.html. Always call close() on streams when you are done using them. Additionally, to make sure the stream is always closed regardless of exceptions, you could do this:
try {
    // do stuff
} finally {
    outputStream.close();
}
It is because the PrintWriter buffers your data in order not to perform I/O operations repeatedly for every write (which is very expensive). When you call close(), the buffer is flushed into the file. You can also call flush() to force the data to be written without closing the stream.
Streams automatically flush their data before closing. So you can either manually flush the data every once in a while using out.flush(), or you can just close the stream once you are done with it. When the program ends, streams close and your data gets flushed; this is why most of the time people do not close their streams!
Using Java 7 you can do something like the code below, which will auto-close your streams (in the reverse of the order you open them).
public static void main(String[] args) {
    String name = "";
    String email = "";
    File f = new File("c:/users/dell/desktop/ja/MyLOgs.txt");
    try (FileWriter fw = new FileWriter(f, true); PrintWriter out = new PrintWriter(fw)) {
        out.println("the name of the user is " + name + "\n");
        out.println("the email of the user is " + email + "\n");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
PrintWriter buffers the data to be written and will not write to disk until its buffer is full. Calling close() ensures that any remaining data is flushed, as well as closing the underlying OutputStream.
close() statements typically appear in finally blocks.
Why was the data not being written to the file when I was not including that statement?
When the process terminates, unmanaged resources are released. For InputStreams this is fine. For OutputStreams, you could lose any buffered data, so you should at least flush the stream before exiting the program.

Issue with Socket Streams in Java "Telnet" Code?

I'm having trouble transitioning to Java from C/C++ for my "Telnet" interface to some modules we work with here. I want to be able to establish a connection with a card that, after starting its command line interface, waits for a connection and serves up a prompt ("OK>") to the clients. This works fine for both the C and C# clients I've written, but the Java one has given me some issues. I've attached some code that I grabbed from some examples online, but so far, all I can ascertain for sure is that the socket is being created.
Code:
private boolean CreateTelnetSession()
{
    try
    {
        _socket = new Socket();
        _socket.connect(new InetSocketAddress(_ipAddr, _ipPort));
        _socket.setSoTimeout(10000);
        _socket.setKeepAlive(true);
        _out = new PrintWriter(_socket.getOutputStream(), true);
        _in = new BufferedReader(new InputStreamReader(_socket.getInputStream()));
        _out.println("\r\n");
        System.out.println(_in.readLine());
        return true;
    }
    catch(Exception e)
    {
        System.out.println("Exception!");
    }
    return false;
}
The socket SEEMS to be created correctly, and when the program shuts down, I can see the session close on the card(s) I'm trying to talk to, but I don't see the carriage return/line feed echoed on the card as I would expect, or a prompt returned via the InputStream. Is it possible that it's a character encoding issue? Am I doing something incorrectly with the streams (crossing them!?!)? Any insight at all? When I get over this initial learning curve, I would like to acknowledge how easy Java makes these socket reads and writes, but until then...
I read this post:
java simple telnet client using sockets
It seems similar to what I'm running up against, but it's not the same. I'm willing to take the rep hit if someone has seen something on here that resolves my issue, so feel free to let me know, bluntly, what I missed.
Edit:
private boolean CreateTelnetSession()
{
    try
    {
        _socket = new Socket();
        _socket.connect(new InetSocketAddress(_ipAddr, _ipPort));
        _socket.setSoTimeout(10000);
        _socket.setKeepAlive(true);
        _out = new DataOutputStream(_socket.getOutputStream());
        _in = new DataInputStream(_socket.getInputStream());
        _outBuffer = ByteBuffer.allocate(2048);
        _outBuffer.order(ByteOrder.LITTLE_ENDIAN);
        _inBuffer = ByteBuffer.allocate(2048);
        _inBuffer.order(ByteOrder.LITTLE_ENDIAN);
        System.out.println("Connection Response: " + _in.read(_inBuffer.array()));
        System.out.println("Response: " + WriteCommand("DRS\r\n"));
        return true;
    }
    catch(Exception e)
    {
        System.out.println("Exception!");
    }
    return false;
}

private String WriteCommand(String command)
{
    try
    {
        _outBuffer = encoder.encode(CharBuffer.wrap(command));
        _out.write(_outBuffer.array());
        _out.flush();
        _in.read(_inBuffer.array());
        String retString = decoder.decode(_inBuffer).toString();
        return retString.substring(0, retString.indexOf('>') + 1);
    }
    catch(Exception e)
    {
        System.out.println("Exception!");
    }
    return "E1>";
}
There are many things to clean up and I'm going to experiment with whether I need to do it in quite this way, but this is the gist of the "solution". The big killer was the endian-ness. It should be mentioned, once again, that this is ugly and non-production code, but any other input would still be appreciated.
I have a couple of things you can try. You are using a PrintWriter for your output, which is a fairly high-level Writer (i.e. it encapsulates a lot of things for you). My concern is that the println() method in PrintWriter automatically adds line terminating character(s) at the end (as appropriate for your OS). So what you are really sending is "\r\n" plus the line terminator, meaning on a Unix box you would be sending "\r\n\n".
I would recommend switching to a DataOutputStream which gives you much more control over the raw bytes that are sent: http://docs.oracle.com/javase/6/docs/api/java/io/DataOutputStream.html
Remember if you switch to DataOutputStream you need to call flush on the output stream.
My other thought is it might be an endianness problem. Java is strictly big-endian (network byte order). Is it possible your "card" is reading things in little-endian? If you need to write over the network in little-endian (if so, your card is a bad netizen!) you will need to use a ByteBuffer, set its order to little-endian, write your bytes to it, then write the bytes from your ByteBuffer to the DataOutputStream.
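A minimal sketch of that byte-order handling (the class and method names are illustrative):

import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

final class LittleEndianWriter {
    // Packs a 32-bit value little-endian before handing the raw bytes to the
    // DataOutputStream (writing a byte[] does not involve any byte-order swap).
    static void writeIntLE(DataOutputStream out, int value) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(Integer.BYTES);
        buf.order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(value);
        out.write(buf.array());
        out.flush();
    }
}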
I would probably switch to a DataInputStream for your input stream too. readLine() will only return once a newline character is seen. Is your card returning a newline in its response?
My last thought is that your println methods might have an error and you don't know it because PrintWriter doesn't throw exceptions. The PrintWriter JavaDocs says:
"Methods in this class never throw I/O exceptions, although some of its constructors may. The client may inquire as to whether any errors have occurred by invoking checkError()."
Hopefully something in my long rambling response will help you.

Recovering from IOException: network name no longer available

I'm trying to read in a large (700GB) file and incrementally process it, but the network I'm working on will occasionally go down, cutting off access to the file. This throws a java.io.IOException telling me that "The specified network name is no longer available". Is there a way that I can catch this exception and wait for, say, fifteen minutes, and then retry the read, or is the Reader object fried once access to the file is lost?
If the Reader is rendered useless once the connection is lost, is there a way that I can rewrite this in such a way as to allow me to "save my place" and then begin my read from there without having to read and discard all the data before it? Even just munching data without processing it takes a long time when there's 500GB of it to get through.
Currently, the code looks something like this (edited for brevity):
class Processor {
    BufferedReader br;

    Processor(String fname) {
        br = new BufferedReader(new FileReader(fname));
    }

    void process() {
        try {
            String line;
            while ((line = br.readLine()) != null) {
                // ...code for processing the line goes here...
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Thank you for your time.
You can keep track of how much has been read in a variable. For example, here I keep track in a variable called read, and buff is a char[]. I'm not sure if this is possible using the readLine() method.
read += br.read(buff);
Then if you need to restart, you can skip that many characters:
br.skip(read);
Then you can keep processing away. Good luck.
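A rough sketch of that restart approach (the fifteen-minute wait comes from the question; the buffer size, class and field names are illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Re-opens the file after a network failure and skips the characters that
// were already consumed. Counting is done with read(char[]) as suggested
// above; with readLine() you would also have to account for line terminators.
class ResumableReader {
    private final String fname;
    private long charsRead = 0;

    ResumableReader(String fname) {
        this.fname = fname;
    }

    void process() throws InterruptedException {
        char[] buff = new char[8192];
        while (true) {
            try (BufferedReader br = new BufferedReader(new FileReader(fname))) {
                br.skip(charsRead);              // jump back to where we stopped
                int n;
                while ((n = br.read(buff)) != -1) {
                    charsRead += n;
                    // ...process buff[0..n) here...
                }
                return;                           // finished the whole file
            } catch (IOException e) {
                // Network share vanished; wait and retry from charsRead.
                Thread.sleep(15 * 60 * 1000L);
            }
        }
    }
}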
I doubt that the underlying fd will still be usable after this error, but you would have to try it. More probably you will have to reopen the file and skip to where you were up to.
