First of all, I have only a couple of hours of experience with Java, so if this is a simple question, sorry about that.
I want to read a file, but I don't want to start at the beginning; instead I want to skip the first 1024 bytes and then start reading. Is there a way to do that? I realized that RandomAccessFile might be useful, but I'm not sure.
try {
    FileInputStream inputStream = new FileInputStream(fName);
    // skip 1024 bytes somehow and start reading
} catch (FileNotFoundException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
You will want to use the FileInputStream.skip method to seek to the point you want, then begin reading from there. Note that skip(n) may skip fewer than n bytes, so check its return value. The javadocs for FileInputStream have more information that you might find useful.
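A minimal sketch of that approach (a temporary file stands in for fName, which is an assumption for the demo); since skip(n) may skip fewer than n bytes, it is called in a loop:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SkipExample {
    public static void main(String[] args) throws IOException {
        // Demo file: 2048 bytes, byte at offset i is (i % 128)
        Path tmp = Files.createTempFile("demo", ".bin");
        byte[] data = new byte[2048];
        for (int i = 0; i < data.length; i++) data[i] = (byte) (i % 128);
        Files.write(tmp, data);

        try (FileInputStream in = new FileInputStream(tmp.toFile())) {
            long toSkip = 1024;
            // skip() may skip fewer bytes than requested, so loop until done
            while (toSkip > 0) {
                long skipped = in.skip(toSkip);
                if (skipped <= 0) break;   // end of stream or no progress
                toSkip -= skipped;
            }
            int first = in.read();         // first byte after the skipped region
            System.out.println(first);     // byte at offset 1024 -> 1024 % 128 = 0
        }
        Files.deleteIfExists(tmp);
    }
}
```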
You could use a method like skipBytes() here:
/**
 * Reads and discards count bytes from the InputStream.
 * Returns true if count bytes were skipped, false otherwise.
 *
 * @param is    The InputStream to read from.
 * @param count The number of bytes to drop.
 */
private static boolean skipBytes(java.io.InputStream is, int count) {
    byte[] buff = new byte[count];
    try {
        // A single read() may return fewer than count bytes,
        // so keep reading until count bytes have been consumed.
        int total = 0;
        while (total < count) {
            int r = is.read(buff, total, count - total);
            if (r == -1) {
                return false; // end of stream before count bytes
            }
            total += r;
        }
        return true;
    } catch (IOException e) {
        e.printStackTrace();
    }
    return false;
}
I'm modifying the source code of H2 MVStore 1.4.191 to write files more slowly by adding some thread sleeps.
The big change is that the file is no longer written in one go, but in 2^16-byte chunks.
MVStore uses java.nio's FileChannel and ByteBuffer to write its file. The problem is that the result differs from the original version. It seems that FileChannel adds space characters (0x20 in ASCII), sometimes more than 40 in a row. Or maybe it fails to remove those spaces, unlike the original version; I don't know.
I suppose it's due to the file writing.
The method file.write(buffer, position), where file is a FileChannel object, returns the number of bytes written; in the original version of H2, it sometimes returns a smaller number than the buffer size. In my version, that never happens.
Do you have any tips about ByteBuffer, FileChannel, and my problem?
The original code calls a writeFully function a few times (it writes a header, a footer, and the data):
int off = 0;
do {
    int len = file.write(src, pos + off);
    off += len;
} while (src.remaining() > 0);
src is the ByteBuffer and file is a FileChannelImpl from sun.nio.ch. The buffer can contain more than 50 MB of data.
From this code, I developed a solution that splits the ByteBuffer into 2^16-byte buffers, which I write with a sleep between each of them:
int off = 0;
byte[] buffer = src.array();
int size = src.array().length;
int chunkSize = 128;
List<byte[]> splittedBuffer = new ArrayList<byte[]>();
int i = 0;
while (i < size) {
    int start = i;
    int end = i + chunkSize;
    if (end > size) {
        // if the buffer size is not a multiple of chunkSize,
        // the last chunk will be smaller
        end = size;
    }
    splittedBuffer.add(Arrays.copyOfRange(src.array(), start, end));
    try {
        Thread.sleep(5);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    i += chunkSize;
}
int offset = 0;
for (byte[] chunk : splittedBuffer) {
    int len = file.write(ByteBuffer.wrap(chunk), pos + offset);
    offset += len;
    try {
        Thread.sleep(5);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
Finally, the problem may not be that whitespace is added, but that part of the data is written in the wrong place. I'm going to check.
OK, the problem was that I used the size of the ByteBuffer's backing array to split it instead of its limit, which is smaller (set by H2 during its process).
Thanks for the help.
Regards
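As a hedged sketch of that fix (with made-up demo data, not H2's actual buffers): split by limit() rather than by the backing array's length, since the array can be larger than the valid data.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SplitByLimit {
    // Split the readable portion of a heap ByteBuffer (up to limit()) into chunks.
    static List<byte[]> split(ByteBuffer src, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        int size = src.limit();                // NOT src.array().length
        for (int i = 0; i < size; i += chunkSize) {
            int end = Math.min(i + chunkSize, size);
            chunks.add(Arrays.copyOfRange(src.array(), i, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Backing array is 10 bytes, but only 7 are valid (limit = 7).
        ByteBuffer buf = ByteBuffer.allocate(10);
        buf.put(new byte[] {1, 2, 3, 4, 5, 6, 7});
        buf.flip();                            // limit = 7, position = 0
        List<byte[]> chunks = split(buf, 3);
        System.out.println(chunks.size());     // 3 chunks: 3 + 3 + 1 bytes
        System.out.println(chunks.get(2).length);
    }
}
```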
I'm currently writing a Java Swing program which gets all files from a drive and checks their binary signatures. But I only want the first 2-8 bytes of each file, to speed the program up. I've tried most of the solutions already available, but none work with what I have already coded.
Current code:
public void getBinary() {
    try {
        // get the file as a DataInputStream
        DataInputStream input = new DataInputStream(new FileInputStream(file));
        try {
            // while bytes are available, keep looping and reading bytes
            while (input.available() > 0) {
                // build an uppercase hex string from the bytes;
                // mask with 0xFF so negative bytes don't print as 8 hex digits
                sb.append(String.format("%02X", input.readByte() & 0xFF));
                // need to get only the first couple of (8) bytes
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        // print the hex out to the console
        // System.out.println(sb.toString());
    } catch (FileNotFoundException e2) {
        e2.printStackTrace();
    }
}
This works perfectly for me, and may help others too:
https://i.stack.imgur.com/8SU5A.png
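For reference, a more compact sketch of reading only the first few bytes of a file (the helper name and the demo bytes are my own; assumes Java 11+ for InputStream.readNBytes):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class SignatureReader {
    // Read at most the first n bytes of a file and return them as uppercase hex.
    static String firstBytesHex(Path path, int n) throws IOException {
        byte[] head;
        try (InputStream in = Files.newInputStream(path)) {
            head = in.readNBytes(n);   // reads up to n bytes, fewer at EOF
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : head) {
            sb.append(String.format("%02X", b & 0xFF));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("sig", ".bin");
        // PNG-like signature bytes, plus one extra byte that must not be read
        Files.write(tmp, new byte[] {(byte) 0x89, 0x50, 0x4E, 0x47,
                                     0x0D, 0x0A, 0x1A, 0x0A, 0x00});
        System.out.println(firstBytesHex(tmp, 8));
        Files.deleteIfExists(tmp);
    }
}
```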
I'm writing a server/client application where the server communicates with many different clients. The communication with each client takes place in a separate thread on the machine running the server.
So far I have been using the BufferedReader class to read data from the client sockets via the readLine() method. The problem I have with readLine() is that it stops reading when it finds a newline character. Due to the nature of my program, I would like to replace the newline delimiter with a sequence of characters like $^%, so that a single call keeps reading until it finds $^%. For example, if a client sends a URL or a file path in which \n occurs as part of the natural path, readLine() will stop at the \n and read no further.
I have created the following class in an attempt to solve this problem, but I have now created an even bigger one. When I use the BufferedReader class and the readLine() method, my server can service a lot of clients, but when I use CustomBufferedReader and readCustomLine(), the server crashes after 4 or 5 threads start running. I'm pretty sure my class is consuming a lot of resources compared to readLine(), but I have no idea why or how.
I would appreciate any insight on the matter.
public class CustomBufferedReader extends BufferedReader {

    public CustomBufferedReader(Reader reader) {
        super(reader);
    }

    /**
     * Keeps reading data from a socket and stores it in a String buffer until
     * the combination $^% is read.
     *
     * @return A String containing the buffer read, without the $^% ending.
     * @throws IOException
     */
    public String readCustomLine() throws IOException {
        // $^%
        String buffer = "";
        try {
            if (super.ready()) {
                // First read 3 chars so there are at least 3 chars
                // in the buffer to compare later on.
                try {
                    buffer = buffer + (char) super.read();
                    buffer = buffer + (char) super.read();
                    buffer = buffer + (char) super.read();
                } catch (IOException e) {
                    e.printStackTrace();
                    System.out.println(e.getMessage());
                }
                int i = 0;
                // This while will keep reading chars and adding them to the
                // buffer until it reads $^% as the terminating sequence.
                while (!(buffer.charAt(i) == '$' && buffer.charAt(i + 1) == '^'
                        && buffer.charAt(i + 2) == '%')) {
                    try {
                        buffer = buffer + (char) super.read();
                        i++;
                    } catch (IOException e) {
                        e.printStackTrace();
                        System.out.println(e.getMessage());
                    }
                }
                // Return the saved buffer after stripping the $^% ending.
                return buffer.substring(0, buffer.length() - 3);
            }
        } catch (IOException e) {
            // e.printStackTrace();
        }
        return buffer;
    }
}
I think this is an easier way to achieve what you are looking for (note that useDelimiter() takes a regular expression, so the $ and ^ must be escaped, e.g. with Pattern.quote):
String line = new Scanner(reader).useDelimiter(Pattern.quote("$^%")).next();
About why your implementation of readCustomLine is not working: you may have a concurrency problem. If you look at the readLine implementation in BufferedReader, you'll notice that all of its code runs inside a synchronized block, so you could try that in your code.
Also, if an exception is thrown from super.read(), you just catch it and keep going even though the resulting buffer will contain errors; you could remove the inner try/catch blocks.
Finally, as EJP has pointed out, you should remove the ready() call and check every super.read() for -1 (meaning EOF).
import java.io.BufferedReader;
import java.io.EOFException;
import java.io.IOException;
import java.io.Reader;

public class CustomBufferedReader extends BufferedReader {

    public CustomBufferedReader(final Reader reader) {
        super(reader);
    }

    /**
     * Keeps reading data from a socket and stores it in a String buffer
     * until the combination $^% is read.
     *
     * @return A String containing the buffer read, without the $^% ending.
     * @throws IOException
     */
    public synchronized String readCustomLine() throws IOException {
        // $^%
        String buffer = "";
        try {
            // First read 3 chars so there are at least 3 chars
            // in the buffer to compare later on.
            buffer = buffer + safeRead();
            buffer = buffer + safeRead();
            buffer = buffer + safeRead();
            int i = 0;
            // This while will keep reading chars and adding them to the
            // buffer until it reads $^% as the terminating sequence.
            while (!(buffer.charAt(i) == '$' && buffer.charAt(i + 1) == '^'
                    && buffer.charAt(i + 2) == '%')) {
                buffer = buffer + safeRead();
                i++;
            }
            // Return the saved buffer after stripping the $^% ending.
            return buffer.substring(0, buffer.length() - 3);
        } catch (IOException e) {
            /*
             * Personally, I would remove this try/catch block and let the
             * exception reach the caller.
             */
            e.printStackTrace();
            System.out.println(e.getMessage());
        }
        return buffer;
    }

    private char safeRead() throws IOException {
        int value = super.read();
        if (value == -1) {
            throw new EOFException();
        }
        return (char) value;
    }
}
OK, I am using the following function to create multiple threads to download a file. As you can see, the function takes the link, the starting byte, the ending byte, and the path to download the file to as arguments. I call this function twice to create two threads that download the required file.
For example, if the file is 100 bytes, I do the following:
thread-1 --> DownloadFile("http://localhost/file.zip", 0, 50, "output.zip");
thread-2 --> DownloadFile("http://localhost/file.zip", 50, 100, "output.zip");
But here is what happens: a few bytes don't get downloaded and my progress bar gets stuck at 99%. That's the problem!
Why does it get stuck at 99%? In other words, why are some bytes lost? I can see the total number of bytes in the downloaded variable.
Here is the function:
public void DownloadFile(final String link, final long start, final long end, final String path) {
    new Thread(new Runnable() {
        public void run() {
            try {
                URL url = new URL(link);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestProperty("Range", "bytes=" + start + "-" + end);
                BufferedInputStream bis = new BufferedInputStream(conn.getInputStream());
                RandomAccessFile raf = new RandomAccessFile(path, "rw");
                raf.seek(start);
                int i = 0;
                byte bytes[] = new byte[1024];
                while ((i = bis.read(bytes)) != -1) {
                    raf.write(bytes, 0, i);
                    downloaded = downloaded + i;
                    int perc = (int) ((downloaded * 100) / FileSize);
                    progress.setValue(perc);
                    percentLabel.setText(Long.toString(downloaded) + " out of " + FileSize);
                }
                if (FileSize == downloaded) {
                    progress.setValue(100);
                    JOptionPane.showMessageDialog(null, "Download Success!");
                    progress.setValue(0);
                    downloaded = 0;
                    downBtn.setText("Download");
                }
                bis.close();
                raf.close();
            } catch (MalformedURLException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
}
Thanks in anticipation.
RandomAccessFile is not thread-safe.
raf.seek(start) fails; see the documentation of RandomAccessFile.seek():
Sets the file-pointer offset, measured from the beginning of this file, at which the next read or write occurs. The offset may be set beyond the end of the file. Setting the offset beyond the end of the file does not change the file length. The file length will change only by writing after the offset has been set beyond the end of the file.
You could download the parts of the file into separate files and then merge them.
Are you sure that parallel downloads are faster?
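A minimal sketch of the merge step suggested above (the part-file names and demo contents are made up for illustration; in the real program each part would come from one download thread):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class MergeParts {
    // Concatenate part files into one output file, in order.
    static void merge(List<Path> parts, Path out) throws IOException {
        try (OutputStream os = Files.newOutputStream(out)) {
            for (Path part : parts) {
                Files.copy(part, os);   // appends each part to the open stream
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path p1 = Files.createTempFile("part", "1");
        Path p2 = Files.createTempFile("part", "2");
        Files.write(p1, "hello ".getBytes());
        Files.write(p2, "world".getBytes());
        Path out = Files.createTempFile("merged", ".bin");
        merge(List.of(p1, p2), out);
        System.out.println(new String(Files.readAllBytes(out)));
        Files.deleteIfExists(p1);
        Files.deleteIfExists(p2);
        Files.deleteIfExists(out);
    }
}
```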
According to:
Note that while some implementations of InputStream will return the total number of bytes in the stream, many will not. It is never correct to use the return value of this method to allocate a buffer intended to hold all data in this stream.
from:
http://docs.oracle.com/javase/7/docs/api/java/io/InputStream.html#available%28%29
and this note:
In particular, code of the form
int n = in.available();
byte[] buf = new byte[n];
in.read(buf);
is not guaranteed to read all of the remaining bytes from the given input stream.
from:
http://docs.oracle.com/javase/8/docs/technotes/guides/io/troubleshooting.html
does it mean that using the function below may cause the file not to be read completely?
/**
 * Reads a file from /raw/res/ and returns it as a byte array.
 * @param res Resources instance for Mosembro
 * @param resourceId ID of resource (ex: R.raw.resource_name)
 * @return byte[] if successful, null otherwise
 */
public static byte[] readRawByteArray(Resources res, int resourceId) {
    InputStream is = null;
    byte[] raw = new byte[] {};
    try {
        is = res.openRawResource(resourceId);
        raw = new byte[is.available()];
        is.read(raw);
    } catch (IOException e) {
        e.printStackTrace();
        raw = null;
    } finally {
        try {
            if (is != null) {
                is.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return raw;
}
available() returns the number of bytes that can be read without blocking. There is no necessary correlation between that number, which can be zero, and the total length of the file.
Yes, it does not necessarily read everything, just as RandomAccessFile.read(byte[]) does not, as opposed to RandomAccessFile.readFully(byte[]). Furthermore, if available() returns 0, the code allocates an empty array and physically reads 0 bytes.
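To illustrate the read() vs. readFully() contrast, here is a small sketch using an in-memory stream that deliberately returns at most 4 bytes per call (a stand-in for a slow device; the class names are mine, not part of the original question):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadVsReadFully {
    // An InputStream that returns at most 4 bytes per read() call,
    // simulating a device that delivers data in small blocks.
    static class TricklingStream extends ByteArrayInputStream {
        TricklingStream(byte[] buf) { super(buf); }
        @Override
        public synchronized int read(byte[] b, int off, int len) {
            return super.read(b, off, Math.min(len, 4));
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10];

        // plain read(): may return after fewer bytes than requested
        InputStream in1 = new TricklingStream(data);
        int n = in1.read(new byte[10]);
        System.out.println(n);              // only 4 bytes this call

        // readFully(): loops internally until the array is filled
        DataInputStream in2 = new DataInputStream(new TricklingStream(data));
        byte[] out = new byte[10];
        in2.readFully(out);                 // reads all 10 bytes
        System.out.println(out.length);
    }
}
```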
It probably reads only the first block if the source is a slow device such as a file system.
The principle: the file is read by the underlying system software, normally buffered, so a couple of blocks are already in memory, and the system is sometimes already reading further. The system reads blocks asynchronously, and a read blocks when it tries to read more than the system has already buffered.
So in general the software has a read loop over blocks, and a read operation regularly blocks until the physical read has sufficiently filled the buffer.
To hope for non-blocking reads you would need to do:
InputStream is = res.openRawResource(resourceId);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
for (;;) {
    // Read bytes until no longer available:
    for (;;) {
        int n = is.available();
        if (n == 0) {
            break;
        }
        byte[] part = new byte[n];
        int nread = is.read(part);
        assert nread == n;
        baos.write(part, 0, nread);
    }
    // Still a probably blocking read:
    byte[] part = new byte[128];
    int nread = is.read(part);
    if (nread <= 0) {
        break; // End of file
    }
    baos.write(part, 0, nread);
}
return baos.toByteArray();
Now, before you copy that code: simply use a blocking read loop instead. I cannot see an advantage in using available() unless you can do something useful with partial data while reading the rest.
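A minimal sketch of such a blocking read loop, tested here against an in-memory stream (the 8192-byte buffer size is an arbitrary choice):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadAll {
    // Read an InputStream to the end with a simple blocking loop,
    // never consulting available().
    static byte[] readAll(InputStream is) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] part = new byte[8192];
        int nread;
        while ((nread = is.read(part)) != -1) {   // blocks until data or EOF
            baos.write(part, 0, nread);
        }
        return baos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[20000];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        byte[] result = readAll(new ByteArrayInputStream(data));
        System.out.println(result.length);
    }
}
```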