Read Data File address (Java)

I have this code for reading data and it works fine, but I want to change the starting point that the data is read from. My DataFile.txt is "abcdefghi"
and the output is
1)97
2)98
3)99
4)100
I want to start at the second byte, so the output would be
1)98
2)99
3)100
4)etc
Code:
import java.io.*;

public class ReadFileDemo3 {
    public static void main(String[] args) throws IOException {
        MASTER MASTER = new MASTER();
        MASTER.PART1();
    }
}

class MASTER {
    void PART1() throws IOException {
        System.out.println("OK START THIS PROGRAM");
        File file = new File("D://DataFile.txt");
        BufferedInputStream HH = null;
        int B = 0;
        HH = new BufferedInputStream(new FileInputStream(file));
        for (int i = 0; i < 4; i++) {
            B = B + 1;
            System.out.println(B + ")" + HH.read());
        }
    }
}

You can simply ignore the first n bytes as follows.
HH = new BufferedInputStream(new FileInputStream(file));
int B = 0;
int n = 1;       // number of bytes to ignore
int skipped = 0; // how many bytes have been ignored so far
while (HH.available() > 0) {
    // read the byte and convert the integer to a character
    char c = (char) HH.read();
    if (skipped < n) {
        skipped++; // count the ignored byte; without this the check never passes and every byte is skipped
        continue;
    }
    B++;
    System.out.println(B + ")" + (int) c);
}
Edit: If you want to start reading at an arbitrary position in the file, then you need to use RandomAccessFile. See this for detailed examples.
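For example, a minimal sketch using RandomAccessFile.seek() to start at the second byte (the path mirrors the question and is otherwise arbitrary):
import java.io.IOException;
import java.io.RandomAccessFile;

public class ReadFromOffset {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("D:/DataFile.txt", "r")) {
            raf.seek(1); // jump to byte offset 1, i.e. the second byte
            for (int i = 1; i <= 4; i++) {
                System.out.println(i + ")" + raf.read()); // prints 98, 99, 100, 101 for "abcdefghi"
            }
        }
    }
}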
Related SO post:
How do I fetch specific bytes from a file knowing the offset and length?
How to read a file from a certain offset

Related

Java - Write content from one file chunk by chunk (e.g. 8 Bytes) alternately into multiple files

So I've been trying to read the content of a text file and write the content chunk by chunk, alternately, into e.g. 2 new files.
I have already tried multiple ways to do that, but it won't work (OutputStream and FileOutputStream seem to be the most suitable).
Before, I tried to split the file into e.g. 3 parts and wrote the first part to one file, the second part to another, and so on, which worked perfectly fine with OutputStream and FileOutputStream.
But it won't work when I want to do it alternately.
To do it alternately I use the round-robin algorithm, which on its own works fine.
I would be really thankful if you could show me some examples of how to do it!
public void splitFile(String filePath, int numberOfParts, long sizeOfParts[]) throws FileNotFoundException, IOException, SQLException {
    long bytes = 8;
    OutputStream partsPath[] = new OutputStream[numberOfParts];
    long bytePositition[] = new long[numberOfParts];
    long copy_size[] = new long[numberOfParts];
    for (int i = 0; i < numberOfParts; i++) {
        copy_size[i] = sizeOfParts[i];
        partsPath[i] = new FileOutputStream(path); // Gets Path from my Database (works)
        //System.out.println(cloudsTable.getCloudsPathsFromDatabase(i) + '\\' + name + (i + 1) + fileType);
    }
    InputStream file = new FileInputStream(filePath);
    while (true) {
        boolean done = true;
        for (int i = 0; i < numberOfParts; i++) {
            if (copy_size[i] > 0) {
                done = false;
                if (copy_size[i] > bytes) {
                    copy_size[i] -= bytes;
                    bytePositition[i] += bytes;
                    System.out.println("file " + i + " " + bytePositition[i]);
                    readWrite(file, bytePositition[i], partsPath[i]);
                } else {
                    bytePositition[i] += copy_size[i];
                    System.out.println("rest file " + i + " " + bytePositition[i]);
                    readWrite(file, bytePositition[i], partsPath[i]);
                    copy_size[i] = 0;
                }
            }
        }
        if (done == true) {
            break;
        }
    }
    file.close();
    for (int i = 0; i < partsPath.length; i++) {
        partsPath[i].close();
    }
}

private void readWrite(InputStream file, long bytes, OutputStream path) throws IOException {
    byte[] buf = new byte[(int) bytes];
    while (file.read(buf) != -1) {
        path.write(buf);
        path.flush();
    }
}
What the code does is write the content of the original file into the first copied file only; the following files are empty.
EDIT:
To clarify: the first 8 bytes should go to file 1, the second 8 bytes to file 2, the third 8 bytes to file 3, the fourth 8 bytes back to file 1, and so on, round robin, until file 1 is sizeOfParts[0] bytes long, file 2 is sizeOfParts[1] bytes long, and file 3 is sizeOfParts[2] bytes long.
The main problem is that the readWrite() method is only supposed to copy one 8-byte block of bytes, but has a loop that makes it copy all the remaining bytes in the input file.
In addition, the code should be enhanced to use try-finally to close the files, and to correctly handle end-of-file, in case the input file is shorter than the sum of parts.
I would eliminate the readWrite() method, and consolidate the logic to prevent duplicate code, like this:
public void splitFile(String inPath, long[] sizeOfParts) throws IOException, SQLException {
    final int numberOfParts = sizeOfParts.length;
    String[] outPath = new String[numberOfParts];
    // Gets Paths from Database here
    InputStream in = null;
    OutputStream[] out = new OutputStream[numberOfParts];
    try {
        in = new BufferedInputStream(new FileInputStream(inPath));
        for (int part = 0; part < numberOfParts; part++)
            out[part] = new BufferedOutputStream(new FileOutputStream(outPath[part]));
        byte[] buf = new byte[8];
        long[] remain = sizeOfParts.clone();
        for (boolean done = false; !done; ) {
            done = true;
            for (int part = 0; part < numberOfParts; part++) {
                if (remain[part] > 0) {
                    int len = in.read(buf, 0, (int) Math.min(remain[part], buf.length));
                    if (len == -1) {
                        done = true;
                        break;
                    }
                    remain[part] -= len;
                    System.out.println("file " + part + " " + (sizeOfParts[part] - remain[part]));
                    out[part].write(buf, 0, len);
                    done = false;
                }
            }
        }
    } finally {
        if (in != null)
            in.close();
        for (int part = 0; part < out.length; part++)
            if (out[part] != null)
                out[part].close();
    }
}
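For reference, a call might look like this (the class name, input path, and part sizes below are made up purely for illustration):
// hypothetical usage: split source.bin round robin into three parts of 24, 24 and 16 bytes;
// the real output paths would come from the database
FileSplitter splitter = new FileSplitter();
splitter.splitFile("source.bin", new long[] { 24, 24, 16 });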

Manual HTTP client not writing to disk properly (Java)

I am trying to build a manual HTTP client (using sockets) along with a cache, and I can't seem to figure out why the files are not saving to disk properly. It works pretty well for HTML files, but doesn't seem to work for other file types that are not text-based, like .gif. Could anyone tell me why? I am quite new to the HTTP protocol and socket programming in general.
The loop to grab the response:
InputStream inputStream = socket.getInputStream();
PrintWriter outputStream = new PrintWriter(socket.getOutputStream());
ArrayList<Byte> dataIn = new ArrayList<Byte>();
ArrayList<String> stringData = new ArrayList<String>();
// Indices to show the location of certain lines in the ArrayList
int blankIndex = 8;
int lastModIndex = 0;
int byteBlankIndex = 0;
try
{
    // Get last modified date
    long lastMod = getLastModified(url);
    Date d = new Date(lastMod);
    // Construct the GET request
    outputStream.print("GET " + "/" + pathName + " HTTP/1.1\r\n");
    outputStream.print("If-Modified-Since: " + ft.format(d) + "\r\n");
    outputStream.print("Host: " + hostString + "\r\n");
    outputStream.print("\r\n");
    outputStream.flush();
    // Booleans to prevent duplicates, only need first occurrences of key strings
    boolean blankDetected = false;
    boolean lastModDetected = false;
    // Keep track of current index
    int count = 0;
    int byteCount = 0;
    // While loop to read response
    String buff = "";
    byte t;
    while ((t = (byte) inputStream.read()) != -1)
    {
        dataIn.add(t);
        // Check for key lines
        char x = (char) t;
        buff = buff + x;
        // For the first blank line (signaling the end of the header)
        if (x == '\n')
        {
            stringData.add(buff);
            if (buff.equals("\r\n") && !blankDetected)
            {
                blankDetected = true;
                blankIndex = count;
                byteBlankIndex = byteCount + 2;
            }
            // For the last modified line
            if (buff.contains("Last-Modified:") && !lastModDetected)
            {
                lastModDetected = true;
                lastModIndex = count;
            }
            buff = "";
            count++;
        }
        // Increment count
        byteCount++;
    }
}
The code to parse through the response and write the file to disk:
String catalogKey = hostString + "/" + pathName;
// Get the directory sequence to make
String directoryPath = catalogKey.substring(0, catalogKey.lastIndexOf("/") + 1);
// Make the directory sequence if possible, ignore the boolean value that results
boolean ignoreThisBooleanVal = new File(directoryPath).mkdirs();
// Setup output file, and then write the contents of dataIn (excluding header) to the file
PrintWriter output = new PrintWriter(new FileWriter(new File(catalogKey)), true);
for (int i = byteBlankIndex + 1; i < dataIn.size(); i++)
{
    output.print(new String(new byte[]{ (byte) dataIn.get(i) }, StandardCharsets.UTF_8));
}
output.close();
byte t;
while ( (t = (byte) inputStream.read()) != -1)
The problem is here. It should read:
int t;
while ( (t = inputStream.read()) != -1)
{
    byte b = (byte) t;
    // use b from now on in the loop.
The issue is that a byte of 0xff in the input will be returned to the int as 0xff, but to the byte as -1, so you are unable to distinguish it from end of stream.
And you should use a FileOutputStream, not a FileWriter, and you should not accumulate potentially binary data into a String or StringBuffer or anything to do with char. As soon as you've got to the end of the header you should open a FileOutputStream and just start copying bytes. Use buffered streams to make all this more efficient.
Not much point in any of these given that HttpURLConnection already exists.
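That said, to illustrate the FileOutputStream approach described above, here is a minimal sketch; it assumes the header has already been consumed from the socket's InputStream, and the parameter names simply mirror the question:
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BodySaver {
    // Copy the remaining (binary) body bytes straight to disk, with no char conversion.
    public static void saveBody(InputStream inputStream, String catalogKey) throws IOException {
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(catalogKey))) {
            byte[] buffer = new byte[8192]; // arbitrary buffer size
            int len;
            while ((len = inputStream.read(buffer)) != -1) {
                out.write(buffer, 0, len); // write exactly the bytes read
            }
        }
    }
}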

How to read large text files efficiently in Java

Here, I am reading an 18 MB file and storing it in a two-dimensional array. But this program takes almost 15 minutes to run. Is there any way to optimize the running time of the program? The file contains only binary values. Thanks in advance.
public class test
{
    public static void main(String[] args) throws FileNotFoundException, IOException
    {
        BufferedReader br;
        FileReader fr = null;
        int m = 2160;
        int n = 4320;
        int[][] lof = new int[n][m];
        String filename = "D:/New Folder/ETOPOCHAR";
        try {
            Scanner input = new Scanner(new File("D:/New Folder/ETOPOCHAR"));
            double range_km = 1.0;
            double alonn = -57.07; // 180 to 180
            double alat = 38.53;
            while (input.hasNextLine()) {
                for (int i = 0; i < m; i++) {
                    for (int j = 0; j < n; j++) {
                        try
                        {
                            lof[j][i] = input.nextInt();
                            System.out.println("value[" + j + "][" + i + "] = " + lof[j][i]);
                        }
                        catch (java.util.NoSuchElementException e) {
                            // e.printStackTrace();
                        }
                    }
                } // print the input matrix
            }
I have also tried with a byte array, but I cannot save it in a two-dimensional array...
public class FileToArrayOfBytes
{
    public static void main(String[] args)
    {
        FileInputStream fileInputStream = null;
        File file = new File("name of file");
        byte[] bFile = new byte[(int) file.length()];
        try {
            // convert file into array of bytes
            fileInputStream = new FileInputStream(file);
            fileInputStream.read(bFile);
            fileInputStream.close();
            for (int i = 0; i < bFile.length; i++) {
                System.out.print((char) bFile[i]);
            }
            System.out.println("Done");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
You can read the file into a byte array first, then deserialize these bytes. Start with a 2048-byte input buffer, then experiment by increasing/decreasing its size, but the experimental buffer-size values should be a power of two (512, 1024, 2048, etc.).
As far as I remember, there is a good chance that the best performance can be achieved with a buffer of 2048 bytes, but it is OS-dependent and should be verified.
Code sample (here you can try different values of the BUFFER_SIZE variable; in my case I read a test file of 7.5 MB in less than one second):
public static void main(String... args) throws IOException {
    File f = new File(args[0]);
    byte[] buffer = new byte[BUFFER_SIZE];
    ByteBuffer result = ByteBuffer.allocateDirect((int) f.length());
    try (FileInputStream fos = new FileInputStream(f)) {
        int bytesRead;
        int totalBytesRead = 0;
        while ((bytesRead = fos.read(buffer, 0, BUFFER_SIZE)) != -1) {
            result.put(buffer, 0, bytesRead);
            totalBytesRead += bytesRead;
        }
        // debug info
        System.out.printf("Read %d bytes\n", totalBytesRead);
        // Here you can do whatever you want with the result, including creation of a 2D array...
        int pos = result.position();
        result.rewind();
        for (int i = 0; i < pos / 4; i++) {
            System.out.println(result.getInt());
        }
    }
}
Take your time and read the docs for the java.io and java.nio packages, as well as the Scanner class, to improve your understanding.
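Building on the ByteBuffer approach above, a minimal sketch of filling the question's 2D array could look like this; it assumes the file holds raw 2-byte big-endian samples (2160 × 4320 × 2 bytes is roughly 18 MB), so adjust getShort() and the byte order to the actual format:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class GridReader {
    public static void main(String[] args) throws IOException {
        int m = 2160;
        int n = 4320;
        int[][] lof = new int[n][m];
        try (FileChannel ch = FileChannel.open(Paths.get("D:/New Folder/ETOPOCHAR"),
                                               StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
            while (buf.hasRemaining() && ch.read(buf) != -1) {
                // keep reading until the whole file is in the buffer
            }
            buf.flip();
            for (int i = 0; i < m; i++) {
                for (int j = 0; j < n; j++) {
                    lof[j][i] = buf.getShort(); // one 2-byte sample per cell
                }
            }
        }
        System.out.println("value[0][0] = " + lof[0][0]);
    }
}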

How to determine which specific line of a file is being read

Scanner scans = new Scanner(System.in);
System.out.print("Enter filename: ");
String thisfile = scans.nextLine();
File thatfile = new File(thisfile);
FileInputStream fileInput = new FileInputStream(thatfile);
int i;
while ((i = fileInput.read()) != -1) {
    char a = (char) i;
}
I'm using the code above to get a file (a Java program) and search it character by character. How can I determine which line a certain character is on? For example, if this was the program:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World");
    }
}
If I were on the S of System, how would I be able to correctly determine that it is on line 3, using code? Sorry if I'm not clear, but it's hard to explain.
What about enhancing your loop like so:
char newline_character = '\n'; // assuming a standard text file; adjust if your lines end differently
int line = 1;                  // start at 1 so the first line counts as line 1
while ((i = fileInput.read()) != -1) {
    char a = (char) i;
    if (a == newline_character) { ++line; }
}
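For context, a self-contained sketch along those lines (the file name and the character being searched for are placeholders for illustration):
import java.io.FileInputStream;
import java.io.IOException;

public class LineTracker {
    public static void main(String[] args) throws IOException {
        int line = 1; // current line number, counted from 1
        try (FileInputStream fileInput = new FileInputStream("HelloWorld.java")) {
            int i;
            while ((i = fileInput.read()) != -1) {
                char a = (char) i;
                if (a == 'S') {
                    System.out.println("Found 'S' on line " + line);
                }
                if (a == '\n') {
                    line++;
                }
            }
        }
    }
}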

Storing a large binary file

Are there any ways to store a large binary file, for example 50 MB, in ten files of 5 MB each?
Thanks.
Are there any special classes for doing this?
Use a FileInputStream to read the file and a FileOutputStream to write it.
Here is a simple (incomplete) example (missing error handling; writes 1 KB chunks):
public static int split(File file, String name, int size) throws IOException {
    FileInputStream input = new FileInputStream(file);
    FileOutputStream output = null;
    byte[] buffer = new byte[1024];
    int count = 0;
    boolean done = false;
    while (!done) {
        output = new FileOutputStream(String.format(name, count));
        count += 1;
        for (int written = 0; written < size; ) {
            int len = input.read(buffer);
            if (len == -1) {
                done = true;
                break;
            }
            output.write(buffer, 0, len);
            written += len;
        }
        output.close();
    }
    input.close();
    return count;
}
and called like
File input = new File("C:/data/in.gz");
String name = "C:/data/in.gz.part%02d"; // %02d will be replaced by the segment number
split(input, name, 5000 * 1024);
Yes, there are. Basically, just count the bytes you write to the file, and once that count hits a certain limit, stop writing, reset the counter, and continue writing to another file, using a filename pattern that lets you correlate the files with each other. You can do that in a loop. You can learn here how to write to files in Java, and for the rest just apply primary-school maths.
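A minimal sketch of that counting approach, with the 5 MB part size from the question and file names chosen here purely for illustration:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ByteCountSplitter {
    public static void main(String[] args) throws IOException {
        final long partSize = 5L * 1024 * 1024; // 5 MB per part
        int part = 0;
        long writtenInPart = 0;
        try (InputStream in = new BufferedInputStream(new FileInputStream("big.bin"))) {
            OutputStream out = new BufferedOutputStream(new FileOutputStream("big.bin.part" + part));
            int b;
            while ((b = in.read()) != -1) {
                if (writtenInPart == partSize) {
                    // current part is full: close it, reset the counter, open the next part
                    out.close();
                    part++;
                    writtenInPart = 0;
                    out = new BufferedOutputStream(new FileOutputStream("big.bin.part" + part));
                }
                out.write(b);
                writtenInPart++;
            }
            out.close();
        }
    }
}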
