I have a StringBuilder (~1GB in size) which converts to an almost 1GB String. That string is then sent to the AWS S3 client to be put as a file on S3. To write the String to S3, I have to further convert it to a ByteArrayInputStream, which takes another 1GB.
InputStream dataInputStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8));
amazonS3Client.putObject(bucket, key, dataInputStream, new ObjectMetadata());
Now I end up with a JVM heap of 3GB. Is there a way to send the 1GB StringBuilder directly to S3 with only a little over 1GB of heap?
The only way I could think of is to write the StringBuilder to a File (using FileWriter) and then use the S3 API that takes a File as input. But that approach costs two rounds of I/O to the SSD (one write, one read) -- a penalty that I want to avoid.
Any better way to handle the problem is welcome.
[edit 1]
(as per comments) The builder size changed from 5GB to 1GB. I wanted to convey that the builder is a huge one (and hence used 5GB previously, which was my mistake).
The string-oriented classes are locked down with final in many cases, which means that buffers already in memory often have to be duplicated. This is probably for security reasons but is nevertheless wasteful and frustrating. You should be able to do something with the following if you have a StringBuilder. Obviously you'll need a much bigger buffer than in the illustrative main method.
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.util.Objects;
/**
 *
 * @author CEHJ
 * @version 1.0
 */
public class CharSequenceInputStream extends InputStream {
private CharSequence charSequence;
private int position;
private CharsetEncoder enc;
private Charset outputEncoding;
private CharBuffer cb;
private ByteBuffer bb;
public CharSequenceInputStream(CharSequence charSequence) {
this(charSequence, Charset.forName(System.getProperty("native.encoding", Charset.defaultCharset().name())));
}
public CharSequenceInputStream(CharSequence charSequence, Charset outputEncoding) {
this.charSequence = charSequence;
this.outputEncoding = outputEncoding;
position = 0;
enc = outputEncoding.newEncoder();
cb = CharBuffer.allocate(1);
bb = ByteBuffer.allocate(8);
bb.limit(0); // nothing encoded yet; guards against reading stale bytes from an empty CharSequence
}
// For testing only
public static void main(String[] args) throws IOException {
if (args.length < 2) {
System.err.printf("Usage: java CharSequenceInputStream <String to be stored> <Output file> [Char encoding for output]%n");
System.exit(1);
}
String content = args[0];
String outputPath = args[1];
StringBuilder sb = new StringBuilder(content);
CharSequenceInputStream in = null;
if (args.length > 2) {
in = new CharSequenceInputStream(sb, Charset.forName(args[2]));
} else {
in = new CharSequenceInputStream(sb);
}
try (OutputStream out = new FileOutputStream(outputPath)) {
int bytesRead = -1;
final int BUF_SIZE = 8;
byte[] buf = new byte[BUF_SIZE];
while ((bytesRead = in.read(buf, 0, BUF_SIZE)) > -1) {
out.write(buf, 0, bytesRead);
}
}
}
@Override
public int read() throws IOException {
int result = -1;
if (position < charSequence.length()) {
if (bb.remaining() == bb.capacity() || bb.position() == bb.limit()) {
// At start - nothing yet has been decoded
// OR - we've read all bytes in the buffer and need to refill it
cb.clear();
bb.clear();
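// Note: encoding one char at a time means surrogate pairs (supplementary
// characters) cannot be encoded correctly, and the CoderResult returned by
// encode() is ignored here.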
char currentChar = charSequence.charAt(position++);
cb.append(currentChar);
cb.flip();
enc.reset();
enc.encode(cb, bb, false);
enc.encode(cb, bb, true);
enc.flush(bb);
bb.flip();
}
}
if (bb.position() < bb.limit()) {
result = bb.get() & 0xFF;
}
return result;
}
@Override
public int read(byte[] b) throws IOException {
return read(b, 0, b.length);
}
@Override
public int read(byte b[], int off, int len) throws IOException {
Objects.checkFromIndexSize(off, len, b.length);
if (len == 0) {
return 0;
}
int c = read();
if (c == -1) {
return -1;
}
b[off] = (byte) c;
int i = 1;
try {
for (; i < len; i++) {
c = read();
if (c == -1) {
break;
}
b[off + i] = (byte) c;
}
} catch (IOException ee) {
// swallowed deliberately, as in InputStream's own implementation:
// the bytes already copied into b are still returned below
}
return i;
}
@Override
public void close() {
// NO OP
}
}
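With this class in hand, here is a sketch of how the upload could avoid the extra String and byte[] copies. The bucket, key, sb and amazonS3Client names are the question's own; note that when no Content-Length is set, the AWS SDK may buffer or chunk the stream itself, so check this against your SDK version.

// Sketch: stream the StringBuilder straight into putObject.
ObjectMetadata metadata = new ObjectMetadata();
// metadata.setContentLength(encodedLength); // only if the UTF-8 byte length is known up front
InputStream body = new CharSequenceInputStream(sb, StandardCharsets.UTF_8);
amazonS3Client.putObject(bucket, key, body, metadata);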
Related
I have a lot of massive files I need to convert to CSV by replacing certain characters.
I am looking for a reliable approach that, given an InputStream, returns an OutputStream and replaces all occurrences of character c1 with c2.
The trick here is to read and write in parallel; I can't fit a whole file in memory.
Do I need to run it in a separate thread if I want to read and write at the same time?
Thanks a lot for your advice.
To copy data from an input stream to an output stream, you write data while you're reading it, either a byte (or character) at a time or a line at a time.
Here is an example that copies a file, converting all 'x' characters to 'y'.
BufferedInputStream in = new BufferedInputStream(new FileInputStream("input.dat"));
BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream("output.dat"));
int ch;
while ((ch = in.read()) != -1) {
    if (ch == 'x') ch = 'y';
    out.write(ch);
}
out.close();
in.close();
Or, if you can use a Reader and process a line at a time, then you can use this approach:
BufferedReader reader = new BufferedReader(new FileReader("input.dat"));
PrintWriter writer = new PrintWriter(
        new BufferedOutputStream(new FileOutputStream("output.dat")));
String str;
while ((str = reader.readLine()) != null) {
    str = str.replace('x', 'y');     // replace a character at a time
    str = str.replace("abc", "ABC"); // replace a string sequence
    writer.println(str);
}
writer.close();
reader.close();
BufferedInputStream and BufferedReader read ahead and keep 8K of bytes (or characters, respectively) in a buffer for performance. Very large files can be processed while only keeping 8K of them in memory at a time.
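If the 8K default is too small for your workload, both classes accept an explicit buffer size; for example (the 64K value is arbitrary):

BufferedInputStream in = new BufferedInputStream(
        new FileInputStream("input.dat"), 64 * 1024); // 64K instead of the 8K default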
FileWriter writer = new FileWriter("Report.csv");
BufferedReader reader = new BufferedReader(new InputStreamReader(YOURSOURCE, StandardCharsets.UTF_8));
String line;
while ((line = reader.readLine()) != null) {
    line = line.replace(c1, c2); // replace() returns a new String; the result must be reassigned
    writer.append(line);
    writer.append('\n');
}
writer.flush();
writer.close();
You can find a related answer here: Filter (search and replace) array of bytes in an InputStream
I took @aioobe's answer in that thread and built a replacing input stream module in Java, which you can find in my GitHub gist: https://gist.github.com/lhr0909/e6ac2d6dd6752871eb57c4b083799947
Putting the source code here as well:
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Queue;
/**
* Created by simon on 8/29/17.
*/
public class ReplacingInputStream extends FilterInputStream {
private Queue<Integer> inQueue, outQueue;
private final byte[] search, replacement;
public ReplacingInputStream(InputStream in, String search, String replacement) {
super(in);
this.inQueue = new LinkedList<>();
this.outQueue = new LinkedList<>();
this.search = search.getBytes();
this.replacement = replacement.getBytes();
}
private boolean isMatchFound() {
Iterator<Integer> iterator = inQueue.iterator();
for (byte b : search) {
if (!iterator.hasNext() || b != iterator.next()) {
return false;
}
}
return true;
}
private void readAhead() throws IOException {
// Work up some look-ahead.
while (inQueue.size() < search.length) {
int next = super.read();
inQueue.offer(next);
if (next == -1) {
break;
}
}
}
@Override
public int read() throws IOException {
// Next byte already determined.
while (outQueue.isEmpty()) {
readAhead();
if (isMatchFound()) {
for (byte a : search) {
inQueue.remove();
}
for (byte b : replacement) {
outQueue.offer((int) b);
}
} else {
outQueue.add(inQueue.remove());
}
}
return outQueue.remove();
}
@Override
public int read(byte b[]) throws IOException {
return read(b, 0, b.length);
}
// copied straight from the InputStream implementation; just needed it to use read() from this class
@Override
public int read(byte b[], int off, int len) throws IOException {
if (b == null) {
throw new NullPointerException();
} else if (off < 0 || len < 0 || len > b.length - off) {
throw new IndexOutOfBoundsException();
} else if (len == 0) {
return 0;
}
int c = read();
if (c == -1) {
return -1;
}
b[off] = (byte)c;
int i = 1;
try {
for (; i < len ; i++) {
c = read();
if (c == -1) {
break;
}
b[off + i] = (byte)c;
}
} catch (IOException ee) {
// swallowed deliberately, as in the InputStream implementation this was copied from
}
return i;
}
}
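A minimal usage sketch (file names are placeholders): wrap the source stream and copy through it. Note that no second thread is needed; a single loop reads and writes alternately.

try (InputStream in = new ReplacingInputStream(
             new BufferedInputStream(new FileInputStream("input.dat")), "c1", "c2");
     OutputStream out = new BufferedOutputStream(new FileOutputStream("output.csv"))) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
    }
}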
How can I efficiently determine the position of the last newline in a specific part of a file?
e.g. I tried this
BufferedReader br = new BufferedReader(new FileReader(file));
long length = file.length();
String line = null;
int tailLength = 0;
while ((line = br.readLine()) != null) {
System.out.println(line);
tailLength = line.getBytes().length;
}
long returnValue = length - tailLength;
but this will only return the position of the very last newline in the whole file, and not the last newline in a section of the file. The section would be indicated by an int start; and an int end;
I think the most efficient approach is to start from the end of the file and read it in chunks. Then search each chunk backwards for the newline that starts the last line. For example:
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
public class FileUtils {
static final int CHUNK_SIZE = 8 * 1024;
public static long getLastLinePosition(Path path) throws IOException {
try (FileChannel inChannel = FileChannel.open(path, StandardOpenOption.READ);
@SuppressWarnings("unused")
FileLock lock = inChannel.tryLock(0, Long.MAX_VALUE, true)) {
long fileSize = inChannel.size();
long mark = fileSize;
long position;
boolean ignoreCR = false;
while (mark > 0) {
position = Math.max(0, mark - CHUNK_SIZE);
MappedByteBuffer mbb = inChannel.map(FileChannel.MapMode.READ_ONLY, position, Math.min(mark, CHUNK_SIZE));
byte[] bytes = new byte[mbb.remaining()];
mbb.get(bytes);
for (int i = bytes.length - 1; i >= 0; i--, mark--) {
switch (bytes[i]) {
case '\n':
if (mark < fileSize) {
return mark;
}
ignoreCR = true;
break;
case '\r':
if (ignoreCR) {
ignoreCR = false;
} else if (mark < fileSize) {
return mark;
}
break;
}
}
mark = position;
}
}
return 0;
}
}
test file :
abc\r\n
1234\r\n
def\r\n
output : 11
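A hypothetical call, using the test file above (assumes java.nio.file.Paths is imported):

long pos = FileUtils.getLastLinePosition(Paths.get("test.txt"));
System.out.println(pos); // prints 11: the byte offset where the last line ("def\r\n") starts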
learn more about java.nio.channels.FileChannel and java.nio.MappedByteBuffer :
http://tutorials.jenkov.com/java-nio/file-channel.html
https://examples.javacodegeeks.com/core-java/nio/filechannel/java-nio-channels-filechannel-example/
https://examples.javacodegeeks.com/core-java/nio/mappedbytebuffer/java-mappedbytebuffer-example/
http://tutorials.techmytalk.com/2014/11/05/java-nio-memory-mapped-files/
http://javarevisited.blogspot.nl/2012/01/memorymapped-file-and-io-in-java.html
EDIT :
if you are using Java 6, apply these changes to the above code :
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
public class FileUtils {
static final int CHUNK_SIZE = 8 * 1024;
public static long getLastLinePosition(String name) throws IOException {
FileChannel inChannel = null;
FileLock lock = null;
try {
inChannel = new RandomAccessFile(name, "r").getChannel();
lock = inChannel.tryLock(0, Long.MAX_VALUE, true);
// ...
} finally {
if (lock != null) {
lock.release();
}
if (inChannel != null) {
inChannel.close();
}
}
return 0;
}
}
Tips on choosing ideal buffer size :
https://stackoverflow.com/a/237495/3767784
https://stackoverflow.com/a/4638989/3767784
https://stackoverflow.com/a/19007819/3767784
Unfortunately you can't. I had to use RandomAccessFile, which has a getFilePointer() method you can call after readLine(), but it is VERY SLOW and not UTF-8-aware.
I ended up implementing my own byte-counting line reader.
Your naive solution will fail horribly when facing files with Unicode, malformed, or binary contents.
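For reference, a minimal sketch of such a byte-counting line reader (the class and method names are mine, not the answerer's; it assumes UTF-8 input with '\n' or '\r\n' terminators):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ByteCountingLineReader {
    private final InputStream in;
    private long offset = 0; // bytes consumed so far

    public ByteCountingLineReader(InputStream in) {
        this.in = in;
    }

    // Byte offset of the start of the next unread line.
    public long offset() {
        return offset;
    }

    // Returns the next line without its terminator, or null at EOF.
    public String readLine() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int c = in.read();
        if (c == -1) return null;
        while (c != -1 && c != '\n') {
            buf.write(c);
            offset++;
            c = in.read();
        }
        if (c == '\n') offset++; // count the terminator as consumed
        byte[] bytes = buf.toByteArray();
        int len = bytes.length;
        if (len > 0 && bytes[len - 1] == '\r') len--; // drop the CR of a CRLF
        return new String(bytes, 0, len, StandardCharsets.UTF_8); // decode whole lines only
    }
}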
Sorry for my English. I am trying to read a really big text file character by character (not using readLine()) as fast as possible, but have not managed it yet. My code:
for (int i = 0; (i = textReader.read()) != -1; ) {
    char character = (char) i;
}
It reads a 1GB text file in 56666 ms. How can I read faster?
Update:
This method reads the 1GB file in 28833 ms:
FileInputStream fIn = null;
FileChannel fChan = null;
ByteBuffer mBuf;
int count;
try {
    fIn = new FileInputStream(textReader);
    fChan = fIn.getChannel();
    mBuf = ByteBuffer.allocate(128);
    do {
        count = fChan.read(mBuf);
        if (count != -1) {
            mBuf.rewind();
            for (int i = 0; i < count; i++) {
                char c = (char) mBuf.get();
            }
        }
    } while (count != -1);
} catch (Exception e) {
}
The fastest way to read input is to use a buffer. Here is an example of a class that has an internal buffer.
class Parser
{
final private int BUFFER_SIZE = 1 << 16;
private DataInputStream din;
private byte[] buffer;
private int bufferPointer, bytesRead;
public Parser(InputStream in)
{
din = new DataInputStream(in);
buffer = new byte[BUFFER_SIZE];
bufferPointer = bytesRead = 0;
}
public int nextInt() throws Exception
{
int ret = 0;
byte c = read();
while (c <= ' ') c = read();
//boolean neg = c == '-';
//if (neg) c = read();
do
{
ret = ret * 10 + c - '0';
c = read();
} while (c > ' ');
//if (neg) return -ret;
return ret;
}
private void fillBuffer() throws Exception
{
bytesRead = din.read(buffer, bufferPointer = 0, BUFFER_SIZE);
if (bytesRead == -1) buffer[0] = -1;
}
public byte read() throws Exception // made public: the usage below calls p.read() directly
{
if (bufferPointer == bytesRead) fillBuffer();
return buffer[bufferPointer++];
}
}
This parser has a function that will give you the next int; if you want the next char you can call the read() function (note that read() must be public for that, as above).
This is the fastest way to read from a file (as far as I know).
You would initialize this parser like this:
Parser p = new Parser(new FileInputStream("text.txt"));
int c;
while((c = p.read()) != -1)
System.out.print((char)c);
This code reads 250 MB in 7782 ms.
Disclaimer:
the code is not mine, it has been posted as a solution to a problem on CodeChef by the user 'Kamalakannan CM'
I would use BufferedReader, which reads into a buffer. A short sample:
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.nio.CharBuffer;
public class Main {
public static void main(String... args) {
try (FileReader fr = new FileReader("a.txt")) {
try (BufferedReader reader = new BufferedReader(fr)) {
CharBuffer charBuffer = CharBuffer.allocate(8192);
reader.read(charBuffer);
} catch (IOException e) {
e.printStackTrace();
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
The default constructor uses a default buffer size of 8192. In case you want a different buffer size you can use the constructor that takes one. Alternatively you can read into an array buffer:
....
char[] buffer = new char[255];
reader.read(buffer);
....
or read one character at a time:
int ch = reader.read(); // 'char' is a reserved word, so the variable needs another name
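Putting both together, a small sketch with an explicit buffer size and a read loop that checks the return value (the file name and sizes are illustrative):

try (BufferedReader reader = new BufferedReader(new FileReader("a.txt"), 1 << 16)) {
    char[] buffer = new char[8192];
    int n;
    while ((n = reader.read(buffer)) != -1) {
        // process buffer[0..n)
    }
}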
I have a byte array which is filled by a serial port event and code is shown below:
private InputStream input = null;
......
......
public void serialEvent(SerialPortEvent se) {
    if (se.getEventType() == SerialPortEvent.DATA_AVAILABLE) {
        int length = input.available();
        if (length > 0) {
            byte[] array = new byte[length];
            int numBytes = input.read(array);
            String text = new String(array);
        }
    }
}
The variable text contains the below characters,
"\033[K", "\033[m", "\033[H2J", "\033[6;1H" ,"\033[?12l", "\033[?25h", "\033[5i", "\033[4i", "\033i" and similar types..
As of now, I use String.replace to remove all these sequences from the string.
I have tried new String(array, charset) with every Charset option, but I wasn't able to remove them that way.
Is there any way to remove those characters without using the replace method?
I gave an unsatisfying answer at first; thanks to @OlegEstekhin for pointing that out.
As no one else has answered yet, and a solution is not a two-liner, here it goes.
Make a wrapping InputStream that throws away escape sequences. I have used a PushbackInputStream, where a skipped partial sequence may still be pushed back to be read first. A FilterInputStream would also suffice here.
public class EscapeRemovingInputStream extends PushbackInputStream {
public static void main(String[] args) {
String s = "\u001B[kHello \u001B[H12JWorld!";
byte[] buf = s.getBytes(StandardCharsets.ISO_8859_1);
ByteArrayInputStream bais = new ByteArrayInputStream(buf);
EscapeRemovingInputStream bin = new EscapeRemovingInputStream(bais);
try (InputStreamReader in = new InputStreamReader(bin,
StandardCharsets.ISO_8859_1)) {
int c;
while ((c = in.read()) != -1) {
System.out.print((char) c);
}
System.out.println();
} catch (IOException ex) {
Logger.getLogger(EscapeRemovingInputStream.class.getName()).log(
Level.SEVERE, null, ex);
}
}
private static final Pattern ESCAPE_PATTERN = Pattern.compile(
"\u001B\\[(k|m|H\\d+J|\\d+;\\d+H|\\?\\d+\\w|\\d*i)"); // ';' so sequences like \033[6;1H match
private static final int MAX_ESCAPE_LENGTH = 20;
private final byte[] escapeSequence = new byte[MAX_ESCAPE_LENGTH];
private int escapeLength = 0;
private boolean eof = false;
public EscapeRemovingInputStream(InputStream in) {
super(in, MAX_ESCAPE_LENGTH); // push-back capacity for a partial escape sequence
}
@Override
public int read(byte[] b, int off, int len) throws IOException {
for (int i = 0; i < len; ++i) {
int c = read();
if (c == -1) {
return i == 0 ? -1 : i;
}
b[off + i] = (byte) c;
}
return len;
}
@Override
public int read() throws IOException {
int c = eof ? -1 : super.read();
if (c == -1) { // Throw away a trailing half escape sequence.
eof = true;
return c;
}
if (escapeLength == 0 && c != 0x1B) {
return c;
} else {
escapeSequence[escapeLength] = (byte) c;
++escapeLength;
String esc = new String(escapeSequence, 0, escapeLength,
StandardCharsets.ISO_8859_1);
if (ESCAPE_PATTERN.matcher(esc).matches()) {
escapeLength = 0;
} else if (escapeLength == MAX_ESCAPE_LENGTH) {
escapeLength = 0;
unread(escapeSequence);
return super.read(); // No longer registering the escape
}
return read();
}
}
}
The user calls EscapeRemovingInputStream.read.
That read may itself call further reads to fill the byte buffer escapeSequence
(a push-back may be done by calling unread).
The original read then returns.
The recognition of an escape sequence seems grammatical: command letter, numerical argument(s). Hence I use a regular expression.
I am writing an FLV parser in Java and have come up against an issue. The program successfully parses and groups together tags into packets and correctly identifies and assigns a byte array for each tag's body based upon the BodyLength flag in the header. However in my test files it successfully completes this but stops before the last 4 bytes.
The byte sequence left out in the first file is :
00 00 14 C3
And in the second:
00 00 01 46
Clearly it is an issue with the final 4 bytes of both files; however, I cannot spot the error in my logic. I suspect it might be:
while (in.available() != 0)
However, I also doubt this is the case, as the program successfully enters the loop for the final tag but then stops 4 bytes short. Any help is greatly appreciated. (I know proper exception handling is not yet in place.)
Parser.java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.Array;
import java.net.URI;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.InputMismatchException;
/**
 *
 * @author A
 *
 * Parser class for FLV files
 */
public class Parser {
private static final int HEAD_SIZE = 9;
private static final int TAG_HEAD_SIZE = 15;
private static final byte[] FLVHEAD = { 0x46, 0x4C, 0x56 };
private static final byte AUDIO = 0x08;
private static final byte VIDEO = 0x09;
private static final byte DATA = 0x12;
private static final int TYPE_INDEX = 4;
private File file;
private FileInputStream in;
private ArrayList<Packet> packets;
private byte[] header = new byte[HEAD_SIZE];
Parser() throws FileNotFoundException {
throw new FileNotFoundException();
}
Parser(URI uri) {
file = new File(uri);
init();
}
Parser(File file) {
this.file = file;
init();
}
private void init() {
packets = new ArrayList<Packet>();
}
public void parse() {
boolean test = false;
try {
test = parseHeader();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
if (test) {
System.out.println("Header Verified");
// Add header packet to beginning of list & then null packet
Packet p = new Packet(PTYPE.P_HEAD);
p.setSize(header.length);
p.setByteArr(header);
packets.add(p);
p = null;
try {
parseTags();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
} else {
try {
in.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
// throw FileNotFoundException because incorrect file
}
}
private boolean parseHeader() throws FileNotFoundException, IOException {
if (file == null)
throw new FileNotFoundException();
in = new FileInputStream(file);
in.read(header, 0, 9);
return Arrays.equals(FLVHEAD, Arrays.copyOf(header, FLVHEAD.length));
}
private void parseTags() throws IOException {
if (file == null)
throw new FileNotFoundException();
byte[] tagHeader = new byte[TAG_HEAD_SIZE];
Arrays.fill(tagHeader, (byte) 0x00);
byte[] body;
byte[] buf;
PTYPE pt;
int OFFSET = 0;
while (in.available() != 0) {
// Read first 5 - bytes, previous tag size + tag type
in.read(tagHeader, 0, 5);
if (tagHeader[TYPE_INDEX] == AUDIO) {
pt = PTYPE.P_AUD;
} else if (tagHeader[TYPE_INDEX] == VIDEO) {
pt = PTYPE.P_VID;
} else if (tagHeader[TYPE_INDEX] == DATA) {
pt = PTYPE.P_DAT;
} else {
// Header should've been dealt with - if previous data types not
// found then throw exception
System.out.println("Unexpected header format: ");
System.out.print(String.format("%02x\n", tagHeader[TYPE_INDEX]));
System.out.println("Last Tag");
packets.get(packets.size()-1).diag();
System.out.println("Number of tags found: " + packets.size());
throw new InputMismatchException();
}
OFFSET = TYPE_INDEX;
// Read body size - 3 bytes
in.read(tagHeader, OFFSET + 1, 3);
// Body size buffer array - padding for 1 0x00 bytes
buf = new byte[4];
Arrays.fill(buf, (byte) 0x00);
// Fill size bytes
buf[1] = tagHeader[++OFFSET];
buf[2] = tagHeader[++OFFSET];
buf[3] = tagHeader[++OFFSET];
// Calculate body size
int bSize = ByteBuffer.wrap(buf).order(ByteOrder.BIG_ENDIAN)
.getInt();
// Initialise Array
body = new byte[bSize];
// Timestamp
in.read(tagHeader, ++OFFSET, 3);
Arrays.fill(buf, (byte) 0x00);
// Fill size bytes
buf[1] = tagHeader[OFFSET++];
buf[2] = tagHeader[OFFSET++];
buf[3] = tagHeader[OFFSET++];
int milliseconds = ByteBuffer.wrap(buf).order(ByteOrder.BIG_ENDIAN)
.getInt();
// Read padding
in.read(tagHeader, OFFSET, 4);
// Read body
in.read(body, 0, bSize);
// Diagnostics
//printBytes(body);
Packet p = new Packet(pt);
p.setSize(tagHeader.length + body.length);
p.setByteArr(concat(tagHeader, body));
p.setMilli(milliseconds);
packets.add(p);
p = null;
// Zero out for next iteration
body = null;
Arrays.fill(buf, (byte)0x00);
Arrays.fill(tagHeader, (byte)0x00);
milliseconds = 0;
bSize = 0;
OFFSET = 0;
}
in.close();
}
private byte[] concat(byte[] tagHeader, byte[] body) {
int aLen = tagHeader.length;
int bLen = body.length;
byte[] C = (byte[]) Array.newInstance(tagHeader.getClass()
.getComponentType(), aLen + bLen);
System.arraycopy(tagHeader, 0, C, 0, aLen);
System.arraycopy(body, 0, C, aLen, bLen);
return C;
}
private void printBytes(byte[] b) {
System.out.println("\n--------------------");
for (int i = 0; i < b.length; i++) {
System.out.print(String.format("%02x ", b[i]));
if (((i % 8) == 0 ) && i != 0)
System.out.println();
}
}
}
Packet.java
public class Packet {
private PTYPE type = null;
byte[] buf;
int milliseconds;
Packet(PTYPE t) {
this.setType(t);
}
public void setSize(int s) {
buf = new byte[s];
}
public PTYPE getType() {
return type;
}
public void setType(PTYPE type) {
if (this.type == null)
this.type = type;
}
public void setByteArr(byte[] b) {
this.buf = b;
}
public void setMilli(int milliseconds) {
this.milliseconds = milliseconds;
}
public void diag(){
System.out.println("|-- Tag Type: " + type);
System.out.println("|-- Milliseconds: " + milliseconds);
System.out.println("|-- Size: " + buf.length);
System.out.println("|-- Bytes: ");
for(int i = 0; i < buf.length; i++){
System.out.print(String.format("%02x ", buf[i]));
if (((i % 8) == 0 ) && i != 0)
System.out.println();
}
System.out.println();
}
}
jFLV.java
import java.net.URISyntaxException;
public class jFLV {
/**
* #param args
*/
public static void main(String[] args) {
// TODO Auto-generated method stub
Parser p = null;
try {
p = new Parser(jFLV.class.getResource("sample.flv").toURI());
} catch (URISyntaxException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
p.parse();
}
}
PTYPE.java
public enum PTYPE {
P_HEAD,P_VID,P_AUD,P_DAT
};
Both your use of available() and your call to read are broken. Admittedly I would have somewhat expected this to be okay for a FileInputStream (until you reach the end of the stream, at which point ignoring the return value for read could still be disastrous) but I personally assume that streams can always return partial data.
available() only tells you whether there's any data available right now. It's very rarely useful - just ignore it. If you want to read until the end of the stream, you should usually keep calling read until it returns -1. It's slightly tricky to combine that with "I'm trying to read the next block", admittedly. (It would be nice if InputStream had a peek() method, but it doesn't. You can wrap it in a BufferedInputStream and use mark/reset to test that at the start of each loop... ugly, but it should work.)
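For illustration, a sketch of that mark/reset "peek" (rawStream stands in for the FileInputStream):

BufferedInputStream in = new BufferedInputStream(rawStream);
while (true) {
    in.mark(1);            // remember the current position
    int next = in.read();  // try to read a single byte
    if (next == -1) {
        break;             // genuine end of stream
    }
    in.reset();            // rewind: the byte is still unconsumed
    // ... parse the next tag from 'in' ...
}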
Next, you're ignoring the result of InputStream.read (in multiple places). You should always use the result of this, rather than assuming it has read the amount of data you've asked for. You might want a couple of helper methods, e.g.
static byte[] readExactly(InputStream input, int size) throws IOException {
    byte[] data = new byte[size];
    readExactly(input, data);
    return data;
}
static void readExactly(InputStream input, byte[] data) throws IOException {
    int index = 0;
    while (index < data.length) {
        int bytesRead = input.read(data, index, data.length - index);
        if (bytesRead < 0) {
            throw new EOFException("Expected more data");
        }
        index += bytesRead; // advance past what this call actually returned
    }
}
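With those helpers, the unchecked reads in the question could become something like this (hypothetical, reusing the question's names):

byte[] header = readExactly(in, HEAD_SIZE); // the 9-byte FLV header, fully read
byte[] body = readExactly(in, bSize);       // exactly bSize bytes, or an EOFException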
You should use one of the read methods instead of available(), as available() "Returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream."
It is not designed to tell you how much is left to read.