My idea is to make a small program that reads a file (which can't be read "naturally", but contains some images), converts its data to hex, looks for the PNG chunks (markers that appear at the beginning and end of a .png file), and saves the resulting data into separate files (after converting it back from hex). I am doing this in Java, with code like this:
// out is where to show the result and file is the source
public static void hexDump(PrintStream out, File file) throws IOException {
    InputStream is = new FileInputStream(file);
    StringBuffer Buffer = new StringBuffer();
    while (is.available() > 0) {
        StringBuilder sb1 = new StringBuilder();
        for (int j = 0; j < 16; j++) {
            if (is.available() > 0) {
                int value = (int) is.read();
                // transform the current data into hex
                sb1.append(String.format("%02X ", value));
            }
        }
        Buffer.append(sb1);
        // Should I look for the PNG here? I'm not sure
    }
    is.close();
    // Print the result in out (that may be the console or a file)
    out.print(Buffer);
}
I'm sure there are other ways to do this that use fewer "machine resources" when opening huge files. If you have any ideas, please tell me. Thanks!
This is the first time I've posted, so if there are any errors, please help me correct them.
As Erwin Bolwidt says in the comments, the first thing is: don't convert to hex. If for some reason you must convert to hex, stop appending the content to two buffers, and always use StringBuilder, not StringBuffer. StringBuilder can be as much as 3x faster than StringBuffer.
Also, buffer your file reads with a BufferedInputStream. Reading one byte at a time with FileInputStream.read() is very slow.
A very simple way to do this, which is probably quite fast, is to read the entire file into memory (as binary data, not as a hex dump) and then search for the markers.
This has two limitations:
it only handles files up to 2 GiB in length (max size of Java arrays)
it requires large chunks of memory - it is possible to optimize this by reading smaller chunks, but that makes the algorithm more complex
The basic code to do that is like this:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class Png {
    static final String PNG_MARKER_HEX = "abcdef0123456789"; // TODO: replace with real marker (the PNG file signature is 89504E470D0A1A0A)
    static final byte[] PNG_MARKER = hexStringToByteArray(PNG_MARKER_HEX);

    public void splitPngChunks(File file) throws IOException {
        byte[] bytes = Files.readAllBytes(file.toPath());
        int offset = KMPMatch.indexOf(bytes, 0, PNG_MARKER);
        while (offset >= 0) {
            // continue the search past the current marker, or the loop never advances
            int nextOffset = KMPMatch.indexOf(bytes, offset + PNG_MARKER.length, PNG_MARKER);
            if (nextOffset < 0) {
                writePngChunk(bytes, offset, bytes.length - offset);
            } else {
                writePngChunk(bytes, offset, nextOffset - offset);
            }
            offset = nextOffset;
        }
    }

    public void writePngChunk(byte[] bytes, int offset, int length) {
        // TODO: implement - where do you want to write the chunks?
    }
}
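As a starting point for that TODO, here is a minimal sketch of writePngChunk that writes each chunk to a numbered file in the working directory; the chunk-N.png naming and the chunkCount field are my own assumptions, and throws IOException has to be added to the signature (splitPngChunks already declares it):

private int chunkCount = 0;

public void writePngChunk(byte[] bytes, int offset, int length) throws IOException {
    // hypothetical output naming: chunk-0.png, chunk-1.png, ...
    File target = new File("chunk-" + (chunkCount++) + ".png");
    try (java.io.FileOutputStream fos = new java.io.FileOutputStream(target)) {
        fos.write(bytes, offset, length);
    }
}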
I'm not sure how these PNG chunk markers work exactly; I'm assuming above that they start the section of the data that you're interested in, and that the next marker starts the next section of the data.
There are two things missing in standard Java: code to convert a hex string to a byte array, and code to search for a byte array inside another byte array.
Both can be found in various apache-commons libraries, but I'll include the answers that people posted to earlier questions on Stack Overflow. You can copy these verbatim into the Png class to make the above code work.
Convert a string representation of a hex dump to a byte array using Java?
public static byte[] hexStringToByteArray(String s) {
    int len = s.length();
    byte[] data = new byte[len / 2];
    for (int i = 0; i < len; i += 2) {
        data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i + 1), 16));
    }
    return data;
}
Searching for a sequence of Bytes in a Binary File with Java
/**
 * Knuth-Morris-Pratt Algorithm for Pattern Matching
 */
static class KMPMatch {
    /**
     * Finds the first occurrence of the pattern in the text.
     */
    public static int indexOf(byte[] data, int offset, byte[] pattern) {
        int[] failure = computeFailure(pattern);
        int j = 0;
        if (data.length - offset <= 0)
            return -1;
        for (int i = offset; i < data.length; i++) {
            while (j > 0 && pattern[j] != data[i]) {
                j = failure[j - 1];
            }
            if (pattern[j] == data[i]) {
                j++;
            }
            if (j == pattern.length) {
                return i - pattern.length + 1;
            }
        }
        return -1;
    }

    /**
     * Computes the failure function using a boot-strapping process, where the pattern is matched against itself.
     */
    private static int[] computeFailure(byte[] pattern) {
        int[] failure = new int[pattern.length];
        int j = 0;
        for (int i = 1; i < pattern.length; i++) {
            while (j > 0 && pattern[j] != pattern[i]) {
                j = failure[j - 1];
            }
            if (pattern[j] == pattern[i]) {
                j++;
            }
            failure[i] = j;
        }
        return failure;
    }
}
I modified this last piece of code to make it possible to start the search at an offset other than zero.
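Regarding the memory limitation mentioned above: instead of readAllBytes, the same KMPMatch can be run over fixed-size chunks, carrying PNG_MARKER.length - 1 bytes of overlap between reads so that a marker spanning a chunk border is still found. A rough sketch of the idea (my own addition, meant to sit next to KMPMatch in the Png class; the per-chunk copyOf keeps it simple at the cost of some garbage):

/** Sketch: find the first occurrence of pattern in a stream without loading
 *  the whole file. Keeps pattern.length - 1 bytes of overlap between chunks
 *  so matches spanning chunk borders are not missed. Returns a file offset
 *  or -1. */
public static long indexOfInStream(java.io.InputStream in, byte[] pattern, int chunkSize) throws IOException {
    byte[] buf = new byte[pattern.length - 1 + chunkSize];
    int kept = 0;   // overlap bytes carried over from the previous chunk
    long base = 0;  // file offset corresponding to buf[0]
    int n;
    while ((n = in.read(buf, kept, chunkSize)) > 0) {
        int valid = kept + n;
        int found = KMPMatch.indexOf(java.util.Arrays.copyOf(buf, valid), 0, pattern);
        if (found >= 0) {
            return base + found;
        }
        kept = Math.min(pattern.length - 1, valid);
        System.arraycopy(buf, valid - kept, buf, 0, kept);
        base += valid - kept;
    }
    return -1;
}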
Reading the file a byte at a time is what takes substantial time here, and you can improve that by orders of magnitude. You should be using a DataInputStream around a BufferedInputStream around the FileInputStream, and reading 16 bytes at a time with readFully.
Then process the bytes directly, without converting to and from hex, which is quite unnecessary here, and write them to the output(s) as you go, via a BufferedOutputStream around a FileOutputStream, rather than concatenating the entire file in memory and writing it all out in one go. That still takes time, of course, but only because the work itself does, not because you have to do it that way.
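A minimal sketch of that arrangement, assuming we just want to stream a file through in 16-byte blocks (the method name process and the handling of the final partial block are my own additions; readFully throws EOFException on a short read, so the tail has to be sized explicitly):

import java.io.*;

public static void process(File inFile, File outFile) throws IOException {
    try (DataInputStream in = new DataInputStream(
             new BufferedInputStream(new FileInputStream(inFile)));
         BufferedOutputStream out = new BufferedOutputStream(
             new FileOutputStream(outFile))) {
        byte[] block = new byte[16];
        long remaining = inFile.length();
        while (remaining >= block.length) {
            in.readFully(block);        // one buffered 16-byte read
            // ... examine/transform block here, without hex conversion ...
            out.write(block);
            remaining -= block.length;
        }
        int tail = (int) remaining;     // final partial block, if any
        if (tail > 0) {
            in.readFully(block, 0, tail);
            out.write(block, 0, tail);
        }
    }
}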
Related
I have been working with a Java program (developed by other people) for text-to-speech synthesis. The synthesis is done by concatenation of "di-phones". In the original version, there was no signal processing; the diphones were just collected and concatenated together to produce the output. In order to improve the output, I tried to perform "phase matching" of the concatenated speech signals. The modification I've made is summarized here:
Audio data is collected from the AudioInputStream into a byte array.
Since the audio data is 16 bit, I converted the byte array to a short array.
The "signal processing" is done on the short array.
To output the audio data, the short array is converted back to a byte array.
Here's the part of the code that I've changed in the existing program:
Audio Input
This segment is called for every diphone.
Original Version
audioInputStream = AudioSystem.getAudioInputStream(sound);
while ((cnt = audioInputStream.read(byteBuffer, 0, byteBuffer.length)) != -1) {
    if (cnt > 0) {
        byteArrayPlayStream.write(byteBuffer, 0, cnt);
    }
}
My Version
// public variable declarations
byte byteSoundFile[]; // byteSoundFile will contain a whole word or the diphones of a whole word
short shortSoundFile[] = new short[5000000]; // sound contents are taken into a short[] array for signal processing
short shortBuffer[];
int pos = 0;
int previousPM = 0;
boolean isWord = false;
public static HashMap<String, Integer> peakMap1 = new HashMap<String, Integer>();
public static HashMap<String, Integer> peakMap2 = new HashMap<String, Integer>();
// code for receiving and processing audio data
if (pos == 0) {
    // a new word is going to be processed,
    // so reset the shortSoundFile array
    Arrays.fill(shortSoundFile, (short) 0);
}
audioInputStream = AudioSystem.getAudioInputStream(sound);
while ((cnt = audioInputStream.read(byteBuffer, 0, byteBuffer.length)) != -1) {
    if (cnt > 0) {
        byteArrayPlayStream.write(byteBuffer, 0, cnt);
    }
}
byteSoundFile = byteArrayPlayStream.toByteArray();
int nSamples = byteSoundFile.length;
byteArrayPlayStream.reset();

if (nSamples > 80000) { // it is a word
    pos = nSamples;
    isWord = true;
} else { // it is a diphone
    // audio data is converted from byte to short, so nSamples is halved
    nSamples /= 2;
    // transfer byteSoundFile contents to shortBuffer using byte-to-short conversion
    shortBuffer = new short[nSamples];
    for (int i = 0; i < nSamples; i++) {
        shortBuffer[i] = (short) ((short) (byteSoundFile[i << 1]) << 8 | (short) byteSoundFile[(i << 1) + 1]);
    }

    /************************************/
    /**** phase-matching starts here ****/
    /************************************/
    int pm1 = 0;
    int pm2 = 0;
    String soundStr = sound.toString();
    if (soundStr.contains("\\") && soundStr.contains(".")) {
        soundStr = soundStr.substring(soundStr.indexOf("\\") + 1, soundStr.indexOf("."));
    }
    if (peakMap1.containsKey(soundStr)) {
        // perform overlap and add
        System.out.println("we are here");
        pm1 = peakMap1.get(soundStr);
        pm2 = peakMap2.get(soundStr);
        /*
        Idea:
        If pm1 is located after more than one third of the samples,
        then there will be too much overlapping.
        If pm2 is located before the two-thirds point of the samples,
        then there will also be extra overlapping for the next diphone.
        In both of these cases, we will not perform the peak-matching operation.
        */
        int idx1 = (previousPM == 0) ? pos : previousPM - pm1;
        if ((idx1 < 0) || (pm1 > (nSamples / 3))) {
            idx1 = pos;
        }
        int idx2 = idx1 + nSamples - 1;
        for (int i = idx1, j = 0; i <= idx2; i++, j++) {
            if (i < pos) {
                shortSoundFile[i] = (short) ((shortSoundFile[i] >> 1) + (shortBuffer[j] >> 1));
            } else {
                shortSoundFile[i] = shortBuffer[j];
            }
        }
        previousPM = (pm2 < (nSamples / 3) * 2) ? 0 : idx1 + pm2;
        pos = idx2 + 1;
    } else {
        // no peak found; simply concatenate the audio data
        for (int i = 0; i < nSamples; i++) {
            shortSoundFile[pos++] = shortBuffer[i];
        }
        previousPM = 0;
    }
}
Audio Output
After collecting all the diphones of a word, this segment is called to play the audio output.
Original Version
byte audioData[] = byteArrayPlayStream.toByteArray();
... code for writing audioData to output stream
My Version
byte audioData[];
if (isWord) {
    audioData = Arrays.copyOf(byteSoundFile, pos);
    isWord = false;
} else {
    audioData = new byte[pos * 2];
    for (int i = 0; i < pos; i++) {
        audioData[(i << 1)] = (byte) (shortSoundFile[i] >>> 8);
        audioData[(i << 1) + 1] = (byte) (shortSoundFile[i]);
    }
}
pos = 0;
... code for writing audioData to output stream
But after making this modification, the output has become worse: there is a lot of noise in it.
Here is a sample audio with modification: modified output
Here is a sample audio from the original version: original output
Now I'd appreciate it if anyone can point out what causes the noise and how to remove it. Am I doing anything wrong in the code? I tested my algorithm in Matlab and it worked fine.
The problem has been solved temporarily. It turns out that the conversion between byte array and short array is not necessary. The required signal processing operations can be performed directly on byte arrays.
I'd like to keep this question open in case someone finds the bug(s) in the given code.
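For reference, a correct 16-bit big-endian byte/short round trip looks like the sketch below (big-endian signed PCM is an assumption read off the code above). The & 0xFF mask on the low byte is the detail such conversions most often miss; without it, sign extension corrupts every sample whose low byte is 0x80 or above:

// Hypothetical helpers, assuming 16-bit big-endian signed PCM as in the code above.
static short[] bytesToShorts(byte[] b) {
    short[] s = new short[b.length / 2];
    for (int i = 0; i < s.length; i++) {
        // mask the low byte to 0..255 before OR-ing, to avoid sign extension
        s[i] = (short) ((b[2 * i] << 8) | (b[2 * i + 1] & 0xFF));
    }
    return s;
}

static byte[] shortsToBytes(short[] s) {
    byte[] b = new byte[s.length * 2];
    for (int i = 0; i < s.length; i++) {
        b[2 * i] = (byte) (s[i] >>> 8); // high byte first (big-endian)
        b[2 * i + 1] = (byte) s[i];
    }
    return b;
}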
I need to reverse the following algorithm, which converts a long array into a string:
import java.io.UnsupportedEncodingException;
import java.util.Random;

public final class LongConverter {
    private final long[] l;

    public LongConverter(long[] paramArrayOfLong) {
        this.l = paramArrayOfLong;
    }

    private void convertLong(long paramLong, byte[] paramArrayOfByte, int paramInt) {
        int i = Math.min(paramArrayOfByte.length, paramInt + 8);
        while (paramInt < i) {
            paramArrayOfByte[paramInt] = ((byte) (int) paramLong);
            paramLong >>= 8;
            paramInt++;
        }
    }

    public final String toString() {
        int i = this.l.length;
        byte[] arrayOfByte = new byte[8 * (i - 1)];
        long l1 = this.l[0];
        Random localRandom = new Random(l1);
        for (int j = 1; j < i; j++) {
            long l2 = localRandom.nextLong();
            convertLong(this.l[j] ^ l2, arrayOfByte, 8 * (j - 1));
        }
        String str;
        try {
            str = new String(arrayOfByte, "UTF8");
        } catch (UnsupportedEncodingException localUnsupportedEncodingException) {
            throw new AssertionError(localUnsupportedEncodingException);
        }
        int k = str.indexOf(0);
        if (-1 == k) {
            return str;
        }
        return str.substring(0, k);
    }
}
So when I do the following call
System.out.println(new LongConverter(new long[]{-6567892116040843544L, 3433539276790523832L}).toString());
it prints 400 as the result.
It would be great if anyone could say what algorithm this is, or how I could reverse it.
Thanks for your help
This is not a solvable problem as stated, because:
you only use l[0] to seed the Random, so the additional long values could be anything.
it is guaranteed that there are N << 16 solutions to this problem. While the seed for Random is 64-bit, the value used internally is only 48-bit. This means that if there is any solution at all, there are at least 2^16 (65,536) long seeds that produce it.
What you can do is:
find the smallest seed which would generate the string, using brute force (see the sketch after this list). For short strings this won't take long, but for 5-6 characters it will take a while, and for 7+ characters there might not be a solution.
instead of generating 8-bit characters where all 8-bit values are possible, restrict the range to, say, space, A-Z, a-z and 0-9. That needs only ~6 bits of randomness per character, which allows shorter seed searches and slightly longer strings.
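Under one reading of the first bullet, a brute-force search can reuse the question's own LongConverter: try seeds until one reproduces the target text with the single data long set to zero. findSeed is a hypothetical helper, only practical for very short strings, since the search space grows by a factor of 256 per output byte (including the terminating NUL):

// Hypothetical brute-force helper; assumes the target fits in one data long
// (8 bytes) and loops forever if no matching seed exists.
static long findSeed(String target) {
    for (long seed = 0; ; seed++) {
        // seed drives the Random; the data long is 0, so the output bytes
        // are exactly the bytes of the seed's first nextLong()
        String candidate = new LongConverter(new long[]{seed, 0L}).toString();
        if (candidate.equals(target)) {
            return seed;
        }
    }
}

For example, findSeed("400") would look for an alternative two-long array whose first element alone reproduces the output of the call in the question.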
BTW, you might find this post interesting, where I use contrived random seeds to generate specific sequences: http://vanillajava.blogspot.co.uk/2011/10/randomly-no-so-random.html
If you want a process which ensures you can always re-create the original longs from a String or byte[], I suggest using encryption. You can encrypt a UTF-8 encoded String or a byte[] into another byte[], which can then be Base64-encoded to be readable as text. (Or you could skip the encryption and use Base64 alone.)
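A minimal sketch of the Base64-only variant, packing the longs with java.nio.ByteBuffer and encoding with java.util.Base64 (both in the standard library since Java 8):

import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Base64;

public class LongCodec {
    static String encode(long[] values) {
        ByteBuffer buf = ByteBuffer.allocate(values.length * Long.BYTES);
        for (long v : values) buf.putLong(v);
        return Base64.getEncoder().encodeToString(buf.array());
    }

    static long[] decode(String text) {
        ByteBuffer buf = ByteBuffer.wrap(Base64.getDecoder().decode(text));
        long[] values = new long[buf.remaining() / Long.BYTES];
        for (int i = 0; i < values.length; i++) values[i] = buf.getLong();
        return values;
    }

    public static void main(String[] args) {
        long[] original = {-6567892116040843544L, 3433539276790523832L};
        String text = encode(original);
        System.out.println(text + " -> " + Arrays.toString(decode(text)));
    }
}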
What is the fastest way to randomly (but repeatably) permute all the bits within a Java byte array? I've done it successfully with a BitSet, but is there a faster way? Clearly the for-loop consumes the majority of the CPU time.
I've just done some profiling in my IDE, and the for-loop constitutes 64% of the CPU time within the entire permute() method.
To clarify, the array (preRound) contains an existing array of numbers going into the procedure. I want the individual set bits of that array to be mixed up in a random manner. This is the reason for P[]: it contains a random list of bit positions. So, for example, if bit 13 of preRound is set, it is transferred to position P[13] of postRound, which might be position 20555. The whole thing is part of a substitution-permutation network, and I'm looking for the fastest way to permute the incoming bits.
My code so far...
private byte[] permute(byte[] preRound) {
    BitSet beforeBits = BitSet.valueOf(preRound);
    BitSet afterBits = new BitSet(blockSize * 8);
    for (int i = 0; i < blockSize * 8; i++) {
        assert i != P[i];
        if (beforeBits.get(i)) {
            afterBits.set(P[i]);
        }
    }
    byte[] postRound = afterBits.toByteArray();
    postRound = Arrays.copyOf(postRound, blockSize); // pad with 0s to the specified length
    assert postRound.length == blockSize;
    return postRound;
}
FYI, blockSize is about 60,000 and P is a random lookup table.
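(For reference: a table like P is commonly built as a seeded Fisher-Yates shuffle, as in the hypothetical sketch below; this is an assumption about how P might be generated, not code from the question. Note that the assert i != P[i] above implies P has no fixed points, which a plain shuffle does not guarantee, so a real generator would have to reject or repair fixed points.)

import java.util.Random;

// Hypothetical generator for a repeatable random bit-permutation table.
static int[] makePermutationTable(int nBits, long seed) {
    int[] P = new int[nBits];
    for (int i = 0; i < nBits; i++) P[i] = i;
    Random rnd = new Random(seed); // a fixed seed makes the permutation repeatable
    for (int i = nBits - 1; i > 0; i--) {
        int j = rnd.nextInt(i + 1); // Fisher-Yates: swap with a random earlier slot
        int tmp = P[i]; P[i] = P[j]; P[j] = tmp;
    }
    return P;
}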
I didn't perform any performance tests, but you may want to consider the following:
To omit the call to Arrays.copyOf (which copies the byte[] that toByteArray already copied out of the internal long[], which is kind of annoying), just set the last bit in case it wasn't set before, and unset it afterwards.
Furthermore, there is a nice idiom to iterate over only the set bits of the input permutation.
private byte[] permute(final byte[] preRound) {
    final BitSet beforeBits = BitSet.valueOf(preRound);
    final BitSet afterBits = new BitSet(blockSize * 8);
    for (int i = beforeBits.nextSetBit(0); i >= 0; i = beforeBits.nextSetBit(i + 1)) {
        final int to = P[i];
        assert i != to;
        afterBits.set(to);
    }
    final int lastIndex = blockSize * 8 - 1;
    if (afterBits.get(lastIndex)) {
        return afterBits.toByteArray();
    }
    afterBits.set(lastIndex);
    final byte[] postRound = afterBits.toByteArray();
    postRound[blockSize - 1] &= 0x7F;
    return postRound;
}
If that doesn't cut it, and you use the same P for lots of iterations, it may be worthwhile to transform the permutation into cycle notation and perform the transformation in-place, as sketched below.
This way you can iterate over P linearly, which may let you exploit caching better (P is 32 times as large as the byte array, assuming it's an int array).
However, you lose the advantage that you only have to look at 1s, and end up shifting around every single bit in the byte array, set or not.
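A minimal sketch of the cycle-decomposition step mentioned above, assuming P is a bijection on 0..n-1 (the in-place bit moves along each cycle are left out):

import java.util.ArrayList;
import java.util.List;

// Decompose a permutation into its cycles once, up front; applying the
// permutation then means walking each cycle and moving elements in-place.
static List<int[]> toCycles(int[] P) {
    boolean[] visited = new boolean[P.length];
    List<int[]> cycles = new ArrayList<>();
    for (int start = 0; start < P.length; start++) {
        if (visited[start]) continue;
        List<Integer> cycle = new ArrayList<>();
        for (int i = start; !visited[i]; i = P[i]) {
            visited[i] = true;
            cycle.add(i);
        }
        if (cycle.size() > 1) { // fixed points need no work
            int[] c = new int[cycle.size()];
            for (int k = 0; k < c.length; k++) c[k] = cycle.get(k);
            cycles.add(c);
        }
    }
    return cycles;
}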
If you want to avoid using the BitSet, you can just do it by hand:
private byte[] permute(final byte[] preRound) {
    final byte[] result = new byte[blockSize];
    for (int i = 0; i < blockSize; i++) {
        final byte b = preRound[i];
        // if 1s are sparse, you may want to use this:
        // if ((byte) 0 == b) continue;
        for (int j = 0; j < 8; ++j) {
            if (0 != (b & (1 << j))) {
                final int loc = P[i * 8 + j];
                result[loc / 8] |= (1 << (loc % 8));
            }
        }
    }
    return result;
}
I need to load around 2MB of data quickly on startup of my Android application.
I really need all this data in memory, so something like SQLite etc. is not an alternative.
The data consists of about 3000 int[][] arrays. The array dimension is around [7][7] on average.
I first implemented a prototype on my desktop, and ported it to Android. On the desktop, I simply used Java's (de)serialization. Deserialization of that data takes about 90 ms on my desktop computer.
However, on Android 2.2.1 the same process takes about 15 seconds(!) on my HTC Magic. It's so slow that if I don't do the deserialization in a separate thread, my app gets killed. All in all, this is unacceptably slow.
What am I doing wrong? Should I
switch to something like protocol buffers? Would that really speed up deserialization by several orders of magnitude - after all, it's not complex objects that I am deserializing, just int[][] arrays?
design my own custom binary file format? I've never done that before, and have no clue where to start
do something else?
Why not bypass the built-in deserialization, and use direct binary I/O?
When speed is your primary concern, not necessarily ease of programming, you can't beat it.
For output the pseudo-code would look like this:
write number of arrays
for each array
    write n,m array sizes
    for each element of array
        write array element
For input, the pseudo-code would be:
read number of arrays
for each array
    read n,m array sizes
    allocate the array
    for each element of array
        read array element
When you read/write numbers in binary, you bypass all the conversion between binary and characters.
The speed should be limited only by the data transfer rate of the file storage media.
After trying out several things, as Mike Dunlavey suggested, direct binary I/O seemed fastest. I used his sketched-out version almost verbatim. For completeness, and in case someone else wants to try, I'll post my full code here, even though it's very basic and lacks any kind of sanity checks. This reads such a binary stream; writing is exactly analogous (a sketch of the writer follows the reader below).
import java.io.*;

public static int[][][] readBinaryInt(String filename) throws IOException {
    DataInputStream in = new DataInputStream(
            new BufferedInputStream(new FileInputStream(filename)));
    int dimOfData = in.readInt();
    int[][][] patternijk = new int[dimOfData][][];
    for (int i = 0; i < dimOfData; i++) {
        int dimStrokes = in.readInt();
        int[][] patternjk = new int[dimStrokes][];
        for (int j = 0; j < dimStrokes; j++) {
            int dimPoints = in.readInt();
            int[] patternk = new int[dimPoints];
            for (int k = 0; k < dimPoints; k++) {
                patternk[k] = in.readInt();
            }
            patternjk[j] = patternk;
        }
        patternijk[i] = patternjk;
    }
    in.close();
    return patternijk;
}
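Since the answer only shows the reader, here is what the analogous writer could look like (my own sketch, mirroring the loop structure above):

public static void writeBinaryInt(String filename, int[][][] patternijk) throws IOException {
    DataOutputStream out = new DataOutputStream(
            new BufferedOutputStream(new FileOutputStream(filename)));
    out.writeInt(patternijk.length);
    for (int[][] patternjk : patternijk) {
        out.writeInt(patternjk.length);        // number of strokes
        for (int[] patternk : patternjk) {
            out.writeInt(patternk.length);     // number of points
            for (int value : patternk) {
                out.writeInt(value);
            }
        }
    }
    out.close();
}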
I had the same kind of issue on a project some months ago. I think you should split your file into several parts, and only load the relevant parts based on a choice from the user, for example.
Hope it will be helpful!
I don't know your data, but if you optimize your loops, it will affect the deserialization time unbelievably.
If you look at the example below:
computeRecursively(30);
computeRecursivelyWithLoop(30); // 270 milliseconds
computeIteratively(30); // 1 millisecond
computeRecursivelyFasterUsingBigInteger(30); // about twice as fast as the previous version
computeRecursivelyFasterUsingBigIntegerAllocations(50000); // only 1.3 seconds!
import java.math.BigInteger;

public class Fibo {

    public static void main(String[] args) {
        // try the methods
        computeRecursively(30);
        computeRecursivelyWithLoop(30);
        computeIteratively(30);
        computeRecursivelyFasterUsingBigInteger(30);
        computeRecursivelyFasterUsingBigIntegerAllocations(50000);
    }

    public static long computeRecursively(int n) {
        if (n > 1) {
            long result = computeRecursively(n - 2) + computeRecursively(n - 1);
            System.out.println(result);
            return result;
        }
        return n;
    }

    public static long computeRecursivelyWithLoop(int n) {
        if (n > 1) {
            long result = 1;
            do {
                result += computeRecursivelyWithLoop(n - 2);
                n--;
            } while (n > 1);
            System.out.println(result);
            return result;
        }
        return n;
    }

    public static long computeIteratively(int n) {
        if (n > 1) {
            long a = 0, b = 1;
            do {
                long tmp = b;
                b += a;
                a = tmp;
                System.out.println(a);
            } while (--n > 1);
            System.out.println(b);
            return b;
        }
        return n;
    }

    public static BigInteger computeRecursivelyFasterUsingBigInteger(int n) {
        if (n > 1) {
            // split n in half: n = 2m (n even) or n = 2m - 1 (n odd), then use
            // F(2m - 1) = F(m)^2 + F(m-1)^2 and F(2m) = (2*F(m-1) + F(m)) * F(m)
            int m = (n / 2) + (n & 1);
            BigInteger fM = computeRecursivelyFasterUsingBigInteger(m);
            BigInteger fM_1 = computeRecursivelyFasterUsingBigInteger(m - 1);
            if ((n & 1) == 1) {
                // F(m)^2 + F(m-1)^2
                BigInteger result = fM.pow(2).add(fM_1.pow(2)); // three BigInteger objects created
                System.out.println(result);
                return result;
            } else {
                // (2*F(m-1) + F(m)) * F(m)
                BigInteger result = fM_1.shiftLeft(1).add(fM).multiply(fM); // three BigInteger objects created
                System.out.println(result);
                return result;
            }
        }
        return (n == 0) ? BigInteger.ZERO : BigInteger.ONE; // no BigInteger object created
    }

    public static long computeRecursivelyFasterUsingBigIntegerAllocations(int n) {
        long allocations = 0;
        if (n > 1) {
            int m = (n / 2) + (n & 1);
            allocations += computeRecursivelyFasterUsingBigIntegerAllocations(m);
            allocations += computeRecursivelyFasterUsingBigIntegerAllocations(m - 1);
            // 3 more BigInteger objects allocated
            allocations += 3;
            System.out.println(allocations);
        }
        return allocations; // approximate number of BigInteger objects allocated
                            // when computeRecursivelyFasterUsingBigInteger(n) is called
    }
}
I got this question in an interview the other day and would like to know some of the best possible answers (I did not answer it very well, haha):
Scenario: there is a webpage that is monitoring the bytes sent over a network. Every time a byte is sent, the recordByte() function is called, passing that byte; this could happen hundreds of thousands of times per day. There is a button on this page that, when pressed, displays the last 100 bytes passed to recordByte() on screen (it does this by calling the print method below).
The following code is what I was given and asked to fill out:
public class networkTraffic {
    public void recordByte(Byte b) {
    }

    public String print() {
    }
}
What is the best way to store the 100 bytes? A list? Curious how best to do this.
Something like this (circular buffer):
byte[] buffer = new byte[100];
int index = 0;

public void recordByte(Byte b) {
    buffer[index] = b;         // store first,
    index = (index + 1) % 100; // then advance: index now points at the oldest byte
}

public void print() {
    for (int i = index; i < index + 100; i++) {
        System.out.print(buffer[i % 100]); // prints oldest to newest
    }
}
The benefits of using a circular buffer:
You can reserve the space statically. In a real-time network application (VoIP, streaming, ...) this is often done because you don't need to store all the data of a transmission, only a window containing the newest bytes to be processed.
It's fast: it can be implemented with an array, with read and write cost of O(1).
I don't know Java, but there must be a queue concept whereby you would enqueue bytes until the number of items in the queue reaches 100, at which point you would dequeue one byte and then enqueue another.
public void recordByte(Byte b)
{
    if (queue.ItemCount >= 100)
    {
        queue.dequeue();
    }
    queue.enqueue(b);
}
You could print by peeking at the items:
public String print()
{
    foreach (Byte b in queue)
    {
        print("X", b); // some hexadecimal print function
    }
}
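In Java, the same idea can be expressed with an ArrayDeque; this sketch fills in the skeleton from the question, with the hex formatting borrowed from the pseudo-code's comment:

import java.util.ArrayDeque;

public class networkTraffic {
    private final ArrayDeque<Byte> queue = new ArrayDeque<>(100);

    public void recordByte(Byte b) {
        if (queue.size() >= 100) {
            queue.removeFirst(); // drop the oldest byte
        }
        queue.addLast(b);
    }

    public String print() {
        StringBuilder sb = new StringBuilder();
        for (byte b : queue) { // iterates oldest to newest
            sb.append(String.format("%02X ", b)); // hexadecimal, as in the pseudo-code
        }
        return sb.toString();
    }
}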
Circular buffer using an array:
Array of 100 bytes
Keep track of the head index i
For recordByte(), put the current byte in A[i] and set i = (i + 1) % 100
For print(), return subarray(i, 100) concatenated with subarray(0, i), treating subarray(a, b) as start-inclusive, end-exclusive
Queue using a linked list (or the Java Queue):
For recordByte(), add the new byte to the end
If the new length would be more than 100, remove the first element
For print(), simply print the list
Here is my code. It might look a bit obscure, but I am pretty sure this is the fastest way to do it (at least it would be in C++; I'm not so sure about Java):
public class networkTraffic {
    private int _idx;
    private byte[] _ary;

    public networkTraffic() {
        _ary = new byte[100];
        _idx = _ary.length;
    }

    public void recordByte(Byte b) {
        _ary[--_idx] = b;
        if (_idx == 0) {
            _idx = _ary.length;
        }
    }
}
Some points to note:
No data is allocated or deallocated when calling recordByte().
I did not use %, because it is slower than a direct comparison and an if (branch prediction might help here too).
--_idx is faster than _idx-- because no temporary variable is involved (though in Java the JIT will most likely compile both the same).
I count backwards to 0 so that I do not have to fetch _ary.length on every call, only once every 100 calls when the first entry is reached. Maybe this is not necessary and the compiler takes care of it.
If there were fewer than 100 calls to recordByte(), the rest is zeroes.
The easiest thing is to shove it in an array. The max size the array can accommodate is 100 bytes. Keep adding bytes as they stream off the web. After the first 100 bytes are in the array, when the 101st byte comes, remove the byte at the head (i.e. the 0th) and keep doing this. It is basically a queue - the FIFO concept. After the download is done, you are left with the last 100 bytes.
Not just after the download, but at any given point in time, this array will contain the last 100 bytes.
@Yottagray Not getting where the problem is? There seem to be a number of generic approaches (array, circular array, etc.) & a number of language-specific approaches (byteArray etc.). Am I missing something?
Multithreaded solution with non-blocking I/O:
private static final int N = 100;
private volatile byte[] buffer1 = new byte[N];
private volatile byte[] buffer2 = new byte[N];
private volatile int index = -1;
private volatile int tag;

synchronized public void recordByte(byte b) {
    index++;
    if (index == N * 2) {
        // both buffers are full
        buffer1 = buffer2;
        buffer2 = new byte[N];
        index = N;
    }
    if (index < N) {
        buffer1[index] = b;
    } else {
        buffer2[index - N] = b;
    }
}

public void print() {
    byte[] localBuffer1, localBuffer2;
    int localIndex, localTag;
    synchronized (this) {
        localBuffer1 = buffer1;
        localBuffer2 = buffer2;
        localIndex = index;
        localTag = tag++;
    }
    int buffer1Start = localIndex - N >= 0 ? localIndex - N + 1 : 0;
    int buffer1End = localIndex < N ? localIndex : N - 1;
    printSlice(localBuffer1, buffer1Start, buffer1End, localTag);
    if (localIndex >= N) {
        printSlice(localBuffer2, 0, localIndex - N, localTag);
    }
}

private void printSlice(byte[] buffer, int start, int end, int tag) {
    for (int i = start; i <= end; i++) {
        System.out.println(tag + ": " + buffer[i]);
    }
}
Just for the heck of it: how about using an ArrayList<Byte>? Why not?
public class networkTraffic {
    static ArrayList<Byte> networkMonitor; // ArrayList<Byte> reference

    static { networkMonitor = new ArrayList<Byte>(100); } // static initialization block

    public void recordByte(Byte b) {
        networkMonitor.add(b);
        while (networkMonitor.size() > 100) {
            networkMonitor.remove(0);
        }
    }

    public void print() {
        for (int i = 0; i < networkMonitor.size(); i++) {
            System.out.println(networkMonitor.get(i));
        }
        // if (networkMonitor.size() < 100) {
        //     for (int i = networkMonitor.size(); i < 100; i++) {
        //         System.out.println("Empty byte");
        //     }
        // }
    }
}