Retrieve data via UART - Java

I am using UART to get input into my system from a PIC development board, and I use the following code to read the values from the board:
public SerialReader( InputStream in ) {
    this.in = in;
}

public void run() {
    byte[] buffer = new byte[1024];
    int len = -1;
    try {
        /*
        len = this.in.read(buffer);
        int x = buffer[0] & 0xff;
        System.out.println(buffer[0]);
        System.out.println(x);
        System.out.println((char) x);
        */
        while( ( len = this.in.read( buffer ) ) > -1 ) {
            System.out.print( new String( buffer, 0, len ) );
            System.out.println( len );
            /*
            String s = new String(buffer);
            System.out.println(s);
            for (int i = 0; i < buffer.length; i++)   // note: was i <= buffer.length, which overruns the array
                System.out.println(buffer[i]);
            */
        }
    } catch( IOException e ) {
        e.printStackTrace();
    }
}
}
Output: ààà
Expected Output: #C01=0155,INT=16:11,OUT=05:11
How do I retrieve the expected output?
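Since the expected output is a line-oriented ASCII string, one way to make the reading more robust is to decode with an explicit charset and collect whole lines rather than printing raw chunks; repeated à bytes usually point to a baud-rate or charset mismatch rather than the reading loop itself. A minimal sketch (SerialLineReader is a hypothetical helper name, not part of the original code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SerialLineReader {
    // Collect complete lines from the stream, decoding with an explicit
    // charset instead of the platform default.
    public static List<String> readLines(InputStream in) throws IOException {
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.US_ASCII));
        List<String> lines = new ArrayList<>();
        String line;
        while ((line = reader.readLine()) != null) {
            lines.add(line);
        }
        return lines;
    }
}
```

With the serial port configured to the board's actual baud rate, each protocol record then arrives as one decoded line.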


Reading incorrect values?

I have the following class, extending Thread.
The idea is to extract the data inside the thread, and everything goes well until, at some point, the received data grows larger than a few kilobytes, and then I start reading completely incorrect data.
public class ThreadBooksPositions extends Thread
{
public ThreadBooksPositions()
{
}
// ... default constructors
public void run()
{
InputStream iSS = null;
HttpURLConnection connection = null;
Integer sectionsDescriptorSize1 = 0;
Integer sectionsDescriptorSize2 = 0;
try
{
URL url = new URL( "http://192.168.1.4/bookstore.asp?getbooks" );
connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod( "GET" );
connection.connect();
iSS = connection.getInputStream();
BufferedInputStream bIS = new BufferedInputStream( iSS );
if( bIS.available() > 4 )
{
Float lat = 0F;
Float lng = 0F;
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] bf;
try
{
bf = new byte[ bIS.available() ];
while ( bIS.read( bf ) != -1)
out.write( bf ); //copy streams
out.flush();
}
catch ( IOException e )
{
// TODO Auto-generated catch block
e.printStackTrace();
} //you can configure the buffer size
byte[] bO = out.toByteArray();
if( out != null )
{
try
{
out.close();
}
catch ( IOException e )
{
// TODO Auto-generated catch block
e.printStackTrace();
}
}
ByteBuffer data = ByteBuffer.wrap( bO );
sectionsDescriptorSize1 = data.getInt();
sectionsDescriptorSize2 = data.getInt();
ByteBuffer sectionData;
try
{
if( sectionsDescriptorSize1 > 0 )
{
byte[] bAS0 = new byte[ sectionsDescriptorSize1 ];
data.get( bAS0 );
}
if( sectionsDescriptorSize2 > 1 )
{
// trajectory
byte[] bAS1 = new byte[ sectionsDescriptorSize2 ];
data.get( bAS1, 0, sectionsDescriptorSize2 );
sectionData = ByteBuffer.wrap( bAS1 );
Boolean readingFailed = true;
if( sectionData != null )
{
while( sectionData.available() > 1 )
{
try
{
readingFailed = false;
lat = sectionData.getFloat(); // 4
lng = sectionData.getFloat(); // 4
}
catch( Exception e )
{
readingFailed = true;
}
try
{
if( readingFailed == false )
{
addBookStorePosition( lat, lng );
}
}
catch (Exception e)
{
}
}
}
}
}
catch( Error e )
{
}
}
}
catch( IOException e )
{
}
finally
{
if( iSS != null )
{
try
{
iSS.close();
}
catch( IOException e )
{
}
}
if( connection != null )
{
connection.disconnect();
}
}
}
}
What might cause the reading of incorrect data?
Found the issue.
It seems .available() is causing it, especially in threads: InputStream.available() only reports how many bytes can be read without blocking at that moment, not the total length of the response, so sizing buffers from it truncates anything that has not yet arrived.
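A sketch of reading the response to the end without consulting available() at all (readFully is a hypothetical helper name):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    /**
     * Reads an InputStream to the end, regardless of how the bytes
     * trickle in over the network. Unlike available(), this cannot
     * truncate a payload that has not fully arrived yet.
     */
    public static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);   // write only the bytes actually read
        }
        return out.toByteArray();
    }
}
```

The resulting byte[] can then be wrapped in a ByteBuffer exactly as in the code above.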

Send objects with MPJ express

I'm new to parallel programming and I want to do it in Java.
I am wondering whether it is possible to send and receive more complex objects via MPI. I'm using MPJ Express. However, whenever I try to send an object I get a ClassCastException.
MPI.Init(args);
myrank = MPI.COMM_WORLD.Rank();
numprocs = MPI.COMM_WORLD.Size();
Vector<CustomClass> chr = new Vector<CustomClass>();
if (myrank == 0 ) { //am I the master?
for (int i = 1; i < numprocs; i++) {
MPI.COMM_WORLD.Send(chr, 0, chr.size(), MPI.OBJECT, i, 99); // Here's where the exception occurs
}
}
else {
Vector<BasicRegion> chr_received = new Vector<BasicRegion>();
MPI.COMM_WORLD.Recv(chr_received, 0, 1, MPI.OBJECT, 0, 99 );
}
Exception:
mpi.MPIException: mpi.MPIException: java.lang.ClassCastException: java.util.Vector cannot be cast to [Ljava.lang.Object;
So my questions are:
- is it possible to send/receive more complex objects with MPJ Express?
- if so: what am I doing wrong?
I am new to MPJ Express as well, but it seems the send/receive buffer needs to be an array of something (as you would do with the C/C++ implementation in Open MPI).
This kind of code worked for me well:
Node t[] = new Node[4];
...
count[0] = t.length;
MPI.COMM_WORLD.Send(count, 0, 1, MPI.INT, 1, 98);
MPI.COMM_WORLD.Send(t, 0, t.length, MPI.OBJECT, 1, 99);
} else if( myRank == 1 ) {
int count[] = new int[1];
MPI.COMM_WORLD.Recv( count, 0, 1, MPI.INT, 0, 98);
Status mps = MPI.COMM_WORLD.Recv( t, 0, count[0], MPI.OBJECT, 0, 99 );
...
And of course, you have to have that custom class implementing Serializable interface.
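The serialize-before-send approach reduces to a plain byte[] round trip; a minimal JDK-only sketch, with SerializeUtil as a hypothetical helper name (the MPJ Send/Recv of the bytes themselves is unchanged):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializeUtil {
    // Serialize any Serializable object into a byte[] suitable for MPI.BYTE transfer
    public static byte[] toBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Rebuild the object on the receiving rank
    public static Object fromBytes(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }
}
```
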
You want to serialize it prior to sending.
import mpi.*;
import java.io.*;
import java.nio.ByteBuffer;
public class MPITest
{
public static void main(String[] args)
{
MPI.Init(args);
int me = MPI.COMM_WORLD.Rank();
int tasks = MPI.COMM_WORLD.Size();
MPI.COMM_WORLD.Barrier();
if(me == 0)
{
Cat cat = new Cat("Tom", 15);
cat.Speak();
ByteBuffer byteBuff = ByteBuffer.allocateDirect(2000 + MPI.SEND_OVERHEAD);
MPI.Buffer_attach(byteBuff);
try
{
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutput out = null;
out = new ObjectOutputStream(bos);
out.writeObject(cat);
byte[] bytes = bos.toByteArray();
System.out.println("Serialized to " + bytes.length);
MPI.COMM_WORLD.Isend(bytes, 0, bytes.length, MPI.BYTE, 1, 0);
}
catch(IOException ex)
{
}
}
else
{
byte[] bytes = new byte[2000];
Cat recv = null;
MPI.COMM_WORLD.Recv(bytes, 0, 2000, MPI.BYTE, MPI.ANY_SOURCE, 0);
ByteArrayInputStream bis = new ByteArrayInputStream(bytes);
ObjectInput in = null;
try
{
in = new ObjectInputStream(bis);
Object obj = in.readObject();
recv = (Cat)obj;
recv.Speak();
}
catch(IOException ex)
{
}
catch(ClassNotFoundException cnf)
{
}
}
MPI.COMM_WORLD.Barrier();
MPI.Finalize();
}
}
This works; however, you may want to implement Externalizable and write the fields manually, to avoid some of the extra bytes that default serialization sends.
HTH
Brian
import mpi.*;
/**
* Compile: javac -cp $MPJ_HOME/lib/mpj.jar:. ObjSend.java
* Execute: mpjrun.sh -np 2 -dport 11000 ObjSend
*/
public class ObjSend {
public static void main(String[] args) throws Exception {
int peer ;
MPI.Init(args);
int rank = MPI.COMM_WORLD.Rank() ;
int size = MPI.COMM_WORLD.Size() ;
int tag = 100 ;
if(rank == 0) {
String [] smsg = new String[1] ;
smsg[0] = "Hi from proc 0" ;
peer = 1 ;
MPI.COMM_WORLD.Send(smsg, 0, smsg.length, MPI.OBJECT,
peer, tag);
System.out.println("proc <"+rank+"> sent a msg to "+
"proc <"+peer+">") ;
} else if(rank == 1) {
String[] rmsg = new String[1] ;
peer = 0 ;
MPI.COMM_WORLD.Recv(rmsg, 0, rmsg.length , MPI.OBJECT,
peer, tag);
System.out.println("proc <"+rank+"> received a msg from "+
"proc <"+peer+">") ;
System.out.println("proc <"+rank+"> received the following "+
"message: \""+rmsg[0]+"\"") ;
}
MPI.Finalize();
}
}

GZIP outputs and sizes between C#/dot Net and Java

I am testing the feasibility of compressing some messaging between Java and C#.
The messages range from small strings (40 bytes) to larger strings (4 KB).
I have found differences between the output of the Java GZIP implementation and the .NET GZIP implementation.
I'm guessing that .NET writes a larger header, which causes the large overhead.
I prefer the Java implementation, as it works better on small strings, and would like the .NET side to achieve similar results.
Output, Java version 1.6.0_10
Text:EncodeDecode
Bytes:(12 bytes)RW5jb2RlRGVjb2Rl <- Base64
Compressed:(29)H4sIAAAAAAAAAHPNS85PSXVJBZEAd9jYdgwAAAA=
Decompressed:(12)RW5jb2RlRGVjb2Rl
Converted:EncodeDecode
Text:EncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecode
Bytes:(120)RW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2Rl
Compressed:(33)H4sIAAAAAAAAAHPNS85PSXVJBZGudGQDAOcKnrd4AAAA
Decompressed:(120)RW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2Rl
Converted:EncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecode
Output, .NET 2.0.50727
Text:EncodeDecode
Bytes:(12)RW5jb2RlRGVjb2Rl
Compressed:(128)H4sIAAAAAAAEAO29B2AcSZYlJi9tynt/SvVK1+B0oQiAYBMk2JBAEOzBiM3mkuwdaUcjKasqgcplVmVdZhZAzO2dvPfee++999577733ujudTif33/8/XGZkAWz2zkrayZ4hgKrIHz9+fB8/Ik6X02qWP83x7/8Dd9jYdgwAAAA=
Decompressed:(12)RW5jb2RlRGVjb2Rl
Text:EncodeDecode
Text:EncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecode
Bytes:(120)RW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2Rl
Compressed:(131)H4sIAAAAAAAEAO29B2AcSZYlJi9tynt/SvVK1+B0oQiAYBMk2JBAEOzBiM3mkuwdaUcjKasqgcplVmVdZhZAzO2dvPfee++999577733ujudTif33/8/XGZkAWz2zkrayZ4hgKrIHz9+fB8/Ik6X02qWP83x7w/z9/8H5wqet3gAAAA=
Decompressed:(120)RW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2RlRW5jb2RlRGVjb2Rl
Text:EncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecodeEncodeDecode
How can I achieve the smaller encoding on the .NET side?
Note that
the Java implementation can decode the .NET output, and
the .NET implementation can decode the Java output.
Java Code
@Test
public void testEncodeDecode()
{
final String strTitle = "EncodeDecode";
try
{
debug( "Text:" + strTitle );
byte[] ba = strTitle.getBytes( "UTF-8" );
debug( "Bytes:" + toString( ba ) );
byte[] eba = encode_GZIP( ba );
debug( "Encoded:" + toString( eba ) );
byte[] ba2 = decode_GZIP( eba );
debug( "Decoded:" + toString( ba2 ) );
debug( "Converted:" + new String( ba2, "UTF-8" ) );
}
catch( Exception ex ) { fail( ex ); }
}
@Test
public void testEncodeDecode2()
{
final String strTitle = "EncodeDecode";
try
{
StringBuilder sb = new StringBuilder();
for( int i = 0 ; i < 10 ; i++ ) sb.append( strTitle );
debug( "Text:" + sb.toString() );
byte[] ba = sb.toString().getBytes( ENCODING );
debug( "Bytes:" + toString( ba ) );
byte[] eba = encode_GZIP( ba );
debug( "Encoded:" + toString( eba ) );
byte[] ba2 = decode_GZIP( eba );
debug( "Decoded:" + toString( ba2 ) );
debug( "Converted:" + new String( ba2, ENCODING ) );
}
catch( Exception ex ) { fail( ex ); }
}
private String toString( byte[] ba )
{
return "("+ba.length+")"+Base64.byteArrayToBase64( ba );
}
protected static byte[] encode_GZIP( byte[] baData ) throws IOException
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ByteArrayInputStream bais = new ByteArrayInputStream( baData );
GZIPOutputStream zos = new GZIPOutputStream( baos );
byte[] baBuf = new byte[ 1024 ];
int nSize;
while( -1 != ( nSize = bais.read( baBuf ) ) )
{
zos.write( baBuf, 0, nSize );
zos.flush();
}
Utilities.closeQuietly( zos );
Utilities.closeQuietly( bais );
return baos.toByteArray();
}
protected static byte[] decode_GZIP( byte[] baData ) throws IOException
{
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ByteArrayInputStream bais = new ByteArrayInputStream( baData );
GZIPInputStream zis = new GZIPInputStream( bais );
byte[] baBuf = new byte[ 1024 ];
int nSize;
while( -1 != ( nSize = zis.read( baBuf ) ) )
{
baos.write( baBuf, 0, nSize );
baos.flush();
}
Utilities.closeQuietly( zis );
Utilities.closeQuietly( bais );
return baos.toByteArray();
}
private void debug( Object o ) { System.out.println( o ); }
private void fail( Exception ex )
{
ex.printStackTrace();
Assert.fail( ex.getMessage() );
}
.NET Code
[Test]
public void TestJava6()
{
string strData = "EncodeDecode";
Console.WriteLine("Text:" + strData);
byte[] baData = Encoding.UTF8.GetBytes(strData);
Console.WriteLine("Bytes:" + toString(baData));
byte[] ebaData2 = encode_GZIP(baData);
Console.WriteLine("Encoded:" + toString(ebaData2));
byte[] baData2 = decode_GZIP(ebaData2);
Console.WriteLine("Decoded:" + toString(baData2));
Console.WriteLine("Text:" + Encoding.UTF8.GetString(baData2));
}
[Test]
public void TestJava7()
{
string strData = "EncodeDecode";
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 10; i++) sb.Append(strData);
Console.WriteLine("Text:" + sb.ToString());
byte[] baData = Encoding.UTF8.GetBytes(sb.ToString());
Console.WriteLine("Bytes:" + toString(baData));
byte[] ebaData2 = encode_GZIP(baData);
Console.WriteLine("Encoded:" + toString(ebaData2));
byte[] baData2 = decode_GZIP(ebaData2);
Console.WriteLine("Decoded:" + toString(baData2));
Console.WriteLine("Text:" + Encoding.UTF8.GetString(baData2));
}
public string toString(byte[] ba)
{
return "(" + ba.Length + ")" + Convert.ToBase64String(ba);
}
protected static byte[] decode_GZIP(byte[] ba)
{
MemoryStream writer = new MemoryStream();
using (GZipStream zis = new GZipStream(new MemoryStream(ba), CompressionMode.Decompress))
{
Utilities.CopyStream(zis, writer);
}
return writer.ToArray();
}
protected static byte[] encode_GZIP(byte[] ba)
{
using (MemoryStream reader = new MemoryStream(ba))
{
MemoryStream writer = new MemoryStream();
using (GZipStream zos = new GZipStream(writer, CompressionMode.Compress))
{
Utilities.CopyStream(reader, zos);
}
return writer.ToArray();
}
}
This is one of several bugs in the .NET gzip code. That code should be avoided. Use DotNetZip instead. See answer here: Why does my C# gzip produce a larger file than Fiddler or PHP? .
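As a side note on sizes: for 40-byte messages, the gzip header and CRC trailer (roughly 18 bytes) dominate the output. If both sides can agree on raw DEFLATE instead of gzip, Java's Deflater with nowrap=true drops that framing entirely. A sketch under that assumption (RawDeflate is an illustrative helper, not from the original code):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class RawDeflate {
    // Raw DEFLATE: nowrap=true omits the gzip header and trailer,
    // saving roughly 18 bytes per message versus GZIPOutputStream.
    public static byte[] compress(byte[] data) {
        Deflater def = new Deflater(Deflater.BEST_COMPRESSION, true);
        def.setInput(data);
        def.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!def.finished()) {
            int n = def.deflate(buf);
            bos.write(buf, 0, n);
        }
        def.end();
        return bos.toByteArray();
    }

    public static byte[] decompress(byte[] packed) throws DataFormatException {
        Inflater inf = new Inflater(true);
        // nowrap mode requires an extra dummy byte at the end of the input
        inf.setInput(Arrays.copyOf(packed, packed.length + 1));
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inf.finished()) {
            int n = inf.inflate(buf);
            if (n == 0 && inf.needsInput()) {
                break; // truncated input guard
            }
            bos.write(buf, 0, n);
        }
        inf.end();
        return bos.toByteArray();
    }
}
```

On the .NET side the equivalent would be DeflateStream rather than GZipStream, with the same caveats about that era's implementation noted above.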

Reading a file's bits and saving them

I have a file reader which reads an entire file and writes out its bits.
I have this class which helps with the reading:
import java.io.*;
public class FileReader extends ByteArrayInputStream{
private int bitsRead;
private int bitPosition;
private int currentByte;
private int myMark;
private final static int NUM_BITS_IN_BYTE = 8;
private final static int END_POSITION = -1;
private boolean readingStarted;
/**
* Create a BitInputStream for a File on disk.
*/
public FileReader( byte[] buf ) throws IOException {
super( buf );
myMark = 0;
bitsRead = 0;
bitPosition = NUM_BITS_IN_BYTE-1;
currentByte = 0;
readingStarted = false;
}
/**
* Read a binary "1" or "0" from the File.
*/
public int readBit() throws IOException {
int theBit = -1;
if( bitPosition == END_POSITION || !readingStarted ) {
currentByte = super.read();
bitPosition = NUM_BITS_IN_BYTE-1;
readingStarted = true;
}
theBit = (0x01 << bitPosition) & currentByte;
bitPosition--;
if( theBit > 0 ) {
theBit = 1;
}
return( theBit );
}
/**
* Return the next byte in the File as lowest 8 bits of int.
*/
public int read() {
currentByte = super.read();
bitPosition = END_POSITION;
readingStarted = true;
return( currentByte );
}
/**
*
*/
public void mark( int readAheadLimit ) {
super.mark(readAheadLimit);
myMark = bitPosition;
}
/**
* Add needed functionality to super's reset() method. Reset to
* the last valid position marked in the input stream.
*/
public void reset() {
super.pos = super.mark-1;
currentByte = super.read();
bitPosition = myMark;
}
/**
* Returns the number of bits still available to be read.
*/
public int availableBits() throws IOException {
return( ((super.available() * 8) + (bitPosition + 1)) );
}
}
In the class where I call this, I do:
FileInputStream inputStream = new FileInputStream(file);
byte[] fileBits = new byte[inputStream.available()];
inputStream.read(fileBits, 0, inputStream.available());
inputStream.close();
FileReader bitIn = new FileReader(fileBits);
and this works correctly.
However, I have problems with big files above 100 MB, because the whole file has to fit into a single byte[] in memory.
So I want to read bigger files. Maybe someone could suggest how I can improve this code?
Thanks.
If scaling to large file sizes is important, you'd be better off not reading the entire file into memory. The downside is that handling the IOException in more locations can be a little messy. Also, it doesn't look like your application needs something that implements the InputStream API, it just needs the readBit() method. So, you can safely encapsulate, rather than extend, the InputStream.
class FileReader {
private final InputStream src;
private final byte[] bits = new byte[8192];
private int len;
private int pos;
FileReader(InputStream src) {
this.src = src;
}
int readBit() throws IOException {
    int idx = pos / 8;
    if (idx >= len) {
        int n = src.read(bits);
        if (n < 0)
            return -1;
        len = n;
        pos = 0;
        idx = 0;
    }
    int bit = (bits[idx] >> (7 - (pos % 8))) & 1; // MSB first, matching the original reader
    pos++;
    return bit;
}
}
Usage would look similar.
FileInputStream src = new FileInputStream(file);
try {
FileReader bitIn = new FileReader(src);
...
} finally {
src.close();
}
If you really do want to read in the entire file, and you are working with an actual file, you can query the length of the file first.
File file = new File(path);
if (file.length() > Integer.MAX_VALUE)
throw new IllegalArgumentException("File is too large: " + file.length());
int len = (int) file.length();
FileInputStream inputStream = new FileInputStream(file);
try {
byte[] fileBits = new byte[len];
for (int pos = 0; pos < len; ) {
int n = inputStream.read(fileBits, pos, len - pos);
if (n < 0)
throw new EOFException();
pos += n;
}
/* Use bits. */
...
} finally {
inputStream.close();
}
org.apache.commons.io.IOUtils.copy(InputStream in, OutputStream out)

Uploading large gzipped data files to HDFS

I have a use case where I want to upload big gzipped text data files (~ 60 GB) on HDFS.
My code below takes about two hours to upload these files in chunks of 500 MB. Here is the pseudo code; I was checking whether somebody could help me reduce this time:
int fileFetchBuffer = 500000000;
System.out.println("file fetch buffer is: " + fileFetchBuffer);
int offset = 0;
int bytesRead = -1;
try {
fileStream = new FileInputStream (file);
if (fileName.endsWith(".gz")) {
stream = new GZIPInputStream(fileStream);
BufferedReader reader = new BufferedReader(new InputStreamReader(stream));
String[] fileN = fileName.split("\\.");
System.out.println("fil 0 : " + fileN[0]);
System.out.println("fil 1 : " + fileN[1]);
//logger.info("First line is: " + streamBuff.readLine());
byte[] buffer = new byte[fileFetchBuffer];
FileSystem fs = FileSystem.get(conf);
int charsLeft = fileFetchBuffer;
while (true) {
charsLeft = fileFetchBuffer;
logger.info("charsLeft outside while: " + charsLeft);
FSDataOutputStream dos = null;
while (charsLeft != 0) {
bytesRead = stream.read(buffer, 0, charsLeft);
if (bytesRead < 0) {
dos.flush();
dos.close();
break;
}
offset = offset + bytesRead;
charsLeft = charsLeft - bytesRead;
logger.info("offset in record: " + offset);
logger.info("charsLeft: " + charsLeft);
logger.info("bytesRead in record: " + bytesRead);
//prettyPrintHex(buffer);
String outFileStr = Utils.getOutputFileName(
stagingDir,
fileN[0],
outFileNum);
if (dos == null) {
Path outFile = new Path(outFileStr);
if (fs.exists(outFile)) {
fs.delete(outFile, false);
}
dos = fs.create(outFile);
}
dos.write(buffer, 0, bytesRead);
}
logger.info("done writing: " + outFileNum);
dos.flush();
dos.close();
if (bytesRead < 0) {
dos.flush();
dos.close();
break;
}
outFileNum++;
} // end of if
} else {
// Assume uncompressed file
stream = fileStream;
}
} catch(FileNotFoundException e) {
logger.error("File not found" + e);
}
You should consider using Apache Commons IO.
It has a method
IOUtils.copy( InputStream, OutputStream )
that would tremendously reduce the time needed to copy your files.
I tried with a buffered input stream and saw no real difference.
I suppose a file channel implementation could be even more efficient. Tell me if it's not fast enough.
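A minimal sketch of such a file-channel copy using FileChannel.transferTo, which can delegate the transfer to the operating system instead of looping over a user-space buffer (ChannelCopy is a hypothetical helper, separate from the Slicer code that follows):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelCopy {
    // Copy src to dst via FileChannel.transferTo; returns bytes copied.
    public static long copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE,
                     StandardOpenOption.WRITE, StandardOpenOption.TRUNCATE_EXISTING)) {
            long pos = 0, size = in.size();
            while (pos < size) {
                // transferTo may move fewer bytes than requested, so loop
                pos += in.transferTo(pos, size - pos, out);
            }
            return pos;
        }
    }
}
```

Note this helps for plain file-to-file copies; when the destination is HDFS via FSDataOutputStream, a buffered read/write loop as in the Slicer remains the practical approach.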
package toto;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class Slicer {
private static final int BUFFER_SIZE = 50000;
public static void main(String[] args) {
try
{
slice( args[ 0 ], args[ 1 ], Long.parseLong( args[2]) );
}//try
catch (IOException e)
{
e.printStackTrace();
}//catch
catch( Exception ex )
{
ex.printStackTrace();
System.out.println( "Usage : toto.Slicer <big file> <chunk name radix > <chunks size>" );
}//catch
}//met
/**
* Slices a huge file into chunks.
* @param inputFileName the big file to slice.
* @param outputFileRadix the base name of the slices generated by the slicer. All slices will be numbered outputFileRadix0, outputFileRadix1, outputFileRadix2...
* @param chunkSize the size of the chunks in bytes
* @return the number of slices.
*/
public static int slice( String inputFileName, String outputFileRadix, long chunkSize ) throws IOException
{
//I would add some code to pretty print the output file names
//I mean adding a couple of 0 before chunkNumber in output file name
//so that they all have same number of chars
//use java.io.File for that, estimate number of chunks, take power of 10, got number of leading 0s
//just to get some stats
long timeStart = System.currentTimeMillis();
long timeStartSlice = timeStart;
long timeEnd = 0;
//io streams and chunk counter
int chunkNumber = 0;
FileInputStream fis = null;
FileOutputStream fos = null;
try
{
//open files
fis = new FileInputStream( inputFileName );
fos = new FileOutputStream( outputFileRadix + chunkNumber );
//declare state variables
boolean finished = false;
byte[] buffer = new byte[ BUFFER_SIZE ];
int bytesRead = 0;
long bytesInChunk = 0;
while( !finished )
{
//System.out.println( "bytes to read " +(int)Math.min( BUFFER_SIZE, chunkSize - bytesInChunk ) );
bytesRead = fis.read( buffer,0, (int)Math.min( BUFFER_SIZE, chunkSize - bytesInChunk ) );
if( bytesRead == -1 )
finished = true;
else
{
fos.write( buffer, 0, bytesRead );
bytesInChunk += bytesRead;
if( bytesInChunk == chunkSize )
{
if( fos != null )
{
fos.close();
timeEnd = System.currentTimeMillis();
System.out.println( "Chunk "+chunkNumber + " has been generated in "+ (timeEnd - timeStartSlice) +" ms");
chunkNumber ++;
bytesInChunk = 0;
timeStartSlice = timeEnd;
System.out.println( "Creating slice number " + chunkNumber );
fos = new FileOutputStream( outputFileRadix + chunkNumber );
}//if
}//if
}//else
}//while
}
catch (Exception e)
{
System.out.println( "A problem occurred during slicing : " );
e.printStackTrace();
}//catch
finally
{
//whatever happens close all files
System.out.println( "Closing all files.");
if( fis != null )
fis.close();
if( fos != null )
fos.close();
}//fin
timeEnd = System.currentTimeMillis();
System.out.println( "Total slicing time : " + (timeEnd - timeStart) +" ms" );
System.out.println( "Total number of slices "+ (chunkNumber +1) );
return chunkNumber+1;
}//met
}//class
Greetings,
Stéphane
