I'm using Android's HttpURLConnection to download files over the network. Here is my code:
HttpURLConnection conn = null;
try {
    conn = (HttpURLConnection) new URL("http://XXXXXXXXXX").openConnection();
    InputStream inputStream = conn.getInputStream();
    Log.d("Reflection", inputStream.getClass().getCanonicalName());
    int TOTAL_LEN = conn.getContentLength();
    byte[] buf = new byte[1024 * 8];
    int len, total = 0;
    // check the result before accumulating so the final -1 is not added to the total
    while ((len = inputStream.read(buf)) != -1) {
        total += len;
        Log.d("Download Process:", total * 1.0 / TOTAL_LEN * 100 + "");
    }
} catch (Exception e) {
    e.printStackTrace();
}
Since InputStream is an abstract class, I used reflection to show its implementation: inputStream.getClass().getCanonicalName(). But the result is null.
So I tried inputStream.getClass() and the result is class com.android.okio.RealBufferedSource$1.
Why? How can I figure out the real implementation of the InputStream here?
PROBLEM SOLVED
The answer is an anonymous class. I went through the Android source code and found it. The location is /external/okhttp/okio/src/main/java/okio/RealBufferedSource.java and the code is below:
public InputStream inputStream() {
    return new InputStream() {
        @Override
        public int read() throws IOException {
            if (closed) throw new IOException("closed");
            if (buffer.size == 0) {
                long count = source.read(buffer, Segment.SIZE);
                if (count == -1) return -1;
            }
            return buffer.readByte() & 0xff;
        }

        @Override
        public int read(byte[] data, int offset, int byteCount) throws IOException {
            if (closed) throw new IOException("closed");
            checkOffsetAndCount(data.length, offset, byteCount);
            if (buffer.size == 0) {
                long count = source.read(buffer, Segment.SIZE);
                if (count == -1) return -1;
            }
            return buffer.read(data, offset, byteCount);
        }

        @Override
        public int available() throws IOException {
            if (closed) throw new IOException("closed");
            return (int) Math.min(buffer.size, Integer.MAX_VALUE);
        }

        @Override
        public void close() throws IOException {
            RealBufferedSource.this.close();
        }

        @Override
        public String toString() {
            return RealBufferedSource.this + ".inputStream()";
        }
    };
}
Notice that this method returns a new InputStream() that is anonymous, and getCanonicalName() is specified to return null for anonymous classes.
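This is standard Java behavior rather than anything okio-specific: anonymous classes have no canonical name, so getCanonicalName() returns null for them. A minimal, self-contained sketch (the class name AnonymousNameDemo is mine):

import java.io.IOException;
import java.io.InputStream;

public class AnonymousNameDemo {
    public static void main(String[] args) {
        // an anonymous subclass of InputStream, just like okio's inputStream()
        InputStream in = new InputStream() {
            @Override
            public int read() throws IOException {
                return -1;
            }
        };
        System.out.println(in.getClass().getCanonicalName()); // null
        System.out.println(in.getClass().getName());          // AnonymousNameDemo$1
        System.out.println(in.getClass().isAnonymousClass()); // true
    }
}

So when logging reflective type information, prefer getClass().getName() (or check isAnonymousClass() and fall back to the superclass name), since getCanonicalName() can legitimately be null.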
Related
When sending a file, you can do ctx.writeAndFlush(new ChunkedFile(new File("file.png")));.
How about a List<Object>?
The list contains Strings and the bytes of an image.
From the documentation there's ChunkedInput, but I'm not able to work out how to use it.
UPDATE
Let's say in my handler, inside the channelRead0(ChannelHandlerContext ctx, Object o) method where I want to send the List<Object>, I've done the following:
@Override
protected void channelRead0(ChannelHandlerContext ctx, Object o) throws Exception {
    List<Object> msg = new ArrayList<>();
    /** getting the bytes of the image **/
    byte[] imageInByte;
    BufferedImage originalImage = ImageIO.read(new File(fileName));
    // convert BufferedImage to byte array
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(originalImage, "png", baos);
    baos.flush();
    imageInByte = baos.toByteArray();
    baos.close();
    msg.clear();
    msg.add(0, "String");      // add the String into the list
    msg.add(1, imageInByte);   // add the bytes of the image into the list
    /** chunk the List<Object> and send it just like the chunked file **/
    ctx.writeAndFlush(new ChunkedInput(DONT_KNOW_WHAT_TO_DO_HERE));
}
Just implement your own ChunkedInput<ByteBuf>. Following the implementations shipped with Netty, you can implement it as follows:
public class ChunkedList implements ChunkedInput<ByteBuf> {

    private static final byte[] EMPTY = new byte[0];
    private byte[] previousPart = EMPTY;
    private final int chunkSize;
    private final Iterator<Object> iterator;

    public ChunkedList(int chunkSize, List<Object> objs) {
        // chunk size in bytes
        this.chunkSize = chunkSize;
        this.iterator = objs.iterator();
    }

    @Override
    public ByteBuf readChunk(ChannelHandlerContext ctx) {
        return readChunk(ctx.alloc());
    }

    @Override
    public ByteBuf readChunk(ByteBufAllocator allocator) {
        if (isEndOfInput())
            return null;
        ByteBuf buf = allocator.buffer(chunkSize);
        boolean release = true;
        try {
            int bytesRead = 0;
            if (previousPart.length > 0) {
                if (previousPart.length > chunkSize) {
                    throw new IllegalStateException();
                }
                bytesRead += previousPart.length;
                buf.writeBytes(previousPart);
            }
            boolean done = false;
            while (!done) {
                if (!iterator.hasNext()) {
                    done = true;
                    previousPart = EMPTY;
                } else {
                    Object o = iterator.next();
                    // depending on the encoding
                    byte[] bytes = o instanceof String ? ((String) o).getBytes() : (byte[]) o;
                    bytesRead += bytes.length;
                    if (bytesRead > chunkSize) {
                        done = true;
                        previousPart = bytes;
                    } else {
                        buf.writeBytes(bytes);
                    }
                }
            }
            release = false;
        } finally {
            if (release)
                buf.release();
        }
        return buf;
    }

    @Override
    public long length() {
        return -1;
    }

    @Override
    public boolean isEndOfInput() {
        return !iterator.hasNext() && previousPart.length == 0;
    }

    @Override
    public long progress() {
        return 0;
    }

    @Override
    public void close() {
        // nothing to release
    }
}
In order to write chunked content there is a special handler shipped with Netty: io.netty.handler.stream.ChunkedWriteHandler. So just add it to your pipeline. Here is the quote from the documentation:
A ChannelHandler that adds support for writing a large data stream
asynchronously neither spending a lot of memory nor getting
OutOfMemoryError. Large data streaming such as file transfer requires
complicated state management in a ChannelHandler implementation.
ChunkedWriteHandler manages such complicated states so that you can
send a large data stream without difficulties.
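For example, the wiring could look like this (a sketch assuming the ChunkedList above, inside your channel initializer; the handler names are illustrative):

// ChunkedWriteHandler pulls chunks from any ChunkedInput written downstream,
// so large payloads are streamed instead of buffered whole
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast("myHandler", new MyHandler());

and in channelRead0, the placeholder from the question becomes:

ctx.writeAndFlush(new ChunkedList(8192, msg)); // 8 KiB chunks; the size is arbitrary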
In the Java Spring framework, how can I create an endpoint that starts consuming the HTTP request body in chunks before the request has finished?
It seems like the default behavior is that the endpoint method is not executed until the request has ended.
The following Node.js server starts consuming the request body; how can I do the same with the Java Spring framework?
const http = require('http');
const server = http.createServer((request, response) => {
    request.on('data', (chunk) => {
        console.log('NEW CHUNK: ', chunk.toString().length);
    });
    request.on('end', () => {
        response.end('done');
    });
});
server.listen(3000);
Outputs:
NEW CHUNK: 65189
NEW CHUNK: 65536
NEW CHUNK: 65536
NEW CHUNK: 65536
NEW CHUNK: 54212
I'm not sure there is a solution for mapping a chunked request with Spring. What I'd do is something like this:
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import javax.servlet.http.HttpServletRequest;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class ChunkController {

    private static final int EOS = -1;

    @RequestMapping(method = RequestMethod.POST)
    public ResponseEntity<Void> upload(final HttpServletRequest request, @RequestParam final int chunkSize) {
        try (InputStream in = request.getInputStream()) {
            byte[] readBuffer = new byte[chunkSize];
            int nbByteRead = 0;
            int remainingByteToChunk = chunkSize;
            while ((nbByteRead = in.read(readBuffer, chunkSize - remainingByteToChunk, remainingByteToChunk)) != EOS) {
                remainingByteToChunk -= nbByteRead;
                if (remainingByteToChunk == 0) {
                    byte[] chunk = Arrays.copyOf(readBuffer, readBuffer.length);
                    remainingByteToChunk = readBuffer.length;
                    // do something with the chunk.
                }
            }
            if (remainingByteToChunk != chunkSize) {
                byte[] lastChunk = Arrays.copyOf(readBuffer, readBuffer.length - remainingByteToChunk);
                // do something with the last chunk
            }
            return new ResponseEntity<>(HttpStatus.OK);
        } catch (IOException e) {
            return new ResponseEntity<>(HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }
}
Alternatively you can define a constant for the chunk size, or ignore the chunk size entirely and just handle the result of in.read until the end of the stream.
Reading like this, you will need to parse the data yourself to find the actual chunks sent by the client. A typical chunked body will look like:
7\r\n
Mozilla\r\n
9\r\n
Developer\r\n
7\r\n
Network\r\n
0\r\n
\r\n
What you can do is create a custom InputStream like this (adapted from Apache HttpClient's ChunkedInputStream):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Objects;

public class ChunkedInputStream extends InputStream {

    public static final byte[] EMPTY = new byte[0];

    private final Charset charset;
    private InputStream in;
    private int chunkSize;
    private int pos;
    private boolean bof = true;
    private boolean eof = false;
    private boolean closed = false;

    public ChunkedInputStream(final InputStream in, Charset charset) throws IOException {
        if (in == null) {
            throw new IllegalArgumentException("InputStream parameter may not be null");
        }
        this.in = in;
        this.pos = 0;
        this.charset = Objects.requireNonNullElse(charset, StandardCharsets.US_ASCII);
    }

    @Override
    public int read() throws IOException {
        if (closed) {
            throw new IOException("Attempted read from closed stream.");
        }
        if (eof) {
            return -1;
        }
        if (pos >= chunkSize) {
            nextChunk();
            if (eof) {
                return -1;
            }
        }
        pos++;
        return in.read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (closed) {
            throw new IOException("Attempted read from closed stream.");
        }
        if (eof) {
            return -1;
        }
        if (pos >= chunkSize) {
            nextChunk();
            if (eof) {
                return -1;
            }
        }
        len = Math.min(len, chunkSize - pos);
        int count = in.read(b, off, len);
        pos += count;
        return count;
    }

    @Override
    public int read(byte[] b) throws IOException {
        return read(b, 0, b.length);
    }

    public byte[] readChunk() throws IOException {
        if (eof) {
            return EMPTY;
        }
        if (pos >= chunkSize) {
            nextChunk();
            if (eof) {
                return EMPTY;
            }
        }
        byte[] chunk = new byte[chunkSize];
        int nbByteRead = 0;
        int remainingByteToChunk = chunkSize;
        while (remainingByteToChunk > 0 && !eof) {
            nbByteRead = read(chunk, chunkSize - remainingByteToChunk, remainingByteToChunk);
            remainingByteToChunk -= nbByteRead;
        }
        if (remainingByteToChunk == 0) {
            return chunk;
        } else {
            return Arrays.copyOf(chunk, chunk.length - remainingByteToChunk);
        }
    }

    private void readCRLF() throws IOException {
        int cr = in.read();
        int lf = in.read();
        if ((cr != '\r') || (lf != '\n')) {
            throw new IOException("CRLF expected at end of chunk: " + cr + "/" + lf);
        }
    }

    private void nextChunk() throws IOException {
        if (!bof) {
            readCRLF();
        }
        chunkSize = getChunkSizeFromInputStream(in);
        bof = false;
        pos = 0;
        if (chunkSize == 0) {
            eof = true;
        }
    }

    private int getChunkSizeFromInputStream(final InputStream in) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        // States: 0=normal, 1=\r was scanned, 2=inside quoted string, -1=end
        int state = 0;
        while (state != -1) {
            int b = in.read();
            if (b == -1) {
                throw new IOException("chunked stream ended unexpectedly");
            }
            switch (state) {
                case 0:
                    switch (b) {
                        case '\r':
                            state = 1;
                            break;
                        case '\"':
                            state = 2;
                            /* fall through */
                        default:
                            baos.write(b);
                    }
                    break;
                case 1:
                    if (b == '\n') {
                        state = -1;
                    } else {
                        // this was not CRLF
                        throw new IOException("Protocol violation: Unexpected"
                                + " single newline character in chunk size");
                    }
                    break;
                case 2:
                    switch (b) {
                        case '\\':
                            b = in.read();
                            baos.write(b);
                            break;
                        case '\"':
                            state = 0;
                            /* fall through */
                        default:
                            baos.write(b);
                    }
                    break;
                default:
                    throw new RuntimeException("assertion failed");
            }
        }
        // parse data
        String dataString = baos.toString(charset);
        int separator = dataString.indexOf(';');
        dataString = (separator > 0)
                ? dataString.substring(0, separator).trim()
                : dataString.trim();
        int result;
        try {
            result = Integer.parseInt(dataString.trim(), 16);
        } catch (NumberFormatException e) {
            throw new IOException("Bad chunk size: " + dataString);
        }
        return result;
    }

    @Override
    public void close() throws IOException {
        if (!closed) {
            try {
                if (!eof) {
                    exhaustInputStream(this);
                }
            } finally {
                eof = true;
                closed = true;
            }
        }
    }

    static void exhaustInputStream(InputStream inStream) throws IOException {
        // read and discard the remainder of the message
        byte[] buffer = new byte[1024];
        while (inStream.read(buffer) >= 0) {
            ;
        }
    }
}
In the controller you can keep the same code and simply wrap request.getInputStream() with this stream. Note that read alone still won't give you the actual client chunks, which is why I added the readChunk() method:
@PostMapping("/upload")
public ResponseEntity<Void> upload(final HttpServletRequest request, @RequestHeader HttpHeaders headers) {
    Charset charset = StandardCharsets.US_ASCII;
    if (headers.getContentType() != null) {
        charset = Objects.requireNonNullElse(headers.getContentType().getCharset(), charset);
    }
    try (ChunkedInputStream in = new ChunkedInputStream(request.getInputStream(), charset)) {
        byte[] chunk;
        while ((chunk = in.readChunk()).length > 0) {
            // do something with the chunk.
            System.out.println(new String(chunk, Objects.requireNonNullElse(charset, StandardCharsets.US_ASCII)));
        }
        return new ResponseEntity<>(HttpStatus.OK);
    } catch (IOException e) {
        return new ResponseEntity<>(HttpStatus.INTERNAL_SERVER_ERROR);
    }
}
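To exercise this endpoint you need a client that really sends Transfer-Encoding: chunked. A minimal sketch with HttpURLConnection (the URL, chunk size, and dummy body are illustrative); setChunkedStreamingMode is what forces chunked encoding instead of a buffered Content-Length body:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChunkedUploadClient {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:8080/upload").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(8192); // stream in 8 KiB chunks, no Content-Length
        try (OutputStream out = conn.getOutputStream()) {
            byte[] chunk = new byte[8192];
            for (int i = 0; i < 16; i++) {
                out.write(chunk); // 16 x 8 KiB of zero bytes as a dummy body
            }
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}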
Is there any way to create an InputStream that wraps a list of UTF-8 Strings? I'd like to do something like:
InputStream in = new XyzInputStream( List<String> lines )
You can build your source byte[] array using a ByteArrayOutputStream and then read from it using a ByteArrayInputStream.
So create the array as follows:
List<String> source = new ArrayList<String>();
source.add("one");
source.add("two");
source.add("three");
ByteArrayOutputStream baos = new ByteArrayOutputStream();
for (String line : source) {
    baos.write(line.getBytes(StandardCharsets.UTF_8)); // the question asks for UTF-8 explicitly
}
byte[] bytes = baos.toByteArray();
And reading from it is as simple as:
InputStream in = new ByteArrayInputStream(bytes);
Alternatively, depending on what you're trying to do, a StringReader might be better.
You can concatenate all the lines together to create a String, then convert it to a byte array using String#getBytes and pass it into ByteArrayInputStream. However, this is not the most efficient way of doing it.
In short, no, there is no way of doing this using existing JDK classes. You could, however, implement your own InputStream that reads from a List of Strings.
EDIT: Dave Web has an answer above, which I think is the way to go. If you need a reusable class, then something like this might do:
public class StringsInputStream<T extends Iterable<String>> extends InputStream {

    private ByteArrayInputStream bais = null;

    public StringsInputStream(final T strings) throws IOException {
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        for (String line : strings) {
            outputStream.write(line.getBytes());
        }
        bais = new ByteArrayInputStream(outputStream.toByteArray());
    }

    @Override
    public int read() throws IOException {
        return bais.read();
    }

    @Override
    public int read(byte[] b) throws IOException {
        return bais.read(b);
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        return bais.read(b, off, len);
    }

    @Override
    public long skip(long n) throws IOException {
        return bais.skip(n);
    }

    @Override
    public int available() throws IOException {
        return bais.available();
    }

    @Override
    public void close() throws IOException {
        bais.close();
    }

    @Override
    public synchronized void mark(int readlimit) {
        bais.mark(readlimit);
    }

    @Override
    public synchronized void reset() throws IOException {
        bais.reset();
    }

    @Override
    public boolean markSupported() {
        return bais.markSupported();
    }

    public static void main(String[] args) throws Exception {
        List<String> source = new ArrayList<String>();
        source.add("foo ");
        source.add("bar ");
        source.add("baz");
        StringsInputStream<List<String>> in = new StringsInputStream<List<String>>(source);
        int read = in.read();
        while (read != -1) {
            System.out.print((char) read);
            read = in.read();
        }
    }
}
This is basically an adapter for ByteArrayInputStream.
You can create some kind of IterableInputStream:
public class IterableInputStream<T> extends InputStream {

    public static final int EOF = -1;

    private static final InputStream EOF_IS = new InputStream() {
        @Override
        public int read() throws IOException {
            return EOF;
        }
    };

    private final Iterator<T> iterator;
    private final Function<T, byte[]> mapper;
    private InputStream current;

    public IterableInputStream(Iterable<T> iterable, Function<T, byte[]> mapper) {
        this.iterator = iterable.iterator();
        this.mapper = mapper;
        next();
    }

    @Override
    public int read() throws IOException {
        int n = current.read();
        while (n == EOF && current != EOF_IS) {
            next();
            n = current.read();
        }
        return n;
    }

    private void next() {
        current = iterator.hasNext()
                ? new ByteArrayInputStream(mapper.apply(iterator.next()))
                : EOF_IS;
    }
}
To use it:
public static void main(String[] args) throws IOException {
    Iterable<String> strings = Arrays.asList("1", "22", "333", "4444");
    try (InputStream is = new IterableInputStream<String>(strings, String::getBytes)) {
        for (int b = is.read(); b != -1; b = is.read()) {
            System.out.print((char) b);
        }
    }
}
In my case I had to convert a list of strings into the equivalent file content (with a line feed after each line).
This was my solution:
List<String> inputList = Arrays.asList("line1", "line2", "line3");
byte[] bytes = inputList.stream().collect(Collectors.joining("\n", "", "\n")).getBytes();
InputStream inputStream = new ByteArrayInputStream(bytes);
You can do something similar to this:
https://commons.apache.org/sandbox/flatfile/xref/org/apache/commons/flatfile/util/ConcatenatedInputStream.html
It just implements the read() method of InputStream and has a list of InputStreams it is concatenating. Once it reads an EOF it starts reading from the next InputStream. Just convert the Strings to ByteArrayInputStreams.
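If you'd rather not pull in a sandbox dependency, the JDK's own java.io.SequenceInputStream does the same concatenation; a quick sketch:

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

public class SequenceDemo {
    public static void main(String[] args) throws Exception {
        List<String> lines = Arrays.asList("one", "two", "three");
        // wrap each String as its own ByteArrayInputStream and chain them
        List<InputStream> streams = lines.stream()
                .map(s -> (InputStream) new ByteArrayInputStream(s.getBytes(StandardCharsets.UTF_8)))
                .collect(Collectors.toList());
        try (InputStream in = new SequenceInputStream(Collections.enumeration(streams))) {
            for (int b = in.read(); b != -1; b = in.read()) {
                System.out.print((char) b); // prints "onetwothree"
            }
        }
    }
}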
You can also do it this way: create a serializable List, write it out with an ObjectOutputStream, and read it back with an ObjectInputStream.
List<String> quarks = Arrays.asList(
        "up", "down", "strange", "charm", "top", "bottom"
);

// serialize the List
// note the use of abstract base class references
try {
    // use buffering
    OutputStream file = new FileOutputStream("quarks.ser");
    OutputStream buffer = new BufferedOutputStream(file);
    ObjectOutput output = new ObjectOutputStream(buffer);
    try {
        output.writeObject(quarks);
    } finally {
        output.close();
    }
} catch (IOException ex) {
    fLogger.log(Level.SEVERE, "Cannot perform output.", ex);
}

// deserialize the quarks.ser file
// note the use of abstract base class references
try {
    // use buffering
    InputStream file = new FileInputStream("quarks.ser");
    InputStream buffer = new BufferedInputStream(file);
    ObjectInput input = new ObjectInputStream(buffer);
    try {
        // deserialize the List
        List<String> recoveredQuarks = (List<String>) input.readObject();
        // display its data
        for (String quark : recoveredQuarks) {
            System.out.println("Recovered Quark: " + quark);
        }
    } finally {
        input.close();
    }
} catch (ClassNotFoundException ex) {
    fLogger.log(Level.SEVERE, "Cannot perform input. Class not found.", ex);
} catch (IOException ex) {
    fLogger.log(Level.SEVERE, "Cannot perform input.", ex);
}
I'd like to propose my simple solution:
public class StringListInputStream extends InputStream {

    private final List<String> strings;
    private int pos = 0;
    private byte[] bytes = null;
    private int i = 0;

    public StringListInputStream(List<String> strings) {
        this.strings = strings;
        this.bytes = strings.get(0).getBytes();
    }

    @Override
    public int read() throws IOException {
        if (pos >= bytes.length) {
            if (!next()) return -1;
            else return read();
        }
        // mask to an unsigned value so bytes >= 0x80 are not mistaken for EOF
        return bytes[pos++] & 0xff;
    }

    private boolean next() {
        if (i + 1 >= strings.size()) return false;
        pos = 0;
        bytes = strings.get(++i).getBytes();
        return true;
    }
}
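Usage is the same as for any other InputStream; a quick sketch:

StringListInputStream in = new StringListInputStream(Arrays.asList("foo", "bar"));
StringBuilder sb = new StringBuilder();
for (int b = in.read(); b != -1; b = in.read()) {
    sb.append((char) b);
}
System.out.println(sb); // prints "foobar"

Note that this reads bytes in the platform default charset, since the class calls getBytes() without an argument.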
I am trying to figure out an object design to implement a large file (~600 MB) repository in the database using Hibernate.
Please suggest a correct approach/design.
class ModelClass {
    String name; // metadata
    ...

    // Option 1.
    byte[] file; // I don't want to load the content of the entire file
                 // in memory by using this, but Hibernate recognizes
                 // this datatype

    // Option 2.
    InputStream inputStream;
    OutputStream outputStream;
    // I can have methods to provide the input or output stream,
    // but I don't think that's a clean approach. I am not sure how
    // I will be able to make Hibernate work with streams.

    // Option 3.
    File fileHandle;
}
Any other options?
I would like to call the save(Object) method of hibernateTemplate to save the object in the database. I don't know whether I should keep just the metadata in the class and handle the file save and retrieval separately.
Thanks in advance.
Another workable solution is to use the Work interface. The purpose is to avoid loading the file content into memory:
session.doWork(new Work() {
    @Override
    public void execute(Connection conn) {
        // direct SQL queries go here
    }
});
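For example, to stream a file straight into a BLOB column without ever materializing it on the heap, you can combine doWork with PreparedStatement.setBinaryStream. This is only a sketch; the table file_store(name, content) and the file path are made up (imports from java.io, java.sql, and org.hibernate.jdbc.Work assumed):

session.doWork(new Work() {
    @Override
    public void execute(Connection conn) throws SQLException {
        File file = new File("/path/to/big.bin");
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO file_store (name, content) VALUES (?, ?)");
             InputStream in = new FileInputStream(file)) {
            ps.setString(1, file.getName());
            // the driver pulls from the stream in chunks, so the
            // ~600 MB file is never fully loaded into memory
            ps.setBinaryStream(2, in, file.length());
            ps.executeUpdate();
        } catch (IOException e) {
            throw new SQLException(e);
        }
    }
});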
I have written a SerializableFile class that keeps data in a file. When the object is read, it creates a temporary file.
Here it is:
public class SerializableFile implements Serializable {

    private static final File TEMP_DIR = getTempDir();

    private transient boolean temporary;
    private transient String name;
    private transient File file;

    public SerializableFile() {
    }

    public SerializableFile(File file) {
        this.file = file;
        this.name = file.getName();
        this.temporary = false;
    }

    @Override
    protected void finalize() throws Throwable {
        dispose();
        super.finalize();
    }

    public void dispose() {
        if (temporary && file != null) {
            file.delete();
            file = null;
        }
    }

    public File keep(String name) throws IOException {
        if (temporary) {
            temporary = false;
        } else {
            File newFile = new File(TEMP_DIR, name);
            keepAs(newFile);
            file = newFile;
        }
        return file;
    }

    public void keepAs(File outFile) throws IOException {
        if ((temporary || file.equals(outFile)) && file.renameTo(outFile)) {
            temporary = false;
            file = outFile;
        } else {
            InputStream in = new FileInputStream(file);
            try {
                OutputStream out = new FileOutputStream(outFile);
                try {
                    byte buf[] = new byte[4096];
                    for (int n = in.read(buf); n > 0; n = in.read(buf)) {
                        out.write(buf, 0, n);
                    }
                } finally {
                    out.close();
                }
            } finally {
                in.close();
            }
            outFile.setLastModified(file.lastModified());
        }
    }

    public String getName() {
        return name;
    }

    public File getFile() {
        return file;
    }

    public long lastModified() {
        return file.lastModified();
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        int size = (int) file.length();
        long date = file.lastModified();
        out.writeUTF(name);
        out.writeInt(size);
        out.writeLong(date);
        InputStream in = new FileInputStream(file);
        try {
            byte buf[] = new byte[4096];
            while (size > 0) {
                int n = in.read(buf);
                if (n <= 0 || n > size) {
                    throw new IOException("Unexpected file size");
                }
                out.write(buf, 0, n);
                size -= n;
            }
        } finally {
            in.close();
        }
    }

    private void readObject(ObjectInputStream in) throws IOException {
        name = in.readUTF();
        int size = in.readInt();
        long date = in.readLong();
        file = File.createTempFile("tmp", ".tmp", TEMP_DIR);
        OutputStream out = new FileOutputStream(file);
        try {
            byte buf[] = new byte[4096];
            while (size > 0) {
                int n = in.read(buf, 0, size <= buf.length ? size : buf.length);
                if (n <= 0 || n > size) {
                    throw new IOException("Unexpected file size");
                }
                out.write(buf, 0, n);
                size -= n;
            }
        } finally {
            out.close();
        }
        file.setLastModified(date);
        temporary = true;
    }

    private static File getTempDir() {
        File dir;
        String temp = System.getProperty("com.lagalerie.live.temp-dir");
        if (temp != null) {
            dir = new File(temp);
        } else {
            String home = System.getProperty("user.home");
            dir = new File(home, "temp");
        }
        if (!dir.isDirectory() && !dir.mkdirs()) {
            throw new RuntimeException("Could not create temp dir " + dir);
        }
        return dir;
    }
}
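A round trip with this class looks like the following quick sketch (file names are illustrative); the deserialized copy lands in a temporary file, and keep(...) marks it permanent so dispose() won't delete it:

import java.io.*;

public class SerializableFileDemo {
    public static void main(String[] args) throws Exception {
        SerializableFile original = new SerializableFile(new File("report.pdf"));
        // serialize to memory; a socket or file stream works the same way
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }
        // readObject() recreates the content as a temporary file
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            SerializableFile copy = (SerializableFile) in.readObject();
            File restored = copy.keep("report-copy.pdf");
            System.out.println("Restored to " + restored);
        }
    }
}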
OpenJPA supports a @Persistent annotation with some databases:
MySQL
Oracle
PostgreSQL
SQL Server
DB2
Even if you are still using an RDBMS as a data store, you should consider storing the binary data on a file system and saving only its directory/path in the database, instead of storing it as a BLOB or CLOB.
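A minimal sketch of that approach (the entity, table, and target directory are all illustrative): persist only the metadata and the path, and stream the bytes to disk yourself.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.Session;

@Entity
public class StoredFile {
    @Id
    @GeneratedValue
    private Long id;
    private String name; // metadata stays in the database
    private String path; // where the bytes actually live on disk

    public static StoredFile save(Session session, String name, InputStream content)
            throws IOException {
        Path target = Paths.get("/var/data/files", UUID.randomUUID().toString());
        Files.copy(content, target); // streams to disk; never holds the whole file in memory
        StoredFile f = new StoredFile();
        f.name = name;
        f.path = target.toString();
        session.save(f);
        return f;
    }
}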
In a traditional blocking-thread server, I would do something like this
class ServerSideThread {
    ObjectInputStream in;
    ObjectOutputStream out;
    Engine engine;

    public ServerSideThread(Socket socket, Engine engine) throws IOException {
        in = new ObjectInputStream(socket.getInputStream());
        out = new ObjectOutputStream(socket.getOutputStream());
        this.engine = engine;
    }

    public void sendMessage(Message m) throws IOException {
        out.writeObject(m);
    }

    public void run() {
        try {
            while (true) {
                Message m = (Message) in.readObject();
                engine.queueMessage(m, this); // give the engine a message with this as a callback
            }
        } catch (IOException | ClassNotFoundException e) {
            // connection closed or bad data; let the thread die
        }
    }
}
Now, the object can be expected to be quite large. In my NIO loop, I can't simply wait for the whole object to come through; all my other connections (with much smaller workloads) would be waiting on me.
How can I get notified only once a connection has received the entire object, before it tells my NIO channel it's ready?
You can write the object to a ByteArrayOutputStream, which allows you to send the length ahead of the object itself. On the receiving side, read the required amount of data before attempting to decode it.
However, you are likely to find it much simpler and more efficient to use blocking IO (rather than NIO) with Object*Stream.
EDIT: something like this:
public static void send(SocketChannel socket, Serializable serializable) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    // reserve four bytes for the length header
    for (int i = 0; i < 4; i++) baos.write(0);
    ObjectOutputStream oos = new ObjectOutputStream(baos);
    oos.writeObject(serializable);
    oos.close();
    final ByteBuffer wrap = ByteBuffer.wrap(baos.toByteArray());
    wrap.putInt(0, baos.size() - 4);
    socket.write(wrap);
}

private final ByteBuffer lengthByteBuffer = ByteBuffer.wrap(new byte[4]);
private ByteBuffer dataByteBuffer = null;
private boolean readLength = true;

public Serializable recv(SocketChannel socket) throws IOException, ClassNotFoundException {
    if (readLength) {
        socket.read(lengthByteBuffer);
        if (lengthByteBuffer.remaining() == 0) {
            readLength = false;
            dataByteBuffer = ByteBuffer.allocate(lengthByteBuffer.getInt(0));
            lengthByteBuffer.clear();
        }
    } else {
        socket.read(dataByteBuffer);
        if (dataByteBuffer.remaining() == 0) {
            ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(dataByteBuffer.array()));
            final Serializable ret = (Serializable) ois.readObject();
            // clean up
            dataByteBuffer = null;
            readLength = true;
            return ret;
        }
    }
    return null;
}
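The recv above is written to be called repeatedly from a selector loop: it returns null until a complete object has accumulated, so a slow large transfer never blocks the other connections. A sketch of such a loop (channel registration omitted; handle(...) is a placeholder):

Selector selector = Selector.open();
// ... register connected SocketChannels with SelectionKey.OP_READ ...
while (true) {
    selector.select();
    for (SelectionKey key : selector.selectedKeys()) {
        if (key.isReadable()) {
            Serializable msg = recv((SocketChannel) key.channel());
            if (msg != null) {
                handle(msg); // a complete object arrived; only now hand it off
            }
            // msg == null: the object is still partial; wait for the next OP_READ
        }
    }
    selector.selectedKeys().clear();
}

Note that recv keeps its partial-read state in instance fields (lengthByteBuffer, dataByteBuffer, readLength), so you need one instance of this reader per connection.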
Inspired by the code above, I've created a GoogleCode project.
It includes a simple unit test:
SeriServer server = new SeriServer(6001, nthreads);
final SeriClient client[] = new SeriClient[nclients];

// write the data with multiple threads to flood the server
for (int cnt = 0; cnt < nclients; cnt++) {
    final int counterVal = cnt;
    client[cnt] = new SeriClient("localhost", 6001);
    Thread t = new Thread(new Runnable() {
        public void run() {
            try {
                for (int cnt2 = 0; cnt2 < nsends; cnt2++) {
                    String msg = "[" + counterVal + "]";
                    client[counterVal].send(msg);
                }
            } catch (IOException e) {
                e.printStackTrace();
                fail();
            }
        }
    });
    t.start();
}

HashMap<String, Integer> counts = new HashMap<String, Integer>();
int nullCounts = 0;
for (int cnt = 0; cnt < nsends * nclients;) {
    // read the data from a vector (that the server pool automatically fills)
    SeriDataPackage data = server.read();
    if (data == null) {
        nullCounts++;
        System.out.println("NULL");
        continue;
    }
    if (counts.containsKey(data.getObject())) {
        Integer c = counts.get(data.getObject());
        counts.put((String) data.getObject(), c + 1);
    } else {
        counts.put((String) data.getObject(), 1);
    }
    cnt++;
    System.out.println("Received: " + data.getObject());
}

// assert the results
Collection<Integer> values = counts.values();
for (Integer value : values) {
    int ivalue = value;
    assertEquals(nsends, ivalue);
    System.out.println(value);
}
assertEquals(counts.size(), nclients);
System.out.println(counts.size());
System.out.println("Finishing");
server.shutdown();