Spring STOMP over WebSocket: Stream large files - Java

My SockJS client in the webpage sends messages with a frame size of 16K. The message size limit is what determines the maximum size of the file that I can transfer.
Below is what I found in the doc.
/**
* Configure the maximum size for an incoming sub-protocol message.
* For example a STOMP message may be received as multiple WebSocket messages
* or multiple HTTP POST requests when SockJS fallback options are in use.
*
* <p>In theory a WebSocket message can be almost unlimited in size.
* In practice WebSocket servers impose limits on incoming message size.
* STOMP clients for example tend to split large messages around 16K
* boundaries. Therefore a server must be able to buffer partial content
* and decode when enough data is received. Use this property to configure
* the max size of the buffer to use.
*
* <p>The default value is 64K (i.e. 64 * 1024).
*
* <p><strong>NOTE</strong> that the current version 1.2 of the STOMP spec
* does not specifically discuss how to send STOMP messages over WebSocket.
* Version 2 of the spec will but in the mean time existing client libraries
* have already established a practice that servers must handle.
*/
public WebSocketTransportRegistration setMessageSizeLimit(int messageSizeLimit) {
this.messageSizeLimit = messageSizeLimit;
return this;
}
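For context, that setting is applied in a STOMP broker configuration roughly like this (a sketch only; the class name and the 20 MB values are placeholders, not from my project):
@Configuration
@EnableWebSocketMessageBroker
public class StompConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
        // Buffer used to reassemble a STOMP message that arrives as multiple
        // WebSocket frames (or multiple SockJS HTTP requests).
        registration.setMessageSizeLimit(20 * 1024 * 1024);      // placeholder: 20 MB
        registration.setSendBufferSizeLimit(20 * 1024 * 1024);   // outbound buffering
        registration.setSendTimeLimit(20_000);                   // ms allowed per send
    }
}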
MY QUESTION:
Can I set up partial messaging so that a file is transferred part by part, rather than as a single message as is done now?
Update:
Still looking for a solution with partial messaging.
Meanwhile, I am using HTTP for large messages (i.e. file uploads/downloads in my application).

Can I set up partial messaging so that a file is transferred part by part, rather than as a single message as is done now?
Yes. Here is the relevant config from my Spring Boot experimental project - basically UploadWSHandler is registered and WebSocketTransportRegistration.setMessageSizeLimit is set.
@Configuration
@EnableWebSocket
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer implements WebSocketConfigurer {

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(new UploadWSHandler(), "/binary");
    }

    @Override
    public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
        registration.setMessageSizeLimit(50 * 1024 * 1024);
    }
}
The UploadWSHandler is as follows. Sorry, there is a lot of code here - key points:
supportsPartialMessages, which returns true.
handleBinaryMessage will be called multiple times with partial messages, so we need to assemble the bytes. afterConnectionEstablished therefore establishes an identity using the WebSocket URL query. You don't have to use this mechanism; I chose it to keep the client side simple, so that I call webSocket.send(files[0]) only once, i.e. I am not slicing the file Blob object on the JavaScript side. (Side point: I want to use a plain WebSocket on the client side - no STOMP/SockJS.)
The client side's internal chunking mechanism delivers partial messages, and message.isLast() marks the last one.
For demo purposes I am accumulating the bytes in memory with FileUploadInFlight and writing them to the file system at the end, but you don't have to do this and can stream them somewhere else as you go (see the sketch after the handler code below).
public class UploadWSHandler extends BinaryWebSocketHandler {

    Map<WebSocketSession, FileUploadInFlight> sessionToFileMap = new WeakHashMap<>();

    @Override
    public boolean supportsPartialMessages() {
        return true;
    }

    @Override
    protected void handleBinaryMessage(WebSocketSession session, BinaryMessage message) throws Exception {
        ByteBuffer payload = message.getPayload();
        FileUploadInFlight inflightUpload = sessionToFileMap.get(session);
        if (inflightUpload == null) {
            throw new IllegalStateException("This is not expected");
        }
        inflightUpload.append(payload);
        if (message.isLast()) {
            Path basePath = Paths.get(".", "uploads", UUID.randomUUID().toString());
            Files.createDirectories(basePath);
            FileChannel channel = new FileOutputStream(
                    Paths.get(basePath.toString(), inflightUpload.name).toFile(), false).getChannel();
            channel.write(ByteBuffer.wrap(inflightUpload.bos.toByteArray()));
            channel.close();
            session.sendMessage(new TextMessage("UPLOAD " + inflightUpload.name));
            session.close();
            sessionToFileMap.remove(session);
        }
        String response = "Upload Chunk: size " + payload.array().length;
        System.out.println(response);
    }

    @Override
    public void afterConnectionEstablished(WebSocketSession session) throws Exception {
        sessionToFileMap.put(session, new FileUploadInFlight(session));
    }

    static class FileUploadInFlight {
        String name;
        String uniqueUploadId;
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        /**
         * Fragile constructor - beware, not prod ready.
         * @param session
         */
        FileUploadInFlight(WebSocketSession session) {
            String query = session.getUri().getQuery();
            String uploadSessionIdBase64 = query.split("=")[1];
            String uploadSessionId = new String(Base64Utils.decodeUrlSafe(uploadSessionIdBase64.getBytes()));
            System.out.println(uploadSessionId);
            List<String> sessionIdentifiers = Splitter.on("\\").splitToList(uploadSessionId);
            String uniqueUploadId = session.getRemoteAddress().toString() + sessionIdentifiers.get(0);
            String fileName = sessionIdentifiers.get(1);
            this.name = fileName;
            this.uniqueUploadId = uniqueUploadId;
        }

        public void append(ByteBuffer byteBuffer) throws IOException {
            bos.write(byteBuffer.array());
        }
    }
}
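If you would rather stream each chunk to disk as it arrives instead of buffering the whole file in a ByteArrayOutputStream, a minimal sketch of the idea (a hypothetical replacement class, not part of the project above; uses java.nio.file and java.nio.channels):
static class StreamingUploadInFlight {
    final String name;
    private final FileChannel channel;

    // Open the target file once; every chunk is appended as it arrives.
    StreamingUploadInFlight(String name, Path directory) throws IOException {
        this.name = name;
        Files.createDirectories(directory);
        this.channel = FileChannel.open(directory.resolve(name),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
    }

    // Called from handleBinaryMessage for every partial message.
    void append(ByteBuffer chunk) throws IOException {
        while (chunk.hasRemaining()) {
            channel.write(chunk);
        }
    }

    // Called once message.isLast() is true.
    void close() throws IOException {
        channel.close();
    }
}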
BTW, a working project is also available: sprint-boot-with-websocked-chunking-assembly-and-fetch, in the with-websocked-chunking-assembly-and-fetch branch.

Related

Azure Queue trigger not working with Java

I have a Spring Boot application which publishes messages to an Azure queue. I also have an Azure QueueTrigger function written in Java which listens to the same queue that the Spring Boot application publishes to. The QueueTrigger function is not able to detect messages published to the queue.
Here is my publisher code
public static void addQueueMessage(String connectStr, String queueName, String message) {
try {
// Instantiate a QueueClient which will be
// used to create and manipulate the queue
QueueClient queueClient = new QueueClientBuilder()
.connectionString(connectStr)
.queueName(queueName)
.buildClient();
System.out.println("Adding message to the queue: " + message);
// Add a message to the queue
queueClient.sendMessage(message);
} catch (QueueStorageException e) {
// Output the exception message and stack trace
System.out.println(e.getMessage());
e.printStackTrace();
}
}
Here is my queueTrigger function app code
@FunctionName("queueprocessor")
public void run(
        @QueueTrigger(name = "message",
                      queueName = "queuetest",
                      connection = "AzureWebJobsStorage") String message,
        final ExecutionContext context
) {
    context.getLogger().info(message);
}
I'm passing the same connection string and queue name, but it still doesn't work. If I run the function on my local machine it gets triggered, but with an error (see the error image).
As the official doc suggests,
Functions expect a base64 encoded string. Any adjustments to the encoding type (in order to prepare data as a base64 encoded string) need to be implemented in the calling service.
Update the sender code to send a base64-encoded message:
String encodedMsg = Base64.getEncoder().encodeToString(message.getBytes());
queueClient.sendMessage(encodedMsg);
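Putting it together, a minimal sketch of the updated publisher (same connection string and queue name as above; java.util.Base64 and java.nio.charset.StandardCharsets are assumed to be imported):
public static void addQueueMessage(String connectStr, String queueName, String message) {
    try {
        QueueClient queueClient = new QueueClientBuilder()
                .connectionString(connectStr)
                .queueName(queueName)
                .buildClient();
        // The Java Functions runtime expects the queue payload to be base64 encoded,
        // so encode the message before sending it.
        String encodedMsg = Base64.getEncoder().encodeToString(message.getBytes(StandardCharsets.UTF_8));
        System.out.println("Adding message to the queue: " + message);
        queueClient.sendMessage(encodedMsg);
    } catch (QueueStorageException e) {
        System.out.println(e.getMessage());
        e.printStackTrace();
    }
}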

Netty Correlating Request and Responses

I want to write a proxy for a TCP binary protocol. I’m using the HexDump example in Netty’s repo as a guide.
https://github.com/netty/netty/tree/4.1/example/src/main/java/io/netty/example/proxy
This works fine. But I sometimes want to modify the response based on the original request.
Looking around, it seems that the inbound channel's AttributeMap could be the place to store such request details (some more details below).
io.netty.util.AttributeMap
But while it sort of works, sometimes one request overwrites the details of another request.
This makes sense: Netty is asynchronous and you can't really guarantee when something is going to happen.
So I was wondering how I can reliably correlate each request with its response. Note I can't
change the protocol, which might otherwise have been one way to pass details between request and response.
Thanks for your insight.
HexDumpFrontendHandler
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) throws InterruptedException {
    …
    ctx.channel().attr(utils.REQUEST_ATTRIBUTE).set(requestDetails);
    …
}
@Override
public void channelActive(ChannelHandlerContext ctx) {
    final Channel inboundChannel = ctx.channel();
    // Start the connection attempt.
    Bootstrap b = new Bootstrap();
    b.group(inboundChannel.eventLoop())
            .channel(ctx.channel().getClass())
            .handler(new HexDumpBackendHandler(inboundChannel))
            .option(ChannelOption.AUTO_READ, false);
    ChannelFuture f = b.connect(remoteHost, remotePort);
    outboundChannel = f.channel();
    f.addListener((ChannelFutureListener) future -> {
        if (future.isSuccess()) {
            // connection complete start to read first data
            inboundChannel.read();
        } else {
            // Close the connection if the connection attempt has failed.
            inboundChannel.close();
        }
    });
}
HexDumpBackendHandler
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    …
    RequestDetails requestDetails = inboundChannel.attr(utils.REQUEST_ATTRIBUTE).getAndRemove();
    …
}
My solution (workaround?) to this was the following. The protocol I was working with couldn't guarantee a globally unique identifier per request, but it did uniquely identify requests within a TCP connection.
So the following combination allowed me to create a ConcurrentHashMap keyed by:
host + ephemeral port + identifier local to the connection
This worked for my case. I'm sure there are other ways to solve it within the Netty framework itself.
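A minimal sketch of that idea (RequestDetails and the per-connection request identifier come from the protocol described above; the class and method names here are hypothetical):
final class RequestCorrelator {
    // Shared between the frontend and backend handlers of one proxy instance.
    private final ConcurrentHashMap<String, RequestDetails> pending = new ConcurrentHashMap<>();

    // host + ephemeral port identify the TCP connection; requestId is local to it.
    private String key(Channel inboundChannel, long requestId) {
        InetSocketAddress remote = (InetSocketAddress) inboundChannel.remoteAddress();
        return remote.getHostString() + ":" + remote.getPort() + "/" + requestId;
    }

    // Called from the frontend handler's channelRead before forwarding the request.
    void remember(Channel inboundChannel, long requestId, RequestDetails details) {
        pending.put(key(inboundChannel, requestId), details);
    }

    // Called from the backend handler's channelRead when the matching response arrives.
    RequestDetails recall(Channel inboundChannel, long requestId) {
        return pending.remove(key(inboundChannel, requestId));
    }
}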

Netty server send a byte[] encoded by Protobuf, but C# client Socket.Receive keeps being 0

I am trying to build a Unity game demo with networking, using C# for the client and Java for the server.
To be specific, the server communication is implemented with Netty.
I also brought in Protobuf, which helps me define the message protocols.
As I am new to server programming, handling TCP packet merging and loss has not been considered in my code yet.
When I created sockets from the client and sent messages to the server, everything went well.
The problem happened when the server replied:
In the client, an async method is ready to receive messages. When I simply sent a string-format message from the server, the method was able to get it.
But when I replaced the message with a 4-byte byte[], encoded from a Protobuf Message object, the client showed that it received NOTHING.
When I print what I've sent in the server console, it looks like this:
00001000
00000001
00010000
00000001
My server code overrides Netty's channelRead and channelReadComplete methods.
In channelRead, ChannelHandlerContext.write is invoked to write the message to the outbound buffer.
In channelReadComplete, ChannelHandlerContext.flush is invoked so that the message is finally sent.
channelRead()
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    Request.MsgPack msgPack = (Request.MsgPack) msg;
    Request.MsgPack.MsgType type = msgPack.getType();
    switch (type)
    {
        case GetServerState:
            final Request.GetServerState gssbody = msgPack.getGetServerState();
            System.out.println("Received a message of type " + type + ", content:" +
                    "\nrequestId = " + gssbody.getRequestId()
            );
            byte[] bytes = ServerStateManager.getState(gssbody.getRequestId());
            ctx.write(bytes);
            break;
getState(), including the Protobuf encoding procedure:
public static byte[] getState(int requestId)
{
ReturnServerState.Message.Builder replyBuilder = ReturnServerState.Message.newBuilder();
replyBuilder.setRequestId(requestId);
replyBuilder.setIsIdle(new ServerStateManager().isIdle());
return replyBuilder.build().toByteArray();
}
channelReadComplete()
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    try
    {
        ctx.flush();
    }
    finally
    {
        ctx.close();
    }
}
Client code:
public class ShortLink
{
Socket clientSocket = null;
static byte[] result = new byte[1024];
Task ReceiveAsync<T>(string ip, int port)
{
return Task.Run(() =>
{
T component = default(T);
while (clientSocket.Receive(result) == 0)
{
break;
ReceiveAsync is invoked like this:
await ReceiveAsync<ReturnServerState>(ip, port);
When I found that clientSocket.Receive(result) always returned 0, I tried to log result[0], result[1], result[2], result[3] like this:
Debug.Log(Convert.ToString(result[0]) + ", " +
Convert.ToString(result[1]) + ", " +
Convert.ToString(result[2]) + ", " +
Convert.ToString(result[3]));
And the log turned out to be 0, 0, 0, 0.
I would be grateful for any idea about why the client socket receives nothing, and for a solution.
Since I come from Asia, there may be a time lag between your reply and mine, and also English is not my mother tongue. However, I will try my best to reply in time.
Thanks a lot!
Okay, I have finally solved it myself.
1. The usage "return replyBuilder.build().toByteArray()" is wrong, because ProtobufEncoder already does toByteArray() for me:
public class ProtobufEncoder extends MessageToMessageEncoder<MessageLiteOrBuilder> {
    public ProtobufEncoder() {
    }

    protected void encode(ChannelHandlerContext ctx, MessageLiteOrBuilder msg, List<Object> out) throws Exception {
        if (msg instanceof MessageLite) {
            out.add(Unpooled.wrappedBuffer(((MessageLite) msg).toByteArray()));
        } else {
            if (msg instanceof Builder) {
                out.add(Unpooled.wrappedBuffer(((Builder) msg).build().toByteArray()));
            }
        }
    }
}
So once I registered "new ProtobufEncoder()" in the Netty Channel Pipeline, I can just use "return replyBuilder.build()" - that is correct.
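For illustration, a rough sketch of what the corrected server side might look like (assumptions: a Netty 4.x ChannelInitializer where ch is the accepted channel, and MyServerHandler is a placeholder name for the handler with channelRead above):
// Pipeline sketch: ProtobufDecoder for the inbound MsgPack, ProtobufEncoder for outbound replies.
ChannelPipeline p = ch.pipeline();
p.addLast(new ProtobufDecoder(Request.MsgPack.getDefaultInstance()));
p.addLast(new ProtobufEncoder());
p.addLast(new MyServerHandler());

// In the handler, write the protobuf Message itself;
// ProtobufEncoder converts it to bytes on the way out.
ReturnServerState.Message reply = ReturnServerState.Message.newBuilder()
        .setRequestId(requestId)
        .setIsIdle(new ServerStateManager().isIdle())
        .build();
ctx.write(reply);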
2. In "static byte[] result = new byte[1024];" the length of the receive buffer is chosen casually, and it doesn't matter - until a message really arrives.
When receiving a message, I should always copy the received bytes into a new byte[] of the correct length first - otherwise there is just a 1024-length byte[] with the data I need at the beginning and several zeroes following, which will certainly fail to decode.

Netty slower than Tomcat

We just finished building a server to store data to disk and fronted it with Netty. During load testing we were seeing Netty scale to about 8,000 messages per second. Given our systems, this looked really low. For a benchmark, we wrote a Tomcat front-end and ran the same load tests. With these tests we were getting roughly 25,000 messages per second.
Here are the specs for our load testing machine:
Macbook Pro Quad core
16GB of RAM
Java 1.6
Here is the load test setup for Netty:
10 threads
100,000 messages per thread
Netty server code (pretty standard) - our Netty pipeline on the server is two handlers: a FrameDecoder and a SimpleChannelHandler that handles the request and response.
Client side JIO using Commons Pool to pool and reuse connections (the pool was sized the same as the # of threads)
Here is the load test setup for Tomcat:
10 threads
100,000 messages per thread
Tomcat 7.0.16 with default configuration using a Servlet to call the server code
Client side using URLConnection without any pooling
My main question is: why is there such a huge difference in performance? Is there something obvious with respect to Netty that can get it to run faster than Tomcat?
Edit: Here is the main Netty server code:
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory();
ServerBootstrap server = new ServerBootstrap(factory);
server.setPipelineFactory(new ChannelPipelineFactory() {
public ChannelPipeline getPipeline() {
RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
return Channels.pipeline(decoder, handler);
}
});
server.setOption("child.tcpNoDelay", true);
server.setOption("child.keepAlive", true);
Channel channel = server.bind(new InetSocketAddress(port));
allChannels.add(channel);
Our handlers look like this:
public class RequestDecoder extends FrameDecoder {

    @Override
    protected ChannelBuffer decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {
        if (buffer.readableBytes() < 4) {
            return null;
        }
        buffer.markReaderIndex();
        int length = buffer.readInt();
        if (buffer.readableBytes() < length) {
            buffer.resetReaderIndex();
            return null;
        }
        return buffer;
    }
}
public class ContentStoreChannelHandler extends SimpleChannelHandler {
    private final RequestHandler handler;

    @Inject
    public ContentStoreChannelHandler(RequestHandler handler) {
        this.handler = handler;
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer in = (ChannelBuffer) e.getMessage();
        in.readerIndex(4);
        ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
        out.writerIndex(8); // Skip the length and status code
        boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
        if (success) {
            out.setInt(0, out.writerIndex() - 8); // length
            out.setInt(4, 0); // Status
        }
        Channels.write(e.getChannel(), out, e.getRemoteAddress());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        Throwable throwable = e.getCause();
        ChannelBuffer out = ChannelBuffers.dynamicBuffer(8);
        out.writeInt(0); // Length
        out.writeInt(Errors.generalException.getCode()); // status
        Channels.write(ctx, e.getFuture(), out);
    }

    @Override
    public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
        NettyContentStoreServer.allChannels.add(e.getChannel());
    }
}
UPDATE:
I've managed to get my Netty solution to within 4,000/second. A few weeks back I was testing a client-side PING in my connection pool as a safeguard against idle sockets, but I forgot to remove that code before I started load testing. This code effectively PINGed the server every time a socket was checked out from the pool (using Commons Pool). I commented that code out and I'm now getting 21,000/second with Netty and 25,000/second with Tomcat.
Although this is great news on the Netty side, I'm still getting 4,000/second less with Netty than Tomcat. I can post my client side (which I thought I had ruled out, but apparently not) if anyone is interested in seeing that.
The method messageReceived is executed on an I/O worker thread that is possibly getting blocked by RequestHandler#handle, which may be busy doing some I/O work.
You could try adding an OrderedMemoryAwareThreadPoolExecutor to the channel pipeline (recommended) for executing the handlers, or alternatively dispatch your handler work to a separate ThreadPoolExecutor, passing a reference to the socket channel for writing the response back to the client later. Ex.:
@Override
public void messageReceived(ChannelHandlerContext ctx, final MessageEvent e) {
    executor.submit(new Runnable() {
        @Override
        public void run() {
            processHandlerAndRespond(e);
        }
    });
}

private void processHandlerAndRespond(MessageEvent e) {
    ChannelBuffer in = (ChannelBuffer) e.getMessage();
    in.readerIndex(4);
    ChannelBuffer out = ChannelBuffers.dynamicBuffer(512);
    out.writerIndex(8); // Skip the length and status code
    boolean success = handler.handle(new ChannelBufferInputStream(in), new ChannelBufferOutputStream(out), new NettyErrorStream(out));
    if (success) {
        out.setInt(0, out.writerIndex() - 8); // length
        out.setInt(4, 0); // Status
    }
    Channels.write(e.getChannel(), out, e.getRemoteAddress());
}
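For the recommended option, a minimal sketch of wiring the executor into the pipeline (assuming Netty 3.x, which the code above uses; server, injector and the two handler classes are the ones from the question, and the pool sizes are placeholders):
// Shared executor: keeps per-channel ordering and applies memory limits.
ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

server.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        RequestDecoder decoder = injector.getInstance(RequestDecoder.class);
        ContentStoreChannelHandler handler = injector.getInstance(ContentStoreChannelHandler.class);
        // Everything downstream of the ExecutionHandler runs on the pool,
        // so handler.messageReceived no longer blocks an I/O worker thread.
        return Channels.pipeline(decoder, executionHandler, handler);
    }
});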

Java NIO reading and writing from distant machine

I'd like to use NIO to send/receive data to/from a distant machine. I can send or receive data at any time: when I need to send data I just send it, without any query from the distant machine, and the distant machine sends me data at regular intervals. I don't understand the NIO mechanism. What generates a read or write event on the Selector SelectionKey? Is it possible to use only one ServerSocketChannel on my side, to read data from the distant machine and to write data to it? That is what I understood, but I don't see how the write event can be triggered... Thank you for your explanation.
I already did some coding and I can read data coming in from the distant machine, but I cannot write. I use a Selector and I don't know how I can write data. The logged message "handle write" is never written, but in Wireshark I can see my packet.
public class ServerSelector {
private static final Logger logger = Logger.getLogger(ServerSelector.class.getName());
private static final int TIMEOUT = 3000; // Wait timeout (milliseconds)
private static final int MAXTRIES = 3;
private final Selector selector;
public ServerSelector(Controller controller, int... servPorts) throws IOException {
if (servPorts.length <= 0) {
throw new IllegalArgumentException("Parameter(s) : <Port>...");
}
Handler consolehHandler = new ConsoleHandler();
consolehHandler.setLevel(Level.INFO);
logger.addHandler(consolehHandler);
// Create a selector to multiplex listening sockets and connections
selector = Selector.open();
// Create listening socket channel for each port and register selector
for (int servPort : servPorts) {
ServerSocketChannel listnChannel = ServerSocketChannel.open();
listnChannel.socket().bind(new InetSocketAddress(servPort));
listnChannel.configureBlocking(false); // must be nonblocking to register
// Register selector with channel. The returned key is ignored
listnChannel.register(selector, SelectionKey.OP_ACCEPT);
}
// Create a handler that will implement the protocol
IOProtocol protocol = new IOProtocol();
int tries = 0;
// Run forever, processing available I/O operations
while (tries < MAXTRIES) {
// Wait for some channel to be ready (or timeout)
if (selector.select(TIMEOUT) == 0) { // returns # of ready chans
System.out.println(".");
tries += 1;
continue;
}
// Get iterator on set of keys with I/O to process
Iterator<SelectionKey> keyIter = selector.selectedKeys().iterator();
while (keyIter.hasNext()) {
SelectionKey key = keyIter.next(); // Key is a bit mask
// Server socket channel has pending connection requests?
if (key.isAcceptable()) {
logger.log(Level.INFO, "handle accept");
protocol.handleAccept(key, controller);
}
// Client socket channel has pending data?
if (key.isReadable()) {
logger.log(Level.INFO, "handle read");
protocol.handleRead(key);
}
// Client socket channel is available for writing and
// key is valid (i.e., channel not closed) ?
if (key.isValid() && key.isWritable()) {
logger.log(Level.INFO, "handle write");
protocol.handleWrite(key);
}
keyIter.remove(); // remove from set of selected keys
tries = 0;
}
}
}
}
The protocol
public class IOProtocol implements Protocol {
    private static final Logger logger = Logger.getLogger(IOProtocol.class.getName());

    IOProtocol() {
        Handler consolehHandler = new ConsoleHandler();
        consolehHandler.setLevel(Level.INFO);
        logger.addHandler(consolehHandler);
    }

    /**
     *
     * @param key
     * @throws IOException
     */
    @Override
    public void handleAccept(SelectionKey key, Controller controller) throws IOException {
        SocketChannel clntChan = ((ServerSocketChannel) key.channel()).accept();
        clntChan.configureBlocking(false); // Must be nonblocking to register
        controller.setCommChannel(clntChan);
        // Register the selector with new channel for read and attach byte buffer
        SelectionKey socketKey = clntChan.register(key.selector(), SelectionKey.OP_READ | SelectionKey.OP_WRITE, controller);
    }

    /**
     * Client socket channel has pending data
     *
     * @param key
     * @throws IOException
     */
    @Override
    public void handleRead(SelectionKey key) throws IOException {
        Controller ctrller = (Controller) key.attachment();
        try {
            ctrller.readData();
        } catch (CommandUnknownException ex) {
            logger.log(Level.SEVERE, null, ex);
        }
        key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);
    }

    /**
     * Channel is available for writing, and key is valid (i.e., client channel
     * not closed).
     *
     * @param key
     * @throws IOException
     */
    @Override
    public void handleWrite(SelectionKey key) throws IOException {
        Controller ctrl = (Controller) key.attachment();
        ctrl.writePendingData();
        if (!buf.hasRemaining()) { // Buffer completely written ?
            // Nothing left, so no longer interested in writes
            key.interestOps(SelectionKey.OP_READ);
        }
        buf.compact();
    }
}
The controller
/**
 * Fill buffer with data.
 * @param msg The data to be sent
 * @throws IOException
 */
private void writeData(AbstractMsg msg) throws IOException {
    //
    writeBuffer = ByteBuffer.allocate(msg.getSize() + 4);
    writeBuffer.putInt(msg.getSize());
    msg.writeHeader(writeBuffer);
    msg.writeData(writeBuffer);
    logger.log(Level.INFO, "Write data - message size : {0}", new Object[]{msg.getSize()});
    logger.log(Level.INFO, "Write data - message : {0}", new Object[]{msg});
}

/**
 * Write to the SocketChannel
 * @throws IOException
 */
public void writePendingData() throws IOException {
    commChannel.write(writeBuffer);
}
ServerSocketChannel is used to accept connections, not to send data. You need one ServerSocketChannel for listening and one SocketChannel per connection.
Examples of reading and writing using SocketChannel:
ByteBuffer buf = ByteBuffer.allocate(48);
int bytesRead = socketChannel.read(buf);
Your program will sleep at the second line until data arrives. You need to put this code in an infinite loop and run it in a background thread. When data arrives you can process it from that thread, then wait for more data to come.
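For example, a rough sketch of such a background read loop (socketChannel is assumed to be a connected, blocking SocketChannel, and process() is a placeholder for your own handling):
// Background reader thread: blocks on read(), processes each chunk as it arrives.
Thread reader = new Thread(() -> {
    ByteBuffer buf = ByteBuffer.allocate(48);
    try {
        while (true) {
            buf.clear();
            int bytesRead = socketChannel.read(buf); // blocks until data (or -1 on EOF)
            if (bytesRead == -1) {
                break;                               // remote side closed the connection
            }
            buf.flip();
            process(buf);                            // placeholder for your own handling
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
});
reader.start();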
ByteBuffer buf = ByteBuffer.allocate(48);
buf.clear();
buf.put("Hello!".getBytes());
buf.flip();
while(buf.hasRemaining()) {
channel.write(buf);
}
There are no blocking methods here, so if you are sending a small byte buffer you can call this from your main thread.
Source
ADD:
Don't set the OP_WRITE interest on a new connection, only OP_READ. When you want to write some data you need to notify the selector that you want to send something, and then send it in the event loop. A good solution is to keep a Queue of outgoing messages. Then follow these steps (a sketch follows at the end of this answer):
add the data to the Queue
set OP_WRITE on the channel's key
in the while (keyIter.hasNext()) loop you'll get a writable key; write all data from the queue and remove the OP_WRITE interest.
It's hard for me to understand your code, but I think you'll find out what the problem is. Also, if you want to have only one connection there is no need to use a Selector. And it is weird that you are binding several ServerSocketChannels.
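A minimal sketch of that queue-based approach (names like pendingWrites and send() are placeholders, not from the code above; the queue would typically live per connection, e.g. in the key's attachment):
// One queue of outgoing buffers per connection.
Queue<ByteBuffer> pendingWrites = new ConcurrentLinkedQueue<>();

// Producer side: enqueue the message and ask the selector for OP_WRITE.
void send(SelectionKey key, ByteBuffer msg) {
    pendingWrites.add(msg);
    key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
    key.selector().wakeup();   // wake the select() call so it sees the new interest
}

// Event loop side: called when the key is writable.
void handleWrite(SelectionKey key) throws IOException {
    SocketChannel channel = (SocketChannel) key.channel();
    ByteBuffer buf;
    while ((buf = pendingWrites.peek()) != null) {
        channel.write(buf);
        if (buf.hasRemaining()) {
            return;            // socket send buffer full; keep OP_WRITE and retry later
        }
        pendingWrites.poll();  // buffer fully written, drop it
    }
    // Nothing left to write: stop asking for OP_WRITE.
    key.interestOps(SelectionKey.OP_READ);
}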
I would suggest you use blocking NIO (which is the default behaviour for SocketChannel). You don't need to use a Selector, but you can use one thread for reading and another for writing.
Based on your example:
private final ByteBuffer writeBuffer = ByteBuffer.allocateDirect(1024*1024);
private void writeData(AbstractMsg msg) {
writeBuffer.clear();
writeBuffer.putInt(0); // set later
msg.writeHeader(writeBuffer);
msg.writeData(writeBuffer);
writeBuffer.putInt(0, writeBuffer.position());
writeBuffer.flip();
while(writeBuffer.hasRemaining())
commChannel.write(writeBuffer);
}
What generates a read or write event on the Selector SelectionKey?
OP_READ: presence of data or an EOS in the socket receive buffer.
OP_WRITE: room in the socket send buffer.
