ByteBuffer and partial write - java

If the ByteBuffer is written partially, the position is updated and the next _channel.write call will resume from the last position, right?
So compact() is not necessary?
private AsynchronousSocketChannel _channel;
private ByteBuffer _buffer;
final CompletionHandler<Integer, LogstashClientStream> _writeCompletionHandler = new CompletionHandler<Integer, LogstashClientStream>(){
@Override
public void completed(Integer sent, LogstashClientStream self) {
if( _buffer.remaining() == 0 ){
_buffer.clear();
//...
}
else {
// partial write
self.send();
}
}
@Override
public void failed(Throwable exc, LogstashClientStream self) {
//...
}
};
private void send(){
try{
_channel.write( _buffer, this, _writeCompletionHandler);
} catch(Throwable e){
//...
}
}

Yes, it will resume, and no, compact() is not necessary here. It's useful mainly in cases when you want to fill the rest of the buffer from some input stream before invoking write() again.
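For completeness, here is a rough sketch of where compact() would come into play: refilling the buffer from a queue of pending records before the next write. It assumes the method lives in the same class as the snippet above and that outgoing records are queued as byte arrays (the queue itself is illustrative):

private void refillAndSend(java.util.Queue<byte[]> pending) {
    // compact() copies the unsent bytes to the front and switches the buffer
    // back into "fill" mode so more outgoing data can be appended
    _buffer.compact();
    byte[] next;
    while ((next = pending.peek()) != null && _buffer.remaining() >= next.length) {
        _buffer.put(pending.poll()); // append the next outgoing record
    }
    _buffer.flip(); // back to drain mode for the write
    _channel.write(_buffer, this, _writeCompletionHandler);
}

If you only ever retry the same buffer until it is fully drained, as in the code above, that extra step is simply not needed.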

Related

AsynchronousSocketChannel Read/WritePendingException - possible to synchronize?

I am working on a TCP server and I am curious whether it is possible to synchronize the read and write methods of AsynchronousSocketChannel. I wrapped the channel in another class because I needed some additional functionality on my channel. My question is whether this is really the right way to synchronize it:
/**
* writes bytes from a <b>ByteBuffer</b> into an
* <b>AsynchronousSocketChannel</b>
*
@param buffer the ByteBuffer to write from
@param onFailure specifies the method that should be called on failure of the
* write operation
*/
public void write(ByteBuffer buffer, final C onFailure) {
CompletionHandler<Integer, ByteBuffer> handler = new CompletionHandler<Integer, ByteBuffer>() {
@Override
public void completed(Integer result, ByteBuffer buf) {
if (buf.hasRemaining())
channel.write(buf, buf, this);
}
@Override
public void failed(Throwable exc, ByteBuffer buf) {
attachment.call(onFailure, exc);
}
};
synchronized (writeLock) {
this.channel.write(buffer, buffer, handler);
}
}
In this case writeLock is a static final object on which a lock is acquired whenever any of the arbitrary instances of my wrapper class starts a write operation. Does this really work, or does execution just fall out of the synchronized block?
This is how I fixed it:
/**
* writes bytes from a <b>ByteBuffer</b> into an
* <b>AsynchronousSocketChannel</b>
*
@param buffer the ByteBuffer to write from
@param onFailure specifies the method that should be called on failure of the
* write operation
*/
public void write(ByteBuffer buffer, final C onFailure) {
CompletionHandler<Integer, ByteBuffer> handler = new CompletionHandler<Integer, ByteBuffer>() {
@Override
public void completed(Integer result, ByteBuffer buf) {
if (buf.hasRemaining()) {
channel.write(buf, buf, this);
return;
}
synchronized (writeLock) {
if (!writeQueue.isEmpty()) {
while (writePending)
;
ByteBuffer writeBuf = writeQueue.pop();
channel.write(writeBuf, writeBuf, this);
writePending = true;
return;
}
}
writePending = false;
}
@Override
public void failed(Throwable exc, ByteBuffer buf) {
writePending = false;
attachment.call(onFailure, exc);
}
};
synchronized (writeLock) {
while (this.writePending)
;
this.writeQueue.push(buffer);
ByteBuffer writeBuffer = this.writeQueue.pop();
this.channel.write(writeBuffer, writeBuffer, handler);
this.writePending = true;
}
}
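For what it's worth, the while (writePending) ; busy-wait loops burn a CPU core and can be dropped entirely by letting the completion handler drain a per-connection queue itself. A rough sketch under the same assumptions as the code above (channel, writePending and writeLock are per-instance fields, writeQueue is a java.util.Queue<ByteBuffer>; pairing each buffer with its own onFailure callback is left out for brevity):

public void write(ByteBuffer buffer, final C onFailure) {
    boolean startNow;
    synchronized (writeLock) {
        writeQueue.add(buffer);
        startNow = !writePending; // only start a write if none is in flight
        if (startNow) {
            writePending = true;
        }
    }
    if (startNow) {
        writeNext(onFailure);
    }
}

private void writeNext(final C onFailure) {
    final ByteBuffer buf;
    synchronized (writeLock) {
        buf = writeQueue.poll();
        if (buf == null) { // nothing left to send, release the in-flight flag
            writePending = false;
            return;
        }
    }
    channel.write(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
        @Override
        public void completed(Integer result, ByteBuffer b) {
            if (b.hasRemaining()) {
                channel.write(b, b, this); // finish the partial write first
            } else {
                writeNext(onFailure); // then start the next queued buffer
            }
        }
        @Override
        public void failed(Throwable exc, ByteBuffer b) {
            synchronized (writeLock) { writePending = false; }
            attachment.call(onFailure, exc);
        }
    });
}

Only one write is ever in flight per channel, so WritePendingException cannot occur, and no thread ever spins.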

Send record and wait for its acknowledgement to receive

I am using the class below to send data to our messaging queue over a socket, either synchronously or asynchronously, as shown below.
sendAsync - It sends data asynchronously without any timeout. After sending (on LINE A) it adds the record to the retryHolder bucket so that, if an acknowledgement is not received, it is retried by the background thread started in the constructor.
send - It internally calls sendAsync and then sleeps for a timeout period; if an acknowledgement has not been received by then, it removes the entry from the retryHolder bucket so that we don't retry it again.
So the only difference between the two methods is: for async I need to retry at all costs, but for sync I don't want retries at all. It looks like sync sends might still be getting retried, though, since both share the same retry bucket cache and the retry thread runs every second.
ResponsePoller is a class which receives the acknowledgement for data that was sent to our messaging queue and then calls the removeFromretryHolder method below to remove the address, so that we don't retry after the acknowledgement has been received.
public class SendToQueue {
private final ExecutorService cleanupExecutor = Executors.newFixedThreadPool(5);
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(3);
private final Cache<Long, byte[]> retryHolder =
CacheBuilder
.newBuilder()
.maximumSize(1000000)
.concurrencyLevel(100)
.removalListener(
RemovalListeners.asynchronous(new LoggingRemovalListener(), cleanupExecutor)).build();
private static class Holder {
private static final SendToQueue INSTANCE = new SendToQueue();
}
public static SendToQueue getInstance() {
return Holder.INSTANCE;
}
private SendToQueue() {
executorService.submit(new ResponsePoller()); // another thread which receives acknowledgement and then delete entry from the `retryHolder` cache accordingly.
executorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
// retry again
for (Entry<Long, byte[]> entry : retryHolder.asMap().entrySet()) {
sendAsync(entry.getKey(), entry.getValue());
}
}
}, 0, 1, TimeUnit.SECONDS);
}
public boolean sendAsync(final long address, final byte[] encodedRecords, final Socket socket) {
ZMsg msg = new ZMsg();
msg.add(encodedRecords);
// send data on a socket LINE A
boolean sent = msg.send(socket);
msg.destroy();
retryHolder.put(address, encodedRecords);
return sent;
}
public boolean send(final long address, final byte[] encodedRecords, final Socket socket) {
boolean sent = sendAsync(address, encodedRecords, socket);
// if the record was sent successfully, then only sleep for timeout period
if (sent) {
try {
TimeUnit.MILLISECONDS.sleep(500);
} catch (InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
// if key is not present, then acknowledgement was received successfully
sent = !retryHolder.asMap().containsKey(address);
// if the key is still present in the cache, the acknowledgement was not received within
// the timeout period, so we remove it from the cache
if (!sent)
removeFromretryHolder(address);
return sent;
}
public void removeFromretryHolder(final long address) {
retryHolder.invalidate(address);
}
}
What is the best way to make sure we don't retry when the send method is used, while still knowing whether the acknowledgement was received? For sync sends I don't need to retry at all.
Do we need a separate bucket for all the sync calls, used only for acknowledgement tracking, from which we never retry?
The code has a number of potential issues:
An answer may be received before the call to retryHolder#put.
Possibly there is a race condition when messages are retried too.
If two messages are sent to the same address, the second overwrites the first.
send always wastes time with a sleep; use wait/notify instead.
I would store a class with more state instead. It could contain a flag (retryIfNoAnswer yes/no) that the retry handler could check. It could provide waitForAnswer/markAnswerReceived methods using wait/notify so that send doesn't have to sleep for a fixed time. The waitForAnswer method can return true if an answer was obtained and false on timeout. Put the object in the retry handler before sending and use a timestamp so that only messages older than a certain age are retried. That fixes the first race condition.
EDIT: updated example code below, compiles with your code, not tested:
public class SendToQueue {
private final ExecutorService cleanupExecutor = Executors.newFixedThreadPool(5);
private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(3);
// Not sure why you are using a cache rather than a standard ConcurrentHashMap?
private final Cache<Long, PendingMessage> cache = CacheBuilder.newBuilder().maximumSize(1000000)
.concurrencyLevel(100)
.removalListener(RemovalListeners.asynchronous(new LoggingRemovalListener(), cleanupExecutor)).build();
private static class PendingMessage {
private final long _address;
private final byte[] _encodedRecords;
private final Socket _socket;
private final boolean _retryEnabled;
private final Object _monitor = new Object();
private long _sendTimeMillis;
private volatile boolean _acknowledged;
public PendingMessage(long address, byte[] encodedRecords, Socket socket, boolean retryEnabled) {
_address = address;
_sendTimeMillis = System.currentTimeMillis();
_encodedRecords = encodedRecords;
_socket = socket;
_retryEnabled = retryEnabled;
}
public synchronized boolean hasExpired() {
return System.currentTimeMillis() - _sendTimeMillis > 500L;
}
public synchronized void markResent() {
_sendTimeMillis = System.currentTimeMillis();
}
public boolean shouldRetry() {
return _retryEnabled && !_acknowledged;
}
public boolean waitForAck() {
try {
synchronized(_monitor) {
_monitor.wait(500L);
}
return _acknowledged;
}
catch (InterruptedException e) {
return false;
}
}
public void ackReceived() {
_acknowledged = true;
synchronized(_monitor) {
_monitor.notifyAll();
}
}
public long getAddress() {
return _address;
}
public byte[] getEncodedRecords() {
return _encodedRecords;
}
public Socket getSocket() {
return _socket;
}
}
private static class Holder {
private static final SendToQueue INSTANCE = new SendToQueue();
}
public static SendToQueue getInstance() {
return Holder.INSTANCE;
}
private void handleRetries() {
List<PendingMessage> messages = new ArrayList<>(cache.asMap().values());
for (PendingMessage m : messages) {
if (m.hasExpired()) {
if (m.shouldRetry()) {
m.markResent();
doSendAsync(m, m.getSocket());
}
else {
// Or leave the message and let send remove it
cache.invalidate(m.getAddress());
}
}
}
}
private SendToQueue() {
executorService.submit(new ResponsePoller()); // another thread which receives acknowledgement and then delete entry from the cache accordingly.
executorService.scheduleAtFixedRate(new Runnable() {
@Override
public void run() {
handleRetries();
}
}, 0, 1, TimeUnit.SECONDS);
}
public boolean sendAsync(final long address, final byte[] encodedRecords, final Socket socket) {
PendingMessage m = new PendingMessage(address, encodedRecords, socket, true);
cache.put(address, m);
return doSendAsync(m, socket);
}
private boolean doSendAsync(final PendingMessage pendingMessage, final Socket socket) {
ZMsg msg = new ZMsg();
msg.add(pendingMessage.getEncodedRecords());
try {
// send data on a socket LINE A
return msg.send(socket);
}
finally {
msg.destroy();
}
}
public boolean send(final long address, final byte[] encodedRecords, final Socket socket) {
PendingMessage m = new PendingMessage(address, encodedRecords, socket, false);
cache.put(address, m);
try {
if (doSendAsync(m, socket)) {
return m.waitForAck();
}
return false;
}
finally {
// Alternatively (checks that address points to m):
// cache.asMap().remove(address, m);
cache.invalidate(address);
}
}
public void handleAckReceived(final long address) {
PendingMessage m = cache.getIfPresent(address);
if (m != null) {
m.ackReceived();
cache.invalidate(address);
}
}
}
And called from ResponsePoller:
SendToQueue.getInstance().handleAckReceived(addressFrom);
Design-wise: I feel like you are trying to write a thread-safe and somewhat efficient NIO message sender/receiver, but neither piece of code shown here is OK, and neither will be without significant changes. The best thing to do is either:
make full use of the 0MQ framework. Several of the things expected here are available out of the box in ZMQ and the java.util.concurrent API.
or have a look at Netty (https://netty.io/index.html), preferably, if it applies to your project. "Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients." This will save you time if your project gets complex; otherwise it might be overkill to start with (but then expect issues...).
However, if you think you are almost there with your code or @john's code, here is some advice to complete it:
don't use wait() and notify(). Don't sleep() either.
use a single thread for your "flow tracker" (i.e. ~the pending message Cache).
You don't actually need 3 threads to process pending messages unless the processing itself is slow (or does heavy work), which is not the case here, as you basically make an async call (as far as it is really async... is it?).
The same goes for the reverse path: use an executor service (multiple threads) for processing received packets only if the actual processing is slow, blocking, or heavy.
I'm not an expert in 0MQ at all, but as long as socket.send(...) is thread-safe and non-blocking (which I'm not sure about personally - tell me), the advice above should be correct and make things simpler.
That said, to strictly answer your question:
Do we need separate bucket for all the sync calls just for acknowledgement and we dont retry from that bucket?
I'd say no. What do you think of the following? Based on your code, and independently of my own feelings, this seems acceptable:
public class SendToQueue {
// ...
private final Map<Long, Boolean> transactions = new ConcurrentHashMap<>();
// ...
private void startTransaction(long address) {
this.transactions.put(address, Boolean.FALSE);
}
public void updateTransaction(long address) {
Boolean state = this.transactions.get(address);
if (state != null) {
this.transactions.put(address, Boolean.TRUE);
}
}
private void clearTransaction(long address) {
this.transactions.remove(address);
}
public boolean send(final long address, final byte[] encodedRecords, final Socket socket) {
boolean success = false;
// If address is enough randomized or atomically counted (then ok for parallel send())
startTransaction(address);
try {
boolean sent = sendAsync(address, encodedRecords, socket);
// if the record was sent successfully, then only sleep for timeout period
if (sent) {
// wait for acknowledgement
success = waitDoneUntil(new DoneCondition() {
@Override
public boolean isDone() {
return SendToQueue.this.transactions.get(address); // no NPE
}
}, 500, TimeUnit.MILLISECONDS);
if (success) {
// Message acknowledged!
}
}
} finally {
clearTransaction(address);
}
return success;
}
public static interface DoneCondition {
public boolean isDone();
}
/**
* Waits until the DoneCondition is met or the timeout elapses. Note: polls with
* a sleep(50).
*
* @param f the DoneCondition to block on, up to the given timeout
* @param waitTime Duration expressed in (time) unit.
* @param unit Time unit.
* @return whether the DoneCondition was finally met
*/
public static boolean waitDoneUntil(DoneCondition f, int waitTime, TimeUnit unit) {
long curMillis = 0;
long maxWaitMillis = unit.toMillis(waitTime);
while (!f.isDone() && curMillis < maxWaitMillis) {
try {
Thread.sleep(50); // define your step here accordingly or set as parameter
} catch (InterruptedException ex1) {
//logger.debug("waitDoneUntil() interrupted.");
break;
}
curMillis += 50L;
}
return f.isDone();
}
//...
}
public class ResponsePoller {
//...
public void onReceive(long address) { // sample prototype
// ...
SendToQueue.getInstance().updateTransaction(address);
// The interested sender will know that its transaction is complete.
// While subsequent (late) calls will have no effect.
}
}
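As a side note, the 50 ms polling in waitDoneUntil can be avoided with the standard java.util.concurrent primitives. A possible variant of the same transactions idea, storing a CountDownLatch per address instead of a Boolean (names mirror the snippet above):

private final Map<Long, CountDownLatch> transactions = new ConcurrentHashMap<>();

private void startTransaction(long address) {
    transactions.put(address, new CountDownLatch(1));
}

public void updateTransaction(long address) { // called from ResponsePoller on ack
    CountDownLatch latch = transactions.get(address);
    if (latch != null) {
        latch.countDown();
    }
}

private void clearTransaction(long address) {
    transactions.remove(address);
}

private boolean waitForAck(long address, long timeout, TimeUnit unit) {
    CountDownLatch latch = transactions.get(address);
    try {
        // true only if the acknowledgement arrived before the timeout
        return latch != null && latch.await(timeout, unit);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
    }
}

send() would then call waitForAck(address, 500, TimeUnit.MILLISECONDS) instead of waitDoneUntil, with the same startTransaction/clearTransaction bracketing as above.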

Data concurrency in Android Service with sensor data

My application is running a Service that holds a BLE connection to a multi-sensor wristband. The Service implements some callback methods for the wristband SDK, which are called several times every second with new data.
I want to put the data from the different sensors into the same Observation object according to its timestamp. All Observation objects are pushed to a backend server every 60 seconds; the sensor data is grouped to reduce the overhead of sending these Observation objects.
What I'm doing now is presented in the code snippet below. My problem is that the while-loop in observationFetcher completely blocks the application. Are there other approaches for synchronizing this sensor data without using a blocking while-loop?
observationFetcher = new Runnable() {
@Override
public void run() {
while (isRecording) {
if (lastMillis != currentMillis) {
Observation obs = sm.getValues();
obs.setPropertyAsString("gateway.id", UUID);
observations.add(obs);
lastMillis = currentMillis;
}
}
}
};
public void didReceiveGSR(float gsr, double timestamp) {
long t = System.currentTimeMillis() / 1000;
sm.setGsrValue(t, gsr);
currentMillis = t;
}
public void didReceiveIBI(float ibi, double timestamp) {
sm.setIbiValue(ibi);
}
sm is an object with synchronized methods for putting all the sensor data within the same second together.
Please correct me if I'm wrong, but I don't see a reason to waste CPU time by iterating infinitely. Of course, I don't see the entire code and your API may not allow some of this, but I would implement the data processing in the following way:
final class Observation {
private float gsr;
private float ibi;
public Observation(float gsr, float ibi) {
this.gsr = gsr;
this.ibi = ibi;
}
// getters & setters
}
public final class Observations {
private final ConcurrentHashMap<Long, Observation> observations = new ConcurrentHashMap<>();
public void insertGsrValue(long timestamp, float gsr) {
for (;;) {
Observation observation = observations.get(timestamp);
if (observation == null) {
observation = observations.putIfAbsent(timestamp, new Observation(gsr, 0.0f));
if (observation == null) {
return;
}
}
if (observations.replace(timestamp, observation, new Observation(gsr, observation.getIbi()))) {
return;
}
}
}
public void insertIbiValue(long timestamp, float ibi) {
for (;;) {
Observation observation = observations.get(timestamp);
if (observation == null) {
observation = observations.putIfAbsent(timestamp, new Observation(0.0f, ibi));
if (observation == null) {
return;
}
}
if (observations.replace(timestamp, observation, new Observation(observation.getGsr(), ibi))) {
return;
}
}
}
public List<Observation> getObservations() {
return new ArrayList<>(observations.values());
}
public void clear() {
observations.clear();
}
}
public final class ObservationService extends Service {
private final Observations observations = new Observations();
private volatile long currentMillis;
private HandlerThread handlerThread;
private Handler handler;
@Override
public void onCreate() {
super.onCreate();
handlerThread = new HandlerThread("observations_sender_thread");
handlerThread.start();
handler = new Handler(handlerThread.getLooper());
handler.postDelayed(new Runnable() {
#Override
public void run() {
sendData();
handler.postDelayed(this, TimeUnit.SECONDS.toMillis(60));
}
}, TimeUnit.SECONDS.toMillis(60));
}
@Override
public void onDestroy() {
handlerThread.quit();
}
private void sendData() {
List<Observation> observationList = observations.getObservations();
observations.clear();
// send observation list somehow
}
public void didReceiveGSR(float gsr, double timestamp) {
// assuming this is called on a worker thread
long t = System.currentTimeMillis() / 1000;
observations.insertGsrValue(t, gsr);
currentMillis = t;
}
public void didReceiveIBI(float ibi, double timestamp) {
// assuming this is called on a worker thread
observations.insertIbiValue(currentMillis, ibi);
}
@Nullable
@Override
public IBinder onBind(Intent intent) {
return null;
}
}
So what this code does is insert new values from the sensors into a hash map and send them somewhere every 60 seconds. This code is still not perfect, as there is a problem with concurrency: for example, if two gsr values come first and then one ibi value, we will lose the first gsr value.
Anyway, this code should give you an idea of how to avoid blocking the thread and store the data concurrently.
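On Java 8 (Android API 24+), the get/putIfAbsent/replace loops in Observations can also be collapsed into ConcurrentHashMap.merge, which performs the read-modify-write atomically. A sketch of the same two insert methods under that assumption (the per-second data model, and therefore the lost-reading case above, is unchanged):

public void insertGsrValue(long timestamp, float gsr) {
    // merge the new gsr reading into any Observation already stored for this
    // timestamp in a single atomic step
    observations.merge(timestamp, new Observation(gsr, 0.0f),
            (existing, incoming) -> new Observation(gsr, existing.getIbi()));
}

public void insertIbiValue(long timestamp, float ibi) {
    observations.merge(timestamp, new Observation(0.0f, ibi),
            (existing, incoming) -> new Observation(existing.getGsr(), ibi));
}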
Please do let me know if you have any questions regarding the code.

Creating a Flowable that emits items at a limited rate to avoid the need to buffer events

I've got a data access object that passes each item in a data source to a consumer:
public interface Dao<T> {
void forEachItem(Consumer<T> item);
}
This always produces items in a single threaded way - I can't currently change this interface.
I wanted to create a Flowable from this interface:
private static Flowable<String> flowable(final Dao<String> dao) {
return Flowable.create(emitter -> {
dao.forEachItem(item ->
emitter.onNext(item));
emitter.onComplete();
}, ERROR);
}
If I use this Flowable in a situation where the processing takes longer than the rate at which items are emitted then I understandably get a missing back pressure exception as I am using ERROR mode:
Dao<String> exampleDao =
itemConsumer ->
IntStream.range(0, 1_000).forEach(i ->
itemConsumer.accept(String.valueOf(i)));
flowable(exampleDao)
.map(v -> {
Thread.sleep(100);
return "id:" + v;
})
.blockingSubscribe(System.out::println);
I don't wish to buffer items - it seems like this could exhaust memory on very large data sets if the operation is significantly slower than the producer.
I was hoping there would be a backpressure mode that allows the emitter to block on next/completion events when it detects backpressure, but that does not seem to be the case?
In my case as I know that the dao produces items in a single threaded way I thought I would be able to do something like:
dao.forEachItem(item -> {
while (emitter.requested() == 0) {
waitABit();
}
emitter.onNext(item)
});
but this seems to hang forever.
How wrong is my approach? :-) Is there a way of producing items in a way that respects downstream back pressure given my (relatively restrictive) set of circumstances?
I know I could do this with a separate process writing to a queue and then write a Flowable that consumes from that queue - would that be the preferred approach instead?
Check the Flowable documentation, especially the part about Subscription.request(long). I hope that gets you on the right track.
The TestProducer from this example produces Integer objects in a given range and pushes them to its Subscriber. It extends the Flowable<Integer> class. For a new subscriber, it creates a Subscription object whose request(long) method is used to create and publish the Integer values.
It is important for the Subscription passed to the subscriber that its request() method, which calls onNext() on the subscriber, can be invoked recursively from within that onNext() call. To prevent a stack overflow, the implementation shown uses the outStandingRequests counter and the isProducing flag.
class TestProducer extends Flowable<Integer> {
static final Logger logger = LoggerFactory.getLogger(TestProducer.class);
final int from, to;
public TestProducer(int from, int to) {
this.from = from;
this.to = to;
}
@Override
protected void subscribeActual(Subscriber<? super Integer> subscriber) {
subscriber.onSubscribe(new Subscription() {
/** the next value. */
public int next = from;
/** cancellation flag. */
private volatile boolean cancelled = false;
private volatile boolean isProducing = false;
private AtomicLong outStandingRequests = new AtomicLong(0);
@Override
public void request(long n) {
if (!cancelled) {
outStandingRequests.addAndGet(n);
// check if we are already fulfilling a request, to prevent re-entrant calls between request() and subscriber.onNext()
if (isProducing) {
return;
}
// start producing
isProducing = true;
while (outStandingRequests.get() > 0) {
if (next > to) {
logger.info("producer finished");
subscriber.onComplete();
break;
}
subscriber.onNext(next++);
outStandingRequests.decrementAndGet();
}
isProducing = false;
}
}
@Override
public void cancel() {
cancelled = true;
}
});
}
}
The consumer in this example extends DefaultSubscriber<Integer> and requests the next Integer on start and after consuming one. While consuming the Integer values there is a small delay, so backpressure builds up towards the producer.
class TestConsumer extends DefaultSubscriber<Integer> {
private static final Logger logger = LoggerFactory.getLogger(TestConsumer.class);
@Override
protected void onStart() {
request(1);
}
@Override
public void onNext(Integer i) {
logger.info("consuming {}", i);
if (0 == (i % 5)) {
try {
Thread.sleep(500);
} catch (InterruptedException ignored) {
// can be ignored, just used for pausing
}
}
request(1);
}
@Override
public void onError(Throwable throwable) {
logger.error("error received", throwable);
}
@Override
public void onComplete() {
logger.info("consumer finished");
}
}
In the following main method of a test class, the producer and consumer are created and wired up:
public static void main(String[] args) {
try {
final TestProducer testProducer = new TestProducer(1, 1_000);
final TestConsumer testConsumer = new TestConsumer();
testProducer
.subscribeOn(Schedulers.computation())
.observeOn(Schedulers.single())
.blockingSubscribe(testConsumer);
} catch (Throwable t) {
t.printStackTrace();
}
}
When running the example, the log shows that the consumer runs continuously, while the producer only becomes active when the internal Flowable buffer of RxJava 2 needs to be refilled.
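As an aside on the original question: if the Dao could be adapted to hand out an Iterator instead of pushing items through forEachItem, RxJava's built-in operators take care of the request accounting and no hand-written Subscription is needed. A minimal sketch (single-subscription only, since the Iterator is consumed):

private static Flowable<String> flowable(final Iterator<String> items) {
    // Flowable.generate invokes the generator once per requested item, so
    // downstream backpressure is honoured without manual bookkeeping
    return Flowable.generate(emitter -> {
        if (items.hasNext()) {
            emitter.onNext(items.next());
        } else {
            emitter.onComplete();
        }
    });
}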

Efficient channelRead for Java Netty Server

I'm using Netty to develop a proxy server, and my ProxyBackendHandler class is shown below. In the channelRead method I need to take the msg data and write it to the client as a TextWebSocketFrame. To do that I have used a StringBuilder and a while loop to iterate over the ByteBuf. Can anyone suggest a better way to do this? The code below seems to have a high performance overhead under heavy data loads.
public class ProxyBackendHandler extends ChannelInboundHandlerAdapter {
private final Channel inboundChannel;
StringBuilder sReplyBuffer;
public ProxyBackendHandler(Channel inboundChannel) {
this.inboundChannel = inboundChannel;
sReplyBuffer = new StringBuilder(4000);
}
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
// Please suggest an efficient implementation to read msg and pass it to writeAndFlush.
ByteBuf in = (ByteBuf) msg;
sReplyBuffer.setLength(0);
try {
while (in.isReadable()) {
sReplyBuffer.append((char) in.readByte());
}
} finally {
((ByteBuf) msg).release();
}
inboundChannel.writeAndFlush(new TextWebSocketFrame (sReplyBuffer.toString())).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
if (future.isSuccess()) {
ctx.channel().read();
System.out.println("Sent To Client");
} else {
future.channel().close();
}
}
});
}
}
Maybe something like this:
public class ProxyBackendHandler extends ChannelInboundHandlerAdapter {
private final Channel inboundChannel;
public ProxyBackendHandler(Channel inboundChannel) {
this.inboundChannel = inboundChannel;
}
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
inboundChannel.writeAndFlush(new TextWebSocketFrame((ByteBuf) msg)).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
if (future.isSuccess()) {
ctx.channel().read();
System.out.println("Sent To Client");
} else {
future.channel().close();
}
}
});
}
}
I suggest not using a StringBuilder at all. Just use the buffer you already have. You don't state what a TextWebSocketFrame might be, or why you think you need it, but ultimately a proxy server only has to copy bytes. You don't need StringBuilders or extra classes for that. Or Netty, frankly.
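If a String really is needed (TextWebSocketFrame also has a String constructor), the ByteBuf can be decoded in one call; note that the per-byte (char) cast in the question would also mangle multi-byte characters. A small sketch assuming the payload is UTF-8:

ByteBuf in = (ByteBuf) msg;
try {
    String text = in.toString(CharsetUtil.UTF_8); // single decode, no StringBuilder
    inboundChannel.writeAndFlush(new TextWebSocketFrame(text));
} finally {
    in.release();
}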
