Using Netty, I'm receiving multiple asynchronous messages from a framework on multiple threads. I need to send these messages to a network device (UDP) which uses a synchronous, stateful protocol. So, I need to use a state variable, and only allow one message to be sent at a time, which should only happen when the client is in the "idle" state.
In addition, the state-machine will need to send its own internally generated messages - retries - ahead of whatever is waiting in the queue. For this use case I know how to inject messages into the pipeline, which would work as long as outbound messages can be held at the head of the pipeline.
Any idea how to control the output using a client state?
TIA
I've come up with a proof-of-concept / proposed solution, although hopefully someone knows a better way. While this works as intended, it introduces a number of undesirable side effects that would need to be solved.
Run a Blocking Write Handler on its own thread.
Put the thread to sleep if the state isn't idle, and wake it when it becomes Idle.
If the State was Idle, or becomes Idle, send the message on its way.
This is the bootstrap I used
public class UDPConnector {

    public void init() {
        this.workerGroup = EPOLL ? new EpollEventLoopGroup() : new NioEventLoopGroup();
        // separate executor so the blocking handler doesn't stall the I/O event loop
        this.blockingExecutor = new DefaultEventExecutor();

        bootstrap = new Bootstrap()
            .channel(EPOLL ? EpollDatagramChannel.class : NioDatagramChannel.class)
            .group(workerGroup)
            .handler(new ChannelInitializer<DatagramChannel>() {
                @Override
                public void initChannel(DatagramChannel ch) throws Exception {
                    ch.pipeline().addLast("logging", new LoggingHandler());
                    ch.pipeline().addLast("encode", new RequestEncoderNetty());
                    ch.pipeline().addLast("decode", new ResponseDecoderNetty());
                    ch.pipeline().addLast("ack", new AckHandler());
                    ch.pipeline().addLast(blockingExecutor, "blocking", new BlockingOutboundHandler());
                }
            });
    }
}
The Blocking Outbound Handler looks like this
public class BlockingOutboundHandler extends ChannelOutboundHandlerAdapter {

    private final Logger logger = LoggerFactory.getLogger(BlockingOutboundHandler.class);
    private final AtomicBoolean isIdle = new AtomicBoolean(true);

    public void setIdle() {
        synchronized (this.isIdle) {
            logger.debug("setIdle() called");
            this.isIdle.set(true);
            this.isIdle.notify();
        }
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        synchronized (isIdle) {
            // loop rather than a single check to guard against spurious wake-ups
            while (!isIdle.get()) {
                logger.debug("write(): I/O State not Idle, Waiting");
                isIdle.wait();
                logger.debug("write(): Finished waiting on I/O State");
            }
            isIdle.set(false);
        }
        logger.debug("write(): {}", msg.toString());
        ctx.write(msg, promise);
    }
}
Finally, when the StateMachine transitions to idle, the block is released:
Optional.ofNullable((BlockingOutboundHandler) ctx.pipeline().get("blocking")).ifPresent(h -> h.setIdle());
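In my case the transition is driven from the inbound side, so that call ends up in the AckHandler; a simplified sketch (the Message type and the state-machine bookkeeping are placeholders):

public class AckHandler extends SimpleChannelInboundHandler<Message> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Message msg) {
        logger.debug("channelRead0 {}", msg);
        // ... feed the ACK into the state machine ...
        // once the state machine reports idle, release the blocked writer
        Optional.ofNullable((BlockingOutboundHandler) ctx.pipeline().get("blocking"))
                .ifPresent(BlockingOutboundHandler::setIdle);
    }
}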
All of this results in the outbound messages being synchronized with the synchronous, stateful responses from the device.
Of course, I'd prefer not to have to deal with additional threads and the synchronization that comes with them. I'm also not sure what kind of "yet to be discovered" issues I'm going to run into doing it this way. It also has the side effect of causing the main handler to be visited by multiple threads, which just creates a new problem that needs to be solved.
Also, next up is the implementation of a timeout and retry with back-off; this thread can't stay blocked indefinitely.
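A rough sketch of how a bounded wait could look inside write() (ACK_TIMEOUT_MS is a placeholder constant, and the actual retry/back-off would still be driven by the state machine):

synchronized (isIdle) {
    long deadline = System.currentTimeMillis() + ACK_TIMEOUT_MS; // placeholder timeout
    while (!isIdle.get()) {
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
            // java.util.concurrent.TimeoutException
            promise.setFailure(new TimeoutException("device did not return to idle in time"));
            return;
        }
        isIdle.wait(remaining);
    }
    isIdle.set(false);
}

For reference, this is the debug log of a test run with the blocking handler in place: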
15:07:16.539 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2] REGISTERED
15:07:16.540 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2] CONNECT: portserver1.tedworld.net/192.168.2.173:2102
15:07:16.541 [DEBUG] [internal.connection.UDPConnectorNetty] - connect(): connect() complete
15:07:16.541 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] ACTIVE
15:07:16.542 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: creating test message
15:07:16.543 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: sending test message
15:07:16.544 [DEBUG] [internal.connection.UDPConnectorNetty] - sendRequest: Adding msg to queue { super={ messageType=21, channelId=test, data= } }
15:07:16.546 [DEBUG] [c.projector.internal.ProjectorHandler] - scheduler.execute: Finished
15:07:16.545 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write { super={ messageType=21, channelId=test, data= } }
15:07:16.547 [DEBUG] [internal.connection.UDPConnectorNetty] - sendRequest: Adding msg to queue { super={ messageType=3F, channelId=lamp, data= } }
15:07:16.548 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write(): I/O State not Idle, Waiting
15:07:16.548 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] WRITE: 6B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 21 89 01 00 00 0a |!..... |
+--------+-------------------------------------------------+----------------+
15:07:16.550 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] FLUSH
15:07:16.567 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ: DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 6, cap: 2048)), 6B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 06 89 01 00 00 0a |...... |
+--------+-------------------------------------------------+----------------+
15:07:16.568 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 6, cap: 2048))
15:07:16.569 [DEBUG] [rojector.internal.protocol.AckHandler] - channelRead0 { super={ messageType=06, channelId=test, data= } }
15:07:16.570 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - setIdle called
15:07:16.571 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write(): Finished waiting on I/O State
15:07:16.571 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE
15:07:16.571 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - write { super={ messageType=3F, channelId=lamp, data= } }
15:07:16.573 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] WRITE: 6B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 3f 89 01 50 57 0a |?..PW. |
+--------+-------------------------------------------------+----------------+
15:07:16.573 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] FLUSH
15:07:16.587 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ: DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 6, cap: 2048)), 6B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 06 89 01 50 57 0a |...PW. |
+--------+-------------------------------------------------+----------------+
15:07:16.588 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 6, cap: 2048))
15:07:16.589 [DEBUG] [rojector.internal.protocol.AckHandler] - channelRead0 { super={ messageType=06, channelId=lamp, data= } }
15:07:16.590 [DEBUG] [rnal.protocol.BlockingOutboundHandler] - setIdle called
15:07:16.591 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE
15:07:16.592 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ: DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 7, cap: 2048)), 7B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 40 89 01 50 57 30 0a |@..PW0. |
+--------+-------------------------------------------------+----------------+
15:07:16.593 [DEBUG] [nternal.protocol.ResponseDecoderNetty] - decode DatagramPacket(/192.168.2.173:2102 => /192.168.2.186:47010, PooledUnsafeDirectByteBuf(ridx: 0, widx: 7, cap: 2048))
15:07:16.594 [DEBUG] [.netty.channel.DefaultChannelPipeline] - Discarded inbound message { super={ messageType=40, channelId=lamp, data=30 } } that reached at the tail of the pipeline. Please check your pipeline configuration.
15:07:16.595 [DEBUG] [.netty.channel.DefaultChannelPipeline] - Discarded message pipeline : [logging, encode, decode, ack, blocking, DefaultChannelPipeline$TailContext#0]. Channel : [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102].
15:07:16.596 [DEBUG] [.netty.handler.logging.LoggingHandler] - [id: 0xb19e73e2, L:/192.168.2.186:47010 - R:portserver1.tedworld.net/192.168.2.173:2102] READ COMPLETE
I expected that all invocations of the server would be processed in parallel, but that is not the case.
Here is a simple example.
RSocket version: 1.1.0
Server
public class ServerApp {
private static final Logger log = LoggerFactory.getLogger(ServerApp.class);
public static void main(String[] args) throws InterruptedException {
RSocketServer.create(SocketAcceptor.forRequestResponse(payload ->
Mono.fromCallable(() -> {
log.debug("Start of my business logic");
sleepSeconds(5);
return DefaultPayload.create("OK");
})))
.bind(WebsocketServerTransport.create(15000))
.block();
log.debug("Server started");
TimeUnit.MINUTES.sleep(30);
}
private static void sleepSeconds(int sec) {
try {
TimeUnit.SECONDS.sleep(sec);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
Client
public class ClientApp {
private static final Logger log = LoggerFactory.getLogger(ClientApp.class);
public static void main(String[] args) throws InterruptedException {
RSocket client = RSocketConnector.create()
.connect(WebsocketClientTransport.create(15000))
.block();
long start1 = System.currentTimeMillis();
client.requestResponse(DefaultPayload.create("Request 1"))
.doOnNext(r -> log.debug("finished within {}ms", System.currentTimeMillis() - start1))
.subscribe();
long start2 = System.currentTimeMillis();
client.requestResponse(DefaultPayload.create("Request 2"))
.doOnNext(r -> log.debug("finished within {}ms", System.currentTimeMillis() - start2))
.subscribe();
TimeUnit.SECONDS.sleep(20);
}
}
In the client logs, we can see that both requests were sent at the same time, and both responses were received at the same time after 10 seconds (each request was processed in 5 seconds).
In the server logs, we can see that the requests were executed sequentially and not in parallel.
Could you please help me to understand this behavior?
1. Why did we receive the first response after 10 seconds and not 5?
2. How do I create the server correctly if I want all requests to be processed in parallel?
If I replace Mono.fromCallable by Mono.fromFuture(CompletableFuture.supplyAsync(() -> myBusinessLogic(), executorService)), then it will resolve 1.
If I replace Mono.fromCallable by Mono.delay(Duration.ZERO).map(ignore -> myBusinessLogic()), then it will resolve 1. and 2.
If I replace Mono.fromCallable by Mono.create(sink -> sink.success(myBusinessLogic())), then it will not resolve my issues.
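For example, the Mono.fromFuture variant looks roughly like this (executorService here is just a plain ExecutorService I create myself, e.g. a fixed thread pool):

RSocketServer.create(SocketAcceptor.forRequestResponse(payload ->
        Mono.fromFuture(CompletableFuture.supplyAsync(() -> {
            log.debug("Start of my business logic");
            sleepSeconds(5);
            return DefaultPayload.create("OK");
        }, executorService))))
    .bind(WebsocketServerTransport.create(15000))
    .block();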
Client logs:
2021-07-16 10:39:46,880 DEBUG [reactor-tcp-nio-1] [/] - sending ->
Frame => Stream ID: 0 Type: SETUP Flags: 0b0 Length: 56
Data:
2021-07-16 10:39:46,952 DEBUG [main] [/] - sending ->
Frame => Stream ID: 1 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 31 |Request 1 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:46,957 DEBUG [main] [/] - sending ->
Frame => Stream ID: 3 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 32 |Request 2 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,043 DEBUG [reactor-tcp-nio-1] [/] - receiving ->
Frame => Stream ID: 1 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - finished within 10120ms
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - receiving ->
Frame => Stream ID: 3 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - finished within 10094ms
Server Logs:
2021-07-16 10:39:46,965 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 0 Type: SETUP Flags: 0b0 Length: 56
Data:
2021-07-16 10:39:47,021 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 1 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 31 |Request 1 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:47,027 DEBUG [reactor-http-nio-2] [/] - Start of my business logic
2021-07-16 10:39:52,037 DEBUG [reactor-http-nio-2] [/] - sending ->
Frame => Stream ID: 1 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:52,038 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 3 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 32 |Request 2 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:52,038 DEBUG [reactor-http-nio-2] [/] - Start of my business logic
2021-07-16 10:39:57,039 DEBUG [reactor-http-nio-2] [/] - sending ->
Frame => Stream ID: 3 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
You shouldn't mix asynchronous code like Reactive Mono operations with blocking code like
private static void sleepSeconds(int sec) {
try {
TimeUnit.SECONDS.sleep(sec);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
I suspect the central issue here is that a framework like rsocket-java doesn't want to run everything on new threads, at the cost of excessive context switching. So it generally relies on you to run long-running CPU or I/O operations on an appropriate scheduler yourself.
You should look at the various async delay operations instead https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#delayElement-java.time.Duration-
If your delay is meant to simulate a long running operation, then you should look at subscribing on a different scheduler like https://projectreactor.io/docs/core/release/api/reactor/core/scheduler/Schedulers.html#boundedElastic--
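As a minimal sketch of that second suggestion (keeping the sleep purely as a stand-in for real blocking work), the acceptor from the question could subscribe the Callable on the bounded elastic scheduler:

RSocketServer.create(SocketAcceptor.forRequestResponse(payload ->
        Mono.fromCallable(() -> {
            log.debug("Start of my business logic");
            sleepSeconds(5); // stands in for blocking work
            return DefaultPayload.create("OK");
        }).subscribeOn(Schedulers.boundedElastic())))
    .bind(WebsocketServerTransport.create(15000))
    .block();

That keeps the event loop free, so each request's work runs on its own worker thread and the two requests can proceed in parallel.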
I customized the k8s CoreDNS config to resolve a custom name, which works fine in the pod when checked with ping xx.
But it is not resolved in the Java application (JDK 14).
The nameserver is OK.
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search xxxx-5-production.svc.cluster.local svc.cluster.local cluster.local
/ # ping xx
PING xx (192.168.65.2): 56 data bytes
64 bytes from 192.168.65.2: seq=0 ttl=37 time=0.787 ms
Edit: I use a CoreDNS rewrite to map the host name xx to host.docker.internal; this is the change to the CoreDNS config:
rewrite name regex (^|(?:\S*\.)*)xx\.?$ {1}host.docker.internal
I added some debug code to the entry point:
static void runCommand(String... commands) {
try {
ProcessBuilder cat = new ProcessBuilder(commands);
Process start = cat.start();
start.waitFor();
String output = new BufferedReader(new InputStreamReader(start.getInputStream())).lines().collect(Collectors.joining());
String err = new BufferedReader(new InputStreamReader(start.getErrorStream())).lines().collect(Collectors.joining());
log.info("\n{}: stout {}", Arrays.toString(commands),output);
log.info("\n{}: sterr{}", Arrays.toString(commands),err);
} catch (IOException | InterruptedException e) {
log.error(e.getClass().getCanonicalName(), e);
}
}
public static void main(String[] args) {
try {
InetAddress xx = Inet4Address.getByName("xx");
log.info("{}: {}", "InetAddress xx", xx.getHostAddress());
} catch (IOException e) {
log.error(e.getClass().getCanonicalName(), e);
}
runCommand("cat", "/etc/resolv.conf");
runCommand("ping", "xx","-c","1");
runCommand("ping", "host.docker.internal","-c","1");
runCommand("nslookup", "xx");
runCommand("ifconfig");
SpringApplication.run(FileServerApp.class, args);
}
Here is output:
01:01:39.950 [main] ERROR com.j.file_server_app.FileServerApp - java.net.UnknownHostException
java.net.UnknownHostException: xx: Name or service not known
at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:932)
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1505)
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:851)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1495)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1354)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1288)
at java.base/java.net.InetAddress.getByName(InetAddress.java:1238)
at com.j.file_server_app.FileServerApp.main(FileServerApp.java:43)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:51)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52)
01:01:39.983 [main] INFO com.j.file_server_app.FileServerApp -
[cat, /etc/resolv.conf]: stout nameserver 10.96.0.10search default.svc.cluster.local svc.cluster.local cluster.localoptions ndots:5
01:01:39.985 [main] INFO com.j.file_server_app.FileServerApp -
[cat, /etc/resolv.conf]: sterr
01:01:39.991 [main] INFO com.j.file_server_app.FileServerApp -
[ping, xx, -c, 1]: stout
01:01:39.991 [main] INFO com.j.file_server_app.FileServerApp -
[ping, xx, -c, 1]: sterrping: unknown host
01:01:39.998 [main] INFO com.j.file_server_app.FileServerApp -
[ping, host.docker.internal, -c, 1]: stout PING host.docker.internal (192.168.65.2): 56 data bytes64 bytes from 192.168.65.2: icmp_seq=0 ttl=37 time=0.757 ms--- host.docker.internal ping statistics ---1 packets transmitted, 1 packets received, 0% packet lossround-trip min/avg/max/stddev = 0.757/0.757/0.757/0.000 ms
01:01:39.998 [main] INFO com.j.file_server_app.FileServerApp -
[ping, host.docker.internal, -c, 1]: sterr
01:01:40.045 [main] INFO com.j.file_server_app.FileServerApp -
[nslookup, xx]: stout Server: 10.96.0.10Address: 10.96.0.10#53Non-authoritative answer:Name: host.docker.internalAddress: 192.168.65.2** server can't find xx: NXDOMAIN
01:01:40.045 [main] INFO com.j.file_server_app.FileServerApp -
[nslookup, xx]: sterr
01:01:40.048 [main] INFO com.j.file_server_app.FileServerApp -
[ifconfig]: stout eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.1.3.14 netmask 255.255.0.0 broadcast 0.0.0.0 ether ce:71:60:9a:75:05 txqueuelen 0 (Ethernet) RX packets 35 bytes 3776 (3.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 22 bytes 1650 (1.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 loop txqueuelen 1000 (Local Loopback) RX packets 1 bytes 29 (29.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1 bytes 29 (29.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
01:01:40.048 [main] INFO com.j.file_server_app.FileServerApp -
[ifconfig]: sterr
It looks like CoreDNS is not working, but in the front-end pod ping is OK. This is the front-end Dockerfile:
FROM library/nginx:stable-alpine
RUN mkdir /app
EXPOSE 80
ADD dist /app
COPY nginx.conf /etc/nginx/nginx.conf
Using docker inspect for the frontend and backend containers, both network settings are:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}
Both frontend and backend have a service with type: LoadBalancer. Now my question is: why does name resolution behave differently in these two pods?
I have a system with HTTP POST requests and it runs on Spring 5 (standalone Tomcat). In short, it looks like this:
client (Apache AB) ----> micro service (java or golang) --> RabbitMQ --> Core(spring + tomcat).
The thing is, when I use my Java (Spring) service, it is ok. AB shows this output:
ab -n 1000 -k -s 2 -c 10 -s 60 -p test2.sh -A 113:113 -T 'application/json' https://127.0.0.1:8449/SecureChat/chat/v1/rest-message/send
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
...
Completed 1000 requests
Finished 1000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8449
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
Document Path: /rest-message/send
Document Length: 39 bytes
Concurrency Level: 10
Time taken for tests: 434.853 seconds
Complete requests: 1000
Failed requests: 0
Keep-Alive requests: 0
Total transferred: 498000 bytes
Total body sent: 393000
HTML transferred: 39000 bytes
Requests per second: 2.30 [#/sec] (mean)
Time per request: 4348.528 [ms] (mean)
Time per request: 434.853 [ms] (mean, across all concurrent requests)
Transfer rate: 1.12 [Kbytes/sec] received
0.88 kb/s sent
2.00 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 4 14 7.6 17 53
Processing: 1110 4317 437.2 4285 8383
Waiting: 1107 4314 437.2 4282 8377
Total: 1126 4332 436.8 4300 8403
That is through TLS.
But when I try to use my Golang service I get a timeout:
Benchmarking 127.0.0.1 (be patient)...apr_pollset_poll: The timeout specified has expired (70007)
Total of 92 requests completed
And this output:
ab -n 100 -k -s 2 -c 10 -s 60 -p test2.sh -T 'application/json' http://127.0.0.1:8089/
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)...^C
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8089
Document Path: /
Document Length: 39 bytes
Concurrency Level: 10
Time taken for tests: 145.734 seconds
Complete requests: 92
Failed requests: 1
(Connect: 0, Receive: 0, Length: 1, Exceptions: 0)
Keep-Alive requests: 91
Total transferred: 16380 bytes
Total body sent: 32200
HTML transferred: 3549 bytes
Requests per second: 0.63 [#/sec] (mean)
Time per request: 15840.663 [ms] (mean)
Time per request: 1584.066 [ms] (mean, across all concurrent requests)
Transfer rate: 0.11 [Kbytes/sec] received
0.22 kb/s sent
0.33 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 1229 1494 1955.9 1262 20000
Waiting: 1229 1291 143.8 1262 2212
Total: 1229 1494 1955.9 1262 20000
That is through plain TCP.
I guess I have some mistakes in my code. I wrote it all in one file:
func initAmqp(rabbitUrl string) {
var err error
conn, err = amqp.Dial(rabbitUrl)
failOnError(err, "Failed to connect to RabbitMQ")
}
func main() {
err := gcfg.ReadFileInto(&cfg, "config.gcfg")
if err != nil {
log.Fatal(err);
}
PrintConfig(cfg)
if cfg.Section_rabbit.RabbitUrl != "" {
initAmqp(cfg.Section_rabbit.RabbitUrl);
}
mux := http.NewServeMux();
mux.Handle("/", NewLimitHandler(1000, newTestHandler()))
server := http.Server {
Addr: cfg.Section_basic.Port,
Handler: mux,
ReadTimeout: 20 * time.Second,
WriteTimeout: 20 * time.Second,
}
defer conn.Close();
log.Println(server.ListenAndServe());
}
func NewLimitHandler(maxConns int, handler http.Handler) http.Handler {
h := &limitHandler{
connc: make(chan struct{}, maxConns),
handler: handler,
}
for i := 0; i < maxConns; i++ {
h.connc <- struct{}{}
}
return h
}
func newTestHandler() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
handler(w, r);
})
}
func handler(w http.ResponseWriter, r *http.Request) {
if b, err := ioutil.ReadAll(r.Body); err == nil {
fmt.Println("message is ", string(b));
res := publishMessages(string(b))
w.Write([]byte(res))
w.WriteHeader(http.StatusOK)
counter ++;
}else {
w.WriteHeader(http.StatusInternalServerError)
w.Write([]byte("500 - Something bad happened!"))
}
}
func publishMessages(payload string) string {
ch, err := conn.Channel()
failOnError(err, "Failed to open a channel")
q, err = ch.QueueDeclare(
"", // name
false, // durable
false, // delete when unused
true, // exclusive
false, // noWait
nil, // arguments
)
failOnError(err, "Failed to declare a queue")
msgs, err := ch.Consume(
q.Name, // queue
"", // consumer
true, // auto-ack
false, // exclusive
false, // no-local
false, // no-wait
nil, // args
)
failOnError(err, "Failed to register a consumer")
corrId := randomString(32)
log.Println("corrId ", corrId)
err = ch.Publish(
"", // exchange
cfg.Section_rabbit.RabbitQeue, // routing key
false, // mandatory
false, // immediate
amqp.Publishing{
DeliveryMode: amqp.Transient,
ContentType: "application/json",
CorrelationId: corrId,
Body: []byte(payload),
Timestamp: time.Now(),
ReplyTo: q.Name,
})
failOnError(err, "Failed to Publish on RabbitMQ")
defer ch.Close();
result := "";
for d := range msgs {
if corrId == d.CorrelationId {
failOnError(err, "Failed to convert body to integer")
log.Println("result = ", string(d.Body))
return string(d.Body);
}else {
log.Println("waiting for result = ")
}
}
return result;
}
Can someone help?
EDIT
here are my variables
type limitHandler struct {
connc chan struct{}
handler http.Handler
}
var conn *amqp.Connection
var q amqp.Queue
EDIT 2
func (h *limitHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
select {
case <-h.connc:
fmt.Println("ServeHTTP");
h.handler.ServeHTTP(w, req)
h.connc <- struct{}{}
default:
http.Error(w, "503 too busy", http.StatusServiceUnavailable)
}
}
EDIT 3
func failOnError(err error, msg string) {
if err != nil {
log.Fatalf("%s: %s", msg, err)
panic(fmt.Sprintf("%s: %s", msg, err))
}
}
I have an unknown number of peers that I will need to make TCP connections to. I'm running into a few problems and I'm not certain whether my overall approach is correct. My current setup on the client side consists of a Peer Manager that shares its EventLoopGroup and creates clients as needed:
public class PeerManagement
{
public PeerManagement()
{
// this group is shared across all clients
_eventLoopGroup = new NioEventLoopGroup();
_peers = new ConcurrentHashMap<>();
}
public void send(String s, String host)
{
// ensure that the peer exists
setPeer(host);
// look up the peer
Peer requestedPeer = _peers.get(host);
// send the request directly to the peer
requestedPeer.send(s);
}
private synchronized void setPeer(String host)
{
if (!_peers.containsKey(host))
{
// create the Peer using the EventLoopGroup & connect
Peer peer = new Peer();
peer.connect(_eventLoopGroup, host);
// add the peer to the Peer list
_peers.put(host, peer);
}
}
}
The Peer class:
public class Peer
{
private static final int PORT = 6010;
private Bootstrap _bootstrap;
private ChannelFuture _channelFuture;
public boolean connect(EventLoopGroup eventLoopGroup, String host)
{
_bootstrap = new Bootstrap();
_bootstrap.group(eventLoopGroup)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
.handler(new ChannelInitializer<SocketChannel>()
{
@Override
public void initChannel(SocketChannel socketChannel) throws Exception
{
socketChannel.pipeline().addLast(new LengthFieldBasedFrameDecoder( 1024,0,4,0,4));
socketChannel.pipeline().addLast(new LengthFieldPrepender(4));
socketChannel.pipeline().addLast("customHandler", new CustomPeerHandler());
}
} );
// hold this for communicating with client
_channelFuture = _bootstrap.connect(host, PORT);
return _channelFuture.syncUninterruptibly().isSuccess();
}
public boolean send(String s)
{
if (_channelFuture.channel().isWritable())
{
// not the best method but String will be replaced by byte[]
ByteBuf buffer = _channelFuture.channel().alloc().buffer();
buffer.writeBytes(s.getBytes());
// NEVER returns true but the message is sent
return _channelFuture.channel().writeAndFlush(buffer).isSuccess();
}
return false;
}
}
If I send the string "this is a test" then writeAndFlush().isSuccess() is always false, but the message is sent, and then I get the following on the server side:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| ff 00 00 00 00 00 00 00 01 7f |.......... |
+--------+-------------------------------------------------+----------------+
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 00 00 00 0e 74 68 69 73 20 69 73 20 61 20 74 65 |....this is a te|
|00000010| 73 74 |st |
+--------+-------------------------------------------------+----------------+
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 1024: 4278190084 - discarded
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| ff 00 00 00 00 00 00 00 01 7f |.......... |
+--------+-------------------------------------------------+----------------+
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 1024: 4278190084 - discarded
The reason that writeAndFlush().isSuccess() returns false is that, like all outbound commands, writeAndFlush() is asynchronous. The actual write is done in the channel's event loop thread, and this just hasn't happened yet when you call isSuccess() in the main thread. If you want to block and wait for the write to complete you could use:
channel.writeAndFlush(msg).sync().isSuccess();
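Alternatively, if you don't want to block the calling thread at all, you can attach a listener and react when the write actually completes (a sketch; the error handling is just illustrative):

channel.writeAndFlush(msg).addListener((ChannelFutureListener) future -> {
    if (!future.isSuccess()) {
        // the write failed; the cause explains why
        future.cause().printStackTrace();
    }
});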
The error you see on the server side is because of this frame that arrives before your "this is a test" message:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| ff 00 00 00 00 00 00 00 01 7f |.......... |
+--------+-------------------------------------------------+----------------+
The LengthFieldBasedFrameDecoder tries to decode the first 4 bytes ff 00 00 00 as the length, which is obviously too large. Do you know what is sending this frame? Could it be your CustomPeerHandler?
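For what it's worth, the number in the exception matches that stray frame exactly: with new LengthFieldBasedFrameDecoder(1024, 0, 4, 0, 4) the adjusted frame length is the 4-byte length field value plus the lengthAdjustment (0) plus the 4 bytes of the length field itself, i.e. 0xFF000000 = 4,278,190,080, plus 4 = 4,278,190,084, which is why you see "Adjusted frame length exceeds 1024: 4278190084".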
Upon calling the following code snippet:
Message message_in = null;
if (inbox instanceof IMAPFolder) {
    IMAPFolder f = (IMAPFolder) inbox;
    f.idle();
}
System.out.println("IDLE done");
message_in = inbox.getMessage(inbox.getMessageCount());
message_in.setFlag(Flags.Flag.DELETED, true);
inbox.expunge();
I receive the error message:
Got 1 new messages
***********************************************************
------------ Message 1 ------------
DONE
A4 OK IDLE completed.
A5 FETCH 13 (ENVELOPE INTERNALDATE RFC822.SIZE)
IDLE done
* 13 FETCH (ENVELOPE ("Wed, 29 Aug 2012 13:25:15 -0500" "Doc Com: Voice msg from Outside caller 5555555555 Unread:2" (("Support" NIL "support" "example.com")) NIL NIL (("Support" NIL "support" "example.com")) NIL NIL "<94BA85B8-FC4D-4193-B386-7FD2C1DE1B1F@example.com>" "<873439BD-8640-47D9-BF88-5FD3521B8173@example.com>") INTERNALDATE "29-Aug-2012 13:25:17 -0500" RFC822.SIZE 967)
A5 OK FETCH completed.
A6 STORE 13 +FLAGS (\Deleted)
* 13 FETCH (FLAGS (\Deleted \Recent))
A6 OK STORE completed.
A7 EXPUNGE
* 13 EXPUNGE
* 12 EXPUNGE
* 11 EXPUNGE
* 10 EXPUNGE
* 9 EXPUNGE
* 8 EXISTS
A7 OK EXPUNGE completed.
javax.mail.MessageRemovedException
at com.sun.mail.imap.IMAPMessage.checkExpunged(IMAPMessage.java:205)
at com.sun.mail.imap.IMAPMessage.loadBODYSTRUCTURE(IMAPMessage.java:1305)
at com.sun.mail.imap.IMAPMessage.getContentType(IMAPMessage.java:450)
at javax.mail.internet.MimeBodyPart.isMimeType(MimeBodyPart.java:1050)
at javax.mail.internet.MimeMessage.isMimeType(MimeMessage.java:986)
at com.example.vmmonitor.main.VMMonitor.getText(VMMonitor.java:211)
at com.example.vmmonitor.main.VMMonitor.access$000(VMMonitor.java:27)
at com.example.vmmonitor.main.VMMonitor$1.messagesAdded(VMMonitor.java:109)
at javax.mail.event.MessageCountEvent.dispatch(MessageCountEvent.java:150)
at javax.mail.EventQueue.run(EventQueue.java:134)
at java.lang.Thread.run(Thread.java:680)
DEBUG IMAP: IMAPProtocol noop
What is the issue? Why am I able to flag a message as DELETED but not able to expunge the mailbox?
I was able to patch the issue by adding a Thread.sleep() as follows:
Message message_in = null;
if (inbox instanceof IMAPFolder) {
    IMAPFolder f = (IMAPFolder) inbox;
    f.idle();
}
System.out.println("IDLE done");
message_in = inbox.getMessage(inbox.getMessageCount());
message_in.setFlag(Flags.Flag.DELETED, true);
Thread.sleep(2000); /***************** bug patch ***********/
inbox.expunge();
This multithreaded program is accessing the inbox resource in a non-thread-safe manner. The program is structured such that inbox.expunge() and other processing (the messagesAdded listener seen in the stack trace) are concurrently accessing the mailbox, and hence the exception is thrown.
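A sketch of one way to serialize that access, assuming the main flow and the listener can share the same lock (illustrative rather than a drop-in fix):

// main flow: flag and expunge while holding the lock
synchronized (inbox) {
    Message messageIn = inbox.getMessage(inbox.getMessageCount());
    messageIn.setFlag(Flags.Flag.DELETED, true);
    inbox.expunge();
}

// the messagesAdded listener would need to take the same lock before
// reading message content, otherwise the MessageRemovedException can still occur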