Using Netty to send several RTSP messages - java

I want to create an RTSP client that sends some RTSP messages. I am using Netty to write it, but my code can only send one message. How do I send another message?
My client code looks like this:
public class RtspClient {

    public static class ClientHandler extends SimpleChannelInboundHandler<DefaultHttpResponse> {
        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) {
            ctx.flush();
        }

        @Override
        protected void channelRead0(ChannelHandlerContext ctx, DefaultHttpResponse msg) throws Exception {
            System.out.println(msg.toString());
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        final ClientHandler handler = new ClientHandler();
        Bootstrap b = new Bootstrap();
        b.group(workerGroup);
        b.channel(NioSocketChannel.class);
        b.option(ChannelOption.SO_KEEPALIVE, true);
        b.remoteAddress("127.0.0.1", 8557);
        b.handler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                ChannelPipeline p = ch.pipeline();
                p.addLast("encoder", new RtspEncoder());
                p.addLast("decoder", new RtspDecoder());
                p.addLast(handler);
            }
        });
        Channel channel = b.connect().sync().channel();

        DefaultHttpRequest request = new DefaultHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.PLAY, "rtsp:123");
        request.headers().add(RtspHeaderNames.CSEQ, 1);
        request.headers().add(RtspHeaderNames.SESSION, "294");
        channel.writeAndFlush(request);

        Thread.sleep(10000);
        System.out.println(channel.isWritable());
        System.out.println(channel.isActive());

        request = new DefaultHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.TEARDOWN, "rtsp3");
        request.headers().add(RtspHeaderNames.CSEQ, 2);
        request.headers().add(RtspHeaderNames.SESSION, "294");
        channel.writeAndFlush(request);

        Scanner sc = new Scanner(System.in);
        sc.nextLine();
        channel.closeFuture().sync();
    }
}
This code only sends the first message; the server never receives the second one. How do I send another message?

You want to use DefaultFullHttpRequest or you need to "terminate" each DefaultHttpRequest with a LastHttpContent.
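For example, a minimal sketch of both options, reusing the requests from the question (the URIs and header values are copied from there):

// Option 1: a DefaultFullHttpRequest carries its own end-of-message marker.
DefaultFullHttpRequest play = new DefaultFullHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.PLAY, "rtsp:123");
play.headers().add(RtspHeaderNames.CSEQ, 1);
play.headers().add(RtspHeaderNames.SESSION, "294");
channel.writeAndFlush(play);

// Option 2: explicitly terminate a plain DefaultHttpRequest.
DefaultHttpRequest teardown = new DefaultHttpRequest(RtspVersions.RTSP_1_0, RtspMethods.TEARDOWN, "rtsp3");
teardown.headers().add(RtspHeaderNames.CSEQ, 2);
teardown.headers().add(RtspHeaderNames.SESSION, "294");
channel.write(teardown);
channel.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT);

Without the terminator, the encoder still treats the first message as unfinished, so the second request is never written out.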

Related

Mina usage of DatagramConnector does not work

I have a TCP client based on the Mina framework (v2.0.21, Java 8). It works fine.
Here is the minimal example:
private static IoConnector connector;

public static void main(String[] args) throws InterruptedException {
    connector = new NioSocketConnector();
    connector.getFilterChain().addLast("logger", new LoggingFilter());
    connector.getFilterChain().addLast("codec",
            new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("UTF-8"))));
    connector.setHandler(new Handler());
    try {
        ConnectFuture connFuture = connector.connect(new InetSocketAddress("x.x.x.x", 9999));
        connFuture.awaitUninterruptibly();
        connFuture.getSession();
    } catch (Exception e) {
        System.err.println(e);
    }
    while (true) {
        System.out.println("sleep.");
        Thread.sleep(3000);
    }
}
This is my handler:
public class Handler implements IoHandler {

    @Override
    public void messageReceived(IoSession session, Object message) throws Exception {
        String str = (String) message;
        System.out.println("->" + str);
    }

    @Override
    public void sessionCreated(IoSession session) throws Exception {
        System.out.println("CREATED.");
    }

    @Override
    public void sessionOpened(IoSession session) throws Exception {
        System.out.println("OPENED.");
    }

    ...
}
Now I have changed the line
connector = new NioSocketConnector();
to
connector = new NioDatagramConnector();
to be able to receive data via UDP.
If I now send packets via UDP (e.g. using a network test tool) to port 9999, this program no longer receives anything. But I can see from the log output that the session was created and opened. Can somebody explain why UDP is not working (more specifically: messageReceived() is never called) while TCP does?
UPDATE: As a test tool I am using this method:
public static void main(String[] args) throws IOException {
    InetAddress ia = InetAddress.getByName("x.x.x.x");
    int port = 9999;
    String s = "Message";
    byte[] data = s.getBytes();
    DatagramPacket packet = new DatagramPacket(data, data.length, ia, port);
    DatagramSocket toSocket = new DatagramSocket();
    toSocket.send(packet);
    toSocket.close();
    System.out.println("Send.");
}
Thanks
OK, the secret is knowing that in the UDP case both the "connector" and the "acceptor" side are handled by the class NioDatagramAcceptor.
This piece of code does the magic for the UDP "connector" side:
NioDatagramAcceptor acceptor = new NioDatagramAcceptor();
acceptor.getFilterChain().addLast("logger", new LoggingFilter());
acceptor.getFilterChain().addLast("codec",
        new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("UTF-8"))));
acceptor.setHandler(new Handler());
acceptor.bind(new InetSocketAddress(9999));

Shutting down Netty server itself

I'm using a Netty server to solve the following: reading a big file line by line and processing it. Doing it on a single machine is still slow, so I've decided to use the server to serve chunks of data to clients. That already works, but I also want the server to shut itself down once the whole file has been processed. The source code I'm using right now is:
public static void main(String[] args) {
    new Thread(() -> {
        // reading the big file and populating 'dq' - data queue
    }).start();
    final EventLoopGroup bossGroup = new NioEventLoopGroup(1);
    final EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup, workerGroup)
         .channel(NioServerSocketChannel.class)
         .option(ChannelOption.SO_BACKLOG, 100)
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             public void initChannel(SocketChannel ch) {
                 ChannelPipeline p = ch.pipeline();
                 p.addLast(new StringDecoder());
                 p.addLast(new StringEncoder());
                 p.addLast(new ServerHandler(dq, bossGroup, workerGroup));
             }
         });
        ChannelFuture f = b.bind(PORT).sync();
        f.channel().closeFuture().sync();
    } finally {
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
class ServerHandler extends ChannelInboundHandlerAdapter {

    // parameter types assumed: 'dq' is the data queue, plus the two event-loop groups
    public ServerHandler(Queue<String> dq, EventLoopGroup bossGroup, EventLoopGroup workerGroup) {
        // assigning params to instance fields
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // creating bulk of data from 'dq' and sending to client, e.g.
        // ctx.write(dq.get());
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        if (dq.isEmpty() /* or other check that file was processed */) {
            try {
                ctx.channel().closeFuture().sync();
            } catch (InterruptedException ie) {
                // ...
            }
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        ctx.close();
        ctx.executor().parent().shutdownGracefully();
    }
}
Is the server shutdown in the channelReadComplete(...) method correct? What I'm afraid of is that another client may still be being served (e.g. another client is being sent a big bulk while the current client has reached the end of 'dq').
The base code is from the Netty EchoServer/DiscardServer examples.
The question is: how do I shut down a Netty server (from a handler) once a specific condition is reached?
Thanks
You cannot shut down a server from a handler. What you could do is signal a different thread that it should shut down the server.
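For example, a minimal sketch of that idea using a CountDownLatch (the latch, and handing it to the handler through its constructor, are assumptions; any signalling mechanism would do):

// In main(): wait on a latch instead of only the closeFuture.
final CountDownLatch shutdownSignal = new CountDownLatch(1);
ChannelFuture f = b.bind(PORT).sync();
shutdownSignal.await();             // blocks the main thread until a handler signals
f.channel().close().sync();         // close the server channel
// the existing finally block still calls shutdownGracefully() on both groups

// In ServerHandler: only signal, never block or shut anything down on the event loop.
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
    if (dq.isEmpty()) {             // the "whole file processed" condition from the question
        ctx.close();                // close this client's connection
        shutdownSignal.countDown(); // wake the main thread, which performs the shutdown
    }
}

This keeps every blocking call off the event loop: the handler just counts down, and the main thread does the actual shutdown.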

SimpleChannelInboundHandler never fires channelRead0

One day I decided to create a Netty chat server using the TCP protocol. Currently it successfully logs connects and disconnects, but channelRead0 in my handler never fires. I tried a Python client.
Netty version: 4.1.6.Final
Handler code:
public class ServerWrapperHandler extends SimpleChannelInboundHandler<String> {

    private final TcpServer server;

    public ServerWrapperHandler(TcpServer server) {
        this.server = server;
    }

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        System.out.println("Client connected.");
        server.addClient(ctx);
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        System.out.println("Client disconnected.");
        server.removeClient(ctx);
    }

    @Override
    public void channelRead0(ChannelHandlerContext ctx, String msg) {
        System.out.println("Message received.");
        server.handleMessage(ctx, msg);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        System.out.println("Read complete.");
        super.channelReadComplete(ctx);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Output:
[TCPServ] Starting on 0.0.0.0:1052
Client connected.
Read complete.
Read complete.
Client disconnected.
Client code:
import socket

conn = socket.socket()
conn.connect(("127.0.0.1", 1052))
conn.send(b"Hello")
data = b""
tmp = conn.recv(1024)
while tmp:
    data += tmp
    tmp = conn.recv(1024)
print(data.decode("utf-8"))
conn.close()
By the way, the problem was in my initializer: I had added a DelimiterBasedFrameDecoder to my pipeline, and this decoder was holding the messages back. (DelimiterBasedFrameDecoder buffers incoming bytes until it sees a line delimiter, and the client above sends "Hello" without a trailing newline, so no complete frame ever reaches channelRead0.) I didn't need this decoder anyway, so I just deleted it, and everything started to work.
@Override
protected void initChannel(SocketChannel ch) throws Exception {
    // Create a default pipeline implementation.
    ChannelPipeline pipeline = ch.pipeline();
    // Protocol Decoder - translates binary data (e.g. ByteBuf) into a Java object.
    // Protocol Encoder - translates a Java object into binary data.
    // Add the text line codec combination first,
    pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter())); // <--- DELETE THIS
    pipeline.addLast("decoder", new StringDecoder());
    pipeline.addLast("encoder", new StringEncoder());
    pipeline.addLast("handler", new ServerWrapperHandler(tcpServer));
}
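For completeness: keeping the frame decoder would also have worked if the client terminated each message with a line delimiter. A minimal sketch of such a client in Java (host and port taken from the question; the class name is made up for illustration):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class NewlineClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("127.0.0.1", 1052)) {
            OutputStream out = socket.getOutputStream();
            // The trailing '\n' completes a frame for DelimiterBasedFrameDecoder,
            // so the server's channelRead0 fires for each line.
            out.write("Hello\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}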

Netty: Why use ChannelInboundHandlerAdapter in TimeServerHandler?

I am going through Netty's documentation here and the diagram here.
My question is: the TimeServer is writing the time into the socket for the client to read. Shouldn't it use a ChannelOutboundHandlerAdapter? Why is the logic in a ChannelInboundHandlerAdapter?
I couldn't understand it; please explain.
TimeServer:
public class TimeServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(final ChannelHandlerContext ctx) { // (1)
        final ByteBuf time = ctx.alloc().buffer(4); // (2)
        time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));

        final ChannelFuture f = ctx.writeAndFlush(time); // (3)
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                assert f == future;
                ctx.close();
            }
        }); // (4)
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
TimeClient:
public class TimeClient {
    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}
The reason we use a ChannelInboundHandlerAdapter is that we are writing into a channel that was established by the client to the server. Since that connection event is inbound with respect to the server, we use a ChannelInboundHandlerAdapter. The client connects to the server through the channel, and it is into this channel that the server sends out the time.
Because your server will respond to incoming messages, it will need to implement interface ChannelInboundHandler, which defines methods for acting on inbound events.
Further, ChannelInboundHandlerAdapter has a straightforward API, and each of its methods can be overridden to hook into the event lifecycle at the appropriate point.
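To make the direction of events concrete: the client side of the same user-guide example also uses an inbound handler, because for the client, receiving the time is the inbound event. A sketch along the lines of the guide's TimeClientHandler:

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg;
        try {
            // The server wrote a 32-bit timestamp; reading it here is the inbound event.
            long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        } finally {
            m.release();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

So both sides react to inbound events (channelActive on the server, channelRead on the client) and issue their writes from inside those callbacks.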
I just started learning Netty; hope this helps. :)

Netty RtspEncoder/Decoder issue

To develop an RTSP client (just the message transactions, not playing video), I thought I would use Netty, and I have created the classes below. (RTSP is causing me far too many problems; I don't know, is this down to a lack of knowledge? Anyhow...)
TestRtspClient.java
public class TestRtspClient {

    private final String host;
    private final int port;

    public TestRtspClient(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public void start() throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(group);
            bootstrap.channel(NioSocketChannel.class);
            bootstrap.handler(new TestRtspClientInitializer());
            Channel channel = bootstrap.connect(host, port).sync().channel();

            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            // DESCRIBE rtsp://192.168.1.26:8554/nature.ts RTSP/1.0\r\nCSeq: 3\r\n\r\n
            while (true) {
                channel.write(br.readLine() + "\r\n");
                channel.flush();
            }
        } finally {
            group.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        new TestRtspClient("192.168.1.26", 8554).start();
    }
}
and here is TestRtspClientInitializer.java
public class TestRtspClientInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ChannelPipeline pipe = ch.pipeline();
//        pipe.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
//        pipe.addLast("decoder", new StringDecoder());
//        pipe.addLast("encoder", new StringEncoder());
//        pipe.addLast("encoder", new RtspRequestEncoder());
//        pipe.addLast("decoder", new RtspResponseDecoder());
        pipe.addLast("encoder", new HttpRequestEncoder());
        pipe.addLast("decoder", new HttpResponseDecoder());
        pipe.addLast("handler", new TestRtspClientHandler());
    }
}
and here is TestRtspClientHandler.java
public class TestRtspClientHandler extends ChannelInboundMessageHandlerAdapter<String> {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, String msg) throws Exception {
        if (msg instanceof HttpResponse) {
            System.out.println("Rtsp Response");
        } else if (msg instanceof HttpRequest) {
            System.out.println("Rtsp Request");
        } else {
            System.err.println("not supported format");
        }
    }
}
I am using Live555MediaServer as the RTSP server and Netty 4.0.0.CR3. When I use DelimiterBasedFrameDecoder with StringDecoder and StringEncoder it works fine, but if I use the RtspRequest/Response encoder/decoder I get the following warning and no message is sent to Live555 (the same happens with the HttpRequest/Response encoder and decoder).
I am passing this as a command-line argument in Eclipse:
DESCRIBE rtsp://192.168.1.26:8554/nature.ts RTSP/1.0\r\nCSeq: 3\r\n
Mar 25, 2014 6:45:28 PM io.netty.channel.DefaultChannelPipeline$ByteHeadHandler flush
WARNING: Discarded 1 outbound message(s) that reached at the head of the pipeline. Please check your pipeline configuration.
Please help me fix this issue, and also explain briefly what is wrong in these code modules.
Thank you.
First of all, please upgrade to the latest Netty 4.0.x version.
Because you specified <String> when you extended ChannelInboundMessageHandlerAdapter in your TestRtspClientHandler, it will only receive messages whose type is String, which is not what the HttpResponseDecoder produces. You have to use HttpObject instead of String.
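A sketch of the corrected handler, with the generic parameter and the method signature both changed to HttpObject:

public class TestRtspClientHandler extends ChannelInboundMessageHandlerAdapter<HttpObject> {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
        if (msg instanceof HttpResponse) {
            System.out.println("Rtsp Response");
        } else if (msg instanceof HttpRequest) {
            System.out.println("Rtsp Request");
        } else {
            System.err.println("not supported format");
        }
    }
}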
