I have a network client/server application that uses the Java ZeroMQ
framework for communication.
There is one main server and N clients that poll it. When the server comes online,
the clients connect to it and some short messaging goes on between them.
Until now I worked with a single client and it worked fine,
but when I add another client (that's 2 in total)
I get null as the returned message in the request:
request = socket.recv (0);
It is based on this example:
http://zguide.zeromq.org/java:mtserver
Here is part of my code (the full code is very long).
All the context and ZeroMQ settings are initialized and not null,
and I always get this exception:
Exception in thread "Thread-1" org.zeromq.ZMQException: Operation cannot be accomplished in current state(0x9523dfb)
at org.zeromq.ZMQ$Socket.recv(Native Method)
at com.controller.core.Daemon$1.run(Daemon.java:127)
for(int thread_nbr = 0; thread_nbr < m_iThreadPoolCount; thread_nbr++) {
Thread worker_routine = new Thread() {
@Override
public void run() {
//synchronized(OBJ_LOCK) {
ZMQ.Socket socket = m_pNetworkManager.getContext().socket(ZMQ.REP);//context.socket(ZMQ.REP);
socket.connect ("inproc://workers");
while (true) {
/** Wait for next request from client (C string) */
byte[] request = null;
try{
if(m_pNetworkManager.getContext()!=null) // it's never null
{
request = socket.recv (0);
}
}catch (Exception e)
{
// it always ends up here with the exception above, so request stays null
}
boolean bFoundInList = false;
if(request!=null)
{
// multi frame sending
socket.send(m_UT.getbyteArray(
m_UT.getReplayStructure(aStateMap_replay)
),ZMQ.SNDMORE);
socket.send(new byte[0], ZMQ.SNDMORE);
byte[] byteFileStruct = null;
byteFileStruct = m_UT.serialize(stateFilesStruct);
boolean send = socket.send(byteFileStruct,0);
} // socket.recv end
}
// }// synchronized block
}
}; //Thread worker_routine
worker_routine.start();
}
// Connect work threads to client threads via a queue
ZMQQueue zMQQueue = new ZMQQueue( m_pNetworkManager.getContext(),
m_pNetworkManager.getClients(),
m_pNetworkManager.getWorkers());
zMQQueue.run();
// We never get here but clean up anyhow
m_pNetworkManager.getClients().close();
m_pNetworkManager.getWorkers().close();
m_pNetworkManager.getContext().term();
}
I have also added the NetworkManager class:
public class NetworkManager {
/** ZeroMQ context */
private ZMQ.Context m_context = null;
/** ZeroMQ socket */
private ZMQ.Socket m_socket = null;
/** representation of the clients */
ZMQ.Socket m_clients = null;
/** representation of the workers threads */
ZMQ.Socket m_workers = null;
/**
* NetworkManager constructor.
*/
public NetworkManager()
{
;
}
/**
* Setup the network ZeroMQ network layer
* @param sControllerDomain the Controller domain name and port
*/
public void Init(String sControllerDomain)
{
/** Prepare our context and socket */
m_context = ZMQ.context(1);
m_clients = m_context.socket(ZMQ.ROUTER);
// m_clients = m_context.socket(ZMQ.REP);
m_clients.bind (sControllerDomain);
m_workers = m_context.socket(ZMQ.DEALER);
m_workers.bind ("inproc://workers");
}
/**
* Get ZeroMQ context
* @return ZMQ.Context
*/
public ZMQ.Context getContext() {
return m_context;
}
/**
* get ZeroMQ Socket
* @return ZMQ.Socket
*/
public ZMQ.Socket getSocket() {
return m_socket;
}
/**
* get the workers as ZMQ.Socket
* @return ZMQ.Socket
*/
public ZMQ.Socket getWorkers() {
return m_workers;
}
/**
* get the Clients as ZMQ.Socket
* @return ZMQ.Socket
*/
public ZMQ.Socket getClients() {
return m_clients;
}
}
What OS are you running on? If you are using Windows, the m_workers.bind("inproc://workers") operation is not supported, IIRC.
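For comparison, the mtserver pattern from the Guide that the question links to boils down to the sketch below (the endpoint and the thread count are placeholders, not values from the post). The key constraints are that each worker thread owns its own REP socket, and that a REP socket must strictly alternate one recv with one send; breaking that order, or sharing a socket between threads, is the usual cause of the "Operation cannot be accomplished in current state" error.
import org.zeromq.ZMQ;
import org.zeromq.ZMQQueue;
public class MtServerSketch {
    public static void main(String[] args) {
        final ZMQ.Context context = ZMQ.context(1);
        // ROUTER socket facing the clients (endpoint is just an example)
        ZMQ.Socket clients = context.socket(ZMQ.ROUTER);
        clients.bind("tcp://*:5555");
        // DEALER socket facing the in-process worker threads
        ZMQ.Socket workers = context.socket(ZMQ.DEALER);
        workers.bind("inproc://workers");
        for (int i = 0; i < 5; i++) {
            new Thread() {
                @Override
                public void run() {
                    // Each worker owns its own REP socket; ZeroMQ sockets are not thread-safe.
                    ZMQ.Socket socket = context.socket(ZMQ.REP);
                    socket.connect("inproc://workers");
                    while (!Thread.currentThread().isInterrupted()) {
                        byte[] request = socket.recv(0);     // exactly one recv...
                        socket.send("World".getBytes(), 0);  // ...then exactly one send (the reply)
                    }
                    socket.close();
                }
            }.start();
        }
        // Shuttle messages between the ROUTER and DEALER sockets (blocks forever)
        new ZMQQueue(context, clients, workers).run();
    }
}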
Related
I am writing a UDP program in Java. I wish to send a message from the server to the client. However, I am using the UDP protocol.
How do I ensure that the client is connected before the datagram packet is sent?
buf = stringMessage.getBytes();
serversocket.send(new DatagramPacket(buf, stringMessage.length(), ia, cport));
// how to ensure that client is connected before sending?
The UDP protocol doesn't have state, so there is no "connection".
You either use TCP or have to make your server respond to confirm that the message was received.
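A minimal sketch of that application-level acknowledgement idea (the port, payload, timeout and retry count below are just illustrative choices, not part of the question's code):
import java.net.*;
import java.nio.charset.StandardCharsets;
public class AckClient {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        InetAddress server = InetAddress.getByName("127.0.0.1");
        DatagramSocket socket = new DatagramSocket();
        socket.setSoTimeout(2000);                    // wait at most 2 seconds for the ack
        try {
            byte[] ackBuf = new byte[64];
            for (int attempt = 0; attempt < 3; attempt++) {
                socket.send(new DatagramPacket(payload, payload.length, server, 9876));
                try {
                    DatagramPacket ack = new DatagramPacket(ackBuf, ackBuf.length);
                    socket.receive(ack);              // throws SocketTimeoutException if no ack arrives
                    System.out.println("ack: " + new String(ack.getData(), 0, ack.getLength(),
                            StandardCharsets.UTF_8));
                    return;                           // acknowledged, we are done
                } catch (SocketTimeoutException e) {
                    System.out.println("no ack, retrying...");
                }
            }
            System.out.println("gave up after 3 attempts");
        } finally {
            socket.close();
        }
    }
}
The server side simply echoes a short datagram back to the sender's address and port for every message it accepts.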
There is no such 'connected' state in the UDP protocol; however, you can create your own mechanism to keep something like a list of connected clients.
Below is some code I created to register UDP clients and maintain them in a list of connected clients.
When you create the server, it waits for incoming connections (UDP clients sending "connect" messages). When the server receives such a request, it checks whether that client is already in the client list; if not, it creates a new client, assigns it an ID, and sends a response to the client with the assigned ID and a "connected" message, something like:
"1001#connected". After sending the request, the client waits for the response; when the response arrives, the ID is extracted and stored in the client's ID property, and socket.connect(ip, port) is executed so that only requests/responses from/to the server are allowed.
/** @TODO consider adding a variable to specify the number of clients
* this class contains the main server connection with all clients
* connected to a game; this connection uses UDP and is really
* simple, so if you need another kind of connection you are free
* to create your own
* @author PavulZavala
*/
public class Server
implements Conectable
{
protected DatagramSocket serverSocket;
protected boolean isAccepting;
protected List<Client> clientList;
protected String ip;
protected int port;
private Thread connectThread;
/**
*
* @param port
* @throws IOException
*/
public Server( int port ) throws IOException
{
serverSocket = new DatagramSocket( port );
this.port = port;
this.isAccepting = true;
clientList = new ArrayList<>();
}//
/**
* Accept UDP connections and store them in clientList
* ----------------------------------------------
* this method is used to receive packets from UDP clients
* and store their IP and PORT in the client list;
* - you can set isAccepting to false to stop receiving more
* client connections, or simply call stopAccepting to finish
* the Thread.
* @TODO it could be changed to accept like a server socket
*/
@Override
public void connect()
{
connectThread = new Thread( ()->
{
while( isAccepting )
{
try {
//datagram packet to receive incoming request from client
DatagramPacket request =
new DatagramPacket( new byte[ Config.UDP_BUFFER_SIZE ], Config.UDP_BUFFER_SIZE );
serverSocket.receive( request );
//get Port and Address from client
//and check if exists in clientList
Client c = clientList
.stream()
.filter( client ->
{
return client.getIp().equals( request.getAddress().getHostAddress() );
}).findFirst().orElse( null );
//if no exists add it and send response
if( null == c )
{
Client client = new Client();
client.setIp( request.getAddress().getHostAddress() );
client.setPort( request.getPort() );
client.setId( generateId() );
//adding new client to the list
clientList.add( client );
byte[] bufResp = (client.getId() + "#connected").getBytes( "UTF-8" );
DatagramPacket resp =
new DatagramPacket(bufResp, bufResp.length,
InetAddress.getByName( client.getIp() ),
client.getPort());
System.err.println( client.getId()+ " Connected, response Sent" );
serverSocket.send( resp );
}//
} //
catch (IOException ex)
{
Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex);
}
}
});//.start();
connectThread.start();
}//
/**
* this stops the thread that accepts client connections
* @throws java.lang.InterruptedException
*/
public void stopAccepting() throws InterruptedException
{
connectThread.join();
}
/**
* this closes the DatagramSocket that is acting
* as server
*/
public void closeServer()
{
serverSocket.close();
}
/**
* used to receive UDP packets from clients;
* this method creates its own Thread so it can
* receive packets without blocking the game
* @param r
*/
public void receive( Requestable r)
{
new Thread(()->
{
while( true )
{
r.receiveData();
}
}).start();
}//
/**
* used to generate id for connected clients
* @return
*/
private int generateId()
{
return ++Config.SOCKET_ID_COUNTER;
}
/**
* used to send UDP packets to clients
* @param r
*/
public void send( Responsable r )
{
r.sendData();
}
public DatagramSocket getServerSocket() {
return serverSocket;
}
public void setServerSocket(DatagramSocket serverSocket) {
this.serverSocket = serverSocket;
}
public boolean isIsAccepting() {
return isAccepting;
}
public void setIsAccepting(boolean isAccepting) {
this.isAccepting = isAccepting;
}
public List<Client> getClientList() {
return clientList;
}
public void setClientList(List<Client> clientList) {
this.clientList = clientList;
}
public String getIp() {
return ip;
}
public void setIp(String ip) {
this.ip = ip;
}
public int getPort() {
return port;
}
public void setPort(int port) {
this.port = port;
}
}//
Client class:
How do you know the client is connected to the server? Simple: its id property must be different from zero. In my implementation all IDs are numeric, starting at 1001, and an ID is only ever created by the server. The only gap I have right now is how the server knows whether the client is still active. I am thinking of creating another thread that sends messages periodically from the client to the server to make sure we are still communicating; if the server does not receive a request from a client for, say, 5 minutes, the server disconnects the client, or it can stop sending broadcast messages to that client until it receives a new message from it (I am currently working on this).
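A rough sketch of that keep-alive idea, using the Client class shown below (not part of the project code; the "ping" payload and the 30-second interval are just placeholders I made up):
// Client side: periodically send a small "ping" so the server knows we are still alive.
// Assumes `client` is an already-connected Client instance as created further below.
Thread keepAlive = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            client.send("ping#" + client.getId());  // reuses Client.send(String)
            Thread.sleep(30_000);                   // every 30 seconds
        } catch (IOException | InterruptedException e) {
            break;                                  // stop pinging on error or interrupt
        }
    }
});
keepAlive.setDaemon(true);
keepAlive.start();
On the server side you would keep a last-seen timestamp per Client and drop (or stop broadcasting to) clients whose timestamp is older than the 5-minute window.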
public class Client
implements Conectable
{
protected DatagramSocket socket;
protected String ip;
protected int port;
protected int id;
private Thread connectThread;
/**
* constructor without arguments to use with getters and setters
* @throws java.net.SocketException
*/
public Client() throws SocketException
{
this.socket = new DatagramSocket();
id = 0;
//id is assigned by the server instead of being incremented here:
//id = ++Config.SOCKET_ID_COUNTER;
}
/**
* this constructor creates a client, indicating the ip and port
* where the server is listening
* @param ip
* @param port
* @throws SocketException
*/
public Client( String ip, int port ) throws SocketException
{
this();
this.setIp( ip );
this.setPort(port);
}
public DatagramSocket getSocket() {
return socket;
}
public String getIp() {
return ip;
}
public void setIp(String ip) {
this.ip = ip;
}
public int getPort() {
return port;
}
public void setPort(int port) {
this.port = port;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
/**
* this method sends a connect request to the server
*/
@Override
public void connect()
{
try
{
//send connect request to server
send( "connect" );
}
catch (IOException ex)
{
Logger.getLogger(Client.class.getName()).log(Level.SEVERE, null, ex);
}
connectThread =
new Thread( ()->
{
while( id == 0 )
{
DatagramPacket response =
new DatagramPacket(new byte[ Config.UDP_BUFFER_SIZE], Config.UDP_BUFFER_SIZE );
try
{
socket.receive( response );
String resp = new String( response.getData(), "UTF-8" );
resp = resp.trim();
System.err.println("getting DATA: "+resp);
if( resp.trim().contains( "connected" ) )
{
id = Integer.parseInt( resp.trim().split( "#" )[0] ) ;
socket.connect( InetAddress.getByName( ip ), port );
stopConnecting();
}
}
catch ( IOException | InterruptedException ex)
{
Logger.getLogger(Client.class.getName()).log(Level.SEVERE, null, ex);
}
}
});
connectThread.start();
}
/**
* method used to receive responses from the server;
* every time this method is called a new Thread is created,
* so be careful not to call it too many times
* @param r
*/
public void receive( Requestable r )
{
new Thread(()->
{
while( true )
{
r.receiveData();
}
}).start();
}
/**
* this method is used to send requests to server
* @param r
*/
public void send( Responsable r )
{
r.sendData();
}
/**
* this method will send a request to the server to
* the specific IP and port set by this class
* @param data request data
* @throws UnknownHostException
*/
public void send( String data ) throws UnknownHostException, IOException
{
byte[] dataBuf = data.getBytes();
DatagramPacket request =
new DatagramPacket(dataBuf,
dataBuf.length, InetAddress.getByName( ip ), port );
socket.send( request );
}
/**
* this method stops the Thread that is created
* when we attempt to connect to the server
* @throws InterruptedException
*/
public void stopConnecting() throws InterruptedException
{
connectThread.join();
}
}//
Server implementation; this can be done in the main of the app that will act as the server:
try
{
System.err.println("starting server");
Server s = new Server( 24586 );
//accept incoming connections
s.connect();
}
catch (IOException ex)
{
Logger.getLogger(DemoLevel.class.getName()).log(Level.SEVERE, "::: error con algo", ex);
}
Client implementation:
try
{
client = new Client( "127.0.0.1" , 24586 );
System.err.println("waiting to connect to the server");
client.connect();
}
catch ( SocketException ex )
{
Logger.getLogger(DemoLevel.class.getName()).log(Level.SEVERE, "::: error with server connection", ex);
}
I hope this is useful for you.
Console Messages from server:
> Task :run
starting server
1001 Connected, response Sent
Console Messages from client:
> Task :run
waiting to connect to the server
getting DATA: 1001#connected
This is my first question on StackOverflow and I hope I have adhered to the expected standards.
I have been taking over some code from someone else who isn't working here anymore and I'm pretty much stranded here. I searched and asked some colleagues (not too much Java experience unfortunately) but no-one seems to be able to help me. Searching didn't really help me either.
I'm sending Json requests to a Netty server from a client which intentionally is NOT implemented using Netty. For now it is just a simple Java socket, but the intention is to have a Flask client send requests to the Netty server. The requests arrive (both using Java Sockets and using Python Flask) and get properly processed in the pipeline, but I want to send a response to the client, and although I suspect where in the code the response should be sent, I'm clearly missing something, as I don't get any response. Any suggestions?
The Java Socket client (note that the json1 and json2 strings have been omitted from the snippet here as they are rather long, but they are formatted properly). It posts requests using a Socket and the related output stream. The response part (with the input stream for the same socket) is just a test which I have my doubts about, but I'm not sure how to do this otherwise (and that's why I kept it here). I've seen plenty of examples with clients implementing Netty interfaces and that seems to work fine, but as said, I want a client not using Netty to be able to receive the responses as well (if that's possible at all).
String serverResponse;
for (int j = 0; j < 100; j++) {
for (int i = 0; i < 1000; i++) {
try {
Socket s = new Socket("localhost", 12000);
PrintWriter out = new PrintWriter(s.getOutputStream(), true);
out.write(json1 + i + json2);
out.flush();
// Testing only - trying to get the response back from the server
BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
while(true) {
if ((serverResponse = in.readLine()) != null) {
log.info("server says", serverResponse);
break;
}
}
out.close();
s.close();
Thread.sleep(1000);
} catch (Exception ex) {
ex.printStackTrace();
}
}
Thread.sleep(2000);
}
MFTcpServer.java
/**
* Abstract TCP server class. This class should be extended by a subclass to implement an actual server.
*
* @param <R> The data to be read from the socket.
* @param <W> The data to be written (in case of duplex) to the socket.
*/
public abstract class MFTcpServer<R, W> {
protected final AtomicBoolean started;
protected MFTcpServer() {
this.started = new AtomicBoolean();
}
/**
* Start the server.
*
* @param initializer the channel initializers. they will be called when a new client connects to the server.
* @return instance of tcp server
*/
public final MFTcpServer<R, W> start(ChannelInitializer<Channel> initializer) {
if (!started.compareAndSet(false, true)) {
throw new IllegalStateException("Server already started");
}
doStart(initializer);
return this;
}
/**
* Start the server and wait for all the threads to be finished before shutdown.
* @param initializer the channel initializers. they will be called when a new client connects to the server.
*/
public final void startAndAwait(ChannelInitializer<Channel> initializer) {
start(initializer);
awaitShutdown();
}
/**
* Shutdown the server
* @return true if successfully shutdown.
*/
public final boolean shutdown() {
return !started.compareAndSet(true, false) || doShutdown();
}
/**
* Wait for all the threads to be finished before shutdown.
*/
public abstract void awaitShutdown();
/**
* Do the shutdown now.
* @return true if successfully shutdown
*/
public abstract boolean doShutdown();
/**
* start the server
* @param initializer the channel initializers. they will be called when a new client connects to the server.
* @return instance of tcp server
*/
public abstract MFTcpServer<R, W> doStart(ChannelInitializer<Channel> initializer);
/**
*
* @return the port where the server is running.
*/
public abstract int getPort();
}
MFNetty4TcpServer.java Actual server implementation
public class MFNetty4TcpServer<R, W> extends MFTcpServer<R, W> {
private static final Logger logger = LoggerFactory.getLogger(MFNetty4TcpServer.class);
private static final int BOSS_THREAD_POOL_SIZE = 2;
private int port;
private ServerBootstrap bootstrap;
private ChannelFuture bindFuture;
/**
* The constructor.
*
* @param port port where to listen
*/
protected MFNetty4TcpServer(int port) {
this.port = port;
final NioEventLoopGroup bossGroup = new NioEventLoopGroup(0, new DefaultEventExecutorGroup
(BOSS_THREAD_POOL_SIZE));
final NioEventLoopGroup workerGroup = new NioEventLoopGroup(0, new DefaultEventExecutorGroup
(JsonProducerConfig.THREAD_POOL_SIZE));
bootstrap = new ServerBootstrap()
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class);
}
@Override
public MFNetty4TcpServer<R, W> doStart(ChannelInitializer<Channel> initializer) {
bootstrap.childHandler(new ChannelInitializer<Channel>() {
@Override
protected void initChannel(Channel ch) throws Exception {
if (initializer != null) {
ch.pipeline().addLast(initializer);
}
}
});
try {
bindFuture = bootstrap.bind(port).sync();
if (!bindFuture.isSuccess()) {
// Connection not successful
throw new RuntimeException(bindFuture.cause());
}
SocketAddress localAddress = bindFuture.channel().localAddress();
if (localAddress instanceof InetSocketAddress) {
port = ((InetSocketAddress) localAddress).getPort();
logger.info("Started server at port: " + port);
}
} catch (InterruptedException e) {
logger.error("Error waiting for binding server port: " + port, e);
}
return this;
}
@Override
public void awaitShutdown() {
try {
bindFuture.channel().closeFuture().await();
} catch (InterruptedException e) {
Thread.interrupted(); // Reset the interrupted status
logger.error("Interrupted while waiting for the server socket to close.", e);
}
}
@Override
public boolean doShutdown() {
try {
bindFuture.channel().close().sync();
return true;
} catch (InterruptedException e) {
logger.error("Failed to shutdown the server.", e);
return false;
}
}
@Override
public int getPort() {
return port;
}
/**
* Creates a tcp server at the defined port.
*
* @param port port to listen to
* @param <R> data to be read
* @param <W> data to be written back. Only in case of duplex connection.
* @return instance of tcp server.
*/
public static <R, W> MFTcpServer<R, W> create(int port) {
return new MFNetty4TcpServer<>(port);
}
}
JsonProducerConfig.java The pipeline is set up here.
/**
* Spring Configuration class of the application.
*/
@Configuration
@Import({DatabusConfig.class})
public class JsonProducerConfig {
private static final Logger log = LoggerFactory.getLogger(JsonProducerConfig.class);
public static final int THREAD_POOL_SIZE = Runtime.getRuntime().availableProcessors() * 2;
public static final String TCP_SERVER = "tcpServer";
public static final String CHANNEL_PIPELINE_INITIALIZER = "channel_initializer";
public static final String MF_KAFKA_PRODUCER = "mf_kafka_producer";
public static final String JSON_AVRO_CONVERTOR = "jsonAvroConvertor";
@Value("#{systemProperties['tcpserver.port']?:'12000'}")
private String tcpServerPort;
@Bean(name = TCP_SERVER)
@Scope(ConfigurableBeanFactory.SCOPE_SINGLETON)
public MFTcpServer nettyTCPServer() {
return MFNetty4TcpServer.create(Integer.parseInt(tcpServerPort));
}
@Bean(name = MF_KAFKA_PRODUCER)
@Scope(ConfigurableBeanFactory.SCOPE_SINGLETON)
public MFKafkaProducer pushToKafka() {
return new MFKafkaProducer();
}
@Bean(name = JSON_AVRO_CONVERTOR)
@Scope(ConfigurableBeanFactory.SCOPE_SINGLETON)
public JsonAvroConvertor jsonAvroConvertor() {
return new JsonAvroConvertor();
}
/**
* This is where the pipeline is set for processing of events.
*
* @param jsonAvroConvertor converts json to avro
* @param kafkaProducer pushes to kafka
* @return channel initializer pipeline.
*/
@Bean(name = CHANNEL_PIPELINE_INITIALIZER)
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public ChannelInitializer<Channel> channelInitializers(JsonAvroConvertor jsonAvroConvertor,
MFKafkaProducer kafkaProducer) {
return new ChannelInitializer<Channel>() {
@Override
protected void initChannel(Channel channel) throws Exception {
if (log.isInfoEnabled())
log.info("initChannel - initing channel...");
channel.pipeline().addLast(new NioEventLoopGroup(0, new DefaultEventExecutorGroup(THREAD_POOL_SIZE)));
channel.pipeline().addLast(new JsonObjectDecoder(1048576));
channel.pipeline().addLast(jsonAvroConvertor);
channel.pipeline().addLast(kafkaProducer);
if (log.isInfoEnabled())
log.info("channel = " + channel.toString());
}
};
}
}
JsonProducer.java The main program
public class JsonProducer {
private static final Logger log = LoggerFactory.getLogger(JsonProducer.class);
private static MFTcpServer tcpServer;
/**
* Main startup method
*
* @param args not used
*/
public static void main(String[] args) {
System.setProperty("solschema", "false");
try {
// the shutdown hook.
Runtime.getRuntime().addShutdownHook(new Thread(
() -> {
if (tcpServer != null) {
tcpServer.shutdown();
}
}
));
AnnotationConfigApplicationContext context = new
AnnotationConfigApplicationContext(JsonProducerConfig.class);
tcpServer = (MFTcpServer) context.getBean(JsonProducerConfig.TCP_SERVER);
ChannelInitializer<Channel> channelInitializer = (ChannelInitializer<Channel>) context.
getBean(JsonProducerConfig.CHANNEL_PIPELINE_INITIALIZER);
tcpServer.startAndAwait(channelInitializer);
} catch (Exception t) {
log.error("Error while starting JsonProducer ", t);
System.exit(-1);
}
}
}
MFKafkaProducer.java, the last handler in the pipeline. Note the ctx.writeAndFlush(msg) in the channelRead method, which is where I understand the response should be initiated. But what comes after that? When running this, channelFuture.isSuccess() evaluates to false. The response object was an attempt at a String response.
@ChannelHandler.Sharable
public class MFKafkaProducer extends ChannelInboundHandlerAdapter {
private static final Logger log = LoggerFactory.getLogger(MFKafkaProducer.class);
@Resource
ApplicationContext context;
@Resource(name = DatabusConfig.ADMIN)
Admin admin;
private Map<String, IProducer> streams = new HashMap<>();
@PreDestroy
public void stop() {
removeAllStreams(); // then stop writing to producers
}
/**
* @param clickRecord the record to be pushed to kafka
* @throws Exception
*/
public void handle(GenericRecord clickRecord) throws Exception {
Utf8 clientId = null;
try {
clientId = (Utf8) clickRecord.get(SchemaUtil.APP_ID);
stream(producer(clientId.toString()), clickRecord);
} catch (Exception e) {
String message = "Could not push click data for clientId:" + clientId;
log.warn("handle - " + message + "!!!", e);
assert clientId != null;
removeStream(clientId.toString());
}
}
/**
* removes all the streams
*/
private void removeAllStreams() {
Set<String> strings = streams.keySet();
for (String clientId : strings) {
removeStream(clientId);
}
}
/**
* removes a particular stream
*
* @param clientId the stream to be removed
*/
private void removeStream(String clientId) {
Assert.notEmpty(streams);
IProducer producer = streams.get(clientId);
producer.stopProducer();
streams.remove(clientId);
}
/**
* @param producer the producer where data needs to be written
* @param clickRecord the record to be written
*/
private void stream(IProducer producer, GenericRecord clickRecord) {
producer.send(clickRecord);
}
/**
* This will create a producer in case it is not already created.
* If already created return the already present one
*
* @param clientId stream id
* @return the producer instance
*/
private IProducer producer(String clientId) {
if (streams.containsKey(clientId)) {
return streams.get(clientId);
} else {
IProducer producer = admin.createKeyTopicProducer(SchemaUtil.APP_ID, "test_" + clientId, new ICallback() {
@Override
public void onSuccess(long offset) {
if (log.isInfoEnabled())
log.info("onSuccess - Data at offset:" + offset + " send.");
}
@Override
public void onError(long offset, Exception ex) {
if (log.isInfoEnabled())
log.info("onError - Data at offset:" + offset + " failed. Exception: ", ex);
}
@Override
public void onStreamClosed() {
log.warn("onStreamClosed - Stream:" + clientId + " closed.");
removeStream(clientId);
}
});
producer.startProducer();
streams.put(clientId, producer);
return producer;
}
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
log.debug("KafkaProducer - channelRead() called with " + "ctx = [" + ctx + "], msg = [" + msg + "]");
if (msg instanceof GenericRecord) {
GenericRecord genericRecord = (GenericRecord) msg;
try {
handle(genericRecord);
log.debug("channelRead sending response");
Charset charset = Charset.defaultCharset();
ByteBuf response = Unpooled.copiedBuffer("Just a response", charset);
ChannelFuture future = ctx.writeAndFlush(msg);
future.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture channelFuture) throws Exception {
if (channelFuture.isSuccess())
log.info("channelRead - future.operationComplete - Response has been delivered to all channels");
else
log.info("channelRead - future.operationComplete - Response has NOT been delivered to all channels");
}
});
} catch (Exception ex) {
log.error("Something went wrong processing the generic record: " + msg + "\n ", ex);
}
} else {
log.debug("KafkaProducer - msg not of Type Generic Record !!! " + msg);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
// Close the connection when an exception is raised.
log.error("Something went wrong writing to Kafka: \n", cause);
ctx.close();
}
}
Using ChannelFuture#cause() I noticed I was not serializing a ByteBuf object, but a GenericRecord instead. Using
ByteBuf response = Unpooled.copiedBuffer(genericRecord.toString(), charset);
ChannelFuture future = ctx.writeAndFlush(response);
the GenericRecord gets converted to a ByteBuf and a response is sent using the writeAndFlush method.
The test client using a plain Socket implementation somehow never really received a response, but by switching to a SocketChannel this was resolved as well.
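Putting that together, the write path inside channelRead then looks roughly like this (only a sketch of the fix described above; the log messages are illustrative, and the surrounding fields ctx, genericRecord and log are the ones from the handler shown earlier):
// After handle(genericRecord) has succeeded, encode the reply as a ByteBuf
// before writing it; the pipeline has no encoder for a raw GenericRecord.
Charset charset = Charset.defaultCharset();
ByteBuf response = Unpooled.copiedBuffer(genericRecord.toString(), charset);
ChannelFuture future = ctx.writeAndFlush(response);
future.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture channelFuture) {
        if (channelFuture.isSuccess()) {
            log.info("channelRead - response delivered");
        } else {
            // cause() explains why the write failed (this is how the
            // GenericRecord-vs-ByteBuf problem showed up in the first place)
            log.warn("channelRead - response NOT delivered", channelFuture.cause());
        }
    }
});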
I am working with Apache Thrift. I am getting a TTransportException while everything looks fine in my code. Here is my Thrift server code:
private TNonblockingServerSocket socket;
/**
* @brief Store processor instance.
*/
private PringService.Processor processor;
/**
* Store server instance.
*/
private TServer tServer;
/**
*
* @brief A handle to the unique Singleton instance.
*/
static private ThriftServer _instance = null;
/**
* @brief The unique instance of this class.
* @throws TTransportException
*/
static public ThriftServer getInstance() throws TTransportException {
if (null == _instance) {
_instance = new ThriftServer();
}
return _instance;
}
/**
* @brief A Ctor for ThriftServer. Initializes all members.
* @throws TTransportException
*/
private ThriftServer() throws TTransportException {
socket = new TNonblockingServerSocket(Config.THRIFT_PORT);
processor = new PringService.Processor(new Handler());
THsHaServer.Args args = new THsHaServer.Args(socket);
args.processor(processor);
args.transportFactory(new TFramedTransport.Factory());
args.inputProtocolFactory(new TBinaryProtocol.Factory());
args.outputProtocolFactory(new TBinaryProtocol.Factory());
tServer = new THsHaServer(args);
/*tServer = new THsHaServer(processor, socket,
new TFramedTransport.Factory(),
new TFramedTransport.Factory(),
new TBinaryProtocol.Factory(),
new TBinaryProtocol.Factory());*/
}
/**
* @brief main method
* @param args the command line arguments
* @throws TTransportException
*/
public static void main(String[] args) throws TTransportException {
// To Run it directly from PringCore.jar, else use SmsProcessor Helper functionality
ThriftServer server = new ThriftServer();
server.execute(args);
}
@Override
/**
* @brief Starts the execution.
*/
protected void execute(String[] args) {
if (db != null) {
db.close();
}
tServer.serve();
}
private static class Handler implements PringService.Iface {
......
}
}
And this is my Thrift client:
TTransport transport;
try {
transport = new TSocket("localhost", Config.THRIFT_PORT);
transport.open();
TProtocol protocol = new TBinaryProtocol(transport);
PringService.Client client = new PringService.Client(protocol);
String result = client.importPringer(2558456, true);
System.out.println("Result String is ::"+result);
transport.close();
} catch (TTransportException e) {
e.printStackTrace();
} catch (TException e) {
e.printStackTrace();
}
When I run my Thrift server and then run the Thrift client, I get the following exception:
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
Am I using a mismatched transport/socket layer on my Thrift server or client? Or is there something else wrong?
Thanks in advance for your guidance :)
When you use TNonblockingServerSocket, you need to use TFramedTransport both server- and client-side. The documentation of TNonblockingServerSocket is quite explicit about that:
To use this server, you MUST use a TFramedTransport at the outermost transport, otherwise this server will be unable to determine when a whole method call has been read off the wire. Clients must also use TFramedTransport.
Your client should therefore look like this:
TTransport transport;
try {
transport = new TSocket("localhost", Config.THRIFT_PORT);
transport.open();
TProtocol protocol = new TBinaryProtocol(new TFramedTransport(transport));
PringService.Client client = new PringService.Client(protocol);
String result = client.importPringer(2558456, true);
System.out.println("Result String is ::"+result);
transport.close();
} catch (TTransportException e) {
e.printStackTrace();
} catch (TException e) {
e.printStackTrace();
}
I'm having an issue with a multi-threaded server I'm building as an academic exercise, more specifically with getting a connection to close down gracefully.
Each connection is managed by a Session class. This class maintains 2 threads for the connection, a DownstreamThread and an UpstreamThread.
The UpstreamThread blocks on the client socket and encodes all incoming strings into messages to be passed up to another layer to deal with. The DownstreamThread blocks on a BlockingQueue into which messages for the client are inserted. When there's a message on the queue, the Downstream thread takes the message off the queue, turns it into a string and sends it to the client. In the final system, an application layer will act on incoming messages and push outgoing messages down to the server to send to the appropriate client, but for now I just have a simple application that sleeps for a second on an incoming message before echoing it back as an outgoing message with a timestamp appended.
The problem I'm having is getting the whole thing to shut down gracefully when the client disconnects. The first issue I'm contending with is a normal disconnect, where the client lets the server know that it's ending the connection with a QUIT command. The basic pseudocode is:
while (!quitting) {
inputString = socket.readLine () // blocks
if (inputString != "QUIT") {
// forward the message upstream
server.acceptMessage (inputString);
} else {
// Do cleanup
quitting = true;
socket.close ();
}
}
The upstream thread's main loop looks at the input string. If it's QUIT the thread sets a flag to say that the client has ended communications and exits the loop. This leads to the upstream thread shutting down nicely.
The downstream thread's main loop waits for messages in the BlockingQueue for as long as the connection closing flag isn't set. When it is, the downstream thread is also supposed to terminate. However, it doesn't, it just sits there waiting. Its pseudocode looks like this:
while (!quitting) {
outputMessage = messageQueue.take (); // blocks
sendMessageToClient (outputMessage);
}
When I tested this, I noticed that when the client quit, the upstream thread shut down, but the downstream thread didn't.
After a bit of head scratching, I realised that the downstream thread is still blocking on the BlockingQueue waiting for an incoming message that will never come. The upstream thread doesn't forward the QUIT message any further up the chain.
How can I make the downstream thread shut down gracefully? The first idea that sprang to mind was setting a timeout on the take() call. I'm not too keen on this idea though, because whatever value you select, it's bound to be not entirely satisfactory. Either it's too long and a zombie thread sits there for a long time before shutting down, or it's too short and connections that have idled for a few minutes but are still valid will be killed. I did think of sending the QUIT message up the chain, but that requires it to make a full round trip to the server, then the application, then back down to the server again and finally to the session. This doesn't seem like an elegant solution either.
I did look at the documentation for Thread.stop() but that's apparently deprecated because it never worked properly anyway, so that looks like it's not really an option either. Another idea I had was to force an exception to be triggered in the downstream thread somehow and let it clean up in its finally block, but this strikes me as a horrible and kludgey idea.
I feel that both threads should be able to gracefully shutdown on their own, but I also suspect that if one thread ends it must also signal the other thread to end in a more proactive way than simply setting a flag for the other thread to check. As I'm still not very experienced with Java, I'm rather out of ideas at this point. If anyone has any advice, it would be greatly appreciated.
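For reference, the timeout idea mentioned above would look something like this in the downstream loop (sketch only, with an arbitrary one-second timeout):
while (!quitting) {
    // poll() returns null on timeout instead of blocking forever,
    // so the quitting flag gets re-checked at least once per second
    outputMessage = messageQueue.poll(1, TimeUnit.SECONDS);
    if (outputMessage != null) {
        sendMessageToClient(outputMessage);
    }
}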
For the sake of completeness, I've included the real code for the Session class below, though I believe the pseudocode snippets above cover the relevant parts of the problem. The full class is about 250 lines.
import java.io.*;
import java.net.*;
import java.util.concurrent.*;
import java.util.logging.*;
/**
* Session class
*
* A session manages the individual connection between a client and the server.
* It accepts input from the client and sends output to the client over the
* provided socket.
*
*/
public class Session {
private Socket clientSocket = null;
private Server server = null;
private Integer sessionId = 0;
private DownstreamThread downstream = null;
private UpstreamThread upstream = null;
private boolean sessionEnding = false;
/**
* This thread handles waiting for messages from the server and sending
* them to the client
*/
private class DownstreamThread implements Runnable {
private BlockingQueue<DownstreamMessage> incomingMessages = null;
private OutputStreamWriter streamWriter = null;
private Session outer = null;
@Override
public void run () {
DownstreamMessage message;
Thread.currentThread ().setName ("DownstreamThread_" + outer.getId ());
try {
// Send connect message
this.sendMessageToClient ("Hello, you are client " + outer.getId ());
while (!outer.sessionEnding) {
message = this.incomingMessages.take ();
this.sendMessageToClient (message.getPayload ());
}
// Send disconnect message
this.sendMessageToClient ("Goodbye, client " + getId ());
} catch (InterruptedException | IOException ex) {
Logger.getLogger (DownstreamThread.class.getName ()).log (Level.SEVERE, ex.getMessage (), ex);
} finally {
this.terminate ();
}
}
/**
* Add a message to the downstream queue
*
* @param message
* @return
* @throws InterruptedException
*/
public DownstreamThread acceptMessage (DownstreamMessage message) throws InterruptedException {
if (!outer.sessionEnding) {
this.incomingMessages.put (message);
}
return this;
}
/**
* Send the given message to the client
*
* @param message
* @throws IOException
*/
private DownstreamThread sendMessageToClient (CharSequence message) throws IOException {
OutputStreamWriter osw;
// Output to client
if (null != (osw = this.getStreamWriter ())) {
osw.write ((String) message);
osw.write ("\r\n");
osw.flush ();
}
return this;
}
/**
* Perform session cleanup
*
* @return
*/
private DownstreamThread terminate () {
try {
this.streamWriter.close ();
} catch (IOException ex) {
Logger.getLogger (DownstreamThread.class.getName ()).log (Level.SEVERE, ex.getMessage (), ex);
}
this.streamWriter = null;
return this;
}
/**
* Get an output stream writer, initialize it if it's not active
*
* @return A configured OutputStreamWriter object
* @throws IOException
*/
private OutputStreamWriter getStreamWriter () throws IOException {
if ((null == this.streamWriter)
&& (!outer.sessionEnding)) {
BufferedOutputStream os = new BufferedOutputStream (outer.clientSocket.getOutputStream ());
this.streamWriter = new OutputStreamWriter (os, "UTF8");
}
return this.streamWriter;
}
/**
*
* @param outer
*/
public DownstreamThread (Session outer) {
this.outer = outer;
this.incomingMessages = new LinkedBlockingQueue ();
System.out.println ("Class " + this.getClass () + " created");
}
}
/**
* This thread handles waiting for client input and sending it upstream
*/
private class UpstreamThread implements Runnable {
private Session outer = null;
@Override
public void run () {
StringBuffer inputBuffer = new StringBuffer ();
BufferedReader inReader;
Thread.currentThread ().setName ("UpstreamThread_" + outer.getId ());
try {
inReader = new BufferedReader (new InputStreamReader (outer.clientSocket.getInputStream (), "UTF8"));
while (!outer.sessionEnding) {
// Read whatever was in the input buffer
inputBuffer.delete (0, inputBuffer.length ());
inputBuffer.append (inReader.readLine ());
System.out.println ("Input message was: " + inputBuffer);
if (!inputBuffer.toString ().equals ("QUIT")) {
// Forward the message up the chain to the Server
outer.server.acceptMessage (new UpstreamMessage (sessionId, inputBuffer.toString ()));
} else {
// End the session
outer.sessionEnding = true;
}
}
} catch (IOException | InterruptedException e) {
Logger.getLogger (Session.class.getName ()).log (Level.SEVERE, e.getMessage (), e);
} finally {
outer.terminate ();
outer.server.deleteSession (outer.getId ());
}
}
/**
* Class constructor
*
* The Core Java volume 1 book said that a constructor such as this
* should be implicitly created, but that doesn't seem to be the case!
*
* @param outer
*/
public UpstreamThread (Session outer) {
this.outer = outer;
System.out.println ("Class " + this.getClass () + " created");
}
}
/**
* Start the session threads
*/
public void run () //throws InterruptedException
{
Thread upThread = new Thread (this.upstream);
Thread downThread = new Thread (this.downstream);
upThread.start ();
downThread.start ();
}
/**
* Accept a message to send to the client
*
* @param message
* @return
* @throws InterruptedException
*/
public Session acceptMessage (DownstreamMessage message) throws InterruptedException {
this.downstream.acceptMessage (message);
return this;
}
/**
* Accept a message to send to the client
*
* @param message
* @return
* @throws InterruptedException
*/
public Session acceptMessage (String message) throws InterruptedException {
return this.acceptMessage (new DownstreamMessage (this.getId (), message));
}
/**
* Terminate the client connection
*/
private void terminate () {
try {
this.clientSocket.close ();
} catch (IOException e) {
Logger.getLogger (Session.class.getName ()).log (Level.SEVERE, e.getMessage (), e);
}
}
/**
* Get this Session's ID
*
* @return The ID of this session
*/
public Integer getId () {
return this.sessionId;
}
/**
* Session constructor
*
* @param owner The Server object that owns this session
* @param sessionId The unique ID this session will be given
* @throws IOException
*/
public Session (Server owner, Socket clientSocket, Integer sessionId) throws IOException {
this.server = owner;
this.clientSocket = clientSocket;
this.sessionId = sessionId;
this.upstream = new UpstreamThread (this);
this.downstream = new DownstreamThread (this);
System.out.println ("Class " + this.getClass () + " created");
System.out.println ("Session ID is " + this.sessionId);
}
}
Instead of calling Thread.stop, use Thread.interrupt. That will cause the take method to throw an InterruptedException, which you can use to know that you should shut down.
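For example (a sketch only; it assumes the Session keeps a reference to the downstream Thread in a field, which the posted class currently creates as a local variable in run() and would need to store):
// Upstream thread, when QUIT is received:
outer.sessionEnding = true;
outer.downstreamThread.interrupt();   // downstreamThread is an assumed field holding the Thread
// Downstream thread main loop: the blocked take() now throws InterruptedException
try {
    while (!outer.sessionEnding) {
        DownstreamMessage message = this.incomingMessages.take();
        this.sendMessageToClient(message.getPayload());
    }
} catch (InterruptedException | IOException ex) {
    // interrupted because the session is ending (or the write failed); fall through to cleanup
} finally {
    this.terminate();
}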
Can you just create a "fake" quit message instead of setting outer.sessionEnding to true when "QUIT" appears? Putting this fake message in the queue will wake the DownstreamThread and you can end it. In that case you can even eliminate the sessionEnding variable.
In pseudo code this could look like this:
while (true) {
outputMessage = messageQueue.take (); // blocks
if (QUIT == outputMessage)
break
sendMessageToClient (outputMessage);
}
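In terms of the classes in the question, that could look like this (a sketch; QUIT_MESSAGE is a made-up sentinel, and DownstreamMessage's (Integer, String) constructor is assumed from how it is used elsewhere in the post):
// A shared sentinel used as the "fake" quit message.
private static final DownstreamMessage QUIT_MESSAGE = new DownstreamMessage(0, "QUIT");
// Upstream thread, when the client sends QUIT:
outer.downstream.acceptMessage(QUIT_MESSAGE);
// Downstream thread main loop (InterruptedException / IOException are still
// handled by the surrounding try, as in the original run()):
while (true) {
    DownstreamMessage outputMessage = incomingMessages.take();
    if (outputMessage == QUIT_MESSAGE) {   // reference comparison against the sentinel
        break;                             // woken up by the fake message, shut down
    }
    sendMessageToClient(outputMessage.getPayload());
}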
I am doing my assignment for Network Architecture 1, where I have to implement distance vector routing at each node.
At each node, I have a thread which listens for incoming DatagramPackets containing routing information from neighboring nodes only on a specific port. When a datagram arrives, the thread processes that datagram, and if there are updates in its internal routing tables, then it sends its routing information to all of its neighbors.
I am trying to do it in Java.
The problem I am facing is that when a datagram arrives, I need to process it. If during that time any other datagram arrives, it is dropped, as the thread is currently processing information. That means I have a loss of information.
Can anyone help me with this?
I am using the usual way of reading from a socket in java.
DatagramSocket socket = new DatagramSocket(4445, InetAddress.getByName("127.0.0.1"));
while (true) {
try {
byte[] buf = new byte[2000];
// receive request
DatagramPacket recvRequest = new DatagramPacket(buf, buf.length);
socket.receive(recvRequest);
//Some process of data in datagram
} catch (IOException e) {
e.printStackTrace();
}
}
You can process the received datagram in a separate thread, so the thread with the socket listener can continue to receive new datagrams.
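One way to do that without creating a brand-new thread per packet is to hand each datagram to a small thread pool (a sketch; the class name, pool size and buffer size are arbitrary choices of mine):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class DatagramReceiver {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(4445, InetAddress.getByName("127.0.0.1"));
        ExecutorService pool = Executors.newFixedThreadPool(4);  // worker threads do the processing
        while (true) {
            byte[] buf = new byte[2000];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);              // only this thread ever blocks on the socket
            // Copy exactly the received bytes, then hand them to a pool thread so the
            // (possibly slow) routing-table update never delays the next receive().
            byte[] data = Arrays.copyOf(packet.getData(), packet.getLength());
            pool.submit(() -> processRoutingUpdate(data));
        }
    }
    private static void processRoutingUpdate(byte[] data) {
        // distance vector processing of the received routing info would go here
    }
}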
This is the final project that I submitted.
It may have some improper documentation and some bad usage of Java.
As this project runs on the local system, instead of using different IP addresses with the same port number, I do it the other way around (same IP, different port numbers).
NetworkBoot.java provides the initial neighbor details to each router.
Thanks
-Sunny Jain
/*
* File Name : Router.java
* Public Class Name : Router
*
*/
//~--- JDK imports ------------------------------------------------------------
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.LinkedBlockingQueue;
import javax.swing.SwingUtilities;
/**
*
* NA1 project 2 spring 2009 semester
* @author sunny jain
*
*
*/
public class Router extends Thread {
/**
* HashMap containing list of neighbors and cost to reach them.
*/
private HashMap<Integer, Integer> hmapDirectNeighbours = new HashMap<Integer, Integer>(61);
/**
* HashMap containing list of destination as key and routing info to them as value.
* Routing info contains RouteDetail object.
* @see RouteDetail
*/
private HashMap<Integer, RouteDetail> hmapRoutes = new HashMap<Integer, RouteDetail>();
/**
* DatagramSocket
*/
private DatagramSocket dSoc;
/**
* DatagramPacket
*/
private DatagramPacket dpackReceive, dpackSend;
/**
* Inetaddress of system on which runs this algorithm.
*/
private InetAddress localAddress;
/**
* port to listen at for incoming route info from neighbors.
*/
int port;
private LinkedBlockingQueue<DatagramPacket> lbq = new LinkedBlockingQueue<DatagramPacket>();
/**
* The no-argument constructor is private to force initialization with an
* explicit port.
*/
private Router() {
}
/**
* Constructor taking a port number as parameter; creates a DatagramSocket
* to listen for incoming DatagramPackets on that socket.
* @param port
*/
public Router(int port) {
try {
this.port = port;
localAddress = InetAddress.getByName("127.0.0.1");
dSoc = new DatagramSocket(port, localAddress);
} catch (Exception ex) {
System.out.println("Error while creating socket : " + ex.getMessage());
}
this.start();
SwingUtilities.invokeLater(new Runnable() {
public void run() {
while (true) {
try {
received_Route_Info(lbq.take());
} catch (InterruptedException ex) {
System.out.println("Error while reading elements from datagram queue");
}}}});
}
public void setRouterBootInfo(String strNeighboursInfo) {
String[] strNeighbouringNodes = strNeighboursInfo.split(";");
for (int i = 0; i < strNeighbouringNodes.length; i++) {
String[] strNodeIpAndPort = strNeighbouringNodes[i].split(":");
hmapDirectNeighbours.put(Integer.valueOf(strNodeIpAndPort[0]), Integer.valueOf(strNodeIpAndPort[1]));
hmapRoutes.put(Integer.valueOf(strNodeIpAndPort[0]), new RouteDetail(null, Integer.valueOf(strNodeIpAndPort[1])));
}
propagateChanges();
// entry in route table... no need for infinity as we create an entry only when a node is reachable.
}
@Override
public void run() {
while (true) {
try {
byte[] buf = new byte[250];
// receive request
dpackReceive = new DatagramPacket(buf, buf.length);
dSoc.receive(dpackReceive);
lbq.put(dpackReceive);
} catch (InterruptedException ex) {
ex.printStackTrace();
dSoc.close();
} catch (IOException e) {
e.printStackTrace();
dSoc.close();
}
}
}
/**
* This method is called for each DatagramPacket received containing new
* routing information.
*
* It checks whether the packet came from a neighboring node (router)
* only. If so, it applies the distance vector algorithm to the data
* present in the datagram packet, and if this information causes any
* change in the local routing information, it displays the current
* updated local routing information and also sends this updated
* information to the other neighbours only.
*
* @param dataPckt
* @see #validate_Is_Packet_From_Neighbor(java.net.DatagramPacket)
* @see #apply_Routing_Algorithm(java.net.DatagramPacket, java.util.HashMap)
* @see #print_route_info()
* @see #send_Updates_To_Neighbors(routesInfo)
*/
private void received_Route_Info(DatagramPacket dataPckt) {
if (dataPckt.getPort() == 4000) {
setRouterBootInfo(getStringFromBytes(dataPckt));
} else if (validate_Is_Packet_From_Neighbor(dataPckt)) {
if (apply_Routing_Algorithm(dataPckt, create_HashMap_Routes(getStringFromBytes(dataPckt)))) {
// if their is change in routing information.
propagateChanges();
}
}
}
/**
* Validates whether the received datagram packet is from the neighbors only.
* @param datagrampckt DatagramPacket containing routing information.
* @return true if datagrampckt is from neighbors only, otherwise false.
*/
private boolean validate_Is_Packet_From_Neighbor(DatagramPacket datagrampckt) {
return hmapDirectNeighbours.containsKey(Integer.valueOf(datagrampckt.getPort()));
}
/**
* Returns the String representation of the data contained in DatagramPacket pkt.
* @param pkt DatagramPacket
* @return String representation of the data contained in pkt
*/
private String getStringFromBytes(DatagramPacket pkt) {
String strData = new String(pkt.getData());
return strData.substring(0, strData.lastIndexOf(';'));
}
/**
* Applies the distance vector algorithm using the newly received routing
* information and the information currently held by this node (router).
* @param dataPckt DatagramPacket containing routing information.
* @param newRoutes HashMap of newly received routes, with the
* destination as key and the cost to that destination as value.
* @return true if the local routing information was updated.
*/
private boolean apply_Routing_Algorithm(DatagramPacket dataPckt, HashMap<Integer, Integer> newRoutes) {
boolean updated = false;
Integer pktSourse = Integer.valueOf(dataPckt.getPort());
// Get a set of the routes
Set<Integer> set = newRoutes.keySet();
// Get an iterator
Iterator<Integer> iterator = set.iterator();
// Display elements.
while (iterator.hasNext()) {
Integer key = iterator.next();
Integer nextHopCost = hmapRoutes.get(pktSourse).getPathCost();
int optionalCost = newRoutes.get(key) + (nextHopCost == null ? 0 : nextHopCost);
if (hmapRoutes.containsKey(key)) {
RouteDetail routeDetail = hmapRoutes.get(key);
if (routeDetail.getPathCost().compareTo(optionalCost) > 0) {
routeDetail.setNextHop(pktSourse);
routeDetail.setPathCost(optionalCost);
hmapRoutes.put(key, routeDetail);
updated = true;
// try to verify above statement
}
} else {
if (!key.equals(port)) {
RouteDetail newRouteDetail = new RouteDetail(pktSourse, optionalCost);
hmapRoutes.put(key, newRouteDetail);
updated = true;
}
}
}
return updated;
}
/**
* When the internal routing information has changed, send this information to
* the other neighbors.
* @param routesInfo byte representation of the routing information.
*/
private void send_Updates_To_Neighbors(byte[] routesInfo) {
// Get a set of the routes
Set<Integer> set = hmapDirectNeighbours.keySet();
// Get an iterator
Iterator<Integer> iterator = set.iterator();
// Display elements.
while (iterator.hasNext()) {
dpackSend = new DatagramPacket(routesInfo, routesInfo.length, localAddress, iterator.next().intValue());
try {
dSoc.send(dpackSend);
} catch (IOException ex) {
System.out.println("Error while sending route updates : " + ex.getMessage());
}
}
}
/**
* Parses routeInfo to create a HashMap based on this information, in the
* format HashMap of <<Integer: Destination>, <Integer: Cost to this destination>>
* @param routeInfo contains routing information as a String in the syntax
* of {<Destination>:<Cost to destination>;}
* @return HashMap<<Integer: Destination>, <Integer: Cost to this destination>>
*/
private HashMap<Integer, Integer> create_HashMap_Routes(String routeInfo) {
HashMap<Integer, Integer> routes = new HashMap<Integer, Integer>();
String[] straRoute = routeInfo.split(";");
for (int i = 0; i < straRoute.length; i++) {
String[] straDestAndCost = straRoute[i].split(":");
routes.put(Integer.parseInt(straDestAndCost[0]), Integer.parseInt(straDestAndCost[1]));
}
return routes;
}
/**
* Converts the current routing information stored as a HashMap to its String
* representation in the format {<Destination>:<Cost to destination>;}
*
* @return String representation of the routing information.
* @see #hmapRoutes
*/
private String create_String_Of_Routes() {
StringBuilder strB = new StringBuilder();
// Get a set of the routes
Set<Integer> set = hmapRoutes.keySet();
// Get an iterator
Iterator<Integer> iterator = set.iterator();
// Display elements.
while (iterator.hasNext()) {
Integer destination = iterator.next();
strB.append(destination);
strB.append(":");
strB.append(hmapRoutes.get(destination).getPathCost());
strB.append(";");
}
return strB.toString();
}
/**
* Prints the current routing information stored in <code>hmapRoutes</code>
* to the default output stream of this program.
* @see #hmapRoutes
*/
public void print_route_info() {
RouteDetail route;
StringBuilder builder;
// PRINT THE CURRENT ROUTING INFO AT THIS NODE
System.out.println("");
System.out.println(" TABLE AT NODE WITH PORT : " + port);
System.out.println("--------------------------------------------------------------------------------");
System.out.println("\t\tTo \t|\t Via\t|\tCost\t\t");
System.out.println("--------------------------------------------------------------------------------");
// Get a set of the routes
Set<Integer> set = hmapRoutes.keySet();
// Get an iterator
Iterator<Integer> iterator = set.iterator();
// Display elements.
while (iterator.hasNext()) {
Integer key = iterator.next();
route = hmapRoutes.get(key);
builder = new StringBuilder();
builder.append("\t\t" + key.intValue());
builder.append("\t|\t" + (route.getNextHop() == null ? " -" : route.getNextHop()));
builder.append("\t|\t" + route.getPathCost() + "\t\t");
System.out.println(builder.toString());
}
}
/**
* This class provides details for each destination.
* It provides detail of cost that will be incurred to reach that
* destination and next router on that path.
*/
private class RouteDetail {
Integer nextHop;
Integer pathCost;
public RouteDetail(Integer nextHop, Integer pathCost) {
this.nextHop = nextHop;
this.pathCost = pathCost;
}
public Integer getNextHop() {
return nextHop;
}
public void setNextHop(Integer nextHop) {
this.nextHop = nextHop;
}
public Integer getPathCost() {
return pathCost;
}
public void setPathCost(Integer pathCost) {
this.pathCost = pathCost;
}
}
private void propagateChanges() {
print_route_info();
send_Updates_To_Neighbors(create_String_Of_Routes().getBytes());
}
public static void main(String[] args) {
new Router(Integer.parseInt(args[0]));
}
}
/*
* File Name : NetworkBoot.java
* Public Class Name : NetworkBoot
*
*/
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
/**
*
* NA1 project 2 spring 2009 semester
* @author sunny jain
*
*
*/
public class NetworkBoot {
public static void main(String[] args) {
try {
DatagramSocket dSoc = new DatagramSocket(4000, InetAddress.getByName("127.0.0.1"));
String[] sendD = {"4006:3;4007:5;4009:2;", "4005:3;4007:3;4008:6;", "4005:5;4006:3;", "4009:2;4006:6;", "4008:2;4005:2;"};
for (int i = 0, port = 4005; i < 5; i++) {
dSoc.send(new DatagramPacket(sendD[i].getBytes(), sendD[i].length(), InetAddress.getByName("127.0.0.1"), port++));
}
} catch (IOException ex) {
ex.printStackTrace();
}
}
}
DatagramSocket socket = new DatagramSocket(4445, InetAddress.getByName("127.0.0.1"));
while (true) {
try {
// note final ..
final byte[] buf = new byte[2000];
// receive request
DatagramPacket recvRequest = new DatagramPacket(buf, buf.length);
socket.receive(recvRequest);
//Some process of data in datagram
(new Thread(new Runnable() {
public void run () {
// do stuff with data in buf
...
}
})).start();
} catch (IOException e) {
e.printStackTrace();
}
}
I haven't done this in Java, but you can (or should) pass more than one simultaneous datagram buffer to the socket (either with several threads each invoking the synchronous receive method, or preferably with one thread invoking an asynchronous receive method more than once).
The advantage of passing multiple simultaneous datagram buffers to the socket is obvious: the socket will still have a buffer (into which to receive the next datagram) even while it has already filled one buffer (with a previous datagram) and passed that buffer back to you.
You might ask, "in what sequence will the buffers be passed back to me?" and the answer to that is, "it shouldn't matter." If the sequence in which you process datagrams is important, then the datagrams themselves should contain a sequence number (because datagrams might get out of sequence as they're routed over the network, whether or not you've passed multiple simultaneous buffers to the local socket, with the consequent possibility of "simultaneous" receives being delivered back to you out of sequence).
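In Java, the first option amounts to several threads all blocking in receive() on the same DatagramSocket, each with its own buffer (a sketch; the port matches the earlier snippet and the thread count is arbitrary, and the order in which the threads return is not defined):
final DatagramSocket socket = new DatagramSocket(4445, InetAddress.getByName("127.0.0.1"));
for (int i = 0; i < 3; i++) {
    new Thread(new Runnable() {
        public void run() {
            byte[] buf = new byte[2000];             // one receive buffer per thread
            while (true) {
                try {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);          // several threads can block here at once
                    // process the datagram here; the other threads keep receiving meanwhile
                } catch (IOException e) {
                    e.printStackTrace();
                    return;
                }
            }
        }
    }).start();
}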
It is worth remembering that UDP is a lossy transport: while minimising packet loss is a good idea, you should never assume you will get every packet (or that the packets will arrive in the order you sent them).