Fastest way to scan ports with Java

I made a very simple port scanner, but it runs too slow, so I'm looking for a way to make it scan faster. Here is my code:
public boolean portIsOpen(String ip, int port, int timeout) {
try {
Socket socket = new Socket();
socket.connect(new InetSocketAddress(ip, port), timeout);
socket.close();
return true;
} catch (Exception ex) {
return false;
}
}
This code tests whether a specific port is open on a specific IP. For the timeout I used a minimum value of 200 ms, because anything lower doesn't give it enough time to test the port.
It works well, but takes far too long to scan ports 0 to 65535. Is there another way that could scan all of them in less than 5 minutes?

If you need 200 ms for each of the 65536 ports (in the worst case, a firewall is blocking everything, making you hit your timeout for every single port), the maths is pretty simple: 65,536 × 0.2 s ≈ 13,100 seconds, or about three and a half hours.
You have 2 (non-exclusive) options to make it faster:
reduce your timeout
parallelize your code
Since the operation is I/O bound (in contrast to CPU bound -- that is, you spend time waiting for I/O, and not for some huge calculation to complete), you can use many, many threads. Try starting with 20. They would divide the three and a half hours among them, so the maximum expected time is about 10 minutes. Just remember that this will put pressure on the other side, i.e., the scanned host will see huge network activity with "unreasonable" or "strange" patterns, making the scan extremely easy to detect.
The easiest way (i.e., with minimal changes) is to use the ExecutorService and Future APIs:
public static Future<Boolean> portIsOpen(final ExecutorService es, final String ip, final int port, final int timeout) {
return es.submit(new Callable<Boolean>() {
@Override public Boolean call() {
try {
Socket socket = new Socket();
socket.connect(new InetSocketAddress(ip, port), timeout);
socket.close();
return true;
} catch (Exception ex) {
return false;
}
}
});
}
Then, you can do something like:
public static void main(final String... args) throws InterruptedException, ExecutionException {
final ExecutorService es = Executors.newFixedThreadPool(20);
final String ip = "127.0.0.1";
final int timeout = 200;
final List<Future<Boolean>> futures = new ArrayList<>();
for (int port = 1; port <= 65535; port++) {
futures.add(portIsOpen(es, ip, port, timeout));
}
es.shutdown();
int openPorts = 0;
for (final Future<Boolean> f : futures) {
if (f.get()) {
openPorts++;
}
}
System.out.println("There are " + openPorts + " open ports on host " + ip + " (probed with a timeout of " + timeout + "ms)");
}
If you need to know which ports are open (and not just how many, as in the above example), you'd need to change the return type of the function to Future<SomethingElse>, where SomethingElse would hold the port and the result of the scan, something like:
public final class ScanResult {
private final int port;
private final boolean isOpen;
// constructor
// getters
}
Then, change Boolean to ScanResult in the first snippet, and return new ScanResult(port, true) or new ScanResult(port, false) instead of just true or false.
EDIT: Actually, I just noticed: in this particular case, you don't need the ScanResult class to hold result + port, and still know which port is open. Since you add the futures to a List, which is ordered, and, later on, you process them in the same order you added them, you could have a counter that you'd increment on each iteration to know which port you are dealing with. But, hey, this is just to be complete and precise. Don't ever try doing that, it is horrible, I'm mostly ashamed that I thought about this... Using the ScanResult object is much cleaner, the code is way easier to read and maintain, and allows you to, later, for example, use a CompletionService to improve the scanner.
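For illustration, here is a minimal sketch (not from the original answer) of how a CompletionService could be used so that results are handled in completion order rather than submission order; the pool size, host and timeout are placeholder values:
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.concurrent.*;

public class CompletionServicePortScanner {
    public static void main(String[] args) throws InterruptedException {
        final ExecutorService es = Executors.newFixedThreadPool(20);
        final CompletionService<ScanResult> cs = new ExecutorCompletionService<>(es);
        final String ip = "127.0.0.1";
        final int timeout = 200;
        for (int port = 1; port <= 65535; port++) {
            final int p = port;
            cs.submit(() -> {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(ip, p), timeout);
                    return new ScanResult(p, true);
                } catch (Exception ex) {
                    return new ScanResult(p, false);
                }
            });
        }
        es.shutdown();
        // take() returns the next *completed* task, so open ports are reported as soon as they are known
        for (int i = 0; i < 65535; i++) {
            try {
                ScanResult r = cs.take().get();
                if (r.isOpen()) {
                    System.out.println("Port " + r.getPort() + " is open");
                }
            } catch (ExecutionException ignored) {
            }
        }
    }

    // same idea as the ScanResult class described above
    static final class ScanResult {
        private final int port;
        private final boolean isOpen;
        ScanResult(int port, boolean isOpen) { this.port = port; this.isOpen = isOpen; }
        int getPort() { return port; }
        boolean isOpen() { return isOpen; }
    }
}
The advantage over iterating the List of futures is that a slow (filtered) port early in the list does not delay the reporting of faster results further down.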

Apart from parallelizing the scan, you can use more advanced port scanning techniques like the ones (TCP SYN and TCP FIN scanning) explained here: http://nmap.org/nmap_doc.html. VB code of an implementation can be found here: http://h.ackack.net/spoon-worlds-fastest-port-scanner.html
In order to use these techniques, however, you need to use raw TCP/IP sockets. You can use the RockSaw library for this.

This code sample is inspired by Bruno Reis's answer.
class PortScanner {
public static void main(final String... args) throws InterruptedException, ExecutionException {
final ExecutorService es = Executors.newFixedThreadPool(20);
final String ip = "127.0.0.1";
final int timeout = 200;
final List<Future<ScanResult>> futures = new ArrayList<>();
for (int port = 1; port <= 65535; port++) {
// for (int port = 1; port <= 80; port++) {
futures.add(portIsOpen(es, ip, port, timeout));
}
es.shutdown(); // stop accepting new tasks; the Future.get() calls below wait for each result
int openPorts = 0;
for (final Future<ScanResult> f : futures) {
if (f.get().isOpen()) {
openPorts++;
System.out.println(f.get().getPort());
}
}
System.out.println("There are " + openPorts + " open ports on host " + ip + " (probed with a timeout of "
+ timeout + "ms)");
}
public static Future<ScanResult> portIsOpen(final ExecutorService es, final String ip, final int port,
final int timeout) {
return es.submit(new Callable<ScanResult>() {
@Override
public ScanResult call() {
try {
Socket socket = new Socket();
socket.connect(new InetSocketAddress(ip, port), timeout);
socket.close();
return new ScanResult(port, true);
} catch (Exception ex) {
return new ScanResult(port, false);
}
}
});
}
public static class ScanResult {
private int port;
private boolean isOpen;
public ScanResult(int port, boolean isOpen) {
super();
this.port = port;
this.isOpen = isOpen;
}
public int getPort() {
return port;
}
public void setPort(int port) {
this.port = port;
}
public boolean isOpen() {
return isOpen;
}
public void setOpen(boolean isOpen) {
this.isOpen = isOpen;
}
}
}

I wrote my own asynchronous port-scanner Java service that can scan ports via TCP SYN scan like Nmap does. It also supports ICMP ping scans and can work with a very high throughput (depending on what the network can sustain):
https://github.com/subes/invesdwin-webproxy
Internally it uses a Java binding for pcap and exposes its services via JMS/AMQP, though you can also use it directly in your application if you don't mind it requiring root permissions.

If you decide to use the Nmap option and want to continue with Java, you should look at Nmap4j on SourceForge.net.
It's a simple API that allows you to integrate Nmap into a Java app.

Nay, the fastest way here is to use a dynamically sized thread pool:
Executors.newCachedThreadPool();
This way it reuses idle threads, and when all of them are busy and a new task arrives, it opens up a new thread and performs the new task on it.
Here's my code snippet (credit due to Jack and Bruno Reis).
I also added the ability to scan any IP address you type in, for some added functionality and ease of use.
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
class PortScanner {
public static void main(final String... args) throws InterruptedException, ExecutionException
{
final ExecutorService es = Executors.newCachedThreadPool();
System.out.print("Please input the ip address you would like to scan for open ports: ");
Scanner inputScanner = new Scanner(System.in);
final String ip = inputScanner.nextLine();
final int timeout = 200;
final List<Future<ScanResult>> futures = new ArrayList<>();
for (int port = 1; port <= 65535; port++) {
// for (int port = 1; port <= 80; port++) {
futures.add(portIsOpen(es, ip, port, timeout));
}
es.shutdown(); // stop accepting new tasks; the Future.get() calls below wait for each result
int openPorts = 0;
for (final Future<ScanResult> f : futures) {
if (f.get().isOpen()) {
openPorts++;
System.out.println(f.get().getPort());
}
}
System.out.println("There are " + openPorts + " open ports on host " + ip + " (probed with a timeout of "
+ timeout + "ms)");
es.shutdown();
}
public static Future<ScanResult> portIsOpen(final ExecutorService es, final String ip, final int port,
final int timeout)
{
return es.submit(new Callable<ScanResult>() {
@Override
public ScanResult call() {
try {
Socket socket = new Socket();
socket.connect(new InetSocketAddress(ip, port), timeout);
socket.close();
return new ScanResult(port, true);
} catch (Exception ex) {
return new ScanResult(port, false);
}
}
});
}
public static class ScanResult {
private int port;
private boolean isOpen;
public ScanResult(int port, boolean isOpen) {
super();
this.port = port;
this.isOpen = isOpen;
}
public int getPort() {
return port;
}
public void setPort(int port) {
this.port = port;
}
public boolean isOpen() {
return isOpen;
}
public void setOpen(boolean isOpen) {
this.isOpen = isOpen;
}
}
}

I may be late to this, but you can do a bulk port scan single-threaded using NIO2. With the following NIO2 code and a single thread, I am able to scan a given port on all the hosts. Please use a reasonable timeout and make sure the process has a large enough file-descriptor limit.
public static List<HostTarget> getReachabilityStatus(final int port, final int bulkDevicesPingTimeoutinMS, String... hosts) throws Exception {
List<AsynchronousSocketChannel> channels = new ArrayList<>(hosts.length);
try {
List<CompletableFuture<HostTarget>> all = new ArrayList<>(hosts.length);
List<HostTarget> allHosts = new ArrayList<>(hosts.length);
for (String host : hosts) {
InetSocketAddress address = new InetSocketAddress(host, port);
HostTarget target = new HostTarget();
target.setIpAddress(host);
allHosts.add(target);
AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
channels.add(client);
final CompletableFuture<HostTarget> targetFuture = new CompletableFuture<>();
all.add(targetFuture);
client.connect(address, target, new CompletionHandler<Void, HostTarget>() {
@Override
public void completed(Void result, HostTarget attachment) {
attachment.setIsReachable(true);
targetFuture.complete(attachment);
}
@Override
public void failed(Throwable exc, HostTarget attachment) {
attachment.setIsReachable(false);
attachment.errorMessage = exc.getMessage();
targetFuture.complete(attachment);
}
});
}
try {
if(bulkDevicesPingTimeoutinMS > 0) {
CompletableFuture.allOf(all.toArray(new CompletableFuture[]{})).get(bulkDevicesPingTimeoutinMS, TimeUnit.MILLISECONDS);
}else{
// wait for all futures to complete; scanning 1000 hosts takes about 7 seconds
CompletableFuture.allOf(all.toArray(new CompletableFuture[]{})).join();
}
} catch (Exception timeoutException) {
// ignore
}
return allHosts;
}finally {
for(AsynchronousSocketChannel channel : channels){
try{
channel.close();
}catch (Exception e){
if(LOGGER.isDebugEnabled()) {
LOGGER.error("Erorr while closing socket",e);
}
}
}
}
static class HostTarget {
String ipAddress;
Boolean isReachable;
String errorMessage;
public String getIpAddress() {
return ipAddress;
}
public Boolean getIsReachable() {
return isReachable;
}
public void setIpAddress(String ipAddress) {
this.ipAddress = ipAddress;
}
public void setIsReachable(Boolean isReachable) {
this.isReachable = isReachable;
}
}
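For illustration, a call to the method above could look like this (the port, timeout and IP addresses are placeholder values):
// scan port 443 on a few hosts, waiting at most 5 seconds for the whole batch
List<HostTarget> results = getReachabilityStatus(443, 5000, "192.168.1.10", "192.168.1.11");
for (HostTarget t : results) {
    System.out.println(t.getIpAddress() + " reachable=" + t.getIsReachable());
}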

Inspired by you all, but only this code really worked for me!
class PortScanner
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
public class PortScanner {
public static void main(String[] args) throws InterruptedException, ExecutionException {
final ExecutorService es = Executors.newFixedThreadPool(20);
final String ip = "127.0.0.1";
final int timeout = 200;
final List<Future<ScanResult>> futures = new ArrayList<>();
for (int port = 1; port <= 65535; port++)
futures.add(portIsOpen(es, ip, port, timeout));
es.shutdown();
int openPorts = 0;
for (final Future<ScanResult> f : futures)
if (f.get().isOpen()) {
openPorts++;
System.out.println(f.get());
}
System.out.println("There are " + openPorts + " open ports on host " + ip + " (probed with a timeout of " + timeout + "ms)");
}
public static Future<ScanResult> portIsOpen(final ExecutorService es, final String ip, final int port, final int timeout) {
return es.submit(
new Callable<ScanResult>() {
@Override
public ScanResult call() {
try {
Socket socket = new Socket();
socket.connect(new InetSocketAddress(ip, port), timeout);
socket.close();
return new ScanResult(port, true);
} catch (Exception ex) {
return new ScanResult(port, false);
}
}
});
}
}
class ScanResult
public final class ScanResult {
private final int port;
private final boolean isOpen;
public ScanResult(int port, boolean isOpen) {
super();
this.port = port;
this.isOpen = isOpen;
}
/**
* @return the port
*/
public int getPort() {
return port;
}
/**
* @return the isOpen
*/
public boolean isOpen() {
return isOpen;
}
@Override
public String toString() {
return "ScanResult [port=" + port + ", isOpen=" + isOpen + "]";
}
}

Related

How to release a mutex lock held by Thread which never lets it go, by continuously listening to a socket

There are two classes, Client and ClientWindow. Client has DatagramSocket, InetAddress and port fields, along with methods for sending, receiving, and closing the socket. To close the socket I use an anonymous thread, "socketCLOSE".
Client class
public class Client {
private static final long serialVersionUID = 1L;
private DatagramSocket socket;
private String name, address;
private int port;
private InetAddress ip;
private Thread send;
private int ID = -1;
private boolean flag = false;
public Client(String name, String address, int port) {
this.name = name;
this.address = address;
this.port = port;
}
public String receive() {
byte[] data = new byte[1024];
DatagramPacket packet = new DatagramPacket(data, data.length);
try {
socket.receive(packet);
} catch (IOException e) {
e.printStackTrace();
}
String message = new String(packet.getData());
return message;
}
public void send(final byte[] data) {
send = new Thread("Send") {
public void run() {
DatagramPacket packet = new DatagramPacket(data, data.length, ip, port);
try {
socket.send(packet);
} catch (IOException e) {
e.printStackTrace();
}
}
};
send.start();
}
public int close() {
System.err.println("close function called");
new Thread("socketClOSE") {
public void run() {
synchronized (socket) {
socket.close();
System.err.println("is socket closed "+socket.isClosed());
}
}
}.start();
return 0;
}
The ClientWindow class is sort of a GUI which extends JFrame and implements Runnable. There are two threads inside the class: run and listen.
public class ClientWindow extends JFrame implements Runnable {
private static final long serialVersionUID = 1L;
private Thread run, listen;
private Client client;
private boolean running = false;
public ClientWindow(String name, String address, int port) {
client = new Client(name, address, port);
createWindow();
console("Attempting a connection to " + address + ":" + port + ", user: " + name);
String connection = "/c/" + name + "/e/";
client.send(connection.getBytes());
running = true;
run = new Thread(this, "Running");
run.start();
}
private void createWindow() {
{
//Jcomponents and Layouts here
}
addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
String disconnect = "/d/" + client.getID() + "/e/";
send(disconnect, false);
running = false;
client.close();
dispose();
}
});
setVisible(true);
txtMessage.requestFocusInWindow();
}
public void run() {
listen();
}
private void send(String message, boolean text) {
if (message.equals("")) return;
if (text) {
message = client.getName() + ": " + message;
message = "/m/" + message + "/e/";
txtMessage.setText("");
}
client.send(message.getBytes());
}
public void listen() {
listen = new Thread("Listen") {
public void run() {
while (running) {
String message = client.receive();
if (message.startsWith("/c/")) {
client.setID(Integer.parseInt(message.split("/c/|/e/")[1]));
console("Successfully connected to server! ID: " + client.getID());
} else if (message.startsWith("/m/")) {
String text = message.substring(3);
text = text.split("/e/")[0];
console(text);
} else if (message.startsWith("/i/")) {
String text = "/i/" + client.getID() + "/e/";
send(text, false);
} else if (message.startsWith("/u/")) {
String[] u = message.split("/u/|/n/|/e/");
users.update(Arrays.copyOfRange(u, 1, u.length - 1));
}
}
}
};
listen.start();
}
public void console(String message) {
}
}
Whenever the client is closed, client.close() is called, which spawns the socketCLOSE thread, but the thread does nothing; it enters the blocked state, as revealed by the stack trace:
Name: socketClOSE
State: BLOCKED on java.net.DatagramSocket@1de1602 owned by: Listen
Total blocked: 1 Total waited: 0
Stack trace:
app//com.server.Client$2.run(Client.java:90)
Name: Listen
State: RUNNABLE
Total blocked: 0 Total waited: 0
Stack trace:
java.base@14.0.1/java.net.DualStackPlainDatagramSocketImpl.socketReceiveOrPeekData(Native Method)
java.base@14.0.1/java.net.DualStackPlainDatagramSocketImpl.receive0(DualStackPlainDatagramSocketImpl.java:130)
locked java.net.DualStackPlainDatagramSocketImpl@3dd26cc7
java.base@14.0.1/java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:181)
locked java.net.DualStackPlainDatagramSocketImpl@3dd26cc7
java.base@14.0.1/java.net.DatagramSocket.receive(DatagramSocket.java:864)
locked java.net.DatagramPacket@6d21ecb
locked java.net.DatagramSocket@1de1602
app//com.thecherno.chernochat.Client.receive(Client.java:59)
app//com.thecherno.chernochat.ClientWindow$5.run(ClientWindow.java:183)
This doesn't let the socketCLOSE thread close the socket inside the synchronized block, as the lock on the socket is held by the Listen thread. How can I make the Listen thread release its lock? The program terminates without closing the socket, and the debugger shows the Listen thread as still runnable. Is the implementation itself flawed, or can this be solved?
I can reproduce the problem with JDK 14, but not with JDK 15 or newer.
This seems plausible, as JDK-8235674, JEP 373: Reimplement the Legacy DatagramSocket API indicates that the implementation has been rewritten for JDK 15. The report even says “The implementation also has several concurrency issues (e.g., with asynchronous close) that require an overhaul to address properly.”
However, you can get rid of the problem with JDK 14 too; just remove the synchronized. Nothing in the documentation says that synchronization was required to call close() and when I removed it, my testcase worked as intended.
When you want to coordinate the multithreaded access to the socket of your application, you should use a different lock object than the socket instance itself.
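As a minimal sketch of that last point, with illustrative field names rather than the question's exact code, guarding the socket with a dedicated lock object could look like this:
import java.net.DatagramSocket;
import java.net.SocketException;

public class Client {
    private final Object socketLock = new Object(); // dedicated lock, not the socket itself
    private DatagramSocket socket;

    public Client(int port) throws SocketException {
        socket = new DatagramSocket(port);
    }

    public void close() {
        // The receive() call in the Listen thread does not hold socketLock,
        // so this block never waits on that thread.
        synchronized (socketLock) {
            if (socket != null && !socket.isClosed()) {
                socket.close(); // close() may be called while another thread is blocked in receive()
            }
        }
    }
}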

Netty Channel fail when write and flush too many and too fast

When I wrote a producer to publish messages to my server, I saw this:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:447)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
I've searched all around and was told that it happens because the channel is closed.
But in my code, I only close my channel when my channel pool destroys the channel.
Here is my code:
public static class ChannelFactory implements PoolableObjectFactory<Channel> {
private final Bootstrap bootstrap;
private String host;
private int port;
public ChannelFactory(Bootstrap bootstrap, String host, int port) {
this.bootstrap = bootstrap;
this.host = host;
this.port = port;
}
@Override
public Channel makeObject() throws Exception {
System.out.println("Create new channel!!!");
bootstrap.validate();
return bootstrap.connect(host, port).channel();
}
@Override
public void destroyObject(Channel channel) throws Exception {
ChannelFuture close = channel.close();
if (close.isSuccess()) {
System.out.println(channel + " close successfully");
}
}
@Override
public boolean validateObject(Channel channel) {
System.out.println("Validate object");
return (channel.isOpen());
}
@Override
public void activateObject(Channel channel) throws Exception {
System.out.println(channel + " is activated");
}
@Override
public void passivateObject(Channel channel) throws Exception {
System.out.println(channel + " is passivated");
}
/**
* @return the host
*/
public String getHost() {
return host;
}
/**
* @param host the host to set
* @return
*/
public ChannelFactory setHost(String host) {
this.host = host;
return this;
}
/**
* @return the port
*/
public int getPort() {
return port;
}
/**
* @param port the port to set
* @return
*/
public ChannelFactory setPort(int port) {
this.port = port;
return this;
}
}
And here is my Runner:
public static class Runner implements Runnable {
private Channel channel;
private ButtyMessage message;
private MyChannelPool channelPool;
public Runner(MyChannelPool channelPool, Channel channel, ButtyMessage message) {
this.channel = channel;
this.message = message;
this.channelPool = channelPool;
}
@Override
public void run() {
channel.writeAndFlush(message.content()).syncUninterruptibly().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
channelPool.returnObject(future.channel());
}
});
}
}
And my main:
public static void main(String[] args) throws InterruptedException {
final String host = "127.0.0.1";
final int port = 8080;
int jobSize = 100;
int jobNumber = 10000;
final Bootstrap b = func(host, port);
final MyChannelPool channelPool = new MyChannelPool(new ChannelFactory(b, host, port));
ExecutorService threadPool = Executors.newFixedThreadPool(1);
for (int i = 0; i < jobNumber; i++) {
try {
threadPool.execute(new Runner(channelPool, channelPool.borrowObject(), new ButtyMessage()));
} catch (Exception ex) {
System.out.println("ex = " + ex.getMessage());
}
}
}
With ButtyMessage extends ByteBufHolder.
In my Runner class, if I sleep(10) after writeAndFlush it runs quite OK. But I don't want to rely on sleep, so I use a ChannelFutureListener, but the result is bad. If I send about 1,000 to 10,000 messages, it will crash and throw the exception above. Is there any way to avoid this?
Thanks all.
Sorry for my bad explanation and my English :)
You have several issues that could explain this. Most of them are related to incorrect usage of asynchronous operations and futures.
I don't know if it is linked to your issue, but if you really want to print when the channel is really closed, you have to wait on the future, since close() (like any other operation) returns a future immediately, without waiting for the real close. Therefore your test if (close.isSuccess()) will almost always be false.
public void destroyObject(final Channel channel) throws Exception {
channel.close().addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture close) {
if (close.isSuccess()) {
System.out.println(channel + " close successfully");
}
}
});
}
However, as I suppose it is only for debugging purposes, it is not mandatory.
Another one: you send back to your pool a channel that is not yet connected (which could explain why your sleep(10) helps). You have to wait on the connect().
public Channel makeObject() throws Exception {
System.out.println("Create new channel!!!");
//bootstrap.validate(); // this is implicitely called in connect()
ChannelFuture future = bootstrap.connect(host, port).awaitUninterruptibly();
if (future.isSuccess()) {
return future.channel();
} else {
// do what you need to do when the connection is not done, for example:
throw new Exception("Connection failed", future.cause());
}
}
Third one: validation of a connected channel might be better done using isActive():
@Override
public boolean validateObject(Channel channel) {
System.out.println("Validate object");
return channel.isActive(); // instead of isOpen()
}
Fourth one: in your runner, you wrongly await on the future when you should not. You can remove your syncUninterruptibly() and leave the rest as is.
@Override
public void run() {
channel.writeAndFlush(message.content()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
channelPool.returnObject(future.channel());
}
});
}
And finally, I suppose you know your test is completely sequential (1 thread in your pool), such that each client will reuse the very same channel over and over?
Could you try to change the 4 points to see if it corrects your issue?
EDIT: after the requester's comment
For syncUninterruptibly(), I did not read carefully. If you want to block on the write, then you don't need the extra addListener, since the future is done once the sync is over. So you can directly call channelPool.returnObject as the next command right after your sync.
So you could write it this way, which is simpler:
@Override
public void run() {
channel.writeAndFlush(message.content()).syncUninterruptibly();
channelPool.returnObject(channel);
}
For fireChannelActive, it will be called as soon as the connect finishes (so from makeObject, some time in the future). Moreover, once disconnected (as you noticed in your exception), the channel is no longer usable and must be recreated from scratch. So I would still suggest using isActive(), such that, if not active, the channel will be removed using destroyObject()...
Take a look at the channel state model here.
Finally, I've found a solution for myself, but I'm still thinking about another solution. (This solution is copied almost exactly from the Netty 4.0.28 release notes.)
final String host = "127.0.0.1";
final int port = 8080;
int jobNumber = 100000;
final EventLoopGroup group = new NioEventLoopGroup(100);
ChannelPoolMap<InetSocketAddress, MyChannelPool> poolMap = new AbstractChannelPoolMap<InetSocketAddress, MyChannelPool>() {
@Override
protected MyChannelPool newPool(InetSocketAddress key) {
Bootstrap bootstrap = func(group, key.getHostName(), key.getPort());
return new MyChannelPool(bootstrap, new _AbstractChannelPoolHandler());
}
};
ChannelPoolMap<InetSocketAddress, FixedChannelPool> poolMap1 = new AbstractChannelPoolMap<InetSocketAddress, FixedChannelPool>() {
@Override
protected FixedChannelPool newPool(InetSocketAddress key) {
Bootstrap bootstrap = func(group, key.getHostName(), key.getPort());
return new FixedChannelPool(bootstrap, new _AbstractChannelPoolHandler(), 10);
}
};
final ChannelPool myChannelPool = poolMap.get(new InetSocketAddress(host, port));
final CountDownLatch latch = new CountDownLatch(jobNumber);
for (int i = 0; i < jobNumber; i++) {
final int counter = i;
final Future<Channel> future = myChannelPool.acquire();
future.addListener(new FutureListener<Channel>() {
@Override
public void operationComplete(Future<Channel> f) {
if (f.isSuccess()) {
Channel ch = f.getNow();
// Do somethings
ch.writeAndFlush(new ButtyMessage().content()).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (future.isSuccess()) {
System.out.println("counter = " + counter);
System.out.println("future = " + future.channel());
latch.countDown();
}
}
});
// Release back to pool
myChannelPool.release(ch);
} else {
System.out.println(f.cause().getMessage());
f.cause().printStackTrace();
}
}
});
}
try {
latch.await();
System.exit(0);
} catch (InterruptedException ex) {
System.out.println("ex = " + ex.getMessage());
}
As you can see, I use SimpleChannelPool and FixedChannelPool (an implementation of SimpleChannelPool provided by Netty).
What they do:
SimpleChannelPool: opens as many channels as it needs, so if you have 100,000 messages it causes errors, of course: many sockets are opened, and then an IOException: Too many open files occurs. (Is that really a pool? Creating as many as possible and then throwing an exception? I wouldn't call that pooling.)
FixedChannelPool: does not work in my case. (Still studying why; sorry for my stupidity.)
Indeed, I want to use an ObjectPool instead, and I may post it as soon as I finish. Thanks @Frederic Brégier for helping me so much!

Can't properly keep track of connected clients java

I can't seem to figure out how to notify my Server class that a connection was lost. My server code is:
public class Server {
static int port = 4444;
static boolean listening = true;
static ArrayList<Thread> Clients = new ArrayList<Thread>();
static MatchMaker arena;
public static void main(String[] args) {
Initialize();
Thread startConnections = new Thread(run());
startConnections.start();
}
private static Runnable run(){
System.out.println("(" + new SimpleDateFormat("HH:mm:ss").format(new Date()) + ") Started listening on port: " + port);
try(ServerSocket socket = new ServerSocket(port)){
while(listening){
if(Clients.size() <= 4){
Socket clientSocket = socket.accept();
MultiThread connection = new MultiThread(clientSocket, arena);
Clients.add(connection);
System.out.println("Client connected from " + clientSocket.getRemoteSocketAddress() + " Assigned ID: " + connection.getId());
System.out.println("Currently connected clients(" + Clients.size() + "): ");
for(int i = 0; i < Clients.size(); i++)
System.out.println(" - " + Clients.get(i).getId());
connection.start();
}
}
}
catch(Exception e){
e.printStackTrace();
}
return null;
}
private static void Initialize(){
arena = new MatchMaker();
}
}
The problem here is that since this class keeps track of the connected clients I want it to notice when a client has lost connection. The MultiThread class already has a functional way of detecting clients that lost connection, however I don't know how to pass that information back to the Server class. I've tried passing the server class to MultiThread as a parameter, but it said I couldn't use 'this' in a static manner.
You can keep them in a synchronized map, for example:
Map<Integer, ClientObject> connectedClients = Collections.synchronizedMap(new HashMap<Integer, ClientObject>()); // the Integer key will be the client id
Other suggestion:
Map<String, ClientObject> connectedClients = Collections.synchronizedMap(new HashMap<String, ClientObject>()); // the String key will be the client IP & user name (you decide)
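As a minimal sketch of that idea (the registry class, its method names and the use of ConcurrentHashMap are illustrative, not taken from the question's code):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ClientRegistry {
    // maps client id to its handler thread; safe to update from many threads
    private final Map<Long, Thread> connectedClients = new ConcurrentHashMap<>();

    public void register(long clientId, Thread handler) {
        connectedClients.put(clientId, handler);
    }

    // called by the handler thread when it detects the connection was lost
    public void onClientDisconnected(long clientId) {
        connectedClients.remove(clientId);
        System.out.println("Client " + clientId + " disconnected; "
                + connectedClients.size() + " client(s) still connected");
    }

    public int connectedCount() {
        return connectedClients.size();
    }
}
Each MultiThread would hold a reference to this registry rather than to the Server class, and call onClientDisconnected(getId()) when it detects that its connection was lost.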
First of all, use a thread-safe collection for tracking the client connections, so replace the following
ArrayList<Thread> Clients = new ArrayList<Thread>();
with
ConcurrentLinkedQueue<Thread> Clients = new ConcurrentLinkedQueue<Thread>();
But your real problem is that you are trying to use a limited resource in a thread-safe manner, so the best option would be a Semaphore. I have refactored your class a bit to give you the idea. Hope it helps. Please look closely at SERVER_INSTANCE.
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;
public class Server {
private static final int MAX_AVAILABLE = 4;
public static final Semaphore SERVER_INSTANCE = new Semaphore(MAX_AVAILABLE, true);
static int port = 4444;
static volatile boolean listening = true;
static MatchMaker arena;
public static void main(String[] args) {
Initialize();
Thread startConnections = new Thread(run());
startConnections.start();
}
private static void Initialize() {
//do somthing
}
private static Runnable run(){
System.out.println("(" + new SimpleDateFormat("HH:mm:ss").format(new Date()) + ") Started listening on port: " + port);
ServerSocket socket = null;
try {
socket = new ServerSocket(port); // create the listening socket once, not on every loop iteration
} catch (IOException e1) {
e1.printStackTrace();
return null;
}
while (listening) {
try {
SERVER_INSTANCE.acquire(); // blocks until one of the MAX_AVAILABLE client slots is free
Socket clientSocket = socket.accept();
MultiThread connection = new MultiThread(clientSocket, arena, SERVER_INSTANCE);
new Thread(connection).start(); // the client thread releases the permit when it is done
} catch (InterruptedException | IOException e) {
e.printStackTrace();
}
}
return null;
}
static class MultiThread implements Runnable{
Semaphore serverInstance;
public MultiThread(Socket clientSocket, MatchMaker arena, Semaphore serverInstance) {
this.serverInstance = serverInstance;
}
@Override
public void run() {
try {
// the permit for this client was acquired in the accept loop; do your work here
} finally {
serverInstance.release(); // free the slot so another client can be accepted
}
}
}
class MatchMaker {
}
}

How to check if JZMQ socket is connected

Is there a way to check if a JZMQ (Java binding of ZMQ) socket is connected?
ZContext zmqContext = new ZContext();
ZMQ.Socket workerSocket = zmqContext.createSocket(ZMQ.DEALER);
workerSocket.setIdentity("ID".getBytes());
workerSocket.connect("tcp://localhost:5556");
After the code above I would like to check whether workerSocket is connected. It would be nice to be able to check the connection status.
No, there's no method in the API to check if a socket is connected.
ZeroMq abstracts the network; client and server connections are completely transparent to the peer making the connection. A client or server may send messages to non-existent peers; no errors will be generated; instead, they'll queue up in socket buffers based on HWM config.
To check for peer availability, do it manually using a synchronous request/reply heartbeat with a timeout factor; here's an example, hope it helps!
Check out samples for request/reply here!
https://github.com/imatix/zguide/tree/master/examples/
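As a minimal sketch of such a heartbeat, assuming the org.zeromq (JeroMQ-style) API, a REQ socket on this side and a REP peer that answers every ping; the endpoint, payload and timeout are placeholder values:
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class HeartbeatCheck {
    // returns true if the peer answered the ping within timeoutMs
    static boolean peerIsAlive(ZContext ctx, String endpoint, int timeoutMs) {
        ZMQ.Socket req = ctx.createSocket(SocketType.REQ);
        try {
            req.setReceiveTimeOut(timeoutMs); // recv() returns null instead of blocking forever
            req.setLinger(0);                 // drop the pending ping on close
            req.connect(endpoint);
            req.send("PING");
            byte[] reply = req.recv();        // null on timeout
            return reply != null;
        } finally {
            req.close();
        }
    }

    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            System.out.println("connected: " + peerIsAlive(ctx, "tcp://localhost:5556", 1000));
        }
    }
}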
I think I found a trick that works for me to check if a socket is connected. The best solution on the client side is to create a socket poller and poll on the subscriber socket until a message is received. This avoids wasteful sleeps and makes for generally tighter code.
Here is the code that does the work:
private void blockUntilConnected() {
ZMQ.Poller poller = ctx.createPoller(1);
poller.register(this.subscriber, ZMQ.Poller.POLLIN);
int rc = -1;
while (rc == -1) {
rc = poller.poll(1000);
}
poller.pollin(0);
}
I will also supply the full source code:
Server Part:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;
import java.net.InetSocketAddress;
import java.net.Socket;
import static io.Adrestus.config.ConsensusConfiguration.*;
public class ConsensusServer {
private static Logger LOG = LoggerFactory.getLogger(ConsensusServer.class);
private final ZContext ctx;
private final String IP;
private final ZMQ.Socket publisher;
private final ZMQ.Socket collector;
public ConsensusServer(String IP) {
this.IP = IP;
this.ctx = new ZContext();
this.publisher = ctx.createSocket(SocketType.PUB);
this.publisher.setHeartbeatIvl(2);
this.collector = ctx.createSocket(SocketType.PULL);
this.publisher.bind("tcp://" + IP + ":" + PUBLISHER_PORT);
this.collector.bind("tcp://" + IP + ":" + COLLECTOR_PORT);
this.collector.setReceiveTimeOut(CONSENSUS_TIMEOUT);
this.publisher.setSendTimeOut(CONSENSUS_TIMEOUT);
}
public ConsensusServer() {
this.IP = findIP();
this.ctx = new ZContext();
this.publisher = ctx.createSocket(SocketType.PUB);
this.collector = ctx.createSocket(SocketType.PULL);
this.publisher.bind("tcp://" + IP + ":" + PUBLISHER_PORT);
this.collector.bind("tcp://" + IP + ":" + COLLECTOR_PORT);
this.publisher.setSendTimeOut(CONSENSUS_TIMEOUT);
this.collector.setReceiveTimeOut(CONSENSUS_TIMEOUT);
}
private String findIP() {
try {
Socket socket = new Socket();
socket.connect(new InetSocketAddress("google.com", 80));
return socket.getLocalAddress().getHostAddress();
} catch (Exception e) {
e.printStackTrace();
}
throw new IllegalArgumentException("Make sure your internet connection is working");
}
public void publishMessage(byte[] data) {
publisher.send(data, 0);
}
public byte[] receiveData() {
byte[] data = null;
try {
data = collector.recv();
} catch (Exception e) {
LOG.info("Socket Closed");
}
return data;
}
public static Logger getLOG() {
return LOG;
}
public static void setLOG(Logger LOG) {
ConsensusServer.LOG = LOG;
}
public ZContext getCtx() {
return ctx;
}
public String getIP() {
return IP;
}
public ZMQ.Socket getPublisher() {
return publisher;
}
public ZMQ.Socket getCollector() {
return collector;
}
public void close() {
this.publisher.close();
this.collector.close();
this.ctx.close();
}
}
Client Part:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;
import static io.Adrestus.config.ConsensusConfiguration.*;
public class ConsensusClient {
private static Logger LOG = LoggerFactory.getLogger(ConsensusClient.class);
private final String IP;
private ZContext ctx;
private final ZMQ.Socket subscriber;
private final ZMQ.Socket push;
public ConsensusClient(String IP) {
this.ctx = new ZContext();
this.IP = IP;
this.subscriber = ctx.createSocket(SocketType.SUB);
this.push = ctx.createSocket(SocketType.PUSH);
this.subscriber.connect("tcp://" + IP + ":" + SUBSCRIBER_PORT);
this.subscriber.subscribe(ZMQ.SUBSCRIPTION_ALL);
this.subscriber.setReceiveTimeOut(CONSENSUS_TIMEOUT);
blockUntilConnected();
this.push.connect("tcp://" + IP + ":" + COLLECTOR_PORT);
}
private void blockUntilConnected() {
ZMQ.Poller poller = ctx.createPoller(1);
poller.register(this.subscriber, ZMQ.Poller.POLLIN);
int rc = -1;
while (rc == -1) {
rc = poller.poll(1000);
}
poller.pollin(0);
}
public void pushMessage(byte[] data) {
push.send(data);
}
public byte[] receiveData() {
byte[] data = subscriber.recv();
return data;
}
public void close() {
this.subscriber.close();
this.push.close();
this.ctx.close();
}
}
Main part (notice that the client is initialized first and blocks until a server is started and connected; you can simply add a timeout if you don't want it to hang forever):
import java.nio.charset.StandardCharsets;
public class CustomTest {
public static void main(String[] args) throws InterruptedException {
//client already started and block until server is connected
(new Thread() {
public void run() {
ConsensusClient Client = new ConsensusClient("localhost");
while (true) {
byte[] res = Client.receiveData();
System.out.println(new String(res));
}
}
}).start();
Thread.sleep(3000);
//server started
ConsensusServer Server = new ConsensusServer("localhost");
Thread.sleep(100);
Server.publishMessage("Message".getBytes(StandardCharsets.UTF_8));
Server.publishMessage("Message".getBytes(StandardCharsets.UTF_8));
Server.publishMessage("Message".getBytes(StandardCharsets.UTF_8));
Thread.sleep(10000);
Server.close();
}
}

Connection refusal on Java7 async NIO2 server

I have written an async socket server using Java 7 NIO2.
Here is a snippet of the server.
public class AsyncJava7Server implements Runnable, CounterProtocol, CounterServer{
private int port = 0;
private AsynchronousChannelGroup group;
public AsyncJava7Server(int port) throws IOException, InterruptedException, ExecutionException {
this.port = port;
}
public void run() {
try {
String localhostname = java.net.InetAddress.getLocalHost().getHostName();
group = AsynchronousChannelGroup.withThreadPool(
Executors.newCachedThreadPool(new NamedThreadFactory("Channel_Group_Thread")));
// open a server channel and bind to a free address, then accept a connection
final AsynchronousServerSocketChannel asyncServerSocketChannel =
AsynchronousServerSocketChannel.open(group).bind(
new InetSocketAddress(localhostname, port));
asyncServerSocketChannel.accept(null,
new CompletionHandler <AsynchronousSocketChannel, Object>() {
@Override
public void completed(final AsynchronousSocketChannel asyncSocketChannel,
Object attachment) {
// Invoke simple handle accept code - only takes about 10 milliseconds.
handleAccept(asyncSocketChannel);
asyncServerSocketChannel.accept(null, this);
}
@Override
public void failed(Throwable exc, Object attachment) {
System.out.println("***********" + exc + " statement=" + attachment);
}
});
And here is a snippet of the client code which tries to connect...
public class AsyncJava7Client implements CounterProtocol, CounterClientBridge {
AsynchronousSocketChannel asyncSocketChannel;
private String serverName= null;
private int port;
private String clientName;
public AsyncJava7Client(String clientName, String serverName, int port) throws IOException {
this.clientName = clientName;
this.serverName = serverName;
this.port = port;
}
private void connectToServer() {
Future<Void> connectFuture = null;
try {
log("Opening client async channel...");
asyncSocketChannel = AsynchronousSocketChannel.open();
// Connecting to server
connectFuture = asyncSocketChannel.connect(new InetSocketAddress("Alex-PC", 9999));
} catch (Exception ex) {
ex.printStackTrace();
throw new RuntimeException(ex);
}
// open a new socket channel and connect to the server
long beginTime = 0;
try {
// You have 15 seconds to connect. This will throw an exception if the server is not there.
beginTime = System.currentTimeMillis();
Void connectVoid = connectFuture.get(15, TimeUnit.SECONDS);
} catch (Exception ex) {
//EXCEPTIONS THROWN HERE AFTER ABOUT 150 CLIENTS
long endTime = System.currentTimeMillis();
long timeTaken = endTime - beginTime;
log("************* TIME TAKEN=" + timeTaken);
ex.printStackTrace();
throw new RuntimeException(ex);
}
}
I have a test which fires off clients.
#Test
public void testManyClientsAtSametime() throws Exception {
int clientsize = 150;
ScheduledThreadPoolExecutor executor =
(ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(clientsize + 1,
new NamedThreadFactory("Test_Thread"));
AsyncJava7Server asyncJava7Server = startServer();
List<AsyncJava7Client> clients = new ArrayList<AsyncJava7Client>();
List<Future<String>> results = new ArrayList<Future<String>>();
for (int i = 0; i < clientsize; i++) {
// Now start a client
final AsyncJava7Client client =
new AsyncJava7Client("client" + i, InetAddress.getLocalHost().getHostName(), 9999);
clients.add(client);
}
long beginTime = System.currentTimeMillis();
Random random = new Random();
for (final AsyncJava7Client client: clients) {
Callable<String> callable = new Callable<String>() {
public String call() {
...
... invoke APIs to connect client to server
...
return counterValue;
}
};
long delay = random.nextInt(10000); // somewhere between 0 and 10 seconds.
Future<String> startClientFuture = executor.schedule(callable, delay, TimeUnit.MILLISECONDS);
results.add(startClientFuture);
}
It works great for about 100 clients. At about 140+ I get a load of exceptions in the client when it tries to connect. The exception is: java.util.concurrent.ExecutionException: java.io.IOException: The remote computer refused the network connection.
My test is on a single laptop running Windows 7. When it bombs out I check the TCP connections and there are about 500-600 connections, which is OK, as I have similar JDK 1.0 java.net socket programs that can handle 4,000 TCP connections.
There are no exceptions or anything dodgy looking in the server.
So I am at a loss as to what could be wrong here. Any ideas?
Try using the form of bind that accepts a backlog limit and set that to a higher number. For example:
final AsynchronousServerSocketChannel asyncServerSocketChannel =
AsynchronousServerSocketChannel.open(group).bind(
new InetSocketAddress(localhostname, port), 1000);
I don't know what the Windows 7 implementation's default backlog limit is, but it can be a cause of refused connections.
