I am currently developing an application that requires random access to many (60k-100k) relatively large files.
Since opening and closing streams is a rather costly operation, I'd prefer to keep the FileChannels for the largest files open until they are no longer needed.
The problem is that this kind of behaviour is not covered by Java 7's try-with-resources statement, so I'm required to close all the FileChannels manually.
But that is becoming increasingly complicated, since the same files could be accessed concurrently throughout the software.
I have implemented a ChannelPool class that keeps track of opened FileChannel instances for each registered Path. At certain intervals, the ChannelPool can then be told to close those channels whose Path is only weakly referenced by the pool itself.
I would prefer an event-listener approach, but I'd also rather not have to listen to the GC.
The FileChannelPool from Apache Commons doesn't address my problem, because channels still need to be closed manually.
Is there a more elegant solution to this problem? And if not, how can my implementation be improved?
import java.io.IOException;
import java.lang.ref.WeakReference;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;
public class ChannelPool {
private static final ChannelPool defaultInstance = new ChannelPool();
private final ConcurrentHashMap<String, ChannelRef> channels;
private final Timer timer;
private ChannelPool(){
channels = new ConcurrentHashMap<>();
timer = new Timer();
}
public static ChannelPool getDefault(){
return defaultInstance;
}
public void initCleanUp(){
// wait 2 seconds then repeat clean-up every 10 seconds.
timer.schedule(new CleanUpTask(this), 2000, 10000);
}
public void shutDown(){
// must be called manually.
timer.cancel();
closeAll();
}
public FileChannel getChannel(Path path){
ChannelRef cref = channels.get(path.toString());
System.out.println("getChannel called " + channels.size());
if (cref == null){
cref = ChannelRef.newInstance(path);
if (cref == null){
// failed to open channel
return null;
}
ChannelRef oldRef = channels.putIfAbsent(path.toString(), cref);
if (oldRef != null){
try{
// close new channel and let GC dispose of it
cref.channel().close();
System.out.println("redundant channel closed");
}
catch (IOException ex) {}
cref = oldRef;
}
}
return cref.channel();
}
private void remove(String str) {
ChannelRef ref = channels.remove(str);
if (ref != null){
try {
ref.channel().close();
System.out.println("old channel closed");
}
catch (IOException ex) {}
}
}
private void closeAll() {
for (Map.Entry<String, ChannelRef> e : channels.entrySet()){
remove(e.getKey());
}
}
private void maintain() {
// close channels for dereferenced paths
for (Map.Entry<String, ChannelRef> e : channels.entrySet()){
ChannelRef ref = e.getValue();
if (ref != null){
Path p = ref.pathRef().get();
if (p == null){
// gc'd
remove(e.getKey());
}
}
}
}
private static class ChannelRef{
private FileChannel channel;
private WeakReference<Path> ref;
private ChannelRef(FileChannel channel, WeakReference<Path> ref) {
this.channel = channel;
this.ref = ref;
}
private static ChannelRef newInstance(Path path) {
FileChannel fc;
try {
fc = FileChannel.open(path, StandardOpenOption.READ);
}
catch (IOException ex) {
return null;
}
return new ChannelRef(fc, new WeakReference<>(path));
}
private FileChannel channel() {
return channel;
}
private WeakReference<Path> pathRef() {
return ref;
}
}
private static class CleanUpTask extends TimerTask {
private ChannelPool pool;
private CleanUpTask(ChannelPool pool){
super();
this.pool = pool;
}
@Override
public void run() {
pool.maintain();
pool.printState();
}
}
private void printState(){
System.out.println("Clean up performed. " + channels.size() + " channels remain. -- " + System.currentTimeMillis());
for (Map.Entry<String, ChannelRef> e : channels.entrySet()){
ChannelRef cref = e.getValue();
String out = "open: " + cref.channel().isOpen() + " - " + cref.channel().toString();
System.out.println(out);
}
}
}
EDIT:
Thanks to fge's answer, I now have exactly what I needed. Thanks!
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutionException;
public class Channels {
private static final LoadingCache<Path, FileChannel> channelCache =
CacheBuilder.newBuilder()
.weakKeys()
.removalListener(
new RemovalListener<Path, FileChannel>(){
@Override
public void onRemoval(RemovalNotification<Path, FileChannel> removal) {
FileChannel fc = removal.getValue();
try {
fc.close();
}
catch (IOException ex) {}
}
}
)
.build(
new CacheLoader<Path, FileChannel>() {
@Override
public FileChannel load(Path path) throws IOException {
return FileChannel.open(path, StandardOpenOption.READ);
}
}
);
public static FileChannel get(Path path){
try {
return channelCache.get(path);
}
catch (ExecutionException ex){}
return null;
}
}
Have a look here:
http://code.google.com/p/guava-libraries/wiki/CachesExplained
You can use a LoadingCache with a removal listener which would close the channel for you when it expires, and you can specify expiry after access or write.
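For instance, adding an expiry setting lets the cache close idle channels on its own, instead of relying on weakly referenced keys. A minimal sketch (the 10-minute timeout is an arbitrary assumption, and note that Guava performs this cleanup during subsequent cache accesses rather than on a background timer by default):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class ExpiringChannels {
    static final LoadingCache<Path, FileChannel> cache = CacheBuilder.newBuilder()
            // evict channels that have not been accessed for 10 minutes
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .removalListener(new RemovalListener<Path, FileChannel>() {
                @Override
                public void onRemoval(RemovalNotification<Path, FileChannel> removal) {
                    try {
                        removal.getValue().close(); // close the channel on eviction
                    } catch (IOException ignored) {}
                }
            })
            .build(new CacheLoader<Path, FileChannel>() {
                @Override
                public FileChannel load(Path path) throws IOException {
                    return FileChannel.open(path, StandardOpenOption.READ);
                }
            });
}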
Related
Is there any nice way to print the progress in a Kafka Streams app? I feel that my app is falling behind, and I want a nice way to show the progress of processing the events in my app.
Out of the box, not within the Streams API.
You're more than welcome to import methods that ConsumerGroupCommand.scala uses to get the group lag and calculate / print from there.
Or you can externally install a tool like Burrow or Remora which have REST APIs for accessing lag information
I wrote the following class to help me print the lag/progress easily:
package util;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListConsumerGroupOffsetsResult;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;
import java.util.stream.Collectors;
@Slf4j
public class LagLogger implements AutoCloseable {
private ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
private String topic;
private String consumerGroupName;
private int logDelayInMilliSeconds;
private Properties kafkaStreamsProperties;
private boolean closed;
private AdminClient adminClient;
public LagLogger(String topic, String consumerGroupName, Properties kafkaStreamProperties, int logDelayInMilliSeconds) {
this.topic = topic;
this.kafkaStreamsProperties = kafkaStreamProperties;
this.logDelayInMilliSeconds = logDelayInMilliSeconds;
this.consumerGroupName = consumerGroupName;
adminClient = AdminClient.create(LagLogger.this.kafkaStreamsProperties);
}
public class LagVisualizerTask implements AutoCloseable, Runnable {
public LagVisualizerTask() {
}
public void run() {
ListConsumerGroupOffsetsResult listConsumerGroupOffsetsResult = adminClient.listConsumerGroupOffsets(LagLogger.this.consumerGroupName);
// Current offsets.
Map<TopicPartition, OffsetAndMetadata> topicPartitionOffsetAndMetadataMap = null;
try {
topicPartitionOffsetAndMetadataMap = listConsumerGroupOffsetsResult.partitionsToOffsetAndMetadata().get();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
// all topic partitions.
Set<TopicPartition> topicPartitions = topicPartitionOffsetAndMetadataMap.keySet();
// list of end offsets for each partitions.
ListOffsetsResult listOffsetsResult = adminClient.listOffsets(topicPartitions.stream()
.collect(Collectors.toMap(Function.identity(), tp -> OffsetSpec.latest())));
StringBuilder stringBuilder = new StringBuilder();
stringBuilder.append(topic+": ");
for (var entry : topicPartitionOffsetAndMetadataMap.entrySet()) {
if (entry.getKey().topic().equals(LagLogger.this.topic)) {
long current_offset = entry.getValue().offset();
long end_offset = 0;
try {
end_offset = listOffsetsResult.partitionResult(entry.getKey()).get().offset();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
stringBuilder.append(current_offset);
stringBuilder.append(" --> ");
stringBuilder.append(end_offset);
stringBuilder.append(" ("+String.format("%.2f", ((double)current_offset/end_offset)*100) +"%)");
stringBuilder.append(" / ");
}
}
log.info(stringBuilder.toString());
}
public void close() {
closed = true;
}
}
public LagVisualizerTask startNewLagVisualizerTask() {
LagVisualizerTask lagVisualizerTask = new LagVisualizerTask();
scheduledExecutorService.scheduleWithFixedDelay(lagVisualizerTask,0, LagLogger.this.logDelayInMilliSeconds, TimeUnit.MILLISECONDS);
return lagVisualizerTask;
}
public void close() {
if (scheduledExecutorService != null) {
scheduledExecutorService.shutdownNow();
scheduledExecutorService = null;
}
}
}
Which can be used as follows:
LagLogger lagVisualizer = new LagLogger(INPUT_TOPIC_NAME, APPLICATION_ID, configuration.getKafkaStreamsProperties(), DELY_BETWEEN_LOGS);
lagVisualizer.startNewLagVisualizerTask();
At first I created an empty file, and then I invoked some threads to search the database, get the result content, and append it to the file. The result content is of String type and may be 20 MB. Each thread should write to the file one at a time. I have tested many times and found that locking does not seem to be necessary. Is that right? The example writes 1000 lines in total. When do I need to add a write lock to operate on the file?
String currentName = "test.txt";
final String LINE_SEPARATOR = System.getProperty("line.separator");
ThreadPoolExecutor pool = new ThreadPoolExecutor(
10, 100, 10, TimeUnit.SECONDS, new LinkedBlockingDeque<Runnable>());
for (int i = 0; i < 500; i++) {
pool.execute(() -> {
try {
appendFileByFilesWrite(currentName, "abc" +
ThreadLocalRandom.current().nextInt(1000) + LINE_SEPARATOR);
} catch (IOException e) {
e.printStackTrace();
}
});
}
IntStream.range(0, 500).<Runnable>mapToObj(a -> () -> {
try {
appendFileByFilesWrite( currentName,
"def" + ThreadLocalRandom.current().nextInt(1000) +
LINE_SEPARATOR);
} catch (IOException e) {
e.printStackTrace();
}
}).forEach(pool::execute);
pool.shutdown();
Here is the method:
public static void appendFileByFilesWrite(String fileName,String fileContent) throws IOException {
Files.write(Paths.get(fileName), fileContent.getBytes(),StandardOpenOption.APPEND);
}
The answer is: always.
Your test works for you. Right now. Today. Maybe during a full moon, it won't. Maybe if you buy a new computer, or your OS vendor updates, or the JDK updates, or you're playing a Britney Spears song in your Winamp, it won't.
The spec says that it is legitimate for the write to be smeared out over multiple steps, and the behaviour of StandardOpenOption.APPEND is undefined at that point. Possibly if you write 'Hello' and 'World' simultaneously, the file may end up containing 'HelWorllod'. It probably won't. But it could.
Generally, bugs in concurrency are very hard (sometimes literally impossible) to test for. Doesn't make it any less of a bug; mostly you end up with a ton of bug reports, and you answering 'cannot reproduce' on all of them. This is not a good place to be.
Most likely if you want to observe the problem in action, you should write extremely long strings in your writer; the aim is to end up with the actual low-level disk command involving multiple separated out blocks. Even then there is no guarantee that you'll observe a problem. And yet, absence of proof is not proof of absence.
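As a rough illustration, a stress test along these lines (file name, thread count, and sizes are arbitrary assumptions) maximizes the chance of seeing a torn write, though even then none may ever appear:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AppendStressTest {
    public static void main(String[] args) throws Exception {
        Path target = Paths.get("stress.txt");
        Files.deleteIfExists(target);
        Files.createFile(target);
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            // each thread appends very long single-character lines, so any torn
            // write shows up as a line containing a mix of different characters
            char marker = (char) ('A' + i);
            char[] chunk = new char[1 << 20];
            Arrays.fill(chunk, marker);
            String line = new String(chunk) + System.lineSeparator();
            pool.execute(() -> {
                for (int n = 0; n < 50; n++) {
                    try {
                        Files.write(target, line.getBytes(), StandardOpenOption.APPEND);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}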
I use this class when I need to lock a file. It allows for read write locks across multiple JVMs and multiple threads.
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.RandomAccessFile;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.util.Date;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class FileLocks {
private static final String WRITE_MODE = "rws";
private static final String READ_MODE = "r";
private static final Map<String, LockContext> JVM_LOCK_MAP = new ConcurrentHashMap<>();
private FileLocks() {
}
public static <X> X read(File file, ReadAccessor<X> accessor) throws IOException {
return access(file, false, fc -> {
try (var is = Channels.newInputStream(fc);) {
return accessor.read(fc, is);
}
});
}
public static void write(File file, WriterAccessor accessor) throws IOException {
access(file, true, fc -> {
try (var os = Channels.newOutputStream(fc);) {
accessor.write(fc, os);
}
return null;
});
}
public static <X> X access(File file, boolean write, FileChannelAccessor<X> accessor)
throws FileNotFoundException, IOException {
Objects.requireNonNull(file);
Objects.requireNonNull(accessor);
String path = file.getAbsolutePath();
var lockContext = JVM_LOCK_MAP.compute(path, (k, v) -> {
if (v == null)
v = new LockContext();
v.incrementAndGetThreadCount();
return v;
});
var jvmLock = write ? lockContext.getAndLockWrite() : lockContext.getAndLockRead();
try (var randomAccessFile = new RandomAccessFile(file, write ? WRITE_MODE : READ_MODE);
var fileChannel = randomAccessFile.getChannel();) {
var fileLock = write ? fileChannel.lock() : null;
try {
return accessor.access(fileChannel);
} finally {
if (fileLock != null && fileLock.isValid())
fileLock.close();
}
} finally {
jvmLock.unlock();
JVM_LOCK_MAP.compute(path, (k, v) -> {
if (v == null)
return null;
var threadCount = v.decrementAndGetThreadCount();
if (threadCount <= 0)
return null;
return v;
});
}
}
public static interface FileChannelAccessor<X> {
X access(FileChannel fileChannel) throws IOException;
}
public static interface ReadAccessor<X> {
X read(FileChannel fileChannel, InputStream inputStream) throws IOException;
}
public static interface WriterAccessor {
void write(FileChannel fileChannel, OutputStream outputStream) throws IOException;
}
private static class LockContext {
private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
private long threadCount = 0;
public long incrementAndGetThreadCount() {
threadCount++;
return threadCount;
}
public long decrementAndGetThreadCount() {
threadCount--;
return threadCount;
}
public Lock getAndLockWrite() {
var lock = rwLock.writeLock();
lock.lock();
return lock;
}
public Lock getAndLockRead() {
var lock = rwLock.readLock();
lock.lock();
return lock;
}
}
}
You can then use it for writing like so:
File file = new File("test/lock-test.txt");
FileLocks.write(file, (fileChannel, outputStream) -> {
try (var bw = new BufferedWriter(new OutputStreamWriter(outputStream));) {
bw.append("cool beans " + new Date().getTime());
}
});
And reading:
File file = new File("test/lock-test.txt")
var lines = FileLocks.read(file, (fileChannel, inputStream) -> {
try (var br = new BufferedReader(new InputStreamReader(inputStream));) {
return br.lines().collect(Collectors.toList());
}
});
You can use a FileLock, or just add synchronized to the method.
// assumes fc is an open FileChannel for the target file
FileLock lock = null;
while (true) {
    try {
        lock = fc.lock();
        break;
    } catch (OverlappingFileLockException e) {
        Thread.sleep(1000); // the lock is held elsewhere in this JVM; wait and retry
    }
}
try {
    appendFileByFilesWrite(fileName, fileContent);
} finally {
    lock.release();
}
Or just change the method like this:
public synchronized static void appendFileByFilesWrite(String fileName,String fileContent) throws IOException {
Files.write(Paths.get(fileName), fileContent.getBytes(),StandardOpenOption.APPEND);
}
When I tested a simple producer/consumer example, I got a very strange result, as shown below.
If I used main() to test the following code, I'll get the correct and expected result.
But under JUnit I can only get the first directory correctly; the remaining work gets dropped.
What is the exact reason?
Working code:
import java.io.File;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.junit.Test;
public class TestProducerAndConsumer {
public static void main(String[] args) {
BlockingQueue<File> queue = new LinkedBlockingQueue<File>(1000);
new Thread(new FileCrawler(queue, new File("C:\\"))).start();
new Thread(new Indexer(queue)).start();
}
}
Bad Code:
import java.io.File;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.junit.Test;
public class TestProducerAndConsumer {
@Test
public void start2() {
BlockingQueue<File> queue = new LinkedBlockingQueue<File>(1000);
new Thread(new FileCrawler(queue, new File("C:\\"))).start();
new Thread(new Indexer(queue)).start();
}
}
Other function code:
import java.io.File;
import java.util.Arrays;
import java.util.concurrent.BlockingQueue;
public class FileCrawler implements Runnable {
private final BlockingQueue<File> fileQueue;
private final File root;
private int i = 0;
public FileCrawler(BlockingQueue<File> fileQueue, File root) {
this.fileQueue = fileQueue;
this.root = root;
}
@Override
public void run() {
try {
craw(root);
} catch (InterruptedException e) {
System.out.println("shit!");
e.printStackTrace();
Thread.currentThread().interrupt();
}
}
private void craw(File file) throws InterruptedException {
File[] entries = file.listFiles();
//System.out.println(Arrays.toString(entries));
if (entries != null && entries.length > 0) {
for (File entry : entries) {
if (entry.isDirectory()) {
craw(entry);
} else {
fileQueue.offer(entry);
i++;
System.out.println(entry);
System.out.println(i);
}
}
}
}
public static void main(String[] args) throws InterruptedException {
FileCrawler fc = new FileCrawler(null, null);
fc.craw(new File("C:\\"));
System.out.println(fc.i);
}
}
import java.io.File;
import java.util.concurrent.BlockingQueue;
public class Indexer implements Runnable {
private BlockingQueue<File> queue;
public Indexer(BlockingQueue<File> queue) {
this.queue = queue;
}
@Override
public void run() {
try {
while (true) {
indexFile(queue.take());
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
private void indexFile(File file) {
System.out.println("Indexing ... " + file);
}
}
JUnit is presumably allowing the JVM and its threads to terminate once the test finishes -- thus your threads do not complete their work.
Try waiting for the threads to 'join':
Thread crawlerThread = new Thread(new FileCrawler(queue, new File("C:\\")));
Thread indexerThread = new Thread(new Indexer(queue));
crawlerThread.start();
indexerThread.start();
// wait for them to finish; note that join() throws InterruptedException,
// so the test method must declare or catch it
crawlerThread.join();
indexerThread.join();
This should help.
.. The other thing that can go wrong, is that log output (via Log4J) can sometimes be truncated at the end of execution; flushing & pausing can help. But I don't think that will affect you here.
I have made a program that continuously monitors a log file. But I don't know how to monitor multiple log files. This is what I did to monitor a single file. What changes should I make to the following code so that it also monitors multiple files?
package com.read;
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class FileWatcherTest {
public static void main(String args[]) {
final File fileName = new File("D:/logs/myFile.log");
// monitor a single file
TimerTask fileWatcherTask = new FileWatcher(fileName) {
long addFileLen = fileName.length();
FileChannel channel;
FileLock lock;
String a = "";
String b = "";
@Override
protected void onChange(File file) {
RandomAccessFile access = null;
try {
access = new RandomAccessFile(file, "rw");
channel = access.getChannel();
lock = channel.lock();
if (file.length() < addFileLen) {
access.seek(file.length());
} else {
access.seek(addFileLen);
}
} catch (Exception e) {
e.printStackTrace();
}
String line = null;
try {
while ((line = access.readLine()) != null) {
System.out.println(line);
}
addFileLen = file.length();
} catch (IOException ex) {
Logger.getLogger(FileWatcherTest.class.getName()).log(
Level.SEVERE, null, ex);
}
try {
lock.release();
} catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
} // Close the file
try {
channel.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
};
Timer timer = new Timer();
// repeat the check every second
timer.schedule(fileWatcherTask, new Date(), 1000);
}
}
package com.read;
import java.util.*;
import java.io.*;
public abstract class FileWatcher extends TimerTask {
private long timeStamp;
private File file;
static String s;
public FileWatcher(File file) {
this.file = file;
this.timeStamp = file.lastModified();
}
public final void run() {
long timeStamp = file.lastModified();
if (this.timeStamp != timeStamp) {
this.timeStamp = timeStamp;
onChange(file);
}
}
protected abstract void onChange(File file);
}
You should use threads. Here's a good tutorial:
http://docs.oracle.com/javase/tutorial/essential/concurrency/
You would do something like:
public class FileWatcherTest {
public static void main(String args[]) {
(new Thread(new FileWatcherRunnable("first.log"))).start();
(new Thread(new FileWatcherRunnable("second.log"))).start();
}
private static class FileWatcherRunnable implements Runnable {
private String logFilePath;
// you should inject the file path of the log file to watch
public FileWatcherRunnable(String logFilePath) {
this.logFilePath = logFilePath;
}
public void run() {
// your code from main goes in here
}
}
}
I am trying to implement a TCP server in Java using NIO.
It simply uses the Selector's select method to get the ready keys, and then processes those keys if they are acceptable, readable, and so on. The server works just fine while I'm using a single thread. But when I try to use more threads to process the keys, the server's responses get slower and it eventually stops responding, say after 4-5 requests.
This is all that I'm doing (pseudocode):
Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
while (keyIterator.hasNext()) {
SelectionKey readyKey = keyIterator.next();
if (readyKey.isAcceptable()) {
//A new connection attempt, registering socket channel with selector
} else {
Worker.add( readyKey );
    }
}
Worker is the thread class that performs Input/Output from the channel.
This is the code of my Worker class:
private static List<SelectionKey> keyPool = Collections.synchronizedList(new LinkedList<SelectionKey>());
public static void add(SelectionKey key) {
synchronized (keyPool) {
keyPool.add(key);
keyPool.notifyAll();
}
}
public void run() {
while ( true ) {
SelectionKey myKey = null;
synchronized (keyPool) {
try {
while (keyPool.isEmpty()) {
keyPool.wait();
}
} catch (InterruptedException ex) {
}
myKey = keyPool.remove(0);
keyPool.notifyAll();
}
if (myKey != null && myKey.isValid() ) {
if (myKey.isReadable()) {
//Performing reading
} else if (myKey.isWritable()) {
//performing writing
myKey.cancel();
}
}
    }
}
My basic idea is to add the key to the keyPool from which various threads can get a key, one at a time.
My BaseServer class itself runs as a thread. It creates 10 Worker threads before the event loop begins. I also tried increasing the priority of the BaseServer thread so that it gets more chances to accept the acceptable keys. Still, it stops responding after approximately 8 requests. Please help me figure out where I am going wrong. Thanks in advance. :)
You aren't removing anything from the selected-key set. You must do that every time around the loop, e.g. by calling keyIterator.remove() after you call next().
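A minimal sketch of the corrected event loop (names match the pseudocode in the question):

Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
while (keyIterator.hasNext()) {
    SelectionKey readyKey = keyIterator.next();
    keyIterator.remove(); // must remove, or the key is processed again on the next select()
    if (readyKey.isAcceptable()) {
        // a new connection attempt: accept and register the channel with the selector
    } else {
        Worker.add(readyKey);
    }
}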
You need to read the NIO Tutorials.
First of all, you should not really be using wait() and notify() calls anymore since there exist good Standard Java (1.5+) wrapper classes in java.util.concurrent, such as BlockingQueue.
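For example, the hand-rolled wait()/notifyAll() pool from the question could be replaced with a LinkedBlockingQueue along these lines (a sketch; the Worker name and key handling are taken from the question):

import java.nio.channels.SelectionKey;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Worker implements Runnable {
    private static final BlockingQueue<SelectionKey> keyPool = new LinkedBlockingQueue<>();

    public static void add(SelectionKey key) {
        keyPool.add(key); // no explicit locking or notifyAll() needed
    }

    @Override
    public void run() {
        try {
            while (true) {
                SelectionKey myKey = keyPool.take(); // blocks until a key is available
                if (myKey.isValid()) {
                    if (myKey.isReadable()) {
                        // perform reading
                    } else if (myKey.isWritable()) {
                        // perform writing
                        myKey.cancel();
                    }
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit cleanly on interruption
        }
    }
}

The same queue could carry application-level messages instead of SelectionKeys if, as suggested next, the actual channel I/O stays on the selector thread.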
Second, it's suggested to do IO in the selecting thread itself, not in the worker threads. The worker threads should just queue up reads/and writes to the selector thread(s).
This page explains it pretty good and even provides working code samples of a simple TCP/IP server: http://rox-xmlrpc.sourceforge.net/niotut/
Sorry, I don't yet have time to look at your specific example.
Try using the xsocket library. It saved me a lot of time reading forums.
Download: http://xsocket.org/
Tutorial: http://xsocket.sourceforge.net/core/tutorial/V2/TutorialCore.htm
Server Code:
import org.xsocket.connection.*;
/**
*
* @author wsserver
*/
public class XServer {
protected static IServer server;
public static void main(String[] args) {
try {
server = new Server(9905, new XServerHandler());
server.start();
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
protected static void shutdownServer(){
try{
server.close();
}catch(Exception ex){
System.out.println(ex.getMessage());
}
}
}
Server Handler:
import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import java.util.*;
import org.xsocket.*;
import org.xsocket.connection.*;
public class XServerHandler implements IConnectHandler, IDisconnectHandler, IDataHandler {
private Set<ConnectedClients> sessions = Collections.synchronizedSet(new HashSet<ConnectedClients>());
Charset charset = Charset.forName("ISO-8859-1");
CharsetEncoder encoder = charset.newEncoder();
CharsetDecoder decoder = charset.newDecoder();
ByteBuffer buffer = ByteBuffer.allocate(1024);
@Override
public boolean onConnect(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, MaxReadSizeExceededException {
try {
synchronized (sessions) {
sessions.add(new ConnectedClients(inbc, inbc.getRemoteAddress()));
}
System.out.println("onConnect"+" IP:"+inbc.getRemoteAddress().getHostAddress()+" Port:"+inbc.getRemotePort());
} catch (Exception ex) {
System.out.println("onConnect: " + ex.getMessage());
}
return true;
}
@Override
public boolean onDisconnect(INonBlockingConnection inbc) throws IOException {
try {
synchronized (sessions) {
sessions.remove(inbc);
}
System.out.println("onDisconnect");
} catch (Exception ex) {
System.out.println("onDisconnect: " + ex.getMessage());
}
return true;
}
@Override
public boolean onData(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, ClosedChannelException, MaxReadSizeExceededException {
inbc.read(buffer);
buffer.flip();
String request = decoder.decode(buffer).toString();
System.out.println("request:"+request);
buffer.clear();
return true;
}
}
Connected Clients:
import java.net.InetAddress;
import org.xsocket.connection.INonBlockingConnection;
/**
*
* @author wsserver
*/
public class ConnectedClients {
private INonBlockingConnection inbc;
private InetAddress address;
//CONSTRUCTOR
public ConnectedClients(INonBlockingConnection inbc, InetAddress address) {
this.inbc = inbc;
this.address = address;
}
//GETERS AND SETTERS
public INonBlockingConnection getInbc() {
return inbc;
}
public void setInbc(INonBlockingConnection inbc) {
this.inbc = inbc;
}
public InetAddress getAddress() {
return address;
}
public void setAddress(InetAddress address) {
this.address = address;
}
}
Client Code:
import java.net.InetAddress;
import org.xsocket.connection.INonBlockingConnection;
import org.xsocket.connection.NonBlockingConnection;
/**
*
* @author wsserver
*/
public class XClient {
protected static INonBlockingConnection inbc;
public static void main(String[] args) {
try {
inbc = new NonBlockingConnection(InetAddress.getByName("localhost"), 9905, new XClientHandler());
while (true) {
    // keep the main thread alive so the non-blocking connection stays open
}
} catch (Exception ex) {
System.out.println(ex.getMessage());
}
}
}
Client Handler:
import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import org.xsocket.MaxReadSizeExceededException;
import org.xsocket.connection.IConnectExceptionHandler;
import org.xsocket.connection.IConnectHandler;
import org.xsocket.connection.IDataHandler;
import org.xsocket.connection.IDisconnectHandler;
import org.xsocket.connection.INonBlockingConnection;
/**
*
* @author wsserver
*/
public class XClientHandler implements IConnectHandler, IDataHandler,IDisconnectHandler, IConnectExceptionHandler {
Charset charset = Charset.forName("ISO-8859-1");
CharsetEncoder encoder = charset.newEncoder();
CharsetDecoder decoder = charset.newDecoder();
ByteBuffer buffer = ByteBuffer.allocate(1024);
@Override
public boolean onConnect(INonBlockingConnection nbc) throws IOException {
System.out.println("Connected to server");
nbc.write("hello server\r\n");
return true;
}
@Override
public boolean onConnectException(INonBlockingConnection nbc, IOException ioe) throws IOException {
System.out.println("On connect exception:"+ioe.getMessage());
return true;
}
@Override
public boolean onDisconnect(INonBlockingConnection nbc) throws IOException {
System.out.println("Dissconected from server");
return true;
}
@Override
public boolean onData(INonBlockingConnection inbc) throws IOException, BufferUnderflowException, ClosedChannelException, MaxReadSizeExceededException {
inbc.read(buffer);
buffer.flip();
String request = decoder.decode(buffer).toString();
System.out.println(request);
buffer.clear();
return true;
}
}
I spent a lot of time reading forums about this; I hope I can help you with my code.