I'm trying to implement communication between a server and a client using the Genson library. I've run into the following problem: when the client sends a message, the application stalls while Genson on the server is trying to read it.
However, if I shut down the client, the message is read and processed perfectly. I thought it might be a deadlock, but I'm not sure.
There is no such problem with native Java serialization.
Here is my server:
import com.owlike.genson.Genson;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;
public class Server {
public static void main(String[] args) throws IOException {
Genson genson = new Genson();
try (ServerSocket server = new ServerSocket(9991)) {
try (Socket socket = server.accept()) {
int[] loc = genson.deserialize(socket.getInputStream(), int[].class);
System.out.println("Server: " + Arrays.toString(loc));
genson.serialize(loc, socket.getOutputStream());
}
}
}
}
Here is the client:
import com.owlike.genson.Genson;
import java.io.IOException;
import java.net.Socket;
import java.util.Arrays;
public class Client {
public static void main(String[] args) throws IOException {
Genson genson = new Genson();
try (Socket socket = new Socket("localhost", 9991)) {
genson.serialize(new int[] {1, 2, 3}, socket.getOutputStream());
int[] loc = genson.deserialize(socket.getInputStream(), int[].class);
System.out.println("Client: " + Arrays.toString(loc));
}
}
}
I would highly appreciate any help with this question. Thanks in advance.
Edit: This is really weird. I've made some additional tests and here is what I get:
Additional class:
import com.owlike.genson.annotation.JsonProperty;
import java.io.Serializable;
public class Tester implements Serializable {
public static final Tester TEST = new Tester(Math.E);
private double val = Math.PI;
public Tester(@JsonProperty("val") double val) {
this.val = val;
}
public Tester() {}
public String toString() {
return "" + val;
}
}
With genson.serialize(Tester.TEST, socket.getOutputStream()) in the client request I get the same strange result, but with genson.serialize(new Tester(Double.NaN), socket.getOutputStream()) the result is the expected one.
Furthermore, if I define the only field in the Tester class to be of type int[], let's say, it only works with the values null or new int[0].
In addition, if I try to serialize and transmit an int in the range 0..9, I observe the same strange behaviour, except that when I shut down the client the server always shows the value 0.
On the other hand, for constants like Double.NaN, Double.POSITIVE_INFINITY, Integer.MAX_VALUE and similar there is nothing strange at all (everything works as expected).
For those additional tests, the Genson instance was created as follows:
Genson genson = new GensonBuilder()
.useMethods(false)
.setFieldFilter(VisibilityFilter.PRIVATE)
.create();
Note that there is no such issue when serializing to or deserializing from a file using streams:
import com.owlike.genson.Genson;
import com.owlike.genson.GensonBuilder;
import com.owlike.genson.reflect.VisibilityFilter;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
public class FileTest {
private static final String FILENAME = "test.json";
public static void main(String[] args) throws IOException {
Genson genson = new GensonBuilder()
.useMethods(false)
.setFieldFilter(VisibilityFilter.PRIVATE)
.useIndentation(true)
.create();
try (OutputStream stream = new FileOutputStream(FILENAME)) {
genson.serialize(Tester.TEST, stream);
}
try (InputStream stream = new FileInputStream(FILENAME)) {
System.out.println(genson.deserialize(stream, Tester.class));
}
}
}
Looks like it was my mistake all along. I forgot that a socket's stream cannot be closed without closing the socket itself. So the server tries to read as much data from the InputStream as it can, but it can never consume the whole stream (the stream stays open and more data could arrive from the client at any time). The server therefore blocks waiting for data that never comes, which produces exactly the situation described above.
A solution is to specify some kind of protocol that encodes the query size, so the server knows how much data it should consume. See this answer for more details.
Some code paths in the reading API eagerly ensure that at least N bytes are available or EOF has been reached. As you noted, this happens all the time when parsing a number that is not a constant.
One option is to implement a small layer that serializes to a byte array or string, writes the message length to the output, and then writes the payload. On the reading side you first read the length and then read from the stream in a loop until you have that many bytes or hit EOF.
Then you just pass this in-memory message to Genson.deserialize.
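A minimal sketch of such a layer is shown below; the sendMessage/readMessage helpers and the 4-byte length prefix are choices made here for illustration, not something Genson itself provides:

import com.owlike.genson.Genson;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class LengthPrefixedMessages {

    // Serialize to an in-memory string first, then write a 4-byte length followed by the payload.
    static void sendMessage(Genson genson, Object message, OutputStream out) throws IOException {
        byte[] payload = genson.serialize(message).getBytes(StandardCharsets.UTF_8);
        DataOutputStream dataOut = new DataOutputStream(out);
        dataOut.writeInt(payload.length);
        dataOut.write(payload);
        dataOut.flush();
    }

    // Read the length, then exactly that many bytes, and only then hand the buffer to Genson.
    static <T> T readMessage(Genson genson, InputStream in, Class<T> type) throws IOException {
        DataInputStream dataIn = new DataInputStream(in);
        int length = dataIn.readInt();
        byte[] payload = new byte[length];
        dataIn.readFully(payload);
        return genson.deserialize(new String(payload, StandardCharsets.UTF_8), type);
    }
}

With a prefix like this, the server knows exactly how many bytes belong to one message, so deserialization no longer blocks waiting for data that will never arrive.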
Related
I was trying to write a chat and I noticed that readUTF() and writeUTF() did not work: readUTF() keeps waiting even though I have already called writeUTF(). I made a simple test and it does not work; what am I doing wrong?
(I know that I could use data streams instead of object streams, but in my chat I want to write both objects and strings.)
Server code:
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
public class Server {
public static void main(String[] args) throws IOException {
ServerSocket server = new ServerSocket(40001);
Socket s = server.accept();
ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
ObjectInputStream in = new ObjectInputStream(s.getInputStream());
System.out.println(in.readUTF());
out.writeUTF("E");
}
}
Client code:
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;
public class Client {
public static void main(String[] args) throws IOException {
Socket s = new Socket("localhost", 40001);
ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
ObjectInputStream in = new ObjectInputStream(s.getInputStream());
out.writeUTF("A");
System.out.println(in.readUTF());
}
}
I could use writeObject("A") and cast the result of readObject() to String, but I want to know why this way does not work.
You need to call flush() after writing using writeUTF. The reason that writeObject just seems to work is that writeObject will switch to a specific mode before it starts writing, and switch back after it is done. This switching back will automatically flush the buffered data (at least in Java 11). This is not the case for writeUTF, and an explicit call to flush() is needed.
ObjectOutputStream uses an internal buffer, so you should try out.flush() after writes if you want content to be available to read on the InputStream.
The javadoc for ObjectOutputStream includes:
callers may wish to flush the stream immediately to ensure that constructors for receiving ObjectInputStreams will not block when reading the header
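In the client above, that just means flushing right after the write; a minimal sketch of the change:

out.writeUTF("A");
out.flush(); // push the buffered bytes onto the socket so the server's readUTF() can return
System.out.println(in.readUTF());

The same applies on the server side after out.writeUTF("E"), otherwise the reply can sit in the ObjectOutputStream's buffer.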
My final goal is to convert a file from ANSI to UTF-8. To do so, I use the following Java code:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class ConvertFromAnsiToUtf8 {
public static void main(String[] args) throws IOException {
try {
Path p = Paths.get("C:\\shared_to_vm\\test_encode\\test.csv");
ByteBuffer bb = ByteBuffer.wrap(Files.readAllBytes(p));
CharBuffer cb = Charset.forName("windows-1252").decode(bb);
bb = Charset.forName("UTF-8").encode(cb);
Files.write(p, bb.array());
} catch (Exception e) {
System.out.println(e);
}
}
}
The code works perfectly when I test it on small files: the file is converted from ANSI to UTF-8 and all characters are recognized and correctly encoded. But as soon as I try it on the file I actually need to convert, I get java.lang.OutOfMemoryError: Java heap space.
As far as I understand, my file has about 1.5 million lines, so I'm pretty sure I'm creating too many objects in my application.
Of course, I have checked what this error means and how I could solve it (like here or here for example), but is increasing the memory available to my JVM the only way to solve it? And if it is, how much more should I use?
Any kind of help (tip, advice, link or anything else) would be greatly appreciated!
Don't read the whole file at once:
ByteBuffer bb = ByteBuffer.wrap(Files.readAllBytes(p));
Instead, try to read line-by-line:
Files.lines(p, Charset.forName("windows-1252")).forEach(line -> {
// Convert your line, write to file
});
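For example, a complete version of that approach could look like the following sketch (the output path test-utf8.csv is just an illustrative name, assuming you write the result to a separate file):

import java.io.PrintWriter;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ConvertFromAnsiToUtf8LineByLine {
    public static void main(String[] args) throws Exception {
        Path source = Paths.get("C:\\shared_to_vm\\test_encode\\test.csv");
        Path target = Paths.get("C:\\shared_to_vm\\test_encode\\test-utf8.csv"); // hypothetical output file
        try (Stream<String> lines = Files.lines(source, Charset.forName("windows-1252"));
             PrintWriter writer = new PrintWriter(Files.newBufferedWriter(target, StandardCharsets.UTF_8))) {
            // Only one line is held in memory at a time, so the heap stays small.
            lines.forEach(writer::println);
        }
    }
}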
Stream the input, convert the character encoding, and write the output as you go. This way, you don't need to read the entire file into memory, but only as much as you want.
If you want to minimize the number of (slowish) system calls, you could use a similar approach, but explicitly create a BufferedInputStream with a larger internal buffer, and then wrap that in an InputStreamReader. But the simple approach shown here is unlikely to be a critical point in many applications.
private static final Charset WINDOWS1252 = Charset.forName("windows-1252");
private static final int DEFAULT_BUF_SIZE = 8192;
public static void transcode(Path input, Path output) throws IOException {
try (Reader r = Files.newBufferedReader(input, WINDOWS1252);
Writer w = Files.newBufferedWriter(output, StandardCharsets.UTF_8, StandardOpenOption.CREATE_NEW)) {
char[] buf = new char[DEFAULT_BUF_SIZE];
while (true) {
int n = r.read(buf);
if (n < 0) break;
w.write(buf, 0, n);
}
}
}
If you have a file that is larger than the available memory, you should convert the characters chunk by chunk.
Below you can find an example:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
public class Iconv {
private static void iconv(Charset toCode, Charset fromCode, Path src, Path dst) throws IOException {
CharsetDecoder decoder = fromCode.newDecoder();
CharsetEncoder encoder = toCode.newEncoder();
try (ReadableByteChannel source = FileChannel.open(src, StandardOpenOption.READ);
WritableByteChannel destination = FileChannel.open(dst, StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING,
StandardOpenOption.WRITE);) {
ByteBuffer readBytes = ByteBuffer.allocate(4096);
while (source.read(readBytes) > 0) {
readBytes.flip();
destination.write(encoder.encode(decoder.decode(readBytes)));
readBytes.clear();
}
}
}
public static void main(String[] args) throws Exception {
iconv(Charset.forName("UTF-8"), Charset.forName("Windows-1252"), Paths.get("test.csv") , Paths.get("test-utf8.csv") );
}
}
I'm working on a tool that analyzes some SSL services, and right now I'm trying to test client-initiated renegotiation.
I'm using BouncyCastle to do so, with a TlsClientProtocol and a custom method, because BC doesn't natively handle renegotiation.
So, right now I'm using this class:
package org.bouncycastle.crypto.tls;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.Hashtable;
import org.bouncycastle.crypto.tls.Certificate;
import org.bouncycastle.crypto.tls.CipherSuite;
import org.bouncycastle.crypto.tls.DefaultTlsClient;
import org.bouncycastle.crypto.tls.ExtensionType;
import org.bouncycastle.crypto.tls.ServerOnlyTlsAuthentication;
import org.bouncycastle.crypto.tls.TlsAuthentication;
import org.bouncycastle.crypto.tls.TlsClientProtocol;
import org.bouncycastle.util.encoders.Hex;
/**
*
* @version 1.0
*/
public class TestProtocol extends TlsClientProtocol {
private byte[] verifyData;
public TestProtocol(InputStream input, OutputStream output) {
super(input, output);
}
// I need to override this method to capture the verifyData,
// because we need to send it in the renegotiation_info extension
@Override
protected void sendFinishedMessage() throws IOException {
verifyData = createVerifyData(getContext().isServer());
ByteArrayOutputStream bos = new ByteArrayOutputStream();
TlsUtils.writeUint8(HandshakeType.finished, bos);
TlsUtils.writeUint24(verifyData.length, bos);
bos.write(verifyData);
byte[] message = bos.toByteArray();
safeWriteRecord(ContentType.handshake, message, 0, message.length);
}
public void renegotiate() throws IOException {
this.connection_state = CS_START;
sendClientHelloMessage();
this.connection_state = CS_CLIENT_HELLO;
completeHandshake();
}
public static void main(String[] args) throws IOException, InterruptedException {
Socket s = new Socket("10.0.0.101", 443);
final TestProtocol proto = new TestProtocol(s.getInputStream(), s.getOutputStream());
proto.connect(new DefaultTlsClient() {
public TlsAuthentication getAuthentication() throws IOException {
return new ServerOnlyTlsAuthentication() {
public void notifyServerCertificate(Certificate serverCertificate) throws IOException {}
};
}
@Override
public int[] getCipherSuites() {
return new int[]{CipherSuite.TLS_RSA_WITH_NULL_SHA, CipherSuite.TLS_RSA_WITH_NULL_MD5};
}
private boolean first = true;
@Override
public Hashtable getClientExtensions() throws IOException {
@SuppressWarnings("unchecked")
Hashtable<Integer, byte[]> clientExtensions = super.getClientExtensions();
if (clientExtensions == null) {
clientExtensions = new Hashtable<Integer, byte[]>();
}
// If this is the first ClientHello, we're not doing anything
if (first) {
first = false;
}
else {
// If this is the second, we add the renegotiation_info extension
byte[] ext = new byte[proto.verifyData.length + 1];
ext[0] = (byte) proto.verifyData.length;
System.arraycopy(proto.verifyData, 0, ext, 1, proto.verifyData.length);
clientExtensions.put(ExtensionType.renegotiation_info, ext);
}
clientExtensions.put(ExtensionType.session_ticket, new byte[] {});
return clientExtensions;
}
});
proto.renegotiate();
}
}
And it's working... almost.
When I call the renegotiate() method, it:
- sends the ClientHello
- receives the ServerHello
- receives the Certificate
- receives the ServerHelloDone
- sends the ClientKeyExchange
- sends the ChangeCipherSpec
- sends the Finished
- receives an alert: Fatal, Decrypt Error; instead of NewSessionTicket, ChangeCipherSpec, Finished
And I just can't figure out why. I thought it might be the sequence number used to create the MAC needing a refresh, but no: when I give it an obviously wrong value, I receive a MAC error alert instead.
To do my testing, I'm using a server allowing CLEAR cipher suites and obviously allowing Client-initiated Renegotiation.
When I try with OpenSSL it works fine, and I can't see what the difference is or what I'm doing wrong.
The server is on a private VPN, so you can't use it to test things, but here are the .cap files of the handshakes:
https://stuff.stooit.com/d/1/528b4a314e35d/openssl.cap
https://stuff.stooit.com/d/1/528b4a54a68cd/my.cap
The first one is the working one, using openSSL.
And the second one is mine, using BouncyCastle.
I'm aware that it won't be very easy to help me with this one, but hey, thanks to the people who'll try :)
OK, as always, I found the answer shortly after posting my question (even though I had already been on it for hours/days).
The problem comes from the "Finished" message the client sends. The verify_data is a hash over all previous handshake messages of the current negotiation.
But in my case it also contained the handshake messages of the first negotiation, so the verify_data didn't have the right value.
So to make it work, I need to reset the RecordStream hash, using RecordStream.hash.reset().
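A sketch of what the renegotiate() method could look like with that reset; this assumes the hash member is reachable from TestProtocol because the class is declared in the org.bouncycastle.crypto.tls package, and the exact field names may vary between BouncyCastle versions:

public void renegotiate() throws IOException {
    // Assumption: recordStream and its hash field are accessible here;
    // treat this line as pseudocode if your BC version names them differently.
    recordStream.hash.reset(); // drop the transcript of the first handshake

    this.connection_state = CS_START;
    sendClientHelloMessage();
    this.connection_state = CS_CLIENT_HELLO;
    completeHandshake();
}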
First off, I'm a n00b in Java. I can understand most concepts, but in this situation I'd like somebody to help me. I'm using JBoss Netty to handle a simple HTTP request and MemCachedClient to check for the existence of the client IP in memcached.
import org.jboss.netty.channel.ChannelHandler;
import static org.jboss.netty.handler.codec.http.HttpHeaders.*;
import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.*;
import static org.jboss.netty.handler.codec.http.HttpResponseStatus.*;
import static org.jboss.netty.handler.codec.http.HttpVersion.*;
import com.danga.MemCached.*;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.http.Cookie;
import org.jboss.netty.handler.codec.http.CookieDecoder;
import org.jboss.netty.handler.codec.http.CookieEncoder;
import org.jboss.netty.handler.codec.http.DefaultHttpResponse;
import org.jboss.netty.handler.codec.http.HttpChunk;
import org.jboss.netty.handler.codec.http.HttpChunkTrailer;
import org.jboss.netty.handler.codec.http.HttpRequest;
import org.jboss.netty.handler.codec.http.HttpResponse;
import org.jboss.netty.handler.codec.http.HttpResponseStatus;
import org.jboss.netty.handler.codec.http.QueryStringDecoder;
import org.jboss.netty.util.CharsetUtil;
/**
* @author The Netty Project
* @author Andy Taylor (andy.taylor@jboss.org)
* #author Trustin Lee
*
* @version $Rev: 2368 $, $Date: 2010-10-18 17:19:03 +0900 (Mon, 18 Oct 2010) $
*/
@SuppressWarnings({"ALL"})
public class HttpRequestHandler extends SimpleChannelUpstreamHandler {
private HttpRequest request;
private boolean readingChunks;
/** Buffer that stores the response content */
private final StringBuilder buf = new StringBuilder();
protected MemCachedClient mcc = new MemCachedClient();
private static SockIOPool poolInstance = null;
static {
// server list and weights
String[] servers =
{
"lcalhost:11211"
};
//Integer[] weights = { 3, 3, 2 };
Integer[] weights = {1};
// grab an instance of our connection pool
SockIOPool pool = SockIOPool.getInstance();
// set the servers and the weights
pool.setServers(servers);
pool.setWeights(weights);
// set some basic pool settings
// 5 initial, 5 min, and 250 max conns
// and set the max idle time for a conn
// to 6 hours
pool.setInitConn(5);
pool.setMinConn(5);
pool.setMaxConn(250);
pool.setMaxIdle(21600000); //1000 * 60 * 60 * 6
// set the sleep for the maint thread
// it will wake up every x seconds and
// maintain the pool size
pool.setMaintSleep(30);
// set some TCP settings
// disable nagle
// set the read timeout to 3 secs
// and don't set a connect timeout
pool.setNagle(false);
pool.setSocketTO(3000);
pool.setSocketConnectTO(0);
// initialize the connection pool
pool.initialize();
// lets set some compression on for the client
// compress anything larger than 64k
//mcc.setCompressEnable(true);
//mcc.setCompressThreshold(64 * 1024);
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
HttpRequest request = this.request = (HttpRequest) e.getMessage();
if(mcc.get(request.getHeader("X-Real-Ip")) != null)
{
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
response.setHeader("X-Accel-Redirect", request.getUri());
ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
}
else {
sendError(ctx, NOT_FOUND);
}
}
private void writeResponse(MessageEvent e) {
// Decide whether to close the connection or not.
boolean keepAlive = isKeepAlive(request);
// Build the response object.
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
response.setContent(ChannelBuffers.copiedBuffer(buf.toString(), CharsetUtil.UTF_8));
response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8");
if (keepAlive) {
// Add 'Content-Length' header only for a keep-alive connection.
response.setHeader(CONTENT_LENGTH, response.getContent().readableBytes());
}
// Encode the cookie.
String cookieString = request.getHeader(COOKIE);
if (cookieString != null) {
CookieDecoder cookieDecoder = new CookieDecoder();
Set<Cookie> cookies = cookieDecoder.decode(cookieString);
if(!cookies.isEmpty()) {
// Reset the cookies if necessary.
CookieEncoder cookieEncoder = new CookieEncoder(true);
for (Cookie cookie : cookies) {
cookieEncoder.addCookie(cookie);
}
response.addHeader(SET_COOKIE, cookieEncoder.encode());
}
}
// Write the response.
ChannelFuture future = e.getChannel().write(response);
// Close the non-keep-alive connection after the write operation is done.
if (!keepAlive) {
future.addListener(ChannelFutureListener.CLOSE);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
throws Exception {
e.getCause().printStackTrace();
e.getChannel().close();
}
private void sendError(ChannelHandlerContext ctx, HttpResponseStatus status) {
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, status);
response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8");
response.setContent(ChannelBuffers.copiedBuffer(
"Failure: " + status.toString() + "\r\n",
CharsetUtil.UTF_8));
// Close the connection as soon as the error message is sent.
ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
}
}
When I try to send a request like http://127.0.0.1:8090/1/2/3
I'm getting
java.lang.NoClassDefFoundError: com/danga/MemCached/MemCachedClient
at httpClientValidator.server.HttpRequestHandler.<clinit>(HttpRequestHandler.java:66)
I believe it's not related to the classpath. Maybe it's related to a context in which mcc doesn't exist.
Any help appreciated.
EDIT:
Original code http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/http/snoop/package-summary.html
I've modified some parts to fit my needs.
Why do you think this is not classpath related? That's the kind of error you get when the jar you need is not available. How do you start your app?
EDIT
Sorry - I loaded and tried the java_memcached-release_2.5.2 bundle in Eclipse and found no issue so far. Debugging the class loading revealed nothing unusual. I can't help beyond a few more hints to double-check:
- make sure your download is correct: download and unpack again (are the com.schooner.* classes available?)
- make sure you are using Java > 1.5
- make sure your classpath is correct and complete. The example you have shown does not include Netty. Where is it?
- I'm not familiar with the interactions stemming from adding a classpath to the manifest. Maybe revert to plain style: add all the jars needed (memcached, Netty, yours) to the classpath and reference the main class to start, rather than a startable jar file.
I want to write to a named pipe (already created) without blocking on the reader. My reader is another application that may go down. If the reader does go down, I want the writer application to keep writing to that named pipe. Something like this in Java:
fopen(fPath, O_NONBLOCK)
So that when the reader comes back up, it can resume from where it failed.
First I'll try to answer your questions. Then I'll show you a code snippet I created that solves your problem using blocking IO.
Your questions
I want to write to a named pipe (already created) without blocking on the reader
You don't need non-blocking IO to solve your problem; I don't think it can even help here. Blocking IO will work just as well (maybe even better than non-blocking IO, given the low concurrency), and it is easier to program. Your reader can/should stay blocking.
My reader is another application that may go down. If the reader does go down, I want the writer application to keep writing to the named pipe. So that when the reader comes up, it may resume from where it failed.
Just put the messages into a blocking queue, and write to the named pipe only when the reader is reading from it (this happens automatically because of blocking IO). There is no need for non-blocking file IO when you use a blocking queue: the data is delivered asynchronously from the blocking queue whenever a reader is reading, which gets the data from your writer to the reader.
Something like a fopen(fPath, O_NONBLOCK) in Java
You don't need non-blocking IO, and it wouldn't help even if you used it; just use blocking IO.
CODE SNIPPET
I created a little snippet which I believe demonstrates what you need.
Components:
- Writer.java: reads lines from the console as an example. When you start the program, type some text followed by Enter and it will be sent to your named pipe. The writer will resume writing when necessary.
- Reader.java: reads the lines written to your named pipe (by Writer.java).
- Named pipe: I assume you have created a pipe named "pipe" in the same directory.
Writer.java
import java.io.BufferedWriter;
import java.io.Console;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.logging.Level;
import java.util.logging.Logger;
public class Writer {
private final BlockingDeque<StringBuffer> queue;
private final String filename;
public static void main(String[] args) throws Exception {
final Console console = System.console();
final Writer writer = new Writer("pipe");
writer.init();
while(true) {
String readLine = console.readLine();
writer.write(new StringBuffer(readLine));
}
}
public Writer(final String filename){
this.queue = new LinkedBlockingDeque<StringBuffer>();
this.filename = filename;
}
public void write(StringBuffer buf) {
queue.add(buf);
}
public void init() {
ExecutorService single = Executors.newSingleThreadExecutor();
Runnable runnable = new Runnable() {
public void run() {
while(true) {
PrintWriter w = null;
try {
String toString = queue.take().toString();
// Opening the pipe blocks until a reader has it open, which is exactly what we want here.
w = new PrintWriter(new BufferedWriter(new FileWriter(filename)), true);
w.println(toString);
} catch (Exception ex) {
Logger.getLogger(Writer.class.getName()).log(Level.SEVERE, null, ex);
} finally {
if (w != null) {
w.close(); // close the pipe after each message so file handles are not leaked
}
}
}
}
};
single.submit(runnable);
}
}
Reader.java
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
public class Reader {
private final BufferedReader br;
public Reader(final String filename) throws FileNotFoundException {
br = new BufferedReader(new FileReader(filename));
}
public String readLine() throws IOException {
return br.readLine();
}
public void close() {
try {
br.close();
} catch (IOException ex) {
Logger.getLogger(Reader.class.getName()).log(Level.SEVERE, null, ex);
}
}
public static void main(String[] args) throws FileNotFoundException {
Reader reader = new Reader("pipe");
while(true) {
try {
String readLine = reader.readLine();
System.out.println("readLine = " + readLine);
} catch (IOException ex) {
reader.close();
break;
}
}
}
}
If you want pipes to stay active and queue up messages, you probably want a messaging system rather than a raw pipe. In Java, the standard API is the Java Message Service (JMS), and there are many implementations; the most common one I've seen is Apache ActiveMQ. If you want a cross-platform, sockets-like interface that does buffering and recovery, I might suggest 0MQ, which, while not pure Java, has bindings for many languages and excellent performance.
If there was such a thing as non-blocking file I/O in Java, which there isn't, a write to a named pipe that wasn't being read would return zero and not write anything. So non-blocking isn't part of the solution.
There's also the issue that named pipes have a finite buffer size. They aren't infinite queues regardless of whether there is a reading process or not. I agree with the suggestion to look into JMS.
You should be able to use NIO's asynchronous write on a UNIX FIFO, just as you can with any other file:
AsynchronousFileChannel channel = AsynchronousFileChannel.open(...);
Future<Integer> writeFuture = channel.write(...);
... or...
channel.write(..., myCompletionHandler);
However, it's not clear to me what you want to happen when the FIFO isn't accepting writes. Do you want it to buffer? If so, you'll need to provide that buffering within the Java program. Do you want it to time out? There's no simple timeout option on Java file writes.
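If you do go the Future route shown above, you can at least bound how long you wait for the result yourself. A sketch follows; "/path/to/fifo" is a placeholder, whether a FIFO can be opened through AsynchronousFileChannel at all is platform-dependent, and the timeout only limits how long you wait, it does not cancel the underlying write:

import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FifoWriteWithTimeout {
    public static void main(String[] args) throws Exception {
        try (AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                Paths.get("/path/to/fifo"), StandardOpenOption.WRITE)) {
            ByteBuffer buffer = ByteBuffer.wrap("hello\n".getBytes(StandardCharsets.UTF_8));
            Future<Integer> writeFuture = channel.write(buffer, 0);
            try {
                int written = writeFuture.get(5, TimeUnit.SECONDS); // wait at most 5 seconds
                System.out.println("wrote " + written + " bytes");
            } catch (TimeoutException e) {
                writeFuture.cancel(true); // stop waiting; the write may or may not complete
            }
        }
    }
}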
These aren't insurmountable problems. If you're determined, you can probably get something working. But I wonder whether you wouldn't find life much easier if you just used a TCP socket or a JMS queue.