JavaMail: transmission of large and binary MIME messages (RFC 3030) - java

I need to exchange mail with a mail server using RFC 3030 for large MIME messages.
The original task is: if the MIME message size is greater than 80 MB, I need to use RFC 3030.
As I understand it, JavaMail can't do this out of the box?
Maybe I can create some handler or extension for JavaMail that implements RFC 3030?
Please help. I don't know what to do.

A quick look into SMTPTransport confirms it: plain old JavaMail does not support BDAT; it will always try to send with the DATA command, like this:
this.message.writeTo(data(), ignoreList);
finishData();
If you're not afraid (and have no legal reason not to) to tinker with the JavaMail implementation classes, you could override the methods data() and finishData(), as they're both protected (source code from here):
/**
 * Send the <code>DATA</code> command to the SMTP host and return
 * an OutputStream to which the data is to be written.
 *
 * @since JavaMail 1.4.1
 */
protected OutputStream data() throws MessagingException {
    assert Thread.holdsLock(this);
    issueSendCommand("DATA", 354);
    dataStream = new SMTPOutputStream(serverOutput);
    return dataStream;
}
/**
 * Terminate the sent data.
 *
 * @since JavaMail 1.4.1
 */
protected void finishData() throws IOException, MessagingException {
    assert Thread.holdsLock(this);
    dataStream.ensureAtBOL();
    issueSendCommand(".", 250);
}
In order to support RFC 3030, I'd suggest you start off by buffering the whole message into a ByteArrayOutputStream, which you'll need in order to determine the size of the message to be sent. If "small", do as SMTPTransport would have done. If "big", split the bytes into chunks and send them BDAT-style. I'd suggest ending with a zero-length BDAT LAST command:
protected void finishData() throws IOException, MessagingException {
    assert Thread.holdsLock(this);
    dataStream.ensureAtBOL();
    issueSendCommand("BDAT 0 LAST", 250);
}
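The chunking itself is plain array slicing. Here is a stdlib-only sketch (class name and chunk size are invented for illustration) of how a buffered message body could be cut into BDAT-sized pieces before each piece is sent with its own BDAT command:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkSplitter {
    // Split a buffered message body into fixed-size chunks; BDAT requires
    // one "BDAT <size>" command per chunk, with the final one marked LAST.
    static List<byte[]> split(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < data.length; off += chunkSize) {
            int len = Math.min(chunkSize, data.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(data, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        // 10 000 bytes in 4096-byte chunks: two full chunks plus a 1808-byte tail
        List<byte[]> chunks = split(new byte[10_000], 4096);
        System.out.println(chunks.size());        // prints 3
        System.out.println(chunks.get(2).length); // prints 1808
    }
}
```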
-- EDIT --
Here's a very unsophisticated first approach; there are lots of things to do better. Most important would be a chunking-as-you-go implementation of the OutputStream that sends out chunks of data while message.writeTo() keeps filling it. Filling up a big byte[] just to split it into chunks later is very, very bad in terms of memory footprint, but it's easier to read this way, as all of the chunking and sending happens in one place. Please note that this example uses reflection to access the serverOutput field in Oracle's SMTPTransport, so it might break at any time, without warning, with any new release of JavaMail. Also, my exception handling does not follow RFC 3030 for now, as no RSET is performed if a BDAT command fails.
package de.janschweizer;
import java.io.BufferedReader;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.io.StringReader;
import java.lang.reflect.Field;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.URLName;
import com.sun.mail.smtp.SMTPOutputStream;
public class SMTPTransport extends com.sun.mail.smtp.SMTPTransport {
//We can have our own copy - it's only used in the methods we override anyways.
private SMTPOutputStream dataStream;
private ByteArrayOutputStream baos;
public SMTPTransport(Session session, URLName urlname, String string, boolean bool) {
super(session, urlname, string, bool);
}
public SMTPTransport(Session session, URLName urlname) {
super(session, urlname);
}
protected OutputStream data() throws MessagingException {
assert(Thread.holdsLock(this));
if(!supportsExtension("CHUNKING")) {
return super.data();
}
baos = new ByteArrayOutputStream();
this.dataStream = new SMTPOutputStream(baos);
return this.dataStream;
}
protected void finishData() throws IOException, MessagingException {
assert(Thread.holdsLock(this));
if(!supportsExtension("CHUNKING")) {
super.finishData();
return;
}
this.dataStream.ensureAtBOL();
dataStream.flush();
BufferedReader br = new BufferedReader(new StringReader(new String(baos.toByteArray())));
try {
//BAD reflection hack
Field fServerOutput = com.sun.mail.smtp.SMTPTransport.class.getDeclaredField("serverOutput");
fServerOutput.setAccessible(true);
OutputStream os = (OutputStream)fServerOutput.get(this);
//Do the Chunky
ByteArrayOutputStream bchunk = new ByteArrayOutputStream();
PrintWriter pw = new PrintWriter(bchunk);
String line = br.readLine();
int linecount = 0;
while(line != null) {
pw.println(line);
if(++linecount % 5000 == 0) {
pw.flush();
byte[] chunk = bchunk.toByteArray();
sendChunk(os, chunk);
bchunk = new ByteArrayOutputStream();
pw = new PrintWriter(bchunk);
}
line = br.readLine();
}
pw.flush();
byte[] chunk = bchunk.toByteArray();
sendLastChunk(os, chunk);
} catch (Exception e) {
throw new MessagingException("ReflectionError", e);
}
}
private void sendChunk(OutputStream os, byte[] chunk) throws MessagingException, IOException {
sendCommand("BDAT "+chunk.length);
os.write(chunk);
os.flush();
int rc = readServerResponse();
if(rc != 250) {
throw new MessagingException("Something very wrong");
}
}
private void sendLastChunk(OutputStream os, byte[] chunk) throws MessagingException, IOException {
sendCommand("BDAT "+chunk.length+" LAST");
os.write(chunk);
os.flush();
int rc = readServerResponse();
if(rc != 250) {
throw new MessagingException("Something very wrong");
}
}
}
With this META-INF/javamail.providers:
protocol=smtp; type=transport; class=de.janschweizer.SMTPTransport; vendor=Jan Schweizer;

Related

How to stream JSON result with Jackson in Vert.x (java)

I'm making a REST API for my Java database-like service using Vert.x.
It's not too difficult to write the JSON result as a String to the request's stream, as shown below:
...
routeMatcher.get("/myservice/api/v1/query/:query", req -> {
// get query
String queryString = req.params().get("query");
Query query = jsonMapper.readValue(queryString, Query.class);
// my service creates a list of resulting records...
List<Record> result = myservice.query(query);
String jsonResult = jsonMapper.writeValueAsString(result);
// write entire string to response
req.response().headers().set("Content-Type", "application/json; charset=UTF-8");
req.response().end(jsonResult);
});
...
However, I'd like to stream the Java List to the response object by using Jackson's method:
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.writeValue(outputStream, result);
But I don't know how to connect Jackson's OutputStream argument to Vert.x's req.response(), as Vert.x has its own Buffer system that seems incompatible with Jackson's java.io.OutputStream argument.
Can't I use Jackson in combination with Vert.x? Should I write a custom serializer by hand with Vert.x's own JSON library? Other suggestions?
I assume you are generating huge JSON documents, since for small ones string output is good enough: objectMapper.writeValueAsString(result);
There's a problem with streams: ObjectMapper doesn't know the result size, and you will end up with this exception:
java.lang.IllegalStateException: You must set the Content-Length header to be the total size of the message body BEFORE sending any data if you are not using HTTP chunked encoding.
at org.vertx.java.core.http.impl.DefaultHttpServerResponse.write(DefaultHttpServerResponse.java:474)
So in your example I would use a temporary file for the JSON output and then flush it into the response (I haven't tested the code):
File tmpFile = File.createTempFile("tmp", ".json");
mapper.writeValue(tmpFile, result);
req.response().sendFile(tmpFile.getAbsolutePath(), (result) -> tmpFile.delete());
In case you know the content length up front, you can use the following code to adapt an OutputStream to a WriteStream:
import org.vertx.java.core.buffer.Buffer;
import org.vertx.java.core.streams.WriteStream;
import java.io.IOException;
import java.io.OutputStream;
public class OutputWriterStream extends OutputStream {
public WriteStream writeStream;
public Runnable closeHandler;
@Override
public void write(int b) throws IOException {
throw new UnsupportedOperationException();
}
@Override
public void write(byte[] b, int off, int len) throws IOException {
if (off == 0 && len == b.length) {
writeStream.write(new Buffer(b));
return;
}
byte[] bytes = new byte[len];
System.arraycopy(b, off, bytes, 0, len);
writeStream.write(new Buffer(bytes));
}
@Override
public void write(byte[] b) throws IOException {
writeStream.write(new Buffer(b));
}
@Override
public void close() throws IOException {
closeHandler.run();
}
}
This might be a slightly better answer (and updated for Vert.x 3):
import io.vertx.core.file.AsyncFile;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.http.HttpServerResponse;
import io.vertx.core.streams.WriteStream;
import java.io.IOException;
import java.io.OutputStream;
public class OutputWriterStream extends OutputStream {
public OutputWriterStream(final WriteStream response) {
this.response = response;
this.buffer = new byte[8192];
}
@Override
public synchronized void write(final int b) throws IOException {
buffer[counter++] = (byte) b;
if (counter >= buffer.length) {
flush();
}
}
@Override
public void flush() throws IOException {
super.flush();
if (counter > 0) {
byte[] remaining = buffer;
if (counter < buffer.length) {
remaining = new byte[counter];
System.arraycopy(buffer, 0, remaining, 0, counter);
}
response.write(Buffer.buffer(remaining));
counter = 0;
}
}
@Override
public void close() throws IOException {
flush();
super.close();
if (response instanceof HttpServerResponse) {
try {
response.end();
}
catch (final IllegalStateException ignore) {
}
}
else if (response instanceof AsyncFile) {
((AsyncFile) response).close();
}
}
private final WriteStream<Buffer> response;
private final byte[] buffer;
private int counter = 0;
}
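The batching in this class can be exercised in isolation by letting a plain java.io sink stand in for the Vert.x WriteStream. The following is a sketch with an invented class name and an artificially small buffer, not the production class; it mirrors the counter/flush rule above, including flushing the partial last batch on close():

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Stand-in demonstrating the batching of OutputWriterStream without Vert.x:
// bytes accumulate in a fixed buffer and only reach the sink on overflow,
// flush(), or close().
public class BatchingStream extends OutputStream {
    private final OutputStream sink;   // would be the Vert.x WriteStream
    private final byte[] buffer = new byte[8];
    private int counter = 0;

    public BatchingStream(OutputStream sink) {
        this.sink = sink;
    }

    @Override
    public synchronized void write(int b) throws IOException {
        buffer[counter++] = (byte) b;
        if (counter >= buffer.length) {
            flush();                   // same overflow rule as above
        }
    }

    @Override
    public synchronized void flush() throws IOException {
        if (counter > 0) {
            sink.write(buffer, 0, counter);
            counter = 0;
        }
        sink.flush();
    }

    @Override
    public synchronized void close() throws IOException {
        flush();                       // push out the partial last batch
        sink.close();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (OutputStream out = new BatchingStream(sink)) {
            out.write("hello world".getBytes());
        }
        System.out.println(sink.toString()); // prints "hello world"
    }
}
```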

How to properly upload (image) file to Google Cloud Storage using Java App Engine?

I have a Google App Engine instance using Java (SDK 1.9.7), and it is connected to Google Cloud Storage. I'm able to successfully take a request's input and write it out to a file/object in my Google Cloud Storage bucket. Here's the code for my servlet:
public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
// read the input stream
byte[] buffer = new byte[1024];
List<byte[]> allBytes = new LinkedList<byte[]>();
InputStream reader = req.getInputStream();
while(true) {
int bytesRead = reader.read(buffer);
if (bytesRead == -1) {
break; // have a break up with the loop.
} else if (bytesRead < 1024) {
byte[] temp = Arrays.copyOf(buffer, bytesRead);
allBytes.add(temp);
} else {
allBytes.add(buffer);
}
}
// init the bucket access
GcsService gcsService = GcsServiceFactory.createGcsService(RetryParams.getDefaultInstance());
GcsFilename filename = new GcsFilename("my-bucket", "my-file");
Builder fileOptionsBuilder = new GcsFileOptions.Builder();
fileOptionsBuilder.mimeType("text/html"); // or "image/jpeg" for image files
GcsFileOptions fileOptions = fileOptionsBuilder.build();
GcsOutputChannel outputChannel = gcsService.createOrReplace(filename, fileOptions);
// write file out
BufferedOutputStream outStream = new BufferedOutputStream(Channels.newOutputStream(outputChannel));
for (byte[] b : allBytes) {
outStream.write(b);
}
outStream.close();
outputChannel.close();
}
and when I do something like a curl POST command, this works perfectly if I just feed it data directly, like so:
curl --data "someContentToBeRead" http://myAppEngineProj.appspot.com/myServlet
and I can see exactly the string that I put in, "someContentToBeRead".
HOWEVER, when I post a file, like so:
curl -F file=@"picture.jpg" http://myAppEngineProj.appspot.com/myServlet
the file is completely corrupted. If I upload a text file, it has a line of junk at the beginning of the file and a line of junk at the end, like:
------------------------------266cb0e18eba
Content-Disposition: form-data; name="file"; filename="blah.txt"
Content-Type: text/plain
hi how are you
------------------------------266cb0e18eba--
how do I tell Cloud Storage that I want to store the data as a file?
This is what worked for me.
To upload, use
curl -F file=@"picture.jpg" http://myAppEngineProj.appspot.com/myServlet
And the servlet looks like:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.nio.channels.Channels;
import java.util.Enumeration;
import java.util.logging.Logger;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.fileupload.FileItemIterator;
import org.apache.commons.fileupload.FileItemStream;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import com.google.appengine.tools.cloudstorage.GcsFileOptions;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsOutputChannel;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.appengine.tools.cloudstorage.GcsServiceFactory;
import com.google.appengine.tools.cloudstorage.RetryParams;
public class UploadServlet extends HttpServlet {
private static final Logger log = Logger.getLogger(UploadServlet.class.getName());
private final GcsService gcsService = GcsServiceFactory.createGcsService(new RetryParams.Builder()
.initialRetryDelayMillis(10)
.retryMaxAttempts(10)
.totalRetryPeriodMillis(15000)
.build());
private String bucketName = "myBucketNameOnGoogleCloudStorage";
/** Used below to determine the size of chunks to read in. Should be > 1 KB and < 10 MB. */
private static final int BUFFER_SIZE = 2 * 1024 * 1024;
@SuppressWarnings("unchecked")
@Override
public void doPost(HttpServletRequest req, HttpServletResponse res)
throws ServletException, IOException {
String sctype = null, sfieldname, sname = null;
ServletFileUpload upload;
FileItemIterator iterator;
FileItemStream item;
InputStream stream = null;
try {
upload = new ServletFileUpload();
res.setContentType("text/plain");
iterator = upload.getItemIterator(req);
while (iterator.hasNext()) {
item = iterator.next();
stream = item.openStream();
if (item.isFormField()) {
log.warning("Got a form field: " + item.getFieldName());
} else {
log.warning("Got an uploaded file: " + item.getFieldName() +
", name = " + item.getName());
sfieldname = item.getFieldName();
sname = item.getName();
sctype = item.getContentType();
GcsFilename gcsfileName = new GcsFilename(bucketName, sname);
GcsFileOptions options = new GcsFileOptions.Builder()
.acl("public-read").mimeType(sctype).build();
GcsOutputChannel outputChannel =
gcsService.createOrReplace(gcsfileName, options);
copy(stream, Channels.newOutputStream(outputChannel));
res.sendRedirect("/");
}
}
} catch (Exception ex) {
throw new ServletException(ex);
}
}
private void copy(InputStream input, OutputStream output) throws IOException {
try {
byte[] buffer = new byte[BUFFER_SIZE];
int bytesRead = input.read(buffer);
while (bytesRead != -1) {
output.write(buffer, 0, bytesRead);
bytesRead = input.read(buffer);
}
} finally {
input.close();
output.close();
}
}
}
References: Wilson Yeung's answer above and This Post.
Although the other post has a limitation of upload size < 32 MB, that was not a problem for me. And this code also handles MIME types automatically.
As far as I can tell, there is no problem with Google Cloud Storage or the APIs; the problem is earlier, in the reading of the content from the HttpServletRequest.
The lines containing ------266cb0e18eba are actually part of the multipart MIME encoding and mark the beginning and end of a part.
You can resolve the issue in one of two ways.
Option A: Keep the code the same, but change the way you upload data
Replace:
$ curl -F file=@"picture.jpg" http://myAppEngineProj.appspot.com/myServlet
With:
$ curl -X POST -d @"picture.jpg" http://myAppEngineProj.appspot.com/myServlet
Option B: Fix the Java code and continue using curl as you are using it
Replace:
java.io.InputStream is = request.getInputStream();
With:
javax.servlet.http.Part filePart = request.getPart("file");
java.io.InputStream is = filePart.getInputStream();
This opens an input stream on the correct part of the multipart MIME message that curl constructed.
This is documented here:
http://docs.oracle.com/javaee/6/tutorial/doc/gmhba.html
Option B is probably the better option, because it will also work with HTML forms and form uploads.
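For illustration, the framing that curl -F adds (and that the Part API strips for you) can be located by hand: the part body sits between the first blank line and the closing boundary marker. This stdlib-only sketch uses an invented boundary and field name, and is not a substitute for a real multipart parser:

```java
public class MultipartBody {
    // Strip the multipart framing: the part body lies between the blank
    // line ending the part headers and the closing "--boundary--" marker.
    static String extractFirstPartBody(String payload, String boundary) {
        int headerEnd = payload.indexOf("\r\n\r\n");           // end of part headers
        int close = payload.indexOf("\r\n--" + boundary + "--"); // closing delimiter
        return payload.substring(headerEnd + 4, close);
    }

    public static void main(String[] args) {
        String boundary = "----------------------------266cb0e18eba";
        String payload = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\"blah.txt\"\r\n"
                + "Content-Type: text/plain\r\n"
                + "\r\n"
                + "hi how are you\r\n"
                + "--" + boundary + "--\r\n";
        System.out.println(extractFirstPartBody(payload, boundary)); // prints "hi how are you"
    }
}
```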

re-implementation of org.apache.commons.net.io.Util to parse an InputStream

org.apache.commons.net.io.Util uses an InputStream, which cannot be parsed live until the stream terminates. Is that correct or incorrect?
The IOUtil class is a black box to me. It uses org.apache.commons.net.io.Util, but that is equally opaque.
Specifically, the line Util.copyStream(remoteInput, localOutput); of IOUtil is intriguing:
copyStream
public static final long copyStream(InputStream source, OutputStream dest) throws CopyStreamException
Same as copyStream(source, dest, DEFAULT_COPY_BUFFER_SIZE).
Throws: CopyStreamException
How can I read either the original stream or its copy as it comes in? A live telnet connection has an InputStream which does not terminate, and I see no such functionality in the API.
Alternately, re-implementing Apache's examples.util.IOUtil leads back to the original problem:
package weathertelnet;
import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Logger;
public class StreamReader {
private final static Logger LOG = Logger.getLogger(StreamReader.class.getName());
private StringBuilder stringBuilder = new StringBuilder();
private InputStream inputStream;
public StreamReader() {
}
public void setInputStream(InputStream inputStream) throws IOException {
this.inputStream = inputStream;
readWrite();
}
public void readWrite() throws IOException {
Thread reader = new Thread() {
@Override
public void run() {
do {
try {
char ch = (char) inputStream.read();
stringBuilder.append(ch);
} catch (IOException ex) {
}
} while (true); //never stop reading the stream..
}
};
Thread writer = new Thread() {
@Override
public void run() {
//Util.copyStream(remoteInput, localOutput);
//somehow write the *live* stream to file *as* it comes in
//or, use org.apache.commons.net.io.Util to "get the data"
}
};
}
}
Either I have a fundamental misunderstanding, or, without re-implementing them (or maybe using reflection), these APIs do not allow processing of a live, unterminated InputStream.
I'm really not inclined to use reflection here. The next stage, I think, is to start breaking down what org.apache.commons.net.io.Util does and how it does it, but that's really going down the rabbit hole. Where does it end?
http://commons.apache.org/proper/commons-net/apidocs/org/apache/commons/net/io/Util.html#copyStream%28java.io.InputStream,%20java.io.OutputStream%29
You can copy a stream "live", but the InputStream will probably block when there is no more input.
You can see the code for org.apache.commons.net.io.Util#copyStream(...) here
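That blocking behaviour is easy to demonstrate with a pipe from the standard library. The following self-contained sketch (names invented) copies data "live", chunk by chunk, while the producing end is still open; read() blocks while the stream is open but empty, and only returns -1 once the producer closes it:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// A live copy loop: each chunk is forwarded as it arrives, rather than
// waiting for the whole stream to terminate.
public class LiveCopy {
    static String demo() throws Exception {
        PipedOutputStream remote = new PipedOutputStream(); // stands in for the telnet server
        PipedInputStream in = new PipedInputStream(remote);
        ByteArrayOutputStream local = new ByteArrayOutputStream();

        Thread copier = new Thread(() -> {
            try {
                byte[] buf = new byte[1024];
                int n;
                while ((n = in.read(buf)) != -1) { // blocks until data or EOF
                    local.write(buf, 0, n);
                }
            } catch (IOException ignored) {
            }
        });
        copier.start();

        remote.write("chunk one ".getBytes()); // copied while the stream is still open
        remote.write("chunk two".getBytes());
        remote.close();                        // only now does read() return -1
        copier.join();
        return local.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "chunk one chunk two"
    }
}
```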
output first:
thufir@dur:~$
thufir@dur:~$ java -jar NetBeansProjects/SSCCE/dist/SSCCE.jar
print..
makeString..
cannot remove java.util.NoSuchElementException
------------------------------------------------------------------------------
* Welcome to THE WEATHER UNDERGROUND telnet service! *
------------------------------------------------------------------------------
* *
* National Weather Service information provided by Alden Electronics, Inc. *
* and updated each minute as reports come in over our data feed. *
* *
* **Note: If you cannot get past this opening screen, you must use a *
* different version of the "telnet" program--some of the ones for IBM *
* compatible PC's have a bug that prevents proper connection. *
* *
* comments: jmasters@wunderground.com *
------------------------------------------------------------------------------
Press Return to continue:finally -- waiting for more data..
cannot remove java.util.NoSuchElementException
finally -- waiting for more data..
------------------------------------------------------------------------------
* Welcome to THE WEATHER UNDERGROUND telnet service! *
------------------------------------------------------------------------------
* *
* National Weather Service information provided by Alden Electronics, Inc. *
* and updated each minute as reports come in over our data feed. *
* *
* **Note: If you cannot get past this opening screen, you must use a *
* different version of the "telnet" program--some of the ones for IBM *
* compatible PC's have a bug that prevents proper connection. *
* *
* comments: jmasters@wunderground.com *
------------------------------------------------------------------------------
Press Return to continue:
cannot remove java.util.NoSuchElementException
^Cthufir@dur:~$
thufir@dur:~$
then code:
thufir@dur:~$ cat NetBeansProjects/SSCCE/src/weathertelnet/Telnet.java
package weathertelnet;
import static java.lang.System.out;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetAddress;
import java.net.SocketException;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.logging.Logger;
import org.apache.commons.net.telnet.TelnetClient;
public final class Telnet {
private final static Logger LOG = Logger.getLogger(Telnet.class.getName());
private TelnetClient telnetClient = new TelnetClient();
public Telnet() throws SocketException, IOException {
InetAddress host = InetAddress.getByName("rainmaker.wunderground.com");
int port = 3000;
telnetClient.connect(host, port);
final InputStream inputStream = telnetClient.getInputStream();
final ConcurrentLinkedQueue<Character> clq = new ConcurrentLinkedQueue();
final StringBuilder sb = new StringBuilder();
Thread print = new Thread() {
@Override
public void run() {
out.println("print..");
try {
char ch = (char) inputStream.read();
while (255 > ch && ch >= 0) {
clq.add(ch);
out.print(ch);
ch = (char) inputStream.read();
}
} catch (IOException ex) {
out.println("cannot read inputStream:\t" + ex);
}
}
};
Thread makeString = new Thread() {
@Override
public void run() {
out.println("makeString..");
do {
try {
do {
char ch = clq.remove();
sb.append(ch);
// out.println("appended\t" + ch);
} while (true);
} catch (java.util.NoSuchElementException | ClassCastException e) {
out.println("cannot remove\t\t" + e);
try {
Thread.sleep(1000);
} catch (InterruptedException interruptedException) {
out.println("cannot sleep1\t\t" + interruptedException);
}
} finally {
out.println("finally -- waiting for more data..\n\n" + sb + "\n\n\n");
try {
Thread.sleep(1000);
} catch (InterruptedException interruptedException) {
out.println("cannot sleep1\t\t" + interruptedException);
}
}
} while (true);
}
};
print.start();
makeString.start();
}
private void cmd(String cmd) throws IOException {//haven't tested yet..
byte[] b = cmd.getBytes();
System.out.println("streamreader has\t\t" + cmd);
int l = b.length;
for (int i = 0; i < l; i++) {
telnetClient.sendCommand(b[i]);
}
}
public static void main(String[] args) throws SocketException, IOException {
new Telnet();
}
}thufir@dur:~$
thufir@dur:~$

How do I send Http trailers/footers in a chunked response from within a java servlet?

Basically, my response headers contain:
Transfer-Encoding: chunked
Trailer: [some trailer I want to send, e.g. "SomeTrailer"]
Once I'm done writing the data to the servlet OutputStream, I write the trailer
"SomeTrailer: [value]", but this is not being parsed correctly by the HTTP client.
The HTTP client considers the whole input stream (including the trailer) as a single chunk.
I've also tried writing the trailer in a response header after the data has been written to the output stream, but without success.
Please help.
I haven't found any good sources on this, so I ended up writing a simple single-threaded web server for it. It turned out to be quite easy. The code's a bit rough, but the main idea is there.
What it does is send the file contents as the first chunk and the checksum of the file as a footer.
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.io.IOUtils;
import org.apache.log4j.Logger;
public class ChunkedResponseServer implements Runnable {
private static final Logger LOGGER = Logger.getLogger(ChunkedResponseServer.class);
// Space ' '
static final byte SP = 32;
// Tab ' '
static final byte HT = 9;
// Carriage return
static final byte CR = 13;
// Line feed character
static final byte LF = 10;
final int port;
private volatile boolean cancelled = false;
public ChunkedResponseServer(int port) {
LOGGER.info("Chunked response server running on port " + port);
this.port = port;
}
@Override
public void run() {
ServerSocket serverSocket = null;
try {
serverSocket = new ServerSocket(port);
while (!cancelled) {
final Socket connectionSocket = serverSocket.accept();
handle(connectionSocket);
}
} catch (final IOException e) {
throw new RuntimeException(e);
}
}
public void cancel() {
LOGGER.info("Shutting down Chunked response Server");
cancelled = true;
}
private void handle(Socket socket) throws IOException {
BufferedReader input = null;
DataOutputStream output = null;
try {
input = new BufferedReader(new InputStreamReader(socket.getInputStream()));
output = new DataOutputStream(socket.getOutputStream());
addHeaders(output);
addCRLR(output);
final String filename = readFilename(input);
final byte[] content = readContent(filename);
addContentAsChunk(output, content);
final String checksum = DigestUtils.md5Hex(content);
addLastChunkAndChecksumFooter(output, checksum);
addCRLR(output);
} finally {
IOUtils.closeQuietly(input);
IOUtils.closeQuietly(output);
}
}
private void addLastChunkAndChecksumFooter(DataOutputStream output, String checksum) throws IOException {
output.writeBytes("0");
addCRLR(output);
output.writeBytes("checksum: " + checksum);
addCRLR(output);
}
private void addContentAsChunk(DataOutputStream output, byte[] content) throws IOException {
output.writeBytes(Integer.toHexString(content.length));
addCRLR(output);
output.write(content);
addCRLR(output);
}
private void addCRLR(DataOutputStream output) throws IOException {
output.writeByte(CR);
output.writeByte(LF);
}
private void addHeaders(DataOutputStream output) throws IOException {
output.writeBytes("HTTP/1.1 200 OK");
addCRLR(output);
output.writeBytes("Content-type: text/plain");
addCRLR(output);
output.writeBytes("Transfer-encoding: chunked");
addCRLR(output);
output.writeBytes("Trailer: checksum");
addCRLR(output);
}
private String readFilename(BufferedReader input) throws IOException {
final String initialLine = input.readLine();
final String filePath = initialLine.split(" ")[1];
final String[] components = filePath.split("/");
return components[components.length - 1];
}
private byte[] readContent(String filename) throws IOException {
final InputStream in = Thread.currentThread().getContextClassLoader().getResourceAsStream(filename);
return IOUtils.toByteArray(in);
}
}
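The wire format the handle() method produces follows HTTP/1.1 chunked framing: a hex size line, the chunk data, a terminating 0-size chunk, then the trailer line and a blank line. A stdlib-only helper (method and names made up) renders that framing for a single chunk:

```java
public class ChunkedWire {
    // Render a chunked body with one data chunk and one trailer field:
    // "<hex-size>\r\n<data>\r\n0\r\n<name>: <value>\r\n\r\n"
    static String chunked(String body, String trailerName, String trailerValue) {
        return Integer.toHexString(body.length()) + "\r\n"
                + body + "\r\n"
                + "0\r\n"
                + trailerName + ": " + trailerValue + "\r\n"
                + "\r\n";
    }

    public static void main(String[] args) {
        // the checksum value here is a placeholder, not a real MD5
        System.out.print(chunked("hi how are you", "checksum", "d41d..."));
    }
}
```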

JLayer Synchronization

(I'm attempting to make my previous question more generic in the hope of a solution.)
I am using the JLayer library and a sample.mp3 file. I would like to play AND decode the file at the same time.
However, I want them to be synchronized: if a part of the song is decoded, it is also played. Nothing is decoded before it is played and vice versa (to a reasonable degree, of course).
Here is how a song is played and decoded, respectively:
Player p = new Player(mp3stream);
p.play();
Decoder d = new Decoder();
Bitstream bs = new Bitstream(mp3stream);
SampleBuffer s = (SampleBuffer) d.decodeFrame(bs.readFrame(), bs);
// ... for processing the SampleBuffer but irrelevant for the question
I currently use:
InputStream mp3stream = new FileInputStream("sample.mp3");
but this consumes the whole song at once, so I am unable to synchronize. Is there a way to break sample.mp3 into pieces that can be manipulated by both processes? If I had small enough pieces, I could feed each piece into both processes, wait until both finished, and then grab the next small piece, repeating until I was out of pieces.
Note: I have tried using ByteArrayInputStream with no success, but perhaps my methodology is incorrect when using it.
I hope I get this right:
You have a single input file.
You want two different input streams to be synchronized, in the sense that "they must make the same progress" in the stream.
This is an interesting question. I came up with the following sketch (it compiles, but I didn't execute it, so you may want to do a little testing first).
Create a wrapper object, StreamSynchronizer, that controls access to the underlying input. Only a single byte is read at a time, until all derived streams have read that byte.
Derive any number of SynchronizedStream instances from it that delegate read() back to the StreamSynchronizer.
package de.mit.stackoverflow;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
public class StreamSynchronizer {
final private InputStream inputStream;
private List<SynchronizedStream> activeStreams = new ArrayList<>();
private int lastByte;
private Set<SynchronizedStream> waitingStreams = new HashSet<>();
private Object lock = new Object();
public StreamSynchronizer(InputStream is) throws IOException {
super();
this.inputStream = is;
lastByte = getInputStream().read();
}
public void close(SynchronizedStream stream) {
activeStreams.remove(stream);
}
public SynchronizedStream createStream() {
SynchronizedStream stream = new SynchronizedStream(this);
activeStreams.add(stream);
return stream;
}
public InputStream getInputStream() {
return inputStream;
}
public int read(SynchronizedStream stream) throws IOException {
synchronized (lock) {
while (waitingStreams.contains(stream)) {
if (waitingStreams.size() == activeStreams.size()) {
waitingStreams.clear();
lastByte = getInputStream().read();
lock.notifyAll();
} else {
try {
lock.wait();
} catch (InterruptedException e) {
throw new IOException(e);
}
}
}
waitingStreams.add(stream);
return lastByte;
}
}
}
package de.mit.stackoverflow;
import java.io.IOException;
import java.io.InputStream;
public class SynchronizedStream extends InputStream {
final private StreamSynchronizer synchronizer;
protected SynchronizedStream(StreamSynchronizer synchronizer) {
this.synchronizer = synchronizer;
}
@Override
public void close() throws IOException {
getSynchronizer().close(this);
}
public StreamSynchronizer getSynchronizer() {
return synchronizer;
}
@Override
public int read() throws IOException {
return getSynchronizer().read(this);
}
}
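To check the locking idea, here is a condensed, runnable variant of the two classes above, collapsed into one class and stripped of the create/close bookkeeping, so it's a sketch rather than the posted implementation. Two readers end up with identical bytes because neither can advance past the other:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashSet;
import java.util.Set;

// Trimmed StreamSynchronizer: the shared byte only advances once every
// registered reader has consumed it.
public class LockStep {
    private final InputStream source;
    private final Set<Object> waiting = new HashSet<>();
    private final int readers;
    private int lastByte;

    LockStep(InputStream source, int readers) throws IOException {
        this.source = source;
        this.readers = readers;
        this.lastByte = source.read();
    }

    synchronized int read(Object reader) throws IOException {
        while (waiting.contains(reader)) {       // this reader already has the byte
            if (waiting.size() == readers) {     // everyone has it: advance
                waiting.clear();
                lastByte = source.read();
                notifyAll();
            } else {
                try { wait(); } catch (InterruptedException e) { throw new IOException(e); }
            }
        }
        waiting.add(reader);
        return lastByte;
    }

    static String demo() throws Exception {
        LockStep sync = new LockStep(new ByteArrayInputStream("abc".getBytes()), 2);
        StringBuilder a = new StringBuilder(), b = new StringBuilder();
        Thread ta = new Thread(() -> {
            try { int c; while ((c = sync.read("A")) != -1) a.append((char) c); }
            catch (IOException ignored) {}
        });
        Thread tb = new Thread(() -> {
            try { int c; while ((c = sync.read("B")) != -1) b.append((char) c); }
            catch (IOException ignored) {}
        });
        ta.start(); tb.start(); ta.join(); tb.join();
        return a + " " + b;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "abc abc"
    }
}
```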
