InputStream to ServletInputStream - Java

I have this InputStream:
InputStream inputStream = new ByteArrayInputStream(myString.getBytes(StandardCharsets.UTF_8));
How can I convert this to ServletInputStream?
I have tried:
ServletInputStream servletInputStream = (ServletInputStream) inputStream;
but it does not work.
EDIT:
My method is this:
private static class LowerCaseRequest extends HttpServletRequestWrapper {
public LowerCaseRequest(final HttpServletRequest request) throws IOException, ServletException {
super(request);
}
@Override
public ServletInputStream getInputStream() throws IOException {
ServletInputStream servletInputStream;
StringBuilder jb = new StringBuilder();
String line;
String toLowerCase = "";
BufferedReader reader = new BufferedReader(new InputStreamReader(super.getInputStream()));
while ((line = reader.readLine()) != null) {
toLowerCase = jb.append(line).toString().toLowerCase();
}
InputStream inputStream = new ByteArrayInputStream(toLowerCase.getBytes(StandardCharsets.UTF_8));
servletInputStream = (ServletInputStream) inputStream;
return servletInputStream;
}
}
I'm trying to convert my entire request to lowercase.

My advice: don't create the ByteArrayInputStream; just use the byte array you already got from getBytes. That should be enough to create a ServletInputStream.
Most basic solution
Unfortunately, aksappy's answer only overrides the read method. While this may be enough in Servlet API 3.0 and below, in the later versions of Servlet API there are three more methods you have to implement.
Here is my implementation of the class, although with it becoming quite long (due to the new methods introduced in Servlet API 3.1), you might want to think about factoring it out into a nested or even top-level class.
final byte[] myBytes = myString.getBytes(StandardCharsets.UTF_8);
ServletInputStream servletInputStream = new ServletInputStream() {
private int lastIndexRetrieved = -1;
private ReadListener readListener = null;
@Override
public boolean isFinished() {
return (lastIndexRetrieved == myBytes.length-1);
}
@Override
public boolean isReady() {
// This implementation will never block
// We also never need to call the readListener from this method, as this method will never return false
return isFinished();
}
@Override
public void setReadListener(ReadListener readListener) {
this.readListener = readListener;
if (!isFinished()) {
try {
readListener.onDataAvailable();
} catch (IOException e) {
readListener.onError(e);
}
} else {
try {
readListener.onAllDataRead();
} catch (IOException e) {
readListener.onError(e);
}
}
}
@Override
public int read() throws IOException {
int i;
if (!isFinished()) {
i = myBytes[lastIndexRetrieved+1] & 0xFF; // mask so byte values above 127 are not returned as negative
lastIndexRetrieved++;
if (isFinished() && (readListener != null)) {
try {
readListener.onAllDataRead();
} catch (IOException ex) {
readListener.onError(ex);
throw ex;
}
}
return i;
} else {
return -1;
}
}
};
Adding expected methods
Depending on your requirements, you may also want to override other methods. As romfret pointed out, it's advisable to override some methods, such as close and available. If you don't implement them, the stream will always report that there are 0 bytes available to be read, and the close method will do nothing to affect the state of the stream. You can probably get away without overriding skip, as the default implementation will just call read a number of times.
@Override
public int available() throws IOException {
return (myBytes.length-lastIndexRetrieved-1);
}
@Override
public void close() throws IOException {
lastIndexRetrieved = myBytes.length-1;
}
Writing a better close method
Unfortunately, due to the nature of an anonymous class, it's going to be difficult to write an effective close method: as long as an instance of the stream has not been garbage-collected, it keeps a reference to the byte array, even after the stream has been closed.
However, if you factor out the class into a nested or top-level class (or even an anonymous class with a constructor which you call from the line in which it is defined), the myBytes can be a non-final field rather than a final local variable, and you can add a line like:
myBytes = null;
to your close method, which will allow Java to free memory taken up by the byte array.
Of course, this will require you to write a constructor, such as:
private byte[] myBytes;
public StringServletInputStream(String str) {
try {
myBytes = str.getBytes("UTF-8");
} catch (UnsupportedEncodingException e) {
throw new IllegalStateException("JVM did not support UTF-8", e);
}
}
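The close method itself would then look something like this (a minimal sketch, assuming the factored-out class above; after close() the other methods would also need a null check on myBytes):
@Override
public void close() throws IOException {
    if (myBytes != null) {
        lastIndexRetrieved = myBytes.length - 1;
        myBytes = null; // drop the reference so the byte array can be garbage-collected
    }
}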
Mark and Reset
You may also want to override mark, markSupported and reset if you want to support mark/reset. I am not sure if they are ever actually called by your container though.
private int readLimit = -1;
private int markedPosition = -1;
@Override
public boolean markSupported() {
return true;
}
@Override
public synchronized void mark(int readLimit) {
this.readLimit = readLimit;
this.markedPosition = lastIndexRetrieved;
}
@Override
public synchronized void reset() throws IOException {
if (markedPosition == -1) {
throw new IOException("No mark found");
} else {
lastIndexRetrieved = markedPosition;
readLimit = -1;
}
}
// Replacement of earlier read method to cope with readLimit
@Override
public int read() throws IOException {
int i;
if (!isFinished()) {
i = myBytes[lastIndexRetrieved+1] & 0xFF; // mask so byte values above 127 are not returned as negative
lastIndexRetrieved++;
if (isFinished() && (readListener != null)) {
try {
readListener.onAllDataRead();
} catch (IOException ex) {
readListener.onError(ex);
throw ex;
}
readLimit = -1;
}
if (readLimit != -1) {
if ((lastIndexRetrieved - markedPosition) > readLimit) {
// This part is actually not necessary in our implementation
// as we are not storing any data. However we need to respect
// the contract.
markedPosition = -1;
readLimit = -1;
}
}
return i;
} else {
return -1;
}
}

Try this code.
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(myString.getBytes(StandardCharsets.UTF_8));
ServletInputStream servletInputStream = new ServletInputStream() {
@Override
public int read() throws IOException {
return byteArrayInputStream.read();
}
};

You can only cast something like this:
ServletInputStream servletInputStream = (ServletInputStream) inputStream;
if the inputStream you are trying to cast is actually a ServletInputStream already. It will complain if it's some other implementation of InputStream. You can't cast an object to something it isn't.
In a Servlet container, you can get a ServletInputStream from a ServletRequest:
ServletInputStream servletInputStream = request.getInputStream();
So, what are you actually trying to do?
EDIT
I'm intrigued as to why you want to convert your request to lower-case - why not just make your servlet case-insensitive? In other words, your code to lower-case the request data can be copied into your servlet, then it can process it there... always look for the simplest solution!
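For example, here is a minimal sketch of that simpler route, reading and lower-casing the body inside the servlet itself (the doPost method name and the UTF-8 charset are assumptions, not from the question):
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Read the whole request body and lower-case it here, instead of wrapping the request
    String body = new BufferedReader(
            new InputStreamReader(request.getInputStream(), StandardCharsets.UTF_8))
            .lines()
            .collect(Collectors.joining("\n"));
    String lowerCased = body.toLowerCase();
    // ... process lowerCased as before ...
}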

Related

How can if else be replaced with Optional.orElse in GZIPWrapper?

Hello everyone. While working with gzip I ran into a question: I have a GzipWrapper with a lot of if/else; is it possible to do something like that with Optional.orElse? I worked through simple Optional examples, but I don't quite understand how to do this in a wrapper. An example on one of the methods will suffice. Thanks in advance.
MyWrapper:
public class GZIPFilterResponseWrapper extends HttpServletResponseWrapper implements Closeable {
private PrintWriter printWriter;
private GZIPFilterResponseStream gzipStream;
private ServletOutputStream outputStream;
public GZIPFilterResponseWrapper(HttpServletResponse response) throws IOException {
super(response);
response.addHeader(CONTENT_ENCODING, GZIP);
gzipStream = new GZIPFilterResponseStream(response.getOutputStream());
}
@Override
public void flushBuffer() throws IOException {
if (nonNull(printWriter)) {
printWriter.flush();
}
if (nonNull(outputStream)) {
outputStream.flush();
}
super.flushBuffer();
}
@Override
public ServletOutputStream getOutputStream() throws IOException {
if (nonNull(printWriter)) {
throw new IllegalStateException(GZIP_CANNOT_WRITE);
}
if (isNull(outputStream)) {
outputStream = gzipStream;
}
return outputStream;
}
@Override
public PrintWriter getWriter() throws IOException {
if (nonNull(outputStream)) {
throw new IllegalStateException(GZIP_WRITER_ALREADY_HAS_CALLING);
}
if (isNull(printWriter)) {
printWriter = new PrintWriter(new OutputStreamWriter(gzipStream, getResponse().getCharacterEncoding()));
}
return printWriter;
}
@Override
public void close() throws IOException {
if (nonNull(printWriter)) {
printWriter.close();
}
if (nonNull(outputStream)) {
try {
outputStream.close();
} catch (IOException e) {
throw new IOException(e.getMessage());
}
}
}
}
Optional is not a replacement for conditional logic.
Optional was added so that APIs had a consistent way of declaring that a method returns a value that may or may not be present, without returning null. Returning null is vulnerable to exceptions, and there is no easy way to know whether to expect null without reading the documentation. Optional makes this contract explicit.
So while it may be possible to replace your conditional logic with Optionals, that would be unlikely to make your code any better or any easier to read.
Here's one example just to satisfy your curiosity:
if (nonNull(printWriter)) {
printWriter.flush();
}
becomes
Optional.ofNullable(printWriter).ifPresent(PrintWriter::flush);
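For comparison, here is a hedged sketch of getOutputStream() from your wrapper with the null check on outputStream replaced the same way (not a recommendation); note that the printWriter guard still needs an explicit if/throw, which is part of why Optional rarely helps here:
@Override
public ServletOutputStream getOutputStream() throws IOException {
    if (nonNull(printWriter)) {
        throw new IllegalStateException(GZIP_CANNOT_WRITE);
    }
    // replaces: if (isNull(outputStream)) { outputStream = gzipStream; }
    outputStream = Optional.ofNullable(outputStream).orElse(gzipStream);
    return outputStream;
}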

Pipe Broken with PipeInputStream with kubernetes-client exec()

I'm using the kubernetes-client to try to copy a directory from a pod, but I'm doing something wrong with the input stream from stdout. I get a java.io.IOException: Pipe broken exception when it tries to read(). I'm pretty sure that no data flows at all. I'm half wondering if I need to read the InputStream on a separate thread or something?
The stream is created like this:
public InputStream copyFiles(String containerId,
String folderName) {
ExecWatch exec = client.pods().withName(containerId).redirectingOutput().exec("tar -C " + folderName + " -c");
// We need to wrap the InputStream so that when the stdout is closed, then the underlying ExecWatch is closed
// also. This will cleanup any Websockets connections.
ChainedCloseInputStreamWrapper inputStreamWrapper = new ChainedCloseInputStreamWrapper(exec.getOutput(), exec);
return inputStreamWrapper;
}
And the InputStream is processed in this function
void copyVideos(final String containerId) {
TarArchiveInputStream tarStream = new TarArchiveInputStream(containerClient.copyFiles(containerId, "/videos/"));
TarArchiveEntry entry;
boolean videoWasCopied = false;
try {
while ((entry = tarStream.getNextTarEntry()) != null) {
if (entry.isDirectory()) {
continue;
}
String fileExtension = entry.getName().substring(entry.getName().lastIndexOf('.'));
testInformation.setFileExtension(fileExtension);
File videoFile = new File(testInformation.getVideoFolderPath(), testInformation.getFileName());
File parent = videoFile.getParentFile();
if (!parent.exists()) {
parent.mkdirs();
}
OutputStream outputStream = new FileOutputStream(videoFile);
IOUtils.copy(tarStream, outputStream);
outputStream.close();
videoWasCopied = true;
LOGGER.log(Level.INFO, "{0} Video file copied to: {1}/{2}", new Object[]{getId(),
testInformation.getVideoFolderPath(), testInformation.getFileName()});
}
} catch (IOException e) {
LOGGER.log(Level.WARNING, getId() + " Error while copying the video", e);
ga.trackException(e);
} finally {
if (!videoWasCopied) {
testInformation.setVideoRecorded(false);
}
}
}
The InputStream wrapper class is just there to close the ExecWatch at the end once the InputStream is closed; it looks like this:
private static class ChainedCloseInputStreamWrapper extends InputStream {
private InputStream delegate;
private Closeable resourceToClose;
public ChainedCloseInputStreamWrapper(InputStream delegate, Closeable resourceToClose) {
this.delegate = delegate;
this.resourceToClose = resourceToClose;
}
@Override
public int read() throws IOException {
return delegate.read();
}
public int available() throws IOException {
return delegate.available();
}
public void close() throws IOException {
logger.info("Shutdown called!");
delegate.close();
// Close our dependent resource
resourceToClose.close();
}
public boolean equals(Object o) {
return delegate.equals(o);
}
public int hashCode() {
return delegate.hashCode();
}
public int read(byte[] array) throws IOException {
return delegate.read(array);
}
public int read(byte[] array,
int n,
int n2) throws IOException {
return delegate.read(array, n, n2);
}
public long skip(long n) throws IOException {
return delegate.skip(n);
}
public void mark(int n) {
delegate.mark(n);
}
public void reset() throws IOException {
delegate.reset();
}
public boolean markSupported() {
return delegate.markSupported();
}
public String toString() {
return delegate.toString();
}
}
Turns out I had the tar command wrong, so it was causing a failure and the stdout PipeInputStream was deadlocking. I managed to find a workaround for the deadlock, but the main reason for the failure was that I forgot to tell tar to actually do something: I at least needed a "." to include the current directory.
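For reference, a hedged sketch of what the corrected call might look like (the exact tar arguments depend on your setup; the trailing "." is what tells tar to archive the current directory):
ExecWatch exec = client.pods().withName(containerId)
        .redirectingOutput()
        .exec("tar -C " + folderName + " -c .");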

Apache Util.copyStream: Stop Stream

Is it possible to stop the bytesTransferred stream for the Apache Util.copyStream function?
long bytesTransferred = Util.copyStream(inputStream, outputStream, 32768, CopyStreamEvent.UNKNOWN_STREAM_SIZE, new CopyStreamListener() {
@Override
public void bytesTransferred(CopyStreamEvent event) {
bytesTransferred(event.getTotalBytesTransferred(), event.getBytesTransferred(), event.getStreamSize());
}
@Override
public void bytesTransferred(long totalBytesTransferred, int bytesTransferred,
long streamSize) {
try {
if(true) {
log.info("Stopping");
return; //Cancel
} else {
log.info("Still going");
}
} catch (InterruptedException e) {
// this should not happen!
}
}
});
In this case, what happens is that I keep getting a Stopping message in my logs. I also tried throwing a new RuntimeException instead of returning, and again I get endless Stopping messages. How would I cancel the bytesTransferred in this case?
You could try wrapping the input stream, and overriding the read methods to check for a stop flag. If set, throw an IOException. Example class.
/**
* Wrapped input stream that can be cancelled.
*/
public class WrappedStoppableInputStream extends InputStream
{
private InputStream m_wrappedInputStream;
private boolean m_stop = false;
/**
* Constructor.
* @param inputStream original input stream
*/
public WrappedStoppableInputStream(InputStream inputStream)
{
m_wrappedInputStream = inputStream;
}
/**
* Call to stop reading stream.
*/
public void cancelTransfer()
{
m_stop = true;
}
@Override
public int read() throws IOException
{
if (m_stop)
{
throw new IOException("Stopping stream");
}
return m_wrappedInputStream.read();
}
@Override
public int read(byte[] b) throws IOException
{
if (m_stop)
{
throw new IOException("Stopping stream");
}
return m_wrappedInputStream.read(b);
}
@Override
public int read(byte[] b, int off, int len) throws IOException
{
if (m_stop)
{
throw new IOException("Stopping stream");
}
return m_wrappedInputStream.read(b, off, len);
}
}
I am assuming that the file copying is running inside a thread. So you wrap your input stream with WrappedStoppableInputStream, and pass that to your copy function, to be used instead of the original input stream.
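A hedged usage sketch, reusing the call from the question (copyStreamListener stands in for the listener you already have):
// Wrap the source stream before handing it to Util.copyStream,
// then call cancelTransfer() from another thread to abort the copy.
WrappedStoppableInputStream stoppable = new WrappedStoppableInputStream(inputStream);
long bytesTransferred = Util.copyStream(stoppable, outputStream, 32768,
        CopyStreamEvent.UNKNOWN_STREAM_SIZE, copyStreamListener);
// later, e.g. from a cancel handler on another thread:
stoppable.cancelTransfer();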

Java Reader pre & post data

Is there a Reader class (JDK or library) I can use to decorate another Reader in such a way that the new reader returns "PREFIX" + everything from innerReader + "POSTFIX"?
I want to decorate the file contents with a header and a footer before returning the Reader to the caller.
Not in the standard library, but take a look at http://ostermiller.org/utils/Concat.html
Looks promising, but I haven't used it myself.
I've built this based on GreyBeardedGeek's post; maybe somebody can use it:
/**
* Utility <code>Reader</code> implementation which joins one or more other <code>Reader</code> to appear as one.
*/
public class CompositeReader extends Reader {
/** Logger. */
private final static Logger log = LoggerFactory.getLogger(CompositeReader.class);
/** List of readers (in order). */
private final Reader[] readers;
/** Current index. */
private int index;
/**
* @param readers ordered list of <code>Reader</code> to read from.
*/
public CompositeReader(final Reader... readers) {
checkArgument(readers.length > 0, "Argument readers must not be empty.");
this.readers = readers;
index = 0;
}
@Override
public int read(final char[] cbuf, final int off, final int len) throws IOException {
int read = 0;
while (read < len && index != readers.length) {
final Reader reader = readers[index];
final int readFromReader = reader.read(cbuf, off + read, len - read);
if (readFromReader == -1) {
++index;
} else {
read += readFromReader;
}
}
if (read == 0) {
return -1;
}
return read;
}
@Override
public void close() throws IOException {
IOException firstException = null;
for (final Reader reader : readers) {
try {
reader.close();
} catch (final IOException ex) {
if (firstException == null) {
firstException = ex;
} else {
log.warn("Multiple readers could not be closed, only the first exception will be thrown.");
}
}
}
if (firstException != null) {
throw firstException;
}
}
}
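For the original question, a hedged usage sketch (the file name is illustrative): wrap the file's Reader between two StringReaders.
Reader decorated = new CompositeReader(
        new StringReader("PREFIX"),
        new BufferedReader(new FileReader("data.txt")), // hypothetical file
        new StringReader("POSTFIX"));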
Here you go :-)

How to Cache InputStream for Multiple Use

I have an InputStream of a file and I use Apache POI components to read from it like this:
POIFSFileSystem fileSystem = new POIFSFileSystem(inputStream);
The problem is that I need to use the same stream multiple times, and the POIFSFileSystem closes the stream after use.
What is the best way to cache the data from the input stream and then serve more input streams to different POIFSFileSystem instances?
EDIT 1:
By cache I meant store for later use, not as a way to speed up the application. Also, is it better to just read the input stream into an array or string and then create input streams for each use?
EDIT 2:
Sorry to reopen the question, but the conditions are somewhat different when working inside a desktop versus a web application.
First of all, the InputStream I get from the org.apache.commons.fileupload.FileItem in my Tomcat web app doesn't support marking and thus cannot be reset.
Second, I'd like to be able to keep the file in memory for faster access and fewer I/O problems when dealing with files.
You can decorate the InputStream being passed to POIFSFileSystem with a version that, when close() is called, responds with reset():
class ResetOnCloseInputStream extends InputStream {
private final InputStream decorated;
public ResetOnCloseInputStream(InputStream anInputStream) {
if (!anInputStream.markSupported()) {
throw new IllegalArgumentException("marking not supported");
}
anInputStream.mark( 1 << 24); // magic constant: BEWARE
decorated = anInputStream;
}
@Override
public void close() throws IOException {
decorated.reset();
}
@Override
public int read() throws IOException {
return decorated.read();
}
}
Test case:
static void closeAfterInputStreamIsConsumed(InputStream is)
throws IOException {
int r;
while ((r = is.read()) != -1) {
System.out.println(r);
}
is.close();
System.out.println("=========");
}
public static void main(String[] args) throws IOException {
InputStream is = new ByteArrayInputStream("sample".getBytes());
ResetOnCloseInputStream decoratedIs = new ResetOnCloseInputStream(is);
closeAfterInputStreamIsConsumed(decoratedIs);
closeAfterInputStreamIsConsumed(decoratedIs);
closeAfterInputStreamIsConsumed(is);
}
EDIT 2
You can also read the entire file into a byte[] (slurp mode) and then pass it to a ByteArrayInputStream.
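A minimal sketch of that slurp approach, assuming commons-io is available (IOUtils.toByteArray); each consumer then gets its own ByteArrayInputStream over the same bytes:
byte[] data = IOUtils.toByteArray(inputStream);   // read the stream once
POIFSFileSystem first = new POIFSFileSystem(new ByteArrayInputStream(data));
POIFSFileSystem second = new POIFSFileSystem(new ByteArrayInputStream(data));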
Try BufferedInputStream, which adds mark and reset functionality to another input stream, and just override its close method:
public class UnclosableBufferedInputStream extends BufferedInputStream {
public UnclosableBufferedInputStream(InputStream in) {
super(in);
super.mark(Integer.MAX_VALUE);
}
@Override
public void close() throws IOException {
super.reset();
}
}
So:
UnclosableBufferedInputStream bis = new UnclosableBufferedInputStream (inputStream);
and use bis wherever inputStream was used before.
This works correctly:
byte[] bytes = getBytes(inputStream);
POIFSFileSystem fileSystem = new POIFSFileSystem(new ByteArrayInputStream(bytes));
where getBytes is like this:
private static byte[] getBytes(InputStream is) throws IOException {
byte[] buffer = new byte[8192];
ByteArrayOutputStream baos = new ByteArrayOutputStream(2048);
int n;
baos.reset();
while ((n = is.read(buffer, 0, buffer.length)) != -1) {
baos.write(buffer, 0, n);
}
return baos.toByteArray();
}
Use the implementation below for more customized use:
public class ReusableBufferedInputStream extends BufferedInputStream
{
private int totalUse;
private int used;
public ReusableBufferedInputStream(InputStream in, Integer totalUse)
{
super(in);
if (totalUse > 1)
{
super.mark(Integer.MAX_VALUE);
this.totalUse = totalUse;
this.used = 1;
}
else
{
this.totalUse = 1;
this.used = 1;
}
}
@Override
public void close() throws IOException
{
if (used < totalUse)
{
super.reset();
++used;
}
else
{
super.close();
}
}
}
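A hedged usage sketch: you declare up front how many times the stream will be consumed, and every close() before the last one resets instead of closing.
// First use: close() resets the buffer. Second use: close() really closes.
ReusableBufferedInputStream reusable = new ReusableBufferedInputStream(inputStream, 2);
POIFSFileSystem first = new POIFSFileSystem(reusable);
POIFSFileSystem second = new POIFSFileSystem(reusable);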
What exactly do you mean by "cache"? Do you want the different POIFSFileSystem instances to start at the beginning of the stream? If so, there's absolutely no point caching anything in your Java code; the OS will do it for you, so just open a new stream.
Or do you want to continue reading at the point where the first POIFSFileSystem stopped? That's not caching, and it's very difficult to do. The only way I can think of, if you can't avoid the stream getting closed, would be to write a thin wrapper that counts how many bytes have been read, then open a new stream and skip that many bytes. But that could fail when POIFSFileSystem internally uses something like a BufferedInputStream.
If the file is not that big, read it into a byte[] array and give POI a ByteArrayInputStream created from that array.
If the file is big, then you shouldn't care, since the OS will do the caching for you as best as it can.
[EDIT] Use Apache commons-io to read the File into a byte array in an efficient way. Do not use int read() since it reads the file byte by byte which is very slow!
If you want to do it yourself, use a File object to get the length, create the array, and then write a loop which reads bytes from the file. You must loop since read(byte[], int offset, int len) can read less than len bytes (and usually does).
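A minimal sketch of that manual loop (the file name is illustrative):
File file = new File("workbook.xls");            // hypothetical path
byte[] data = new byte[(int) file.length()];
try (FileInputStream in = new FileInputStream(file)) {
    int offset = 0;
    while (offset < data.length) {
        // read() may return fewer bytes than requested, so keep looping
        int read = in.read(data, offset, data.length - offset);
        if (read == -1) {
            throw new EOFException("File shrank while reading");
        }
        offset += read;
    }
}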
This is how I would implement it, to be safely used with any InputStream (a rough sketch follows the list):
write your own InputStream wrapper where you create a temporary file to mirror the original stream content
dump everything read from the original input stream into this temporary file
when the stream has been completely read you will have all the data mirrored in the temporary file
use InputStream.reset to switch (initialize) the internal stream to a FileInputStream(mirrored_content_file)
from now on you will lose the reference to the original stream (so it can be collected)
add a new method release() which will remove the temporary file and release any open stream
you can even call release() from finalize to be sure the temporary file is released in case you forget to call release() (most of the time you should avoid using finalize; always call a method to release object resources). See Why would you ever implement finalize()?
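A rough, hedged sketch of that idea (class and method names are illustrative, not an existing API; error handling is kept to a minimum):
public class CachedOnDiskInputStream extends InputStream {
    private final File mirror;
    private InputStream source;      // original stream, dropped after the first full pass
    private OutputStream mirrorOut;
    private InputStream current;

    public CachedOnDiskInputStream(InputStream source) throws IOException {
        this.mirror = File.createTempFile("stream-mirror", ".tmp");
        this.source = source;
        this.mirrorOut = new FileOutputStream(mirror);
        this.current = source;
    }

    @Override
    public int read() throws IOException {
        int b = current.read();
        if (b != -1 && mirrorOut != null) {
            mirrorOut.write(b);      // copy every byte into the temporary mirror file
        }
        return b;
    }

    /** Switch subsequent reads to the mirrored file; call after the stream was read fully. */
    public void switchToMirror() throws IOException {
        if (mirrorOut != null) {
            mirrorOut.close();
            mirrorOut = null;
            source.close();
            source = null;           // lose the reference so it can be collected
        }
        current = new FileInputStream(mirror);
    }

    /** Remove the temporary file and release any open streams. */
    public void release() throws IOException {
        if (mirrorOut != null) {
            mirrorOut.close();
        }
        if (current != null) {
            current.close();
        }
        mirror.delete();
    }
}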
public static void main(String[] args) throws IOException {
BufferedInputStream inputStream = new BufferedInputStream(IOUtils.toInputStream("Foobar"));
inputStream.mark(Integer.MAX_VALUE);
System.out.println(IOUtils.toString(inputStream));
inputStream.reset();
System.out.println(IOUtils.toString(inputStream));
}
This works. IOUtils is part of commons IO.
This answer iterates on the previous BufferedInputStream-based ones. The main changes are that it allows infinite reuse, and it takes care of closing the original source input stream to free up system resources. Your OS defines a limit on those, and you don't want the program to run out of file handles (that's also why you should always "consume" responses, e.g. with Apache's EntityUtils.consumeQuietly()). EDIT: Updated the code to handle greedy consumers that use read(buffer, offset, length); in that case BufferedInputStream may try hard to look at the source, and this code protects against that.
public class CachingInputStream extends BufferedInputStream {
public CachingInputStream(InputStream source) {
super(new PostCloseProtection(source));
super.mark(Integer.MAX_VALUE);
}
@Override
public synchronized void close() throws IOException {
if (!((PostCloseProtection) in).decoratedClosed) {
in.close();
}
super.reset();
}
private static class PostCloseProtection extends InputStream {
private volatile boolean decoratedClosed = false;
private final InputStream source;
public PostCloseProtection(InputStream source) {
this.source = source;
}
@Override
public int read() throws IOException {
return decoratedClosed ? -1 : source.read();
}
@Override
public int read(byte[] b) throws IOException {
return decoratedClosed ? -1 : source.read(b);
}
@Override
public int read(byte[] b, int off, int len) throws IOException {
return decoratedClosed ? -1 : source.read(b, off, len);
}
@Override
public long skip(long n) throws IOException {
return decoratedClosed ? 0 : source.skip(n);
}
@Override
public int available() throws IOException {
return source.available();
}
@Override
public void close() throws IOException {
decoratedClosed = true;
source.close();
}
@Override
public void mark(int readLimit) {
source.mark(readLimit);
}
@Override
public void reset() throws IOException {
source.reset();
}
@Override
public boolean markSupported() {
return source.markSupported();
}
}
}
To reuse it, just close it first if it wasn't closed already.
One limitation though is that if the stream is closed before the whole content of the original stream has been read, then this decorator will have incomplete data, so make sure the whole stream is read before closing.
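A hedged usage sketch (this assumes the consumer, here POIFSFileSystem, reads the stream fully before closing it, as the limitation above requires):
CachingInputStream cached = new CachingInputStream(originalInputStream);
POIFSFileSystem first = new POIFSFileSystem(cached);   // reads fully, close() rewinds the buffer
POIFSFileSystem second = new POIFSFileSystem(cached);  // replays the cached bytes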
I'll just add my solution here, as it works for me. It is basically a combination of the top two answers :)
private String convertStreamToString(InputStream is) {
Writer w = new StringWriter();
char[] buf = new char[1024];
Reader r;
is.mark(1 << 24);
try {
r = new BufferedReader(new InputStreamReader(is, "UTF-8"));
int n;
while ((n=r.read(buf)) != -1) {
w.write(buf, 0, n);
}
is.reset();
} catch(UnsupportedEncodingException e) {
Logger.debug(this.getClass(), "Cannot convert stream to string.", e);
} catch(IOException e) {
Logger.debug(this.getClass(), "Cannot convert stream to string.", e);
}
return w.toString();
}
