Hello everyone working with gzip, I'm faced with a question. I have a GzipWrapper with a lot of if/else blocks. Is it possible to replace them with something like Optional.orElse? I've worked through simple Optional examples, but I don't quite understand how to do this in a wrapper. An example for one of the methods will suffice) Thanks in advance)
MyWrapper:
public class GZIPFilterResponseWrapper extends HttpServletResponseWrapper implements Closeable {
private PrintWriter printWriter;
private GZIPFilterResponseStream gzipStream;
private ServletOutputStream outputStream;
public GZIPFilterResponseWrapper(HttpServletResponse response) throws IOException {
super(response);
response.addHeader(CONTENT_ENCODING, GZIP);
gzipStream = new GZIPFilterResponseStream(response.getOutputStream());
}
@Override
public void flushBuffer() throws IOException {
if (nonNull(printWriter)) {
printWriter.flush();
}
if (nonNull(outputStream)) {
outputStream.flush();
}
super.flushBuffer();
}
@Override
public ServletOutputStream getOutputStream() throws IOException {
if (nonNull(printWriter)) {
throw new IllegalStateException(GZIP_CANNOT_WRITE);
}
if (isNull(outputStream)) {
outputStream = gzipStream;
}
return outputStream;
}
@Override
public PrintWriter getWriter() throws IOException {
if (nonNull(outputStream)) {
throw new IllegalStateException(GZIP_WRITER_ALREADY_HAS_CALLING);
}
if (isNull(printWriter)) {
printWriter = new PrintWriter(new OutputStreamWriter(gzipStream, getResponse().getCharacterEncoding()));
}
return printWriter;
}
@Override
public void close() throws IOException {
if (nonNull(printWriter)) {
printWriter.close();
}
if (nonNull(outputStream)) {
try {
outputStream.close();
} catch (IOException e) {
throw new IOException(e.getMessage());
}
}
}
}
Optional is not a replacement for conditional logic.
Optional was added so that APIs had a consistent way of declaring that a method returns a value that may or may not be present, without returning null. Returning null is vulnerable to exceptions, and there is no easy way to know whether to expect null without reading the documentation. Optional makes this contract explicit.
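For example, a return type of Optional documents the "maybe absent" case right in the signature. A tiny sketch (UserLookup and findUserName are made-up names for illustration):
import java.util.Optional;

class UserLookup {
    // Hypothetical repository method: the signature itself says a user may be absent.
    Optional<String> findUserName(String id) {
        return Optional.empty(); // stubbed for illustration
    }

    String displayName(String id) {
        // The caller must handle absence explicitly instead of risking a NullPointerException.
        return findUserName(id).orElse("unknown");
    }
}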
So while it may be possible to replace your conditional logic with Optionals, that would be unlikely to make your code any better or any easier to read.
Here's one example just to satisfy your curiosity:
if (nonNull(printWriter)) {
printWriter.flush();
}
becomes
Optional.ofNullable(printWriter).ifPresent(PrintWriter::flush);
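If you really want to see it on a whole method, here is a sketch of flushBuffer() rewritten that way. Note that ServletOutputStream.flush() declares IOException, so its method reference does not fit ifPresent(Consumer) without wrapping the checked exception, which illustrates why the plain if reads better here:
@Override
public void flushBuffer() throws IOException {
    // PrintWriter.flush() declares no checked exception, so the method reference compiles.
    Optional.ofNullable(printWriter).ifPresent(PrintWriter::flush);
    // ServletOutputStream.flush() throws IOException, so ifPresent(...) would force you
    // to catch or wrap it; the original if statement is simpler.
    if (nonNull(outputStream)) {
        outputStream.flush();
    }
    super.flushBuffer();
}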
Let's assume that I have a Java program that creates a report with multiple threads writing to a file:
public File report = new File("C:\\somewhere\\file");
public FileWriter fileWriter = new FileWriter("C:\\somewhere\\file");
//Some thread executed the following statement
fileWriter.write("creating report for this thread");
Instead of using a file, I want to use some type of string buffer to create the report so I can return it in a REST response. What can I use that has the same outcome as using a File?
Update: I want to completely omit the file implementation as I can't store it in the cloud.
You can use the OutputStream of the HttpServletResponse to send the file as a stream. Don't forget to set the relevant headers. You can write a method to stream the file to the response:
public static void writeFileToOutputStream(HttpServletResponse response, File file) {
String type = "application/octet-stream";
response.setContentType(type);
response.setHeader("Content-Disposition", String.format("inline;filename=\"%s\"", file.getName()));
response.setContentLength((int) file.length());
InputStream inputStream = null;
try {
inputStream = new BufferedInputStream(new FileInputStream(file));
FileCopyUtils.copy(inputStream, response.getOutputStream());
} catch (IOException e) {
log.info("------couldn't write file------");
}
}
Several threads writing to the same report have one obvious solution: use java.util.logging and write to a log file. The content of a log file can also easily be returned as a REST response.
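A minimal sketch of that idea (the file name and logger name are placeholders; FileHandler is safe to call from multiple threads):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class ReportLogger {
    private static final Logger REPORT = Logger.getLogger("report");

    static {
        try {
            FileHandler handler = new FileHandler("report.log", true); // append mode
            handler.setFormatter(new SimpleFormatter());
            REPORT.addHandler(handler);
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Called by the worker threads.
    public static void write(String line) {
        REPORT.info(line);
    }

    // Read the log back, e.g. to return it as the REST response body.
    public static String content() throws IOException {
        return new String(Files.readAllBytes(Paths.get("report.log")));
    }
}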
If you use a string buffer, StringBuilder is faster but not thread-safe. The older StringBuffer is thread-safe per call, but not across chained appends, as in:
sb.append("The size is ").append(size); // Not thread-safe.
You could do:
private final StringBuilder sb = new StringBuilder(4096);
public void printf(String messageFormat, Object... args) {
String s = MessageFormat.format(messageFormat, args);
synchronized(sb) {
sb.append(s);
}
}
public String extract() {
String s;
synchronized(sb) {
s = sb.toString();
sb.setLength(0);
}
return s;
}
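A rough usage sketch, assuming the two methods above live in a class I'll call ReportBuffer (the name is made up):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReportDemo {
    public static void main(String[] args) throws InterruptedException {
        ReportBuffer report = new ReportBuffer(); // hypothetical holder of printf()/extract()

        // Several threads append concurrently.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> report.printf("creating report for {0}", Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // The REST endpoint would return this string instead of a file.
        System.out.println(report.extract());
    }
}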
If you want to stay implementation agnostic then you should design to an interface. I'd suggest just plain old Writer. You could have something like:
public abstract class AbstractReportWriter {
protected Writer writer;
public AbstractReportWriter(Writer w) {
writer = w;
}
public void write(String text) throws IOException {
writer.write(text);
}
}
public class FileReportWriter extends AbstractReportWriter {
public FileReportWriter(String path) throws IOException {
super(new FileWriter(path));
}
}
public class StringReportWriter extends AbstractReportWriter {
public StringReportWriter() {
super(new StringWriter());
}
public String getValue() {
return ((StringWriter) writer).toString();
}
}
public class CloudReportWriter extends AbstractReportWriter {
public CloudReportWriter() {
super(new YourCloudWriterClass());
}
}
Then you can pick and choose your writer by just swapping the implementation.
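For the REST case, a rough usage sketch with the in-memory implementation (note that write() declares IOException as shown above, so the calling code needs to handle it):
StringReportWriter report = new StringReportWriter();
report.write("creating report for this thread");
String body = report.getValue(); // return this as the REST response body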
I have a BufferedWriter as shown below:
BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(
new GZIPOutputStream( hdfs.create(filepath, true ))));
String line = "text";
writer.write(line);
I want to find out the bytes written to the file without querying the file like this:
hdfs = FileSystem.get( new URI( "hdfs://localhost:8020" ), configuration );
filepath = new Path("path");
hdfs.getFileStatus(filepath).getLen();
as it will add overhead and I don't want that.
Also I can't do this:
line.getBytes().length;
as it gives the size before compression.
You can use the CountingOutputStream from the Apache Commons IO library.
Place it between the GZIPOutputStream and the file OutputStream (hdfs.create(..)).
After writing the content to the file, you can read the number of bytes written from the CountingOutputStream instance.
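For example (a sketch; CountingOutputStream here is org.apache.commons.io.output.CountingOutputStream, and the count reflects the compressed bytes because it sits below the GZIPOutputStream):
CountingOutputStream countingStream = new CountingOutputStream(hdfs.create(filepath, true));
BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(new GZIPOutputStream(countingStream)));

writer.write("text");
writer.close(); // finish the gzip stream so the trailer is counted as well

long compressedBytes = countingStream.getByteCount();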
If this isn't too late and you are using Java 1.7+ and you don't want to pull in an entire library like Guava or Commons IO, you can just extend GZIPOutputStream and obtain the data from the associated Deflater like so:
public class MyGZIPOutputStream extends GZIPOutputStream {
public MyGZIPOutputStream(OutputStream out) throws IOException {
super(out);
}
public long getBytesRead() {
return def.getBytesRead();
}
public long getBytesWritten() {
return def.getBytesWritten();
}
public void setLevel(int level) {
def.setLevel(level);
}
}
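Used with the snippet from the question, that could look roughly like this (finish() completes the compression without ending the deflater, so the counters are still readable):
MyGZIPOutputStream gzipStream = new MyGZIPOutputStream(hdfs.create(filepath, true));
BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(gzipStream));

writer.write("text");
writer.flush();        // push buffered characters down into the gzip stream
gzipStream.finish();   // complete compression without closing the stream

long uncompressedBytes = gzipStream.getBytesRead();    // bytes fed into the deflater
long compressedBytes = gzipStream.getBytesWritten();   // bytes the deflater produced

writer.close();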
You can make your own descendant of OutputStream and count how many times the write method was invoked.
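A minimal sketch of that approach (the class name is mine):
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class ByteCountingOutputStream extends FilterOutputStream {
    private long count;

    public ByteCountingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // Delegate directly so the bytes are not counted twice.
        out.write(b, off, len);
        count += len;
    }

    public long getCount() {
        return count;
    }
}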
This is similar to the response by Olaseni, but I moved the counting into the BufferedOutputStream rather than the GZIPOutputStream, and this is more robust, since def.getBytesRead() in Olaseni's answer is not available after the stream has been closed.
With the implementation below, you can supply your own AtomicLong to the constructor so that you can assign the CountingBufferedOutputStream in a try-with-resources block, but still retrieve the count after the block has exited (i.e. after the file is closed).
public static class CountingBufferedOutputStream extends BufferedOutputStream {
private final AtomicLong bytesWritten;
public CountingBufferedOutputStream(OutputStream out) throws IOException {
super(out);
this.bytesWritten = new AtomicLong();
}
public CountingBufferedOutputStream(OutputStream out, int bufSize) throws IOException {
super(out, bufSize);
this.bytesWritten = new AtomicLong();
}
public CountingBufferedOutputStream(OutputStream out, int bufSize, AtomicLong bytesWritten)
throws IOException {
super(out, bufSize);
this.bytesWritten = bytesWritten;
}
@Override
public void write(byte[] b) throws IOException {
super.write(b);
bytesWritten.addAndGet(b.length);
}
@Override
public void write(byte[] b, int off, int len) throws IOException {
super.write(b, off, len);
bytesWritten.addAndGet(len);
}
@Override
public synchronized void write(int b) throws IOException {
super.write(b);
bytesWritten.incrementAndGet();
}
public long getBytesWritten() {
return bytesWritten.get();
}
}
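A usage sketch of the try-with-resources pattern described above (the HDFS stream and buffer size are placeholders; the count reflects compressed bytes because the counting stream sits below the gzip stream):
AtomicLong bytesWritten = new AtomicLong();
try (OutputStream out = new GZIPOutputStream(
        new CountingBufferedOutputStream(hdfs.create(filepath, true), 8192, bytesWritten))) {
    out.write("text".getBytes(StandardCharsets.UTF_8));
}
// The streams are closed here, but the count is still available.
long compressedBytes = bytesWritten.get();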
I am reading a file from the classpath in a Java project.
Sample Code:
public static Properties loadPropertyFile(String fileName) {
Properties properties = new Properties();
InputStream inputStream = PropertyReader.class.getClassLoader().getResourceAsStream(fileName);
if (inputStream != null) {
try {
properties.load(inputStream);
} catch (IOException e) {
e.printStackTrace();
}
} else {
throw new RuntimeException("Property file: [" + fileName + "] not found in the classpath");
}
return properties;
}
It's working fine. I am writing JUnit tests for this code. How can I create a scenario for IOException in properties.load(inputStream)?
What values should I put in my properties file to get an IOException?
When you look into the implementation of Properties::load, you find out that the class never throws the exception explicitly. The only way to trigger an IOException would be to hand it an InputStream that throws this exception when the input stream's read method is invoked.
Do you have control over the PropertyReader class? One way to emulate this error would be to instrument its class loader to return an erroneous InputStream that throws an IOException for a given test value of fileName. Alternatively, you could make the method more flexible by changing the signature to:
public static Properties loadPropertyFile(String fileName) {
return loadPropertyFile(fileName, PropertyReader.class.getClassLoader());
}
public static Properties loadPropertyFile(String fileName, ClassLoader cl) {
// your code, but using cl.getResourceAsStream(fileName)...
}
and then handing it a class loader in the test:
class TestLoader extends ClassLoader {
@Override
public InputStream getResourceAsStream(String name) {
return new InputStream() {
@Override
public int read() throws IOException {
throw new IOException();
}
};
}
}
You cannot add specific characters to the properties file that cause an IOException, as the InputStream only reads bytes. Any encoding-related problem will instead result in an IllegalArgumentException.
Have you considered a mocking framework? Mock the new operator for Properties to return a mock implementation that throws IOException when its load method is called.
Mockito and PowerMockito will allow you to do this.
One way of forcing the load call to throw an IOException is to pass a closed stream. If you can refactor your method to accept an InputStream, you can pass a closed stream to the method and check whether it throws an exception.
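For instance, assuming the method were refactored to a hypothetical loadProperties(InputStream) overload, the test could look like this (the file path is only a placeholder; note that a closed ByteArrayInputStream still reads fine, so use a FileInputStream, which does throw IOException after close):
@Test
void loadThrowsIOExceptionForClosedStream() throws IOException {
    InputStream closed = new FileInputStream("src/test/resources/any.properties"); // placeholder path
    closed.close(); // a closed FileInputStream throws IOException on read

    Assertions.assertThrows(IOException.class, () -> PropertyReader.loadProperties(closed));
}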
That said, unit tests are supposed to cover the code you write. It seems to me you are testing whether load throws an exception if the input stream has an error, which is superfluous.
I know that this is an old thread, but I recently ran into the same problem and, based on Rafael Winterhalter's answer, came up with this test:
@Test
void testFailLoadProjectFile() throws NoSuchMethodException, SecurityException {
final var method = LogGeneratorComponent.class.getDeclaredMethod("loadProjectProperties", Properties.class);
Assertions.assertTrue(Modifier.isPrivate(method.getModifiers()), "Method is not private.");
method.setAccessible(true);
final var obj = new LogGeneratorComponent();
Assertions.assertThrows(InvocationTargetException.class, () -> method.invoke(obj, new Properties() {
private static final long serialVersionUID = 5663506788956932491L;
@Override
public synchronized void load(@SuppressWarnings("unused") final InputStream is) throws IOException {
throw new IOException("Invalid properties implementation.");
}
}), "An InvocationTargetException should have been thrown, but nothing happened.");
}
And this is my actual method:
private static Properties loadProjectProperties(final Properties properties) {
try (final var is = Thread.currentThread().getContextClassLoader().getResourceAsStream("project.properties")) {
properties.load(is);
return properties;
} catch (final IOException e) {
throw new UncheckedIOException("Error while reading project.properties file.", e);
}
}
Obviously OP can customize it to receive a fileName parameter and make it public in order to have a simpler test.
Empty or invalid fileName parameter.
Pass a blank string as the file name.
I have this InputStream:
InputStream inputStream = new ByteArrayInputStream(myString.getBytes(StandardCharsets.UTF_8));
How can I convert this to ServletInputStream?
I have tried:
ServletInputStream servletInputStream = (ServletInputStream) inputStream;
but it does not work.
EDIT:
My method is this:
private static class LowerCaseRequest extends HttpServletRequestWrapper {
public LowerCaseRequest(final HttpServletRequest request) throws IOException, ServletException {
super(request);
}
@Override
public ServletInputStream getInputStream() throws IOException {
ServletInputStream servletInputStream;
StringBuilder jb = new StringBuilder();
String line;
String toLowerCase = "";
BufferedReader reader = new BufferedReader(new InputStreamReader(super.getInputStream()));
while ((line = reader.readLine()) != null) {
toLowerCase = jb.append(line).toString().toLowerCase();
}
InputStream inputStream = new ByteArrayInputStream(toLowerCase.getBytes(StandardCharsets.UTF_8));
servletInputStream = (ServletInputStream) inputStream;
return servletInputStream;
}
}
I'm trying to convert all my requests to lowercase.
My advice: don't create the ByteArrayInputStream, just use the byte array you got from the getBytes method already. This should be enough to create a ServletInputStream.
Most basic solution
Unfortunately, aksappy's answer only overrides the read method. While this may be enough in Servlet API 3.0 and below, in the later versions of Servlet API there are three more methods you have to implement.
Here is my implementation of the class, although with it becoming quite long (due to the new methods introduced in Servlet API 3.1), you might want to think about factoring it out into a nested or even top-level class.
final byte[] myBytes = myString.getBytes("UTF-8");
ServletInputStream servletInputStream = new ServletInputStream() {
private int lastIndexRetrieved = -1;
private ReadListener readListener = null;
@Override
public boolean isFinished() {
return (lastIndexRetrieved == myBytes.length-1);
}
@Override
public boolean isReady() {
// This implementation will never block
// We also never need to call the readListener from this method, as this method will never return false
return isFinished();
}
@Override
public void setReadListener(ReadListener readListener) {
this.readListener = readListener;
if (!isFinished()) {
try {
readListener.onDataAvailable();
} catch (IOException e) {
readListener.onError(e);
}
} else {
try {
readListener.onAllDataRead();
} catch (IOException e) {
readListener.onError(e);
}
}
}
@Override
public int read() throws IOException {
int i;
if (!isFinished()) {
i = myBytes[lastIndexRetrieved+1] & 0xFF; // mask so bytes >= 0x80 are not returned as negative values
lastIndexRetrieved++;
if (isFinished() && (readListener != null)) {
try {
readListener.onAllDataRead();
} catch (IOException ex) {
readListener.onError(ex);
throw ex;
}
}
return i;
} else {
return -1;
}
}
};
Adding expected methods
Depending on your requirements, you may also want to override other methods. As romfret pointed out, it's advisable to override some methods, such as close and available. If you don't implement them, the stream will always report that there are 0 bytes available to be read, and the close method will do nothing to affect the state of the stream. You can probably get away without overriding skip, as the default implementation will just call read a number of times.
@Override
public int available() throws IOException {
return (myBytes.length-lastIndexRetrieved-1);
}
@Override
public void close() throws IOException {
lastIndexRetrieved = myBytes.length-1;
}
Writing a better close method
Unfortunately, due to the nature of an anonymous class, it's going to be difficult for you to write an effective close method because as long as one instance of the stream has not been garbage-collected by Java, it maintains a reference to the byte array, even if the stream has been closed.
However, if you factor out the class into a nested or top-level class (or even an anonymous class with a constructor which you call from the line in which it is defined), the myBytes can be a non-final field rather than a final local variable, and you can add a line like:
myBytes = null;
to your close method, which will allow Java to free memory taken up by the byte array.
Of course, this will require you to write a constructor, such as:
private byte[] myBytes;
public StringServletInputStream(String str) {
try {
myBytes = str.getBytes("UTF-8");
} catch (UnsupportedEncodingException e) {
throw new IllegalStateException("JVM did not support UTF-8", e);
}
}
Mark and Reset
You may also want to override mark, markSupported and reset if you want to support mark/reset. I am not sure if they are ever actually called by your container though.
private int readLimit = -1;
private int markedPosition = -1;
@Override
public boolean markSupported() {
return true;
}
@Override
public synchronized void mark(int readLimit) {
this.readLimit = readLimit;
this.markedPosition = lastIndexRetrieved;
}
@Override
public synchronized void reset() throws IOException {
if (markedPosition == -1) {
throw new IOException("No mark found");
} else {
lastIndexRetrieved = markedPosition;
readLimit = -1;
}
}
// Replacement of earlier read method to cope with readLimit
@Override
public int read() throws IOException {
int i;
if (!isFinished()) {
i = myBytes[lastIndexRetrieved+1] & 0xFF; // mask so bytes >= 0x80 are not returned as negative values
lastIndexRetrieved++;
if (isFinished() && (readListener != null)) {
try {
readListener.onAllDataRead();
} catch (IOException ex) {
readListener.onError(ex);
throw ex;
}
readLimit = -1;
}
if (readLimit != -1) {
if ((lastIndexRetrieved - markedPosition) > readLimit) {
// This part is actually not necessary in our implementation
// as we are not storing any data. However we need to respect
// the contract.
markedPosition = -1;
readLimit = -1;
}
}
return i;
} else {
return -1;
}
}
Try this code.
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(myString.getBytes(StandardCharsets.UTF_8));
ServletInputStream servletInputStream = new ServletInputStream() {
@Override
public int read() throws IOException {
return byteArrayInputStream.read();
}
};
You can only cast something like this:
ServletInputStream servletInputStream = (ServletInputStream) inputStream;
if the inputStream you are trying to cast is actually a ServletInputStream already. It will complain if it's some other implementation of InputStream. You can't cast an object to something it isn't.
In a Servlet container, you can get a ServletInputStream from a ServletRequest:
ServletInputStream servletInputStream = request.getInputStream();
So, what are you actually trying to do?
EDIT
I'm intrigued as to why you want to convert your request to lower-case - why not just make your servlet case-insensitive? In other words, your code to lower-case the request data can be copied into your servlet, then it can process it there... always look for the simplest solution!
I've created a filter in my Java webserver (App Engine actually) that logs the parameters of an incoming request. I'd also like to log the resulting response that my webserver writes. Although I have access to the response object, I'm not sure how to get the actual string/content response out of it.
Any ideas?
You need to create a Filter in which you wrap the ServletResponse argument with a custom HttpServletResponseWrapper implementation in which you override getOutputStream() and getWriter() to return a custom ServletOutputStream implementation that copies the written byte(s) in the base abstract OutputStream#write(int b) method. Then you pass the wrapped custom HttpServletResponseWrapper to the FilterChain#doFilter() call instead, and finally you should be able to get the copied response after the call.
In other words, the Filter:
@WebFilter("/*")
public class ResponseLogger implements Filter {
@Override
public void init(FilterConfig config) throws ServletException {
// NOOP.
}
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws ServletException, IOException {
if (response.getCharacterEncoding() == null) {
response.setCharacterEncoding("UTF-8"); // Or whatever default. UTF-8 is good for World Domination.
}
HttpServletResponseCopier responseCopier = new HttpServletResponseCopier((HttpServletResponse) response);
try {
chain.doFilter(request, responseCopier);
responseCopier.flushBuffer();
} finally {
byte[] copy = responseCopier.getCopy();
System.out.println(new String(copy, response.getCharacterEncoding())); // Do your logging job here. This is just a basic example.
}
}
@Override
public void destroy() {
// NOOP.
}
}
The custom HttpServletResponseWrapper:
public class HttpServletResponseCopier extends HttpServletResponseWrapper {
private ServletOutputStream outputStream;
private PrintWriter writer;
private ServletOutputStreamCopier copier;
public HttpServletResponseCopier(HttpServletResponse response) throws IOException {
super(response);
}
@Override
public ServletOutputStream getOutputStream() throws IOException {
if (writer != null) {
throw new IllegalStateException("getWriter() has already been called on this response.");
}
if (outputStream == null) {
outputStream = getResponse().getOutputStream();
copier = new ServletOutputStreamCopier(outputStream);
}
return copier;
}
@Override
public PrintWriter getWriter() throws IOException {
if (outputStream != null) {
throw new IllegalStateException("getOutputStream() has already been called on this response.");
}
if (writer == null) {
copier = new ServletOutputStreamCopier(getResponse().getOutputStream());
writer = new PrintWriter(new OutputStreamWriter(copier, getResponse().getCharacterEncoding()), true);
}
return writer;
}
@Override
public void flushBuffer() throws IOException {
if (writer != null) {
writer.flush();
} else if (outputStream != null) {
copier.flush();
}
}
public byte[] getCopy() {
if (copier != null) {
return copier.getCopy();
} else {
return new byte[0];
}
}
}
The custom ServletOutputStream:
public class ServletOutputStreamCopier extends ServletOutputStream {
private OutputStream outputStream;
private ByteArrayOutputStream copy;
public ServletOutputStreamCopier(OutputStream outputStream) {
this.outputStream = outputStream;
this.copy = new ByteArrayOutputStream(1024);
}
@Override
public void write(int b) throws IOException {
outputStream.write(b);
copy.write(b);
}
public byte[] getCopy() {
return copy.toByteArray();
}
}
BalusC's solution is OK, but a little outdated. Spring now has a feature for this. All you need to do is use ContentCachingResponseWrapper, which has the method public byte[] getContentAsByteArray().
I suggest making a WrapperFactory that lets you configure whether to use the default response wrapper or ContentCachingResponseWrapper.
Instead of creating a custom HttpServletResponseWrapper, you can use ContentCachingResponseWrapper, as it provides the method getContentAsByteArray().
public void doFilterInternal(HttpServletRequest servletRequest, HttpServletResponse servletResponse,
FilterChain filterChain) throws IOException, ServletException {
HttpServletRequest request = servletRequest;
HttpServletResponse response = servletResponse;
ContentCachingRequestWrapper requestWrapper = new ContentCachingRequestWrapper(request);
ContentCachingResponseWrapper responseWrapper =new ContentCachingResponseWrapper(response);
try {
super.doFilterInternal(requestWrapper, responseWrapper, filterChain);
} finally {
byte[] responseArray=responseWrapper.getContentAsByteArray();
String responseStr=new String(responseArray,responseWrapper.getCharacterEncoding());
System.out.println("string"+responseStr);
/* It is important to copy the cached response body back to the response stream
to see the response. */
responseWrapper.copyBodyToResponse();
}
}
While BalusC's answer will work in most scenarios, you have to be careful with the flush call: it commits the response and no further writing to it is possible, e.g. via subsequent filters.
We found problems with a very similar approach in a WebSphere environment, where the delivered response was only partial.
According to this question, flush should not be called at all and you should let it be called internally.
I solved the flush problem by using a TeeWriter (it splits a stream into two streams) and using non-buffering streams in the "branched" stream for logging purposes. It is then unnecessary to call flush.
private HttpServletResponse wrapResponseForLogging(HttpServletResponse response, final Writer branchedWriter) {
return new HttpServletResponseWrapper(response) {
PrintWriter writer;
@Override
public synchronized PrintWriter getWriter() throws IOException {
if (writer == null) {
writer = new PrintWriter(new TeeWriter(super.getWriter(), branchedWriter));
}
return writer;
}
};
}
Then you can use it this way:
protected void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException {
//...
StringBuilderWriter branchedWriter = new org.apache.commons.io.output.StringBuilderWriter();
try {
chain.doFilter(request, wrapResponseForLogging(response, branchedWriter));
} finally {
log.trace("Response: " + branchedWriter);
}
}
The code is simplified for brevity.
I am not quite familiar with App Engine, but you need something like the Access Log Valve in Tomcat. Its pattern attribute is a formatting layout identifying the various information fields from the request and response to be logged, or the word common or combined to select a standard format.
It looks like App Engine has built-in functionality for log filtering: apply a servlet filter.
If you just want the response payload as a String, I would go for:
final ReadableHttpServletResponse httpResponse = (ReadableHttpServletResponse) response;
final byte[] data = httpResponse.readPayload();
System.out.println(new String(data));