I have multiple JAX-RS services built using CXF/Spring. I want to control the output payload response size of all services. For simplicity's sake, let's say none of the APIs in any of the services should ever return a JSON response payload of more than 500 characters, and I want to control this in one place instead of relying on individual services to adhere to this requirement. (We already have other features built into the custom framework/base component that all services depend on.)
I have tried implementing this using JAX-RS's WriterInterceptor, ContainerResponseFilter and CXF's phase interceptor, but none of the approaches seems to completely satisfy my requirement. More details on what I've done so far:
Option 1: (WriterInterceptor) In the overridden method, I get the output stream and set the max size of the cache to 500. When I invoke an API that returns more than 500 characters in the response payload, I get an HTTP 400 Bad Request status, but the response body contains the entire JSON payload.
@Provider
public class ResponsePayloadInterceptor implements WriterInterceptor {
private static final Logger LOGGER = LoggerFactory.getLogger(ResponsePayloadInterceptor.class);
@Override
public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException {
final OutputStream outputStream = context.getOutputStream();
CacheAndWriteOutputStream cacheAndWriteOutputStream = new CacheAndWriteOutputStream(outputStream);
cacheAndWriteOutputStream.setMaxSize(500);
context.setOutputStream(cacheAndWriteOutputStream);
context.proceed();
}
}
Option 2a: (CXF phase interceptor) In the overridden method, I get the response as a String from the output stream and check its size. If it's greater than 500, I create a new Response object with only the body Too much data and set it in the message. Even if the response is > 500 characters, I get an HTTP 200 OK status with the entire JSON. Only when I use the POST_MARSHAL phase or a later one am I able to get hold of the JSON response and check its length, but by that time the response has already been streamed to the client.
@Provider
public class ResponsePayloadInterceptor extends AbstractPhaseInterceptor<Message> {
private static final Logger LOGGER = LoggerFactory.getLogger(ResponsePayloadInterceptor.class);
public ResponsePayloadInterceptor() {
super(Phase.POST_MARSHAL);
}
@Override
public void handleMessage(Message message) throws Fault {
LOGGER.info("handleMessage() - Response intercepted");
try {
OutputStream outputStream = message.getContent(OutputStream.class);
...
CachedOutputStream cachedOutputStream = (CachedOutputStream) outputStream;
String responseBody = IOUtils.toString(cachedOutputStream.getInputStream(), "UTF-8");
...
LOGGER.info("handleMessage() - Response: {}", responseBody);
LOGGER.info("handleMessage() - Response Length: {}", responseBody.length());
if (responseBody.length() > 500) {
Response response = Response.status(Response.Status.BAD_REQUEST)
.entity("Too much data").build();
message.getExchange().put(Response.class, response);
}
} catch (IOException e) {
LOGGER.error("handleMessage() - Error");
e.printStackTrace();
}
}
}
Option 2b: (CXF phase interceptor) Same as above, but only the contents of the if block are changed. If the response length is greater than 500, I create a new output stream with the string Too much data and set it in the message. But if the response payload is > 500 characters, I still get an HTTP 200 OK status with an invalid JSON response (entire JSON + additional text), i.e., the response looks like this: [{"data":"", ...}, {...}]Too much data (the text 'Too much data' is appended to the JSON).
if (responseBody.length() > 500) {
InputStream inputStream = new ByteArrayInputStream("Too much data".getBytes("UTF-8"));
outputStream.flush();
IOUtils.copy(inputStream, outputStream);
OutputStream out = new CachedOutputStream();
out.write("Too much data".getBytes("UTF-8"));
message.setContent(OutputStream.class, out);
}
Option 3: (ContainerResponseFilter) Using the ContainerResponseFilter, I added a Content-Length response header with the value 500. If the response length is > 500, I get an HTTP 200 OK status with an invalid JSON response (truncated to 500 characters). If the response length is < 500, I still get an HTTP 200 OK status, but the client waits for more data to be returned by the server (as expected) and times out, which isn't a desirable solution.
@Provider
public class ResponsePayloadFilter implements ContainerResponseFilter {
private static final Logger LOGGER = LoggerFactory.getLogger(ResponsePayloadFilter.class);
@Override
public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext) throws IOException {
LOGGER.info("filter() - Response intercepted");
CachedOutputStream cos = (CachedOutputStream) responseContext.getEntityStream();
StringBuilder responsePayload = new StringBuilder();
ByteArrayOutputStream out = new ByteArrayOutputStream();
if (cos.getInputStream().available() > 0) {
IOUtils.copy(cos.getInputStream(), out);
byte[] responseEntity = out.toByteArray();
responsePayload.append(new String(responseEntity));
}
LOGGER.info("filter() - Content: {}", responsePayload.toString());
responseContext.getHeaders().add("Content-Length", "500");
}
}
Any suggestions on how I can tweak the above approaches to get what I want, or any other pointers?
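One tweak to Option 1 that might get closer (a hedged sketch, not verified against CXF: it assumes the response is not committed while the entity is still being serialized into a local buffer) is to buffer the whole entity in the interceptor and only copy it to the real stream when it is within the limit, aborting otherwise:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

@Provider
public class ResponseSizeLimitInterceptor implements WriterInterceptor {

    private static final int MAX_SIZE = 500;

    @Override
    public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException {
        OutputStream original = context.getOutputStream();
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        // Serialize the entity into a local buffer instead of the real response stream
        context.setOutputStream(buffer);
        context.proceed();

        byte[] entity = buffer.toByteArray();
        if (entity.length > MAX_SIZE) {
            // Nothing has been written to the real stream yet, so aborting here
            // should (assumption) let the runtime build a fresh error response
            // instead of a truncated payload.
            throw new WebApplicationException(
                    Response.status(Response.Status.BAD_REQUEST).entity("Too much data").build());
        }

        // Within the limit: forward the buffered bytes unchanged
        context.setOutputStream(original);
        original.write(entity);
    }
}
Because nothing reaches the real output stream until the check passes, the oversized payload is never sent, unlike the CacheAndWriteOutputStream approach, which appears to write through to the original stream while caching (consistent with the full payload reaching the client in Option 1).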
I resolved this partially using help from this answer. I say partially because I'm successfully able to control the payload, but not the response status code. Ideally, if the response length is greater than 500 and I modify the message content, I would like to send a different response status code (other than 200 OK). But this is a good enough solution for me to proceed at this point. If I figure out how to update the status code as well, I'll come back and update this answer.
import org.apache.commons.io.IOUtils;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.io.CachedOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
public class ResponsePayloadInterceptor extends AbstractPhaseInterceptor<Message> {
private static final Logger LOGGER = LoggerFactory.getLogger(ResponsePayloadInterceptor.class);
public ResponsePayloadInterceptor() {
super(Phase.PRE_STREAM);
}
@Override
public void handleMessage(Message message) throws Fault {
LOGGER.info("handleMessage() - Response intercepted");
try {
OutputStream outputStream = message.getContent(OutputStream.class);
CachedOutputStream cachedOutputStream = new CachedOutputStream();
message.setContent(OutputStream.class, cachedOutputStream);
message.getInterceptorChain().doIntercept(message);
cachedOutputStream.flush();
cachedOutputStream.close();
CachedOutputStream newCachedOutputStream = (CachedOutputStream) message.getContent(OutputStream.class);
String currentResponse = IOUtils.toString(newCachedOutputStream.getInputStream(), "UTF-8");
newCachedOutputStream.flush();
newCachedOutputStream.close();
if (currentResponse != null) {
LOGGER.info("handleMessage() - Response: {}", currentResponse);
LOGGER.info("handleMessage() - Response Length: {}", currentResponse.length());
if (currentResponse.length() > 500) {
InputStream replaceInputStream = IOUtils.toInputStream("{\"message\":\"Too much data\"}", "UTF-8");
IOUtils.copy(replaceInputStream, outputStream);
replaceInputStream.close();
message.setContent(OutputStream.class, outputStream);
outputStream.flush();
outputStream.close();
} else {
InputStream replaceInputStream = IOUtils.toInputStream(currentResponse, "UTF-8");
IOUtils.copy(replaceInputStream, outputStream);
replaceInputStream.close();
message.setContent(OutputStream.class, outputStream);
outputStream.flush();
outputStream.close();
}
}
} catch (IOException e) {
LOGGER.error("handleMessage() - Error", e);
throw new RuntimeException(e);
}
}
}
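On the open status-code point: one possibility (an untested assumption, not something verified with this interceptor) is that CXF reads the HTTP status from the Message.RESPONSE_CODE property, so setting it on the outgoing message before the replacement payload is copied might be enough. The oversized branch of handleMessage() would then look roughly like this:
if (currentResponse.length() > 500) {
    // Untested assumption: at PRE_STREAM the headers are not committed yet,
    // so CXF should still pick up a changed status from this property.
    message.put(Message.RESPONSE_CODE, 400);
    InputStream replaceInputStream = IOUtils.toInputStream("{\"message\":\"Too much data\"}", "UTF-8");
    IOUtils.copy(replaceInputStream, outputStream);
    replaceInputStream.close();
    message.setContent(OutputStream.class, outputStream);
    outputStream.flush();
    outputStream.close();
}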
Related
I am using the Icecast feature that repeatedly adds a message containing the current metadata of the stream. It is enabled with the following header:
request.headers().set("Icy-MetaData", "1");
After this, the HTTP response contains a header with the interval, which defines after how many bytes of normal stream content the server sends metadata again. In addition, I want to store the stream content between the metadata tags.
My question is now: what is the best way to achieve this? One thought was to use the ByteToMessageDecoder to "filter" the metadata tags, but the bytes were already decoded to an HttpObject. My current handler looks like this:
public class StreamClientHandler extends SimpleChannelInboundHandler<HttpObject> {
private int metadataInterval = 0;
@Override
protected void messageReceived(ChannelHandlerContext handlerContext, HttpObject message) throws Exception {
if(message instanceof HttpResponse) {
HttpResponse response = (HttpResponse) message;
this.metadataInterval = Integer.parseInt(response.headers().get("icy-metaint").toString());
}
if(message instanceof HttpContent) {
HttpContent content = (HttpContent) message;
if(content instanceof LastHttpContent) {
// Close connection
handlerContext.channel().close();
}
}
}
}
Thank you!
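A hedged sketch of the byte-level splitting the Icy-MetaData protocol implies (illustrative class and method names, not a working Netty handler): after every icy-metaint bytes of audio there is a single length byte, followed by length * 16 bytes of metadata.
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class IcyMetadataSplitter {

    private final int metadataInterval;   // value of the icy-metaint header
    private int audioBytesUntilMetadata;  // countdown to the next metadata block

    public IcyMetadataSplitter(int metadataInterval) {
        this.metadataInterval = metadataInterval;
        this.audioBytesUntilMetadata = metadataInterval;
    }

    // Consumes one chunk of raw stream bytes, separating audio from metadata.
    // Simplification: assumes a metadata block never spans two chunks.
    public void feed(byte[] chunk, ByteArrayOutputStream audioOut, StringBuilder metadataOut) {
        int pos = 0;
        while (pos < chunk.length) {
            if (audioBytesUntilMetadata > 0) {
                int audioLen = Math.min(audioBytesUntilMetadata, chunk.length - pos);
                audioOut.write(chunk, pos, audioLen);
                pos += audioLen;
                audioBytesUntilMetadata -= audioLen;
            } else {
                int metadataLen = (chunk[pos] & 0xFF) * 16;   // single length byte
                pos++;
                metadataOut.append(new String(chunk, pos, metadataLen, StandardCharsets.UTF_8));
                pos += metadataLen;
                audioBytesUntilMetadata = metadataInterval;   // reset the countdown
            }
        }
    }
}
In the handler above, the bytes would come from each HttpContent's content buffer once the icy-metaint header has been read.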
I'm trying to write an interceptor that compresses a request body using Gzip.
My server does not support compressed requests, so I'll be using application/octet-stream instead of Content-Type: gzip and compress the request body manually; it will be decompressed manually at the backend.
public class GzipRequestInterceptor implements Interceptor {
final String CONTENT_TYPE = "application/octet-stream";
@Override
public Response intercept(Chain chain) throws IOException {
Request originalRequest = chain.request();
if (originalRequest.body() == null || CONTENT_TYPE.equals(originalRequest.header("Content-Type"))) {
return chain.proceed(originalRequest);
}
Request compressedRequest = originalRequest.newBuilder()
.header("Content-Type", CONTENT_TYPE)
.method(originalRequest.method(), gzip(originalRequest.body()))
.build();
return chain.proceed(compressedRequest);
}
private RequestBody gzip(final RequestBody body) throws IOException {
final Buffer inputBuffer = new Buffer();
body.writeTo(inputBuffer);
final Buffer outputBuffer = new Buffer();
GZIPOutputStream gos = new GZIPOutputStream(outputBuffer.outputStream());
gos.write(inputBuffer.readByteArray());
inputBuffer.close();
gos.close();
return new RequestBody() {
@Override
public MediaType contentType() {
return body.contentType();
}
@Override
public long contentLength() {
return outputBuffer.size();
}
@Override
public void writeTo(BufferedSink sink) throws IOException {
ByteString snapshot = outputBuffer.snapshot();
sink.write(snapshot);
}
};
}
}
It doesn't work - 30 seconds after the request is fired, a 500 Server Error is received. On the server there's a timeout exception.
My guess is that I've done something wrong with the input/output in the gzip method... any ideas?
Update: if I stop the app, the request goes through successfully. Does this indicate that the app is still waiting for data from outputBuffer?
Is this what you are looking for: Interceptors? Check the Rewriting Requests chapter.
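For reference, the recipe in that chapter compresses as the body is written instead of pre-filling a Buffer; roughly (a sketch from memory of the OkHttp/Okio recipe, so double-check against the current docs):
private RequestBody gzip(final RequestBody body) {
    return new RequestBody() {
        @Override
        public MediaType contentType() {
            return body.contentType();
        }

        @Override
        public long contentLength() {
            return -1; // the compressed length is not known in advance
        }

        @Override
        public void writeTo(BufferedSink sink) throws IOException {
            // Wrap the real sink so bytes are gzipped as they are written
            BufferedSink gzipSink = Okio.buffer(new GzipSink(sink));
            body.writeTo(gzipSink);
            gzipSink.close(); // finishes the gzip trailer
        }
    };
}
The key difference from the version above is that the compression happens inside writeTo() against the sink OkHttp provides, with the length left unknown.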
I have a Jersey 2 application containing resources that consume and produce JSON. My requirement is to add a signature to an Authorization response header, generated from a combination of various pieces of response data (similar to the Amazon Web Services request signature). One of these pieces of data is the response body, but I can't see any filter or interception points that will allow me access to the JSON content. I imagine this is mainly because the response output stream is for writing, not reading.
Any ideas as to how I can read the response body - or alternative approaches?
Thank you.
My understanding is that when your application is responding to a request, you want to modify the Authorization header by adding a signature to its value.
If that's the case, you want to implement a ContainerResponseFilter:
public class MyContainerResponseFilter implements ContainerResponseFilter {
@Override
public void filter(ContainerRequestContext containerRequestContext, ContainerResponseContext containerResponseContext) throws IOException {
// You can get the body of the response from the ContainerResponseContext
Object entity = containerResponseContext.getEntity();
// You'll need to know what kind of Object the entity is in order to do something useful though
// You can get some data using these functions
Class<?> entityClass = containerResponseContext.getEntityClass();
Type entityType = containerResponseContext.getEntityType();
// And/or by looking at the ContainerRequestContext and knowing what the response entity will be
String method = containerRequestContext.getMethod();
UriInfo uriInfo = containerRequestContext.getUriInfo();
// Then you can modify your Authorization header in some way
String authorizationHeaderValue = containerResponseContext.getHeaderString(HttpHeaders.AUTHORIZATION);
authorizationHeaderValue = authorizationHeaderValue + " a signature you calculated";
containerResponseContext.getHeaders().putSingle(HttpHeaders.AUTHORIZATION, authorizationHeaderValue);
}
}
Be warned that the filter function will be called for all requests to your application, even when Jersey couldn't find a matching resource for the request path, so you may have to do some extra checking.
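If the signature has to cover the serialized JSON bytes rather than the entity object, another option is a WriterInterceptor that buffers the output (a hedged sketch: it assumes headers can still be changed before the buffered bytes are flushed, and computeSignature() is a placeholder for whatever signing scheme you use):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

@Provider
public class SignatureWriterInterceptor implements WriterInterceptor {

    @Override
    public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException {
        OutputStream original = context.getOutputStream();
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        // Let the MessageBodyWriter serialize the entity into the buffer
        context.setOutputStream(buffer);
        context.proceed();
        byte[] body = buffer.toByteArray();

        // Add the signature to the Authorization header before the body is sent
        Object existing = context.getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
        String prefix = existing == null ? "" : existing + " ";
        context.getHeaders().putSingle(HttpHeaders.AUTHORIZATION, prefix + computeSignature(body));

        // Write the unchanged body to the real stream
        context.setOutputStream(original);
        original.write(body);
    }

    private String computeSignature(byte[] body) {
        // Placeholder: replace with the real HMAC/signature calculation
        return Integer.toHexString(Arrays.hashCode(body));
    }
}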
You can implement ContainerRequestFilter in order to access the content, and once you are finished with your interception logic, set it back on the request. E.g.
import java.io.*;
import com.sun.jersey.api.container.ContainerException;
import com.sun.jersey.core.util.ReaderWriter;
import com.sun.jersey.spi.container.ContainerRequest;
import com.sun.jersey.spi.container.ContainerRequestFilter;
public class ExampleFilter implements ContainerRequestFilter {
@Override
public ContainerRequest filter(ContainerRequest req) {
try(InputStream in = req.getEntityInputStream(); ByteArrayOutputStream out = new ByteArrayOutputStream();) {
if (in.available() > 0) {
StringBuilder content = new StringBuilder();
ReaderWriter.writeTo(in, out);
byte[] entity = out.toByteArray();
if (entity.length > 0) {
content.append(new String(entity)).append("\n");
System.out.println(content);
}
req.setEntityInputStream(new ByteArrayInputStream(entity));
}
} catch (IOException ex) {
//handle exception
}
return req;
}
}
I'm trying to post to a web service that requires the Content-Length header to be set using the following code:
// EDIT: added apache connector code
ClientConfig clientConfig = new ClientConfig();
ApacheConnector apache = new ApacheConnector(clientConfig);
// setup client to log requests and responses and their entities
client.register(new LoggingFilter(Logger.getLogger("com.example.app"), true));
Part part = new Part("123");
WebTarget target = client.target("https://api.thing.com/v1.0/thing/{thingId}");
Response jsonResponse = target.resolveTemplate("thingId", "abcdefg")
.request(MediaType.APPLICATION_JSON)
.header(HttpHeaders.AUTHORIZATION, "anauthcodehere")
.post(Entity.json(part));
The release notes (https://java.net/jira/browse/JERSEY-1617) and the Jersey 2.0 documentation (https://jersey.java.net/documentation/latest/message-body-workers.html) imply that Content-Length is set automatically. However, I get a 411 response code back from the server indicating that Content-Length is not present in the request.
Does anyone know the best way to get the Content-Length header set?
I've verified through setting up a logger that the Content-Length header is not generated in the request.
Thanks.
I ran a quick test with Jersey Client 2.2 and Netcat, and it is showing me that Jersey is sending the Content-Length header, even though the LoggingFilter is not reporting it.
To do this test, I first ran netcat in one shell.
nc -l 8090
Then I executed the following Jersey code in another shell.
Response response = ClientBuilder.newClient()
.register(new LoggingFilter(Logger.getLogger("com.example.app"), true))
.target("http://localhost:8090/test")
.request()
.post(Entity.json(IOUtils.toInputStream("{key:\"value\"}")));
After running this code, the following lines get logged.
INFO: 1 * LoggingFilter - Request received on thread main
1 > POST http://localhost:8090/test
1 > Content-Type: application/json
{key:"value"}
However, netcat reports several more headers in the message.
POST /test HTTP/1.1
Content-Type: application/json
User-Agent: Jersey/2.0 (HttpUrlConnection 1.7.0_17)
Host: localhost:8090
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
Content-Length: 13
{key:"value"}
I ran this test on OSX with Java6 and Java7, with the same results. I also ran the test in Jersey 2.0, with similar results.
After looking at the source code for the ApacheConnector class, I see the problem. When a ClientRequest is converted to an HttpUriRequest, a private method getHttpEntity() is called that returns an HttpEntity. Unfortunately, this returns an HttpEntity whose getContentLength() always returns -1.
When the Apache HTTP client creates the request, it consults the HttpEntity object for a length, and since that returns -1, no Content-Length header is set.
I solved my problem by creating a new connector that is a copy of the ApacheConnector source code but has a different implementation of getHttpEntity(). I read the entity from the original ClientRequest into a byte array and then wrap that byte array with a ByteArrayEntity. When the Apache HTTP client creates the request, it consults the entity, and the ByteArrayEntity responds with the correct content length, which in turn allows the Content-Length header to be set.
Here's the relevant code:
private HttpEntity getHttpEntity(final ClientRequest clientRequest) {
final Object entity = clientRequest.getEntity();
if (entity == null) {
return null;
}
byte[] content = getEntityContent(clientRequest);
return new ByteArrayEntity(content);
}
private byte[] getEntityContent(final ClientRequest clientRequest) {
// buffer into which entity will be serialized
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
// set up a mock output stream to capture the output
clientRequest.setStreamProvider(new OutboundMessageContext.StreamProvider() {
@Override
public OutputStream getOutputStream(int contentLength) throws IOException {
return baos;
}
});
try {
clientRequest.writeEntity();
}
catch (IOException e) {
LOGGER.log(Level.SEVERE, null, e);
// re-throw new exception
throw new ProcessingException(e);
}
return baos.toByteArray();
}
WARNING: My problem space was constrained and only contained small entity bodies as part of requests. The method proposed above may be problematic with large entity bodies, such as images, so I don't think this is a general solution for everyone.
I've tested a simpler solution with Jersey 2.25.1 that consists of setting setChunkedEncodingEnabled(false) in the Jersey client configuration. Instead of using chunked encoding, the whole entity is serialised in memory and the Content-Length is set on the request.
For reference, here is an example of a configuration I've used:
private Client createJerseyClient(Environment environment) {
Logger logger = Logger.getLogger(getClass().getName());
JerseyClientConfiguration clientConfig = new JerseyClientConfiguration();
clientConfig.setProxyConfiguration(new ProxyConfiguration("localhost", 3333));
clientConfig.setGzipEnabled(false);
clientConfig.setGzipEnabledForRequests(false);
clientConfig.setChunkedEncodingEnabled(false);
return new JerseyClientBuilder(environment)
.using(clientConfig)
.build("RestClient")
.register(new LoggingFeature(logger, Level.INFO, null, null));
}
I've used mitmproxy to verify the request headers and the Content-Length header was set correctly.
This is supported in Jersey 2.5 (https://java.net/jira/browse/JERSEY-2224). You could use RequestEntityProcessing.BUFFERED (https://jersey.java.net/apidocs/latest/jersey/org/glassfish/jersey/client/RequestEntityProcessing.html#BUFFERED) to buffer your content so the Content-Length can be set. I put together a simple example that shows both chunked and buffered content using the ApacheConnector. Check out this project: https://github.com/aruld/sof-18157218
public class EntityStreamingTest extends JerseyTest {
private static final Logger LOGGER = Logger.getLogger(EntityStreamingTest.class.getName());
#Path("/test")
public static class HttpMethodResource {
@POST
@Path("chunked")
public String postChunked(@HeaderParam("Transfer-Encoding") String transferEncoding, String entity) {
assertEquals("POST", entity);
assertEquals("chunked", transferEncoding);
return entity;
}
@POST
public String postBuffering(@HeaderParam("Content-Length") String contentLength, String entity) {
assertEquals("POST", entity);
assertEquals(entity.length(), Integer.parseInt(contentLength));
return entity;
}
}
@Override
protected Application configure() {
ResourceConfig config = new ResourceConfig(HttpMethodResource.class);
config.register(new LoggingFilter(LOGGER, true));
return config;
}
@Override
protected void configureClient(ClientConfig config) {
config.connectorProvider(new ApacheConnectorProvider());
}
@Test
public void testPostChunked() {
Response response = target().path("test/chunked").request().post(Entity.text("POST"));
assertEquals(200, response.getStatus());
assertTrue(response.hasEntity());
}
@Test
public void testPostBuffering() {
ClientConfig cc = new ClientConfig();
cc.property(ClientProperties.REQUEST_ENTITY_PROCESSING, RequestEntityProcessing.BUFFERED);
cc.connectorProvider(new ApacheConnectorProvider());
JerseyClient client = JerseyClientBuilder.createClient(cc);
WebTarget target = client.target(getBaseUri());
Response response = target.path("test").request().post(Entity.text("POST"));
assertEquals(200, response.getStatus());
assertTrue(response.hasEntity());
}
}
@Test
public void testForbiddenHeadersAllowed() {
Client client = ClientBuilder.newClient();
System.setProperty("sun.net.http.allowRestrictedHeaders", "true");
Response response = testHeaders(client);
System.out.println(response.readEntity(String.class));
Assert.assertEquals(200, response.getStatus());
}
I'm pretty new to coding with streams, but now I have to do it for more efficient HTTP coding.
Here is the code that I wrote (not working) to get a ContentProducer for HttpClient:
public static ContentProducer getContentProducer(final Context context, final UUID gId)
{
return new ContentProducer()
{
public void writeTo(OutputStream outputStream) throws IOException
{
outputStream = new Base64.OutputStream(new FileOutputStream(StorageManager.getFileFromName(context, gId.toString())));
outputStream.flush();
}
};
}
I'm using Base64 streaming encoder from here: http://iharder.sourceforge.net/current/java/base64/
My goal is to use this function to provide data that I read from a binary file to HttpClient as a base64-encoded stream.
This is how I consume content producers:
private MyHttpResponse processPOST(String url, ContentProducer requestData)
{
MyHttpResponse response = new MyHttpResponse();
try
{
HttpPost request = new HttpPost(serviceURL + url);
HttpEntity entity = new EntityTemplate(requestData);
request.setEntity(entity);
ResponseHandler<String> handler = new BasicResponseHandler();
response.Body = mHttpClient.execute(request, handler);
}
catch (HttpResponseException e)
{
}
catch (Throwable e)
{
}
return response;
}
I have another ContentProducer which works with the GSON streamer (and it works):
public ContentProducer getContentProducer(final Context context)
{
return new ContentProducer()
{
public void writeTo(OutputStream outputStream) throws IOException
{
Gson myGson = MyGsonWrapper.getMyGson();
JsonWriter writer = new JsonWriter(new OutputStreamWriter(outputStream, "UTF-8"));
writer.beginObject();
// stuff
writer.endObject();
writer.flush();
}
};
}
My question is: how do I make my first example work? Am I doing it correctly? Right now I get an empty post on the server side, so it seems like no data is coming through.
EDIT:
I believe that the issue is that you are being passed an OutputStream in your ContentProducer's writeTo() method, and you are overwriting it with your own OutputStream. The contract of that class/method probably requires you to write your data to the OutputStream passed to you.
Based on looking at the Android Documentation, I do not see a way for you to specify the OutputStream to use, so you will probably need to just write out the data of the file to the OutputStream that is passed in.
Instead, you should do something like this:
public void writeTo(OutputStream outputStream) throws IOException
{
byte[] buf = createByteBufferFromFile(StorageManager.getFileFromName(context, gId.toString()));
outputStream.write(buf);
outputStream.flush();
}
Of course you will need to provide an implementation of the createByteBufferFromFile(...) method that I mention. Again, you should note that it is not likely that you will be using the Base64 OutputStream, so if that is a necessity, then you may have to find a different approach.
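If Base64 really is a necessity, a hedged alternative (untested; it reuses the question's StorageManager helper and the iharder Base64 class, and assumes its flushBase64() method is available to pad the encoder without closing the stream) is to wrap the OutputStream that is passed in rather than replacing it, and to read from the file instead of opening a FileOutputStream on it:
public void writeTo(OutputStream outputStream) throws IOException
{
    // Wrap the stream HttpClient passes in, instead of replacing it
    Base64.OutputStream base64Out = new Base64.OutputStream(outputStream, Base64.ENCODE);
    // Read FROM the file (the original code opened a FileOutputStream on it)
    InputStream in = new FileInputStream(StorageManager.getFileFromName(context, gId.toString()));
    try {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            base64Out.write(buffer, 0, read);
        }
        base64Out.flushBase64(); // write any remaining padding without closing outputStream
    } finally {
        in.close();
    }
}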