GWT DefaultRequestTransport: when/why to extend? - java

I've seen several GWT code excerpts where the developer extended DefaultRequestTransport and gave it custom functionality. One such example is in this SO question regarding authentication/login filters. But I have seen several others besides this one example.
My question: when & why does someone need to extend this class and override its methods? (In other words, what does this class do, what services do its methods perform, and why would I need to customize them?)
In that one example, the createRequestCallback method was overridden. According to the Javadocs on that method, its purpose is to:
Create a RequestCallback that maps the HTTP response onto the TransportReceiver interface.
This is still a rather cryptic explanation to me. Could someone please give me a layman's explanation of the scenarios in which it would be beneficial to extend this class and override one or more of its methods?

RequestFactory does not depend on a specific "transport" mechanism; it deals with JSON representations of requests and responses but the way they're exchanged and transferred is out of scope, and deferred to a RequestTransport.
The DefaultRequestTransport uses a RequestBuilder to send requests to a given (but configurable) URL; because it uses RequestBuilder, it can only be used in a GWT client (to be compiled to JavaScript). There's also the UrlRequestTransport, which uses a java.net.HttpURLConnection and can be used by any client running in a JVM (a server making a call to another server, an Android application, a desktop Java application, etc.).
In theory (because I never tried it and never heard of anyone else trying it), you could make a RequestTransport that uses Comet or WebSockets, or whichever transport you'd like. Of course, the server side would have to be adapted too (SimpleRequestProcessor can easily be reused outside the RequestFactoryServlet; this is a similar separation of concerns).
Back to DefaultRequestTransport: it uses RequestBuilder and provides a few hooks that you can override to customize how it works. The most common use-case is to intercept all requests to add some request header (e.g. credentials) and/or all responses to handle specific HTTP responses before decoding the JSON-encoded RequestFactory response (e.g. intercept "unauthorized" response to ask the user to sign in).
DefaultRequestTransport works as an adapter between the RequestFactory API and the RequestBuilder one, and createRequestCallback is the half of it responsible for adapting the response.
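For illustration, something along these lines (a rough, untested sketch: the header name, token handling and login URL are placeholders; configureRequestBuilder and createRequestCallback are the protected hooks DefaultRequestTransport exposes):

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.Response;
import com.google.gwt.user.client.Window;
import com.google.web.bindery.requestfactory.gwt.client.DefaultRequestTransport;
import com.google.web.bindery.requestfactory.shared.RequestTransport.TransportReceiver;

public class AuthAwareRequestTransport extends DefaultRequestTransport {

    private final String sessionToken;

    public AuthAwareRequestTransport(String sessionToken) {
        this.sessionToken = sessionToken;
    }

    @Override
    protected void configureRequestBuilder(RequestBuilder builder) {
        super.configureRequestBuilder(builder);
        // Add a credentials header to every RequestFactory call
        // ("X-Session-Token" is an illustrative name).
        builder.setHeader("X-Session-Token", sessionToken);
    }

    @Override
    protected RequestCallback createRequestCallback(final TransportReceiver<String> receiver) {
        // Wrap the default callback so we can inspect the raw HTTP response
        // before RequestFactory tries to decode it as JSON.
        final RequestCallback delegate = super.createRequestCallback(receiver);
        return new RequestCallback() {
            @Override
            public void onResponseReceived(Request request, Response response) {
                if (response.getStatusCode() == Response.SC_UNAUTHORIZED) {
                    // Session expired or missing: send the user to the sign-in
                    // page instead of letting a non-JSON error body reach the decoder.
                    Window.Location.assign("/login.html");
                    return;
                }
                delegate.onResponseReceived(request, response);
            }

            @Override
            public void onError(Request request, Throwable exception) {
                delegate.onError(request, exception);
            }
        };
    }
}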

In the example shown they need to extend DefaultRequestTransport in order to inspect all RF server responses and catch the 401 status (SC_UNAUTHORIZED), which means the request was rejected on the server side because the user does not have a valid session, and then redirect the user to the application login page.
I've used DefaultRequestTransport as well for changing the requestUrl (the default is set to gwtRequest), so that I can set filters based on the URL pattern: for instance, authenticated RF services go to /myapp/gwtRequest while non-authenticated RF services go to /myapp/anonymousRequest, etc.
I also have a customized RequestTransport using modified versions of RequestBuilder and XMLHttpRequest that is able to monitor onprogress events, which is very helpful for large requests.
You could extend it to send customized headers used for doing CORS authentication or whatever.
In summary, RequestTransport is the way to customize the client-side transport layer of RF.
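For the URL case, a minimal sketch (assuming the protected getRequestUrl() hook; the path and class names are just examples):

import com.google.gwt.core.client.GWT;
import com.google.web.bindery.requestfactory.gwt.client.DefaultRequestTransport;

public class AnonymousRequestTransport extends DefaultRequestTransport {

    @Override
    protected String getRequestUrl() {
        // Point non-authenticated RF services at their own URL so that
        // server-side filters can be mapped by URL pattern.
        return GWT.getHostPageBaseURL() + "anonymousRequest";
    }
}

You then pass the transport when initializing the factory, e.g. requestFactory.initialize(eventBus, new AnonymousRequestTransport());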

Related

Avoid redundant authorization check for all HTTP requests

I'm working on a web application using React on the frontend and Java on the backend. From the frontend, I call different resources from the backend, where I have various classes providing @GET methods.
For every method, I'm always using the same check to determine if the user is authorized based on their session ID. That's a very repetitive way to accomplish this, especially when creating a new @GET method, as I always have to remember to add this isUserAuthorized check.
My first thought was using an abstract class for the resources to centralize some of the code, but here I'd still have to add the check to each method.
Is there a way I can implement the authorization check for all HTTP requests, without needing to repeat this code?
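One common way to do this (a sketch, assuming a JAX-RS backend; the header name and the isUserAuthorized lookup are placeholders) is a ContainerRequestFilter that runs before every resource method, so the individual @GET methods no longer need the check:

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
public class AuthorizationFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // Same check as before, but done once for every incoming request.
        String sessionId = requestContext.getHeaderString("X-Session-Id");
        if (!isUserAuthorized(sessionId)) {
            // Stop processing; the resource method is never invoked.
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }

    private boolean isUserAuthorized(String sessionId) {
        // Delegate to the existing session check here.
        return sessionId != null; // placeholder logic
    }
}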

Determining the HTTP method for payload transfer from client to server

I have a use case where some context needs to be transferred from the UI to the backend and backend needs to decide and send the response based on that context.
This can be achieved by sending the context in the request body; on the server side, the request body is parsed and the representation is sent back in the response body.
My question is: which HTTP method is suitable for this?
GET: If we use GET, we can send a request body, but it is advised that the body should not have any semantics related to the request.
See this: http-get-with-request-body
So I am left with POST or PUT, but these correspond to creating or updating a resource, and using them might be a little misleading.
So my question is what is the appropriate HTTP method that could be used in this scenario which is acceptable in the RESTful design standpoint.
Appreciate the response.
I am thinking of using POST or PUT, as there are no restrictions on consuming the request body on the server side.
EDIT:
I think POST would serve my purpose.
HTTP RFC 7231 says that POST can be used for:
Providing a block of data, such as the fields entered into an HTML form, to a data-handling process
So the data-handling process for me is the backend server, and the HTML form is equivalent to any UI element.
So I can make use of the POST method to send the data to the backend and return the existing resource representation in the response body, with the HTTP status code being 200.
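For what it's worth, a minimal client-side sketch of that approach (Java 11 HttpClient; the endpoint URL and JSON payload are just placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ContextLookup {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Send the UI context in the POST body...
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"context\":\"some-ui-state\"}"))
                .build();
        // ...and read the existing resource representation from the 200 response.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // expected: 200
        System.out.println(response.body());
    }
}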
Please bear in mind that GET must be used for data retrieval only, without side effects. That is, GET is both safe and idempotent (see more details here).
If the operation is meant to be idempotent, go for PUT:
4.3.4. PUT
The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload. A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being sent in a 200 (OK) response. [...]
Otherwise, go for POST, which is a catch all verb:
4.3.3. POST
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics. [...]
I would go for POST, because in REST, PUT is used to create a new resource such as a user.
There is also the PATCH method, which is for changing things; maybe that's what you are looking for.
So my question is what is the appropriate HTTP method that could be used in this scenario which is acceptable in the RESTful design standpoint.
The world wide web is about as RESTful an example as you are going to find, and HTML forms only support GET (which should not have a request body) and POST. So POST must be fine (and it is).
More generally, POST can be used for anything; the other methods should be used when they are a better semantic fit. For example, you can use POST to make a resource unavailable, but DELETE is more explicit, and generic components can do sensible things because they recognize the semantics. PUT is a better choice than POST when what you are intending is to provide the server with a new representation of a resource, and so on.
I am not able to understand why the payload of the HTTP GET is forbidden
Payload of the HTTP GET is forbidden because the standard says "don't do that".
I believe it is written that way to simplify the rules for caching the response. As written, cache implementations only have to worry about header data (including information on the start-line).
But it could be as simple as the fact that the older versions of the standard didn't require that generic components do anything specific with the message-body of a GET request, and therefore modern specifications say "don't do that" in order to maintain backward compatibility. (One of the important constraints in designing long-lived systems is that you don't break older implementations.)

How to route requests based on URI in Netty?

I'm new to Netty and to get more familiar with it I'm working on building a simple HTTP server. One thing I want to do is deal with routing based on the URI. I looked around for examples and found a few approaches and wanted to see which made the most sense.
Have a "route" handler that will add/remove others based on the URI in the HTTPMessage. This seems inefficient if I have to do this for every single request.
Have the "route" handler wrap the HTTPMessage and HTTPContent inside another object that will then be passed in to the appropriate handler. For example, I can have an InfoHandler that extends SimpleChannelInboundHandler and the router InfoHTTPRequest object. This way the pipeline stays fixed and I'm not changing it on the fly - I am creating more objects though.
Have a single route handler that just has methods to handle the different endpoints. I can have a handleInfo method, a handleUpdate method, etc with each of those having their own implementation and referencing their own dependencies.
PS - I'm using Netty 4.0 and most of my understanding has come from various online research and reading the Netty In Action book.
I use a fixed pipeline which is only responsible for decoding/encoding requests/responses (and optional aggregation, compression, static headers, etc).
The final handler in the pipeline passes off to a configurable RequestResolver (generic to support types other than HTTP) which looks a little like:
public interface RequestResolver<T> {
    void execute(@Nonnull ChannelHandlerContext ctx, @Nonnull T req);
}
The request resolver is responsible for deciding how to handle the request (i.e. routing if necessary) and generally passes off to one or more actions that have been registered on it, or returns a 404. It doesn't have anything to do with the pipeline so much, other than it takes a ctx with which to queue responses.
I started using Netty 4 right back when it was alpha-01, when there were no routing framework plugins available, so I wrote my own RequestResolver in Java; more recently I've written another in Clojure which re-uses the Clout routing from Compojure.
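A stripped-down sketch of that final routing handler (Netty 4.0 API; class and route names are made up, and it assumes HttpServerCodec plus HttpObjectAggregator sit earlier in the pipeline so full requests arrive here):

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

public class RoutingHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        FullHttpResponse res;
        if (req.getUri().startsWith("/info")) {
            res = respond(HttpResponseStatus.OK, "info endpoint");
        } else if (req.getUri().startsWith("/update")) {
            res = respond(HttpResponseStatus.OK, "update endpoint");
        } else {
            // No registered route: return a 404 rather than touching the pipeline.
            res = respond(HttpResponseStatus.NOT_FOUND, "not found");
        }
        ctx.writeAndFlush(res).addListener(ChannelFutureListener.CLOSE);
    }

    private FullHttpResponse respond(HttpResponseStatus status, String body) {
        FullHttpResponse res = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, status, Unpooled.copiedBuffer(body, CharsetUtil.UTF_8));
        res.headers().set(HttpHeaders.Names.CONTENT_TYPE, "text/plain; charset=UTF-8");
        res.headers().set(HttpHeaders.Names.CONTENT_LENGTH, res.content().readableBytes());
        return res;
    }
}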

How can I override the render strategy for a specific page in wicket

I have an application which needs to accept a POST request from an outside server to confirm payment. I don't want to break the default Wicket render strategy (REDIRECT_TO_BUFFER), which gives users a nicer experience than ONE_PASS_RENDER would; however, the external service is not happy with the 302 and keeps retrying until it gives up.
Is there some sensible way that I can tell Wicket to use ONE_PASS_RENDER for only the specific page that handles this request?
Try using a Resource instead of a Page to handle this request. That way, you can return whatever response you want (both HTTP headers and payload) to keep the external service happy.
It's a lower-level API, though. If you need to respond with a rendered page, you may need to render it yourself (with lots of println() calls), or hack some way to get Wicket to do it.
But since it is a response to an external service, I assume it will expect some kind of simple text, XML or JSON response, which is easy enough to do by hand.
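A rough sketch of that idea with the Wicket 1.5+/6 resource API (class name and response body are placeholders):

import org.apache.wicket.request.resource.AbstractResource;

public class PaymentConfirmationResource extends AbstractResource {

    @Override
    protected ResourceResponse newResourceResponse(Attributes attributes) {
        // Verify the POSTed payment data from attributes.getRequest()
        // here before acknowledging it.
        ResourceResponse response = new ResourceResponse();
        response.setContentType("text/plain");
        response.setWriteCallback(new WriteCallback() {
            @Override
            public void writeData(Attributes attributes) {
                // Plain body returned directly, no 302 redirect involved.
                attributes.getResponse().write("OK");
            }
        });
        return response;
    }
}

You would then mount it in your Application#init() (e.g. via mountResource(...)) so the external service can POST straight to that URL, while your pages keep REDIRECT_TO_BUFFER.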

Is it possible to forward a request changing the request method?

I'm working through a gateway which allows only GET requests, whilst the REST endpoints behind it are able to accept a wide range of methods (POST, PUT, DELETE, OPTIONS). Therefore, I'm trying to pass the request method as a parameter and have a filter which forwards the request with the correct method. From what I can see in the specification, it's only allowed to forward the request without any modifications:
request.getRequestDispatcher(route).forward(request, response)
Are there any workarounds?
NOTE: Redirect is not an option for me.
If you have a single REST servlet that handles the RESTful services (and this is usually the case), you can extend it and override the service method. There you can invoke doPost(..), doPut(..), etc. depending on the parameter you want. The default implementation of HttpServlet dispatches based on request.getMethod().
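Roughly like this (a sketch; MyRestServlet stands for whatever your existing REST servlet is, and the "_method" parameter name is just an example):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MethodOverridingServlet extends MyRestServlet {

    @Override
    protected void service(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The gateway only lets GET through, so the real method travels
        // as a request parameter and we dispatch on it ourselves.
        String override = req.getParameter("_method");
        if ("POST".equalsIgnoreCase(override)) {
            doPost(req, resp);
        } else if ("PUT".equalsIgnoreCase(override)) {
            doPut(req, resp);
        } else if ("DELETE".equalsIgnoreCase(override)) {
            doDelete(req, resp);
        } else {
            // Fall back to the normal HttpServlet dispatch on req.getMethod().
            super.service(req, resp);
        }
    }
}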
Another thing you can do (less preferable) is to have your Filter fire a new request to the endpoint, using URL.openConnection (or Apache HttpComponents), and stream the result of that internal request back to the client. There you can specify the request method.
Anyway, I think you should try to overcome the limitation of your gateway, because it puts you in a really awkward situation.
