I'm new to Netty and to get more familiar with it I'm working on building a simple HTTP server. One thing I want to do is deal with routing based on the URI. I looked around for examples and found a few approaches and wanted to see which made the most sense.
Have a "route" handler that will add/remove others based on the URI in the HTTPMessage. This seems inefficient if I have to do this for every single request.
Have the "route" handler wrap the HTTPMessage and HTTPContent inside another object that will then be passed in to the appropriate handler. For example, I can have an InfoHandler that extends SimpleChannelInboundHandler and the router InfoHTTPRequest object. This way the pipeline stays fixed and I'm not changing it on the fly - I am creating more objects though.
Have a single route handler that just has methods to handle the different endpoints. I can have a handleInfo method, a handleUpdate method, etc with each of those having their own implementation and referencing their own dependencies.
PS - I'm using Netty 4.0, and most of my understanding has come from various online research and from reading the Netty in Action book.
I use a fixed pipeline which is only responsible for decoding/encoding requests/responses (and optional aggregation, compression, static headers, etc).
The final handler in the pipeline passes off to a configurable RequestResolver (generic to support types other than HTTP) which looks a little like:
public interface RequestResolver<T> {
    void execute(@Nonnull ChannelHandlerContext ctx, @Nonnull T req);
}
The request resolver is responsible for deciding how to handle the request (i.e. routing, if necessary) and generally passes off to one or more actions that have been registered on it, or returns a 404. It doesn't have much to do with the pipeline itself, other than taking a ctx with which to queue responses.
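For illustration, a minimal sketch (not my actual implementation) of an HTTP-flavoured resolver might look like this, assuming the pipeline aggregates messages into FullHttpRequest; the Action callback interface and the registration method are hypothetical:

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.*;

import java.util.HashMap;
import java.util.Map;

public class HttpRequestResolver implements RequestResolver<FullHttpRequest> {

    // Hypothetical callback interface for actions registered on the resolver.
    public interface Action {
        void handle(ChannelHandlerContext ctx, FullHttpRequest req);
    }

    private final Map<String, Action> actions = new HashMap<String, Action>();

    public void register(String path, Action action) {
        actions.put(path, action);
    }

    @Override
    public void execute(ChannelHandlerContext ctx, FullHttpRequest req) {
        // Route on the request path; a registered action queues its own response on ctx.
        Action action = actions.get(new QueryStringDecoder(req.getUri()).path());
        if (action != null) {
            action.handle(ctx, req);
        } else {
            ctx.writeAndFlush(new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.NOT_FOUND));
        }
    }
}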
I started using Netty 4 right back when it was alpha-01 and there were no routing framework plugins available, so I wrote my own RequestResolver in Java; more recently I've written another in Clojure which reuses the Clout routing from Compojure.
Related
I'm writing an HTTP server using Netty.
Let's imagine that I have something like this (in the initializer; all handlers extend ChannelInboundHandlerAdapter):
// REST API handlers
pipeline.addLast(new CreateEventHandler());
pipeline.addLast(new GetEventHandler());
pipeline.addLast(new UpdateEventHandler());
pipeline.addLast("delete", new DeleteEventHandler());
My question is: can I pass an object to a specific handler like this (for example, somewhere in CreateEventHandler):
ChannelInboundHandlerAdapter h = (ChannelInboundHandlerAdapter) ctx.pipeline().get("delete");
h.channelRead(ctx, msg);
If yes, is this a good way? Am I guaranteed to get some overhead or errors? Or must I pass an object through all the handlers in the pipeline?
Thanks.
This is not a good way to design your application. The Netty ChannelPipeline defines a sequence of handlers, each of which may transform the data passing through it. Essentially, incoming/outgoing data passes through all the inbound/outbound handlers in the pipeline. It is not the "conditional dispatch" mechanism you seem to be looking for.
It looks like you're building a REST service; all you need is a generic HTTP pipeline and a single inbound handler which dispatches to your application code based on the HTTP method. That means none of the handlers in your example (GetEventHandler etc.) need to extend ChannelInboundHandlerAdapter - they can be simple Java classes with no Netty-specific code. This gives you a nice separation between protocol and business logic as well.
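For illustration, a sketch of that single dispatching handler might look like this. EventActions is a hypothetical plain Java class holding your business logic (no Netty imports), and the pipeline is assumed to contain an HttpObjectAggregator so that full requests arrive:

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.*;
import io.netty.util.CharsetUtil;

public class RestDispatchHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    private final EventActions actions; // plain Java class: create/get/update/delete

    public RestDispatchHandler(EventActions actions) {
        this.actions = actions;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        String path = new QueryStringDecoder(req.getUri()).path();
        String body = req.content().toString(CharsetUtil.UTF_8);
        HttpMethod method = req.getMethod();

        String result; // the business classes return data, not Netty objects
        if (HttpMethod.POST.equals(method)) {
            result = actions.create(path, body);
        } else if (HttpMethod.GET.equals(method)) {
            result = actions.get(path);
        } else if (HttpMethod.PUT.equals(method)) {
            result = actions.update(path, body);
        } else if (HttpMethod.DELETE.equals(method)) {
            result = actions.delete(path);
        } else {
            result = null;
        }

        FullHttpResponse resp = (result != null)
                ? new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                        Unpooled.copiedBuffer(result, CharsetUtil.UTF_8))
                : new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.NOT_FOUND);
        resp.headers().set(HttpHeaders.Names.CONTENT_LENGTH, resp.content().readableBytes());
        ctx.writeAndFlush(resp);
    }
}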
I've seen several GWT code excerpts where the developer extended DefaultRequestTransport and gave it custom functionality. One such example is in this SO question regarding authentication/login filters. But I have seen several others besides this one example.
My question: when & why does someone need to extend this class and override its methods? (In other words, what does this class do, what services do its methods perform, and why would I need to customize them?)
In that one example, the createRequestCallback method was overridden. According to the Javadocs on that method, its purpose is to:
Create a RequestCallback that maps the HTTP response onto the TransportReceiver interface.
This is still sort of a cryptic explanation to me. Could someone please give me a layman's explanation for what scenarios it would be beneficial to extend this class and override 1+ of its methods?
RequestFactory does not depend on a specific "transport" mechanism; it deals with JSON representations of requests and responses but the way they're exchanged and transferred is out of scope, and deferred to a RequestTransport.
The DefaultRequestTransport uses a RequestBuilder to a given (but configurable) URL; because it uses RequestBuilder, it can only be used in a GWT client (to be compiled to JavaScript). There's also the UrlRequestTransport which uses a java.net.HttpURLConnection and can be used on any client running in a JVM (a server making a call to another server, an Android application, a desktop Java application, etc.)
In theory (because I have never tried it, nor heard of anyone else trying it), you could make a RequestTransport that uses Comet or WebSockets, or whichever transport you'd like. Of course, the server side would have to be adapted too (SimpleRequestProcessor can easily be reused outside the RequestFactoryServlet; this is a similar separation of concerns).
Back to DefaultRequestTransport: it uses RequestBuilder and provides a few hooks that you can override to customize how it works. The most common use case is to intercept all requests to add a request header (e.g. credentials), and/or to intercept responses with specific HTTP statuses before the JSON-encoded RequestFactory response is decoded (e.g. intercepting an "unauthorized" response to ask the user to sign in).
DefaultRequestTransport works as an adapter between the RequestFactory API and the RequestBuilder one, and createRequestCallback is the half of it responsible for adapting the response.
In the example shown, they extend DefaultRequestTransport in order to inspect all RF server responses and catch the 401 status (SC_UNAUTHORIZED), which means the request was rejected on the server side because the user does not have a valid session, and then redirect the user to the application login page.
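To make that concrete, a sketch of such a subclass might look like this (the getToken and redirectToLogin helpers are hypothetical placeholders):

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.Response;
import com.google.web.bindery.requestfactory.gwt.client.DefaultRequestTransport;
import com.google.web.bindery.requestfactory.shared.RequestTransport.TransportReceiver;

public class AuthAwareRequestTransport extends DefaultRequestTransport {

    @Override
    protected void configureRequestBuilder(RequestBuilder builder) {
        super.configureRequestBuilder(builder);
        builder.setHeader("X-Auth-Token", getToken()); // e.g. add credentials to every request
    }

    @Override
    protected RequestCallback createRequestCallback(TransportReceiver receiver) {
        final RequestCallback delegate = super.createRequestCallback(receiver);
        return new RequestCallback() {
            @Override
            public void onResponseReceived(Request request, Response response) {
                if (response.getStatusCode() == Response.SC_UNAUTHORIZED) {
                    redirectToLogin(); // hypothetical: send the user to the sign-in page
                } else {
                    delegate.onResponseReceived(request, response);
                }
            }

            @Override
            public void onError(Request request, Throwable exception) {
                delegate.onError(request, exception);
            }
        };
    }

    private String getToken() { return "..."; }  // hypothetical
    private void redirectToLogin() { /* ... */ } // hypothetical
}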
I've used DefaultRequestTransport as well for changing the requestUrl (the default is set to gwtRequest), so that I can set filters based on the URL pattern: for instance, authenticated RF services go to /myapp/gwtRequest and non-authenticated RF services go to /myapp/anonymousRequest, etc.
I also have a customized RequestTransport using modified versions of RequestBuilder and XMLHttpRequest able to monitor onprogress events, very helpful for large requests.
You could extend it to send customized headers used for doing CORS authentication or whatever.
In summary, RequestTransport is the way to modify the client transport layer of RF.
I am fairly new to many of the concepts and technologies being used in this question so I would appreciate a little understanding and help for a beginner from the community. I am using the Play Framework version 2.1.3 and I need to POST data to a RESTful web service so that it can be inserted into a remote database. An XML response will be returned indicating either success or failure.
I am sure you are aware that the documentation for the Play Framework is quite lacking and is in no way helpful to beginners, therefore I am unsure of how to accomplish this task with best practices in mind. I am looking for a Java solution to this problem, I do not have the time at present to learn the Scala language. My experience with Web Services is fairly limited, normally I would implement a DAO design pattern (or use one of the many available ORM libraries depending on needs) within my application and use JDBC to connect directly to the database. That is not an option here.
My first question would have to be, is there a recommended design pattern for accessing web services? Then, considering the Play MVC framework, how would one best implement such a design pattern, package the data (assuming the application has already captured and validated data from the user), send it off and process the responses back to the user?
I know it is a fairly lengthy question however my intention behind this is to create a knowledge base of sorts for beginners who can easily come in with limited experience, read, understand and replicate what they find here to produce a working solution. Having searched the web quite extensively, I have found a few disjointed snippets but nothing concrete involving these technologies and no up-to-date tutorials. Thank you for your time.
Creating requests is straightforward. First you provide a URL. There are various methods to add content types, query parameters, timeouts, etc. to the request. Then you choose a request type and optionally add some content to send. Examples:
WSRequestHolder request = WS.url("http://example.com");
request.setQueryParameter("page", "1");
Promise<Response> promise = request.get();
Promise<Response> promise = WS.url("http://example.com").post(content);
The complicated part is sending the request and using its response. I assume you have a controller that should return a Result to the user, based on the web service's response. The result is usually a rendered template or maybe just a status code.
Play avoids blocking by using Futures and Promises. The controller's async method takes a Promise<Result> and returns the result (the future value) at some later point. A simple-to-use promise is provided by the get and post methods shown above. You don't need to care about their implementation; you just need to know that they promise to provide a Response once the request is complete.
Notice the problem here: when creating a request with WS.url("...").get(), it will give you a Promise<Response>, even though async takes a Promise<Result>. Here you have to provide another promise yourself, which converts the response to a result using the map method. If you follow the Play documentation, this will look a bit confusing, because Java doesn't have closures (yet) and everything has to be wrapped in a class. You don't have to use anonymous classes inside the method call, though. If you prefer cleaner code, you can also do it like this:
return async(
request
.get() // returns a `Promise<Response>`
.map(resultFromResponse) // map takes a `Function<Response, Result>` and
// returns the `Promise<Result>` we need
);
The object resultFromResponse may look as follows. It's really just a cumbersome definition of a kind of callback method that takes a Response as its only argument and returns a Result.
Function<Response, Result> resultFromResponse =
    new Function<Response /* 1st parameter type */, Result /* return type */>() {
        @Override
        public Result apply(Response response) {
            // example: read some JSON from the response
            String message = response.asJson().get("message").asText();
            Result result = ok(message);
            return result;
        }
    };
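Putting the pieces together, a complete Play 2.1.x controller action might look like this (a sketch reusing resultFromResponse as defined above):

public static Result index() {
    return async(
        WS.url("http://example.com")
          .get()                   // Promise<Response>
          .map(resultFromResponse) // Promise<Result>
    );
}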
As @itsjeyd pointed out in the comments, when calling web services in Play 2.2.x you don't wrap the call in async any more. You simply return the Promise<Result>:
public static Promise<Result> index() {
return request.get().map(resultFromResponse);
}
This is more of a design and architecture question. I am developing a new UI layer for an old system. This system accepts requests in a particular XML format. Currently, a request from the new UI layer goes to a data-massaging class via the controller.
This Translator/Massaging class converts the UI request XML to the desired request format. It adds a few deprecated elements and constants to the XML it receives from the UI layer.
The request XML from the UI is partially similar to the actual back-end format, but it has to go through the Translator/Massaging class to be converted into the actual request. My question: does the UI layer need to worry about whether its request XML is partially similar to the actual request? Can the UI layer just send the data in JSON format to the Translator/Massaging class and have that class convert it into the actual request XML?
Does the UI layer need to worry about whether its request XML is partially similar to the actual request?
No. As you suggested in your next question, the massaging class can convert the GUI data into an actual XML request.
Can the UI layer just send the data in JSON format to the massaging class and have it convert the data into the actual request XML?
It could. However, your GUI should have a data model. The GUI interacts with the data model, and the data model interacts with the massaging class. There's no need for another data format, unless there's some requirement you're not telling us.
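As a rough sketch of that layering (all names here are hypothetical, since we don't know your actual domain):

public class EventModel {

    // The massaging class is the only component that knows the legacy XML format.
    public interface MassagingService {
        String toLegacyXml(EventData data);   // adds the deprecated elements, constants, etc.
        String send(String legacyXmlRequest); // returns the back end's XML response
    }

    public static class EventData {
        // the captured and validated fields from the GUI
    }

    private final MassagingService massaging;

    public EventModel(MassagingService massaging) {
        this.massaging = massaging;
    }

    // Called by the GUI; the GUI never sees XML or the legacy format.
    public String submit(EventData data) {
        return massaging.send(massaging.toLegacyXml(data));
    }
}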
Does the UI layer need to worry about whether its request XML is partially similar to the actual request? Can the UI layer just send the data in JSON format to the massaging class and have it convert the data into the actual request XML?
Clearly it could do that. But that would mean that the "massaging" class has more work to do.
To my mind, you are probably asking the wrong question here. If I were in your shoes, I'd be asking myself why I can't use the request format of the "old" system directly, or why I can't change the "old" system to accept requests in the "new" format directly.
Or, to put it another way, ask yourself what the purpose is of introducing a new format, with all of the extra coding and the performance cost of the "massaging".
Unless there is something else going on that you haven't told us, this all sounds a bit unnecessary to me.
Think services! Any functionality the server offers and is used by the client should be abstracted by a service interface, so that code using this service never has to worry about its implementation or any protocols involved. You can then have the actual implementation on the server, and a remote facade implementation for the client, which forwards the request to the server and also handles the responses:
interface SomeService {
    public SomeResult doSomething(SomeArguments arguments) throws SomeException;
}

class SomeServiceServerImpl implements SomeService {
    // server-side implementation
}

class SomeServiceClientFacade implements SomeService {
    // client-side facade; forwards the request, for example to a web service
}
The facade can then convert the arguments to XML, call a web service, parse the XML response and convert it back to a result object or exception.
If you use a standardized RPC (remote procedure call) protocol (such as SOAP or JSON-RPC), the most elegant way to handle this is to use a Proxy with an InvocationHandler which does the request marshalling and response unmarshalling in a generic way, allowing you to create remote service proxies cheaply.
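A rough sketch of that approach (RpcTransport here is a hypothetical stand-in for your marshalling and send/receive code, not a real library class):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public final class RemoteProxyFactory {

    // Hypothetical transport: marshals a call, ships it, and unmarshals the reply.
    public interface RpcTransport {
        Object call(String methodName, Object[] args, Class<?> returnType) throws Exception;
    }

    @SuppressWarnings("unchecked")
    public static <T> T create(final Class<T> serviceInterface, final RpcTransport transport) {
        InvocationHandler handler = new InvocationHandler() {
            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                // Every call on the proxy becomes one marshalled request/response round trip.
                return transport.call(method.getName(), args, method.getReturnType());
            }
        };
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[] { serviceInterface },
                handler);
    }
}

Client code then obtains SomeService service = RemoteProxyFactory.create(SomeService.class, transport); and uses the interface as if the implementation were local.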
I am developing a distributed system which consists of different components (services) which are loosely (asynchronously) coupled via JMS (ActiveMQ).
Since I do not want to reinvent the wheel, I am looking for a (well-)known protocol/library that facilitates remote procedure calls between these components and helps me deal with method interfaces.
So let's decompose the problem I am currently solving with dirty solutions:
1. A consumer component wants to call a service, so it constructs a request string (hand-written and dirty).
2. The request string is then compressed and put into a JMS message (dirty as well).
3. The request message is then transmitted via JMS and routing mechanisms (that part is OK).
4. The service first needs to decompress and parse the request string to identify the right method (dirty).
5. The method gets called and the reply goes like #2 - #4.
So that looks pretty much like SOAP, although I think SOAP is too heavy for my application, and furthermore I am not using HTTP at all. Given that, I was thinking one might be able to decompose the problem into different components:
Part A: HTTP is replaced by JMS (that one is okay)
Part B: XML is replaced by something more lightweight (alright, MessagePack comes in handy here)
Part C: Mechanism to parse request/reply string to identify operation name and parameter values (that one is the real problem here)
I was looking into MessagePack, Protocol Buffers, Thrift and so forth, but what I don't like about them is that they introduce their own way of handling the actual (TCP) communication and bypass my already sophisticated JMS infrastructure (which also handles load balancing and other concerns).
To further elaborate on Part C above, this is how I am currently handling it. Right now, if a consumer were to call a service (let's assume the service takes a text and replies with keywords), I would have the consumer create a JMS message and transmit it (via ActiveMQ) to the service. The message would contain:
Syntax: OPERATION_NAME [PARAMETERS]
Method: GET_ALL_KEYWORDS [String text] returns [JSON String[] keywords]
Example Request: GET_ALL_KEYWORDS "Hello world, this is my input text..."
Example Reply: ["hello", "world", "text"]
Needless to say, it feels hacked together. The problem I see is that if I were to change the method interface by adding or deleting parameters, I would have to check all the request/reply string construction/deconstruction code to synchronize the changes. That is pretty error-prone. I'd rather have the library construct the right request/reply syntax by looking at a Java interface, and throw real exceptions at runtime if I mess things up, like "Protocol Exception: Mandatory parameter not set" or something similar.
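In other words, I'd like the contract to live in a plain Java interface like this (hypothetical), with the library deriving the request/reply syntax from it:

import java.util.List;

public interface KeywordService {
    // The library should marshal the call and unmarshal the reply from this
    // signature alone, and fail loudly at runtime if the contract is violated.
    List<String> getAllKeywords(String text);
}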
Any projects/libs known for that?
Requirements would be:
It's small, lightweight and fast.
It's implemented in Java.
It doesn't serve too many purposes (like some full-blown framework, e.g. Spring).
I think this Spring package is what you're looking for. See JmsInvokerProxyFactoryBean and related classes.
From the Javadoc:
"FactoryBean for JMS invoker proxies. Exposes the proxied service for use as a bean reference, using the specified service interface. Serializes remote invocation objects and deserializes remote invocation result objects. Uses Java serialization just like RMI, but with the JMS provider as communication infrastructure."
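For illustration, wiring this up programmatically might look roughly like the following (a sketch assuming Spring's JMS remoting classes and the hypothetical KeywordService interface from above; in practice you would declare these as beans):

import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.SimpleMessageListenerContainer;
import org.springframework.jms.remoting.JmsInvokerProxyFactoryBean;
import org.springframework.jms.remoting.JmsInvokerServiceExporter;

public class JmsInvokerWiring {

    public static void main(String[] args) {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Server side: expose a service implementation on a queue.
        JmsInvokerServiceExporter exporter = new JmsInvokerServiceExporter();
        exporter.setServiceInterface(KeywordService.class); // hypothetical interface
        exporter.setService(new KeywordServiceImpl());      // hypothetical implementation
        exporter.afterPropertiesSet();

        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("keywordService");
        container.setMessageListener(exporter);
        container.afterPropertiesSet();
        container.start();

        // Client side: a proxy that serializes interface calls into JMS messages.
        JmsInvokerProxyFactoryBean proxyFactory = new JmsInvokerProxyFactoryBean();
        proxyFactory.setConnectionFactory(cf);
        proxyFactory.setQueueName("keywordService");
        proxyFactory.setServiceInterface(KeywordService.class);
        proxyFactory.afterPropertiesSet();

        KeywordService service = (KeywordService) proxyFactory.getObject();
        System.out.println(service.getAllKeywords("Hello world, this is my input text..."));
    }
}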