It is more of a design and architecture question. I am developing a new UI layer for an old system. This system accepts requests in a particular XML format. Currently, requests from the new UI layer go to a data-massaging class via a controller.
This Translator/Massaging class converts the UI request XML into the desired request format. It adds a few deprecated elements and constants to the XML it receives from the UI layer.
The request XML from the UI is partially similar to the actual back-end format, but it has to go through the Translator/Massaging class to be converted into the actual request. My question: does the UI layer need to worry about whether its request XML is partially similar to the actual request? Can the UI layer just send the data in JSON format to the Translator/Massaging class and have that class convert it into the actual request XML?
My question: does the UI layer need to worry about whether its request XML is partially similar to the actual request?
No. As you suggested in your next question, the massaging class can convert the GUI data into an actual XML request.
Can the UI layer just send the data in JSON format to the massaging class, and have the massaging class convert it into the actual request XML?
It could. However, your GUI should have a data model. The GUI would interact with the data model, and the data model would interact with the massaging class. There's no need for another data format, unless there's some requirement you're not telling us.
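As a rough sketch of that flow (the OrderModel type and the element names below are hypothetical, assuming the legacy system expects a fixed envelope with some deprecated constants), the massaging class could look like this:

// Hypothetical domain object owned by the GUI's data model.
public class OrderModel {
    private final String orderId;
    private final int quantity;

    public OrderModel(String orderId, int quantity) {
        this.orderId = orderId;
        this.quantity = quantity;
    }

    public String getOrderId() { return orderId; }
    public int getQuantity() { return quantity; }
}

// The massaging/translator class emits the legacy request XML, including
// the deprecated elements and constants the old system still expects, so
// the UI and its data model never need to know the legacy format.
public class LegacyRequestTranslator {
    public String toLegacyRequest(OrderModel model) {
        StringBuilder xml = new StringBuilder();
        xml.append("<request version=\"1.0\">");   // legacy constant
        xml.append("<channel>WEB</channel>");      // deprecated element
        xml.append("<orderId>").append(model.getOrderId()).append("</orderId>");
        xml.append("<quantity>").append(model.getQuantity()).append("</quantity>");
        xml.append("</request>");
        return xml.toString();
    }
}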
My question: does the UI layer need to worry about whether its request XML is partially similar to the actual request? Can the UI layer just send the data in JSON format to the massaging class and have the massaging class convert it into the actual request XML?
Clearly it could do that. But that would mean that the "massaging" class has more work to do.
To my mind, you are probably asking the wrong question here. If I were in your shoes, I'd be asking myself why I can't use the request format of the "old" system directly, or why I can't change the "old" system to accept requests in the "new" format directly.
Or to put it another way: ask yourself what the purpose is of implementing a new format, given all of the extra coding and the performance hit of the "massaging".
Unless there is something else going on that you haven't told us, this all sounds a bit unnecessary to me.
Think services! Any functionality the server offers and is used by the client should be abstracted by a service interface, so that code using this service never has to worry about its implementation or any protocols involved. You can then have the actual implementation on the server, and a remote facade implementation for the client, which forwards the request to the server and also handles the responses:
interface SomeService {
    public SomeResult doSomething(SomeArguments args) throws SomeException;
}
class SomeServiceServerImpl implements SomeService {
    // server-side implementation
}
class SomeServiceClientFacade implements SomeService {
    // client-side facade; forwards the request, for example to a web service
}
The facade can then convert the arguments to XML, call a web service, parse the XML response and convert it back to a result object or exception.
If you use a standardized RPC (Remote Procedure Call) protocol (such as SOAP or JSON-RPC), the most elegant way to handle this would be to use a Proxy with an InvocationHandler which does the request marshalling and response unmarshalling in a generic way, allowing you to cheaply create remote service proxies.
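Here is a minimal sketch of that idea; the Transport and Marshaller interfaces are hypothetical placeholders for your actual wire and marshalling code:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public final class RemoteProxyFactory {

    // Hypothetical transport abstraction (HTTP, JMS, ...).
    public interface Transport {
        String send(String request);
    }

    // Hypothetical marshalling helper; replace with your XML/JSON code.
    public interface Marshaller {
        String toWire(String methodName, Object[] args);
        Object fromWire(String response, Class<?> returnType);
    }

    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> serviceInterface, Transport transport,
            Marshaller marshaller) {
        // One generic InvocationHandler serves every service interface:
        // marshal the call, send it, unmarshal the reply.
        InvocationHandler handler = (proxy, method, args) -> {
            String request = marshaller.toWire(method.getName(), args);
            String response = transport.send(request);
            return marshaller.fromWire(response, method.getReturnType());
        };
        return (T) Proxy.newProxyInstance(
                serviceInterface.getClassLoader(),
                new Class<?>[] {serviceInterface},
                handler);
    }
}

A client could then obtain a SomeService via RemoteProxyFactory.create(SomeService.class, transport, marshaller) and call it like any local object.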
I'm new to Netty and to get more familiar with it I'm working on building a simple HTTP server. One thing I want to do is deal with routing based on the URI. I looked around for examples and found a few approaches and wanted to see which made the most sense.
Have a "route" handler that will add/remove others based on the URI in the HTTPMessage. This seems inefficient if I have to do this for every single request.
Have the "route" handler wrap the HTTPMessage and HTTPContent inside another object that will then be passed in to the appropriate handler. For example, I can have an InfoHandler that extends SimpleChannelInboundHandler and the router InfoHTTPRequest object. This way the pipeline stays fixed and I'm not changing it on the fly - I am creating more objects though.
Have a single route handler that just has methods to handle the different endpoints. I can have a handleInfo method, a handleUpdate method, etc with each of those having their own implementation and referencing their own dependencies.
PS - I'm using Netty 4.0 and most of my understanding has come from various online research and reading the Netty In Action book.
I use a fixed pipeline which is only responsible for decoding/encoding requests/responses (and optional aggregation, compression, static headers, etc).
The final handler in the pipeline passes off to a configurable RequestResolver (generic to support types other than HTTP) which looks a little like:
public interface RequestResolver<T> {
    void execute(@Nonnull ChannelHandlerContext ctx, @Nonnull T req);
}
The request resolver is responsible for deciding how to handle the request (i.e. routing if necessary) and generally passes off to one or more actions that have been registered on it, or returns a 404. It doesn't have much to do with the pipeline, other than taking a ctx with which to queue responses.
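As an illustration, a minimal HTTP implementation of that interface might route on the path like this (a sketch only: the route map and the 404 fallback are illustrative, and the @Nonnull annotations are omitted):

import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.QueryStringDecoder;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class HttpRequestResolver implements RequestResolver<FullHttpRequest> {

    // Registered actions, keyed by exact path.
    private final Map<String, BiConsumer<ChannelHandlerContext, FullHttpRequest>> actions =
            new HashMap<>();

    public void register(String path,
            BiConsumer<ChannelHandlerContext, FullHttpRequest> action) {
        actions.put(path, action);
    }

    @Override
    public void execute(ChannelHandlerContext ctx, FullHttpRequest req) {
        String path = new QueryStringDecoder(req.getUri()).path();
        BiConsumer<ChannelHandlerContext, FullHttpRequest> action = actions.get(path);
        if (action != null) {
            action.accept(ctx, req);   // dispatch to the registered action
        } else {
            ctx.writeAndFlush(new DefaultFullHttpResponse(   // no route matched
                    HttpVersion.HTTP_1_1, HttpResponseStatus.NOT_FOUND));
        }
    }
}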
I started using Netty 4 right back when it was alpha-01 and there were no routing framework plugins available, so I wrote my own RequestResolver in Java; more recently I've written another in Clojure which reuses the Clout routing from Compojure.
I've seen several GWT code excerpts where the developer extended DefaultRequestTransport and gave it custom functionality. One such example is in this SO question regarding authentication/login filters. But I have seen several others besides this one example.
My question: when & why does someone need to extend this class and override its methods? (In other words, what does this class do, what services do its methods perform, and why would I need to customize them?)
In that one example, the createRequestCallback method was overridden. According to the Javadocs on that method, its purpose is to:
Create a RequestCallback that maps the HTTP response onto the TransportReceiver interface.
This is still sort of a cryptic explanation to me. Could someone please give me a layman's explanation for what scenarios it would be beneficial to extend this class and override 1+ of its methods?
RequestFactory does not depend on a specific "transport" mechanism; it deals with JSON representations of requests and responses but the way they're exchanged and transferred is out of scope, and deferred to a RequestTransport.
The DefaultRequestTransport uses a RequestBuilder to a given (but configurable) URL; because it uses RequestBuilder, it can only be used in a GWT client (to be compiled to JavaScript). There's also the UrlRequestTransport which uses a java.net.HttpURLConnection and can be used on any client running in a JVM (a server making a call to another server, an Android application, a desktop Java application, etc.)
In theory (because I never tried it and never heard of anyone else trying it), you could make a RequestTransport that uses Comet or WebSockets, or whichever transport you'd like. Of course, the server side would have to be adapted too (SimpleRequestProcessor can easily be reused outside the RequestFactoryServlet; this is a similar separation of concerns).
Back to DefaultRequestTransport: it uses RequestBuilder and provides a few hooks that you can override to customize how it works. The most common use-case is to intercept all requests to add some request header (e.g. credentials) and/or all responses to handle specific HTTP responses before decoding the JSON-encoded RequestFactory response (e.g. intercept "unauthorized" response to ask the user to sign in).
DefaultRequestTransport works as an adapter between the RequestFactory API and the RequestBuilder one, and createRequestCallback is one half of it responsible for adapting the response.
In the example shown, they need to extend DefaultRequestTransport in order to inspect all RF server responses and catch the 401 status (SC_UNAUTHORIZED), which means the request was rejected on the server side because the user does not have a valid session, and then redirect the user to the application login page.
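A sketch of that pattern, using the configureRequestBuilder and createRequestCallback hooks of DefaultRequestTransport (the header name, token lookup, and login URL below are hypothetical):

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.Response;
import com.google.gwt.user.client.Window;
import com.google.web.bindery.requestfactory.gwt.client.DefaultRequestTransport;
import com.google.web.bindery.requestfactory.shared.RequestTransport.TransportReceiver;

public class AuthAwareTransport extends DefaultRequestTransport {

    @Override
    protected void configureRequestBuilder(RequestBuilder builder) {
        super.configureRequestBuilder(builder);
        // Attach credentials to every RequestFactory call (hypothetical header).
        builder.setHeader("X-Auth-Token", currentToken());
    }

    @Override
    protected RequestCallback createRequestCallback(final TransportReceiver receiver) {
        final RequestCallback delegate = super.createRequestCallback(receiver);
        return new RequestCallback() {
            public void onResponseReceived(Request request, Response response) {
                if (response.getStatusCode() == Response.SC_UNAUTHORIZED) {
                    Window.Location.assign("/login");   // hypothetical login page
                    return;
                }
                delegate.onResponseReceived(request, response);
            }

            public void onError(Request request, Throwable exception) {
                delegate.onError(request, exception);
            }
        };
    }

    private String currentToken() {
        return "...";   // look up the session token however your app stores it
    }
}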
I've used DefaultRequestTransport as well for changing the request URL (the default is set to gwtRequest), so that I can set filters based on the URL pattern: for instance, authenticated RF services go to /myapp/gwtRequest and non-authenticated RF services go to /myapp/anonymousRequest, etc.
I also have a customized RequestTransport using modified versions of RequestBuilder and XMLHttpRequest able to monitor onprogress events, very helpful for large requests.
You could extend it to send customized headers used for doing CORS authentication or whatever.
In summary, RequestTransport is the way to modify the client transport layer of RF.
A full REST API, like Java's JAX-RS, contains definitions for mapping a path to a resource and uses the full set of GET, POST, PUT requests.
But when I encounter a REST API in practice, it is typically a standard HTTP GET request whose response is JSON output. It seems like the meat of a real-world REST request is the JSON output, but the true definition of REST allows for XML, JSON, or other output types.
For example, the Twitter API has JSON output; they use a GET request, and here is one of the URLs:
https://dev.twitter.com/docs/api/1.1/get/search/tweets
And you can still use GET parameters to modify the request. It seems like Twitter's search/tweets function is just a simple HTTP request with a well-defined URI path that happens to return a JSON response. Is that really REST?
What is a REST api to you?
On JAX-RS
http://en.wikipedia.org/wiki/Java_API_for_RESTful_Web_Services
(Sorry if this is slightly subjective or anecdotal but I believe developers were wondering about this)
REST (Representational State Transfer) is a loosely defined architectural style first described in the doctoral dissertation of Roy Fielding (one of the principal authors of the HTTP specification).
Most of the time (99% of the time) when somebody wants a REST API, they mean they want a web API where they send a Request containing an HTTP verb and a URL which describes the location of a Resource that will be acted upon by the HTTP verb. The web server then performs the requested verb on the Resource and sends a Response back to the user. The Response will usually (depending on the HTTP verb used) contain a Representation of the resulting Resource. Resources can be represented as HTML, JSON, XML or a number of other mime types.
Which representation is used in the response doesn't really indicate whether an API is RESTful; what matters is how the interface's URLs are structured and how the web server's behaviors are defined in terms of the HTTP verbs. A properly compliant REST API should use the GET verb only to read a resource, POST to create or modify a resource, PUT to add or replace a resource, and DELETE to remove a resource. A more formal definition of the expected verb behaviors is given in the HTTP specification.
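For instance, in JAX-RS (which the question mentions) that verb-to-behavior mapping looks roughly like the sketch below; the resource layout and the in-memory store are hypothetical:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/posts")
@Produces(MediaType.APPLICATION_JSON)
public class PostsResource {

    private static final Map<Long, String> STORE = new ConcurrentHashMap<>();
    private static final AtomicLong IDS = new AtomicLong();

    @GET
    @Path("/{id}")
    public String read(@PathParam("id") long id) {       // GET: read only
        return STORE.get(id);
    }

    @POST
    public long create(String body) {                    // POST: create/modify
        long id = IDS.incrementAndGet();
        STORE.put(id, body);
        return id;
    }

    @PUT
    @Path("/{id}")
    public void replace(@PathParam("id") long id, String body) {  // PUT: add/replace
        STORE.put(id, body);
    }

    @DELETE
    @Path("/{id}")
    public void remove(@PathParam("id") long id) {       // DELETE: remove
        STORE.remove(id);
    }
}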
REST is (in a nutshell) a paradigm that resources should be accessible through a URI and that HTTP-like verbs can be used to manipulate them (that is to say, HTTP was designed with REST principles in mind). This is, say, in contrast to having one URI for your app and POSTing a data payload to tell the server what you want to achieve.
With a rough analogy, a filesystem is usually RESTful. Different resources live at different addresses (directories) that are easy to access and write to, despite not necessarily being stored on disk in a fashion that reflects the path. By contrast, most databases are not RESTful: you connect to the database and access the data through a declarative API, rather than finding the data by location.
As far as what the resource is - HTML, JSON, a video of a waterskiing squirrel - that is a different level of abstraction than adhering to RESTful principles.
REST is a pretty "loose" standard, but JSON is a reasonable format to standardize around. My only major concern with JSON overall is that it doesn't have a method for explicitly defining its character encoding the way XML does. As for the verbs, sticking to meaningful usages of the verbs is nice, but they don't necessarily map one for one in every situation, so as usual, use common sense :)
It can be JSON, it can be XML. JSON is not exactly industry "standard," but many developers like it (including me) because it's lightweight and easy.
That's pretty much it. REST is designed to be simple. The gist of it is that each URL corresponds to a unique resource. The format of the resource is commonly JSON, but it can be anything, and it is typically determined by the "extension" or "format" part of the URL. For example, "/posts/1.json" and "/posts/1.xml" are just two different representations of the same logical resource, formatted in JSON and XML, respectively. Another common trait of a RESTful interface is the use of HTTP verbs such as GET, PUT, and POST for retrieving, modifying, and creating resources. That's pretty much all there is to it.
I am developing a distributed system which consists of different components (services) which are loosely (asynchronously) coupled via JMS (ActiveMQ).
Since I do not want to reinvent the wheel, I am looking for a (well-)known protocol/library that facilitates remote procedure calls between these components and helps me deal with method interfaces.
So let's decompose the problem I am already solving right now via dirty solutions:
1. A consumer component wants to call a service, so it constructs a request string (hand-written and dirty).
2. The request string is then compressed and put into a JMS message (dirty as well).
3. The request message is then transmitted via JMS and routing mechanisms (that part is OK).
4. The service first needs to decompress and parse the request string to identify the right method (dirty).
5. The method gets called, and the reply goes through steps 2-4 again.
So that looks pretty much like SOAP, though I think SOAP is too heavy for my application; furthermore, I am not using HTTP at all. Given that, I was thinking one might be able to decompose the problem into different parts:
Part A: HTTP is replaced by JMS (that one is okay)
Part B: XML is replaced by something more lightweight (alright, MessagePack comes in handy here)
Part C: Mechanism to parse request/reply string to identify operation name and parameter values (that one is the real problem here)
I was looking into MessagePack, Protocol Buffers, Thrift, and so forth, but what I don't like about them is that they introduce their own way of handling the actual (TCP) communication, bypassing my already sophisticated JMS infrastructure (which also handles load balancing and other concerns).
To further elaborate on Part C above, this is how I am currently handling it. Right now I would do something like the following if a consumer were to call a service; let's assume the service takes a text and replies with keywords. I would have the consumer create a JMS message and transmit it (via ActiveMQ) to the service. The message would contain:
Syntax: OPERATION_NAME [PARAMETERS]
Method: GET_ALL_KEYWORDS [String text] returns [JSON String[] keywords]
Example Request: GET_ALL_KEYWORDS "Hello world, this is my input text..."
Example Reply: ["hello", "world", "text"]
Needless to say, it feels hacked together. The problem I see is that if I were to change the method interface by adding or deleting parameters, I would have to check all the request/reply string constructions/deconstructions to synchronize the changes. That is pretty error-prone. I'd rather have a library construct the right request/reply syntax by looking at a Java interface, and throw real exceptions at runtime if I mess things up, like "Protocol Exception: Mandatory parameter not set" or something similar.
Any projects/libs known for that?
Requirements would be:
It's small and lightweight and fast.
It's implemented in JAVA
It doesn't serve too many purposes (unlike some full-blown framework such as Spring).
I think this Spring package is what you're looking for. See JmsInvokerProxyFactoryBean and related classes.
From the javadoc:
FactoryBean for JMS invoker proxies. Exposes the proxied service for use as a bean reference, using the specified service interface. Serializes remote invocation objects and deserializes remote invocation result objects. Uses Java serialization just like RMI, but with the JMS provider as communication infrastructure.
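Wired up, that looks roughly like the following sketch (the KeywordService interface, the queue name, and the bean wiring are hypothetical; the Spring classes are real):

import java.util.List;
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jms.remoting.JmsInvokerProxyFactoryBean;
import org.springframework.jms.remoting.JmsInvokerServiceExporter;

@Configuration
public class RpcConfig {

    // The shared contract; both consumer and service compile against this.
    public interface KeywordService {
        List<String> getAllKeywords(String text);
    }

    // Server side: export the implementation over JMS.
    @Bean
    public JmsInvokerServiceExporter keywordExporter(KeywordService impl) {
        JmsInvokerServiceExporter exporter = new JmsInvokerServiceExporter();
        exporter.setServiceInterface(KeywordService.class);
        exporter.setService(impl);
        return exporter;
    }

    @Bean
    public DefaultMessageListenerContainer exporterContainer(
            ConnectionFactory cf, JmsInvokerServiceExporter keywordExporter) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("keywordServiceQueue");
        container.setMessageListener(keywordExporter);
        return container;
    }

    // Client side: a proxy that looks like a plain KeywordService.
    @Bean
    public JmsInvokerProxyFactoryBean keywordServiceProxy(ConnectionFactory cf) {
        JmsInvokerProxyFactoryBean proxy = new JmsInvokerProxyFactoryBean();
        proxy.setConnectionFactory(cf);
        proxy.setQueueName("keywordServiceQueue");
        proxy.setServiceInterface(KeywordService.class);
        return proxy;
    }
}

Calls through the proxy then surface real exceptions when the interface and its implementation drift apart, which addresses the synchronization worry from Part C.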
Scenario:
I'm working on a web services project. It supports SOAP and REST. The SOAP requests and responses are handled by XmlObjects. The REST architecture uses plain POJOs for requests and responses. I have a common controller to handle the requests from SOAP and REST. This controller understands a common object (a Request Object), and it sends back a Transfer Object. In front of the controller, I have a request translator to translate the SOAP/POJO objects into the common Request Object, and also a response translator to convert the Transfer Objects into SOAP/REST view objects.
Problem:
I have two pairs of request and response translators. The SOAP and REST translators look the same, but they take different objects as input. So it looks like I have the same code twice.
How to avoid this redundancy?
Solutions I have thought of: bean mapping.
Is there anything else more elegant than this?
I assume both your "REST" and "SOAP" APIs must do the same thing and are therefore quite parallel. You ought to (re)read Roy Fielding's description of the REST architectural approach and then decide if these APIs are truly RESTful in Roy's very precise definition of the term. If they are both RESTful, then just ditch your SOAP APIs: SOAP is harder to use, doesn't leverage HTTP caches, and adds nothing over your REST APIs.
If they are both non-RESTful (i.e., they have a Remote Procedure Call flavor and the "REST" APIs just happen to do RPC operations using HTTP and XML), then, assuming you can't convert them to the REST architectural style, you can at least factor out the POJO<==>XML mappings using a library like XStream.
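For example, with XStream the POJO<==>XML mapping can be written once against the common object (the RequestObject class and the alias below are hypothetical):

import com.thoughtworks.xstream.XStream;

public class XmlMapper {

    private final XStream xstream = new XStream();

    public XmlMapper() {
        // Map the XML element name to the shared class once, in one place.
        xstream.alias("request", RequestObject.class);
    }

    public String toXml(RequestObject request) {
        return xstream.toXML(request);
    }

    public RequestObject fromXml(String xml) {
        return (RequestObject) xstream.fromXML(xml);
    }
}

// Hypothetical common object that both translators produce.
class RequestObject {
    String operation;
    String payload;
}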
You are right: if the request/response XML differs only in layout and the data is the same, then you can write XSLTs for both of them to convert each into the proper XML for your POJO.
Then you can use Castor mapping to convert the XML into a POJO, and you will get the object you need. But you have to make that object common to your code, right?
What I mean is: use a common object for your logic, and use separate logic to obtain that object from your SOAP/REST request/response objects. Because the data you are sending will be the same with both kinds of methods, you just need to handle object-to-object conversion. That can be done directly, or via object-to-XML and XML-to-object; it depends on which you prefer.
Hope this helps.
Parth.