2 questions about RESTful Web Services - java

I am new to RESTful web services. I have the following 2 questions:
Are GET, POST, DELETE, PUT, TRACE, HEAD, OPTIONS, the only verbs in Http that I can use for RESTful web services?
How do I create and use a custom verb?
I'm using Java and Jersey for creating my RESTful web services.

The answer to question 1 is yes: the set of verbs is fixed by the HTTP specification. In practice, however, most REST applications use only GET and POST, as these are the most widely supported across Internet infrastructure. The answer to question 2 is therefore no, you can't create a custom verb.
The thing to keep in mind when choosing between the HTTP verbs is that a GET should have no side effects, because the client is free to resend a GET at any time (for example, when a communication failure is detected). A POST, however, should be sent by the client at most once, so it should be used for anything that causes a change that cannot safely be repeated (like an insert).
Normally you would define what "verb" you want in your application as part of the URL, not as the HTTP verb.

Then how do I provide the 10 actions with only 7 verbs?
The idea behind web services is to focus on the objects, not the verbs.
Your actions either "Create" ("POST"), "Retrieve" ("GET"), "Update" ("PUT") or "Remove" ("DELETE") the objects.
Doesn't each action go under a separate verb?
No. You can have all the objects you want. You only need four verbs to create, find, change and remove objects.
Or am I wrong, and can I use conditionals to provide several actions under a single verb?
No. You can make a create ("POST") request which can, in turn, create a number of individual objects.
In general how do others design their application such that they don't need extra verbs even if they need to provide a 100 different actions?
You focus on the objects. Objects are created, retrieved, updated and deleted.
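Since the question mentions Jersey, here is a minimal JAX-RS sketch of that idea: one resource class, four verbs, no custom verbs needed. Customer and CustomerStore are hypothetical placeholders, not anything from the question.

import java.net.URI;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Customer and CustomerStore are hypothetical placeholder types.
@Path("customers")
public class CustomerResource {

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Customer retrieve(@PathParam("id") long id) {      // "Retrieve" -> GET
        return CustomerStore.find(id);
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response create(Customer customer) {                // "Create" -> POST
        long id = CustomerStore.insert(customer);
        return Response.created(URI.create("customers/" + id)).build();
    }

    @PUT
    @Path("{id}")
    @Consumes(MediaType.APPLICATION_JSON)
    public Customer update(@PathParam("id") long id, Customer customer) {  // "Update" -> PUT
        return CustomerStore.replace(id, customer);
    }

    @DELETE
    @Path("{id}")
    public void remove(@PathParam("id") long id) {             // "Remove" -> DELETE
        CustomerStore.delete(id);
    }
}

An action that doesn't fit these four, such as cancelling an order, is usually modelled as a resource of its own (e.g. POST /orders/111/cancellation) rather than as a new verb.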

Restful service naming conventions?

For a RESTful service, can the noun be omitted and discarded?
Instead of /service/customers/555/orders/111
Can / should I expose: /service/555/111 ?
Is the first option mandatory or are there several options and this is debatable?
It's totally up to you. I think the nice thing about having the nouns is that they help you see from the URL what the service is trying to achieve.
Also take into account that under a customer you can have something like the paths below, where the URL lets you distinguish between an order and a quote for that customer:
/service/customers/555/quote/111
/service/customers/555/order/111
One of the core aspects of REST is that URLs should be treated as opaque entities. A client should never create a URL, just use URLs that have been supplied by the server. Only the server hosting the entities needs to know something about the URL structure.
So use the URL scheme that makes most sense to you when designing the service.
Regarding the options you mentioned:
Omitting the nouns makes it hard to extend your service if e.g. you want to add products, receipts or other entities.
Having the orders below the customers surprises me (but once again, that's up to you designing the service). I'd expect something like /service/customers/555 and /service/orders/1234567.
Anyway, the RESTful customer document returned from the service should contain links to his or her orders and vice versa (plus all other relevant relationships).
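For illustration, here is a hedged JAX-RS sketch of the two layouts discussed above; the Order type and OrderStore are made up, and each resource class would live in its own source file.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("service/customers")
public class CustomerResource {

    // Nested layout: GET /service/customers/555/orders/111
    @GET
    @Path("{customerId}/orders/{orderId}")
    public Order orderOfCustomer(@PathParam("customerId") long customerId,
                                 @PathParam("orderId") long orderId) {
        return OrderStore.findForCustomer(customerId, orderId);
    }
}

@Path("service/orders")
public class OrderResource {

    // Flat layout: GET /service/orders/1234567
    @GET
    @Path("{orderId}")
    public Order order(@PathParam("orderId") long orderId) {
        return OrderStore.find(orderId);
    }
}

Either way, the customer representation can carry a link to its orders, so clients never need to know which layout was chosen.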
To a certain degree, the "rules" for naming RESTful endpoints should follow the same naming rules that, for example, "Clean Code" teaches.
Meaning: names should mean something. And they should say what they mean, and mean what they say.
Coming from there: it probably depends on the nature of that service. If you can only "serve" customers, then you could omit the customer part, because it doesn't add (much) meaningful information. But what if you later want to serve other kinds of clients?
In other words: we can't tell you what is right for your application - because that depends on the requirements / goals of your environment.
And worth noting: don't consider only today's requirements. Step back and consider the "future growth paths" that seem most likely. Then make sure that the API you are defining today will work nicely with the future extensions that are most likely to happen.
Instead of /service/customers/555/orders/111
Can / should I expose: /service/555/111 ?
The question is broad, but since REST paths are used to express nested information, they should be as explicit as required.
If long URL paths are a problem for you, an alternative is to provide the contextual information in the body of the request.
I think that the short way /service/555/111 lacks consistency.
Suppose that /service/555/111 corresponds to invoking the service for customer 555 and order 111.
You know that, but the clients of the API don't necessarily know what the path segments mean.
Besides, suppose now that you wish to invoke the same service for customer 555 but for the year 2018. How do you do that now?
Like this: /service/555/2018 would be error-prone, as you would need an extra parameter to convey what the last path value means, while /service/555/years/2018 would make your API definition inconsistent.
Clarity, simplicity and consistency matters.
In my view, the use of nouns is not required by any standard, but it does help make your endpoints more specific and simpler to understand.
So if a naming scheme makes your URL more human-readable or easier to understand, that is the kind of URL I prefer to create, keeping things simple. It also helps the consumers of your service, who can partially understand what a service does from its name alone.
See https://restfulapi.net/resource-naming/ for best practices.
For a RESTful service, can the noun be omitted and discarded?
Yes. REST doesn't care what spelling you use for your resource identifiers.
URL shorteners work just fine.
Choices of spelling are dictated by local convention, they are much like variables in that sense.
Ideally, the spellings are independent of the underlying domain and data models, so that you can change the models without changing the API. Jim Webber expressed the idea this way:
The web is not your domain, it's a document management system. All the HTTP verbs apply to the document management domain. URIs do NOT map onto domain objects - that violates encapsulation. Work (ex: issuing commands to the domain model) is a side effect of managing resources. In other words, the resources are part of the anti-corruption layer. You should expect to have many many more resources in your integration domain than you do business objects in your business domain.
Resources adapt your domain model for the web
That said, if you are expecting clients to discover URIs in your documentation (rather than by reading them out of well specified hypermedia responses), then it's going to be a good idea to use URI spellings that follow a simple conceptual model.
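As a hedged JAX-RS sketch of that last point, the server can advertise the URIs itself instead of relying on documented spellings; the "orders" relation, the Customer/CustomerStore types and the URI layout are illustrative assumptions.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("customers")
public class CustomerResource {

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response customer(@PathParam("id") long id, @Context UriInfo uriInfo) {
        Customer customer = CustomerStore.find(id);   // hypothetical lookup
        // The server decides what the order URI looks like and hands it out as a link.
        Link orders = Link.fromUriBuilder(uriInfo.getBaseUriBuilder()
                              .path("orders")
                              .queryParam("customer", id))
                          .rel("orders")
                          .build();
        return Response.ok(customer).links(orders).build();
    }
}

Clients then follow the advertised "orders" link instead of assembling the path themselves, which keeps the URL scheme an implementation detail of the server.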

Java - Using HttpServletRequest objects in MVC-patterned design

I understand that using HttpServletRequest and HttpServletResponse objects in an MVC-patterned web application has benefits, e.g. interrogating HTTP request details and setting useful attributes for the view (e.g. JSPs) to use. But is there any recommended practice for NOT using HttpServletRequest/Response objects in the design - and I mean in specific places, e.g.:
1) Service Layer/DAO
2) Model
Recently I had a code review from a peer who suggested that I remove the usage of HttpServletRequest/Response objects from the service layer. Besides the fact that the service layer is meant to be usable by everyone (i.e. one should not pass request/response objects around like that), I cannot see any other reason. Would anyone care to help and tickle my brain?
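For illustration, here is a minimal sketch of the kind of separation I think the reviewer means (UserService, UserProfile and the parameter names are just made-up examples): the servlet/controller unpacks HttpServletRequest, and the service layer only ever sees plain Java arguments.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ProfileController extends HttpServlet {

    private final UserService userService = new UserService();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        long userId = Long.parseLong(req.getParameter("id"));    // HTTP details stay in the web layer

        UserProfile profile = userService.loadProfile(userId);   // service knows nothing about HTTP

        req.setAttribute("profile", profile);
        req.getRequestDispatcher("/WEB-INF/profile.jsp").forward(req, resp);
    }
}

class UserService {
    UserProfile loadProfile(long userId) {
        // Plain-Java signature: reusable from unit tests, batch jobs or other transports.
        return new UserProfile(userId);
    }
}

class UserProfile {
    final long id;
    UserProfile(long id) { this.id = id; }
}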
KR,

GWT DefaultRequestTransport: when/why to extend?

I've seen several GWT code excerpts where the developer extended DefaultRequestTransport and gave it custom functionality. One such example is in this SO question regarding authentication/login filters. But I have seen several others besides this one example.
My question: when & why does someone need to extend this class and override its methods? (In other words, what does this class do, what services do its methods perform, and why would I need to customize them?)
In that one example, the createRequestCallback method was overridden. According to the Javadocs on that method, its purpose is to:
Create a RequestCallback that maps the HTTP response onto the TransportReceiver interface.
This is still sort of a cryptic explanation to me. Could someone please give me a layman's explanation for what scenarios it would be beneficial to extend this class and override 1+ of its methods?
RequestFactory does not depend on a specific "transport" mechanism; it deals with JSON representations of requests and responses but the way they're exchanged and transferred is out of scope, and deferred to a RequestTransport.
The DefaultRequestTransport uses a RequestBuilder to send requests to a given (but configurable) URL; because it uses RequestBuilder, it can only be used in a GWT client (to be compiled to JavaScript). There's also the UrlRequestTransport, which uses a java.net.HttpURLConnection and can be used by any client running in a JVM (a server making a call to another server, an Android application, a desktop Java application, etc.)
In theory (because I never tried it and never heard of anyone else trying it), you could make a RequestTransport that uses Comet or WebSockets, or whichever transport you'd like. Of course, the server side would have to be adapted too (SimpleRequestProcessor can easily be reused outside the RequestFactoryServlet; this is a similar separation of concerns).
Back to DefaultRequestTransport: it uses RequestBuilder and provides a few hooks that you can override to customize how it works. The most common use case is to intercept all requests to add some request header (e.g. credentials) and/or all responses to handle specific HTTP responses before decoding the JSON-encoded RequestFactory response (e.g. intercepting an "unauthorized" response to ask the user to sign in).
DefaultRequestTransport works as an adapter between the RequestFactory API and the RequestBuilder one, and createRequestCallback is one half of it responsible for adapting the response.
In the example shown they need to extend DefaultRequestTransport in order to inspect all RF server responses and catch the 401 status (SC_UNAUTHORIZED), which means that the request was rejected on the server side because the user does not have a valid session, and then redirect the user to the application login page.
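For reference, a hedged sketch of what that override might look like; the class name AuthAwareTransport and the /login.html target are made up.

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.Response;
import com.google.gwt.user.client.Window;
import com.google.web.bindery.requestfactory.gwt.client.DefaultRequestTransport;
import com.google.web.bindery.requestfactory.shared.RequestTransport.TransportReceiver;

public class AuthAwareTransport extends DefaultRequestTransport {

    @Override
    protected RequestCallback createRequestCallback(final TransportReceiver<String> receiver) {
        final RequestCallback delegate = super.createRequestCallback(receiver);
        return new RequestCallback() {
            @Override
            public void onResponseReceived(Request request, Response response) {
                if (Response.SC_UNAUTHORIZED == response.getStatusCode()) {
                    // The server rejected the call because there is no valid session:
                    // send the user to the login page instead of decoding the payload.
                    Window.Location.assign("/login.html");
                } else {
                    delegate.onResponseReceived(request, response);
                }
            }

            @Override
            public void onError(Request request, Throwable exception) {
                delegate.onError(request, exception);
            }
        };
    }
}

You would then pass an instance of it to requestFactory.initialize(eventBus, new AuthAwareTransport()) instead of relying on the default transport.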
I've used DefaultRequestTransport as well for changing the request URL (the default is set to gwtRequest), so that I can set filters based on the URL pattern: for instance, authenticated RF services go to /myapp/gwtRequest and non-authenticated RF services go to /myapp/anonymousRequest, etc.
I also have a customized RequestTransport using modified versions of RequestBuilder and XMLHttpRequest able to monitor onprogress events, very helpful for large requests.
You could extend it to send customized headers used for doing CORS authentication or whatever.
In summary RequestTransport is the way to modify the client transport layer of RF.

HATEOAS and dynamic discovery of API

The HATEOAS principle says: "Clients make state transitions only through actions that are dynamically identified within hypermedia by the server."
Now I have a problem with the word dynamically, though I guess it's the single most important word there.
If I change one of my parameters from, say, optional to mandatory in the API, I HAVE to fix my client, or else the request will fail.
In short, all HATEOAS does is give the server side developer extreme liberty to change the API at will, at the cost of all clients using his/her API.
Am I right in saying this, or am I missing something like versioning or maybe some other media-type than JSON which the server has to adopt?
Any time you change a parameter from optional to mandatory in an API, you will break consumers of that API. That it is a REST API that follows HATEOAS principles does not change this in any way. Instead, if you wish to maintain compatibility you should avoid making such changes; ensure that any call made or message sent by a client written against the old API will continue to function as expected.
On the other hand, it is also a good idea to not write clients to expect the set of returned elements to always be identical. They should be able to ignore additional information given by a server if the server chooses to provide it. Again, this is just good API design.
HATEOAS isn't the problem. Excessively rigid API expectations are the problem. HATEOAS is just part of the solution to the problem (as it potentially relieves clients from having to know vast amounts about the state model of the service, even if it doesn't necessarily make it straight-forward).
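To make the "ignore additional information" advice concrete, here is a hedged sketch using Jackson's @JsonIgnoreProperties (my choice of library, not something from the question); the OrderDto fields and the payload are invented.

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

@JsonIgnoreProperties(ignoreUnknown = true)   // fields the server adds later are silently skipped
class OrderDto {
    public String id;
    public String status;
}

public class TolerantReaderExample {
    public static void main(String[] args) throws Exception {
        // The server now sends an extra "trackingUrl" field; the client keeps working.
        String payload = "{\"id\":\"111\",\"status\":\"SHIPPED\",\"trackingUrl\":\"http://example.com/t/1\"}";
        OrderDto order = new ObjectMapper().readValue(payload, OrderDto.class);
        System.out.println(order.id + " " + order.status);
    }
}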
Donal Fellows has a good answer, but there's another side to the same coin. The HATEOAS principle doesn't have anything to say about the format of your messages (other parts of REST do); instead, it means essentially that the client should not try to know out of band which URIs to act upon. Instead, the server should tell the client which URIs are of interest via hyperlinks (or forms/templates which construct hyperlinks). How it works:
The client starts at state 0.
The client requests a well-known resource.
The server's response moves the client to a new state N. There may be multiple states achievable at this point depending on the response code and payload.
The response includes links (or forms/templates) which tell the client, in band, the set of potential next states.
The client selects one of the potential next states by issuing a method on a URI.
Repeat steps 3 through 5 for states N+1 and beyond, until the client's application needs are met.
In this fashion, the server is free to change the URI that moves the client from state N to state N+1 without breaking the client.
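A hedged sketch of that loop with the JAX-RS 2.0 client API; the entry-point URI and the "orders" link relation are invented.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.Response;

public class HypermediaClient {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();

        // Step 2: ask for the single well-known entry point.
        Response entry = client.target("https://api.example.com/")
                               .request("application/json")
                               .get();

        // Step 4: the response itself advertises the possible next states as links.
        Link orders = entry.getLink("orders");

        // Step 5: pick one; the URI comes from the server, never from the client's code,
        // so the server can change it between releases without breaking this client.
        Response ordersResponse = client.invocation(orders).get();
        System.out.println(ordersResponse.getStatus());

        client.close();
    }
}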
It seems to me that you misunderstood the quoted principle. Your question suggests that you think about the resources and that they could be "dynamically" defined, like a mandatory property added to a certain resource type at application runtime. This is not what the principle says, and this was correctly pointed out in other answers. The quoted principle says that the actions within the hypermedia should be dynamically identified.
The actions available for a given resource may change over time (e.g. because someone added or removed a relationship in the meantime), and there may be different actions available for the same resource for different users (e.g. because users have different authorization levels). The idea of HATEOAS is that clients should not make any assumptions about the actions available for a certain resource at any given time. The client should identify the available actions each time it reads that resource.
Edit: The below paragraph may be wrong. See the comments for discussion about it.
On the other hand, clients may have expectations about the data available in the resource - for example, that a book resource must have a title and that there may be links to the book's author or authors. There is no way of avoiding the coupling introduced by these assumptions, but both service providers and clients should use backward-compatibility and versioning techniques to deal with it.

How do RESTful and SOAP Web Services differ in practice?

I am implementing web services for a PHP application and am trying to understand what both standard web services and RESTful web services have to offer.
My intent is to write wrapper code to abstract away the web service details so that developers can just "instantiate remote objects" and use them.
Here are my thoughts, perhaps some of you could add your experience and expand this:
RESTful Web Services
are basically just "XML feeds on demand", so e.g. you could write wrapper code for a client application so it could query the server application in this way:
$users = Users::getUsers("state = 'CO'");
this would in turn get an XML feed from a remote URL
$users could be made into a collection of full User objects, or
left as XML, or
turned into an array, etc.
the query script ("state = 'CO'") would be translated into SQL on the server side
RESTful Web Services are in general read-only from the client's point of view, although you could write code which uses POST or GET to make changes on the server, e.g. passing an encrypted token for security:
$users = Users::addUser($encryptedTrustToken, 'jim',$encryptedPassword, 'James', 'Taylor');
and this would create a new user on the server application.
Standard Web Services
Standard Web Services in the end basically do the same thing. The one advantage they have is that clients can discover their details via WSDL. But other than that, if I want to write wrapper code which allows a developer to instantiate, edit and save objects remotely, I still need to implement the wrapper code. SOAP doesn't do any of that for me; it can do this:
$soap = new nusoap_client('http://localhost/test/webservice_user.php?wsdl', true);
$user = $soap->getProxy();
$lastName = $user->lastName();
but if I want to edit and save:
$user->setLastName('Jones');
$user->save();
then I need to e.g. handle all of the state on the server side; SOAP doesn't seem to hold that object on the server side for each client.
Perhaps there are limitations in the PHP SOAP implementation I am using (nusoap). Perhaps Java and .NET implementations do much more.
Will enjoy hearing your feedback to clear up some of these clouds.
They are different models... REST is data-centric, whereas SOAP is operation-centric. I.e. with SOAP you tend to have discrete operations such as "SubmitOrder", etc.; but with REST you are typically a lot more fluid about how you query the data.
SOAP also tends to be associated with a lot more complexity (which is sometimes necessary) - REST is back to POX etc, and YAGNI.
In terms of .NET, tools like "wsdl.exe" will give you a full client-side proxy library to represent a SOAP service (or "svcutil.exe" for a WCF service):
var someResult = proxy.SubmitOrder(...);
For REST with .NET, I guess ADO.NET Data Services is the most obvious player. Here, the tooling (datasvcutil.exe) will give you a full client-side data-context with LINQ support. LINQ is .NET's relatively new way of forming complex queries. So something like (with strong/static type checking and intellisense):
var qry = from user in ctx.Users
where user.State == 'CO'
select user;
(this will be translated to/from the appropriate REST syntax for ADO.NET Data Services)
I'm echoing what Marc Gravell mentioned. When people ask me about REST (and they usually have an idea about SOAP and SOA), I will tell them REST = ROA as it is resource/data oriented, it's a different paradigm and therefore has different design concerns.
For your case, if I'm reading you correctly, you want to avoid writing the wrapper code and need a solution that can store objects and their attributes remotely (having them completely hidden from the developers). I can't really suggest a better solution. Let me know if either of these ever comes close to meeting your requirements:
EJB3 / JPA
CouchDB (REST/JSON)
Let me know too if I've interpreted your question wrongly.
If we compare XML-RPC and SOAP, the latter gives you better handling of data types; with the former you will be dealing with simpler data types, but you will have to write a layer or adapter to convert them to your domain objects.
yc
SOAP is just a set of agreed-upon XML schemas blessed by a standards group. Its strength is that it was designed for interoperability and it supports many enterprise-class features. SOAP on any platform will not provide the operations you are looking for. You need to design and implement the service contract to get those features.
Sounds like you want remote objects, which neither SOAP nor REST is particularly good for. Maybe you'd be better off looking at XML-RPC.
The key differences are basically tooling.
Many of the high end SOAP stacks shroud the vast bulk of the SOAP infrastructure from the developer, to where you are working pretty much solely with DTO's/Documents and RPC.
REST over HTTP puts more of that burden upon you the developer (negotiating formats [XML, JSON, HTTP POST], parsing results [DOM, maps, DTO marshalling, etc.]).
Obviously, you can write a layer of logic to make dealing with this detail easier. And some REST frameworks have arrived to make this easier, but at the moment, it's still a task on your TODO list when you wish to use or consume such services.
My feedback is that if you want remote state, you are not talking about web services. I don't know of any contemporary model that deals with remote state. I think that if you need remote state, you are on your own (without a religion to follow). Just throw XML from here to there and don't mind the theory.
I have to add that remote state is evil. Avoid it if you can.
