GWT RequestFactory client scenarios - java

My understanding is that the GWT RequestFactory (RF) API is for building data-oriented services whereby a client-side entity can communicate directly with its server-side DAO.
My understanding is that when you fire a RF method from the client-side, a RequestFactoryServlet living on the server is what first receives the request. This servlet acts like a DispatchServlet and routes the request on to the correct service, which is tied to a single entity (model) in the data store.
I'm used to writing servlets that might pass the request on to some business logic (like an EJB), and then compute some response to send back. This might be a JSP view, some complicated JSON (Jackson) object, or anything else.
In all the RF examples, I see no sign of such servlets, and I'm wondering if they even exist in GWT-RF land. If the RequestFactoryServlet automagically routes requests to the correct DAO and method, and the DAO method's return value is what comes back in the response, then I can see a scenario where GWT RF doesn't even utilize traditional servlets. (1) Is this the case?
Regardless, there are times in my GWT application where I want to hit a specific url, such as http://www.example.com?foo=bar. (2) Can I use RF for this, and if so, how?
I think if I could see two specific examples, side-by-side of GWT RF in action, I'd be able to connect all the dots:
Scenario #1 : I have a Person entity with methods like isHappy(), isSad(), etc. that would require interaction with a server-side DAO; and
Scenario #2 : I want to fire an HTTP request to http://www.example.com?foo=bar and manually inspect the HTTP response
If it's possible to accomplish both with the RF API, that would be my first preference. If the latter scenario can't be accomplished with RF, then please explain why and what is the GWT-preferred alternative. Thanks in advance!

1. RequestFactory works not only with entities but also with services, so you can define any service on the server side with methods that you call from the client. Note that RF services can only deal with certain types (primitives, boxed primitives, sets, lists and RF proxies):
@Service(value = RfService.class, locator = RfServiceLocator.class)
public interface TwService extends RequestContext {
    Request<String> parse(String value);
}

public class RfService {
    public String parse(String value) {
        return value.replace("a", "b");
    }
}
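A hedged sketch of the client side (the factory interface name MyRequestFactory is an assumption, not something from the code above): you obtain the request context from a RequestFactory instance, queue the call and fire it with a Receiver.

public interface MyRequestFactory extends RequestFactory {
    TwService twService();
}

// somewhere in client-side code
MyRequestFactory factory = GWT.create(MyRequestFactory.class);
factory.initialize(new SimpleEventBus());

factory.twService().parse("banana").fire(new Receiver<String>() {
    @Override
    public void onSuccess(String result) {
        // result is "bbnbnb" for this toy service
    }
});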
2. RF is not designed to receive message payloads other than those the RF servlet produces, and the most you can do on the client side with RF is point it at services hosted on a different site (when you deploy your server and client sides on different hosts).
You can use other mechanisms in the GWT world to get data from other URLs; take a look at gwtquery's Ajax and data-binding, or this article.
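For scenario #2, plain GWT RequestBuilder (outside of RF) is another such mechanism. A minimal sketch, assuming the target URL is same-origin or CORS-enabled:

// uses com.google.gwt.http.client.{RequestBuilder, RequestCallback, Request, Response, RequestException}
RequestBuilder builder =
        new RequestBuilder(RequestBuilder.GET, "http://www.example.com?foo=bar");
try {
    builder.sendRequest(null, new RequestCallback() {
        public void onResponseReceived(Request request, Response response) {
            // manually inspect the HTTP response
            int status = response.getStatusCode();
            String body = response.getText();
        }

        public void onError(Request request, Throwable exception) {
            // transport-level failure
        }
    });
} catch (RequestException e) {
    // the request could not be initiated
}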

Related

CometD Secure Requests

I am using CometD and I have a service setup (Java on the server side) as follows:
http://localhost:8086/service/myService/get-player-details?params={id:1234}
This works fine in practice, but what concerns me is that any user can query my service using the above URL and retrieve another player's details.
What would be the suggested way of guarding against such an issue? Would authorizers be the correct approach?
If the URL you posted is mapped to CometD, then I strongly discourage you from using that kind of URL to pass information such as params in the URL.
First, this will not work if you use other transports that are not HTTP, such as WebSocket.
Second, as you note, that URL may expose information that you don't want to expose.
I recommend that you change the way you retrieve information from the server to not use URLs but only messages.
If all your communication with the server happens via messages, CometD on the server already validates that the message comes from a client that was allowed to handshake. You just need to enforce the right authentication checks at handshake time using a SecurityPolicy, as explained in the authentication section.
The messages will have this form, using the CometD JavaScript client library:
cometd.publish("/service/player", { action:"get", playerId: 1234 });
There may be variations of this pattern where you want to put the "action" into the channel itself, for example:
cometd.publish("/service/player/get", { playerId: 1234 });
In this way, you have more, smaller services (each responding to a different channel and therefore to a different action), which may be desirable.
Reading the examples of the services section may give you additional information.
I don't recommend putting the playerId into the channel, for two reasons:
to avoid creating too many channels
to have this information promptly available in the code, so you don't need to parse the channel (although CometD supports the use of parameters in channels); parsing is more costly than just doing message.getDataAsMap().get("playerId").
To send the response to the client, the server can just call:
import java.util.Map;

import org.cometd.annotation.Listener;
import org.cometd.annotation.Service;
import org.cometd.annotation.Session;
import org.cometd.bayeux.server.LocalSession;
import org.cometd.bayeux.server.ServerMessage;
import org.cometd.bayeux.server.ServerSession;

@Service
public class GetPlayer
{
    @Session
    private LocalSession sender;

    @Listener("/service/player/get")
    public void perform(ServerSession session, ServerMessage message)
    {
        // retrievePlayerInfo() is application-specific
        Map<String, Object> player = retrievePlayerInfo(message.getDataAsMap().get("playerId"));
        session.deliver(sender, message.getChannel(), player);
    }
}
Note usage of ServerSession.deliver() to return the response to that specific client.
The above guarantees (with a proper SecurityPolicy) that only authenticated clients can send and receive messages.
What you need to do now is put in place the right authorizations, in particular so that player 123 cannot act as player 789 by hacking the CometD messages it sends.
This is the job for Authorizers; see the section and the examples in the Authorizers documentation.
What you must do is establish a relationship between the authenticated user and the playerIds she is allowed to see. That is application specific, and it is the core of your Authorizer implementation.
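A rough sketch of what such an Authorizer could look like for the channel above; the ownership check userOwnsPlayer() is a hypothetical, application-specific method, not part of CometD:

import org.cometd.bayeux.ChannelId;
import org.cometd.bayeux.server.Authorizer;
import org.cometd.bayeux.server.ServerMessage;
import org.cometd.bayeux.server.ServerSession;

public class PlayerAuthorizer implements Authorizer
{
    @Override
    public Result authorize(Operation operation, ChannelId channel,
                            ServerSession session, ServerMessage message)
    {
        // only interested in publishes; leave other operations to other authorizers
        if (operation != Operation.PUBLISH)
            return Result.ignore();

        Object playerId = message.getDataAsMap().get("playerId");
        // compare against whatever was associated with the session at handshake time
        if (userOwnsPlayer(session, playerId))
            return Result.grant();

        return Result.deny("player_not_owned");
    }

    private boolean userOwnsPlayer(ServerSession session, Object playerId)
    {
        // application-specific lookup: which playerIds may this user see?
        return false;
    }
}

The authorizer would then be registered on the relevant channel (e.g. via ConfigurableServerChannel.addAuthorizer()) at initialization time.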
With proper SecurityPolicy and Authorizers in place, your application is safe from the concerns of your question.
Strictly speaking, Authorizers may be enough, but typically if you want an authorization policy to be enforced, you also need authentication, which is provided by the SecurityPolicy.

Spring: How to export object methods via a non-HTTP protocol

I have a collection of stateless scala/Java APIs, which might look like:
class UserService {
  def get(id: UserIDType): User = { ... }
  def update(user: User): User = { ... }
  ...
}
where User has a set of inspectable bean properties. I'd like to make these same APIs available not only over HTTP (which I know how to do), but also over other, more performant non-HTTP protocols (ideally running in the same process as the HTTP server). More importantly, I'd like to automate as much as possible, including generation of client APIs that match the original Java API and the dispatching of method calls when network requests arrive.
I've found Spring's guide on remoting.
However, this looks limited to HTTP (text, not binary). What I'd really love is a library/other method that:
lets me scan for registered/annotated services and describe their methods and parameters as data
lets me easily dispatch method calls (so that I'm in control of the sockets, communication protocols and whatnot, and can choose a more performant one than HTTP).
i.e. the kinds of things that Spring's DispatcherServlet does internally, but without the HTTP limitations.
Ideas?
Background
I'd like to make a set of stateless service API calls available as network services with the following goals:
Some of the APIs should be available as REST calls from web pages/mobile clients
A superset of these APIs should be available internally within our company (e.g. from C++ libraries, python libraries) in a way that is as high-performance (read low-latency) as possible, particularly when the service and client are:
co-located in the same process, or
co-located on the same machine.
automate wrapper code for the client and server. If I add a method to a service API, or an attribute to a class, no one should have to write additional code in the client or server. (e.g. I hit a button and a new client API that matches the original Java/scala service is auto-generated).
(1) is easily achievable with Spring MVC. Without duplicating classes I can simply mark up the service (i.e. the service is also a controller):
@Controller
@Service
@RequestMapping(...)
class UserService {
  @RequestMapping(...)
  def get(@PathVariable("id") id: UserIDType): User = { ... }

  @RequestMapping(...)
  def update(@RequestBody user: User): User = { ... }

  ...
}
With tools like Swagger, (3) is easily achievable (I can trivially read the Swagger-generated JSON spec and auto-generate C++ or Python code that mimics the original service class, its methods, parameter names and the POJO parameter types).
However, HTTP is not particularly performant, so I'd like to use a different (binary) protocol here. Trivially, I can use this same spec to generate a Thrift .idl or even directly generate client code to talk Thrift/Protocol Buffers (or any other binary protocol) while also providing the client an API that looks just like the original Java/scala one.
Really -- the only tricky part is getting a description of the service methods and then actually dispatching method calls. So -- are there libraries to help me do that?
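To make the ask concrete, here is roughly the shape of thing I'm after (a hedged sketch in plain Java with hypothetical names, using reflection over Spring-discovered beans), which I'd rather get from a library than hand-roll:

import java.lang.reflect.Method;
import java.util.Map;

import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Service;

public class ReflectiveDispatcher {

    private final Map<String, Object> services;

    public ReflectiveDispatcher(ApplicationContext context) {
        // Spring already knows which beans carry the @Service annotation
        this.services = context.getBeansWithAnnotation(Service.class);
    }

    // serviceName, methodName and args would come off whatever wire protocol I choose
    public Object dispatch(String serviceName, String methodName, Object... args)
            throws Exception {
        Object service = services.get(serviceName);
        for (Method method : service.getClass().getMethods()) {
            if (method.getName().equals(methodName)
                    && method.getParameterCount() == args.length) {
                return method.invoke(service, args);
            }
        }
        throw new NoSuchMethodException(serviceName + "." + methodName);
    }
}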
Notes:
I'm aware that mixing service and controller annotations disgusts some Spring MVC purists. However, my goal really is to have a logical API, and automate all of the service embodiments of it (i.e. the controller part is redundant boilerplate that should be auto-generated). I'd REALLY rather not spend my life writing boilerplate code.

GWT-RPC or RequestFactory for Authentication?

I'm trying to build a login screen for my GWT app. When you click the login button, the credentials (username & password) need to be sent to the server for authentication. I'm wondering what server communication method is a best fit for this: GWT-RPC or RequestFactory.
My understanding is that RequestFactory is more efficient and recommended over GWT-RPC, but it's more of a data/entity/persistence framework than a request-response framework like RPC. So although many GWT aficionados recommend using RequestFactory over GWT-RPC, I don't think RequestFactory can be used for this scenario. After all, I don't want to CRUD a login request, I want to send credentials to a server, perform secured authentication, and return a response to the client.
Am I correct? Is GWT-RPC my only option? Or
can RequestFactory be used? If so, how (I'd need to see a code example of both the client and server code)?
You can use either. Although RF is mostly used with EntityProxy, it is also designed to work with ValueProxy, which lets you transmit any type. RF also facilitates the execution of remote procedures that take proxy types or primitive types.
That said, I would use whichever technology your app already uses primarily. If you are using RPC, send your login/password in an RPC request; if you are using RF, use that, so you don't mix things (although you can mix RF, RPC, and plain Ajax without problems).
Be aware that, normally, applications requiring authentication use a filter to check whether the user has a valid session on each RPC or RF request, so the login request itself has to bypass the auth filter somehow.
Regarding security, both scenarios are the same: you have to make the request in an HTTPS-enabled environment.
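A minimal sketch of such a filter; the session attribute name ("user") and the way the login request is identified (a hypothetical action=login parameter) are application-specific assumptions, not anything mandated by GWT:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class AuthFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        HttpSession session = request.getSession(false);
        boolean loggedIn = session != null && session.getAttribute("user") != null;
        // hypothetical marker that lets the login call bypass the check
        boolean isLoginRequest = "login".equals(request.getParameter("action"));

        if (loggedIn || isLoginRequest) {
            chain.doFilter(req, res);
        } else {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }

    @Override
    public void init(FilterConfig config) {
    }

    @Override
    public void destroy() {
    }
}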
[EDIT]
This could be the client interface and the remote implementation of an RF call for login; as you can see it is really easy, and you can add any method you need to these classes:
@Service(value = LoginUserService.class)
public interface LoginUserRequest extends RequestContext {
    Request<Boolean> login(String username, String password);
}

public class LoginUserService {
    // With static methods you don't need to provide a Locator for the service
    static Boolean login(String username, String password) {
        return true;
    }
}
Regarding auth filters for RF, you can take a look at this: GWT RequestFactory authentication functions
With both technologies you can send such information to the server side, but as you already pointed out, RequestFactory is dedicated to entity management. In your case it is better to use GWT-RPC because, to simply send the credentials to the server and retrieve the authentication result, you don't need the RequestFactory extras (delta transmission, entity management).
For authentication, I would (almost) always use RequestBuilder, i.e. a simple HTTP(S!) POST. Why? Because this way you can implement a general authentication mechanism that can be used not only by GWT apps. You gain the flexibility to add a simple HTML login page, single sign-on, standard server-side mechanisms (e.g. Spring Security), etc.
A simple GWT re-login dialog is also no problem with RequestBuilder - to submit just username/password, GWT-RPC or RF is simply not necessary.
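A minimal sketch of that approach, assuming a hypothetical login endpoint on the same origin that accepts a form-encoded POST and answers 200 on success (the URL, payload format and the username/password variables are assumptions, not part of any GWT API):

// uses com.google.gwt.core.client.GWT and com.google.gwt.http.client.*
RequestBuilder builder =
        new RequestBuilder(RequestBuilder.POST, GWT.getHostPageBaseURL() + "login");
builder.setHeader("Content-Type", "application/x-www-form-urlencoded");
String payload = "username=" + URL.encodeQueryString(username)
        + "&password=" + URL.encodeQueryString(password);
try {
    builder.sendRequest(payload, new RequestCallback() {
        public void onResponseReceived(Request request, Response response) {
            boolean authenticated = response.getStatusCode() == 200;
            // show the app or the login error accordingly
        }

        public void onError(Request request, Throwable exception) {
            // transport-level failure
        }
    });
} catch (RequestException e) {
    // the request could not be sent
}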

"Sessions" with Google Cloud Endpoints

This question is only to confirm that I'm clear about this concept.
As far as I understand, Google Cloud Endpoints are kind of Google's implementation of REST services, so that they can't keep any "session" data in memory, therefore:
Users must send authentication data with each request.
All the data I want to use later on must be persisted, namely, with each API request I receive, I have to access the Datastore, do something and store the data again.
Is this correct? And if so, is this actually good in terms of performance?
Yes, you can use the session; just add another parameter of type HttpServletRequest to your API method:
@ApiMethod
public MyResponse getResponse(HttpServletRequest req, @Named("infoId") String infoId) {
    // Use 'req' as you would in a servlet, e.g.
    String ipAddress = req.getRemoteAddr();
    ...
}
The datastore is pretty quick, especially if you do a key lookup (as opposed to a query). If you use NDB you get the benefit of automatically memcaching your lookups.
Yes, your Cloud Endpoints API backend code (Java or Python) is still running on App Engine, so you have the same access to all resources you would have on App Engine.
Though you can't set client-side cookies for sessions, you can still obtain a user for a request and store user-specific data in the datastore. As @Shay Erlichmen mentioned, if you couple the datastore with memcache and an in-context cache (as ndb does), you can make these lookups very quick.
To do this in either Python or Java, either allowed_client_ids or audiences will need to be specified in the annotation/decorator on the API and/or on the method(s). See the docs for more info.
Python:
If you want to get a user in Python, call
endpoints.get_current_user()
from within a request that has been annotated with allowed_client_ids or audiences. If this returns None, then there is no valid user (and you should return a 401).
Java:
To get a user on an annotated method (or a method contained in an annotated API), simply add a User parameter to the method signature:
import com.google.appengine.api.users.User;
...
public Model insert(Model model, User user)
        throws OAuthRequestException, IOException {
and as in Python, check if user is null to determine if a valid OAuth 2.0 token was sent with the request.
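A hedged sketch of that check; throwing UnauthorizedException (from the Endpoints SPI) is one way to produce the 401, and the method body beyond the check is omitted:

import java.io.IOException;

import com.google.api.server.spi.response.UnauthorizedException;
import com.google.appengine.api.oauth.OAuthRequestException;
import com.google.appengine.api.users.User;

public Model insert(Model model, User user)
        throws OAuthRequestException, IOException, UnauthorizedException {
    if (user == null) {
        // no valid OAuth 2.0 token was sent with the request
        throw new UnauthorizedException("Invalid credentials");
    }
    // ... proceed with the insert for the authenticated user ...
    return model;
}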

JAXWS and sessions

I'm fairly new to writing web services. I'm working on a SOAP service using JAXWS. I'd like to be able to have users log-in and in my service know which user is issuing a command. In other words, have some session handling.
One way I've seen to do this is to use cookies and access the HTTP layer from my web service. However, this puts a dependency on using HTTP as the transport layer (I'm aware HTTP is almost always the transport layer but I'm a purist).
Is there a better approach which keeps the service layer unaware of the transport layer? Is there some way I can accomplish this with servlet filters? I'd like the answer to be as framework agnostic as possible.
I'm working on a SOAP service using JAXWS. I'd like to be able to have users log-in and in my service know which user is issuing a command. In other words, have some session handling.
Conventional web services are stateless in nature; there is no session handling in web services (which has, by the way, nothing to do with identifying the caller).
If you want to require your users to be authenticated to call a service, the traditional approach is to:
Expose an "authentication" web service (passing user credentials) that returns an authentication token.
Have the users call this authentication service first.
Have the users pass the token in a custom header on subsequent calls to "business" web services.
On the server side:
Reject any call that doesn't contain a valid token.
Invalidate tokens after some time of inactivity.
You can implement a custom solution for this approach (this is a highly interoperable solution), or you can use WS-Security/UsernameToken, which provides something similar out of the box. WS-Security is a standard (Metro implements it); it isn't "framework" specific.
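A rough sketch of how the custom-token variant could look on the server as a JAX-WS handler; the header QName and the isValidToken() check are assumptions for illustration, not part of any standard API:

import java.util.Collections;
import java.util.Iterator;
import java.util.Set;

import javax.xml.namespace.QName;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPHeader;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

public class TokenHandler implements SOAPHandler<SOAPMessageContext> {

    private static final QName TOKEN_HEADER =
            new QName("http://example.com/auth", "AuthToken");

    @Override
    public boolean handleMessage(SOAPMessageContext ctx) {
        boolean outbound = (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (outbound) {
            return true; // only inspect inbound requests
        }
        try {
            SOAPHeader header = ctx.getMessage().getSOAPHeader();
            if (header != null) {
                Iterator<?> it = header.getChildElements(TOKEN_HEADER);
                if (it.hasNext()) {
                    String token = ((SOAPElement) it.next()).getValue();
                    if (isValidToken(token)) {
                        return true;
                    }
                }
            }
        } catch (SOAPException e) {
            // fall through to rejection
        }
        throw new RuntimeException("Missing or invalid authentication token");
    }

    private boolean isValidToken(String token) {
        // application-specific: look up the token issued by the authentication service
        return token != null && !token.isEmpty();
    }

    @Override
    public boolean handleFault(SOAPMessageContext ctx) {
        return true;
    }

    @Override
    public void close(MessageContext ctx) {
    }

    @Override
    public Set<QName> getHeaders() {
        return Collections.singleton(TOKEN_HEADER);
    }
}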
As you mention, servlet filters can provide the basis of a solution. Use a filter to store the current session details (e.g. the session context Map) in thread-local storage. This is implemented as a class in your application, so it is transport agnostic. Your service simply uses a static method to fetch the current context, unaware of where it came from.
E.g.
class ServiceSessionContext
{
    static ThreadLocal<Map> local = new ThreadLocal<Map>();

    // context set by the transport layer, e.g. a servlet filter
    static public void setContext(Map map)
    {
        local.set(map);
    }

    // called when the request is complete
    static public void clearContext()
    {
        local.remove();
    }

    // context fetched by the service
    static public Map getContext()
    {
        return local.get();
    }
}
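And a minimal sketch of the filter side described above; the "sessionContext" session attribute name is an assumption for illustration:

import java.io.IOException;
import java.util.Map;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionContextFilter implements Filter
{
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException
    {
        HttpSession session = ((HttpServletRequest) req).getSession(false);
        Map context = session == null ? null : (Map) session.getAttribute("sessionContext");
        ServiceSessionContext.setContext(context);
        try
        {
            chain.doFilter(req, res);
        }
        finally
        {
            // always clear, so a pooled thread doesn't leak another user's context
            ServiceSessionContext.clearContext();
        }
    }

    @Override
    public void init(FilterConfig config)
    {
    }

    @Override
    public void destroy()
    {
    }
}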
