GWT manually serialize domain object on server - java

The first thing my GWT app does when it loads is request the current logged in user from the server via RequestFactory. This blocks because I need properties of the User to know how to proceed. This only takes < 500ms, but it really annoys me that the app is blocked during this time. I already have the User on the server when the jsp is generated, so why not just add the serialized User to the jsp and eliminate this request altogether?
I have two problems keeping me from doing this:
I need to transform User to UserProxy
I need to serialize UserProxy in a way that is easy for GWT to deserialize.
I have not figured out a good way to do #1. The logic appears to be buried in ServiceLayerDecorator without an easy way to isolate it, though I may be wrong here.
The second seems easier via ProxySerializer, but how do I get my hands on the RequestFactory when I am on the server? You cannot call GWT.create on the server.
I have been looking into AutoBeans, but this does not handle #1 above. My UserProxy has references to collections of other EntityProxy objects that I would like to maintain.

It is possible using AutoBeans if you create an AutoBeanFactory for your proxies:
To transform User to UserProxy:
Create a server-side RequestFactory and invoke the same request you would normally issue from the client; the response will contain the UserProxy (but on the server). A usage sketch follows the factory method below.
To serialize UserProxy:
AutoBean<UserProxy> bean = AutoBeanUtils.getAutoBean(receivedUserProxy);
String json = AutoBeanCodex.encode(bean).getPayload();
To deserialize UserProxy on client:
AutoBean<UserProxy> bean = AutoBeanCodex.decode(userAutoBeanFactory, UserProxy.class, json);
Creating an in-process RequestFactory on the server (tutorial):
// imports: com.google.web.bindery.requestfactory.server.ServiceLayer,
//          com.google.web.bindery.requestfactory.server.SimpleRequestProcessor,
//          com.google.web.bindery.requestfactory.server.testing.InProcessRequestTransport,
//          com.google.web.bindery.requestfactory.vm.RequestFactorySource,
//          com.google.web.bindery.event.shared.SimpleEventBus
public static <T extends RequestFactory> T create( Class<T> requestFactoryClass ) {
    ServiceLayer serviceLayer = ServiceLayer.create();
    SimpleRequestProcessor processor = new SimpleRequestProcessor( serviceLayer );
    T factory = RequestFactorySource.create( requestFactoryClass );
    factory.initialize( new SimpleEventBus(), new InProcessRequestTransport( processor ) );
    return factory;
}
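As a sketch of the first step, the factory can then be used on the server like a normal RequestFactory. The names MyRequestFactory, userRequest() and findCurrentUser() below are placeholders for your own factory and request context, not part of the original answer:
MyRequestFactory rf = create( MyRequestFactory.class );
// com.google.web.bindery.requestfactory.shared.Receiver
rf.userRequest().findCurrentUser().fire( new Receiver<UserProxy>() {
    @Override
    public void onSuccess( UserProxy userProxy ) {
        // with InProcessRequestTransport the call completes synchronously,
        // so userProxy can be handed straight to the serialization code below
    }
} );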

You could use AutoBeans for this as well if you are able to make User implement UserProxy. It works because proxies are interfaces with getters/setters:
interface UserFactory extends AutoBeanFactory
{
    // wrap an existing instance in an AutoBean
    AutoBean<UserProxy> user(UserProxy toWrap);
}
Then on the server you can create the AutoBean and serialize it to JSON:
UserFactory factory = AutoBeanFactorySource.create(UserFactory.class);
AutoBean<UserProxy> userProxyBean = factory.user( existingUserPojo );
// convert the AutoBean to JSON
String json = AutoBeanCodex.encode(userProxyBean).getPayload();
On the client you can just use AutoBeanCodex.decode to deserialize the JSON back into a bean.

You cannot call GWT.create on the server (or from any real JVM), but in many cases you can call a JVM-compatible method designed for server use instead. In this case, take a look at RequestFactorySource.create.
It can be a little messy to get the server to read from itself and print out data using RequestFactory. Here is a demo of how this can work (using GWT 2.4; the main branch has the same thing for 2.3 or so): https://github.com/niloc132/tvguide-sample-parent/blob/gwt-2.4.0/tvguide-client/src/main/java/com/acme/gwt/server/TvViewerJsonBootstrap.java - not quite the same thing that you are after, but it may be possible to use the same idea to populate a string in a proxy store that can be read in the client (seen here: https://github.com/niloc132/tvguide-sample-parent/blob/gwt-2.4.0/tvguide-client/src/main/java/com/acme/gwt/client/TvGuide.java).
The basic idea is to create a request (including ids, invocations, and with() arguments, so the proxy builder makes all the right pieces in a consistent way) and pass it into a SimpleRequestProcessor instance, which will then run it through the server pieces it normally would. (Any entity management system should probably still have the entities cached, to avoid an additional lookup; otherwise you need to model some of the work SimpleRequestProcessor does internally.) The ProxySerializer, which wraps a ProxyStore, expects full RequestFactory messages as sent from the server, so a fair bit of message bookkeeping needs to be done correctly.

I found the answer on the GWT Google Group. All credit goes to Nisha Sowdri NM.
Server side encoding:
// "requests" is a RequestFactory instance (on the server, e.g. the in-process factory from above)
DefaultProxyStore store = new DefaultProxyStore();
ProxySerializer ser = requests.getSerializer(store);
final String key = ser.serialize(userProxy);
String message = key + ":" + store.encode();
Client side decoding:
String[] parts = message.split(":", 2);
ProxyStore store = new DefaultProxyStore(parts[1]);
ProxySerializer ser = requests.getSerializer(store);
UserProxy user = ser.deserialize(UserProxy.class, parts[0]);
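To wire this into the JSP as the question intends, one approach (an assumption, not part of the original answer) is to print the message into a JavaScript variable in the host page and read it with com.google.gwt.i18n.client.Dictionary:
// assumes the JSP emitted: <script>var bootstrap = { "user" : "<key>:<payload>" };</script>
// (the payload must be escaped as a JavaScript string when printed)
Dictionary dict = Dictionary.getDictionary("bootstrap");
String message = dict.get("user");
// then decode "message" exactly as shown above, with no server round trip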

Related

Creating JsonLd + Hydra based Generic Client API in java. Is there any projects exist for reference?

I am creating a client API in Java using: the Apache Jena framework + Hydra (for the hypermedia-driven part) + my private vocab, similar to the Markus Lanther Event-API vocab, instead of schema.org (for the ontology/vocabulary part).
Section 1:
After looking at the Markus Lanther EventDemo repo and hydra-java, I found that they create classes for each hydra:Class, which can break the client in the future. For example:
A Person class (Person.java)
public class Person
{
    String name;
}
But a future requirement might turn name into a class of its own, e.g.:
public class Name
{
    String firstName;
    String lastName;
}
So to fulfill this requirement I would have to update the Person class like this:
public class Person
{
    Name name;
}
Question 1:
Is my understanding of this section correct? If yes, what is the way to deal with this part?
Section 2:
To avoid the above problem I created a GenericResource class (GenericResource.java):
public class GenericResource
{
    private Model model;

    public void addProperty(String propertyName, Object propertyValue)
    {
        // the caller passes only the short name, e.g. "name", which is mapped to "myvocab:name"
        propertyName = "myvocab:" + propertyName;
        // some logic to add propertyName and propertyValue to the model
    }

    public GenericResource retrieveProperty(String propertyName)
    {
        propertyName = "myvocab:" + propertyName;
        // some logic to query propertyName from the model, add the result
        // to a new GenericResource object and return it
        return null; // stub
    }

    public GenericResource performAction(String actionName, String postData)
    {
        // some logic to make the HTTP call and return the response
        return null; // stub
    }
}
But again I am stuck with several problems:
Problem 1: It is not necessary that every propertyName is mapped to myvocab:propertyName. Some may be mapped to some other vocab, e.g. hydra:propertyName, schema:propertyName, rdfs:propertyName, newVocab:propertyName, etc.
Problem 2: How do I validate whether this propertyName belongs to this class?
Suggestion: put a type field/variable in the GenericResource class, and then check supportedProperty in the vocab corresponding to that class. For more clarity, assume the above Person class is also defined in the vocab and has supportedProperty: [name, age, etc.]. My GenericResource would then have type "Person", and at the time of addProperty or some other operation I would query the vocab to check whether the property is in the supportedProperty list (or in the supportedOperation list in the case of performAction()).
Is this the correct way? Any other suggestions are most welcome.
Question 1: Is my understanding of this section correct? If yes,
what is the way to deal with this part?
Yes, that seems to be correct. Just because hydra-java decided to create classes doesn't mean you have to do the same in your implementation, though. I would rather write a mapper and annotate an internal class that can then stay stable (you need to update the mapping instead). Your GenericResource approach also looks good, by the way.
Problem 1: It is not necessary that every propertyName is mapped to
myvocab:propertyName. Some may be mapped to some other vocab, e.g.
hydra:propertyName, schema:propertyName, rdfs:propertyName,
newVocab:propertyName, etc.
Why don't you store and access the properties with full URLs, i.e., including the vocab? You can of course implement some convenience methods to simplify the work with your vocab.
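A minimal sketch of this with Jena (the resource and property IRIs are made-up examples):
// org.apache.jena.rdf.model.* (com.hp.hpl.jena.rdf.model.* in older Jena)
Model model = ModelFactory.createDefaultModel();
Resource person = model.createResource("http://example.com/people/1");
// address the property by its full IRI, so no prefix guessing is needed
Property name = model.createProperty("http://myvocab.example.com/name");
person.addProperty(name, "Alice");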
Problem 2: How do I validate whether this propertyName belongs to this
class?
Suggestion: put a type field/variable in the GenericResource class
JSON-LD's @type in node objects (not in @value objects) corresponds to rdf:type. So simply add it like every other property.
And then check supportedProperty in the vocab corresponding to that class.
Please keep in mind that supportedProperty only tells you which properties are known to be supported. It doesn't tell you which aren't. In other words, it is valid to have properties other than the ones listed as supportedProperty on an object/resource.
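If you do want such a (non-exhaustive) check, a minimal sketch with Jena against the vocab graph might look like this; the vocabModel variable (your vocab loaded into a Jena Model) and the class IRI are assumptions, not from the original:
// hydra:supportedProperty links a class to hydra:SupportedProperty nodes,
// each of which names the actual RDF property via hydra:property
String HYDRA = "http://www.w3.org/ns/hydra/core#";
Property supportedProperty = vocabModel.createProperty(HYDRA + "supportedProperty");
Property property = vocabModel.createProperty(HYDRA + "property");
Resource personClass = vocabModel.getResource("http://myvocab.example.com/Person"); // hypothetical IRI
StmtIterator it = personClass.listProperties(supportedProperty);
while (it.hasNext()) {
    Resource supported = it.next().getObject().asResource();
    System.out.println(supported.getProperty(property).getObject()); // the usable property IRI
}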
Ad Q1:
For the flexibility you want, the client has to be prepared for semantic and structural changes.
In HTML that is possible. The server can change the structure of an html form in the way outlined by you, by having a firstName and lastName field rather than just a name field. The client does not break, rather it adjusts its UI, following the new semantics. The trick is that the UI is generated, not fixed.
A client which tries to unmarshal the incoming message into a fixed representation, such as a Java bean, is out of luck, and I do not think there is any solution by which you could deserialize into a Java bean and survive a change like yours.
If you do not try to deserialize, but stick to reading and processing the incoming message into a more flexible representation, then you can achieve the kind of evolvability you're after. The client must be able to handle the flexible representation accordingly. It could generate UIs rather than binding data to fixed markup, which means it makes no assumptions about the semantics and structure of the data. If the client absolutely has to know what a data element means, then the server cannot change the related semantics; it can only add new items with the new semantics while keeping the old ones around.
If there were a way for a server to hand out a new structure with a code-on-demand adapter for existing clients, the server would gain a lot of evolvability. But I am not aware of any such solutions yet.
Ad Q2:
If your goal is to read an incoming JSON-LD response into a Jena Model on the client side, please see https://jena.apache.org/documentation/io/rdf-input.html
// imports: org.apache.jena.rdf.model.Model, org.apache.jena.rdf.model.ModelFactory
// (com.hp.hpl.jena.rdf.model in older Jena versions)
Model model = ModelFactory.createDefaultModel();
String base = null;
model.read(inputStream, base, "JSON-LD");
Thus your client will not break in the sense that it cannot read the incoming response. I think that is what your GenericResource achieves, too. But you could use Jena directly on the client side. Basically, you would avoid unmarshalling into a fixed type.

Easily create instance of Java DTO object from Scala code

I am converting the server side of my GWT project to use Scala instead of Java. I have a number of RPC servlets that do DB lookups and then map the results into an ArrayList of DTOs, so a method like listTrips() ends up as
override def listTrips(): util.ArrayList[TripRoleDTO] = {
val trd = new TripRoleDTO
trd.setRoleType(RoleType.TripAdmin)
trd.setTripName(sessionDataProvider.get().getSessionUser.getEmail)
val res: util.ArrayList[TripRoleDTO] = new util.ArrayList[TripRoleDTO]()
res.add(trd)
res
}
instead of
@Override
public ArrayList<TripRoleDTO> listTrips() {
    final SessionData sessionData = sessionDataProvider.get();
    final List<TripRole> tripsForUser = tripAdminProvider.get().listTripRolesForUser(sessionData.getSessionUser().getId());
    return newArrayList(transform(tripsForUser, DTOConverter.convertTripRole));
}
Note that the Java implementation actually makes the DB call (something I'm figuring out in Scala still) but it does its DTO transformation via Google Guava's Iterables.transform method.
Since the DTO objects need to be .java files that the client side of GWT can use, what is an elegant way to transform my Scala domain objects to DTOs?
Use the GWT RequestFactory for automating the creation of DTOs. The DTO can be defined simply with an interface and a @ProxyFor annotation, see an example in the link provided.
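A sketch of such a proxy for the TripRole entity from the question (assuming TripRole meets RequestFactory's entity requirements, e.g. getId()/getVersion() or a Locator):
// com.google.web.bindery.requestfactory.shared.ProxyFor / EntityProxy
@ProxyFor(TripRole.class)
public interface TripRoleProxy extends EntityProxy {
    RoleType getRoleType();
    String getTripName();
}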
If using RequestFactory is for some reason not an alternative, then consider using Dozer to map domain objects to DTOs; it is frequently used with GWT.
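A minimal sketch of the Dozer route, using the classic org.dozer API; by default Dozer maps matching property names, so TripRole and TripRoleDTO would line up without extra configuration:
// org.dozer.DozerBeanMapper, org.dozer.Mapper
Mapper mapper = new DozerBeanMapper();
TripRoleDTO dto = mapper.map(tripRole, TripRoleDTO.class); // copies roleType, tripName, ...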

How to unit test a class which uses HttpURLConnection internally?

I'm looking for the best way to test a class which internally makes HTTP requests to a pre-defined URL. Generally, the class in question looks more or less like this:
public class ServiceAccess {
    private static final String SERVICE_URL = "http://someservice.com/";

    public ServiceAccess(String username) throws IOException,
            UserNotFoundException, MalformedURLException {
        URL url = new URL(SERVICE_URL + username);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_FOUND) {
            throw new UserNotFoundException("user not found : " + username);
        }
        // and some more checks
    }
}
I would like to test that the class properly reacts to the HTTP server's responses, including response codes, header fields, and such. I found the mockwebserver library that looks just like something I need. However, in order to use it, I would need to somehow change the URL that the class connects to.
The only sensible option I see is to pass the URL in the constructor; however, it seems to me that this does not play too well in terms of design, since requiring the client to pass a URL to such a class looks fishy. Furthermore, I have not seen any other web service access libraries (Twitter4J, RestFB) that require their clients to pass the URL in order to actually use them.
I'm not a Java whiz, but I'd like to get it as right as possible. All answers welcome.
What is fishy about passing the URL? Not sure I get that.
Generally for things like this, don't you want the URL to be a property? In the same way that the database URL for your instance is constructed from properties, you would want to do the same here. In that case, in your test you just override the property/ies.
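One way to realize that, sketched under the assumption that a package-private constructor is acceptable for tests:
// imports: java.io.IOException, java.net.HttpURLConnection, java.net.URL
public class ServiceAccess {
    private static final String DEFAULT_SERVICE_URL = "http://someservice.com/";

    public ServiceAccess(String username) throws IOException, UserNotFoundException {
        this(DEFAULT_SERVICE_URL, username);
    }

    // package-private: tests in the same package can pass the mock server's URL
    ServiceAccess(String serviceUrl, String username) throws IOException, UserNotFoundException {
        URL url = new URL(serviceUrl + username);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_FOUND) {
            throw new UserNotFoundException("user not found : " + username);
        }
        // and some more checks
    }
}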
The other interesting thing about these kinds of tests is that it's a really good idea to have tests of the actual protocol (which is what you are doing with the mock) and also of the actual service, and then run the service tests on a schedule, as a way to make sure that the downstream services you are consuming are still there and honoring their end of the contract. I was reading the excellent Continuous Delivery book from Addison-Wesley and was contemplating making this part of a pipeline today.
If you had written your tests first, you would never have written such code :)
Your class violates the single responsibility principle. Refactor it: extract the part responsible for networking (in your code, getting the connection) and have ServiceAccess use that class; then you can easily test ServiceAccess in unit tests. Unit testing the networking code itself is pointless: the folks at Oracle have already done that. All you can test is that you have provided the correct parameters, and that's the role of integration tests.
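A sketch of that refactoring, with illustrative names: hide connection creation behind a small interface that a unit test can fake.
// imports: java.io.IOException, java.net.HttpURLConnection, java.net.URL
interface UrlOpener {
    HttpURLConnection open(URL url) throws IOException;
}

public class ServiceAccess {
    private final UrlOpener opener;

    public ServiceAccess(UrlOpener opener) {
        this.opener = opener;
    }

    public void lookupUser(String username) throws IOException, UserNotFoundException {
        HttpURLConnection conn = opener.open(new URL("http://someservice.com/" + username));
        if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_FOUND) {
            throw new UserNotFoundException("user not found : " + username);
        }
        // and some more checks
    }
}
// a unit test supplies a fake UrlOpener that returns a stubbed HttpURLConnection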
If you can't change the code, you could use PowerMock to mock HttpURLConnection.

Google App Engine class on client side

I am developing an Android app using GAE on Eclipse.
On one of the EndPoint classes I have a method which returns a "Bla"-type object:
public Bla foo()
{
    return new Bla();
}
This "Bla" object holds a "Bla2"-type object:
public class Bla {
    private Bla2 bla = new Bla2();

    public Bla2 getBla() {
        return bla;
    }

    public void setBla(Bla2 bla) {
        this.bla = bla;
    }
}
Now, my problem is that I can't access the Bla2 class from the client side (even the method getBla() doesn't exist).
I managed to trick it by creating a second method on the EndPoint class which returns a Bla2 object:
public Bla2 foo2()
{
    return new Bla2();
}
Now I can use the Bla2 class on the client side, but the Bla.getBla() method still doesn't exist. Is there a right way to do it?
This isn't the 'right' way, but keep in mind that just because you are using endpoints, you don't have to stick to the endpoints way of doing things for all of your entities.
Like you, I'm using GAE/J and Cloud Endpoints and have an Android client. It's great running Java on both the client and the server because I can share code between all my projects.
Some of my entities are communicated and shared the normal 'endpoints way', as you are doing. For other entities I still use JSON, but I just stick them in a string, send them through a generic endpoint, and deserialize them on the other side, which is easy because the entity class is in the shared code.
This allows me to send 50 different entity types through a single endpoint, and it makes it easy for me to customize the JSON serializing/deserializing for those entities.
Of course, this solution gets you in trouble if you decide to add an iOS or web client (unless you use GWT for the web client), but maybe that isn't important to you.
(edit - added some impl. detail)
Serializing your Java objects (or entities) to/from JSON is very easy, but the details depend on the JSON library you use. Endpoints can use either Jackson or GSON on the client. For my own JSON'ing I used json.org, which is built into Android and was easy to download and add to my GAE project.
Here's a tutorial that someone just published:
http://www.survivingwithandroid.com/2013/10/android-json-tutorial-create-and-parse.html
Then I added an endpoint like this:
@ApiMethod(name = "sendData")
public void sendData( @Named("clientId") String clientId, String jsonObject )
(or something with a class that includes a List of Strings so you can send multiple entities in one request)
And put an element into your JSON which tells the server which entity the JSON should be deserialized into.
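For example, a hypothetical envelope built with org.json (the library mentioned above), where "type" is the discriminator the server switches on:
// org.json.JSONObject, org.json.JSONException
String buildEnvelope(String entityType, String entityJson) throws JSONException {
    JSONObject envelope = new JSONObject();
    envelope.put("type", entityType);               // e.g. "Bla"
    envelope.put("payload", new JSONObject(entityJson));
    return envelope.toString();                     // pass this through the generic sendData endpoint
}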
Try using @ApiResourceProperty on the field.

Adding fields to a WebService

I have a SOAP service that exposes a method
TradeDetail getTradeDetail()
TradeDetail stores 5 fields: transaction number, dates, etc.
I need to add a couple of fields to TradeDetail. I want to keep backward compatibility (for a while), and it looks as if my options are limited to creating a new class with the extra fields:
TradeDetail2 getTradeDetail2()
Now this will work - I've done it before. But are there any other solutions that people have used?
E.g.
Fundamentally change TradeDetail2 to hold name-value pairs.
Inherit TradeDetail2 from TradeDetail; this would reduce code but increase coupling (a sketch follows this list).
Return XML or JSON instead.
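For the inheritance option, a minimal sketch (the new field names are invented for illustration):
public class TradeDetail2 extends TradeDetail {
    private String settlementVenue;   // hypothetical new field
    private String counterpartyRef;   // hypothetical new field
    // plus getters/setters for the new fields
}
Whether this maps cleanly depends on how your SOAP stack represents inheritance in the generated WSDL.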
I will be able to retire the original interface pretty quickly so the code will get cleaned up and the extra TradeDetail2 won't last forever!
thanks
I sympathise - some of my webservices are riddled with myMethod(), myMethod2(), myMethod3() etc simply because I needed to add a few new fields.
Would it make sense for you to keep the method name and create a new endpoint for each version of your API instead? E.g.:
http://my.domain.com/servicename/v1
http://my.domain.com/servicename/v1.1
http://my.domain.com/servicename/v1.5
http://my.domain.com/servicename/v2
Then your method names stay sensible, regardless of how many future changes you need to make.
Any apps using your webservice would probably need to be rewritten and/or rebuilt against a new WSDL anyway in order to take advantage of the new fields, so why not just have them rewritten/rebuilt against the new v1.1 API.
I find that this also helps when communicating with the owners/developers of the apps using your service - eg, "Version [old] of our webservice API will no longer be supported after [date], please ensure that you are using at least version [new]."
This is why I prefer to have complete control over the XML-to-object mapping, so that I can separate the model from the XML interface. In your case, I would simply add the new fields to TradeDetail and consider them "optional" for backwards compatibility. This is what the example XML-to-object mapping for TradeDetail would look like in the framework my team uses, written for your interface:
// this would go into my client endpoint class
public TradeDetail getTradeDetail() {
    Element requestRoot = new Element("GetTradeDetail");
    Element responseRoot = invokeWebServiceAndReturnJdomElement(requestRoot);
    return mapTradeDetail(responseRoot);
}

// this would go into my client XO mapping class
public TradeDetail mapTradeDetail(Element root) {
    TradeDetail tradeDetail = new TradeDetail();
    tradeDetail.setField1(fetchString(root, "/GetTradeDetail/Field1"));
    tradeDetail.setField2(fetchInteger(root, "/GetTradeDetail/Field2"));
    tradeDetail.setField3(mapField3(root, "/GetTradeDetail/Field3"));
    tradeDetail.setField4(fetchString(root, "/GetTradeDetail/Field4"));
    return tradeDetail;
}
This kind of client ignores new fields and thus stays compatible with a new version of the protocol, until I add something like this to the end of the same method in version 2:
if (fetchXPath(root, "/GetTradeDetail/Field5") != null) {
    // we're talking to a server which speaks the new version of the protocol
    tradeDetail.setField5(fetchString(root, "/GetTradeDetail/Field5"));
}
The server would work with similar code, possibly checking the client version and mapping the extra fields only if the client supports the new version of the protocol.
In my view, a client should be written so that a few extra fields added to the protocol don't break it: I don't have the luxury of being down simply because an upstream provider added new functionality and didn't inform me about it. If the provider changes existing mandatory fields, of course, the client needs modification. This is why an upstream provider should version the protocol and support the old version for at least a couple of months.
