I have a SOAP service that exposes a method
TradeDetail getTradeDetail()
TradeDetail stores five fields: transaction number, dates, etc.
I need to add a couple of fields to TradeDetail. I want to keep backward compatibility (for a while), and it looks as if my options are limited to creating a new class with the extra fields:
TradeDetail2 getTradeDetail2()
Now this will work - I've done it before. But are there any other solutions that people have used?
E.g.
Fundamentally change TradeDetail2 to use name-value pairs.
Inherit TradeDetail2 from TradeDetail; this would reduce code but increase coupling.
Return XML or JSON instead
I will be able to retire the original interface pretty quickly so the code will get cleaned up and the extra TradeDetail2 won't last forever!
thanks
I sympathise - some of my web services are riddled with myMethod(), myMethod2(), myMethod3(), etc., simply because I needed to add a few new fields.
Would it make sense for you to keep the method name and create a new endpoint for each version of your API instead? E.g.:
http://my.domain.com/servicename/v1
http://my.domain.com/servicename/v1.1
http://my.domain.com/servicename/v1.5
http://my.domain.com/servicename/v2
Then your method names stay sensible, regardless of how many future changes you need to make.
Any apps using your webservice would probably need to be rewritten and/or rebuilt against a new WSDL anyway in order to take advantage of the new fields, so why not just have them rewritten/rebuilt against the new v1.1 API.
I find that this also helps when communicating with the owners/developers of the apps using your service - eg, "Version [old] of our webservice API will no longer be supported after [date], please ensure that you are using at least version [new]."
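For what it's worth, here is a minimal sketch of publishing two versions side by side with plain JAX-WS; TradeServiceV1 and TradeServiceV2 are hypothetical @WebService implementations of the old and new contracts:
import javax.xml.ws.Endpoint;

public class VersionedPublisher {
    public static void main(String[] args) {
        // old clients keep calling /v1 until it is retired
        Endpoint.publish("http://my.domain.com/servicename/v1", new TradeServiceV1());
        // new clients build against the /v2 WSDL, which has the extra fields
        Endpoint.publish("http://my.domain.com/servicename/v2", new TradeServiceV2());
    }
}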
This is why I prefer to have complete control over the XML-to-object mapping, so that I can separate the model from the XML interface. In your case, I would simply add the new fields to TradeDetail and consider them optional for backwards compatibility. This is what the XML-to-object mapping for TradeDetail would look like in the framework my team uses, written for your interface:
// this would go into my client endpoint class
public TradeDetail getTradeDetail() {
    Element requestRoot = new Element("GetTradeDetail");
    Element responseRoot = invokeWebServiceAndReturnJdomElement(requestRoot);
    return mapTradeDetail(responseRoot);
}

// this would go into my client XO mapping class
public TradeDetail mapTradeDetail(Element root) {
    TradeDetail tradeDetail = new TradeDetail();
    tradeDetail.setField1(fetchString(root, "/GetTradeDetail/Field1"));
    tradeDetail.setField2(fetchInteger(root, "/GetTradeDetail/Field2"));
    tradeDetail.setField3(mapField3(root, "/GetTradeDetail/Field3"));
    tradeDetail.setField4(fetchString(root, "/GetTradeDetail/Field4"));
    return tradeDetail;
}
This kind of client would ignore new fields, thus remaining compatible with the new version of the protocol, until I add something like this to the end of the same method in version 2:
if (fetchXPath(root, "/GetTradeDetail/Field5") != null) {
    // so we're talking to a server which speaks the new version of the protocol
    tradeDetail.setField5(fetchString(root, "/GetTradeDetail/Field5"));
}
The server would work with similar code, possibly checking the client version and mapping the extra fields only if the client supports the new version of the protocol.
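A hedged sketch of what that server-side counterpart might look like in the same style (the version check, method name, and getters are assumptions, mirroring the client code above):
// hypothetical server-side mapping: shared fields are always written,
// Field5 only when the client speaks protocol version 2 or later
public Element mapTradeDetailResponse(TradeDetail tradeDetail, int clientVersion) {
    Element root = new Element("GetTradeDetail");
    root.addContent(new Element("Field1").setText(tradeDetail.getField1()));
    // ... Field2 to Field4 mapped the same way ...
    if (clientVersion >= 2) {
        root.addContent(new Element("Field5").setText(tradeDetail.getField5()));
    }
    return root;
}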
In my view, the client should be written so that a few extra fields added to the protocol don't break it - I don't have the luxury of being down simply because an upstream provider added new functionality and didn't inform me about it. If the provider changes existing mandatory fields, of course, the client needs modification. This is why an upstream provider should version the protocol and support the old version for at least a couple of months.
I am creating a client API in Java using: Apache Jena Framework + Hydra (for the hypermedia-driven part) + my private vocab, similar to the Markus Lanther Event-API Vocab, instead of schema.org (for the ontology/vocabulary part).
Section 1:
After looking at the Markus Lanther EventDemo repo and hydra-java, I found that they are creating classes for each hydra:Class, which can break the client in the future. For example:
A Person class (Person.java):
public class Person
{
    String name;
}
But a future requirement might turn name into a class of its own, e.g.:
public class Name
{
    String firstName;
    String lastName;
}
So to fulfill this requirement I would have to update the Person class like this:
public class Person
{
    Name name;
}
Question 1:
Is my understanding of this section correct? If yes, then what is the way to deal with this part?
Section 2:
To avoid the above problem I created a GenericResource class (GenericResource.java):
public class GenericResource
{
    private Model model;

    public void addProperty(String propertyName, Object propertyValue)
    {
        // the caller passes only the short name, e.g. "name", and I map it to "myvocab:name"
        propertyName = "myvocab:" + propertyName;
        // some logic to add propertyName and propertyValue to the model
    }

    public GenericResource retrieveProperty(String propertyName)
    {
        propertyName = "myvocab:" + propertyName;
        // some logic to query propertyName from this object's model,
        // add it to a new GenericResource object and return it
        return null; // placeholder
    }

    public GenericResource performAction(String actionName, String postData)
    {
        // some logic to make the HTTP call and return the response
        return null; // placeholder
    }
}
But again I am stuck on several problems:
Problem 1: It is not necessary that every propertyName is mapped to myvocab:propertyName. Some may be mapped to other vocabs, e.g. hydra:propertyName, schema:propertyName, rdfs:propertyName, newVocab:propertyName, etc.
Problem 2: How do I validate whether a propertyName belongs to a given class?
Suggestion: Put a type field/variable in the GenericResource class, and then check supportedProperty in the vocab corresponding to that class. For clarity, assume the above Person class is also defined in the vocab and has supportedProperty: [name, age, etc.]. So my GenericResource has type "Person", and at the time of addProperty or some other operation I will query the vocab to check whether that property is in the supportedProperty list (or in the supportedOperation list in the case of performAction()).
Is this the correct way? Any other suggestions are most welcome.
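A sketch of the check I have in mind inside addProperty (vocabIndex, supportedPropertiesOf and the type field are hypothetical helpers, purely for illustration):
public void addProperty(String propertyName, Object propertyValue)
{
    // validate against the vocab before touching the model
    if (!vocabIndex.supportedPropertiesOf(this.type).contains(propertyName)) {
        throw new IllegalArgumentException(
            propertyName + " is not a supportedProperty of " + this.type);
    }
    // ... add propertyName and propertyValue to the model as before ...
}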
Question 1: Is my understanding of this section correct? If yes,
then what is the way to deal with this part?
Yes, that seems to be correct. Just because hydra-java decided to create classes doesn't mean you have to do the same in your implementation, though. I would rather write a mapper and annotate an internal class that can then stay stable (you need to update the mapping instead). Your GenericResource approach also looks good, btw.
Problem 1: It is not necessary that every propertyName is mapped to
myvocab:propertyName. Some may be mapped to other vocabs, e.g.
hydra:propertyName, schema:propertyName, rdfs:propertyName,
newVocab:propertyName, etc.
Why don't you store and access the properties with full URLs, i.e., including the vocab? You can of course implement some convenience methods to simplify the work with your vocab.
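A minimal sketch of such a convenience method, assuming a hand-maintained prefix map (the myvocab IRI is a placeholder):
import java.util.HashMap;
import java.util.Map;

public class PrefixResolver
{
    private final Map<String, String> prefixes = new HashMap<>();

    public PrefixResolver()
    {
        prefixes.put("hydra", "http://www.w3.org/ns/hydra/core#");
        prefixes.put("schema", "http://schema.org/");
        prefixes.put("rdfs", "http://www.w3.org/2000/01/rdf-schema#");
        prefixes.put("myvocab", "http://my.domain.com/vocab#"); // placeholder
    }

    // expands e.g. "schema:name" to "http://schema.org/name"
    public String expand(String shortName)
    {
        int colon = shortName.indexOf(':');
        return prefixes.get(shortName.substring(0, colon)) + shortName.substring(colon + 1);
    }
}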
Problem 2: How to validate whether this propertyName belongs to this
class
Suggestion: Put type field/variable in GenericResource class
JSON-LD's @type in node objects (not in @value objects) corresponds to rdf:type. So simply add it as every other property.
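For illustration, with Jena (which you already use) that boils down to an ordinary rdf:type statement; the IRIs below are placeholders:
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

Model model = ModelFactory.createDefaultModel();
Resource alice = model.createResource("http://example.org/people/alice");
// JSON-LD's @type serializes to exactly this triple
alice.addProperty(RDF.type, model.createResource("http://my.domain.com/vocab#Person"));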
And then check supportedProperty in vocab corresponding to that class.
Please keep in mind that supportedProperty only tells you which properties are known to be supported. It doesn't tell you which aren't. In other words, it is valid to have properties other than the ones listed as supportedProperty on an object/resource.
Ad Q1:
For the flexibility you want, the client has to be prepared for semantic and structural changes.
In HTML that is possible. The server can change the structure of an HTML form in the way you outlined, by having a firstName and a lastName field rather than just a name field. The client does not break; rather, it adjusts its UI, following the new semantics. The trick is that the UI is generated, not fixed.
A client which tries to unmarshal the incoming message into a fixed representation, such as a Java bean, is out of luck, and I do not think there is any way you could deserialize into a Java bean and survive a change like yours.
If you do not try to deserialize, but stick to reading and processing the incoming message into a more flexible representation, then you can achieve the kind of evolvability you're after. The client must be able to handle the flexible representation accordingly. It could generate UIs rather than binding data to fixed markup, which means it makes no assumptions about the semantics and structure of the data. If the client absolutely has to know what a data element means, then the server cannot change the related semantics; it can only add new items with the new semantics while keeping the old ones around.
If there were a way for a server to hand out a new structure with a code-on-demand adapter for existing clients, then the server would gain a lot of evolvability. But I am not aware of any such solutions yet.
Ad Q2:
If your goal is to read an incoming json-ld response into a Jena Model on the client side, please see https://jena.apache.org/documentation/io/rdf-input.html
Model model = ModelFactory.createDefaultModel();
String base = null;
model.read(inputStream, base, "JSON-LD");
Thus your client will not break in the sense that it cannot read the incoming response. I think that is what your GenericResource achieves, too. But you could use Jena directly on the client side. Basically, you would avoid unmarshalling into a fixed type.
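To illustrate the last point: once the response is in a Model, the client reads properties generically instead of binding to a fixed bean. A sketch, with a placeholder property IRI:
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.rdf.model.StmtIterator;

Property name = model.createProperty("http://my.domain.com/vocab#name"); // placeholder
StmtIterator it = model.listStatements(null, name, (RDFNode) null);
while (it.hasNext()) {
    System.out.println(it.next().getObject()); // unknown extra properties are simply ignored
}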
Using MongoDB, I need to persist objects from Twitter4J. Twitter4J uses interfaces, which are implemented in JSON versions. Example:
The API returns Status (an interface), and Status is implemented as StatusJSONImpl.
I can't save Status to MongoDB; I need the concrete implementation, StatusJSONImpl.
My issue is that the class StatusJSONImpl is not public (see here), so I can't use it in my code. I tried downloading the source of Twitter4J to manually add "public" to StatusJSONImpl, so that I can do:
Status status = twitter.updateStatus(latestStatus);
String statusStringified = TwitterObjectFactory.getRawJSON(status);
StatusJSONImpl statusImplemented = (StatusJSONImpl) TwitterObjectFactory.createUserList(statusStringified);
SingletonLaunchDB.getMongo().save(statusImplemented);
But I still get a java.lang.IllegalAccessError on the class StatusJSONImpl at runtime.
I see from other SO answers that users routinely point other users to these Impl classes... how do they manage to use them in their code?
Your help is much appreciated.
Status is serializable. To recover a StatusJSONImpl from statusStringified you can write:
JSONObject json = new JSONObject(statusStringified);
Status status = new StatusJSONImpl(json);
The code sample is from StatusSerializationTest.java
I hope this helps.
Use the static factory method on TwitterObjectFactory:
Status status = TwitterObjectFactory.createStatus(statusAsString);
StatusJSONImpl is an implementation detail which library users are not meant to deal with. The only thing a user of the library should care about is the contract (the Status interface in this case), which is necessarily public and which the library authors promise to fulfill. On the other hand, concrete classes like StatusJSONImpl are not public on purpose, in order to prevent consumers from using them and getting tightly coupled to a specific implementation which may change over time. And from the authors' point of view, by coding to an interface they are free to return any concrete type they wish as long as it fulfills the contract.
If you check the class that is returned from the factory method, it is StatusJSONImpl. But to reiterate, as a user of the library you should not need to know or care about that.
Status status = TwitterObjectFactory.createStatus(statusAsString);
status.getClass(); // class twitter4j.StatusJSONImpl
To understand more about why this is done, you can read about static factory methods.
How can I distinguish between published OSGi services implementing the same interface by their properties?
Assuming that you want to retrieve registered services based on certain values for properties, you need to use a filter (which is based on the LDAP syntax).
For example:
int myport = 5000;
// note the enclosing parentheses: an LDAP "and" filter has the form (&(...)(...))
String filter = "(&(objectClass=" + MyInterface.class.getName()
        + ")(port=" + myport + "))";
ServiceReference[] serviceReferences = bundleContext.getServiceReferences(null, filter);
where you want to look for services both implementing MyInterface and having a value of the port property equal to myport.
Here is the relevant javadoc for getting the references.
Remark 1:
The above example and javadoc refer to Release 4.2. If you are not restricted to a J2SE 1.4 runtime, I suggest you have a look at the Release 4.3 syntax, where you can use generics.
Remark 2: (courtesy of Ray)
You can also pre-check the correctness of your filter by instead creating a Filter object from a filterStr string:
Filter filter = bundleContext.createFilter(filterStr);
which also allows you to match the filter with other criteria. You still pass filterStr to get the references, since there is no overloading that accounts for a Filter argument. Please be aware, however, that in this way you will check the correctness twice: both getServiceReferences and createFilter throw InvalidSyntaxException on parsing the filter. Certainly not a show-stopper inefficiency, I guess, but it is worth mentioning.
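Putting the two remarks together, a small hedged sketch (the filter string is the same as in the example above):
String filterStr = "(&(objectClass=" + MyInterface.class.getName() + ")(port=5000))";
try {
    Filter filter = bundleContext.createFilter(filterStr); // fails fast on a malformed filter
    ServiceReference[] refs = bundleContext.getServiceReferences(null, filterStr);
    // filter.match(ref) can additionally be used against individual references
} catch (InvalidSyntaxException e) {
    // the filter string is malformed - fix it before querying
}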
Luca's answer above is correct; however, it assumes you are using the low-level API for accessing services.
If you are using Declarative Services (which I would generally recommend) then the filter can be added to the target attribute of the service reference. For example (using the bnd annotations for DS):
@Reference(target = "(port=8080)")
public void setHttpService(HttpService http) {
    // ...
}
In Blueprint you can specify the filter attribute on the reference or reference-list element. For example:
<reference id="sampleRef"
interface="org.sample.MyInterface"
filter="(port=5000)"/>
The first thing my GWT app does when it loads is request the current logged in user from the server via RequestFactory. This blocks because I need properties of the User to know how to proceed. This only takes < 500ms, but it really annoys me that the app is blocked during this time. I already have the User on the server when the jsp is generated, so why not just add the serialized User to the jsp and eliminate this request altogether?
I have two problems keeping me from doing this:
I need to transform User to UserProxy
I need to serialize UserProxy in a way that is easy for GWT to deserialize.
I have not figured out a good way to do #1. The logic appears to be buried in ServiceLayerDecorator without an easy way to isolate it. I may be wrong here.
The second one seems easier, via ProxySerializer. But how do I get my hands on the RequestFactory when I am on the server? You cannot call GWT.create on the server.
I have been looking into AutoBeans, but this does not handle #1 above. My UserProxy has references to collections of other EntityProxy instances that I would like to maintain.
It is possible using AutoBeans if you create an AutoBeanFactory for your proxies:
To transform User to UserProxy:
Create a server-side RequestFactory and invoke the same request as usual. The response will contain the UserProxy (but on the server).
To serialize UserProxy:
AutoBean<UserProxy> bean = AutoBeanUtils.getAutoBean(receivedUserProxy);
String json = AutoBeanCodex.encode(bean).getPayload();
To deserialize UserProxy on client:
AutoBean<UserProxy> bean = AutoBeanCodex.decode(userAutoBeanFactory, UserProxy.class, json);
Creating an in-process RequestFactory on the server (tutorial):
public static <T extends RequestFactory> T create( Class<T> requestFactoryClass ) {
ServiceLayer serviceLayer = ServiceLayer.create();
SimpleRequestProcessor processor = new SimpleRequestProcessor( serviceLayer );
T factory = RequestFactorySource.create( requestFactoryClass );
factory.initialize( new SimpleEventBus(), new InProcessRequestTransport(processor) );
return factory;
}
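Usage is then the same as on the client; a hedged sketch with hypothetical request and method names (MyRequestFactory, userRequest(), findCurrentUser()):
MyRequestFactory factory = create(MyRequestFactory.class);
factory.userRequest().findCurrentUser().fire(new Receiver<UserProxy>() {
    @Override
    public void onSuccess(UserProxy user) {
        // runs synchronously in-process; "user" is now ready to serialize
    }
});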
You could use AutoBeans for this as well if you are able to make User implement UserProxy. It works since proxies are interfaces with getters/setters:
interface UserFactory extends AutoBeanFactory
{
    AutoBean<UserProxy> user(UserProxy toWrap); // wrap existing instance in an AutoBean
}
Then on the server you can create the AutoBean and serialize it to JSON:
UserFactory factory = AutoBeanFactorySource.create(UserFactory.class);
AutoBean<UserProxy> userProxyBean = factory.user( existingUserPojo );
// to convert AutoBean to JSON
String json = AutoBeanCodex.encode(userProxyBean).getPayload();
On the client you can just use AutoBeanCodex.decode to deserialize the JSON back into a bean.
You cannot call GWT.create on the server (or from any real JVM), but in many cases you can call a JVM-compatible method designed for server use instead. In this case, take a look at RequestFactorySource.create.
It can be a little messy to get the server to read from itself and print out data using RequestFactory - here is a demo example of how this can work (using GWT 2.4; the main branch has the same thing for 2.3 or so): https://github.com/niloc132/tvguide-sample-parent/blob/gwt-2.4.0/tvguide-client/src/main/java/com/acme/gwt/server/TvViewerJsonBootstrap.java - not quite the same thing that you are after, but it may be possible to use the same idea to populate a string in a proxy store that can be read in the client (seen here: https://github.com/niloc132/tvguide-sample-parent/blob/gwt-2.4.0/tvguide-client/src/main/java/com/acme/gwt/client/TvGuide.java).
The basic idea is to create a request (including ids, invocations, and with() arguments, so the proxy builder makes all the right pieces in a consistent way) and pass it into a SimpleRequestProcessor instance, which will then run it through the server pieces it normally would. (Any entity management system should probably still have the entities cached to avoid an additional lookup; otherwise you need to model some of the work SimpleRequestProcessor does internally.) The ProxySerializer, which wraps a ProxyStore, expects full RequestFactory messages as sent from the server, so a fair bit of message bookkeeping needs to be done correctly.
I found the answer on the GWT Google Group. All credit goes to Nisha Sowdri NM.
Server side encoding:
DefaultProxyStore store = new DefaultProxyStore();
ProxySerializer ser = requests.getSerializer(store);
final String key = ser.serialize(userProxy);
String message = key + ":" + store.encode();
Client side decoding:
String[] parts = message.split(":", 2);
ProxyStore store = new DefaultProxyStore(parts[1]);
ProxySerializer ser = requests.getSerializer(store);
UserProxy user = ser.deserialize(UserProxy.class, parts[0]);
I was wondering how people with more experience and more complex projects get along with this "ugliness" in REST communication. Imagine the following problem:
We'll need a fair amount of functionality for one specific resource within our REST infrastructure; in my case that's about 50+ functions that result in different queries and different responses. I tried to think of a meaningful resource tree and assigned these to methods that will do "stuff". Afterwards, the server resource class looks like this:
#Path("/thisResource")
public class SomeResource {
#GET/POST/PUT/DELETE
#Path("meaningfulPath")
public Response resourceFunction1 ( ...lots of Params) {
... logic ....
}
//
// lots of functions ...
//
#GET/POST/PUT/DELETE
#Path("meaningfulPath")
public Response resourceFunctionN ( ...lots of Params) {
... logic ....
}
}
To construct the URLs my client will call, I made a little function to prevent typos and to make better use of constants, so my client looks like this:
public class Client {
    public ReturnType function1() {
        client.resource = ResourceClass.build(Constants.Resource, "meaningfulPath");
        ...
        return response.getEntity(ReturnType.class);
    }
}
Now the question that bothers me is: how could I link the client function and the server function better?
The only connection between these two blocks of code is the URL that is called by the client and mapped by the server, and if this URL is generated somewhere else, that leads to a lot of confusion.
When one of my colleagues needs to get into this code, he has a hard time figuring out which of the 50+ client functions leads to which server function. It is also hard to determine whether there are obsolete functions in the code, etc. I guess most of you know the problems of unclean code better than I do.
How do you deal with this? How would you keep this code clean, maintainable and gorgeous?
Normally, this would be addressed by EJB or similar technologies.
Or at least by "real" web services, which would provide at least WSDL and schemas (with kind of mapping to Java interfaces, or "ports").
But REST communication is very loosely typed and loosely structured.
The only thing I can think of now is: define a project (let's call it "Definitions") which would be referenced (hence known) by both client and server. In this project you could define a class with a lot of public static final Strings, such as:
public static final String SOME_METHOD_NAME = "/someMethodName";
public static final String SOME_OTHER_METHOD_NAME = "/someOtherMethodName";
Note: a static final String can very well be referenced by an annotation (in that case it is considered to be constant by the compiler). So use the "constants" to annotate your @Path, such as:
@Path(Definitions.SOME_METHOD_NAME)
Same for the client:
ResourceClass.build(Constants.Resource, Definitions.SOME_METHOD_NAME);
You are missing the idea behind REST. What you are doing is not REST but RPC over HTTP. Generally, you are not supposed to construct URLs using out-of-band knowledge. Instead, you should follow the links received in the responses from the server. Read about HATEOAS:
http://en.wikipedia.org/wiki/HATEOAS
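To make the contrast concrete, here is a hedged JAX-RS 2.x sketch in which the server embeds the next legal transitions as links, so the client follows rel names instead of constructing URLs (Order, load() and the paths are illustrative):
import java.net.URI;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/orders")
public class OrderResource {

    @GET
    @Path("{id}")
    public Response getOrder(@PathParam("id") String id, @Context UriInfo uriInfo) {
        Order order = load(id); // hypothetical lookup
        URI cancel = uriInfo.getBaseUriBuilder().path("orders/{oid}/cancel").build(id);
        return Response.ok(order)
                .link(cancel, "cancel") // the client follows rel="cancel" rather than building the URL
                .build();
    }
}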