I am developing an Android app using GAE on Eclipse.
On one of the EndPoint classes I have a method which returns a "Bla"-type object:
public Bla foo()
{
return new Bla();
}
This "Bla" object holds a "Bla2"-type object:
public class Bla {
private Bla2 bla = new Bla2();
public Bla2 getBla() {
return bla;
}
public void setBla(Bla2 bla) {
this.bla = bla;
}
}
Now, my problem is that I can't access the "Bla2" class from the client side (even the method "getBla()" doesn't exist).
I managed to trick it by creating a second method on the EndPoint class which returns a "Bla2" object:
public Bla2 foo2()
{
return new Bla2();
}
Now I can use the "Bla2" class on the client side, but the "Bla.getBla()" method still doesn't exist. Is there a right way to do it?
This isn't the 'right' way, but keep in mind that just because you are using endpoints, you don't have to stick to the endpoints way of doing things for all of your entities.
Like you, I'm using GAE/J and cloud endpoints and have an Android client. It's great running Java on both the client and the server because I can share code between all my projects.
Some of my entities are communicated and shared the normal 'endpoints way', as you are doing. For other entities I still use JSON, but I just stick them in a string, send them through a generic endpoint, and deserialize them on the other side, which is easy because the entity class is in the shared code.
This allows me to send 50 different entity types through a single endpoint, and it makes it easy for me to customize the JSON serializing/deserializing for those entities.
Of course, this solution gets you in trouble if you decide to add an iOS or web client (unless you use GWT), but maybe that isn't important to you.
(edit - added some impl. detail)
Serializing your Java objects (or entities) to/from JSON is very easy, but the details depend on the JSON library you use. Endpoints can use either Jackson or GSON on the client. But for my own JSON handling I used json.org, which is built into Android and was easy to download and add to my GAE project.
Here's a tutorial that someone just published:
http://www.survivingwithandroid.com/2013/10/android-json-tutorial-create-and-parse.html
Then I added an endpoint like this:
@ApiMethod(name = "sendData")
public void sendData(@Named("clientId") String clientId, String jsonObject)
(Or use a class that includes a List of Strings, so you can send multiple entities in one request.)
Then put an element into your JSON that tells the server which entity type the JSON should be deserialized into, along the lines of the sketch below.
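For illustration, here is a minimal sketch of that approach using json.org, as mentioned above. Error handling is omitted, and the Customer entity and the "type" discriminator key are made up for the example:
// Hypothetical shared entity class (lives in the code shared by client and server).
public class Customer {
    public String name;
    public int visits;
}

// Client side: serialize the entity and tag it with its type.
Customer customer = new Customer();
customer.name = "Alice";
customer.visits = 3;

JSONObject json = new JSONObject();
json.put("type", "Customer"); // the discriminator the server will switch on
json.put("name", customer.name);
json.put("visits", customer.visits);
String payload = json.toString(); // pass this as the jsonObject parameter of sendData

// Server side: inspect the discriminator, then deserialize accordingly.
JSONObject incoming = new JSONObject(jsonObject);
if ("Customer".equals(incoming.getString("type"))) {
    Customer c = new Customer();
    c.name = incoming.getString("name");
    c.visits = incoming.getInt("visits");
    // ... hand c to whatever server-side logic needs it
}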
Try using @ApiResourceProperty on the field.
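In case it helps, applied to the class from the question it would look something like this (a sketch; whether the bare annotation is enough depends on your endpoints SDK version):
import com.google.api.server.spi.config.ApiResourceProperty;

public class Bla {
    @ApiResourceProperty // marks the field for inclusion in the generated client model
    private Bla2 bla = new Bla2();

    public Bla2 getBla() {
        return bla;
    }

    public void setBla(Bla2 bla) {
        this.bla = bla;
    }
}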
We are trying to create a Java client for an API created with Spring Data.
Some endpoints return hal+json responses containing _embedded and _links attributes.
Our main problem at the moment is trying to wrap our heads around the following structure:
{
"_embedded": {
"plans": [
{
...
}
]
},
...
}
When you hit the plans endpoint you get a paginated response whose content is inside the _embedded object. So the logic is that you call plans and get back a response containing an _embedded object, which in turn contains a plans attribute holding an array of plan objects.
The content of the _embedded object can vary as well, and trying a solution using generics, like the following example, ended up returning us a List of LinkedHashMap objects instead of the expected type.
class PaginatedResponse<T> {
    @JsonProperty("_embedded")
    Embedded<T> embedded;
    ....
}

class Embedded<T> {
    @JsonAlias({"plans", "projects"})
    List<T> content; // ends up deserialized as a List of LinkedHashMap objects instead of List<T>
    ....
}
I am not sure if the above issue is relevant to this Jackson bug report dating from 2015.
The only solution we have so far is to either create a paginated response for each type of content, with explicitly defined types, or to include a List<type_here> for each type of object we expect to receive and make sure that we only read from the populated list and not the null ones.
So our main question in this quite spread-out issue is: how is one supposed to navigate such an API without the use of Spring?
We do not consider using Spring in any form an acceptable solution. At the same time, and I may be quite wrong here, it looks like in the Java world Spring is the only framework actively supporting/promoting HAL/HATEOAS?
I'm sorry if there are wrongly expressed concepts, assumptions and terminology in this question but we are trying to wrap our heads around the philosophy of such an implementation and how to deal with it from a Java point of view.
You can try consuming the HATEOAS API using super type tokens, a generic way to handle all kinds of HATEOAS responses.
For example:
Below is a generic class to handle the response:
public class Resource<T> {

    private T content;
    private List<Link> links; // Link as defined in the posts referenced below

    protected Resource() {
        this.content = null;
    }

    public Resource(T content, Link... links) {
        this.content = content;
        this.links = Arrays.asList(links);
    }
}
Below is the code to read the response for various objects:
ObjectMapper objectMapper = new ObjectMapper();
Resource<ObjectA> objectA = objectMapper.readValue(response, new TypeReference<Resource<ObjectA>>() {});
Resource<ObjectB> objectB = objectMapper.readValue(response, new TypeReference<Resource<ObjectB>>() {});
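If the element type is only known at runtime (so an inline TypeReference subclass is not an option), Jackson can also construct the parameterized type explicitly. A minimal sketch, reusing the PaginatedResponse class from the question and a hypothetical Plan class:
ObjectMapper objectMapper = new ObjectMapper();

// Build the JavaType for PaginatedResponse<Plan> at runtime.
JavaType type = objectMapper.getTypeFactory()
        .constructParametricType(PaginatedResponse.class, Plan.class);

PaginatedResponse<Plan> plans = objectMapper.readValue(response, type);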
You can refer to the posts below:
http://www.java-allandsundry.com/2012/12/json-deserialization-with-jackson-and.html
http://www.java-allandsundry.com/2014/01/consuming-spring-hateoas-rest-service.html
I am working on an application where I want to use Retrofit, but the response from the API is very large and cannot be converted into a data class or POJO. The response is also dynamic: it grows with user actions (for backup). So I have been wanting to ask for a long time: is there any way to use Retrofit without creating a response data class or POJO? Otherwise I will have to move back to the basic HTTP way of using REST APIs.
If anyone has achieved this or done it before, please give some idea of how to achieve it; it would be a great help. Thanks in advance.
From the Retrofit docs:
[1] Retrofit is the class through which your API interfaces are turned into callable objects.
[2] Retrofit turns your HTTP API into a Java interface.
Sole purpose of Retrofit is to abstract your API calls as Java interfaces. It was meant to be used with interfaces and POJOs; it is designed that way. If you don't want to use POJOs, you can use OkHttp, which is actually used by Retrofit under the hood. Retrofit should only be used when you need an abstraction of your HTTP calls as Java objects.
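For reference, a bare OkHttp call that skips POJOs entirely might look roughly like this (a sketch assuming OkHttp 3; the URL and payload are placeholders, error handling omitted):
OkHttpClient client = new OkHttpClient();

RequestBody body = RequestBody.create(
        MediaType.parse("application/json; charset=utf-8"),
        "{\"some\":\"payload\"}"); // placeholder payload

Request request = new Request.Builder()
        .url("https://example.com/some/extension") // placeholder URL
        .post(body)
        .build();

try (Response response = client.newCall(request).execute()) {
    String json = response.body().string(); // raw JSON, parse however you like
}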
You can always just send strings via the @Body annotation.
public interface YourService {
    @POST("some/extension")
    Call<String> makeCall(@Body String body); // plain String bodies need a scalars converter (see below)
}
You can access the body of the response like this:
service.makeCall(yourCustomString).enqueue(new Callback<String>() {
    @Override
    public void onResponse(Call<String> call, Response<String> response) {
        String content = response.body(); // the response body as a string
    }

    @Override
    public void onFailure(Call<String> call, Throwable t) {...}
});
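Note that plain String request/response bodies like this need the scalars converter registered on the Retrofit instance. A minimal sketch, assuming Retrofit 2 and the converter-scalars artifact (the base URL is a placeholder):
Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://example.com/") // placeholder base URL
        .addConverterFactory(ScalarsConverterFactory.create()) // handles String bodies
        .build();

YourService service = retrofit.create(YourService.class);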
I still think using JSON converters is the way to go. You might just need a lot of nested classes inside the wrapping response/request classes, depending on the JSON structure. The size being different doesn't matter; large structures can easily be modeled using lists and optional attributes. How big do your responses get? Moshi, for example, doesn't really have a size limit.
I am developing an architecture in Java using tomcat and I have come across a situation that I believe is very generic and yet, after reading several questions/answers in StackOverflow, I couldn't find a definitive answer. My architecture has a REST API (running on tomcat) that receives one or more files and their associated metadata and writes them to storage. The configuration of the storage layer has a 1-1 relationship with the REST API server, and for that reason the intuitive approach is to write a Singleton to hold that configuration.
Obviously I am aware that singletons bring testability problems due to global state and the difficulty of mocking them. I also thought of using the Context pattern, but I am not convinced that it applies in this case, and I worry that I will end up coding the "Context anti-pattern" instead.
Let me give you some more background on what I am writing. The architecture is comprised of the following components:
Clients that send requests to the REST API uploading or retrieving "preservation objects", or simply put, POs (files + metadata) in JSON or XML format.
The high level REST API that receives requests from clients and stores data in a storage layer.
A storage layer that may contain a combination of OpenStack Swift containers, tape libraries and file systems. Each of these "storage containers" (I'm calling file systems containers for simplicity) is called an endpoint in my architecture. The storage layer obviously does not reside on the same server where the REST API is.
The configuration of endpoints is done through the REST API (e.g. POST /configEndpoint), so that an administrative user can register new endpoints, edit or remove existing endpoints through HTTP calls. Whilst I have only implemented the architecture using an OpenStack Swift endpoint, I anticipate that the information for each endpoint contains at least an IP address, some form of authentication information and a driver name, e.g. "the Swift driver", "the LTFS driver", etc. (so that when new storage technologies arrive they can be easily integrated to my architecture as long as someone writes a driver for it).
My problem is: how do I store and load configuration in a testable, reusable and elegant way? I won't even consider passing a configuration object to all the various methods that implement the REST API calls.
A few examples of the REST API calls and where the configuration comes into play:
// Retrieve a preservation object metadata (PO)
@GET
@Path("container/{containername}/{po}")
@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
public PreservationObjectInformation getPOMetadata(@PathParam("containername") String containerName, @PathParam("po") String poUUID) {
// STEP 1 - LOAD THE CONFIGURATION
// One of the following options:
// StorageContext.loadContext(containerName);
// Configuration.getInstance(containerName);
// Pass a configuration object as an argument of the getPOMetadata() method?
// Some sort of dependency injection
// STEP 2 - RETRIEVE THE METADATA FROM THE STORAGE
// Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)
// Pass poUUID as parameter
// STEP 3 - CONVERT JSON/XML TO OBJECT
// Unmarshall the file in JSON format
PreservationObjectInformation poi = unmarshall(data);
return poi;
}
// Delete a PO
@DELETE
@Path("container/{containername}/{po}")
public Response deletePO(@PathParam("containername") String containerName, @PathParam("po") String poName) throws IOException, URISyntaxException {
// STEP 1 - LOAD THE CONFIGURATION
// One of the following options:
// StorageContext.loadContext(containerName); // Context
// Configuration.getInstance(containerName); // Singleton
// Pass a configuration object as an argument of the deletePO() method?
// Some sort of dependency injection
// STEP 2 - CONNECT TO THE STORAGE ENDPOINT
// Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)
// STEP 3 - DELETE THE FILE
return Response.ok().build();
}
// Submit a PO and its metadata
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Path("container/{containername}/{po}")
public Response submitPO(@PathParam("containername") String container, @PathParam("po") String poName, @FormDataParam("objectName") String objectName,
        @FormDataParam("inputstream") InputStream inputStream) throws IOException, URISyntaxException {
// STEP 1 - LOAD THE CONFIGURATION
// One of the following options:
// StorageContext.loadContext(containerName);
// Configuration.getInstance(containerName);
// Pass a configuration object as an argument of the submitPO() method?
// Some sort of dependency injection
// STEP 2 - WRITE THE DATA AND METADATA TO STORAGE
// Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)
return Response.created(new URI("container/" + container + "/" + poName))
.build();
}
** UPDATE #1 - My implementation based on @mawalker's comment **
Find below my implementation using the proposed answer. A factory creates concrete strategy objects that implement lower-level storage actions. The context object (which is passed back and forth by the middleware) contains an object of the abstract type (in this case, an interface) StorageContainerStrategy (its implementation will depend on the type of storage in each particular case at runtime).
public interface StorageContainerStrategy {
public void write();
public void read();
// other methods here
}
public class Context {
public StorageContainerStrategy strategy;
// other context information here...
}
public class StrategyFactory {
public static StorageContainerStrategy createStorageContainerStrategy(Container c) {
if(c.getEndpoint().isSwift())
return new SwiftStrategy();
else if(c.getEndpoint().isLtfs())
return new LtfsStrategy();
// etc.
return null;
}
}
public class SwiftStrategy implements StorageContainerStrategy {
    @Override
    public void write() {
        // OpenStack Swift specific code
    }

    @Override
    public void read() {
        // OpenStack Swift specific code
    }
}

public class LtfsStrategy implements StorageContainerStrategy {
    @Override
    public void write() {
        // LTFS specific code
    }

    @Override
    public void read() {
        // LTFS specific code
    }
}
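For completeness, wiring those pieces together inside a REST method might look roughly like this (a sketch based on the classes above; the container lookup is hypothetical):
// Resolve the container from the path parameter (hypothetical lookup).
Container container = containerRegistry.find(containerName);

Context ctx = new Context();
ctx.strategy = StrategyFactory.createStorageContainerStrategy(container);

// The resource code no longer cares whether this is Swift, LTFS, etc.
ctx.strategy.write();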
Here is the paper Doug Schmidt (in full disclosure my current PhD Advisor) wrote on the Context Object Pattern.
https://www.dre.vanderbilt.edu/~schmidt/PDF/Context-Object-Pattern.pdf
As dbugger stated, building a factory into your API classes that returns the appropriate 'configuration' object is a pretty clean way of doing this. But if you know the 'context' (yes, overloaded usage) of the paper being discussed, it is mainly for use in middleware, where there are multiple layers of context changes. And note that under the 'implementation' section it recommends use of the Strategy pattern for how to add each layer's 'context information' to the 'context object'.
I would recommend a similar approach. Each 'storage container' would have a different strategy associated with it; each "driver" therefore has its own strategy implementation class. That strategy would be obtained from a factory, and then used as needed. (As for how to design your strategies, the best way, I'm guessing, would be to make your 'driver strategy' generic for each driver type, and then configure it appropriately as new resources arise or the strategy object is assigned.)
But as far as I can tell right now (unless I'm reading your question wrong), this would only have 2 'layers' that the 'context object' would be aware of: the 'REST server(s)' and the 'storage endpoints'. If I'm mistaken, then so be it... but with only 2 layers, you can just use the strategy pattern in the same way you were thinking of the context pattern, and avoid the issue of singletons/the Context 'anti-pattern'. (You 'could' have a context object which contains the strategy for which driver to use, and then a 'configuration' for that driver... that wouldn't be insane, and might fit well with your dynamic HTTP configuration.)
The strategy factory class doesn't 'have to' be a singleton/have static factory methods either. I've made factories that are plain objects before just fine, even with D.I. for testing. There are always trade-offs to different approaches, but I've found better testing to be worth it in almost all cases I've run into.
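To illustrate, a non-static factory handed to the resource through its constructor might look something like this (a sketch; any DI framework or manual wiring would do, and a test can pass in a mock factory):
public class InstanceStrategyFactory {
    public StorageContainerStrategy createStrategy(Container c) {
        if (c.getEndpoint().isSwift())
            return new SwiftStrategy();
        else if (c.getEndpoint().isLtfs())
            return new LtfsStrategy();
        throw new IllegalArgumentException("Unknown endpoint type");
    }
}

public class PreservationObjectResource {
    private final InstanceStrategyFactory factory;

    // Constructor injection: tests substitute a mock factory here.
    public PreservationObjectResource(InstanceStrategyFactory factory) {
        this.factory = factory;
    }
}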
We have a stateless backend Java app running on Google App Engine (GAE). The engine takes in a string-A (json) and returns a different string-B (json).
The examples for Google Cloud Endpoints are built around creating an entity to define the Cloud Endpoints class. Most of the examples seem to be tied to the Datastore - a backend DB.
In our case, the data is not persisted and there are no primary keys. We were successful in creating an entity with the input and output strings as two fields. It worked; however, the response payload also contained a copy of the input string.
We have a solution using a standard servlet (a request string and a different response string) using the doPost method.
Any suggestions? For our scenario, is Cloud Endpoints necessary, and/or is there an easy way to do this within Cloud Endpoints?
Thanks
There is nothing that forces you to use the datastore. If you don't need it, don't use it.
You can transform one POJO into another, for example:
public class Input {
public String inputValue;
}
public class Output {
public String resultValue;
}
@Api(name = "myApi", version = "v1")
public class MyApi {
@ApiMethod(name = "hello")
public Output hello(Input input) {
Output response = new Output();
response.resultValue = "Hello " + input.inputValue;
return response;
}
}
which, via the APIs Explorer (http://localhost:8888/_ah/api/explorer for me), shows that it results in a POST request/response of equivalent JSON objects:
POST http://localhost:8888/_ah/api/myApi/v1/hello
{
"inputValue": "Hans"
}
which returns
200 OK
{
"resultValue": "Hello Hans"
}
The big plus of endpoints is that you can write simple Java methods like Output hello(Input input) and use them from auto-generated (Java) client code that does not "see" that those methods are called over HTTP.
You can use them via regular HTTP if you figure out what the URL is, but that's not the intended use.
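For example, calling the hello method from the auto-generated Android client looks roughly like this (a sketch; the builder classes and generated names vary with the SDK version, so treat the details as assumptions):
MyApi api = new MyApi.Builder(
        AndroidHttp.newCompatibleTransport(),
        new AndroidJsonFactory(),
        null /* no credential for a local test */)
        .setRootUrl("http://10.0.2.2:8888/_ah/api/") // emulator alias for localhost
        .build();

Input input = new Input();
input.setInputValue("Hans");

Output output = api.hello(input).execute(); // blocking; run off the UI thread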
A more generic way to do JSON methods in App Engine would be to use a JAX-RS implementation like Jersey. That way you don't have to have /_ah/api/apiname/vN/methodname methods and the restrictions that come with them (like a specific error response in case of exceptions).
Code with JAX-RS would probably look like
#Path("/whatEverILike")
#Produces(MediaType.APPLICATION_JSON)
#Consumes(MediaType.APPLICATION_JSON)
public class MyApi {
#POST
public Output hello(Input input) {
Output response = new Output();
response.resultValue = "Hello " + input.inputValue;
return response;
}
}
but it's a little difficult to set such a project up, since you'll need a lot of dependencies. For Jersey, for example, you'll probably want the following two, which pull in several transitive dependencies:
org.glassfish.jersey.containers:jersey-container-servlet-core:2.22
org.glassfish.jersey.media:jersey-media-json-jackson:2.22
which unfolds into the following (transitive) jars:
aopalliance-repackaged-2.4.0-b31.jar, hk2-api-2.4.0-b31.jar, hk2-locator-2.4.0-b31.jar, hk2-utils-2.4.0-b31.jar, jackson-annotations-2.5.4.jar, jackson-core-2.5.4.jar, jackson-databind-2.5.4.jar, jackson-jaxrs-base-2.5.4.jar, jackson-jaxrs-json-provider-2.5.4.jar, jackson-module-jaxb-annotations-2.5.4.jar, javassist-3.18.1-GA.jar, javax.annotation-api-1.2.jar, javax.inject-1.jar, javax.inject-2.4.0-b31.jar, javax.ws.rs-api-2.0.1.jar, jersey-client-2.22.jar, jersey-common-2.22.jar, jersey-container-servlet-core-2.22.jar, jersey-entity-filtering-2.22.jar, jersey-guava-2.22.jar, jersey-media-jaxb-2.22.jar, jersey-media-json-jackson-2.22.jar, jersey-server-2.22.jar, osgi-resource-locator-1.0.1.jar, validation-api-1.1.0.Final.jar
I was wondering how people with more experience and more complex projects get along with this "ugliness" in REST communication. Imagine the following problem:
We'll need a fair amount of functionality for one specific resource within our REST infrastructure; in my case that's about 50+ functions that result in different queries and different responses. I tried to think of a meaningful resource tree and assigned these to methods that will do "stuff". Afterwards, the server resource class looks like this:
#Path("/thisResource")
public class SomeResource {
@GET/POST/PUT/DELETE
@Path("meaningfulPath")
public Response resourceFunction1 ( ...lots of Params) {
... logic ....
}
//
// lots of functions ...
//
@GET/POST/PUT/DELETE
@Path("meaningfulPath")
public Response resourceFunctionN ( ...lots of Params) {
... logic ....
}
}
To construct the URLs my client will call, I made a little function to prevent typos and to make better use of constants,
so my client looks like this:
public class Client {
public returnType function1 () {
client.resource = ResourceClass.build(Constants.Resouce, "meaningfulPath");
...
return response.getEntity(returnType);
}
}
Now the question that bothers me is: how could I link the client functions and the server functions better?
The only connection between these two blocks of code is the URL that will be called by the client and mapped by the server, and if even this URL is generated somewhere else, this leads to a lot of confusion.
When one of my colleagues needs to get into this code, he has a hard time figuring out which of the 50+ client functions leads to which server function. Also, it is hard to determine if there are obsolete functions in the code, etc. I guess most of you know about the problems of unclean code better than I do.
How do you deal with this? How would you keep this code clean, maintainable and gorgeous?
Normally, this would be addressed by EJB or similar technologies.
Or at least by "real" web services, which would provide at least WSDL and schemas (with kind of mapping to Java interfaces, or "ports").
But REST communication is very loosely typed and loosely structured.
The only thing I can think of now, is: define a project (let's call it "Definitions") which would be referenced (hence known) by client and server. In this project you could define a class with a lot of public static final String, such as:
public static final String SOME_METHOD_NAME = "/someMethodName";
public static final String SOME_OTHER_METHOD_NAME = "/someOtherMethodName";
Note: a static final String can very well be referenced by an annotation (in that case it is considered to be constant by the compiler). So use the "constants" to annotate your @Path, such as:
@Path(Definitions.SOME_METHOD_NAME)
Same for the client:
ResourceClass.build(Constants.Resouce, Definitions.SOME_METHOD_NAME);
You are missing the idea behind REST. What you are doing is not REST but RPC over HTTP. Generally you are not supposed to construct URLs using out-of-band knowledge. Instead you should follow links received in responses from the server. Read about HATEOAS:
http://en.wikipedia.org/wiki/HATEOAS
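In practice that means reading the link out of the response rather than assembling the URL yourself. A minimal sketch with Jackson's tree model (the "next" rel is an assumption about what the server advertises):
ObjectMapper mapper = new ObjectMapper();
JsonNode root = mapper.readTree(responseJson);

// Follow the pagination link the server advertised instead of building the URL.
String nextHref = root.path("_links").path("next").path("href").asText();
if (!nextHref.isEmpty()) {
    // issue the next request against nextHref with your HTTP client of choice
}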