We have a stateless backend Java app running on Google App Engine (GAE). The app takes in string A (JSON) and returns a different string B (JSON).
The examples for Google Cloud Endpoints are built around creating an Entity to define the Cloud Endpoints class, and most of them seem to be tied to the Datastore, a backend DB.
In our case, the data is not persisted and there are no primary keys. We did manage to create an entity with the input and output strings as two fields. It worked, however the response payload also contained a copy of the input string.
We have a solution using a standard servlet (request string in, a different response string out) using the doPost method.
Any suggestions for our scenario: is Cloud Endpoints necessary, and/or is there an easy way to do this within Cloud Endpoints?
Thanks
There is nothing that forces you to use the datastore. If you don't need it, don't use it.
You can transform one POJO into another, for example:
public class Input {
    public String inputValue;
}

public class Output {
    public String resultValue;
}

@Api(name = "myApi", version = "v1")
public class MyApi {
    @ApiMethod(name = "hello")
    public Output hello(Input input) {
        Output response = new Output();
        response.resultValue = "Hello " + input.inputValue;
        return response;
    }
}
Via the APIs Explorer (http://localhost:8888/_ah/api/explorer for me) you can see that this results in a POST request and response with the equivalent JSON objects:
POST http://localhost:8888/_ah/api/myApi/v1/hello
{
"inputValue": "Hans"
}
which returns
200 OK
{
"resultValue": "Hello Hans"
}
The big plus of endpoints is that you can write simple Java methods like Output hello(Input input) and use them from auto-generated (Java) client code that does not "see" that those methods are called over HTTP.
You can call them over plain HTTP if you figure out what the URL is, but that's not the intended use.
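For illustration, calling it from the generated Android client looks roughly like the sketch below. This is only a sketch: the Builder, transport, and JSON factory classes come from the Endpoints code generator and the google-api-client libraries, and the exact names depend on what the generator produces for your API.

// Rough sketch of generated-client usage (names depend on your generated library).
MyApi.Builder builder = new MyApi.Builder(
        AndroidHttp.newCompatibleTransport(),   // from google-api-client-android
        new AndroidJsonFactory(),               // JSON (de)serialization on the client
        null);                                  // no HttpRequestInitializer for an unauthenticated API
MyApi service = builder.build();

Input input = new Input();
input.setInputValue("Hans");                    // generated model classes expose setters
Output output = service.hello(input).execute(); // synchronous HTTP call; run it off the UI thread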
A more generic way to serve JSON methods on App Engine is to use a JAX-RS implementation like Jersey. That way you are not tied to the _ah/api/apiname/vN/methodname URLs and the restrictions that come with them (like a specific error response in case of exceptions).
Code with JAX-RS would probably look like this:
#Path("/whatEverILike")
#Produces(MediaType.APPLICATION_JSON)
#Consumes(MediaType.APPLICATION_JSON)
public class MyApi {
#POST
public Output hello(Input input) {
Output response = new Output();
response.resultValue = "Hello " + input.inputValue;
return response;
}
}
but it's a little more difficult to set such a project up, since you'll need a lot of dependencies. For Jersey, for example, you'll probably want the following two artifacts, which pull in several transitive dependencies:
org.glassfish.jersey.containers:jersey-container-servlet-core:2.22
org.glassfish.jersey.media:jersey-media-json-jackson:2.22
which unfolds into:
jersey-container-servlet-core-2.22.jar, jersey-server-2.22.jar, jersey-client-2.22.jar, jersey-common-2.22.jar, jersey-media-jaxb-2.22.jar, jersey-media-json-jackson-2.22.jar, jersey-entity-filtering-2.22.jar, jersey-guava-2.22.jar,
jackson-core-2.5.4.jar, jackson-databind-2.5.4.jar, jackson-annotations-2.5.4.jar, jackson-jaxrs-base-2.5.4.jar, jackson-jaxrs-json-provider-2.5.4.jar, jackson-module-jaxb-annotations-2.5.4.jar,
hk2-api-2.4.0-b31.jar, hk2-locator-2.4.0-b31.jar, hk2-utils-2.4.0-b31.jar, aopalliance-repackaged-2.4.0-b31.jar, javassist-3.18.1-GA.jar, osgi-resource-locator-1.0.1.jar,
javax.ws.rs-api-2.0.1.jar, javax.annotation-api-1.2.jar, javax.inject-1.jar, javax.inject-2.4.0-b31.jar, validation-api-1.1.0.Final.jar
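Once the dependencies are in place you still have to tell Jersey which resources to expose. A minimal sketch (assuming Jersey 2.x; the class name MyApplication is arbitrary) is a ResourceConfig subclass that you reference from the Jersey servlet's javax.ws.rs.Application init-param in web.xml:

// Minimal Jersey 2.x bootstrap sketch: register the resource class and Jackson for JSON.
import org.glassfish.jersey.jackson.JacksonFeature;
import org.glassfish.jersey.server.ResourceConfig;

public class MyApplication extends ResourceConfig {
    public MyApplication() {
        register(MyApi.class);          // the JAX-RS resource shown above
        register(JacksonFeature.class); // enables JSON (de)serialization via Jackson
    }
}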
Here is the default implementation of an API generated by the openapi-generator-maven-plugin using Spring Boot as library:
default Mono<ResponseEntity<Void>> testAPI(
        @Parameter(hidden = true) final ServerWebExchange exchange
) {
    Mono<Void> result = Mono.empty();
    exchange.getResponse().setStatusCode(HttpStatus.NOT_IMPLEMENTED);
    return result.then(Mono.empty());
}
Being new to this, there are several things I don't understand:
There are two Mono.empty() calls, one assigned to result and one inside then(Mono.empty()). Why is it done like that?
Why can't it just return one, e.g. return Mono.empty();?
Or better yet, why not also drop the exchange parameter and just do:
return Mono.just(ResponseEntity.status(HttpStatus.NOT_IMPLEMENTED).build());
The default implementation is more like a template that gives you a hint of how to complete the API controller. For an API controller you usually need to create a response in at least two steps: first fetch data from some source, then turn it into a valid response. The template code gives you a starting point for writing such code. For example, I can write the following using the template:
public class UsersApiController implements UsersApi {
    @Override
    public Mono<ResponseEntity<String>> usersGet(
            @Parameter(hidden = true) final ServerWebExchange exchange
    ) {
        var client = WebClient.create("http://calapi.inadiutorium.cz/");
        Mono<String> result = client.get().uri("/api/v0/en/calendars/general-en/today").retrieve().bodyToMono(String.class);
        return result.map(rep -> ResponseEntity.status(HttpStatus.OK).body(rep));
    }
}
The first Mono.empty becomes the WebClient call that gets data from another API, and the second Mono.empty is replaced by a map operation that transforms the API result into a ResponseEntity object. If the generator only generated a bare Mono.empty, newcomers might find it difficult to see where to start writing the controller.
I am developing an architecture in Java using tomcat and I have come across a situation that I believe is very generic and yet, after reading several questions/answers in StackOverflow, I couldn't find a definitive answer. My architecture has a REST API (running on tomcat) that receives one or more files and their associated metadata and writes them to storage. The configuration of the storage layer has a 1-1 relationship with the REST API server, and for that reason the intuitive approach is to write a Singleton to hold that configuration.
Obviously I am aware that Singletons bring testability problems due to global state and the hardship of mocking Singletons. I also thought of using the Context pattern, but I am not convinced that the Context pattern applies in this case and I worry that I will end up coding using the "Context anti-pattern" instead.
Let me give you some more background on what I am writing. The architecture is comprised of the following components:
Clients that send requests to the REST API uploading or retrieving "preservation objects", or simply put, POs (files + metadata) in JSON or XML format.
The high level REST API that receives requests from clients and stores data in a storage layer.
A storage layer that may contain a combination of OpenStack Swift containers, tape libraries and file systems. Each of these "storage containers" (I'm calling file systems containers for simplicity) is called an endpoint in my architecture. The storage layer obviously does not reside on the same server where the REST API is.
The configuration of endpoints is done through the REST API (e.g. POST /configEndpoint), so that an administrative user can register new endpoints, and edit or remove existing endpoints, through HTTP calls. While I have only implemented the architecture using an OpenStack Swift endpoint, I anticipate that the information for each endpoint contains at least an IP address, some form of authentication information and a driver name, e.g. "the Swift driver", "the LTFS driver", etc. (so that when new storage technologies arrive they can be easily integrated into my architecture as long as someone writes a driver for them).
My problem is: how do I store and load configuration in a testable, reusable and elegant way? I won't even consider passing a configuration object to all the various methods that implement the REST API calls.
A few examples of the REST API calls and where the configuration comes into play:
// Retrieve a preservation object metadata (PO)
@GET
@Path("container/{containername}/{po}")
@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
public PreservationObjectInformation getPOMetadata(@PathParam("containername") String containerName, @PathParam("po") String poUUID) {

    // STEP 1 - LOAD THE CONFIGURATION
    // One of the following options:
    // StorageContext.loadContext(containerName);
    // Configuration.getInstance(containerName);
    // Pass a configuration object as an argument of the getPOMetadata() method?
    // Some sort of dependency injection

    // STEP 2 - RETRIEVE THE METADATA FROM THE STORAGE
    // Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)
    // Pass poUUID as parameter

    // STEP 3 - CONVERT JSON/XML TO OBJECT
    // Unmarshall the file in JSON format
    PreservationObjectInformation poi = unmarshall(data);

    return poi;
}
// Delete a PO
@DELETE
@Path("container/{containername}/{po}")
public Response deletePO(@PathParam("containername") String containerName, @PathParam("po") String poName) throws IOException, URISyntaxException {

    // STEP 1 - LOAD THE CONFIGURATION
    // One of the following options:
    // StorageContext.loadContext(containerName); // Context
    // Configuration.getInstance(containerName); // Singleton
    // Pass a configuration object as an argument of the getPOMetadata() method?
    // Some sort of dependency injection

    // STEP 2 - CONNECT TO THE STORAGE ENDPOINT
    // Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)

    // STEP 3 - DELETE THE FILE

    return Response.ok().build();
}
// Submit a PO and its metadata
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Path("container/{containername}/{po}")
public Response submitPO(@PathParam("containername") String container, @PathParam("po") String poName, @FormDataParam("objectName") String objectName,
        @FormDataParam("inputstream") InputStream inputStream) throws IOException, URISyntaxException {

    // STEP 1 - LOAD THE CONFIGURATION
    // One of the following options:
    // StorageContext.loadContext(containerName);
    // Configuration.getInstance(containerName);
    // Pass a configuration object as an argument of the getPOMetadata() method?
    // Some sort of dependency injection

    // STEP 2 - WRITE THE DATA AND METADATA TO STORAGE
    // Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)

    return Response.created(new URI("container/" + container + "/" + poName))
            .build();
}
** UPDATE #1 - My implementation based on @mawalker's comment **
Below is my implementation based on the proposed answer. A factory creates concrete strategy objects that implement the lower-level storage actions. The context object (which is passed back and forth by the middleware) contains an object of the abstract type (in this case, an interface) StorageContainerStrategy; its implementation will depend on the type of storage in each particular case at runtime.
public interface StorageContainerStrategy {
    public void write();
    public void read();
    // other methods here
}

public class Context {
    public StorageContainerStrategy strategy;
    // other context information here...
}

public class StrategyFactory {
    public static StorageContainerStrategy createStorageContainerStrategy(Container c) {
        if (c.getEndpoint().isSwift())
            return new SwiftStrategy();
        else if (c.getEndpoint().isLtfs())
            return new LtfsStrategy();
        // etc.
        return null;
    }
}
public class SwiftStrategy implements StorageContainerStrategy {
    @Override
    public void write() {
        // OpenStack Swift specific code
    }

    @Override
    public void read() {
        // OpenStack Swift specific code
    }
}

public class LtfsStrategy implements StorageContainerStrategy {
    @Override
    public void write() {
        // LTFS specific code
    }

    @Override
    public void read() {
        // LTFS specific code
    }
}
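For completeness, a rough sketch of how this gets wired into one of the REST methods. The lookupContainer helper is hypothetical; substitute however you resolve containerName to a Container, and 'data' stands in for whatever the driver read, as in the original pseudocode above.

@GET
@Path("container/{containername}/{po}")
@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
public PreservationObjectInformation getPOMetadata(@PathParam("containername") String containerName, @PathParam("po") String poUUID) {
    Container c = lookupContainer(containerName);                     // hypothetical: resolve the endpoint configuration
    Context ctx = new Context();
    ctx.strategy = StrategyFactory.createStorageContainerStrategy(c); // pick the Swift/LTFS/... driver
    ctx.strategy.read();                                              // delegate the storage access to the driver
    // ... then unmarshal whatever read() produced, as in the original sketch
    return unmarshall(data);
}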
Here is the paper Doug Schmidt (in full disclosure, my current PhD advisor) wrote on the Context Object pattern:
https://www.dre.vanderbilt.edu/~schmidt/PDF/Context-Object-Pattern.pdf
As dbugger stated, building a factory into your API classes that returns the appropriate 'configuration' object is a pretty clean way of doing this. But if you know the 'context' (yes, overloaded usage) of the paper being discussed, it is mainly for use in middleware, where there are multiple layers of context changes. Note that under the 'Implementation' section it recommends using the Strategy pattern to add each layer's 'context information' to the 'context object'.
I would recommend a similar approach. Each 'storage container' would have a different strategy associated with it, so each "driver" gets its own strategy implementation class. That strategy would be obtained from a factory and then used as needed. (As for how to design your strategies: the best way, I'm guessing, is to make the 'driver strategy' generic for each driver type and then configure it appropriately as new resources arise or as the strategy object is assigned.)
But as far as I can tell right now (unless I'm reading your question wrong), there are only two 'layers' the 'context object' would need to be aware of: the REST server(s) and the storage endpoints. If I'm mistaken then so be it, but with only two layers you can just use the Strategy pattern in the same way you were thinking of using the Context pattern, and avoid the issue of singletons and the Context 'anti-pattern'. (You could have a context object which contains the strategy for which driver to use, plus a 'configuration' for that driver; that wouldn't be insane, and might fit well with your dynamic HTTP configuration.)
The strategy factory class doesn't have to be a singleton or have static factory methods either. I've made factories that are plain objects before just fine, even with dependency injection for testing. There are always trade-offs to different approaches, but I've found better testability to be worth it in almost all the cases I've run into.
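A minimal sketch of that last point, assuming plain constructor injection (the class and method names here are illustrative, not from the question):

// Sketch: the factory as an ordinary object rather than static methods,
// handed to the resource via its constructor so a test can pass in a mock.
public class StorageStrategyFactory {
    public StorageContainerStrategy createStrategy(Container c) {
        if (c.getEndpoint().isSwift()) return new SwiftStrategy();
        if (c.getEndpoint().isLtfs())  return new LtfsStrategy();
        throw new IllegalArgumentException("No driver for endpoint: " + c.getEndpoint());
    }
}

public class PreservationObjectResource {
    private final StorageStrategyFactory factory;

    public PreservationObjectResource(StorageStrategyFactory factory) { // injected by your DI framework, or a mock in tests
        this.factory = factory;
    }
}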
I am currently working on a REST based java application using the new Camel REST DSL as the foundation.
It mostly works, except that when I call the URLs through a REST client (instead of, say, a browser) the JSON response is "garbled" and comes through with what I assume is the wrong encoding.
MyRouteBuilder.java
@Component
public class MyRouteBuilder extends RouteBuilder {

    @Autowired
    LocalEnvironmentBean environmentBean;

    @Override
    public void configure() throws Exception {
        restConfiguration().component("jetty").host("0.0.0.0").port(80)
            .bindingMode(RestBindingMode.auto);

        rest("/testApp")
            .get("/data").route()
                .to("bean:daoService?method=getData")
                .setProperty("viewClass", constant(CustomeJsonViews.class))
                .marshal("customDataFormat").endRest()
            .get("/allData").route()
                .to("bean:daoService?method=getDatas")
                .setProperty("viewClass", constant(CustomeJsonViews.class))
                .marshal("customDataFormat").endRest();
    }
}
CustomDataFormat.java
public class CustomDataFormat implements DataFormat {

    private ObjectMapper jacksonMapper;

    public CustomDataFormat() {
        jacksonMapper = new ObjectMapper();
    }

    @Override
    public void marshal(Exchange exchange, Object obj, OutputStream stream) throws Exception {
        Class view = (Class) exchange.getProperty("viewClass");
        if (view != null) {
            ObjectWriter w = jacksonMapper.writerWithView(view);
            w.writeValue(stream, obj);
        } else {
            stream.write(jacksonMapper.writeValueAsBytes(obj));
        }
    }

    @Override
    public Object unmarshal(Exchange exchange, InputStream stream) throws Exception {
        return null; // unmarshalling is not used in these routes
    }
}
A full working version can be found here:
https://github.com/zwhitten/camel-rest-test
When going to the URL, {host}/testApp/data, in Chrome for example the response comes through as:
{
data: "Sherlock",
value: "Holmes",
count: 10
}
However using the Postman browser plugin as the client returns:
"W3siZGF0YSI6ImRhdGE6OjAiLCJ2YWx1ZSI6InZhbHVlOjowIiwiY291bnQiOjB9LHsiZGF0YSI6ImRhdGE6OjEiLCJ2YWx1ZSI6InZhbHVlOjoxIiwiY291bnQiOjF9LHsiZGF0YSI6ImRhdGE6OjIiLCJ2YWx1ZSI6InZhbHVlOjoyIiwiY291bnQiOjJ9LHsiZGF0YSI6ImRhdGE6OjMiLCJ2YWx1ZSI6InZhbHVlOjozIiwiY291bnQiOjN9LHsiZGF0YSI6ImRhdGE6OjQiLCJ2YWx1ZSI6InZhbHVlOjo0IiwiY291bnQiOjR9LHsiZGF0YSI6ImRhdGE6OjUiLCJ2YWx1ZSI6InZhbHVlOjo1IiwiY291bnQiOjV9LHsiZGF0YSI6ImRhdGE6OjYiLCJ2YWx1ZSI6InZhbHVlOjo2IiwiY291bnQiOjZ9LHsiZGF0YSI6ImRhdGE6OjciLCJ2YWx1ZSI6InZhbHVlOjo3IiwiY291bnQiOjd9LHsiZGF0YSI6ImRhdGE6OjgiLCJ2YWx1ZSI6InZhbHVlOjo4IiwiY291bnQiOjh9LHsiZGF0YSI6ImRhdGE6OjkiLCJ2YWx1ZSI6InZhbHVlOjo5IiwiY291bnQiOjl9XQ=="
The problem seems to be with the REST bind mode being "auto" and using a custom marshaller.
If I set the binding mode to "json" then both the browser and client responses get garbled.
If I set the binding mode to "json" and bypass the custom marshallers everything works correctly.
Is there a way to configure the route to use a custom marshaller and encode the responses correctly regardless of the client?
I think the solution is to use the default binding option (off), since you are using custom marshallers.
You have two ways to achieve it:
Turn off the RestBindingMode and (un)marshal manually, because otherwise the RestBindingMarshalOnCompletion in RestBindingProcessor will be registered.
Register your own DataFormat so the REST binding uses it automatically. You configure this on the REST configuration via jsonDataFormat to set the custom data format:
Map<String, DataFormatDefinition> dataFormats = getContext().getDataFormats();
if (dataFormats == null) {
    dataFormats = new HashMap<>();
}
dataFormats.put("yourFormat", new DataFormatDefinition(new CustomDataFormat()));

restConfiguration()....jsonDataFormat("yourFormat")
You can also create your own data format like so.
In your restConfiguration it will look something like this (note the "json-custom"):
builder.restConfiguration().component("jetty")
        .host(host(propertiesResolver))
        .port(port(propertiesResolver))
        .bindingMode(RestBindingMode.json)
        .jsonDataFormat("json-custom");
You must then create a file named "json-custom" (that is the name of the file itself), and that file should contain the class name that implements your own way to marshal and unmarshal. It must be located in your jar at META-INF\services\org\apache\camel\dataformat, and its content should be:
class=packageofmyclass.MyOwnDataformatter
The response you were receiving is JSON, but it had been encoded to base64. Taking the String from your post, I was able to decode it as:
[{"data":"data::0","value":"value::0","count":0},{"data":"data::1","value":"value::1","count":1},{"data":"data::2","value":"value::2","count":2},{"data":"data::3","value":"value::3","count":3},{"data":"data::4","value":"value::4","count":4},{"data":"data::5","value":"value::5","count":5},{"data":"data::6","value":"value::6","count":6},{"data":"data::7","value":"value::7","count":7},{"data":"data::8","value":"value::8","count":8},{"data":"data::9","value":"value::9","count":9}]
The answers above stop the response body from being encoded to base64. The documentation from Apache Camel on bindingMode is vague as to why it behaves that way when combined with explicit marshalling. Removing the explicit marshalling will return a JSON body, but you may also notice that it contains class names in the body. The documentation suggests that bindingMode is more for transporting POJOs, and that you specify a type(Pojo.class) and optionally an outType(Pojo.class) for your requests/responses. See http://camel.apache.org/rest-dsl.html (section "Binding to POJOs Using") for more details.
Base64 is the safest way of transferring JSON across networks to ensure it is received exactly as the server sent it, according to some posts I've read. It is then the responsibility of the client to decode the response.
The answers above do solve the problem. However, I'm not entirely convinced that mixing the data format into the service routes is such a good thing; it should ideally sit at a higher level of abstraction. That would allow the data format to be changed in one place, rather than having to change it on every route that produces JSON. Though, I must admit, I've never seen a service change data format in its lifetime, so this really is a moot point.
We were also facing the same issue.
Our DataFormat was JSON. Once we implemented our own custom marshaller, Camel started encoding the data to base64. I tried the approach provided by Cshculz, but our CustomDataFormatter was not getting called for some reason I couldn't figure out.
So we added .marshal(YourDataFormatter) after every bean call. This gave us the formatted JSON, but in encoded form, so at the end of the route we added .unmarshal().json(JsonLibrary.Jackson) to return raw JSON to the client.
Sample snippet:
.to("xxxxxxx").marshal(YourDataFormatterBeanRef)
.to("xxxxxxx").marshal(YourDataFormatterBeanRef)
.to("xxxxxxx").marshal(YourDataFormatterBeanRef)
.to("xxxxxxx").marshal(YourDataFormatterBeanRef)
.end().unmarshal().json(JsonLibrary.Jackson)
In short:
I'd like to return different JSON, with, say, fewer attributes, when a request comes from a phone than when it comes from a desktop PC.
I want to build a REST service.
The service will serve data based on JPA entities.
The service is declared with #Path.
Depending on the User-Agent header, I want to serve richer JSON for desktop than for mobile devices. The selection is to be done server-side.
Is there a better way than building a second serializer, using a condition (if the request's user agent is ...) to call it in every method, and being forced to return a String instead of an object (making the @Produces annotation useless)?
Thank you
One way is to add a PathParam or QueryParam to the path to indicate the device type in the request, so the service can tell which type of device the request comes from and create the appropriate JSON.
Please check the most-voted SO answer on how to find out whether the request is from mobile or desktop, and add the parameter accordingly.
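A sketch of that idea with a query parameter (the parameter name, its values, and the two representation-building methods are illustrative, not a fixed API):

@GET
@Path("data")
@Produces(MediaType.APPLICATION_JSON)
public Response getData(@QueryParam("device") @DefaultValue("desktop") String device) {
    if ("mobile".equals(device)) {
        return Response.ok(buildSlimRepresentation()).build(); // fewer attributes for phones
    }
    return Response.ok(buildFullRepresentation()).build();     // richer JSON for desktop clients
}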
You can use a JAX-RS sub-resource locator, which will select a different sub-resource depending on the User-Agent string.
#Path("api")
public UserResource getResourceByUserAgent() {
//the if statement will be more sophisticated obviously :-)
if(userAgent.contains("GT-I9505") {
return new HighEndUserResource();
} else {
return new LowEndUserResource();
}
}
interface UserResource {User doSomeProcessing()}
class HighEndUserResource implements UserResource {
#Path("process")
public User doSomeProcessing() {
//serve
}
}
class LowEndUserResource implements UserResource {
#Path("process")
public User doSomeProcessing() {
//serve content for low end
}
}
By invoking "/api/process" resource the response will depend on userAgent. You can also easily extend the solution for other devices, and implement MiddleEndUserResource for example.
You can read more information about sub-resources here:
I am developing an Android app using GAE on Eclipse.
On one of the EndPoint classes I have a method which returns a "Bla"-type object:
public Bla foo() {
    return new Bla();
}
This "Bla" object holds a "Bla2"-type object:
public class Bla {
    private Bla2 bla = new Bla2();

    public Bla2 getBla() {
        return bla;
    }

    public void setBla(Bla2 bla) {
        this.bla = bla;
    }
}
Now, my problem is that I can't access the "Bla2" class from the client side. (Even the method "getBla()" doesn't exist.)
I managed to trick it by creating a second method on the EndPoint class which return a "Bla2" object:
public Bla2 foo2() {
    return new Bla2();
}
Now I can use the "Bla2" class on the client side, but the "Bla.getBla()" method still doesn't exist. Is there a right way to do it?
This isn't the 'right' way, but keep in mind that just because you are using endpoints, you don't have to stick to the endpoints way of doing things for all of your entities.
Like you, I'm using GAE/J and Cloud Endpoints and have an Android client. It's great running Java on both the client and the server because I can share code between all my projects.
Some of my entities are communicated and shared the normal 'endpoints way', as you are doing. But for other entities I still use JSON, but just stick them in a string, send them through a generic endpoint, and deserialize them on the other side, which is easy because the entity class is in the shared code.
This allows me to send 50 different entity types through a single endpoint, and it makes it easy for me to customize the JSON serializing/deserializing for those entities.
Of course, this solution gets you in trouble if you decide to add an iOS or Web client (unless you use GWT), but maybe that isn't important to you.
(edit - added some impl. detail)
Serializing your Java objects (or entities) to/from JSON is very easy, but the details depend on the JSON library you use. Endpoints can use either Jackson or GSON on the client. But for my own JSON'ing I used json.org, which is built into Android and was easy to download and add to my GAE project.
Here's a tutorial that someone just published:
http://www.survivingwithandroid.com/2013/10/android-json-tutorial-create-and-parse.html
Then I added an endpoint like this:
#ApiMethod(name = "sendData")
public void sendData( #Named("clientId") String clientId, String jsonObject )
(or something with a class that includes a List of String's so you can send multiple entities in one request.)
And put an element into your JSON which tells the server which entity class the JSON should be deserialized into.
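For example, with json.org classes the wrapping could look like the sketch below (the "type"/"payload" keys and field names are just illustrative; pick whatever convention suits you):

// Sketch using org.json: wrap the serialized entity with a "type" hint
// so the server knows which class to deserialize the payload into.
String buildEnvelope() throws JSONException {
    JSONObject envelope = new JSONObject();
    envelope.put("type", "Bla");                                     // tells the server which entity class to use
    envelope.put("payload", new JSONObject().put("someField", "x")); // the entity's own fields
    return envelope.toString();                                      // pass this as the jsonObject argument of sendData
}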
Try using @ApiResourceProperty on the field.
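For example (a sketch only; whether the nested type then shows up in the generated client library depends on your Endpoints and client versions):

import com.google.api.server.spi.config.ApiResourceProperty;

public class Bla {
    @ApiResourceProperty(name = "bla")  // explicitly expose the nested object in the API resource
    private Bla2 bla = new Bla2();

    public Bla2 getBla() { return bla; }
    public void setBla(Bla2 bla) { this.bla = bla; }
}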