I'm working on an API that should provide simple access to a number of remote web-service based resources.
Some of these remote resources require special parameters to be passed before interaction. For example, one of them requires a pair of developer keys, another requires a pair of keys plus a unique identifier, and a third one doesn't require these parameters at all. I'm working with 3 services now, but their number may grow.
For each web service I have a corresponding implementation of my API. The problem is that I don't know how to let my API accept an unknown number of Strings whose meaning differs per service.
Some of my suggestions:
1.
ServiceFactory.createService (ServiceEnum type, Properties keys);
2.
ServiceFactory.createService (ServiceEnum type, ServiceParams params);
Where ServiceParams is a marker interface. In this case I'll have a helper class like this:
public class ServiceHelper {
    public static ServiceParams createFirstServiceParams(String secretKey, String publicKey);
    public static ServiceParams createSecondServiceParams(String secretKey, String publicKey, String uid);
    public static ServiceParams createThirdServiceParams();
}
Pros: meaningful parameter names for each service.
Cons: if I add support for a fourth service, the user will have to update the factories module as well. With the first variant the user would only have to download the new module.
3.
ServiceFactory.createService (ServiceEnum type, String ... params);
Pros: the easiest to use. The user doesn't need to do anything extra (like creating Properties or ServiceParams).
Cons: the least obvious way. The user has to know which set of params corresponds to the service they want to create.
4-6:
the same variants, but the params are passed not to the factory method but to the Service instance (in its init() method, for example).
Pros: the user can change the keys for a service if needed without having to create a new instance of the same service.
Cons: more complicated, and the benefit is questionable.
Which variant do you prefer? Why? Your variants are welcome.
You could have two factory methods, one where you pass a Map containing parameters, and the other without parameters:
ServiceFactory.createService(ServiceEnum type);
ServiceFactory.createService(ServiceEnum type, Map<String,?> params);
In this case, it's the responsibility of the caller to get the parameters right, but it gives you maximal flexibility.
I would probably go with option 1 and replace Properties with Map (Properties extends Hashtable, which already implements Map).
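For illustration, the Map-based factory might be used like this (a sketch: the Service return type, the enum constants and the key names are assumptions, not part of the original API):

// Hypothetical usage of the Map-based factory; key names are illustrative.
Map<String, String> keys = new HashMap<>();
keys.put("secretKey", "my-secret-key");
keys.put("publicKey", "my-public-key");
Service first = ServiceFactory.createService(ServiceEnum.FIRST, keys);

// The third service needs no parameters, so the one-argument overload is enough.
Service third = ServiceFactory.createService(ServiceEnum.THIRD);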
I have an API class where you set and get the info you need for a later API call.
I want to make it easier for the next person who's going to use this.
So instead of doing this:
api.addURL("urltorequesttoken");
api.addHeader("client_id","sdfsfsdfsd")
.addHeader("client_secret","sdfsdfsfsfd")
.addHeader("grant_type","client_credentials")
.addHeader("scope","READ");
api.addBody("bodyToSend")
I want to do this:
String URL = "";
URL = "put your URL here";
So I pass the URL and other variables as parameters to another method, where I do what I did in the first block of code, so they don't need to know about the API class and its methods. But I don't know how to handle the hashmap: how can I make that user friendly and then pass it as a parameter? Also, what type of parameters should the methods receiving this info have: (Map<String, String>) or (String key, String value)?
EDIT(ADD):
So there's a class that a developer is going to create, let's call it CreateToken; that class currently has:
api.addURL("urltorequesttoken");
api.addHeader("client_id", "sdfsfsdfsd")
   .addHeader("client_secret", "sdfsdfsfsfd")
   .addHeader("grant_type", "client_credentials")
   .addHeader("scope", "READ");
api.addBody("bodyToSend");
There's another class called BASE, where I'm doing the core services. To make this easier for the person creating their class, I don't want that block of code in their class but in mine, so that in their class all they have to do is set the URL, headers and body (for the POST method). So instead of this:
api.addURL("urltorequesttoken");
they will do:
URL = "urltorequesttoken";
and there's a method on their class to send me this, or for me to get it, i.e.
fillAPICallInfo(URL, headers, body);
I will receive that in the BASE class, but I don't know how to handle the Map variables. How do I make it easy for the developer so they just provide the key and value, and how do I receive that in my class (as a Map or as Strings)?
So you can simply pass a Map<String, String> as a parameter:
public void fillAPICallInfo(String url, Map<String, String> headers, String body) {
    // Assuming there is an instance of the API class named api available
    api.addURL(url);
    headers.forEach((h, v) -> api.addHeader(h, v));
    api.addBody(body);
}
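On the developer's side, building the map and handing everything over could then look like this (a sketch; Map.of requires Java 9+, otherwise build a HashMap):

Map<String, String> headers = Map.of(
        "client_id", "sdfsfsdfsd",
        "client_secret", "sdfsdfsfsfd",
        "grant_type", "client_credentials",
        "scope", "READ");
fillAPICallInfo("urltorequesttoken", headers, "bodyToSend");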
I have a Request object with a field request_type and a number of other fields. request_type can be 'additionRequest', 'deletionRequest' or 'informationRequest'.
Based on request_type, the other fields in the Request object are processed differently.
My simple-minded approach is:
if additionRequest
    algorithm1
else if deletionRequest
    algorithm2
else if informationRequest
    algorithm3
end
How can I avoid these if statements and still apply the proper algorithm?
If you want to avoid conditional statements, you can register a handler per request type, for example using Java 8 functional interfaces:
Map<String, Function<Request, Result>> parsers = new HashMap<>();
parsers.put("additionRequest", request -> {
    // parse the request and generate a Result here
    Result result = new Result(); // illustrative construction
    return result;
});

Result result = parsers.get(request.request_type).apply(request);
Seems to me that perhaps the Command pattern could come in handy here. If you make an object structure of these commands and encapsulate the algorithm you want to execute within each object, then you can construct the specific sub-objects and later on use the command's "execute" method to invoke the algorithm. Just make sure you are using polymorphic references:
if additionRequest
    algorithm1
else if deletionRequest
    algorithm2
else if informationRequest
    algorithm3
end
will become
void theRequestExecutor(Request req) {
    req.executeAlgorithm();
}
https://en.wikipedia.org/wiki/Command_pattern
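A minimal sketch of that idea, assuming the request types can be modeled as subclasses of a common Request type (all class names below are illustrative):

// Command pattern: each concrete request carries its own algorithm.
public abstract class Request {
    public abstract void executeAlgorithm();
}

public class AdditionRequest extends Request {
    @Override
    public void executeAlgorithm() {
        // algorithm1 goes here, using this request's own fields
    }
}

public class DeletionRequest extends Request {
    @Override
    public void executeAlgorithm() {
        // algorithm2 goes here
    }
}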
Use a HashMap<RequestType, RequestHandler> for this case. RequestHandler can be an interface that is implemented once for each situation you want to handle.
Hope this helps.
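A possible shape for that, assuming request_type can be mapped to a RequestType enum (the enum constants and handler classes below are illustrative):

public enum RequestType { ADDITION_REQUEST, DELETION_REQUEST, INFORMATION_REQUEST }

public interface RequestHandler {
    void handle(Request request);
}

// Registration: one implementation per request type (AdditionHandler etc. are hypothetical classes).
Map<RequestType, RequestHandler> handlers = new HashMap<>();
handlers.put(RequestType.ADDITION_REQUEST, new AdditionHandler());
handlers.put(RequestType.DELETION_REQUEST, new DeletionHandler());
handlers.put(RequestType.INFORMATION_REQUEST, new InformationHandler());

// Dispatch without any if/else chain:
handlers.get(requestType).handle(request);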
You can create a Map with String keys and Function<RequestType, ResponseType> values. Depending on the type of request, it will call the corresponding function.
Example:
Map<String, Function<RequestType, ResponseType>> requestProcessors = new HashMap<>();
requestProcessors.put("additionRequest", this::methodToHandleAddRequest);
requestProcessors.put("deletionRequest", this::methodToHandleDeleteRequest);
Inside the request handler do:
return this.requestProcessors.get(request.request_type).apply(request);
Note that you may have to create a common response interface if the responses differ; each concrete response type would then implement that interface.
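For example, a hypothetical common interface with one implementation per response kind:

public interface ResponseType { }

public class AdditionResponse implements ResponseType {
    // fields describing the result of an addition request
}

public class DeletionResponse implements ResponseType {
    // fields describing the result of a deletion request
}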
The object-oriented solution is always to include the logic with the data. In this case include whatever you want a request to do in the request object itself, instead of (presumably) passing out pure data.
Something like
public interface Request {
    Response apply();
}
Anything else, like creating a map of functions, creating a thin abstraction layer or applying some pattern, is a workaround. These are suitable only if the first solution cannot be applied, which might be the case if the Request objects are not under your control, for example because they are generated automatically or are third-party classes.
I'm fairly new to DDD, and even after reading the blue and the red book I still have some questions about how to translate some principles into code, specifically using Kotlin and Java.
For example, I identified a Client aggregate root that receives the parameters needed for its creation, like Name and Address:
class Client : AggregateRoot {
    var clientId: ClienteId
    var name: Name
    var address: Address

    constructor(name: Name, address: Address) : super() {
        // validations ....
        this.name = name
        this.address = address
    }
}
Easy part:
To create a new Client I receive a DTO inside the REST service and try to create a new Client instance passing the parameters above. If everything is valid and all rules are fulfilled, I send the new instance of Client to the repository; pretty straightforward.
clientRepository.store(client)
Other part:
I need to look up a Client to change its address, so I send the id to the repository and find the Client in the database. Then I need to convert the database entity back into the aggregate root and return it to the caller.
override fun getById(id: Long): Client {
    val clientEntity = em.find(...)
    val client = Client(.....) // But I need another constructor with ClientId
    return client
}
Then I will need a new constructor, one that receives more parameters, like the ClientId:
constructor(clientId: ClienteId, name: Name, address: Address) : super() {
The problem is that every service can call this new constructor and create an incorrect instance of my aggregate root, so my questions are:
Is there a way to hide the full constructor so that only the repository or specific layers can see it, like internal in C#?
Is there any solution in Java or Kotlin to avoid exposing this constructor, which should be used only in tests and integrations?
Another example: suppose I didn't need the address to be passed every time a client is created, but only later through another method, like:
client.addAddress(address)
But in both cases I will need to rebuild the entire Client from the database, so I will need a second constructor with the address parameter.
So, the problem is how to rehydrate an Aggregate from the persistence without breaking its encapsulation by exposing the wrong interface to the client code (i.e. the Application layer or the Presentation layer).
I see two solutions to this:
Use reflection to populate the fields. This is the solution most ORMs use and it is also the most generic. It works for most persistence types, even when there is an impedance mismatch. Some ORMs need annotations on fields or relations. A minimal sketch of this option follows the list.
Expose a different interface to the client code. This means that your Aggregate implementation is larger than the interface and contains additional initialization methods used only by the infrastructure.
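For the first (reflection) option, a minimal sketch, assuming the Kotlin property clientId has a backing field of that name (this is only an illustration; real ORMs do considerably more):

import java.lang.reflect.Field;

public class ClientRehydrator {

    // Build the aggregate through its public constructor, then set the persisted id
    // reflectively so no id-taking constructor needs to be exposed to the application layer.
    public Client rehydrate(ClienteId id, Name name, Address address) throws ReflectiveOperationException {
        Client client = new Client(name, address);
        Field idField = Client.class.getDeclaredField("clientId");
        idField.setAccessible(true);
        idField.set(client, id);
        return client;
    }
}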
As an example of the second approach, in pseudo-code you could have:
// what you want the upper layers to see
interface Client {
    void addAddress(address);
}

// the actual implementations
public class ClientAggregate implements Client
{
    void rehydrate(clientId, name, address) {...}
    void addAddress(address) {...}
}

public class ClientRepository
{
    // this method returns Client (the interface)
    Client getById(id) {
        val clientEntity = em.find(...)
        val client = new ClientAggregate()
        client.rehydrate(clientEntity.id, clientEntity.name, clientEntity.address)
        return client // you return a ClientAggregate, but callers see only Client (the interface)
    }
}
As a side note, I don't expose the constructor for creating an Aggregate from the Domain point of view. I like to have empty constructors and a dedicated method, named from the Ubiquitous Language, that creates the Aggregate. The reason is that it is not clear that the constructor creates a new Aggregate: a constructor instantiates a new instance of a class, which is more an implementation detail than a domain concern. An example:
class Client {
    constructor() { /* some internal initialization, if needed */ }
    void register(name) { ... }
}
I am creating a Client API in Java using: the Apache Jena framework + Hydra (for the hypermedia-driven part) + my private vocab, similar to Markus Lanthaler's Event-API vocab, instead of schema.org (for the ontology/vocabulary part).
Section 1:
After looking at Markus Lanthaler's EventDemo repo and hydra-java, I found that they create classes for each hydra:Class, which can break the client in the future. For example:
A Person class (Person.java):
public class Person
{
    String name;
}
But a future requirement may turn name into a class of its own, e.g.:
public class Name
{
    String firstName;
    String lastName;
}
So to fulfill this requirement I would have to update the Person class like this:
public class Person
{
    Name name;
}
Question 1:
Is my understanding of this section correct? If yes, what is the way to deal with this part?
Section 2:
To avoid the above problem I created a GenericResource class (GenericResource.java):
public class GenericResource
{
    private Model model;

    public void addProperty(String propertyName, Object propertyValue)
    {
        propertyName = "myvocab:" + propertyName;
        // Because the caller will pass only the property name, e.g. "name",
        // and I will map it to "myvocab:name"
        // Some logic to add propertyName and propertyValue to the model
    }

    public GenericResource retrieveProperty(String propertyName)
    {
        propertyName = "myvocab:" + propertyName;
        // Some logic to query and retrieve the propertyName data from this object,
        // add it to a new GenericResource object and return it
        return null; // placeholder
    }

    public GenericResource performAction(String actionName, String postData)
    {
        // Some logic to make the HTTP call and return the response
        return null; // placeholder
    }
}
But again I'm stuck on several problems:
Problem 1: It is not necessary that every propertyName is mapped to myvocab:propertyName. Some may be mapped to some other vocab, e.g. hydra:propertyName, schema:propertyName, rdfs:propertyName, newVocab:propertyName, etc.
Problem 2: How do I validate whether a propertyName belongs to this class?
Suggestion: put a type field/variable in the GenericResource class and then check supportedProperty in the vocab corresponding to that class. For clarity, assume the above Person class is also defined in the vocab and has supportedProperty: [name, age, etc.]. So my GenericResource has type "Person", and at the time of addProperty or some other operation I will query the vocab to check whether that property is in the supportedProperty list (or in the supportedOperation list in the case of performAction()).
Is this the correct way? Any other suggestions are most welcome.
Question 1: Is my understanding of this section correct? If yes, what is the way to deal with this part?
Yes, that seems to be correct. Just because hydra-java decided to create classes doesn't mean you have to do the same in your implementation, though. I would rather write a mapper and annotate an internal class that can then stay stable (you need to update the mapping instead). Your GenericResource approach also looks good, by the way.
Problem 1: It is not necessary that every propertyName is mapped to myvocab:propertyName. Some may be mapped to some other vocab, e.g. hydra:propertyName, schema:propertyName, rdfs:propertyName, newVocab:propertyName, etc.
Why don't you store and access the properties with full URLs, i.e., including the vocab? You can of course implement some convenience methods to simplify the work with your vocab.
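For example, with Jena you could key properties by their full URIs (a sketch; the namespace and values are purely illustrative):

// Using org.apache.jena.rdf.model (Model, ModelFactory, Resource)
Model model = ModelFactory.createDefaultModel();
String myVocab = "http://example.org/myvocab#"; // assumed namespace
Resource person = model.createResource();
person.addProperty(model.createProperty(myVocab + "name"), "Alice");
person.addProperty(model.createProperty("http://schema.org/email"), "alice@example.org");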
Problem 2: How do I validate whether a propertyName belongs to this class?
Suggestion: Put a type field/variable in the GenericResource class.
JSON-LD's @type in node objects (not in @value objects) corresponds to rdf:type. So simply add it like every other property.
And then check supportedProperty in vocab corresponding to that class.
Please keep in mind that supportedProperty only tells you which properties are known to be supported. It doesn't tell you which aren't. In other words, it is valid to have properties other than the ones listed as supportedProperty on an object/resource.
Ad Q1:
For the flexibility you want, the client has to be prepared for semantic and structural changes.
In HTML that is possible. The server can change the structure of an html form in the way outlined by you, by having a firstName and lastName field rather than just a name field. The client does not break, rather it adjusts its UI, following the new semantics. The trick is that the UI is generated, not fixed.
A client which tries to unmarshal the incoming message into a fixed representation, such as a Java bean, is out of luck, and I do not think there is any solution how you could deserialize into a Java bean and survive a change like yours.
If you do not try to deserialize, but stick to reading and processing the incoming message into a more flexible representation, then you can achieve the kind of evolvability you're after. The client must be able to handle the flexible representation accordingly. It could generate UIs rather than binding data to fixed markup, which means, it makes no assumptions about the semantics and structure of the data. If the client absolutely has to know what a data element means, then the server cannot change the related semantics, it can only add new items with the new semantics while keeping the old ones around.
If there were a way how a server could hand out a new structure with a code-on-demand adapter for existing clients, then the server would gain a lot of evolvability. But I am not aware of any such solutions yet.
Ad Q2:
If your goal is to read an incoming json-ld response into a Jena Model on the client side, please see https://jena.apache.org/documentation/io/rdf-input.html
Model model = ModelFactory.createDefaultModel();
String base = null;
model.read(inputStream, base, "JSON-LD");
Thus your client will not break in the sense that it cannot read the incoming response. I think that is what your GenericResource achieves, too. But you could use Jena directly on the client side. Basically, you would avoid unmarshalling into a fixed type.
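For instance, once the model is loaded you can walk it generically instead of binding it to a bean (a sketch; the property URI is illustrative):

// Types from org.apache.jena.rdf.model; the vocab URI below is illustrative.
Property name = model.createProperty("http://example.org/myvocab#name");
StmtIterator it = model.listStatements(null, name, (RDFNode) null);
while (it.hasNext()) {
    Statement stmt = it.nextStatement();
    System.out.println(stmt.getSubject() + " -> " + stmt.getObject());
}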
I am developing an architecture in Java using Tomcat, and I have come across a situation that I believe is very generic and yet, after reading several questions/answers on StackOverflow, I couldn't find a definitive answer. My architecture has a REST API (running on Tomcat) that receives one or more files and their associated metadata and writes them to storage. The configuration of the storage layer has a 1-1 relationship with the REST API server, and for that reason the intuitive approach is to write a Singleton to hold that configuration.
Obviously I am aware that Singletons bring testability problems due to global state and the hardship of mocking Singletons. I also thought of using the Context pattern, but I am not convinced that the Context pattern applies in this case and I worry that I will end up coding using the "Context anti-pattern" instead.
Let me give you some more background on what I am writing. The architecture is comprised of the following components:
Clients that send requests to the REST API uploading or retrieving "preservation objects", or simply put, POs (files + metadata) in JSON or XML format.
The high level REST API that receives requests from clients and stores data in a storage layer.
A storage layer that may contain a combination of OpenStack Swift containers, tape libraries and file systems. Each of these "storage containers" (I'm calling file systems containers for simplicity) is called an endpoint in my architecture. The storage layer obviously does not reside on the same server where the REST API is.
The configuration of endpoints is done through the REST API (e.g. POST /configEndpoint), so that an administrative user can register new endpoints, edit or remove existing endpoints through HTTP calls. Whilst I have only implemented the architecture using an OpenStack Swift endpoint, I anticipate that the information for each endpoint contains at least an IP address, some form of authentication information and a driver name, e.g. "the Swift driver", "the LTFS driver", etc. (so that when new storage technologies arrive they can be easily integrated to my architecture as long as someone writes a driver for it).
My problem is: how do I store and load configuration in a testable, reusable and elegant way? I won't even consider passing a configuration object to all the various methods that implement the REST API calls.
A few examples of the REST API calls and where the configuration comes into play:
// Retrieve a preservation object's metadata (PO)
@GET
@Path("container/{containername}/{po}")
@Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
public PreservationObjectInformation getPOMetadata(@PathParam("containername") String containerName, @PathParam("po") String poUUID) {
    // STEP 1 - LOAD THE CONFIGURATION
    // One of the following options:
    // StorageContext.loadContext(containerName);
    // Configuration.getInstance(containerName);
    // Pass a configuration object as an argument of the getPOMetadata() method?
    // Some sort of dependency injection

    // STEP 2 - RETRIEVE THE METADATA FROM THE STORAGE
    // Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)
    // Pass poUUID as parameter

    // STEP 3 - CONVERT JSON/XML TO OBJECT
    // Unmarshal the file in JSON format
    PreservationObjectInformation poi = unmarshall(data);

    return poi;
}
// Delete a PO
@DELETE
@Path("container/{containername}/{po}")
public Response deletePO(@PathParam("containername") String containerName, @PathParam("po") String poName) throws IOException, URISyntaxException {
    // STEP 1 - LOAD THE CONFIGURATION
    // One of the following options:
    // StorageContext.loadContext(containerName); // Context
    // Configuration.getInstance(containerName); // Singleton
    // Pass a configuration object as an argument of the getPOMetadata() method?
    // Some sort of dependency injection

    // STEP 2 - CONNECT TO THE STORAGE ENDPOINT
    // Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)

    // STEP 3 - DELETE THE FILE
    return Response.ok().build();
}
// Submit a PO and its metadata
@POST
@Consumes(MediaType.MULTIPART_FORM_DATA)
@Path("container/{containername}/{po}")
public Response submitPO(@PathParam("containername") String container, @PathParam("po") String poName, @FormDataParam("objectName") String objectName,
        @FormDataParam("inputstream") InputStream inputStream) throws IOException, URISyntaxException {
    // STEP 1 - LOAD THE CONFIGURATION
    // One of the following options:
    // StorageContext.loadContext(containerName);
    // Configuration.getInstance(containerName);
    // Pass a configuration object as an argument of the getPOMetadata() method?
    // Some sort of dependency injection

    // STEP 2 - WRITE THE DATA AND METADATA TO STORAGE
    // Call the driver depending on the endpoint (JClouds if Swift, Java IO stream if file system, etc.)

    return Response.created(new URI("container/" + container + "/" + poName))
            .build();
}
UPDATE #1 - My implementation based on @mawalker's comment
Find below my implementation using the proposed answer. A factory creates concrete strategy objects that implement lower-level storage actions. The context object (which is passed back and forth by the middleware) contains an object of the abstract type (in this case, an interface) StorageContainerStrategy (its implementation will depend on the type of storage in each particular case at runtime).
public interface StorageContainerStrategy {
    public void write();
    public void read();
    // other methods here
}

public class Context {
    public StorageContainerStrategy strategy;
    // other context information here...
}

public class StrategyFactory {
    public static StorageContainerStrategy createStorageContainerStrategy(Container c) {
        if (c.getEndpoint().isSwift())
            return new SwiftStrategy();
        else if (c.getEndpoint().isLtfs())
            return new LtfsStrategy();
        // etc.
        return null;
    }
}

public class SwiftStrategy implements StorageContainerStrategy {
    @Override
    public void write() {
        // OpenStack Swift specific code
    }

    @Override
    public void read() {
        // OpenStack Swift specific code
    }
}

public class LtfsStrategy implements StorageContainerStrategy {
    @Override
    public void write() {
        // LTFS specific code
    }

    @Override
    public void read() {
        // LTFS specific code
    }
}
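To tie this back to the REST methods above, the strategy could be resolved per request, for example like this (a sketch; the container lookup is an assumption of mine, not part of the code above):

// Hypothetical STEP 1/2 inside getPOMetadata(): resolve the container, pick its strategy.
Container container = containerRegistry.findByName(containerName); // assumed lookup service
Context ctx = new Context();
ctx.strategy = StrategyFactory.createStorageContainerStrategy(container);
// The rest of the method only talks to the abstract strategy:
ctx.strategy.read();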
Here is the paper Doug Schmidt (in full disclosure my current PhD Advisor) wrote on the Context Object Pattern.
https://www.dre.vanderbilt.edu/~schmidt/PDF/Context-Object-Pattern.pdf
As dbugger stated, building a factory into your API classes that returns the appropriate 'configuration' object is a pretty clean way of doing this. But if you know the 'context' (yes, overloaded usage) of the paper being discussed, it is mainly for use in middleware, where there are multiple layers of context changes. And note that under the 'implementation' section it recommends using the Strategy pattern for adding each layer's 'context information' to the 'context object'.
I would recommend a similar approach. Each 'storage container' would have a different strategy associated with it; each "driver" therefore has its own strategy implementation class. That strategy would be obtained from a factory and then used as needed. (As for how to design your strategies, the best way, I'm guessing, would be to make your 'driver strategy' generic for each driver type and then configure it appropriately as new resources arise / the strategy object is assigned.)
But as far as I can tell right now (unless I'm reading your question wrong), this would only have 2 'layers' that the 'context object' would be aware of: the 'REST server(s)' and the 'storage endpoints'. If I'm mistaken then so be it... but with only 2 layers you can just use the Strategy pattern in the same way you were thinking of the Context pattern, and avoid the issue of singletons / the Context 'anti-pattern'. (You 'could' have a context object which contains the strategy for which driver to use, and then a 'configuration' for that driver... that wouldn't be insane, and might fit well with your dynamic HTTP configuration.)
The strategy factory class doesn't 'have to' be a singleton or have static factory methods either. I've made factories that are plain objects before just fine, even with dependency injection for testing. There are always trade-offs to different approaches, but I've found better testability to be worth it in almost all cases I've run into.
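As a minimal sketch of that idea, an injectable (non-static) variant of the StrategyFactory above, with the driver-to-strategy mapping supplied through the constructor (the getDriverName() accessor is an assumption):

import java.util.Map;

public class InjectableStrategyFactory {
    private final Map<String, StorageContainerStrategy> strategiesByDriver;

    // Tests can pass a map containing fakes or mocks instead of real drivers.
    public InjectableStrategyFactory(Map<String, StorageContainerStrategy> strategiesByDriver) {
        this.strategiesByDriver = strategiesByDriver;
    }

    public StorageContainerStrategy forContainer(Container c) {
        return strategiesByDriver.get(c.getEndpoint().getDriverName()); // getDriverName() is hypothetical
    }
}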