I have a self-written Java container that uses Inversion of Control (IoC); the container expects an implementation of a certain interface.
Example of the interface:
public interface OnWrite {
    void onWrite(Context context);
}
Now, I'd like the user to provide the implementation not in Java but in JavaScript, hence the user writes JavaScript code.
Example of the user provided JavaScript implementation:
function onWrite(context) {
    // Do something
}
The JavaScript has to be executed by Node.js.
The only solution I can imagine is that the context objects on both sides are proxy objects which communicate through sockets.
Do you have any other ideas? I appreciate any suggestions.
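For what it's worth, the proxy idea can be sketched with java.lang.reflect.Proxy. In this sketch the socket transport is abstracted into a Function so the shape is visible without any networking, the wire format is made up, and a String stands in for the Context type:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.function.Function;

public class ProxyIdea {
    // The container-facing interface from the question (String stands in for Context).
    public interface OnWrite {
        void onWrite(String context);
    }

    // Builds an OnWrite whose calls are serialized and handed to a transport.
    // In a real setup the transport would write to a socket connected to the
    // Node.js process; here it is just a Function so the idea can be shown.
    public static OnWrite remoteOnWrite(Function<String, String> transport) {
        InvocationHandler handler = (proxy, method, args) -> {
            // A trivial, made-up wire format: methodName(arg). A real
            // implementation would use JSON and marshal the Context both ways.
            String message = method.getName() + "(" + args[0] + ")";
            transport.apply(message);
            return null; // onWrite is void
        };
        return (OnWrite) Proxy.newProxyInstance(
                OnWrite.class.getClassLoader(),
                new Class<?>[] { OnWrite.class },
                handler);
    }

    public static void main(String[] args) {
        StringBuilder wire = new StringBuilder();
        OnWrite callback = remoteOnWrite(msg -> { wire.append(msg); return "ok"; });
        callback.onWrite("ctx-1");
        System.out.println(wire); // what would travel over the socket
    }
}
```

The container only ever sees the OnWrite interface; everything behind the proxy (socket, child process, HTTP) is swappable.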
I'm writing an integration with a third-party API gateway, and to keep it as decoupled as possible (and to be able to change the provider in the future), I created three interfaces containing the methods for reading, writing and deleting from the gateway. (There are a lot of methods I need to use, so I don't want to cram everything into one large interface and violate the interface segregation principle.) The API gateway is used to handle app creation and other CRUD operations.
I'm not sure what the best way to move forward is. I could create an interface like this
interface Api_Gateway_Create {
    public function create_app( string $organization, string $developer_id, string $body );
    // other methods.
}
Then, for a concrete implementation, I create a class that implements this, and whenever I use it I have to provide all three arguments.
This seems a bit constricting. What if, when I replace the provider, I no longer need $developer_id? I could give all the arguments default values
interface Api_Gateway_Create {
    public function create_app( string $organization,
                                string $developer_id = 'some-default-value',
                                string $body = 'some-default-value' );
    // other methods.
}
But that means I'll potentially end up with arguments I don't need, and that could mess up my implementation.
The last idea that came to mind is that I could just use a variadic parameter and let the implementation take care of the arguments
interface Api_Gateway_Create {
    public function create_app( ...$arguments_list );
    // other methods.
}
In the implementation I'd have
class Create_App_Implementation1 implements Api_Gateway_Create {
    public function create_app( ...$arguments_list ) {
        list( $organization, $developer_id, $body ) = $arguments_list;
        // Some business logic.
    }
}
or
class Create_App_Implementation2 implements Api_Gateway_Create {
    public function create_app( ...$arguments_list ) {
        list( $organization, $app_key, $body ) = $arguments_list;
        // Some business logic.
    }
}
That way I don't need to care if my provider offers these arguments, because I'll just implement the ones I need.
This however presents another problem. In the consuming code, say a class that uses the create_app() method via dependency injection, I need to take extra care that the correct values are passed. And this is not future-proof, as I'd need to change the code in the consuming class as well (which to me seems like the opposite of an interface's intention).
With the first approach I will never have to change the consuming code, because I don't care which provider I'm using as long as I expect the same result (based on the same input arguments). But as I've mentioned, this seems a bit constricting: two providers could handle this in different ways.
Has anybody had to face this kind of thing, and what is the industry standard for handling it?
I'm writing in PHP, but I think this applies equally to Java because of its object-oriented nature.
Has anybody had to face this kind of thing, and what is the industry standard for handling it?
I'm sure they have.
There is no "industry standard" way of doing this.
(You should probably remove phrases like "industry standard" and "best practice" from your vocabulary. They are harmful for communication, in my opinion. Read "No best practices" and think about what he is saying.)
I'm not familiar enough with PHP to say what is commonly done to address this in that language.
In Java, there are two common approaches:
Define lots of different method (or constructor) overloads for the common cases; e.g.
public interface CreateAppApi {
    public String create_app(String organization, String developerId,
                             String body);
    public String create_app(String organization, String developerId);
    public String create_app(String developerId);
}
This doesn't work well if the parameters all have the same type: the different overloads may not be distinguishable.
Use the fluent builder pattern; e.g. define the interface like this:
public interface CreateAppRequest {
    public CreateAppRequest organization(String value);
    public CreateAppRequest developer(String developerId);
    public CreateAppRequest body(String body);
    public String send();
}
and use it like this:
String result = new CreateAppRequestImpl().organization(myOrg)
        .developer(myId).body(someBody).send();
This works nicely from the API user perspective, and it is easy to evolve. (Just add more methods for supplying new parameters to the API and implement them.) The downside is that there is more boilerplate code.
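To make the boilerplate concrete, here is a rough sketch of what an implementation might look like. CreateAppRequestImpl is hypothetical, and send() just reports its inputs instead of calling a real provider:

```java
public class FluentDemo {
    // The fluent interface from the answer above.
    public interface CreateAppRequest {
        CreateAppRequest organization(String value);
        CreateAppRequest developer(String developerId);
        CreateAppRequest body(String body);
        String send();
    }

    // Hypothetical implementation: collects parameters and, in send(),
    // would call the provider. Provider-specific defaults live here,
    // not in the interface.
    public static class CreateAppRequestImpl implements CreateAppRequest {
        private String organization = "default-org";
        private String developer;
        private String body = "";

        public CreateAppRequest organization(String value) { this.organization = value; return this; }
        public CreateAppRequest developer(String developerId) { this.developer = developerId; return this; }
        public CreateAppRequest body(String body) { this.body = body; return this; }

        public String send() {
            if (developer == null) throw new IllegalStateException("developer is mandatory");
            return "created app for " + developer + " in " + organization;
        }
    }

    public static void main(String[] args) {
        String result = new CreateAppRequestImpl()
                .organization("acme").developer("dev-42").body("{}").send();
        System.out.println(result);
    }
}
```

Note that swapping providers only means writing another implementation of CreateAppRequest; callers that build requests fluently never change.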
Java supports variadic parameters, but they are not suitable for the use-case you are describing. (They are applicable to cases where you have a variable number of values that essentially mean the same thing; e.g. an inline list of strings representing methods of our hypothetical "app".)
In Python, there is no method overloading ... because there is no need for it. Instead:
- you can use positional or keyword arguments, or a combination of them.
- you also have constructs like *args and **kwargs to pass through arbitrary parameters without explicitly declaring them.
Generally speaking, keyword parameters are better for APIs which require a variety of arguments with different meanings and (possibly) types.
So in this example you might write something like this:
class CreateAppRequest(SomeBaseClass):
    def __init__(self, organization='defaultOrg', developer=None, body='',
                 **kwargs):
        super(CreateAppRequest, self).__init__(**kwargs)
        if developer is None:
            raise Exception('developer is mandatory')
        self.developer = developer
        self.organization = organization
        self.body = body

    def send(self):
        ...
The builder / fluent approach can be used in Python too.
I've got a class which has some operations dependent on each other.
class MyFile {
    private String uploadedUri;

    public void upload() {
        // http upload
    }
    public void asyncUpload() {
        // http async upload
    }
    public void convert() {
        // http call to convert
    }
    public void convertAsync() {
        // http call to convert, but async
    }
    public void extract() {
        // http call to extract
    }
    public void extractAsync() {
        // http call to extract, but async
    }
    // some other operations
}
Now, my convert operation depends on upload, it only acts on the uploaded URI. In the sync convert, I check if the uri is set. If not, I'll first upload it. Similarly, there are other methods (extract for example) in the class which depend on the convert method, i.e. if the conversion is not done, they'll attempt to convert it. This is being done so that the caller of the methods doesn't have to worry about the order.
My problem is with the async methods. When the sequence of methods is like this:
MyFile myfile = new MyFile();
myfile.convertAsync();
myfile.extractAsync();
Inside extractAsync(), I can't be sure whether the conversion has taken place, since it's happening in a separate thread. So the extract will start the conversion as well, which will lead to duplicate conversions of the file. The same problem arises in any of the other dependent async methods.
If I return CompletableFuture from the async methods and force the user to chain the operations, so that extract is only called after convert has completed, then the user has to know the order of the methods, which defeats my purpose and differs from the sync implementation of the same methods.
I want to know if this is the right way of handling dependent operations in a domain class as per DDD. If yes, how to handle this scenario?
I want to know if this is the right way of handling dependent operations in a domain class as per DDD. If yes, how to handle this scenario?
One approach to consider is to leave all of the asynchronous work out in the application layer.
The basic idea is that your domain model -- which is fundamentally just a bookkeeping device -- acts as an in-memory state machine. So it doesn't have asynchronous side effects of its own. Instead, it tells the application what information it needs to read/write, and the application is responsible for figuring out how to do that.
if (domainModel.needsData()) {
    Data data = application.readData();
    domainModel.onData(data);
}
See Functional Core, Imperative Shell by Gary Bernhardt and Building Protocol Libraries the Right Way by Cory Benfield.
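A minimal Java sketch of this split, applied to the upload/convert example (ConversionModel and the pipeline names are hypothetical): the model is synchronous and purely in-memory, while the application layer owns the CompletableFuture chain, so the ordering problem lives in exactly one place.

```java
import java.util.concurrent.CompletableFuture;

public class ShellDemo {
    // Functional core: the domain model never does I/O. It only reports
    // what it needs and records what it is given.
    static class ConversionModel {
        private String uploadedUri;
        private boolean converted;

        boolean needsUpload()  { return uploadedUri == null; }
        boolean needsConvert() { return uploadedUri != null && !converted; }
        void onUploaded(String uri) { this.uploadedUri = uri; }
        void onConverted()          { this.converted = true; }
        boolean isConverted()       { return converted; }
    }

    public static void main(String[] args) {
        ConversionModel model = new ConversionModel();

        // Imperative shell: the application layer owns all asynchronous
        // side effects (here simulated) and drives the model.
        CompletableFuture<Void> pipeline = CompletableFuture
                .supplyAsync(() -> "http://example/file")   // stand-in for the upload
                .thenAccept(model::onUploaded)
                .thenRun(() -> { if (model.needsConvert()) model.onConverted(); });

        pipeline.join();
        System.out.println(model.isConverted());
    }
}
```

Because the model itself is synchronous, it is also trivially unit-testable without any threads.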
I have code like below
if (MessageBoxProvider.questionMessageBox(shell, title, message)) {
    return performOverwrite(file);
}
I want to test (with JUnit) how many times the performOverwrite(file) method is called. I know I can use the verify method for that; my problem is that MessageBoxProvider.questionMessageBox(shell, title, message) only becomes true when the user clicks OK. Using JUnit, how can I make the if condition true?
Unit testing business logic becomes very complicated if user-interface code is mixed in with it. Ideally, you should adopt a design pattern, such as MVC or MVP that prevents this entirely.
If you can't or won't go down that route, consider defining an interface that contains all your message box methods. E.g.
public interface MessagePrompter {
    boolean poseQuestion(String title, String message);
    // ...
}
In your class constructor, accept an object of this type and store it. In your tests, you can mock this object and use it to control what the test user has done.
In your production code, use a concrete implementation of this interface that calls your MessageBoxProvider methods.
This type of approach has the benefit of making your application more portable. If you want to release a command-line version or a web-based version, you simply change the way your concrete implementation behaves.
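As a sketch of how this looks in a test, without even needing a mocking framework (FileSaver and the counter are illustrative stand-ins for your class):

```java
public class PrompterDemo {
    // The interface suggested above: all message-box interaction goes through it.
    public interface MessagePrompter {
        boolean poseQuestion(String title, String message);
    }

    // The class under test receives the prompter via its constructor.
    static class FileSaver {
        private final MessagePrompter prompter;
        int overwriteCount = 0; // exposed so the test can count calls

        FileSaver(MessagePrompter prompter) { this.prompter = prompter; }

        void save(String file) {
            if (prompter.poseQuestion("Overwrite?", "Overwrite " + file + "?")) {
                performOverwrite(file);
            }
        }

        private void performOverwrite(String file) { overwriteCount++; }
    }

    public static void main(String[] args) {
        // A lambda stub standing in for the user clicking OK; no dialog is shown.
        FileSaver saver = new FileSaver((title, message) -> true);
        saver.save("a.txt");
        System.out.println(saver.overwriteCount);
    }
}
```

In production code the constructor would receive an implementation whose poseQuestion delegates to MessageBoxProvider.questionMessageBox.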
I can only find stuff about the inverse; using Clojure to implement Java interfaces. However, I want to write a programme in Clojure and allow one to extend it with Java. For example:
;; P.clj
(defprotocol P
  (f [a])
  (g [a b]))
// I.java
public class I implements P {
    public Object f(Object a) { … }
    public Object g(Object a, Object b) { … }
}
Also, how would I specify parameter types so I don’t have to use Object everywhere?
The only option I currently see is using dot-notation and relying on duck typing but I prefer compile-time verification of interface implementation on the Java side.
Yes, it's quite possible, and just like that.
From the pertinent part of the docs:
defprotocol will automatically generate a corresponding interface, with the same name as the protocol, i.e. given a protocol my.ns/Protocol, an interface my.ns.Protocol. The interface will have methods corresponding to the protocol functions, and the protocol will automatically work with instances of the interface.
And, to answer your question:
A Java client looking to participate in the protocol can do so most efficiently by implementing the protocol-generated interface.
As Isaac has already said, yes.
However, without a source representation, I think it's kind of a horsesh*t claim. Please note, I'm not referring to Isaac's answer when I say that; I'm referring to the way that Clojure works in this case.
If you need Java interop, you might want to stick with boring Java interfaces. I think that would also make it easier to interoperate with other languages on the JVM, as it is the least common denominator. I also think it makes it easier to communicate with non-Clojure developers.
I've been asking some questions about adapting the command protocol for use in my client server environment. However, after some experimentation, I have come to the conclusion that it will not work for me. It is not designed for this scenario. I'm thus at a loose end.
I have implemented a sort of RPC mechanism before whereby I had a class entitled "Operation". I also had an enum entitled "Action" that contained names for actions that could be invoked on the server.
Now, in my old project, every time that the client wanted to invoke an action on the server, it would create an instance of 'Operation' and set the action variable with a value from the "Action" enum. For example
Operation serverOpToInvoke = new Operation();
serverOpToInvoke.setAction(Action.CREATE_TIME_TABLE);
serverOpToInvoke.setParameters(params); // a Map of parameter names to values
ServerReply reply = NetworkManager.sendOperation(serverOpToInvoke);
...
On the server side, I had to perform the horrible task of determining which method to invoke by examining the 'Action' enum value with a load of 'if/else' statements. When a match was found, I would call the appropriate method.
The problem with this was that it was messy, hard to maintain and was ultimately bad design.
My question is thus: is there some sort of pattern I can follow to implement a nice, clean and maintainable RPC mechanism over a TCP socket in Java? RMI is a no-go for me here, as the client (Android) doesn't support RMI. I've exhausted all avenues at this stage. The only other option would maybe be a REST service. Any advice would be very helpful.
Probably the easiest solution is to loosely follow the path of RMI.
You start out with an interface and an implementation:
interface FooService {
    Bar doThis( String param );
    String doThat( Bar param );
}

class FooServiceImpl implements FooService {
    ...
}
You deploy the interface to both sides and the implementation to the server side only.
Then, to get a client object, you create a dynamic proxy. Its invocation handler does nothing but serialize the service class name, the method name and the parameters, and send them to the server (initially you can use an ObjectOutputStream, but you can switch to alternative serialization techniques, like XStream for example).
The server listener takes this request and executes it using reflection, then sends the response back.
The implementation is fairly easy and it is transparent from both sides, the only major caveat being that your services will effectively be singletons.
I can include some more implementation detail if you need, but this is the general idea I would follow if I had to implement something like that.
Having said that, I'd probably search a bit more for an already existing solution, like webservices or something similar.
Update: This is what an ordinary (local) invocation handler would do.
class MyHandler implements InvocationHandler {
    private Object serviceObject;

    @Override
    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        return method.invoke(serviceObject, args);
    }
}
Where serviceObject is your service implementation object wrapped into the handler.
This is what you have to cut in half, and instead of calling the method, you need to send the following to the server:
The full name of the interface (or some other value that uniquely identifies the service interface)
The name of the method.
The names of the parameter types the method expects.
The args array.
The server side will have to:
Find the implementation for that interface (the easiest way is to have some sort of map where the keys are the interface names and the values the implementation singleton instance)
Find the method, using Class.getMethod( name, paramTypes );
Execute the method by calling method.invoke(serviceObject, args); and send the return value back.
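The server-side steps above can be sketched roughly like this (DispatchDemo, FooService and the registry are illustrative; a real server would first deserialize the request from the socket):

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class DispatchDemo {
    public interface FooService { String doThat(String param); }

    static class FooServiceImpl implements FooService {
        public String doThat(String param) { return "got " + param; }
    }

    // Registry mapping interface names to singleton implementations.
    static final Map<String, Object> services = new HashMap<>();

    // What the server does with a deserialized request: look up the
    // implementation, find the method via reflection, and invoke it.
    static Object dispatch(String iface, String methodName,
                           Class<?>[] paramTypes, Object[] args) throws Exception {
        Object service = services.get(iface);
        Method method = Class.forName(iface).getMethod(methodName, paramTypes);
        return method.invoke(service, args);
    }

    public static void main(String[] args) throws Exception {
        services.put(FooService.class.getName(), new FooServiceImpl());
        Object result = dispatch(FooService.class.getName(), "doThat",
                new Class<?>[] { String.class }, new Object[] { "hello" });
        System.out.println(result); // the return value that would be sent back
    }
}
```

The client-side invocation handler produces exactly the four pieces of data dispatch() consumes, which is what makes the mechanism transparent on both sides.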
You should look into protocol buffers from google: http://code.google.com/p/protobuf/
This library defines an IDL for generating struct-like classes that can be written to and read from a stream/byte array/etc. It also defines an RPC mechanism using the defined messages.
I've used this library for a similar problem and it worked very well.
RMI is the way to go.
Java RMI is a Java application programming interface that performs the object-oriented equivalent of remote procedure calls (RPC).