So I'm writing a web service architecture that includes FunctionProvider classes, which do the actual processing of requests, and a main Endpoint class, which receives requests and delegates them to the proper FunctionProvider.
I don't know exactly which FunctionProviders will be available at runtime, so I need to be able to 'register' (if that's the right word) them with my main Endpoint class, and query them to see if they match an incoming request.
public class MyFunc implements FunctionProvider {
    static {
        // self-registration: runs once when the class is initialized
        MyEndpoint.register(new MyFunc());
    }

    public boolean matchesRequest(Request req) {...}

    public void processRequest(Request req) {...}
}
public class MyEndpoint {
    private static ArrayList<FunctionProvider> functions = new ArrayList<FunctionProvider>();

    public static void register(FunctionProvider provider) {
        functions.add(provider);
    }

    public void doPost(Request request) {
        // find the FunctionProvider in functions
        // matching the request
    }
}
I've really not done much reflective Java like this (and the above is likely wrong, but hopefully demonstrates my intentions).
What's the nicest way to implement this without getting hacky?
Don't let the FunctionProviders register themselves. Bootstrap the endpoint through some application-init call that hands it a list of FunctionProviders; a sketch follows below. That way you can also configure priority (what if two providers both claim they can process a request?). The way you have set it up now, you need to invoke each provider class somehow just to trigger its static initializer, which is too indirect.
If detecting whether or not a FunctionProvider supports a given request is trivial, consider making it part of configuration: if this value is in the request, map it to that FunctionProvider. This would separate concerns a bit better. If the detection is complicated, consider doing it in separate classes from the FunctionProvider.
By configuring a delegate/function pointer you could possibly avoid needing a FunctionProvider class altogether (Java's functional interfaces, e.g. Predicate<Request>, are the closest equivalent to delegates).
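A minimal sketch of that bootstrap approach (the Endpoint constructor and the main method here are illustrative, not from the question; registration order doubles as priority):

import java.util.List;

public final class EndpointBootstrap {

    interface FunctionProvider {
        boolean matchesRequest(Request req);
        void processRequest(Request req);
    }

    static final class Request { /* request fields elided */ }

    static final class Endpoint {
        // Providers are tried in list order, so the first match wins:
        // the order chosen at init time is the priority.
        private final List<FunctionProvider> providers;

        Endpoint(List<FunctionProvider> providers) {
            this.providers = List.copyOf(providers);
        }

        void doPost(Request request) {
            providers.stream()
                    .filter(p -> p.matchesRequest(request))
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("no provider matched"))
                    .processRequest(request);
        }
    }

    public static void main(String[] args) {
        // Application init: the single place where providers are assembled,
        // instead of each provider registering itself from a static block.
        Endpoint endpoint = new Endpoint(List.of(/* new MyFunc(), new OtherFunc() */));
    }
}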
Related
Our application calls many external APIs which take the current user's session token as input. What we currently do is: in a controller, get the session token for the user and pass it into a service, which in turn might call another service or some API client. To give an idea, we end up with something like this (the example is .NET, but something similar should be possible in Java):
public IActionResult DoSomething(string something)
{
    this.someService.DoSomethingForUser(this.HttpContext.SessionToken, something);
    return View();
}
And then we have
public class SomeService
{
    private readonly IApiClient apiClient;

    public SomeService(IApiClient apiClient)
    {
        this.apiClient = apiClient;
    }

    public void DoSomethingForUser(string sessionToken, string something)
    {
        this.apiClient.DoSomethingForUser(sessionToken, something);
    }
}
It can also happen that another service is injected into SomeService, which in turn calls the IApiClient instead of SomeService calling IApiClient directly, basically adding another "layer".
We had a discussion within the team about whether, instead of passing the session token around, it would be better to inject it using DI, so you get something like this:
public IActionResult DoSomething(string something)
{
    this.someService.DoSomethingForUser(something);
    return View();
}
And then we have
public class SomeService
{
    private readonly IUserService userService;
    private readonly IApiClient apiClient;

    public SomeService(IUserService userService, IApiClient apiClient)
    {
        this.userService = userService;
        this.apiClient = apiClient;
    }

    public void DoSomethingForUser(string something)
    {
        this.apiClient.DoSomethingForUser(userService.SessionToken, something);
    }
}
The IUserService would have an IHttpContextAccessor injected:
public class UserService : IUserService
{
    private readonly IHttpContextAccessor httpContextAccessor;

    public UserService(IHttpContextAccessor httpContextAccessor)
    {
        this.httpContextAccessor = httpContextAccessor;
    }

    public string SessionToken => httpContextAccessor.HttpContext.SessionToken;
}
The benefits of this pattern are I think pretty clear. Especially with many services, it keeps the code "cleaner" and you end up with less boilerplate code to pass a token around.
Still, I don't like it. To me the downsides of this pattern are more important than its benefit:
I like that passing the token in the methods is explicit: it is clear that the service needs some sort of authentication token for it to function. I'm not sure you can call it a side effect, but the fact that a session token is magically injected three layers deep is impossible to tell just by reading the code.
Unit testing is a bit more tedious if you have to mock the IUserService.
You run into problems when calling the service from another thread. These problems can be mitigated by injecting a different concrete implementation of IUserService that gets the token from somewhere else, but that feels like a chore.
To me it strongly feels like an anti-pattern, but apart from the arguments above it is mostly a feeling. There was a lot of discussion and not everybody was convinced that it was a bad idea. Therefore, my question is: is it an anti-pattern, or is it perfectly valid? What are some strong arguments for and against it, hopefully so that there can't be much debate that this pattern is, indeed, either perfectly valid or something to avoid.
I would say the main point is to enable your desired separation of concerns. I think it is a good question if expressed in those terms. As Kit says, different people may prefer different solutions.
REQUEST SCOPED OBJECTS
These occur quite naturally in APIs. Consider the following example, where a UI calls an Orders API, then the Orders API forwards the JWT to an upstream Billing API. A unique Request ID is also sent, in case the flow experiences a temporary problem. If the flow is retried, the Request ID can be used by APIs to prevent data duplication. Yet business logic should not need to know about either the Request ID or the JWT.
BUSINESS LOGIC CLASS DESIGN
I would start by designing my logic classes with my desired inputs, then work out the DI later. In my example the OrderService class might use claims to get the user identity and also for authorization. But I would not want it to know about HTTP level concerns:
public class OrderService
{
    private readonly IBillingApiClient billingClient;
    private readonly ClaimsPrincipal user;

    public OrderService(IBillingApiClient billingClient, ClaimsPrincipal user)
    {
        this.billingClient = billingClient;
        this.user = user;
    }

    public async Task CreateOrder(OrderInput data)
    {
        this.Authorize();
        var order = this.BuildOrder(data);
        await this.billingClient.CreateInvoice(order);
    }

    // Authorize and BuildOrder helpers elided
}
DI SETUP
To enable my preferred business logic, I would write a little DI plumbing, so that I could inject request scoped dependencies in my preferred way. First, when the app starts, I would create a small middleware class. This will run early in the HTTP request pipeline:
private void ConfigureApiMiddleware(IApplicationBuilder api)
{
    api.UseMiddleware<ClientContextMiddleware>();
}
In the middleware class I would then create a ClientContext object from runtime data. The OrderService class will run later, after next() is called:
public class ClientContextMiddleware
{
    private readonly RequestDelegate next;

    public ClientContextMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var jwt = ReadJwt(context.Request);
        var requestId = ReadRequestId(context.Request);

        var holder = context.RequestServices.GetService<ClientContextHolder>();
        holder.ClientContext = new ClientContext(jwt, requestId);

        await this.next(context);
    }

    // ReadJwt and ReadRequestId helpers elided
}
In my DI composition at application startup I would express that the API client should be created when it is first referenced. In the HTTP request pipeline, the OrderService request scoped object will be constructed after the middleware has run. The below lambda will then be invoked:
private void RegisterDependencies(IServiceCollection services)
{
    services.AddScoped<IApiClient>(
        ctx =>
        {
            // The holder is a mutable wrapper whose ClientContext was set by the middleware.
            var holder = ctx.GetService<ClientContextHolder>();
            return new ApiClient(holder.ClientContext);
        });

    services.AddScoped<ClientContextHolder>();
}
The holder object is only there due to a technology limitation: the MS stack does not allow you to create new request-scoped injectable objects at runtime, so you have to update an existing one. In a previous .NET tech stack, the concept of a child container per request was available to developers, so the holder object was not needed.
ASYNC AWAIT
Request scoped objects are stored against the HTTP request object, which is the correct behaviour when using async await. The current thread ID may switch, e.g. from 4 to 6, after the call to the Billing API.
If the OrderService class has a transient scope, it could get recreated when the flow resumes on thread 6. Even if that happens, resolution will continue to work, because the scoped objects are tied to the request rather than to the thread.
SUMMARY
Designing inputs first, then writing some support code if needed is a good approach I think, and it is also useful to know the DI techniques. Personally I think natural request scoped objects that need to be created at runtime should be usable in DI. Some people may prefer a different approach though.
In .NET, the area where I'm an expert, this is not against the standard; on the contrary, it is the model that many adopt. It is still not a model I would follow, for the following reasons:
It is not clear to someone reading and using the code where the token comes from, which works against clean code.
You load important information into a place that is frequently accessed by the framework itself (the HttpContext, in the case of .NET Core).
Your classes end up referencing a large object that carries a lot of unnecessary information, when you could have created a leaner model that costs less memory and allocation time; I say this because the IHttpContextAccessor carries all the information relevant to your request.
Here is how I would take care of readability (clean code) and improve performance:
I would write a middleware or filter in my MVC flow that does the authentication part and populates a class like:
public class TokenAuthenticationValues
{
    public string TokenClient { get; set; }
    public string TokenValue { get; set; }
}
Of course my class is just an example, but in my middleware I would populate its token values after calling the necessary APIs (this model needs an interface, and it needs to be registered as scoped, via .AddScoped() in the case of .NET). A rough Java/Spring analogue of the same idea is sketched below.
That way I would use it in my methods by only injecting ITokenAuthenticationValues in the constructor, and I would have clear, clean information loaded in memory for the duration of the request.
If the token needs to change in the middle of the request, any class can access the object and change its value.
And I would have less unused memory allocated in my classes, since unlike IHttpContextAccessor, the ITokenAuthenticationValues contract only carries relevant information.
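Since the original question mentions Java, here is a minimal Spring sketch of the same pattern, assuming standard Spring Web APIs; the class names and the header used as the token source are illustrative. A request-scoped bean plays the role of .AddScoped(), and a filter populates it early in the request:

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.context.annotation.RequestScope;
import org.springframework.web.filter.OncePerRequestFilter;

// One instance per HTTP request, analogous to services.AddScoped<...>() in .NET.
@Component
@RequestScope
class TokenAuthenticationValues {
    private String tokenValue;

    public String getTokenValue() { return tokenValue; }
    public void setTokenValue(String tokenValue) { this.tokenValue = tokenValue; }
}

// Runs once per request and loads the token before controllers and services execute.
@Component
class TokenAuthenticationFilter extends OncePerRequestFilter {
    private final TokenAuthenticationValues values;

    TokenAuthenticationFilter(TokenAuthenticationValues values) {
        this.values = values; // injected as a scoped proxy
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        values.setTokenValue(request.getHeader("Authorization")); // illustrative token source
        chain.doFilter(request, response);
    }
}

Any service can then take TokenAuthenticationValues in its constructor instead of the full context accessor.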
Hope this helps
I have a little knot in my brain about structuring our code. We have a REST backend based on Spring Boot. To handle security checks on requests we use HandlerInterceptors. In some specific cases we need a specific interceptor instead of our default one. The default one is registered in a 3rd-party lib, so no one can forget it. But I want all coders to have to think about this specific interceptor.
Right now, I have simply told them to do so.
Here's my question: is there a way to declare required (or necessary) interfaces which must be implemented? This would be a way to ship our security code in a lib and to be sure that every coder implemented our specific interface (even if their implementation just does nothing).
pseudo code:
public interface ThinkForIt {
    SecBean specificSecBean = ...; // some security bean

    void methodToThinkOn();
}

public class SecImpl implements ThinkForIt {
    @Override
    public void methodToThinkOn() {
        // I thought about it, but I do not need to do anything!
    }
}
If the interface ThinkForIt had an annotation like @Required, users could get a warning or error if they did not implement it...
Looking for a solution and thanks for your comments in advance!
Your overall design is questionable; you are reinventing security code, which is always a red flag. Use Spring Security instead.
However, there's a simple way to ensure that "some bean of type Foo" has been registered with the context:
@Component
@RequiredArgsConstructor
public class ContextConfigurationVerifier {
    // Context startup fails fast if no bean of type Foo is registered.
    final Foo required;
}
Here is a snippet of the interesting case:
We have a configuration class that can have multiple instances. Suppose that we supply several configurations in one bundle; it is all one scope.
@Service
@Component
public class SampleConfigurationImpl implements SampleConfiguration {
    // declaration of some properties, init method, etc...
}
Also we have a service which uses these configurations:
@Service
@Component
public class SampleServiceImpl implements SampleService {

    @Reference(
            referenceInterface = SampleConfiguration.class,
            cardinality = ReferenceCardinality.OPTIONAL_MULTIPLE,
            policy = ReferencePolicy.DYNAMIC)
    private Map<String, SampleConfiguration> sampleConfigurations = new ConcurrentHashMap<>();

    private void bindSampleConfigurations(SampleConfiguration sampleConfiguration) {
        sampleConfigurations.put(sampleConfiguration.getName(), sampleConfiguration);
    }

    private void unbindSampleConfigurations(SampleConfiguration sampleConfiguration) {
        sampleConfigurations.remove(sampleConfiguration.getName());
    }

    @Activate
    private void init() {
        System.out.println(sampleConfigurations.size());
    }
}
So, can I get some guarantee that at the moment the init method is invoked, all configurations (at least those of the current bundle) have been injected? Maybe there is some alternative way to do this. I understand that other bundles can bring new configurations and it is unrealistic to expect guarantees there, but it is interesting for the case of a single bundle.
In practice it can happen that only part of the configurations are present in the init method. This is especially true in the more difficult case where you have several types of configuration, or where one service uses another that has dynamic references and the first service relies on everything being injected.
The most unpleasant part is that configurations can be bound/unbound both before and after the init method.
Maybe there is some way to guarantee that binding always happens after the init method...
I'm interested in any information. It would be great to get an answer for both variants (guarantees before, or after). Perhaps someone has experience resolving this problem and can share it with me.
Thanks.
No, not that I know of. What I usually do in that case (depending on your use case, and on whether your activation code is OK with running multiple times) is to create a 'reallyActivate' method that I call both from the regular activate method and from bindSampleConfigurations (plus setting an isActivated flag in activate). That way I can perform some logic every time a new SampleConfiguration gets bound, even if it happens after activation. Does that help for your case?
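A minimal sketch of that pattern, reusing the names from the question; the key constraint is that reallyActivate must be idempotent, since it can run once per late-bound configuration:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Service
@Component
public class SampleServiceImpl implements SampleService {

    @Reference(
            referenceInterface = SampleConfiguration.class,
            cardinality = ReferenceCardinality.OPTIONAL_MULTIPLE,
            policy = ReferencePolicy.DYNAMIC)
    private Map<String, SampleConfiguration> sampleConfigurations = new ConcurrentHashMap<>();

    private volatile boolean activated;

    private void bindSampleConfigurations(SampleConfiguration cfg) {
        sampleConfigurations.put(cfg.getName(), cfg);
        if (activated) {
            reallyActivate(); // a configuration arrived after activation: re-run
        }
    }

    private void unbindSampleConfigurations(SampleConfiguration cfg) {
        sampleConfigurations.remove(cfg.getName());
    }

    @Activate
    private void init() {
        activated = true;
        reallyActivate();
    }

    // Idempotent activation logic that works with whatever is currently bound.
    private void reallyActivate() {
        // e.g. (re)build internal state from sampleConfigurations
    }
}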
I'm new to Akka and I'm trying Akka with Java. I'd like to understand unit testing of business logic within actors. I read the documentation and the only example of isolated business logic within an actor is:
static class MyActor extends UntypedActor {
    public void onReceive(Object o) throws Exception {
        if (o.equals("say42")) {
            getSender().tell(42, getSelf());
        } else if (o instanceof Exception) {
            throw (Exception) o;
        }
    }

    public boolean testMe() { return true; }
}
@Test
public void demonstrateTestActorRef() {
    final Props props = Props.create(MyActor.class);
    final TestActorRef<MyActor> ref = TestActorRef.create(system, props, "testA");
    final MyActor actor = ref.underlyingActor();
    assertTrue(actor.testMe());
}
While this is simple, it implies that the method I want to test is public. However, considering that actors should communicate only via messages, my understanding is that there is no reason to have public methods, so I made my method private. Like in the example below:
public class LogRowParser extends AbstractActor {
    private final Logger logger = LoggerFactory.getLogger(LogRowParser.class);

    public LogRowParser() {
        receive(ReceiveBuilder.
                match(LogRow.class, lr -> {
                    ParsedLog log = parse(lr.rowText);
                    final ActorRef logWriter = getContext().actorOf(Props.create(LogWriter.class));
                    logWriter.tell(log, self());
                }).
                matchAny(o -> logger.info("Unknown message")).build()
        );
    }

    private ParsedLog parse(String rowText) {
        // Log parsing logic
    }
}
So to test the method parse I could either:
need to make it package-private
or test the actor's public interface, i.e. that the next actor LogWriter received the correct parsed message from my actor LogRowParser
My questions:
Are there any downsides to option #1? Assuming that actors communicate only via messages, are encapsulation and clean open interfaces less important?
If I try option #2, is there a way to catch messages sent from the actor downstream in a test (testing LogRowParser and catching in LogWriter)? I reviewed various examples of JavaTestKit, but all of them catch messages that are responses back to the sender, and none show how to intercept a message sent to a new actor.
Is there another option that I'm missing?
Thanks!
UPD:
Forgot to mention that I also considered options like:
Moving the logic out of the actors completely into helper classes. Is that common practice with Akka?
Powermock... but I'm trying to avoid it if a redesign is possible.
There's really no good reason to make that method private. One generally makes a method on a class private to prevent someone who has a direct reference to an instance of that class from calling it. With an actor instance, no one will have a direct reference to an instance of that actor class. All you get to communicate with an instance of that actor class is an ActorRef, which is a lightweight proxy that only allows you to communicate by sending messages to be handled by onReceive via the mailbox. An ActorRef does not expose any internal state or methods of the actor class. That's one of the big selling points of an actor system: an actor instance completely encapsulates its internal state and methods, protecting them from the outside world, and only allows those internal things to change in response to receiving messages. That's why it does not seem necessary to mark that method as private.
Edit
Unit testing of an actor, IMO, should always go through the receive functionality. If you have some internal methods that are then called by the handling in receive, you should not focus on testing these methods in isolation but instead make sure that the paths that lead to their invocation are properly exercised via the messages that you pass during test scenarios.
In your particular example, parse produces a ParsedLog message that is then sent on to a logWriter child actor. For me, knowing that parse works as expected means asserting that the logWriter received the correct message. In order to do this, I would allow the creation of the child logWriter to be overridden, do just that in the test code, and replace the actor creation with a TestProbe. Then you can use expectMsg on that probe to make sure that it received the expected ParsedLog message, thus also testing the functionality in parse.
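A sketch of that approach against the question's LogRowParser, reusing the `system` from the earlier test snippet (the constructor-injected factory is one way to make child creation overridable; it is illustrative, as is the ParsedLog constructor):

import java.util.function.Function;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorRefFactory;
import akka.actor.Props;
import akka.japi.Creator;
import akka.japi.pf.ReceiveBuilder;
import akka.testkit.JavaTestKit;
import org.junit.Test;

public class LogRowParser extends AbstractActor {

    // Test seam: lets a test substitute a TestProbe for the real LogWriter child.
    private final Function<ActorRefFactory, ActorRef> logWriterMaker;

    public LogRowParser(Function<ActorRefFactory, ActorRef> logWriterMaker) {
        this.logWriterMaker = logWriterMaker;
        receive(ReceiveBuilder.
                match(LogRow.class, lr -> {
                    ParsedLog log = parse(lr.rowText);
                    ActorRef logWriter = logWriterMaker.apply(getContext());
                    logWriter.tell(log, self());
                }).build()
        );
    }

    private ParsedLog parse(String rowText) {
        // parsing logic as in the question
        return new ParsedLog(rowText); // assumes such a constructor exists
    }
}

@Test
public void parserSendsParsedLogToWriter() {
    new JavaTestKit(system) {{
        final JavaTestKit probe = new JavaTestKit(system);
        final ActorRef parser = system.actorOf(Props.create(LogRowParser.class,
                (Creator<LogRowParser>) () -> new LogRowParser(f -> probe.getRef())));

        parser.tell(new LogRow("some raw line"), getRef());

        // If parse() works, our probe (standing in for LogWriter) receives the result.
        probe.expectMsgClass(ParsedLog.class);
    }};
}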
As far as your other comment around moving the real business for the actor out into a separate and more testable class and then calling that from in the actor, some people do this, so it's not unheard of. I personally don't, but that's just me. If that approach works for you, I don't see any major issues with it.
I had the same problem three years ago when dealing with actors: the best approach I found was to give the actor the minimum responsibility, the messaging responsibility.
The actor receives the message and chooses the object's method to call, the message to send, or the exception to throw, and that's it.
This way it is very simple to mock both the services called by the actor and the input to those services.
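For instance, a "thin" actor along these lines, where LogParserService is an illustrative plain class holding the business logic, unit-testable without any actor machinery:

public class LogRowParser extends AbstractActor {

    private final LogParserService service; // plain object, easy to mock and test

    public LogRowParser(LogParserService service) {
        this.service = service;
        receive(ReceiveBuilder.
                // The actor only routes: unwrap the message, call the service, send the result.
                match(LogRow.class, lr -> sender().tell(service.parse(lr.rowText), self())).
                build()
        );
    }
}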
I was wondering how people with more experience and more complex projects get along with this "ugliness" in REST communication. Imagine the following problem:
We need a fair amount of functionality for one specific resource within our REST infrastructure; in my case that's about 50+ functions that result in different queries and different responses. I tried to think of a meaningful resource tree and assigned these to methods that will do "stuff". Afterwards, the server resource class looks like this:
#Path("/thisResource")
public class SomeResource {
#GET/POST/PUT/DELETE
#Path("meaningfulPath")
public Response resourceFunction1 ( ...lots of Params) {
... logic ....
}
//
// lots of functions ...
//
#GET/POST/PUT/DELETE
#Path("meaningfulPath")
public Response resourceFunctionN ( ...lots of Params) {
... logic ....
}
}
To construct the URLs my client will call, I made a little function to prevent typos and to make better use of constants, so my client looks like this:
public class Client {
    public ReturnType function1() {
        client.resource = ResourceClass.build(Constants.Resource, "meaningfulPath");
        // ...
        return response.getEntity(ReturnType.class);
    }
}
Now the question that bothers me is: how could I link the client function and the server function better?
The only connection between these two blocks of code is the URL that will be called by the client and mapped by the server, and when even this URL is generated somewhere else, that leads to a lot of confusion.
When one of my colleagues needs to get into this code, he has a hard time figuring out which of the 50+ client functions leads to which server function. It is also hard to determine whether there are obsolete functions in the code, etc. I guess most of you know the problems of unclean code better than I do.
How do you deal with this? How would you keep this code clean, maintainable and gorgeous?
Normally, this would be addressed by EJB or similar technologies.
Or at least by "real" web services, which provide at least a WSDL and schemas (with a kind of mapping to Java interfaces, or "ports").
But REST communication is very loosely typed and loosely structured.
The only thing I can think of is: define a project (let's call it "Definitions") which would be referenced (hence known) by both client and server. In this project you could define a class with a lot of public static final Strings, such as:
public static final String SOME_METHOD_NAME = "/someMethodName";
public static final String SOME_OTHER_METHOD_NAME = "/someOtherMethodName";
Note: a static final String can very well be referenced by an annotation (in that case it is considered constant by the compiler). So use the "constants" to annotate your @Path, such as:
@Path(Definitions.SOME_METHOD_NAME)
Same for the client:
ResourceClass.build(Constants.Resource, Definitions.SOME_METHOD_NAME);
You are missing the idea behind REST. What you are doing is not REST but RPC over HTTP. Generally you are not supposed to construct URLs using out-of-band knowledge; instead, you should follow links received in the responses from the server. Read about HATEOAS:
http://en.wikipedia.org/wiki/HATEOAS
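For illustration, JAX-RS 2.0 (which the resource classes above already use) lets the server attach such links to a response, so clients follow them instead of assembling URLs from shared constants; the URI and rel name here are made up:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.Response;

@Path("/thisResource")
public class SomeResource {

    @GET
    public Response overview() {
        // Advertise where the client can go next, instead of the client hard-coding it.
        Link details = Link.fromUri("/thisResource/meaningfulPath").rel("details").build();
        return Response.ok("resource overview").links(details).build();
    }
}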