Vertx add handler to Spring data API - java

In Vert.x, if we want to execute a JDBC operation without blocking the event loop, we use the following code:
client.getConnection(res -> {
    if (res.succeeded()) {
        SQLConnection connection = res.result();
        connection.query("SELECT * FROM some_table", res2 -> {
            if (res2.succeeded()) {
                ResultSet rs = res2.result();
                // Do something with results
            }
        });
    } else {
        // Failed to get connection - deal with it
    }
});
Here we add a handler that executes when the operation is done.
Now I want to use the Spring Data API in the same way as above.
Currently I use it as follows:
@Override
public void start() throws Exception {
    final EventBus eventBus = this.vertx.eventBus();
    eventBus.<String>consumer(Addresses.BEGIN_MATCH.asString(), handler -> {
        this.vertx.executeBlocking(future -> {
            final String body = handler.body();
            final JsonObject resJO = this.json.asJson(body);
            final int matchId = Integer.parseInt(resJO.getString("matchid"));
            this.matchService.beginMatch(matchId); // this service calls a CrudRepository method
            log.info("Match [{}] is started", matchId);
            future.complete();
        }, result -> {});
    });
}
Here I used executeBlocking, but it uses a thread from the worker pool. Is there any alternative way to wrap blocking code?

To answer the question: the need for the executeBlocking method goes away if:
You run multiple instances of your verticle in separate processes (using systemd, Docker, or anything else that lets you run independent Java processes safely with recovery), all listening on the same event bus channel in cluster mode (with Hazelcast, for example).
You run multiple instances of your verticle as worker verticles, as suggested by tsegismont in a comment on this answer (see the sketch below).
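For the second option, here is a minimal sketch of a worker deployment. It assumes the consumer above lives in a verticle class called MatchVerticle (a made-up name); worker verticles run on the worker pool, so their handlers may block without stalling the event loop:

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Worker verticles are executed on the worker pool, so blocking
        // calls (e.g. CrudRepository methods) do not freeze the event loop.
        DeploymentOptions options = new DeploymentOptions()
                .setWorker(true)
                .setInstances(4); // several instances consume messages in parallel
        vertx.deployVerticle(MatchVerticle.class.getName(), options);
    }
}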
Also, it's not related to the question and it's really a personal opinion, but I'll give it anyway: I think it's a bad idea to use Spring dependencies inside a Vert.x application. Spring is relevant for Servlet-based applications using at least Spring Core, that is, in an ecosystem totally based on Spring. Otherwise you'll pull a lot of big, unused dependencies into your jar files.
For almost every Spring module there is a small, lighter, independent library with the same purpose. For IoC, for example, you have Guice, HK2, Weld...
Personally, if I needed a SQL database, I'd take inspiration from Spring's JdbcTemplate and RowMapper model without using any Spring dependency. It's pretty simple to reproduce with a simple interface like this:
import java.io.Serializable;
import java.sql.ResultSet;
import java.sql.SQLException;

public interface RowMapper<T extends Serializable> {
    T map(ResultSet rs) throws SQLException;
}
And another interface, DatabaseProcessor, with a method like this:
<T extends Serializable> List<T> execute(String query, List<QueryParam> params, RowMapper<T> rowMapper) throws SQLException;
And a QueryParam class holding the value, order, and name of your query parameters (to avoid SQL injection vulnerabilities).
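A rough sketch of a JDBC-backed implementation of that DatabaseProcessor contract could look like this (I'm assuming QueryParam exposes getOrder() and getValue() accessors, which is my own guess at its shape):

import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class JdbcDatabaseProcessor {
    private final DataSource dataSource;

    public JdbcDatabaseProcessor(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public <T extends Serializable> List<T> execute(String query, List<QueryParam> params,
            RowMapper<T> rowMapper) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             PreparedStatement stmt = connection.prepareStatement(query)) {
            // Bind each parameter at its declared position (1-based in JDBC).
            for (QueryParam param : params) {
                stmt.setObject(param.getOrder(), param.getValue());
            }
            try (ResultSet rs = stmt.executeQuery()) {
                List<T> results = new ArrayList<>();
                while (rs.next()) {
                    results.add(rowMapper.map(rs));
                }
                return results;
            }
        }
    }
}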

Related

Dependency injection of IHttpContextAccessor vs passing parameter up the method chain

Our application calls many external APIs which take a session token of the current user as input. So what we currently do is, in a controller, get the session token for the user and pass it into a service, which in turn might call another service or some API client. To give an idea, we end up with something like this (the example is .NET, but something similar is, I think, possible in Java):
public IActionResult DoSomething(string something)
{
    this.someService.DoSomethingForUser(this.HttpContext.SessionToken, something);
    return View();
}
And then we have
public class SomeService
{
    private readonly IApiClient apiClient;

    public SomeService(IApiClient apiClient)
    {
        this.apiClient = apiClient;
    }

    public void DoSomethingForUser(string sessionToken, string something)
    {
        this.apiClient.DoSomethingForUser(sessionToken, something);
    }
}
It can also happen that another service is injected into SomeService, and that service calls the IApiClient instead of SomeService calling IApiClient directly, basically adding another "layer".
We had a discussion within the team about whether, instead of passing the session token around, it would be better to inject it using DI, so you get something like this:
public IActionResult DoSomething(string something)
{
    this.someService.DoSomethingForUser(something);
    return View();
}
And then we have
public class SomeService
{
    private readonly IUserService userService;
    private readonly IApiClient apiClient;

    public SomeService(IUserService userService, IApiClient apiClient)
    {
        this.userService = userService;
        this.apiClient = apiClient;
    }

    public void DoSomethingForUser(string something)
    {
        this.apiClient.DoSomethingForUser(userService.SessionToken, something);
    }
}
The IUserService would have an IHttpContextAccessor injected:
public class UserService : IUserService
{
    private readonly IHttpContextAccessor httpContextAccessor;

    public UserService(IHttpContextAccessor httpContextAccessor)
    {
        this.httpContextAccessor = httpContextAccessor;
    }

    public string SessionToken => httpContextAccessor.HttpContext.SessionToken;
}
The benefits of this pattern are, I think, pretty clear. Especially with many services, it keeps the code "cleaner" and you end up with less boilerplate code for passing a token around.
Still, I don't like it. To me, the downsides of this pattern outweigh its benefits:
I like that passing the token in the methods is explicit: it is clear that the service needs some sort of authentication token to function. I'm not sure you can call it a side effect, but the fact that a session token is magically injected three layers deep is impossible to tell just by reading the code.
Unit testing is a bit more tedious if you have to mock the IUserService.
You run into problems when calling this from another thread, e.g. calling SomeService from another thread. Although these problems can be mitigated by injecting a different concrete type of IUserService which gets the token from somewhere else, it feels like a chore.
To me it strongly feels like an anti-pattern, but apart from the arguments above it is mostly a feeling. There was a lot of discussion and not everybody was convinced it was a bad idea. Therefore, my question is: is it an anti-pattern, or is it perfectly valid? What are some strong arguments for and against it, so that hopefully there can be little debate that this pattern is, indeed, either perfectly valid or something to avoid.
I would say the main point is to enable your desired separation of concerns. I think it is a good question if expressed in those terms. As Kit says, different people may prefer different solutions.
REQUEST SCOPED OBJECTS
These occur quite naturally in APIs. Consider the following example, where a UI calls an Orders API, then the Orders API forwards the JWT to an upstream Billing API. A unique Request ID is also sent, in case the flow experiences a temporary problem. If the flow is retried, the Request ID can be used by APIs to prevent data duplication. Yet business logic should not need to know about either the Request ID or the JWT.
BUSINESS LOGIC CLASS DESIGN
I would start by designing my logic classes with my desired inputs, then work out the DI later. In my example the OrderService class might use claims to get the user identity and also for authorization. But I would not want it to know about HTTP level concerns:
public class OrderService
{
    private readonly IBillingApiClient billingClient;
    private readonly ClaimsPrincipal user;

    public OrderService(IBillingApiClient billingClient, ClaimsPrincipal user)
    {
        this.billingClient = billingClient;
        this.user = user;
    }

    public async Task CreateOrder(OrderInput data)
    {
        this.Authorize();
        var order = this.BuildOrder(data);
        await this.billingClient.CreateInvoice(order);
    }
}
DI SETUP
To enable my preferred business logic, I would write a little DI plumbing, so that I could inject request scoped dependencies in my preferred way. First, when the app starts, I would create a small middleware class. This will run early in the HTTP request pipeline:
private void ConfigureApiMiddleware(IApplicationBuilder api)
{
    api.UseMiddleware<ClientContextMiddleware>();
}
In the middleware class I would then create a ClientContext object from runtime data. The OrderService class will run later, after next() is called:
public class ClientContextMiddleware
{
    private readonly RequestDelegate next;

    public ClientContextMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var jwt = readJwt(context.Request);
        var requestId = readRequestId(context.Request);

        var holder = context.RequestServices.GetService<ClientContextHolder>();
        holder.ClientContext = new ClientContext(jwt, requestId);

        await this.next(context);
    }
}
In my DI composition at application startup I would express that the API client should be created when it is first referenced. In the HTTP request pipeline, the OrderService request scoped object will be constructed after the middleware has run. The below lambda will then be invoked:
private void RegisterDependencies(IServiceCollection services)
{
    services.AddScoped<IApiClient>(ctx =>
    {
        var holder = ctx.GetService<ClientContextHolder>();
        return new ApiClient(holder.ClientContext);
    });

    services.AddScoped<ClientContextHolder>();
}
The holder object is just due to a technology limitation. The MS stack does not allow you to create new request scoped injectable objects at runtime, so you have to update an existing one. In a previous .NET tech stack, the concept of child container per request was made available to developers, so the holder object was not needed.
ASYNC AWAIT
Request scoped objects are stored against the HTTP request object, which is the correct behaviour when using async await. The current thread ID may switch, e.g. from 4 to 6, after the call to the Billing API.
If the OrderService class has a transient scope, it could get recreated when the flow resumes on thread 6. If this is the case, then resolution will continue to work.
SUMMARY
Designing inputs first, then writing some support code if needed is a good approach I think, and it is also useful to know the DI techniques. Personally I think natural request scoped objects that need to be created at runtime should be usable in DI. Some people may prefer a different approach though.
In .NET, the area where I'm an expert, this is not an anti-pattern; on the contrary, it is the model many adopt. But it is not a model I would follow, for the following reasons:
It is not clear to someone reading and using the code where the token comes from, which works against clean code.
You load important information into a place that is frequently accessed by the framework (in the case of .NET Core).
Your classes end up referencing a large object that carries a lot of unnecessary information, when you could have created a cleaner model that costs less memory and allocation time; I say this because the IHttpContextAccessor carries all the information relevant to the request.
Here is how I would take care of readability (clean code) and improve performance:
I would create a middleware or filter in my MVC flow where I would do the authentication part and create a class like:
public class TokenAuthenticationValues
{
    public string TokenClient { get; set; }
    public string TokenValue { get; set; }
}
Of course my class is just an example, but in my middleware I would load its token values after calling the necessary APIs (this model needs an interface, and it needs to be registered as .AddScoped() in the case of .NET).
That way I would use it in my methods by only injecting ITokenAuthenticationValues in the constructor, and I would have clear, clean information loaded in memory during the entire request.
If it is necessary to change the token in the middle of the request, any class can access it and change its value.
I would have less unused memory allocated in my classes, since unlike the IHttpContextAccessor contract, ITokenAuthenticationValues only holds relevant information.
Hope this helps

Release of resources in AWS Lambda

I'm implementing an AWS Lambda function in Java and face the question of how to release used resources correctly. In my function I make calls to different resources: I execute queries against a DB, make REST calls to third-party services (send StatsD metrics, invoke Slack webhooks, etc.), and interact with a Kinesis stream.
Not going into details, my function looks like this:
public class RequestHandler {
    private StatisticsService statsService;        // Collect StatsD metrics
    private SlackNotificationService slackService; // Send Slack notifications
    private SearchService searchService;           // Interact with DB

    // Simplified version of constructor
    public RequestHandler() {
        this.statsService = new StatisticsService();
        this.slackService = new SlackNotificationService();
        this.searchService = new SearchService();
    }

    public LambdaResponse handleRequest(LambdaRequest request, Context context) {
        /**
         * Main method of the function,
         * where business logic is executed
         * and all the services mentioned are invoked.
         */
    }
}
And my main question is: where is it more correct to release the resources used by my services, at the end of the handleRequest() method (in which case I'd have to open them all again on every invocation of the Lambda function), or in the finalize() method of the RequestHandler class?
According to the Lambda best practices, you should:
Keep alive and reuse connections (HTTP, database, etc.) that were established during a previous invocation.
So your current code is right.
Regarding the finalize() method, I don't think it is relevant. The Lambda execution context will be deleted at some point, automatically freeing every open resource.
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html#function-code
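The reuse works because Lambda keeps the same RequestHandler instance alive between warm invocations of a container. A sketch making that explicit, using the (hypothetical) service classes from the question; static fields are initialized once per container, which is equivalent in effect to the constructor above:

public class RequestHandler {
    // Initialized once when the container starts; reused by every
    // warm invocation, so connections stay alive between calls.
    private static final StatisticsService STATS_SERVICE = new StatisticsService();
    private static final SlackNotificationService SLACK_SERVICE = new SlackNotificationService();
    private static final SearchService SEARCH_SERVICE = new SearchService();

    public LambdaResponse handleRequest(LambdaRequest request, Context context) {
        // Use the shared services here; do not close them at the end of the
        // invocation. Rely on container teardown to release the connections.
        return new LambdaResponse();
    }
}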

Commit hook in JOOQ

I've been using JOOQ in backend web services for a while now. In many of these services, after persisting data to the database (or better said, after successfully committing data), we usually want to write some messages to Kafka about the persisted records so that other services know of these events.
What I'm essentially looking for is: Is there a way for me to register a post-commit hook or callback with JOOQ's DSLContext object, so I can run some code when a transaction successfully commits?
I'm aware of the ExecuteListener and ExecuteListenerProvider interfaces, but as far as I can tell the void end(ExecuteContext ctx) method (which is supposedly for end-of-lifecycle uses) is not called when the transaction commits. It is called after every query, though.
Here's an example:
public static void main(String[] args) throws Throwable {
    Class.forName("org.postgresql.Driver");
    Connection connection = DriverManager.getConnection("<url>", "<user>", "<pass>");
    connection.setAutoCommit(false);

    DSLContext context = DSL.using(connection, SQLDialect.POSTGRES_9_5);
    context.transaction(conf -> {
        conf.set(new DefaultExecuteListenerProvider(new DefaultExecuteListener() {
            @Override
            public void end(ExecuteContext ctx) {
                System.out.println("End method triggered.");
            }
        }));

        DSLContext innerContext = DSL.using(conf);
        System.out.println("Pre insert.");
        innerContext.insertInto(...).execute();
        System.out.println("Post insert.");
    });

    connection.close();
}
Which always seems to print:
Pre insert.
End method triggered.
Post insert.
Making me believe this is not intended for commit hooks.
Is there perhaps a JOOQ guru that can tell me if there is support for commit hooks in JOOQ? And if so, point me in the right direction?
The ExecuteListener SPI is listening to the lifecycle of a single query execution, i.e. of this:
innerContext.insertInto(...).execute();
This isn't what you're looking for. Instead, you should implement your own TransactionProvider (possibly delegating to jOOQ's DefaultTransactionProvider). You can then implement any logic you want prior to the actual commit logic.
Note that jOOQ 3.9 will also provide a new TransactionListener SPI (see #5378) to facilitate this.
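A sketch of that delegation idea (the class name and the Runnable callback are my own illustration, not a jOOQ API): a wrapper that forwards to an existing TransactionProvider and runs a hook only when the delegate's commit returns without throwing.

import org.jooq.TransactionContext;
import org.jooq.TransactionProvider;
import org.jooq.exception.DataAccessException;

public class CommitHookTransactionProvider implements TransactionProvider {
    private final TransactionProvider delegate;
    private final Runnable afterCommit;

    public CommitHookTransactionProvider(TransactionProvider delegate, Runnable afterCommit) {
        this.delegate = delegate;
        this.afterCommit = afterCommit;
    }

    @Override
    public void begin(TransactionContext ctx) throws DataAccessException {
        delegate.begin(ctx);
    }

    @Override
    public void commit(TransactionContext ctx) throws DataAccessException {
        delegate.commit(ctx);
        // Only reached when the commit did not throw, i.e. the transaction
        // really committed - a good place to publish the Kafka messages.
        afterCommit.run();
    }

    @Override
    public void rollback(TransactionContext ctx) throws DataAccessException {
        delegate.rollback(ctx);
    }
}

Be aware that DefaultTransactionProvider implements nested transactions with savepoints, so a naive hook like this would also fire when a nested transaction block completes; real code would need to track nesting depth.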

Asynchronous multiple query from different datasources or databases

I'm having trouble finding an appropriate solution for this:
I have several databases with the same structure but with different data. When my web app executes a query, it must fan that query out to each database, execute it asynchronously against each one, and then aggregate the results from all databases and return them as a single result. Additionally, I want to be able to pass a list of databases the query should be executed against, and a maximum expiration time for query execution. The result must also contain meta-information for each database, such as how long the query took there.
It would be great if it were possible to use other data sources as well, such as a remote web service with a specific API, rather than only relational databases.
I use Spring/Grails and need a Java solution, but I will be glad of any advice.
UPD: I want to find a ready-made solution, maybe a framework or something like that.
This is basic OO. You need to abstract what you are trying to achieve - loading data - from the mechanism you are using to achieve it - a database query or a web-service call.
Such a design would usually involve an interface that defines the contract of what can be done and then multiple implementing classes that make it happen according to their implementation.
For example, you'd end up with an interface that looked something like:
public interface DataLoader
{
    public Collection<Data> loadData() throws DataLoaderException;
}
You would then have implementations like JdbcDataLoader, WebServiceDataLoader, etc. In your case you would need another type of implementation that, given one or more instances of DataLoader, runs each simultaneously and aggregates the results. This implementation would look something like:
public class AggregatingDataLoader implements DataLoader
{
    private Collection<DataLoader> dataLoaders;
    private ExecutorService executorService;

    public AggregatingDataLoader(ExecutorService executorService, Collection<DataLoader> dataLoaders)
    {
        this.executorService = executorService;
        this.dataLoaders = dataLoaders;
    }

    public Collection<Data> loadData() throws DataLoaderException
    {
        Collection<DataLoaderCallable> dataLoaderCallables = new ArrayList<DataLoaderCallable>();
        for (DataLoader dataLoader : dataLoaders)
        {
            dataLoaderCallables.add(new DataLoaderCallable(dataLoader));
        }

        try
        {
            List<Future<Collection<Data>>> futures = executorService.invokeAll(dataLoaderCallables);

            Collection<Data> data = new ArrayList<Data>();
            for (Future<Collection<Data>> future : futures)
            {
                data.addAll(future.get());
            }
            return data;
        }
        catch (InterruptedException | ExecutionException e)
        {
            throw new DataLoaderException(e);
        }
    }

    private class DataLoaderCallable implements Callable<Collection<Data>>
    {
        private DataLoader dataLoader;

        public DataLoaderCallable(DataLoader dataLoader)
        {
            this.dataLoader = dataLoader;
        }

        public Collection<Data> call() throws DataLoaderException
        {
            return dataLoader.loadData();
        }
    }
}
You'll need to add some timeout and exception handling logic to this (see the sketch below), but you get the gist.
The other important thing is that your calling code should only ever use the DataLoader interface, so that you can swap different implementations in and out or use mocks during testing.
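For the timeout part, ExecutorService.invokeAll has an overload that takes a deadline for the whole batch; a sketch of how the middle of loadData() above could use it (the two-second limit is an arbitrary example):

// Give the whole batch at most two seconds; tasks that have not
// finished by then are cancelled by the executor.
List<Future<Collection<Data>>> futures =
        executorService.invokeAll(dataLoaderCallables, 2, TimeUnit.SECONDS);

Collection<Data> data = new ArrayList<Data>();
for (Future<Collection<Data>> future : futures)
{
    if (!future.isCancelled()) // skip loaders that hit the deadline
    {
        data.addAll(future.get());
    }
}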

Retry Task Framework

I have a number of situations where I need to retry a task n times if it fails (sometimes with some form of back-off-before-retry logic). Generally, if an exception is thrown, the task should be retried up to the max retry count.
I can easily write something to do this fairly generically, but not wanting to reinvent the wheel, I was wondering if anyone can recommend any frameworks for this. The only thing I have been able to find is Ant Retry, but I don't want to use Ant tasks directly in my application.
Thanks
Check out Failsafe (which I authored). It's a simple, zero-dependency library for performing retries, and it supports synchronous and asynchronous retries, Java 8 integration, event listeners, integration with other async APIs, etc.:
RetryPolicy retryPolicy = new RetryPolicy()
    .handle(ConnectException.class, SocketException.class)
    .withMaxRetries(3);

Connection connection = Failsafe.with(retryPolicy).get(() -> connect());
Doesn't get much easier.
You can use RetriableTasks as outlined in this post: Retrying Operations in Java. You can quite easily change its waiting algorithm if you like.
Sample code:
// creates a task which will retry 3 times with an interval of 5 seconds
RetriableTask r = new RetriableTask(3, 5000, new Callable() {
    public Object call() throws Exception {
        // put your code here
    }
});
r.call();
If you use Spring:
// import the necessary classes
import org.springframework.batch.retry.RetryCallback;
import org.springframework.batch.retry.RetryContext;
import org.springframework.batch.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.batch.retry.policy.SimpleRetryPolicy;
import org.springframework.batch.retry.support.RetryTemplate;
...

// create the retry template
final RetryTemplate template = new RetryTemplate();
template.setRetryPolicy(new SimpleRetryPolicy(5));
final ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(1000L);
template.setBackOffPolicy(backOffPolicy);

// execute the operation using the retry template
template.execute(new RetryCallback<Remote>() {
    @Override
    public Remote doWithRetry(final RetryContext context) throws Exception {
        return (Remote) Naming.lookup("rmi://somehost:2106/MyApp");
    }
});
Original blog post
If you are using Spring, it is very simple with the Spring Retry library.
Spring Retry is now a standalone library (earlier it was part of Spring Batch).
Step 1: Add the Spring Retry dependency.
<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
    <version>1.1.2.RELEASE</version>
</dependency>
Step 2: Add the @EnableRetry annotation to the class containing your application's main() method, or to any of your @Configuration classes.
Step 3: Add the @Retryable annotation to the method you want to retry/call again in case of exceptions.
@Retryable(maxAttempts = 5, backoff = @Backoff(delay = 3000))
public void retrySomething() throws Exception {
    logger.info("retrySomething() is called");
    throw new SQLException();
}
This @Retryable annotation will retry/call retrySomething() up to 5 times (including the 1st failure).
The current thread will wait 3000 ms, or 3 seconds, between retries.
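If you also need a fallback once all attempts are exhausted, Spring Retry provides the @Recover annotation; a minimal sketch matching the method above (the method name is arbitrary):

@Recover
public void recover(SQLException e) {
    // Invoked after the 5th attempt at retrySomething() fails with SQLException.
    logger.info("all retries failed", e);
}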
I gave one answer already, but that was three years ago, and I have to add that I now absolutely love the guava-retrying project. Let me just show you the code.
Callable<Boolean> callable = new Callable<Boolean>() {
    public Boolean call() throws Exception {
        return true; // do something useful here
    }
};

Retryer<Boolean> retryer = RetryerBuilder.<Boolean>newBuilder()
        .retryIfResult(Predicates.<Boolean>isNull())
        .retryIfExceptionOfType(IOException.class)
        .retryIfRuntimeException()
        .withStopStrategy(StopStrategies.stopAfterAttempt(3))
        .build();

try {
    retryer.call(callable);
} catch (RetryException e) {
    e.printStackTrace();
} catch (ExecutionException e) {
    e.printStackTrace();
}
One option to factor this out of your codebase is to use the command pattern between the components of your application.
Once you turn a call to a business method into an object, you can hand the call around easily and have an abstract RetryHandler that takes a command and retries it. This should be independent of the actual call and reusable.
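A minimal sketch of such a handler, using Callable as the command object (the class and its fixed back-off are my own illustration):

import java.util.concurrent.Callable;

public class RetryHandler {
    private final int maxRetries;
    private final long backoffMillis;

    public RetryHandler(int maxRetries, long backoffMillis) {
        this.maxRetries = maxRetries;
        this.backoffMillis = backoffMillis;
    }

    public <T> T execute(Callable<T> command) throws Exception {
        Exception last = null;
        // One initial attempt plus maxRetries retries.
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return command.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(backoffMillis); // simple fixed back-off between attempts
            }
        }
        throw last; // every attempt failed; surface the last failure
    }
}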
I have implemented a pretty flexible retry utility here
You can retry any Callable with:
public static <T> T executeWithRetry(final Callable<T> what, final int nrImmediateRetries,
        final int nrTotalRetries, final int retryWaitMillis, final int timeoutMillis,
        final Predicate<? super T> retryOnReturnVal, final Predicate<Exception> retryOnException)
with immediate and delayed retries, with a max timeout, and retrying based on decisions about either the returned value or the exception.
There are several other versions of this function with more or less flexibility.
I have also written an aspect that can be applied with annotations: Retry Aspect.
You can use Quartz. Look at this Stack Overflow answer.
