I have two SQS queues from which I want to listen to messages using two different message handlers. I am using Guice to bind and initialise the different consumers and message handlers. However, I am unable to bind each consumer to its corresponding handler.
Consumer - TransactionQueue, Handler - TransactionMessageHandler
Consumer - RetryQueue, Handler- RetryMessageHandler
com.google.inject.CreationException: Unable to create injector, see the following errors:

1) No implementation for com.amazon.aft.sqsmessageconsumer.messaging.sqs.SqsMessageHandler was bound.
  Did you mean?
    * com.amazon.aft.sqsmessageconsumer.messaging.sqs.SqsMessageHandler annotated with @com.google.inject.name.Named(value="TransactionMessageHandler")
    * com.amazon.aft.sqsmessageconsumer.messaging.sqs.SqsMessageHandler annotated with @com.google.inject.name.Named(value="RetryMessageHandler")
  while locating com.amazon.aft.sqsmessageconsumer.messaging.sqs.SqsMessageHandler
    for the 1st parameter of com.config.providers.ListenerProvider.<init>(TransactionConsumerProvider.java:23)
  at com.config.ListenerModule.configure(ListenerModule.java:26) (via modules: com.service.ServiceModule -> com.config.ListenerModule)
I have used the @Named annotation to differentiate the consumers and handlers. Here is what my bindings look like:
@Override
protected void configure() {
    bind(SqsConsumer.class).annotatedWith(Names.named("TransactionQueue"))
            .toProvider(TransactionConsumerProvider.class)
            .asEagerSingleton();
    bind(SqsMessageHandler.class).annotatedWith(Names.named("TransactionMessageHandler"))
            .toProvider(TransactionMessageHandlerProvider.class)
            .asEagerSingleton();
    bind(SqsConsumer.class).annotatedWith(Names.named("RetryQueue"))
            .toProvider(RetryConsumerProvider.class)
            .asEagerSingleton();
    bind(SqsMessageHandler.class).annotatedWith(Names.named("RetryMessageHandler"))
            .toProvider(RetryMessageHandlerProvider.class)
            .asEagerSingleton();
}
Here are the classes for the consumers and the handlers:
TransactionConsumerProvider.class
@AllArgsConstructor(onConstructor = @__(@Inject))
public final class TransactionConsumerProvider implements Provider<SqsConsumer> {

    @Named("TransactionMessageHandler")
    private final SqsMessageHandler sqsMessageHandler;

    @Override
    public SqsConsumer get() {
        // initializing the transaction SQS queue
        final SqsConsumer consumer = new SqsConsumer(
                "TransactionQueue",
                sqsClientBuilder,
                sqsMessageHandler,
                new SimpleMessageProcessingFailureHandler()
        );
        return consumer;
    }
}
TransactionMessageHandlerProvider.class
@AllArgsConstructor(onConstructor = @__(@Inject))
public final class TransactionMessageHandlerProvider implements Provider<SqsMessageHandler> {

    @Override
    public SqsMessageHandler get() {
        // Initialising TransactionMessageHandler
    }
}
RetryConsumerProvider.class
@AllArgsConstructor(onConstructor = @__(@Inject))
public final class RetryConsumerProvider implements Provider<SqsConsumer> {

    @Named("RetryMessageHandler")
    private final SqsMessageHandler sqsMessageHandler;

    @Override
    public SqsConsumer get() {
        // initializing retry SQS queue
    }
}
RetryMessageHandlerProvider.class
@AllArgsConstructor(onConstructor = @__(@Inject))
public final class RetryMessageHandlerProvider implements Provider<SqsMessageHandler> {

    @Override
    public SqsMessageHandler get() {
        // Initialising RetryMessageHandler
    }
}
I think I have used the com.google.inject.name.Names import correctly, but I am willing to try any new suggestions to make this work.
Thanks
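One thing worth checking, offered as an assumption since the Lombok setup isn't shown: Lombok's @AllArgsConstructor does not copy field-level annotations such as @Named onto the generated constructor parameters unless they are listed under lombok.copyableAnnotations, so Guice may see a plain, unqualified SqsMessageHandler parameter, which would produce exactly this error. A minimal sketch of the provider with a hand-written constructor, reusing the types and the sqsClientBuilder collaborator from the snippets above:

import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.name.Named;

public final class TransactionConsumerProvider implements Provider<SqsConsumer> {

    private final SqsMessageHandler sqsMessageHandler;

    // Hand-written constructor: the qualifier sits directly on the parameter
    // Guice inspects, so the binding named "TransactionMessageHandler" is used.
    @Inject
    public TransactionConsumerProvider(
            @Named("TransactionMessageHandler") SqsMessageHandler sqsMessageHandler) {
        this.sqsMessageHandler = sqsMessageHandler;
    }

    @Override
    public SqsConsumer get() {
        // same construction as in the original snippet
        return new SqsConsumer(
                "TransactionQueue",
                sqsClientBuilder,
                sqsMessageHandler,
                new SimpleMessageProcessingFailureHandler());
    }
}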
I am having trouble invoking a method asynchronously in Spring, when the invoker is an embedded library receiving notifications from an external system. The code looks as below:
@Service
public class DefaultNotificationProcessor implements NotificationProcessor {

    private NotificationClient client;

    @Override
    public void process(Notification notification) {
        processAsync(notification);
    }

    @PostConstruct
    public void startClient() {
        client = new NotificationClient(this, clientPort);
        client.start();
    }

    @PreDestroy
    public void stopClient() {
        client.stop();
    }

    @Async
    private void processAsync(Notification notification) {
        // Heavy processing
    }
}
The NotificationClient internally has a thread in which it receives notifications from another system. It accepts a NotificationProcessor in its constructor which is basically the object that will do the actual processing of notifications.
In the above code, I have passed the Spring bean as the processor and attempted to process the notification asynchronously by using the @Async annotation. However, it appears the notification is processed on the same thread as the one used by NotificationClient. Effectively, @Async is ignored.
What am I missing here?
@Async (as well as @Transactional and other similar annotations) will not work when the method is invoked via this (or when @Async is used on private methods*), as long as you do not use real AspectJ compile-time or load-time weaving.
*the private-method point is a special case: a private method can only be invoked via this, so it is more the consequence than the cause.
So change your code:
@Service
public class DefaultNotificationProcessor implements NotificationProcessor {

    @Resource
    private DefaultNotificationProcessor selfReference;

    @Override
    public void process(Notification notification) {
        selfReference.processAsync(notification);
    }

    // the method must not be private
    // and must be invoked via a bean reference
    @Async
    void processAsync(Notification notification) {
        // Heavy processing
    }
}
See also the answers for: Does Spring @Transactional attribute work on a private method? -- it is the same problem.
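One more thing worth verifying, stated as an assumption since the configuration classes are not shown: @Async has no effect unless async support is enabled; without @EnableAsync, Spring never creates the async proxy and the method runs on the caller's thread. A minimal sketch:

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;

// Without @EnableAsync (or <task:annotation-driven/> in XML configuration),
// @Async annotations are silently ignored and execution stays synchronous.
@Configuration
@EnableAsync
public class AsyncConfig {
}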
I'm new to Kafka and want to persist data from Kafka topics to database tables (each topic flows to a specific table). I know Kafka Connect exists and could be used to achieve this, but there are reasons why this approach is preferred.
Unfortunately, only one topic is written to the database at a time. Kafka does not seem to run all processors concurrently: either MyFirstData or MySecondData is written to the database, but never both at the same time.
From my reading, there is the option of overriding init() from the Kafka Streams Processor interface, which offers context.forward(); I am not sure whether this would help or how to use it in my use case.
I use Spring Cloud Stream (but I got the same behaviour with Kafka DSL and Processor API implementations).
My code snippet:
Configuring the consumers:
@Configuration
@RequiredArgsConstructor
public class DatabaseProcessorConfiguration {

    private final MyFirstDao myFirstDao;
    private final MySecondDao mySecondDao;

    @Bean
    public Consumer<KStream<GenericData.Record, GenericData.Record>> myFirstDbProcessor() {
        return stream -> stream.process(() -> new MyFirstDbProcessor(myFirstDao));
    }

    @Bean
    public Consumer<KStream<GenericRecord, GenericRecord>> mySecondDbProcessor() {
        return stream -> stream.process(() -> new MySecondDbProcessor(mySecondDao));
    }
}
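As an aside, a two-function setup like this is normally activated through spring.cloud.function.definition, with one input binding per function. The following sketch uses the default functional binding conventions of Spring Cloud Stream; the topic names are placeholders, not taken from the original configuration:

# Declare both function beans, separated by ';'
spring.cloud.function.definition=myFirstDbProcessor;mySecondDbProcessor

# Default binding names follow the <functionName>-in-<index> convention
spring.cloud.stream.bindings.myFirstDbProcessor-in-0.destination=my-first-topic
spring.cloud.stream.bindings.mySecondDbProcessor-in-0.destination=my-second-topic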
MyFirstDbProcessor and MySecondDbProcessor are implemented analogously; here is MyFirstDbProcessor:
@Slf4j
@RequiredArgsConstructor
public class MyFirstDbProcessor implements Processor<GenericData.Record, GenericData.Record, Void, Void> {

    private final MyFirstDao myFirstDao;

    @Override
    public void process(Record<GenericData.Record, GenericData.Record> record) {
        CdcRecordAdapter adapter = new CdcRecordAdapter(record.key(), record.value());
        MyFirstTopicKey myFirstTopicKey = adapter.getKeyAs(MyFirstTopicKey.class);
        MyFirstTopicValue myFirstTopicValue = adapter.getValueAs(MyFirstTopicValue.class);

        MyFirstData data = PersistenceMapper.map(myFirstTopicKey, myFirstTopicValue);
        switch (myFirstTopicValue.getCrudOperation()) {
            case UPDATE, INSERT -> myFirstDao.persist(data);
            case DELETE -> myFirstDao.delete(data);
            default -> System.err.println("unimplemented CDC operation streamed by kafka");
        }
    }
}
My DAO implementations: I tried implementing MyFirstRepository with both JpaRepository and ReactiveCrudRepository, but saw the same behaviour. MySecondRepository is implemented analogously to MyFirstRepository.
@Component
@RequiredArgsConstructor
public class MyFirstDaoImpl implements MyFirstDao {

    private final MyFirstRepository myFirstRepository;

    @Override
    public MyFirstData persist(MyFirstData myFirstData) {
        Optional<MyFirstData> dataOptional = myFirstRepository.findById(myFirstData.getId());
        if (dataOptional.isPresent()) {
            var data = dataOptional.get();
            myFirstData.setCreatedDate(data.getCreatedDate());
        }
        return myFirstRepository.save(myFirstData);
    }

    @Override
    public void delete(MyFirstData myFirstData) {
        System.out.println("delete() from transaction detail dao called");
        myFirstRepository.delete(myFirstData);
    }
}
I am trying to reuse a Guice module from a library which has multiple providers. I want to use a few providers from the library and provide a few in my own module.
Library Module -
public class LibraryModule extends AbstractModule {

    @Override
    protected void configure() {
    }

    @Provides
    @Singleton
    @Named("dbCredentials")
    private AWSCredentialsProvider getCredentialsProvider(@Named("app.keySet") String keySet) {
        return new SomeCredentialsProvider(keySet);
    }

    @Provides
    @Singleton
    private AmazonDynamoDB getDynamoDBClient(@Named("dbCredentials") AWSCredentialsProvider credentialsProvider,
                                             @Named("aws.region") String region) {
        return AmazonDynamoDBClientBuilder.standard()
                .withCredentials(credentialsProvider)
                .withRegion(region)
                .build();
    }

    @Provides
    @Singleton
    @Named("dbMapper")
    private DynamoDBMapper getDynamoDBMapper(AmazonDynamoDB dynamoDBClient) {
        return new DynamoDBMapper(dynamoDBClient);
    }

    // ... more providers using dbMapper
}
Now, where I use this module, I want to supply a different implementation of AmazonDynamoDB that uses the default credentials provider.
public class MyModule extends AbstractModule {

    @Override
    protected void configure() {
        bind(AmazonDynamoDB.class).toProvider(AmazonDynamoDBClientProvider.class).in(Singleton.class);
        install(new LibraryModule());
    }
}
AmazonDynamoDBClientProvider class -
public class AmazonDynamoDBClientProvider implements Provider<AmazonDynamoDB> {

    private final String region;

    @Inject
    public AmazonDynamoDBClientProvider(@Named("aws.region") String region) {
        this.region = region;
    }

    @Override
    public AmazonDynamoDB get() {
        return AmazonDynamoDBClientBuilder.standard()
                .withRegion(region)
                .build();
    }
}
But when I try this, it fails in the library module's provider while creating the AmazonDynamoDB client, saying: A binding to com.amazonaws.services.dynamodbv2.AmazonDynamoDB was already configured.
I wanted to know whether it is possible to omit providers for classes which have already been bound in the parent module. If yes, how do we do that? I was unable to find a solution for this problem.
If you are in a position to change LibraryModule, then you should give it the flexibility to bind either its default implementation or the one you're supplying:
class LibraryModule extends AbstractModule {

    // Default implementation based on the code you've shown in LibraryModule
    static class DefaultDynamoDBProvider implements Provider<AmazonDynamoDB> {
        private final AWSCredentialsProvider credentialsProvider;
        private final String region;

        @Inject
        DefaultDynamoDBProvider(
                @Named("dbCredentials") AWSCredentialsProvider credentialsProvider,
                @Named("aws.region") String region) {
            this.credentialsProvider = credentialsProvider;
            this.region = region;
        }

        @Override
        public AmazonDynamoDB get() {
            return AmazonDynamoDBClientBuilder
                    .standard()
                    .withCredentials(credentialsProvider)
                    .withRegion(region)
                    .build();
        }
    }

    private final Class<? extends Provider<AmazonDynamoDB>> dynamoDbProviderClass;

    // if constructed with this constructor, uses the default provider
    LibraryModule() {
        this(DefaultDynamoDBProvider.class);
    }

    // here, uses the supplied one. A builder pattern would be better
    LibraryModule(Class<? extends Provider<AmazonDynamoDB>> providerClass) {
        this.dynamoDbProviderClass = providerClass;
    }

    @Override
    protected void configure() {
        bind(AmazonDynamoDB.class)
                .toProvider(dynamoDbProviderClass)
                .in(Singleton.class);
    }
}
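With that change, the installing module passes its own provider class instead of declaring a competing binding. A short usage sketch, reusing the AmazonDynamoDBClientProvider from the question:

public class MyModule extends AbstractModule {
    @Override
    protected void configure() {
        // LibraryModule now owns the AmazonDynamoDB binding, so no duplicate
        // bind(AmazonDynamoDB.class) is declared here.
        install(new LibraryModule(AmazonDynamoDBClientProvider.class));
    }
}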
If however touching LibraryModule is impossible, take a look at
Overriding Binding in Guice, while keeping in mind that overriding bindings is an anti-pattern and should be avoided at all costs:
The Modules.override javadoc says "Prefer to write smaller modules that can be reused and tested without overrides," and supplies examples having to do with unit testing.
Guice bindings are hard to track down, but at least there's a promise that once you've found the binding, it's the right one. Modules.override() complicates this task further.
Modules are supposed to be independent of one another. In our scenario, someone may have installed LibraryModule and connected to Dynamo just fine, but once they also installed OrderProcessingModule (let's assume that's what you're writing), the connection broke. Now they have no recourse: it's either Dynamo or order processing, or yet another even-more-complicated override.
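For completeness, a minimal sketch of that override approach. It assumes MyModule binds only AmazonDynamoDB and no longer installs LibraryModule itself, otherwise the duplicate binding would remain:

import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.util.Modules;

public final class Bootstrap {
    public static void main(String[] args) {
        // On conflicting keys, bindings from the .with(...) module win over
        // the ones declared in LibraryModule.
        Injector injector = Guice.createInjector(
                Modules.override(new LibraryModule()).with(new MyModule()));
    }
}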
I've implemented a JAX-RS server application using Jersey 2.24.
I use the Guice-HK2 bridge so that the controller classes (those annotated with @Path) are injected with dependencies from Guice, not Jersey/HK2.
However, HK2 still creates instances of the @Path annotated classes itself.
Is there a way I can plug into Jersey/HK2 so that I'm notified when a @Path annotated class is created? Like some sort of lifecycle listener? Every time a @Path annotated class is created by Jersey/HK2, I want to do some registering/logging of that class.
If Guice were doing the actual creation of the @Path annotated class, I think I could do it using a generic Provider, but that's not available in this case, since Jersey/HK2 is creating the actual instance.
Thank you!!
I think the least intrusive way would be to just use AOP; HK2 offers AOP support. What you can do is create a ConstructorInterceptor, something like:
public class LoggingConstructorInterceptor implements ConstructorInterceptor {

    private static final Logger LOG
            = Logger.getLogger(LoggingConstructorInterceptor.class.getName());

    @Override
    public Object construct(ConstructorInvocation invocation) throws Throwable {
        Constructor<?> ctor = invocation.getConstructor();
        LOG.log(Level.INFO, "Creating: {0}", ctor.getDeclaringClass().getName());

        // returned instance from constructor invocation
        Object instance = invocation.proceed();
        LOG.log(Level.INFO, "Created Instance: {0}", instance.toString());

        return instance;
    }
}
Then create an InterceptionService that applies the interceptor only to classes annotated with @Path:
public class PathInterceptionService implements InterceptionService {

    private static final ConstructorInterceptor CTOR_INTERCEPTOR
            = new LoggingConstructorInterceptor();
    private static final List<ConstructorInterceptor> CTOR_LIST
            = Collections.singletonList(CTOR_INTERCEPTOR);

    @Override
    public Filter getDescriptorFilter() {
        return BuilderHelper.allFilter();
    }

    @Override
    public List<MethodInterceptor> getMethodInterceptors(Method method) {
        return null;
    }

    @Override
    public List<ConstructorInterceptor> getConstructorInterceptors(Constructor<?> ctor) {
        if (ctor.getDeclaringClass().isAnnotationPresent(Path.class)) {
            return CTOR_LIST;
        }
        return null;
    }
}
Then just register the InterceptionService and ConstructorInterceptor with the DI system:

new ResourceConfig()
        .register(new AbstractBinder() {
            @Override
            public void configure() {
                bind(PathInterceptionService.class)
                        .to(InterceptionService.class)
                        .in(Singleton.class);
                bind(LoggingConstructorInterceptor.class)
                        .to(ConstructorInterceptor.class)
                        .in(Singleton.class);
            }
        });
See complete example in this Gist
See Also:
HK2 documentation on AOP
I have a system (Java with the Spring Framework) that exposes 7 different Apache Thrift servlets over HTTP using the TServlet class. Currently they all need their own servlets, servlet mappings, processors, handlers etc., so implementing clients also have to keep an internal list of all the various URLs for the different services.
I understand that Apache Thrift supports multiplexing when using TServer and its derivatives by using TMultiplexedProcessor; however, since I am using Spring and my servlet, handler and processor are all Spring beans that get autowired into one another, I'm unsure how to proceed.
Here's an example of how one of the services gets wired up:
UserServiceHandler.java
@Component
public class UserServiceHandler implements UserService.Iface {

    @Override
    public User getUser(String userId) throws TException {
        // implementation logic goes here
    }
}
UserServiceProcessor.java
@Component
public class UserServiceProcessor extends UserService.Processor<UserServiceHandler> {

    private UserServiceHandler handler;

    @Autowired
    public UserServiceProcessor(UserServiceHandler iface) {
        super(iface);
        handler = iface;
    }

    public UserServiceHandler getHandler() {
        return handler;
    }

    public void setHandler(UserServiceHandler handler) {
        this.handler = handler;
    }
}
UserServiceServlet.java
@Component
public class UserServiceServlet extends TServlet {

    private UserServiceProcessor processor;

    @Autowired
    public UserServiceServlet(UserServiceProcessor p) {
        super(p, new TBinaryProtocol.Factory());
        processor = p;
    }
}
Servlet Registration
ServletRegistration.Dynamic userService = servletContext.addServlet(
        "UserServiceServlet", (UserServiceServlet) ctx.getBean("userServiceServlet"));
userService.setLoadOnStartup(1);
userService.addMapping("/api/UserService/*");

// This same block is repeated 7 times, once per *ServiceServlet, with different mappings
I would like to have all 7 service handlers map to a single URL like /api/*. Is this even possible? I suppose I would have to create a single servlet and processor, but I'm unsure what they should look like. My processors extend UserService.Processor and the like.
OK, figured it out. Might not be the best way, so I welcome criticism.
Here were my rough steps:
Keep the handler classes the way they were.
Create a new class that extends TMultiplexedProcessor.
Create a new class that extends TServlet.
Ensure all processors (e.g. UserServiceProcessor) have a handler property and a corresponding getter and setter.
Here is my ApiMultiplexingProcessor:
@Component
public class ApiMultiplexingProcessor extends TMultiplexedProcessor {

    UserServiceHandler userServiceHandler;
    ReportServiceHandler reportServiceHandler;
    // ... more service handlers can go here

    @Autowired
    public ApiMultiplexingProcessor(UserServiceProcessor userServiceProcessor, ReportServiceProcessor reportServiceProcessor) {
        this.registerProcessor("UserService", userServiceProcessor);
        this.registerProcessor("ReportService", reportServiceProcessor);
        // add more registerProcessor lines here for additional services

        userServiceHandler = userServiceProcessor.getHandler();
        reportServiceHandler = reportServiceProcessor.getHandler();
        // set any additional service handlers here
    }

    // getters and setters for the handlers

    public UserServiceHandler getUserServiceHandler() {
        return userServiceHandler;
    }

    public void setUserServiceHandler(UserServiceHandler userServiceHandler) {
        this.userServiceHandler = userServiceHandler;
    }

    public ReportServiceHandler getReportServiceHandler() {
        return reportServiceHandler;
    }

    public void setReportServiceHandler(ReportServiceHandler reportServiceHandler) {
        this.reportServiceHandler = reportServiceHandler;
    }
}
So to explain the above a bit, if you add any additional services, you need to add the *ServiceHandler classes as fields on this class, and create the getters and setters etc.
So now that we have that, we can create a new single servlet that will be added to the servlet context.
Here is my ApiServlet:
@Component
public class ApiServlet extends TServlet {

    private ApiMultiplexingProcessor processor;

    @Autowired
    public ApiServlet(ApiMultiplexingProcessor p) {
        super(p, new TBinaryProtocol.Factory());
        processor = p;
    }
}
And then you just add this servlet to the servlet context (from a bean) as before:
ServletRegistration.Dynamic api = servletContext.addServlet(
        "ApiServlet", (ApiServlet) ctx.getBean("apiServlet"));
api.setLoadOnStartup(1);
api.addMapping("/api/*");
// yay, now we have a single URL and a single servlet
This could all be helpful to someone else in my situation, so enjoy!
P.S. Make sure that when adapting your clients you use TMultiplexedProtocol, so that you can pass the service name through when talking to the server, e.g.:
TTransport transport = new THttpClient("https://myapp.com/api/");
TProtocol protocol = new TBinaryProtocol(transport);
TMultiplexedProtocol mp = new TMultiplexedProtocol(protocol, "UserService");
UserService.Client userServiceClient = new UserService.Client(mp);