Azure Service Bus configuration via code (Spring) - Java

I'm using the following dependency to send and receive messages from an Azure Service Bus topic:
<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-starter-servicebus-jms</artifactId>
    <version>4.2.0</version>
</dependency>
I'd like to create the configuration via code through a Spring bean, because I need to configure more than one connection string, so after reading the documentation I decided to create this bean:
@Bean
@Primary
public AzureServiceBusJmsProperties priceListJmsProperties() {
    var properties = new AzureServiceBusJmsProperties();
    properties.setConnectionString(connectionString);
    properties.setPricingTier("standard");
    properties.setTopicClientId(priceListTopicName);
    return properties;
}
If I debug the object creation, I see that this object is being created twice: the first time with the configuration that I've provided, and the second time with null data. This is why I'm getting the following error, because there is a validation in this object that throws an exception if a certain field is not set in the properties file:
'spring.jms.servicebus.connection-string' should be provided
I've tried creating a connection factory instead, but for the reason above I'm getting the same error.
Does anyone know how I can set this configuration as a bean instead of in the application.properties file? Thanks in advance.

Following @DeepDave-MT's answer, I couldn't disable the JMS autoconfiguration with the spring.jms.servicebus.enabled property, so I decided to exclude ServiceBusJmsAutoConfiguration with the spring.autoconfigure.exclude property; you have to pass the fully qualified class name to this property.
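For reference, the exclusion looks like this in application.properties (the package shown here is the one used by the 4.x starter; verify it against your exact version):
spring.autoconfigure.exclude=com.azure.spring.cloud.autoconfigure.jms.ServiceBusJmsAutoConfiguration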
Then, in my config class, I just added the following beans:
@Bean
@Primary
public ConnectionFactory connectionFactory() {
    var connectionFactory = new ServiceBusJmsConnectionFactory(connectionString);
    var serviceBusConnectionString = new ServiceBusConnectionString(connectionString);
    var remoteUri = String.format(AMQP_URI_FORMAT, serviceBusConnectionString.getEndpointUri(), 100000);
    connectionFactory.setRemoteURI(remoteUri);
    connectionFactory.setClientID(topicName);
    connectionFactory.setUsername(serviceBusConnectionString.getSharedAccessKeyName());
    connectionFactory.setPassword(serviceBusConnectionString.getSharedAccessKey());
    return new CachingConnectionFactory(connectionFactory);
}

@Bean
@Primary
public JmsListenerContainerFactory<?> topicJmsListenerContainerFactory(@Qualifier("connectionFactory") ConnectionFactory connectionFactory) {
    var topicFactory = new DefaultJmsListenerContainerFactory();
    topicFactory.setConnectionFactory(connectionFactory);
    topicFactory.setSubscriptionDurable(Boolean.TRUE);
    topicFactory.setErrorHandler(priceListErrorHandler());
    return topicFactory;
}

@Bean
@Primary
public AzureServiceBusJmsProperties jmsProperties() {
    var properties = new AzureServiceBusJmsProperties();
    properties.setConnectionString(connectionString);
    properties.setPricingTier("standard");
    properties.setTopicClientId(topicName);
    return properties;
}
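Since the original goal was to configure more than one connection string, a second, non-primary set of beans can sit next to these. A minimal sketch, assuming hypothetical secondConnectionString and secondTopicName fields:
@Bean
public ConnectionFactory secondConnectionFactory() {
    // same construction as above, against the second namespace
    var connectionFactory = new ServiceBusJmsConnectionFactory(secondConnectionString);
    var serviceBusConnectionString = new ServiceBusConnectionString(secondConnectionString);
    var remoteUri = String.format(AMQP_URI_FORMAT, serviceBusConnectionString.getEndpointUri(), 100000);
    connectionFactory.setRemoteURI(remoteUri);
    connectionFactory.setClientID(secondTopicName);
    connectionFactory.setUsername(serviceBusConnectionString.getSharedAccessKeyName());
    connectionFactory.setPassword(serviceBusConnectionString.getSharedAccessKey());
    return new CachingConnectionFactory(connectionFactory);
}

@Bean
public JmsListenerContainerFactory<?> secondTopicJmsListenerContainerFactory(
        @Qualifier("secondConnectionFactory") ConnectionFactory connectionFactory) {
    var factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setSubscriptionDurable(Boolean.TRUE);
    return factory;
}
A listener then selects the factory by name via @JmsListener(containerFactory = "secondTopicJmsListenerContainerFactory", ...).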

Related

SpringBoot Embedded Kafka to produce Event using Avro Schema

I have created the below test class to produce an event using AvroSerializer.
@SpringBootTest
@EmbeddedKafka(partitions = 1, brokerProperties = { "listeners=PLAINTEXT://localhost:9092", "port=9092" })
@TestPropertySource(locations = ("classpath:application-test.properties"))
@ContextConfiguration(classes = { TestAppConfig.class })
@DirtiesContext
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class EntitlementEventsConsumerServiceImplTest {

    @Autowired
    EmbeddedKafkaBroker embeddedKafkaBroker;

    @Bean
    MockSchemaRegistryClient mockSchemaRegistryClient() {
        return new MockSchemaRegistryClient();
    }

    @Bean
    KafkaAvroSerializer kafkaAvroSerializer() {
        return new KafkaAvroSerializer(mockSchemaRegistryClient());
    }

    @Bean
    public DefaultKafkaProducerFactory producerFactory() {
        Map<String, Object> props = KafkaTestUtils.producerProps(embeddedKafkaBroker);
        props.put(KafkaAvroSerializerConfig.AUTO_REGISTER_SCHEMAS, false);
        return new DefaultKafkaProducerFactory(props, new StringSerializer(), kafkaAvroSerializer());
    }

    @Bean
    public KafkaTemplate<String, ApplicationEvent> kafkaTemplate() {
        KafkaTemplate<String, ApplicationEvent> kafkaTemplate = new KafkaTemplate(producerFactory());
        return kafkaTemplate;
    }
}
But when I send an event using kafkaTemplate().send(appEventsTopic, applicationEvent); I am getting the exception below.
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema Not Found; error code: 404001
at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getIdFromRegistry(MockSchemaRegistryClient.java:79)
at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getId(MockSchemaRegistryClient.java:273)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:82)
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53)
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:62)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:902)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:781)
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:562)
at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:363)
When I use MockSchemaRegistryClient, why is it trying to look up the schema?
schema.registry.url=mock://localhost.something
Basically, anything with the mock:// prefix will do the job.
Refer to this https://github.com/confluentinc/schema-registry/blob/master/avro-serializer/src/main/java/io/confluent/kafka/serializers/AbstractKafkaAvroSerDeConfig.java
Also set auto.register.schemas=true
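Put together, the producer setup from the test could look like this — a sketch, assuming the constants from the linked AbstractKafkaAvroSerDeConfig class:
Map<String, Object> props = KafkaTestUtils.producerProps(embeddedKafkaBroker);
// anything with the mock:// prefix keeps the serializer on the in-memory registry
props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "mock://test-scope");
props.put(KafkaAvroSerializerConfig.AUTO_REGISTER_SCHEMAS, true);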
You are configuring the producer not to auto-register new schemas when producing messages, so it just tries to fetch the schema from the schema registry (SR) and does not find it there.
I also did not see you set up the schema registry URL, so I guess it is taking the default values.
To your question: the mock imitates the work of a real schema registry, but it has clear disadvantages:
/**
 * Mock implementation of SchemaRegistryClient that can be used for tests. This version is NOT
 * thread safe. Schema data is stored in memory and is not persistent or shared across instances.
 */
You may look at the documentation for more information:
https://github.com/confluentinc/schema-registry/blob/master/client/src/main/java/io/confluent/kafka/schemaregistry/client/MockSchemaRegistryClient.java#L47
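Alternatively, if you want to keep auto.register.schemas=false, you can pre-register the schema with the mock client before sending. A sketch, assuming ApplicationEvent is an Avro-generated class and the default TopicNameStrategy subject naming (the register signature varies between schema-registry versions):
// subject "<topic>-value" matches the default TopicNameStrategy
mockSchemaRegistryClient().register(appEventsTopic + "-value", ApplicationEvent.getClassSchema());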

Can't resolve symbol SimpleNativeJdbcExtractor [duplicate]

My project is on spring-boot-starter-parent "1.5.9.RELEASE" and I'm migrating it to spring-boot-starter-parent "2.3.1.RELEASE".
This is a multi-tenant application: one database has multiple schemas, and based on the tenant id, execution switches between schemas.
I had achieved this schema switching using SimpleNativeJdbcExtractor, but in the latest Spring Boot version NativeJdbcExtractor is no longer available.
Code snippet for the existing implementation:
@Bean
@Scope(
        value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
        proxyMode = ScopedProxyMode.TARGET_CLASS)
public JdbcTemplate jdbcTemplate() {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    SimpleNativeJdbcExtractor simpleNativeJdbcExtractor = new SimpleNativeJdbcExtractor() {
        @Override
        public Connection getNativeConnection(Connection con) throws SQLException {
            LOGGER.debug("Set schema for getNativeConnection " + Utilities.getTenantId());
            con.setSchema(Utilities.getTenantId());
            return super.getNativeConnection(con);
        }

        @Override
        public Connection getNativeConnectionFromStatement(Statement stmt) throws SQLException {
            LOGGER.debug("Set schema for getNativeConnectionFromStatement " + Utilities.getTenantId());
            Connection nativeConnectionFromStatement = super.getNativeConnectionFromStatement(stmt);
            nativeConnectionFromStatement.setSchema(Utilities.getTenantId());
            return nativeConnectionFromStatement;
        }
    };
    simpleNativeJdbcExtractor.setNativeConnectionNecessaryForNativeStatements(true);
    simpleNativeJdbcExtractor.setNativeConnectionNecessaryForNativePreparedStatements(true);
    jdbcTemplate.setNativeJdbcExtractor(simpleNativeJdbcExtractor);
    return jdbcTemplate;
}
Here Utilities.getTenantId() (a value stored in a ThreadLocal) gives the schema name based on the REST request.
Questions:
What are the alternatives to NativeJdbcExtractor so that the schema can be changed dynamically for JdbcTemplate?
Is there any other way to set the schema based on the request while creating the JdbcTemplate bean?
Any help, code snippet, or guidance to solve this issue is deeply appreciated.
Thanks.
When I was running the application in debug mode I saw that Spring was selecting HikariDataSource.
I had to intercept the getConnection call and update the schema.
So I did something like the following:
I created a custom class which extends HikariDataSource:
public class CustomHikariDataSource extends HikariDataSource {
    @Override
    public Connection getConnection() throws SQLException {
        Connection connection = super.getConnection();
        connection.setSchema(Utilities.getTenantId());
        return connection;
    }
}
Then, in the config class, I created a bean for my CustomHikariDataSource class.
@Bean
public DataSource customDataSource(DataSourceProperties properties) {
    final CustomHikariDataSource dataSource = (CustomHikariDataSource) properties
            .initializeDataSourceBuilder().type(CustomHikariDataSource.class).build();
    if (properties.getName() != null) {
        dataSource.setPoolName(properties.getName());
    }
    return dataSource;
}
This will be used by the JdbcTemplate bean:
@Bean
@Scope(
        value = ConfigurableBeanFactory.SCOPE_PROTOTYPE,
        proxyMode = ScopedProxyMode.TARGET_CLASS)
public JdbcTemplate jdbcTemplate() throws SQLException {
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    return jdbcTemplate;
}
With this approach, the DataSource bean is created only once, and for every JdbcTemplate access the proper schema is set at runtime.
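For illustration, a call path could look like this — hypothetical, assuming Utilities also exposes a setter for the ThreadLocal tenant id (normally populated by a request filter):
Utilities.setTenantId("tenant_a"); // hypothetical setter
List<String> names = jdbcTemplate.queryForList("SELECT name FROM customer", String.class);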
There's no need to get rid of JdbcTemplate. NativeJdbcExtractor was removed in Spring Framework 5 as it isn't needed with JDBC 4.
You should replace your usage of NativeJdbcExtractor with calls to connection.unwrap(Class). The method is inherited by Connection from JDBC's Wrapper.
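For example, where the extractor previously exposed the vendor connection, an unwrap call now does the same — a sketch with a hypothetical Oracle target type:
// JDBC 4: ask the wrapper for the vendor-specific connection
if (connection.isWrapperFor(oracle.jdbc.OracleConnection.class)) {
    oracle.jdbc.OracleConnection oracleConnection = connection.unwrap(oracle.jdbc.OracleConnection.class);
}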
You may also want to consider using AbstractRoutingDataSource which is designed to route connection requests to different underlying data sources based on a lookup key.
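A minimal sketch of that approach, reusing Utilities.getTenantId() from the question (which DataSources you register via setTargetDataSources is up to you):
public class TenantRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // the returned key selects one of the DataSources registered via setTargetDataSources
        return Utilities.getTenantId();
    }
}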

Issue integrating Drools with Activiti and SpringBoot - Using Custom Post Deployers

I took help from this forum: https://community.alfresco.com/thread/225090-spring-boot-activiti-5180-and-drools-integration-issue. I was able to autowire the ProcessEngine and get the process engine configuration, but while adding the deployer I got stuck. The snippet of code is:
SpringProcessEngineConfiguration sp = (SpringProcessEngineConfiguration)
        processEngine.getProcessEngineConfiguration();
List<Deployer> listDeployer = new ArrayList<Deployer>();
listDeployer.add(new RulesDeployer());
sp.setCustomPostDeployers(listDeployer); // <-- setCustomPostDeployers is not called
How can I achieve this and call the setCustomPostDeployers function to integrate Drools with Activiti? Can anyone please help me with this issue?
It took me some time to figure it out, but after reading some interesting posts and some documentation I finally created an example using Activiti, Spring Boot and Drools.
In your case, you are modifying the existing Spring Boot configuration before using the processEngine, but according to my tests it is too late to add the custom deployers there, because the resources have already been read. You must set the configuration much earlier.
The documentation generally points at changing 'activiti.cfg.xml', but that is for plain Spring and useless for Spring Boot. The idea, then, is to provide a configuration class the way Spring Boot usually does:
@Configuration
public class ProcessEngineConfigDrlEnabled {

    @Autowired
    private DataSource dataSource;

    @Autowired
    private PlatformTransactionManager transactionManager;

    @Bean
    public SpringProcessEngineConfiguration processEngineConfiguration() {
        SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
        try {
            config.setDeploymentResources(getBpmnFiles());
        } catch (IOException e) {
            e.printStackTrace();
        }
        config.setDataSource(dataSource);
        config.setTransactionManager(transactionManager);
        // Here the RulesDeployer is added.
        config.setCustomPostDeployers(Arrays.asList(new RulesDeployer()));
        return config;
    }

    /* Read the folder with BPMN files and the folder with DRL files. */
    private Resource[] getBpmnFiles() throws IOException {
        ResourcePatternResolver resourcePatternResolver = new PathMatchingResourcePatternResolver();
        Resource[] bpmnResources = resourcePatternResolver.getResources("classpath*:" + BPMN_PATH + "**/*.bpmn20.xml");
        Resource[] drlResources = resourcePatternResolver.getResources("classpath*:" + DRL_PATH + "**/*.drl");
        return (Resource[]) ArrayUtils.addAll(bpmnResources, drlResources);
    }

    @Bean
    public ProcessEngineFactoryBean processEngine() {
        ProcessEngineFactoryBean factoryBean = new ProcessEngineFactoryBean();
        factoryBean.setProcessEngineConfiguration(processEngineConfiguration());
        return factoryBean;
    }
    ...
}
As usual, this class must be in a package that Spring Boot can scan (in the package hierarchy of the main class).
In this example, I have @Autowired the dataSource and the transactionManager in order to reuse the ones from the default configuration. If they don't exist, you must implement your own and add them to the configuration.
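A hedged sketch of that fallback (the driver, JDBC URL and credentials are placeholders):
@Bean
public DataSource dataSource() {
    // hypothetical in-memory database for the example
    return DataSourceBuilder.create()
            .driverClassName("org.h2.Driver")
            .url("jdbc:h2:mem:activiti")
            .username("sa")
            .build();
}

@Bean
public PlatformTransactionManager transactionManager(DataSource dataSource) {
    return new DataSourceTransactionManager(dataSource);
}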

Multiple transaction managers - selecting one at runtime - Spring

I am using Spring to configure transactions in my application. I have two transaction managers defined for two RabbitMQ servers.
....
@Bean(name = "devtxManager")
public PlatformTransactionManager devtxManager() {
    return new RabbitTransactionManager(devConnectionFactory());
}

@Bean(name = "qatxManager")
public PlatformTransactionManager qatxManager() {
    return new RabbitTransactionManager(qaConnectionFactory());
}

@Bean
public ConnectionFactory devConnectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory();
    factory.setHost(propertyLoader.loadProperty("dev.rabbit.host"));
    factory.setPort(Integer.parseInt(propertyLoader.loadProperty("dev.rabbit.port")));
    factory.setVirtualHost("product");
    factory.setUsername(propertyLoader.loadProperty("dev.sender.rabbit.user"));
    factory.setPassword(propertyLoader.loadProperty("dev.sender.rabbit.password"));
    return factory;
}

@Bean
public ConnectionFactory qaConnectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory();
    factory.setHost(propertyLoader.loadProperty("qa.rabbit.host"));
    factory.setPort(Integer.parseInt(propertyLoader.loadProperty("qa.rabbit.port")));
    factory.setVirtualHost("product");
    factory.setUsername(propertyLoader.loadProperty("qa.sender.rabbit.user"));
    factory.setPassword(propertyLoader.loadProperty("qa.sender.rabbit.password"));
    return factory;
}
...
In my service class I need to pick the right transaction manager based on the 'env' variable passed in (i.e. if env == "qa" I need to choose 'qatxManager'; if env == "dev", 'devtxManager').
....
@Transactional(value = "qatxManager")
public String requeue(String env, String sourceQueue, String destQueue) {
    // read from queue
    List<Message> messageList = sendReceiveImpl.receive(env, sourceQueue);
....
How can I get it done?
I think you need a facade. Define an interface and create two classes implementing the same interface, but each with a different @Transactional(value = "...") annotation.
Then define one facade class which holds the two implementations (use @Qualifier to distinguish them). The facade takes the env string and calls the method on the proper bean.
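A minimal sketch of the facade idea (all names here are hypothetical):
public interface RequeueService {
    String requeue(String sourceQueue, String destQueue);
}

@Service("qaRequeueService")
public class QaRequeueService implements RequeueService {
    @Override
    @Transactional("qatxManager")
    public String requeue(String sourceQueue, String destQueue) {
        // QA requeue logic runs inside the qatxManager transaction
        return "requeued on qa";
    }
}

// DevRequeueService is identical, annotated with @Transactional("devtxManager")

@Service
public class RequeueFacade {

    private final RequeueService qa;
    private final RequeueService dev;

    public RequeueFacade(@Qualifier("qaRequeueService") RequeueService qa,
                         @Qualifier("devRequeueService") RequeueService dev) {
        this.qa = qa;
        this.dev = dev;
    }

    public String requeue(String env, String sourceQueue, String destQueue) {
        return ("qa".equals(env) ? qa : dev).requeue(sourceQueue, destQueue);
    }
}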

MongoDB multi-tenancy SpEL with @Document

This is related to
MongoDB and SpEL Expressions in @Document annotations
This is the way I am creating my Mongo template:
@Bean
public MongoDbFactory mongoDbFactory() throws UnknownHostException {
    String dbname = getCustid();
    return new SimpleMongoDbFactory(new MongoClient("localhost"), "mydb");
}

@Bean
MongoTemplate mongoTemplate() throws UnknownHostException {
    MappingMongoConverter converter =
            new MappingMongoConverter(mongoDbFactory(), new MongoMappingContext());
    return new MongoTemplate(mongoDbFactory(), converter);
}
I have a tenant provider class
@Component("tenantProvider")
public class TenantProvider {
    public String getTenantId() {
        // custom ThreadLocal logic for getting the name
    }
}
And my domain class:
@Document(collection = "#{#tenantProvider.getTenantId()}_device")
public class Device {
    // my fields here
}
As you can see, I have created my MongoTemplate as specified in that post, but I still get the error below:
Exception in thread "main" org.springframework.expression.spel.SpelEvaluationException: EL1057E:(pos 1): No bean resolver registered in the context to resolve access to bean 'tenantProvider'
What am I doing wrong?
I finally figured out why I was getting this issue.
When using Servlet 3 initialization, make sure that you add the application context to the Mongo mapping context as follows:
@Autowired
private ApplicationContext appContext;

@Bean
public MongoDbFactory mongoDbFactory() throws UnknownHostException {
    return new SimpleMongoDbFactory(new MongoClient("localhost"), "apollo-mongodb");
}

@Bean
MongoTemplate mongoTemplate() throws UnknownHostException {
    final MongoDbFactory factory = mongoDbFactory();
    final MongoMappingContext mongoMappingContext = new MongoMappingContext();
    mongoMappingContext.setApplicationContext(appContext);
    // Learned from the web: prevents Spring from including the _class attribute
    final MappingMongoConverter converter = new MappingMongoConverter(factory, mongoMappingContext);
    converter.setTypeMapper(new DefaultMongoTypeMapper(null));
    return new MongoTemplate(factory, converter);
}
Check the autowiring of the context and also:
mongoMappingContext.setApplicationContext(appContext);
With these two lines I was able to get the component wired correctly and use it in multi-tenant mode.
The above answer only worked partially in my case.
I've been struggling with the same problem and finally realized that under some runtime execution paths (when RepositoryFactorySupport relies on AbstractMongoQuery to query MongoDB, instead of SimpleMongoRepository, which as far as I know is used by the "out of the box" methods provided by Spring Data), the metadata object of type MongoEntityMetadata that belongs to the MongoQueryMethod used in AbstractMongoQuery is updated only once, in a method named getEntityInformation().
Because the MongoQueryMethod object that holds a reference to this 'stateful' bean seems to be pooled/cached by the infrastructure code, @Document annotations with SpEL do not always work.
As far as I know, we as developers have just one choice: use MongoOperations directly from your @Repository bean in order to specify the right collection name, evaluated at runtime with SpEL.
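A sketch of that approach, reusing the tenantProvider component from the question (the repository and method names are hypothetical):
@Repository
public class DeviceRepository {

    private final MongoOperations mongoOperations;
    private final TenantProvider tenantProvider;

    public DeviceRepository(MongoOperations mongoOperations, TenantProvider tenantProvider) {
        this.mongoOperations = mongoOperations;
        this.tenantProvider = tenantProvider;
    }

    public List<Device> findAll() {
        // the collection name is evaluated per call, so it follows the current tenant
        return mongoOperations.findAll(Device.class, tenantProvider.getTenantId() + "_device");
    }
}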
I've tried to use AOP to modify this behaviour, by setting a null collection name in MongoEntityMetadata, but this does not help: the AbstractMongoQuery inner classes that implement the Execution interface would also need changes, to check whether the MongoEntityMetadata collection name is null and, in that case, use a different MongoTemplate method signature.
MongoTemplate is smart enough to guess the right collection name by using its private method
private <T> String determineEntityCollectionName(T obj)
I've created a ticket in Spring's JIRA: https://jira.spring.io/browse/DATAMONGO-1043
If you have the mongoTemplate configured as in the related issue, the only thing I can think of is this:
<context:component-scan base-package="com.tenantprovider.package" />
Or if you want to use annotations:
@ComponentScan(basePackages = "com.tenantprovider.package")
You might not be scanning the tenant provider package.
Ex:
@ComponentScan(basePackages = "com.tenantprovider.package")
@Document(collection = "#{#tenantProvider.getTenantId()}_device")
public class Device {
    // my fields here
}
