Integrating Redis into JHipster: CacheConfiguration error - java

I'm trying to integrate the Redis cache into the JHipster generator, following this pull request on GitHub: https://github.com/jhipster/generator-jhipster/pull/10057/commits/cd2f2865d35dfd77624dd3a38ed32822e895539d#
I receive this error while building my project:
[ERROR] ../config/CacheConfiguration.java:[61,139] cannot find symbol
[ERROR] symbol:   method getRedis()
[ERROR] location: class io.github.jhipster.config.JHipsterProperties.Cache
The IDE reports: The method getRedis() is undefined for the type JHipsterProperties.Cache Java(67108964)
Where is getRedis() defined?
The constructor in CacheConfiguration.java:
private final javax.cache.configuration.Configuration<Object, Object> jcacheConfiguration;
public CacheConfiguration(JHipsterProperties jHipsterProperties) {
MutableConfiguration<Object, Object> jcacheConfig = new MutableConfiguration<>();
Config config = new Config();
config.useSingleServer()
.setAddress(jHipsterProperties.getCache().getRedis().getServer())
.setSubscriptionConnectionMinimumIdleSize(1)
.setSubscriptionConnectionPoolSize(50)
.setConnectionMinimumIdleSize(24)
.setConnectionPoolSize(64)
.setDnsMonitoringInterval(5000)
.setIdleConnectionTimeout(10000)
.setConnectTimeout(10000)
.setTimeout(3000)
.setRetryAttempts(3)
.setRetryInterval(1500)
.setDatabase(0)
.setPassword(null)
.setSubscriptionsPerConnection(5)
.setClientName(null)
.setSslEnableEndpointIdentification(true)
.setSslProvider(SslProvider.JDK)
.setSslTruststore(null)
.setSslTruststorePassword(null)
.setSslKeystore(null)
.setSslKeystorePassword(null)
.setPingConnectionInterval(0)
.setKeepAlive(false)
.setTcpNoDelay(false);
jcacheConfig.setStatisticsEnabled(true);
jcacheConfig.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, jHipsterProperties.getCache().getRedis().getExpiration())));
jcacheConfiguration = RedissonConfiguration.fromInstance(Redisson.create(config), jcacheConfig);
}
Am I missing some dependencies for getRedis()?
Note: I left this out of build.gradle.ejs; could that be causing the problem?
<%_ if (cacheProvider === 'redis') { _%>
implementation "org.redisson:redisson"
<%_ if (enableHibernateCache) { _%>
implementation "org.hibernate:hibernate-jcache"
<%_ } _%>
<%_ } _%>
Solution?:
ApplicationProperties.java:
@ConfigurationProperties(prefix = "application", ignoreUnknownFields = false)
public class ApplicationProperties {
private final Redis redis = new Redis();
public Redis getRedis() {
return redis;
}
public static class Redis {
// JHipsterDefaults.Cache.Redis is not in the released library either, so the
// default values are inlined here (matching the application.yml example below)
private String server = "redis://localhost:6379";
private int expiration = 300;
public String getServer() {
return server;
}
public void setServer(String server) {
this.server = server;
}
public int getExpiration() {
return expiration;
}
public void setExpiration(int expiration) {
this.expiration = expiration;
}
}
}
CacheConfiguration.java
<%_ if (cacheProvider === 'redis') { _%>
private final javax.cache.configuration.Configuration<Object, Object> jcacheConfiguration;
public CacheConfiguration(JHipsterProperties jHipsterProperties, ApplicationProperties applicationProperties) {
MutableConfiguration<Object, Object> jcacheConfig = new MutableConfiguration<>();
Config config = new Config();
config.useSingleServer()
.setAddress(applicationProperties.getRedis().getServer())
.setSubscriptionConnectionMinimumIdleSize(1)
.setSubscriptionConnectionPoolSize(50)
.setConnectionMinimumIdleSize(24)
.setConnectionPoolSize(64)
.setDnsMonitoringInterval(5000)
.setIdleConnectionTimeout(10000)
.setConnectTimeout(10000)
.setTimeout(3000)
.setRetryAttempts(3)
.setRetryInterval(1500)
.setDatabase(0)
.setPassword(null)
.setSubscriptionsPerConnection(5)
.setClientName(null)
.setSslEnableEndpointIdentification(true)
.setSslProvider(SslProvider.JDK)
.setSslTruststore(null)
.setSslTruststorePassword(null)
.setSslKeystore(null)
.setSslKeystorePassword(null)
.setPingConnectionInterval(0)
.setKeepAlive(false)
.setTcpNoDelay(false);
jcacheConfig.setStatisticsEnabled(true);
jcacheConfig.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, applicationProperties.getRedis().getExpiration())));
jcacheConfiguration = RedissonConfiguration.fromInstance(Redisson.create(config), jcacheConfig);
}
application.yml.ejs
# ===================================================================
# Application specific properties
# Add your own application properties here, see the ApplicationProperties class
# to have type-safe configuration, like in the JHipsterProperties above
#
# More documentation is available at:
# https://www.jhipster.tech/common-application-properties/
# ===================================================================
application:
  redis:
    server: redis://localhost:6379
    expiration: 300

You are missing the corresponding changes in the JHipster library, which are not released yet (they are located in this pull request).
My advice (until it's released) would be to copy the changes (the Redis class and values) from JHipsterProperties.java into your ApplicationProperties.java.
Then, if you need to set the values to something other than the defaults, you can do so in your application.yml under the application: key.
Lastly, add ApplicationProperties applicationProperties to the constructor in CacheConfiguration.java next to JHipsterProperties and reference getRedis() from there.
I believe the redisson dependency is also needed.
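In a generated project (rather than in the generator template) those conditional lines boil down to roughly the following, assuming the versions are resolved elsewhere (for example by the JHipster dependencies BOM):

implementation "org.redisson:redisson"
// only when the Hibernate 2nd-level cache is enabled:
implementation "org.hibernate:hibernate-jcache"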

Related

kafka streams persistent state store does not work in dockerized environment

I am getting the following exception when I try to retrieve an entry from the Kafka Streams persistent state store:
org.apache.kafka.streams.errors.InvalidStateStoreException: Cannot get state store kafka-state-dir because the stream thread is STARTING, not RUNNING
I am using Spring Boot and Kafka Streams.
Here is my code:
Configuration class
@Configuration
@EnableKafka
@EnableKafkaStreams
public class KafkaStreamsConfig {
@Value(value = "${spring.kafka.bootstrap-servers}")
private String bootstrapAddress;
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
KafkaStreamsConfiguration kStreamsConfig() {
Map<String, Object> props = new HashMap<>();
props.put(APPLICATION_ID_CONFIG, "applicationId");
props.put(BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
props.put(DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
props.put(DEFAULT_VALUE_SERDE_CLASS_CONFIG, CustomSerdes.messageContextSerde().getClass().getName());
props.put(STATE_DIR_CONFIG, "/app/kafka-state-dir");
return new KafkaStreamsConfiguration(props);
}
The kafka streams processor:
@Component
public class Processor {
private static final Logger logger = LoggerFactory.getLogger(MessageFinProcessor.class);
private static final Serde<String> STRING_SERDE = Serdes.String();
public static final String FIN_CACHE = "kafka-state-dir";
#Value("${kafka.topic.fin.message.ctx}")
private String finMessageContextTopic;
@Autowired
private void process(StreamsBuilder streamsBuilder) {
logger.info("Starting MessageFinProcessor on topic {}", finMessageContextTopic);
streamsBuilder.table(finMessageContextTopic,
Materialized.<String, MessageContext, KeyValueStore<Bytes, byte[]>>as(FIN_CACHE)
.withKeySerde(STRING_SERDE)
.withValueSerde(CustomSerdes.messageContextSerde()));
}
}
The service where I retrieve the entry from the cache:
@Service
public class KafkaStreamsStorageService {
private final StreamsBuilderFactoryBean streamsFactoryBean;
public static final String FIN_CACHE = "kafka-state-dir";
public KafkaStreamsStorageService(StreamsBuilderFactoryBean streamsFactoryBean) {
this.streamsFactoryBean = streamsFactoryBean;
}
public MessageContext get(String correlationId) {
KafkaStreams kafkaStreams = streamsFactoryBean.getKafkaStreams();
if (kafkaStreams != null) {
ReadOnlyKeyValueStore<String, MessageContext> keyValueStore = kafkaStreams.store(StoreQueryParameters.fromNameAndType(
FIN_CACHE, QueryableStoreTypes.keyValueStore()));
return keyValueStore.get(correlationId);
}
return null;
}
}
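(Note that the exception itself is about timing: the store can only be queried once the streams instance has reached RUNNING, so the lookup presumably needs a guard along these lines regardless of where the state is stored; this is only a sketch, reusing the names from the code above.)

public MessageContext get(String correlationId) {
    KafkaStreams kafkaStreams = streamsFactoryBean.getKafkaStreams();
    // Query the store only once the instance is RUNNING; while it is still
    // STARTING or REBALANCING, store() throws InvalidStateStoreException.
    if (kafkaStreams == null || kafkaStreams.state() != KafkaStreams.State.RUNNING) {
        return null; // or retry with a backoff until the state becomes RUNNING
    }
    ReadOnlyKeyValueStore<String, MessageContext> keyValueStore = kafkaStreams.store(
            StoreQueryParameters.fromNameAndType(FIN_CACHE, QueryableStoreTypes.keyValueStore()));
    return keyValueStore.get(correlationId);
}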
Inside the container where my Java app runs, I see only the following files in the store dir:
ls -a /app/kafka-state-dir/applicationId
.lock
kafka-streams-process-metadata
Here is my Dockerfile:
FROM ...
ENV JVM_MEM_ARGS=-Xms128m\ -Xmx2g
ARG JAR_FILE
ADD ./target/${JAR_FILE} /app/myapp.jar
WORKDIR /app
CMD ["./run.sh", "myapp.jar"]
Here are also the volumes that I pass to the service in docker-compose.yml:
...
volumes:
- /home/myuser/kafka-streams/:/app/kafka-state-dir/
Whereas when I run my app as a standalone Java jar from IntelliJ (with a different profile), the whole procedure works as expected (I can retrieve the entry from the persistent store) and I see the following files inside the store dir:
ls -a /app/kafka-state-dir/applicationId
0_0
0_1
0_2
0_3
kafka-streams-process-metadata
.lock
ls kafka-state-dir/applicationId/0_0
.checkpoint
rocksdb
I have tried many different paths for state.dir so that the Kafka Streams lib is able to find it, but none of them worked. Do you have any ideas?
Thanks

Is there any way to set "Retryable writes" to false in Spring Boot 2.2.1?

First time
I am trying to develop a controller to save data in DocumentDB in AWS.
The first time it saves, but the second time I look up the record already saved in the database, change some data, and save again, but...
I am getting this error:
Caused by: com.mongodb.MongoCommandException: Command failed with error 301: 'Retryable writes are not supported' on server aws:27017. The full response is {"ok": 0.0, "code": 301, "errmsg": "Retryable writes are not supported", "operationTime": {"$timestamp": {"t": 1641469879, "i": 1}}}
This is my Java code:
@Service
public class SaveStateHandler extends Handler<SaveStateCommand> {
@Autowired
private MongoRepository repository;
@Autowired
private MongoTemplate mongoTemplate;
@Override
public String handle(Command command) {
SaveStateCommand cmd = (SaveStateCommand) command;
State state = buildState(cmd);
repository.save(state);
return state.getId();
}
private State buildState(SaveStateCommand cmd) {
State state = State
.builder()
.activityId(cmd.getActivityId())
.agent(cmd.getAgent())
.stateId(cmd.getStateId())
.data(cmd.getData())
.dataAlteracao(LocalDateTime.now())
.build();
State stateFound = findState(cmd);
if (stateFound != null) {
state.setId(stateFound.getId());
}
return state;
}
private State findState(SaveStateCommand request) {
Query query = new Query();
selectField(query);
where(request, query);
return mongoTemplate.findOne(query, State.class);
}
private void selectField(Query query) {
query.fields().include("id");
}
private void where(SaveStateCommand request, Query query) {
query.addCriteria(new Criteria().andOperator(
Criteria.where("activityId").is(request.getActivityId()),
Criteria.where("agent").is(request.getAgent())));
}
}
In AWS they suggest using retryWrites=false, but I don't know how to do that in Spring Boot.
I use Spring Boot 2.2.1
I tried this:
@Bean
public MongoClientSettings mongoSettings() {
return MongoClientSettings
.builder()
.retryWrites(Boolean.FALSE)
.build();
}
But it did not work.
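One commonly suggested alternative (host, credentials and database name below are placeholders) is to put the flag directly on the connection URI that Spring Boot's auto-configuration binds, e.g. in application.properties:

spring.data.mongodb.uri=mongodb://myUser:myPwd@docdb-host:27017/myDb?retryWrites=false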
=================================================================================
Second Time
I connected to AWS DocumentDB with an SSH tunnel.
I started my application with this database configuration:
@Configuration
@EnableConfigurationProperties({MongoProperties.class})
public class MongoAutoConfiguration {
private final MongoClientFactory factory;
private final MongoClientOptions options;
private MongoClient mongo;
public MongoAutoConfiguration(MongoProperties properties, ObjectProvider<MongoClientOptions> options, Environment environment) {
this.options = options.getIfAvailable();
if (StringUtils.isEmpty(properties.getUsername()) || StringUtils.isEmpty(properties.getPassword())) {
properties.setUsername(null);
properties.setPassword(null);
}
properties.setUri(createUri(properties));
this.factory = new MongoClientFactory(properties, environment);
}
private String createUri(MongoProperties properties) {
String uri = "mongodb://";
if (StringUtils.hasText(properties.getUsername()) && !StringUtils.isEmpty(properties.getPassword())) {
uri = uri + properties.getUsername() + ":" + new String(properties.getPassword()) + "@";
}
return uri + properties.getHost() + ":" + properties.getPort() + "/" + properties.getDatabase() + "?retryWrites=false";
}
@PreDestroy
public void close() {
if (this.mongo != null) {
this.mongo.close();
}
}
@Bean
public MongoClient mongo() {
this.mongo = this.factory.createMongoClient(this.options);
return this.mongo;
}
}
And localy it saves the data without error.
But if I deploy my updated API to AWS ECS and try to save, I get the same error.
=================================================================================
Dependencies
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-mongodb</artifactId>
<version>2.2.1.RELEASE</version>
</dependency>
<dependency>
<groupId>com.querydsl</groupId>
<artifactId>querydsl-mongodb</artifactId>
<version>4.1.4</version>
</dependency>
When you construct your connection string, you can include the parameter that disables retryable writes by adding this to your connection URI:
?replicaSet=rs0&readPreference=primaryPreferred&retryWrites=false&maxIdleTimeMS=30000
Then use this when creating the database factory and mongo template (this example uses the reactive database factory, but the principle is the same for the SimpleMongoClientDatabaseFactory):
@Bean
fun reactiveMongoDatabaseFactory(
@Value("\${spring.data.mongodb.uri}") uri: String,
@Value("\${mongodb.database-name}") database: String
): ReactiveMongoDatabaseFactory {
val parsedURI = URI(uri)
return SimpleReactiveMongoDatabaseFactory(MongoClients.create(uri), database)
}
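A non-reactive Java equivalent would be along these lines (just a sketch: it assumes the URI property already carries retryWrites=false and that a mongodb.database-name property exists; note that on Spring Data MongoDB 2.x the factory class is called SimpleMongoClientDbFactory rather than SimpleMongoClientDatabaseFactory):

import com.mongodb.client.MongoClients;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;

@Configuration
public class MongoConfig {

    // The URI is expected to already end with ...retryWrites=false, so the client
    // created by MongoClients.create(uri) never attempts retryable writes.
    @Bean
    public SimpleMongoClientDatabaseFactory mongoDatabaseFactory(
            @Value("${spring.data.mongodb.uri}") String uri,
            @Value("${mongodb.database-name}") String database) {
        return new SimpleMongoClientDatabaseFactory(MongoClients.create(uri), database);
    }

    @Bean
    public MongoTemplate mongoTemplate(SimpleMongoClientDatabaseFactory factory) {
        return new MongoTemplate(factory);
    }
}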

Integration testing in multi module Maven project with Spring

I have a multi module maven project with several modules (parent, service, updater1, updater2).
The @SpringBootApplication is in the 'service' module and the others don't have artifacts.
'updater1' is a module which has a Kafka listener and an HTTP client; when it receives a Kafka event, it launches a request to an external API. I want to create integration tests in this module with Testcontainers, so I've created the containers and a Kafka producer that sends a message through a KafkaTemplate to my consumer.
My problem is that the Kafka producer is autowired as null, so the tests throw a NullPointerException. I think it must be a Spring configuration problem, but I can't find it. Can you help me? Thanks!
This is my test class:
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = {KafkaConfiguration.class, CacheConfiguration.class, ClientConfiguration.class})
public class InvoicingTest {
@ClassRule
public static final Containers containers = Containers.Builder.aContainer()
.withKafka()
.withServer()
.build();
private final MockHttpClient mockHttpClient =
new MockHttpClient(containers.getHost(SERVER),
containers.getPort(SERVER));
@Autowired
private KafkaEventProducer kafkaEventProducer;
@BeforeEach
@Transactional
void setUp() {
mockHttpClient.reset();
}
@Test
public void createElementSuccesfullResponse() throws ExecutionException, InterruptedException, TimeoutException {
mockHttpClient.whenPost("/v1/endpoint")
.respond(HttpStatusCode.OK_200);
kafkaEventProducer.produce("src/test/resources/event/invoiceCreated.json");
mockHttpClient.verify();
}
And this is the event producer:
@Component
public class KafkaEventProducer {
private final KafkaTemplate<String, String> kafkaTemplate;
private final String topic;
@Autowired
KafkaInvoicingEventProducer(KafkaTemplate<String, String> kafkaTemplate,
@Value("${kafka.topic.invoicing.name}") String topic){
this.kafkaTemplate = kafkaTemplate;
this.topic = topic;
}
public void produce(String event){
kafkaTemplate.send(topic, event);
}
}
You haven't detailed how KafkaEventProducer is implemented (is it a @Component?), and your test class is annotated with neither @SpringBootTest nor a runner via @RunWith.
Check out this sample, using the Apache KafkaProducer:
import org.apache.kafka.clients.producer.KafkaProducer;
public void sendRecord(String topic, String event) {
try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(producerProps(bootstrapServers, false))) {
send(producer, topic, event);
}
}
where
public void send(KafkaProducer<String, byte[]> producer, String topic, String event) {
try {
ProducerRecord<String, byte[]> record = new ProducerRecord<>(topic, event.getBytes());
producer.send(record).get();
} catch (InterruptedException | ExecutionException e) {
fail("Not expected exception: " + e.getMessage());
}
}
protected Properties producerProps(String bootstrapServer, boolean transactional) {
Properties producerProperties = new Properties();
producerProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
producerProperties.put(KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
producerProperties.put(VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
if (transactional) {
producerProperties.put(TRANSACTIONAL_ID_CONFIG, "my-transactional-id");
}
return producerProperties;
}
and bootstrapServers is taken from kafka container:
KafkaContainer kafka = new KafkaContainer();
kafka.start();
bootstrapServers = kafka.getBootstrapServers();
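If you'd rather keep autowiring your own KafkaEventProducer, the test class also needs a Spring Boot context so the bean actually gets created and injected. A minimal sketch (JUnit 5 assumed, class names taken from the question):

// Sketch: give the test a Spring Boot context so KafkaEventProducer is registered
// as a bean and injected, instead of the field staying null.
@SpringBootTest(classes = {KafkaConfiguration.class, CacheConfiguration.class,
        ClientConfiguration.class, KafkaEventProducer.class})
public class InvoicingTest {

    @Autowired
    private KafkaEventProducer kafkaEventProducer; // populated once the context loads

    // ... containers, mock HTTP client and tests as in the question
}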

Override Ribbon Server List to get a list of host names from consul

I am trying to override the Ribbon server list to get a list of host names from Consul. I have the Consul piece working properly (when testing with hardcoded values) to get the hostname and port for a service. The issue I am having is when I try to autowire IClientConfig: I get an exception that the IClientConfig bean could not be found. How do I override the Ribbon configuration and autowire IClientConfig in the ribbonServerList method?
I have tried following the instructions here at http://projects.spring.io/spring-cloud/spring-cloud.html#_customizing_the_ribbon_client on how to customize ribbon client configuration. I keep getting the following error:
Description:
Parameter 0 of method ribbonServerList in com.intradiem.enterprise.keycloak.config.ConsulRibbonSSLConfig required a bean of type 'com.netflix.client.config.IClientConfig' that could not be found.
This is causing Spring Boot to fail.
Below are the classes that I am trying to use:
AutoConfiguration Class:
@Configuration
@EnableConfigurationProperties
@ConditionalOnBean(SpringClientFactory.class)
@ConditionalOnProperty(value = "spring.cloud.com.intradiem.service.apirouter.consul.ribbon.enabled", matchIfMissing = true)
@AutoConfigureAfter(RibbonAutoConfiguration.class)
@RibbonClients(defaultConfiguration = ConsulRibbonSSLConfig.class)
//@RibbonClient(name = "question-answer-provider", configuration = ConsulRibbonSSLConfig.class)
public class ConsulRibbonSSLAutoConfig
{
}
Configuration Class:
@Component
public class ConsulRibbonSSLConfig
{
@Autowired
private ConsulClient client;
private String serviceId = "client";
public ConsulRibbonSSLConfig() {
}
public ConsulRibbonSSLConfig(String serviceId) {
this.serviceId = serviceId;
}
@Bean
@ConditionalOnMissingBean
public ServerList<?> ribbonServerList(IClientConfig clientConfig) {
ConsulSSLServerList serverList = new ConsulSSLServerList(client);
serverList.initWithNiwsConfig(clientConfig);
return serverList;
}
}
ServerList Code:
public class ConsulSSLServerList extends AbstractServerList<Server>
{
private final ConsulClient client;
private String serviceId = "client";
public ConsulSSLServerList(ConsulClient client) {
this.client = client;
}
@Override
public void initWithNiwsConfig(IClientConfig clientConfig) {
this.serviceId = clientConfig.getClientName();
}
@Override
public List<Server> getInitialListOfServers() {
return getServers();
}
@Override
public List<Server> getUpdatedListOfServers() {
return getServers();
}
private List<Server> getServers() {
List<Server> servers = new ArrayList<>();
Response<QueryExecution> results = client.executePreparedQuery(serviceId, QueryParams.DEFAULT);
List<QueryNode> nodes = results.getValue().getNodes();
for (QueryNode queryNode : nodes) {
QueryNode.Node node = queryNode.getNode();
servers.add(new Server(node.getMeta().containsKey("secure") ? "https" : "http", node.getNode(), queryNode.getService().getPort()));
}
return servers;
}
@Override
public String toString() {
final StringBuilder sb = new StringBuilder("ConsulSSLServerList{");
sb.append("serviceId='").append(serviceId).append('\'');
sb.append('}');
return sb.toString();
}
}
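For what it's worth, the documentation linked above warns that a Ribbon client configuration class must not be picked up by the main application's component scan, because it is meant to be instantiated by SpringClientFactory in a per-client child context, which is where an IClientConfig bean exists. A sketch of the configuration class without @Component (an assumption, not a verified fix for this exact setup):

// Sketch: plain @Configuration, referenced only from
// @RibbonClients(defaultConfiguration = ConsulRibbonSSLConfig.class) and kept out of
// the main @ComponentScan, so IClientConfig can be injected from the child context.
@Configuration
public class ConsulRibbonSSLConfig {

    private final ConsulClient client;

    public ConsulRibbonSSLConfig(ConsulClient client) {
        this.client = client;
    }

    @Bean
    @ConditionalOnMissingBean
    public ServerList<?> ribbonServerList(IClientConfig clientConfig) {
        ConsulSSLServerList serverList = new ConsulSSLServerList(client);
        serverList.initWithNiwsConfig(clientConfig);
        return serverList;
    }
}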

BeanCurrentlyInCreationException when setting DataSource in Spring Boot and Batch project

I am using both Spring Boot and Spring Batch in a Maven multi-module project for parsing CSV files and storing the data in a MySQL database.
When running the batch module using my BatchLauncher class (shared below), I get a BeanCurrentlyInCreationException caused by getDataBase(), which I use to configure my MySQL database (click this link to see the logs).
And when I remove this method, Spring Boot automatically chooses an embedded H2 database (link for the logs).
BatchLauncher class :
@Slf4j
public class BatchLauncher {
public static void main(String[] args) {
try {
Launcher.launchWithConfig("My Batch", BatchConfig.class, false);
}catch (Exception ex) {
log.error(ex.getMessage());
}
}
}
Launcher class :
@Slf4j
public class Launcher {
private Launcher() {}
public static void launchWithConfig(String batchName, Class<?> configClass, boolean oncePerDayMax) throws JobExecutionException, BatchException {
try {
// Check the spring profiles used
log.info("Start batch \"" + batchName + "\" with profiles : " + System.getProperty("spring.profiles.active"));
// Load configuration
@SuppressWarnings("resource")
AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(configClass);
JobLauncher jobLauncher = context.getBean(JobLauncher.class);
Job job = context.getBean(Job.class);
//Authorize only one execution of each job per day
JobParameters jobParameters = new JobParameters();
JobExecution execution = jobLauncher.run(job, jobParameters);
if(!BatchStatus.COMPLETED.equals(execution.getStatus())) {
throw new BatchException("Unknown error while executing batch : " + batchName);
}
}catch (Exception ex){
log.error("Exception",ex);
throw new BatchException(ex.getMessage());
}
}
}
BatchConfig class :
@Slf4j
@Configuration
@EnableAutoConfiguration(exclude = {DataSourceAutoConfiguration.class, DataSourceTransactionManagerAutoConfiguration.class, HibernateJpaAutoConfiguration.class})
@EnableBatchProcessing
@ComponentScan(basePackages = {
"fr.payet.flad.batch.tasklet",
"fr.payet.flad.batch.mapper"
})
@Import({CoreConfig.class})
public class BatchConfig {
private StepBuilderFactory steps;
private JobBuilderFactory jobBuilderFactory;
private ReadInputTasklet readInputTasklet;
public BatchConfig(StepBuilderFactory steps, JobBuilderFactory jobBuilderFactory, ReadInputTasklet readInputTasklet) {
this.steps = steps;
this.jobBuilderFactory = jobBuilderFactory;
this.readInputTasklet = readInputTasklet;
}
@Bean
public DataSource getDataBase(){
return DataSourceBuilder
.create()
.driverClassName("com.mysql.jdbc.Driver")
.url("jdbc:mysql://localhost:3306/myDb?useSSL=false")
.username("myuser")
.password("mypwd")
.build();
}
@Bean
public Step readInputStep() {
return steps.get("readInputStep")
.tasklet(readInputTasklet)
.build();
}
@Bean
public Job readCsvJob() {
return jobBuilderFactory.get("readCsvJob")
.incrementer(new RunIdIncrementer())
.flow(readInputStep())
.end()
.build();
}
}
The solution was to create a custom DataSourceConfiguration class annotated with @Configuration, in which I set my own database like this:
@Bean
public DataSource getDataBase(){
return DataSourceBuilder
.create()
.driverClassName("com.mysql.jdbc.Driver")
.url("jdbc:mysql://localhost:3306/myDB?useSSL=false")
.username("myUser")
.password("myPwd")
.build();
}
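In full, that class amounts to something like this (class name assumed; values as above):

@Configuration
public class DataSourceConfiguration {

    // Explicit MySQL DataSource so Spring Boot/Batch stops falling back to an embedded H2 database.
    @Bean
    public DataSource getDataBase() {
        return DataSourceBuilder
                .create()
                .driverClassName("com.mysql.jdbc.Driver")
                .url("jdbc:mysql://localhost:3306/myDB?useSSL=false")
                .username("myUser")
                .password("myPwd")
                .build();
    }
}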
