I have a Spring bean annotated with @Cacheable, defined like so:
@Service
public class MyCacheableBeanImpl implements MyCacheableBean {
    @Override
    @Cacheable(value = "cachedData")
    public List<Data> getData() { ... }
}
I need this class to be able to disable caching and work only with data from the original source, based on some event from the outside. Here's my approach:
@Service
public class MyCacheableBeanImpl implements MyCacheableBean, ApplicationListener<CacheSwitchEvent> {

    // Field with public getter to use it in the @Cacheable condition expression
    private boolean cacheEnabled = true;

    @Override
    @Cacheable(value = "cachedData", condition = "#root.target.cacheEnabled") // expression to check whether we want to use the cache or not
    public List<Data> getData() { ... }

    @Override
    public void onApplicationEvent(CacheSwitchEvent event) {
        // Updating the field from the application event. Very schematic, just to give you the idea
        this.cacheEnabled = event.isCacheEnabled();
    }

    public boolean isCacheEnabled() {
        return cacheEnabled;
    }
}
My concern is that the level of "magic" in this approach is very high. I'm not even sure how I can test that this works (based on the Spring documentation it should, but how can I be sure?). Am I doing it right? If not, how do I make it right?
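For what it's worth, here is a minimal sketch of how this could be verified. Both the CacheSwitchEvent class and the test are assumptions filling in pieces not shown above: the event type is only referenced by the listener, and the test presumes a Spring Boot test setup plus some way to count hits on the underlying source.

import org.springframework.context.ApplicationEvent;

// Sketch of the event type referenced by ApplicationListener<CacheSwitchEvent> above
public class CacheSwitchEvent extends ApplicationEvent {

    private final boolean cacheEnabled;

    public CacheSwitchEvent(Object source, boolean cacheEnabled) {
        super(source);
        this.cacheEnabled = cacheEnabled;
    }

    public boolean isCacheEnabled() {
        return cacheEnabled;
    }
}

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.ApplicationEventPublisher;

@SpringBootTest
class CacheSwitchTest {

    @Autowired
    private MyCacheableBean bean;

    @Autowired
    private ApplicationEventPublisher publisher;

    @Test
    void cacheIsBypassedAfterSwitchOffEvent() {
        bean.getData();
        bean.getData(); // caching on: the underlying source should be hit only once

        publisher.publishEvent(new CacheSwitchEvent(this, false));

        bean.getData(); // caching off: this call must hit the underlying source again
        // assert on an invocation counter (or a spy) inside the MyCacheableBean implementation
    }
}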
What I was looking for was the NoOpCacheManager. To make it work, I switched from XML bean creation to a factory method and did something as follows:
@Bean
public CacheManager cacheManager() {
    final CacheManager cacheManager;
    // methodCacheManager is a native EhCache CacheManager injected elsewhere; it may be absent
    if (this.methodCacheManager != null) {
        final EhCacheCacheManager ehCacheCacheManager = new EhCacheCacheManager();
        ehCacheCacheManager.setCacheManager(this.methodCacheManager);
        cacheManager = ehCacheCacheManager;
    } else {
        // no backing cache available: fall back to a no-op manager so @Cacheable calls pass through
        cacheManager = new NoOpCacheManager();
    }
    return cacheManager;
}
Inspired by SimY4's last comment, here is my working solution, subclassing SimpleCacheManager to provide a runtime switch.
Just use switchableSimpleCacheManager.setEnabled(false/true) to switch caching off/on.
package ch.hcuge.dpi.lab.cache;

import org.springframework.cache.Cache;
import org.springframework.cache.support.NoOpCache;
import org.springframework.cache.support.SimpleCacheManager;

/**
 * Extends {@link SimpleCacheManager} to allow disabling caching at runtime.
 */
public class SwitchableSimpleCacheManager extends SimpleCacheManager {

    private boolean enabled = true;

    public boolean isEnabled() {
        return enabled;
    }

    /**
     * If the enabled value changes, all caches are cleared.
     *
     * @param enabled true or false
     */
    public void setEnabled(boolean enabled) {
        if (enabled != this.enabled) {
            clearCaches();
        }
        this.enabled = enabled;
    }

    @Override
    public Cache getCache(String name) {
        if (enabled) {
            return super.getCache(name);
        } else {
            return new NoOpCache(name);
        }
    }

    protected void clearCaches() {
        this.loadCaches().forEach(Cache::clear);
    }
}
Configuration (using Caffeine):
@Bean
public SwitchableSimpleCacheManager cacheManager() {
    SwitchableSimpleCacheManager cacheManager = new SwitchableSimpleCacheManager();
    cacheManager.setCaches(Arrays.asList(
            buildCache(RESULT_CACHE, 24, 5000)
    ));
    return cacheManager;
}

private CaffeineCache buildCache(String name, int hoursToExpire, long maxSize) {
    return new CaffeineCache(
            name,
            Caffeine.newBuilder()
                    .expireAfterWrite(hoursToExpire, TimeUnit.HOURS)
                    .maximumSize(maxSize)
                    .build()
    );
}
I am using Spring Boot 2.7.2 and Caffeine 2.8.5. I have verified the cache is working correctly, and all of the other management metrics endpoints for the cache report the correct counts (cache.eviction.weight, cache.evictions, cache.gets, cache.size), but the count for cache.puts is always 0, even when many items are put into the cache.
@Configuration
@ConfigurationProperties(prefix = "importantcache")
public class CaffeineCacheConfig {

    private Long maxSize;
    private Duration expire;

    public Long getMaxSize() {
        return maxSize;
    }

    public void setMaxSize(Long maxSize) {
        this.maxSize = maxSize;
    }

    public Duration getExpire() {
        return expire;
    }

    public void setExpire(Duration expire) {
        this.expire = expire;
    }
}
@Component
@EnableCaching
public class CaffeineCache {

    private final CaffeineCacheConfig caffeineCacheConfig;

    @Autowired
    public CaffeineCache(CaffeineCacheConfig caffeineCacheConfig) {
        this.caffeineCacheConfig = caffeineCacheConfig;
    }

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager(
                "cache1",
                "cache2",
                "cache3"
        );
        cacheManager.setCaffeine(caffeineCacheBuilder());
        return cacheManager;
    }

    Caffeine<Object, Object> caffeineCacheBuilder() {
        Long size = 1000L;
        Duration expire = Duration.parse("PT1H");
        if (caffeineCacheConfig.getMaxSize() != null) {
            size = caffeineCacheConfig.getMaxSize();
        }
        if (caffeineCacheConfig.getExpire() != null) {
            expire = caffeineCacheConfig.getExpire();
        }
        return Caffeine.newBuilder()
                .maximumSize(size)
                .expireAfterWrite(expire)
                .recordStats();
    }
}
I have also tried this alternative cacheManager() method in the above class, taken from a related question:
@Bean
public CacheManager cacheManager(MeterRegistry meterRegistry) {
    CaffeineCacheManager caffeineCacheManager = new CaffeineCacheManager() {
        @Override
        protected com.github.benmanes.caffeine.cache.Cache<Object, Object> createNativeCaffeineCache(String name) {
            return CaffeineCacheMetrics.monitor(meterRegistry, super.createNativeCaffeineCache(name), name);
        }
    };
    caffeineCacheManager.setCaffeine(caffeineCacheBuilder());
    return caffeineCacheManager;
}
Any ideas on what else to try to make the cache.puts endpoint accurate?
I am working in an environment that changes credentials every few minutes. For beans that implement clients depending on these credentials to keep working, the beans need to be refreshed, and I decided that a good approach would be implementing a custom scope for this.
After looking around a bit in the documentation, I found that the main method a scope has to implement is get:
public class CyberArkScope implements Scope {

    private Map<String, Pair<LocalDateTime, Object>> scopedObjects = new ConcurrentHashMap<>();
    private Map<String, Runnable> destructionCallbacks = new ConcurrentHashMap<>();
    private Integer scopeRefresh;

    public CyberArkScope(Integer scopeRefresh) {
        this.scopeRefresh = scopeRefresh;
    }

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        if (!scopedObjects.containsKey(name) || scopedObjects.get(name).getKey()
                .isBefore(LocalDateTime.now().minusMinutes(scopeRefresh))) {
            scopedObjects.put(name, Pair.of(LocalDateTime.now(), objectFactory.getObject()));
        }
        return scopedObjects.get(name).getValue();
    }

    @Override
    public Object remove(String name) {
        destructionCallbacks.remove(name);
        return scopedObjects.remove(name);
    }

    @Override
    public void registerDestructionCallback(String name, Runnable runnable) {
        destructionCallbacks.put(name, runnable);
    }

    @Override
    public Object resolveContextualObject(String name) {
        return null;
    }

    @Override
    public String getConversationId() {
        return "CyberArk";
    }
}
@Configuration
@Import(CyberArkScopeConfig.class)
public class TestConfig {

    @Bean
    @Scope(scopeName = "CyberArk")
    public String dateString() {
        return LocalDateTime.now().toString();
    }
}

@RestController
public class HelloWorld {

    @Autowired
    private String dateString;

    @RequestMapping("/")
    public String index() {
        return dateString;
    }
}
When I debug this implementation with a simple String scope autowired into a controller, I see that the get method is only called once at startup and never again. This means the bean is never refreshed. Is something wrong with this behaviour, or is that how the get method is supposed to work?
It seems you also need to define the proxyMode, which injects an AOP proxy instead of a static reference to the string. Note that the bean class can't be final. This solved it:
@Configuration
@Import(CyberArkScopeConfig.class)
public class TestConfig {

    @Bean
    @Scope(scopeName = "CyberArk", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public NonFinalString dateString() {
        return new NonFinalString(LocalDateTime.now());
    }
}
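NonFinalString is not shown in the answer; here is a minimal sketch of what it might look like. The only requirement is that the class must not be final (String itself is final, which is why it cannot be proxied with TARGET_CLASS):

import java.time.LocalDateTime;

// Hypothetical wrapper used in place of String, which is final and therefore not proxyable
public class NonFinalString {

    private final String value;

    public NonFinalString(LocalDateTime dateTime) {
        this.value = dateTime.toString();
    }

    @Override
    public String toString() {
        return value;
    }
}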
We would like to send actuator metrics to CloudWatch. The provided Micrometer CloudWatch MeterRegistry solutions make too many assumptions about how our project is set up; for example, you need to depend on Spring Cloud AWS, which then makes even more assumptions. We would like to write a more lightweight implementation which just gets a CloudWatchAsyncClient injected and makes no other assumptions about our project.
However, I'm not sure how. Is there any example of how to write a custom implementation instead of having to depend on the available metrics registries?
So far I have done some experimenting with the following:
public interface CloudWatchConfig extends StepRegistryConfig {

    int MAX_BATCH_SIZE = 20;

    @Override
    default String prefix() {
        return "cloudwatch";
    }

    default String namespace() {
        String v = get(prefix() + ".namespace");
        if (v == null)
            throw new MissingRequiredConfigurationException("namespace must be set to report metrics to CloudWatch");
        return v;
    }

    @Override
    default int batchSize() {
        String v = get(prefix() + ".batchSize");
        if (v == null) {
            return MAX_BATCH_SIZE;
        }
        int vInt = Integer.parseInt(v);
        if (vInt > MAX_BATCH_SIZE)
            throw new InvalidConfigurationException("batchSize must be <= " + MAX_BATCH_SIZE);
        return vInt;
    }
}
@Service
@Log
public class CloudWatchMeterRegistry extends StepMeterRegistry {

    public CloudWatchMeterRegistry(CloudWatchConfig config, Clock clock) {
        super(config, clock);
    }

    @Override
    protected void publish() {
        getMeters().stream().forEach(a -> {
            log.warning(a.getId().toString());
        });
    }

    @Override
    protected TimeUnit getBaseTimeUnit() {
        return TimeUnit.MILLISECONDS;
    }
}
@Configuration
public class MetricsPublisherConfig {

    // props is a custom properties holder defined elsewhere in the project (not shown here)
    @Bean
    public CloudWatchConfig cloudWatchConfig() {
        return new CloudWatchConfig() {
            @Override
            public String get(String key) {
                switch (key) {
                    case "cloudwatch.step":
                        return props.getStep();
                    default:
                        return "testtest";
                }
            }
        };
    }
}
However, when I run this, the publish method is never called and no metrics are ever logged. What am I missing to get this working?
Here's an example project. I don't use CloudWatch myself so I haven't had a chance to test it integrating with AWS. Leave a comment if there are any issues and we can try to resolve them:
https://github.com/michaelmcfadyen/spring-boot-cloudwatch
I am trying to do something similar and avoid using Spring Cloud. The simplest solution I have found so far is:
import io.micrometer.cloudwatch2.CloudWatchConfig;
import io.micrometer.cloudwatch2.CloudWatchMeterRegistry;
import io.micrometer.core.instrument.Clock;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.actuate.autoconfigure.metrics.export.properties.StepRegistryProperties;
import org.springframework.boot.actuate.autoconfigure.metrics.export.properties.StepRegistryPropertiesConfigAdapter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;

@Configuration
public class MetricsConfiguration {

    @Bean
    public CloudWatchMeterRegistry cloudWatchMeterRegistry(CloudWatchConfig config, Clock clock) {
        return new CloudWatchMeterRegistry(config, clock, CloudWatchAsyncClient.create());
    }

    @Component
    public static class MicrometerCloudWatchConfig
            extends StepRegistryPropertiesConfigAdapter<StepRegistryProperties>
            implements CloudWatchConfig {

        private final String namespace;
        private final boolean enabled;

        public MicrometerCloudWatchConfig(
                @Value("${CLOUDWATCH_NAMESPACE}") String namespace,
                @Value("${METRICS_ENABLED}") boolean enabled) {
            super(new StepRegistryProperties() {
            });
            this.namespace = namespace;
            this.enabled = enabled;
        }

        @Override
        public String namespace() {
            return namespace;
        }

        @Override
        public boolean enabled() {
            return enabled;
        }

        @Override
        public int batchSize() {
            return CloudWatchConfig.MAX_BATCH_SIZE;
        }
    }
}
Dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-cloudwatch2</artifactId>
</dependency>
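The config class above resolves two @Value placeholders; assuming environment-style properties, they could be supplied like this (values are illustrative, not from the original answer):

CLOUDWATCH_NAMESPACE=my-service
METRICS_ENABLED=true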
I have set the qualifier name from a properties file as isomessage.qualifier=isoMessageMember1:
public class BankBancsConnectImpl implements BankBancsConnect {

    @Autowired
    @Resource(name = "${isomessage.qualifier}")
    private Iso8583Message iso8583Message;

    public BancsConnectTransferComp getFundTransfer(IpsDcBatchDetail ipsDcBatchDetail) {
        bancsxfr = iso8583Message.getFundTransfer(bancsxfr);
    }
}
The value of ${isomessage.qualifier} is static, as it is defined in the properties file. However, I want it to be dynamic and get its value from the database based on a certain condition. For instance, I have multiple implementations of Iso8583Message (member-wise) and have to call the respective class for the member id that is currently logged in. Please guide me on how to achieve this in the proper Spring way.
And my implementation class will look like this:
@Service("isoMessageMember1")
public class Iso8583MessageEBLImpl implements Iso8583Message {

    public BancsConnectTransferComp getFundTransfer(BancsConnectTransferComp bancsxfr) throws Exception {
        ...
    }
}
You can use Condition instead of Qualifier if you are using Spring 4+.
First, you need a ConfigDAO which reads the qualifier name you need from the database.
public class ConfigDAO {

    public static String readFromDataSource() {
        return " ";
    }
}
Suppose there are two implementations of Iso8583Message; you can create two Condition classes.
IsoMessageMember1_Condition
public class IsoMessageMember1_Condition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        String qualifier = ConfigDAO.readFromDataSource();
        return qualifier.equals("IsoMessageMember1_Condition");
    }
}
IsoMessageMember2_Condition
public class IsoMessageMember2_Condition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        String qualifier = ConfigDAO.readFromDataSource();
        return qualifier.equals("IsoMessageMember2_Condition");
    }
}
Return a different implementation according to the condition in a config class. Both @Bean methods deliberately share the bean name iso8583Message; this works because the two conditions are mutually exclusive, so only one of the beans is registered.
@Configuration
public class MessageConfiguration {

    @Bean(name = "iso8583Message")
    @Conditional(IsoMessageMember1_Condition.class)
    public Iso8583Message isoMessageMember1() {
        return new Iso8583MessageEBLImpl();
    }

    @Bean(name = "iso8583Message")
    @Conditional(IsoMessageMember2_Condition.class)
    public Iso8583Message isoMessageMember2() {
        return new OtherMessageEBLImpl();
    }
}
Remove the @Qualifier and @Autowired annotations, which you no longer need; you can retrieve the message from the context every time you use it.
public class BankBancsConnectImpl implements BankBancsConnect {

    @Autowired
    private ApplicationContext context; // needed to look up the conditional bean at runtime

    private Iso8583Message iso8583Message;

    public BancsConnectTransferComp getFundTransfer(IpsDcBatchDetail ipsDcBatchDetail) {
        iso8583Message = (Iso8583Message) context.getBean("iso8583Message");
        bancsxfr = iso8583Message.getFundTransfer(bancsxfr);
    }
}
In Spring it is possible to autowire the application context and retrieve any bean by name.
For example, say your interface has a signature similar to this:
public interface Iso8583Message {
    public String getFundDetails(String uniqueId);
}
and two different implementations follow this format:
@Service("iso8583-message1")
public class Iso8583MessageImpl1 implements Iso8583Message {

    @Override
    public String getFundDetails(String uniqueId) {
        return "Iso8583MessageImpl1 details ";
    }
}
and
@Service("iso8583-message2")
public class Iso8583MessageImpl2 implements Iso8583Message {

    @Override
    public String getFundDetails(String uniqueId) {
        return "Iso8583MessageImpl2 details ";
    }
}
We can retrieve the beans as follows:
public class BankBancsConnectImpl implements BankBancsConnect {

    @Autowired
    private ApplicationContext applicationContext;

    public BancsConnectTransferComp getFundTransfer(IpsDcBatchDetail ipsDcBatchDetail) {
        // for retrieving the 1st implementation
        Iso8583Message message1 = applicationContext.getBean("iso8583-message1", Iso8583Message.class);
        // for retrieving the 2nd implementation
        Iso8583Message message2 = applicationContext.getBean("iso8583-message2", Iso8583Message.class);
        String result = message1.getFundDetails(uniqueId); // uniqueId supplied by the caller
    }
}
In this case, we can configure the bean names to come from the database instead of hard-coding the values ("iso8583-message1", "iso8583-message2").
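As a sketch of that last point, assuming a hypothetical ConfigDAO that maps the logged-in member to a bean name (both the DAO and its method are made up for illustration):

public class BankBancsConnectImpl implements BankBancsConnect {

    @Autowired
    private ApplicationContext applicationContext;

    @Autowired
    private ConfigDAO configDAO; // hypothetical DAO resolving the bean name from the database

    public BancsConnectTransferComp getFundTransfer(IpsDcBatchDetail ipsDcBatchDetail) {
        // e.g. returns "iso8583-message1" or "iso8583-message2" for the current member id
        String beanName = configDAO.resolveBeanNameForCurrentMember();
        Iso8583Message message = applicationContext.getBean(beanName, Iso8583Message.class);
        ...
    }
}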
I have created my own repository like this:
public interface MyRepository extends TypedIdCassandraRepository<MyEntity, String> {
}
So the question is: how do I automatically create the Cassandra table for it? Currently Spring injects MyRepository, which tries to insert the entity into a non-existent table.
So, is there a way to create Cassandra tables (if they do not exist) during Spring container start-up?
P.S. It would be very nice if there were just a boolean config property, without adding lines of XML and creating something like a BeanFactory, etc. :-)
Override the getSchemaAction property on the AbstractCassandraConfiguration class:
@Configuration
@EnableCassandraRepositories(basePackages = "com.example")
public class TestConfig extends AbstractCassandraConfiguration {

    @Override
    public String getKeyspaceName() {
        return "test_config";
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.RECREATE_DROP_UNUSED;
    }

    @Bean
    public CassandraOperations cassandraOperations() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}
You can use this config in application.properties:
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
You'll also need to override the getEntityBasePackages() method in your AbstractCassandraConfiguration implementation. This will allow Spring to find any classes that you've annotated with @Table, and create the tables.
@Override
public String[] getEntityBasePackages() {
    return new String[]{"com.example"};
}
You'll need to include the spring-data-cassandra dependency in your pom.xml file.
Configure your TestConfig class as below:
@Configuration
@PropertySource(value = { "classpath:Your .properties file here" })
@EnableCassandraRepositories(basePackages = { "base-package name of your Repositories" })
public class CassandraConfig {

    @Autowired
    private Environment env;

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(env.getProperty("contactpoints from your properties file"));
        cluster.setPort(Integer.parseInt(env.getProperty("ports from your properties file")));
        return cluster;
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(env.getProperty("keyspace from your properties file"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.CREATE_IF_NOT_EXISTS);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }

    @Bean
    public CassandraMappingContext mappingContext() throws ClassNotFoundException {
        CassandraMappingContext mappingContext = new CassandraMappingContext();
        mappingContext.setInitialEntitySet(getInitialEntitySet());
        return mappingContext;
    }

    public String[] getEntityBasePackages() {
        return new String[]{ "base-package name of all your entities annotated with @Table" };
    }

    protected Set<Class<?>> getInitialEntitySet() throws ClassNotFoundException {
        return CassandraEntityClassScanner.scan(getEntityBasePackages());
    }
}
This last getInitialEntitySet method might be an optional one; try without it too.
Make sure your keyspace, contact points, and port are in the .properties file, like:
cassandra.contactpoints=localhost,127.0.0.1
cassandra.port=9042
cassandra.keyspace='Your Keyspace name here'
Actually, after digging into the source code of spring-data-cassandra:3.1.9, you can check the implementation of org.springframework.data.cassandra.config.SessionFactoryFactoryBean#performSchemaAction, which is implemented as follows:
protected void performSchemaAction() throws Exception {

    boolean create = false;
    boolean drop = DEFAULT_DROP_TABLES;
    boolean dropUnused = DEFAULT_DROP_UNUSED_TABLES;
    boolean ifNotExists = DEFAULT_CREATE_IF_NOT_EXISTS;

    switch (this.schemaAction) {
        case RECREATE_DROP_UNUSED:
            dropUnused = true;
            // fall through
        case RECREATE:
            drop = true;
            // fall through
        case CREATE_IF_NOT_EXISTS:
            ifNotExists = SchemaAction.CREATE_IF_NOT_EXISTS.equals(this.schemaAction);
            // fall through
        case CREATE:
            create = true;
            // fall through
        case NONE:
        default:
            // do nothing
    }

    if (create) {
        createTables(drop, dropUnused, ifNotExists);
    }
}
which means you have to assign CREATE to schemaAction if the table has never been created, and CREATE_IF_NOT_EXISTS does not work.
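Following that reading, a sketch of forcing the CREATE action in an AbstractCassandraConfiguration subclass (whether CREATE_IF_NOT_EXISTS works for you may depend on the version, as discussed above):

@Override
public SchemaAction getSchemaAction() {
    // per the source walk-through above, CREATE is what triggers
    // table creation on first run in this version
    return SchemaAction.CREATE;
}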
For more information, please check here: Why `spring-data-jpa` with `spring-data-cassandra` won't create cassandra tables automatically?