Store data into a bean from database - java

In Spring Boot, I want to read data from the database and store it in a bean object. This should be done only once (like a cache); for further requests the bean should be used instead of making a database call again.
Example
/*
"DataFromDB" -> bean should have the values
*/
List<Users> uList = ApplicationContext.getBean("DataFromDB");
Is there any way to achieve this?
Thank you

During application boot, you can simply create a List<Users> bean and populate it with the required info.
Bean creation happens once; whenever you want to reuse the data, just get that bean and Spring will take care of the rest.
Somewhere in a config file, declare the bean:
@Component
public class InitialConfiguration {

    @Bean
    public List<Users> ulist() {
        List<Users> uList = null;
        // uList = populate it from the DB
        return uList;
    }
}
Spring will create a ulist bean and store it. Now whenever you want to use it, you can simply autowire it into your variables:
@Service
public class SomeRandomClass {

    @Autowired
    private List<Users> ulist;

    public void performOperationOnUList() {
        ulist.get(0); // use it
    }
}
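If you prefer to look the bean up by name, as in the original question, here is a minimal sketch (the class name BeanLookupExample is illustrative; "ulist" is the default bean name taken from the @Bean method above):
@Service
public class BeanLookupExample {

    @Autowired
    private ApplicationContext context;

    public void useList() {
        // look the bean up by its default name, which is the @Bean method name ("ulist")
        @SuppressWarnings("unchecked")
        List<Users> uList = (List<Users>) context.getBean("ulist");
        uList.forEach(System.out::println);
    }
}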

You can use a caching mechanism like Ehcache.
To add Ehcache to your application, here is a very basic approach you can follow.
Add Ehcache through your build tool. Here is an example for Gradle.
dependencies {
    compile("org.hibernate:hibernate-ehcache:5.2.12.Final")
}
Add the Ehcache configuration. Here I'm using annotation-based bean configuration.
@Configuration
@EnableCaching
public class CacheConfiguration {

    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactory() {
        EhCacheManagerFactoryBean cacheManagerFactoryBean = new EhCacheManagerFactoryBean();
        cacheManagerFactoryBean.setConfigLocation(new ClassPathResource("ehcache.xml"));
        cacheManagerFactoryBean.setShared(true);
        return cacheManagerFactoryBean;
    }

    @Bean
    public EhCacheCacheManager ehCacheCacheManager() {
        EhCacheCacheManager cacheManager = new EhCacheCacheManager();
        cacheManager.setCacheManager(ehCacheManagerFactory().getObject());
        cacheManager.setTransactionAware(true);
        return cacheManager;
    }
}
Define the cache regions. Here you can define an individual cache for each repository you want to cache. Create a file named ehcache.xml and place it on the classpath.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ehcache>
<ehcache>
<diskStore path="java.io.tmpdir"/>
<cache name="userCache" maxElementsInMemory="100" eternal="false" timeToIdleSeconds="600" timeToLiveSeconds="3600" overflowToDisk="true"/>
</ehcache>
Add the @Cacheable annotation to the transactional methods where you want to cache the DB operation.
@Cacheable(value = "userCache", key = "#p0")
public Company find(Long id) {
//db operation in here
}

From your problem statement, what I understand is that you want to cache objects from the database, the caching should happen only once (preferably on application start-up), and the cached data should be accessible anywhere in the context.
For this, you can store the data from the DB in a static final collection. The caching operation can be done on application startup via the @EventListener annotation.
@Component
public class DbCache {

    public static final List<Object> dbCache = new ArrayList<>();

    @EventListener(value = ApplicationReadyEvent.class)
    private void initCache() {
        List<Object> dataFromDB = new ArrayList<>(); // fetch the data from the DB here
        dbCache.addAll(dataFromDB);
    }

    public static List<Object> getDbCache() {
        return dbCache;
    }
}
You can now use DbCache.getDbCache() anywhere in your code to fetch the data.
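For example, a hypothetical service could then read the cached list without touching the database:
@Service
public class UserLookupService {

    public int countCachedUsers() {
        // reads straight from the static cache populated at startup; no DB call here
        return DbCache.getDbCache().size();
    }
}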

Related

Unable to refresh existing singleton bean in Spring boot application

I have a configuration defined in my Spring Boot application as follows:
@Configuration
public class RuleEngineConfiguration {

    private final DatabaseRuleLoader databaseRuleLoader;

    public RuleEngineConfiguration(DatabaseRuleLoader databaseRuleLoader) {
        this.databaseRuleLoader = databaseRuleLoader;
    }

    @Bean
    public RuleEngineManager ruleEngine() {
        return RuleEngineManagerFactory.getRuleEngineManager(this.databaseRuleLoader);
    }
}
Now I would like to refresh the RuleEngineManager bean in my Spring Boot application on create/update of a row in a given DB table, with a refresh function as defined below:
public void refresh() {
    databaseRuleLoader.refresh(); // <-- THIS RELOADS ROWS FROM DB
    BeanDefinitionRegistry registry = (BeanDefinitionRegistry) applicationContext
            .getAutowireCapableBeanFactory();
    RuleEngineManager ruleEngineManager = RuleEngineManagerFactory
            .getRuleEngineManager(databaseRuleLoader);
    registry.removeBeanDefinition("ruleEngine");
    ((SingletonBeanRegistry) registry).registerSingleton("ruleEngine", ruleEngineManager);
}
And in my application, where I need the RuleEngineManager bean, I am getting it as follows:
((RuleEngineManager) applicationContext.getBean("ruleEngine"))
Even though the refresh function is executed every time I create/update a row in the DB, I am not seeing any changes. It seems the existing RuleEngineManager bean is still being injected as a dependency. I am not able to figure out what I am missing here.
Could anyone please help here? Thanks.
What I was suggesting is to use a factory (design pattern)... for example in this way:
// you already have this class
public class RuleEngineManagerFactory {

    private DatabaseRuleLoader databaseRuleLoader;
    private RuleEngineManager ruleEngineManager;

    public RuleEngineManagerFactory(DatabaseRuleLoader databaseRuleLoader) {
        this.databaseRuleLoader = databaseRuleLoader;
    }

    public RuleEngineManager getRuleEngineManager() {
        if (this.ruleEngineManager == null) {
            ruleEngineManager = new RuleEngineManager(this.databaseRuleLoader);
        }
        return this.ruleEngineManager;
    }

    public void refresh() {
        ruleEngineManager = new RuleEngineManager(this.databaseRuleLoader);
    }
}
This way you inject RuleEngineManagerFactory wherever you need it, and you can refresh the RuleEngineManager at any time: the singleton is the factory of the manager, not the manager itself.
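As a rough sketch (bean and class names are illustrative), the factory itself becomes the singleton bean and callers ask it for the current manager:
@Configuration
public class RuleEngineFactoryConfiguration {

    // the factory is the singleton; the manager it hands out can be rebuilt at any time
    @Bean
    public RuleEngineManagerFactory ruleEngineManagerFactory(DatabaseRuleLoader databaseRuleLoader) {
        return new RuleEngineManagerFactory(databaseRuleLoader);
    }
}

@Service
class RuleEvaluationService {

    private final RuleEngineManagerFactory factory;

    RuleEvaluationService(RuleEngineManagerFactory factory) {
        this.factory = factory;
    }

    void onRulesChangedInDb() {
        // reload the rules first (e.g. databaseRuleLoader.refresh()), then rebuild the manager
        factory.refresh();
    }

    void evaluateSomething() {
        RuleEngineManager manager = factory.getRuleEngineManager(); // always the current manager
        // ... use the manager ...
    }
}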

Implement caching framework(ehcache) to cache the Lookupcode and Location dropdown values during the server startup

I want to load both the LookupCode and Location data from the database into cache memory using Spring Ehcache when the application starts, i.e. when the server starts, before any other method is called. In future a few more dropdowns will be added, so there should be a common method to cache whatever data comes in, based on the criteria of the dropdown data.
An Entity, Repository and Service are already written for LookupCode and Location.
I have written the below for implementing caching framework:
ehcache.xml
<cache name="LookupCodeRepository.getDropdownValues"/>
<cache name="LocationRepository.getDropdownValues"/>
application.properties
spring.jpa.properties.hibernate.cache.use_second_level_cache = false
spring.jpa.properties.hibernate.cache.use_query_cache = false
spring.jpa.properties.hibernate.cache.region.factory_class = org.hibernate.cache.ehcache.EhCacheRegionFactory
spring.jpa.properties.hibernate.cache.provider_class = org.hibernate.cache.EhCacheProvider
spring.jpa.properties.hibernate.cache.use_structured_entries = true
spring.jpa.properties.hibernate.cache.region_prefix =
spring.jpa.properties.hibernate.cache.provider_configuration_file_resource_path = ehcache.xml
I am also using the hibernate-ehcache jar in pom.xml.
WebConfig.java
@Configuration
public class WebConfig implements ServletContextInitializer {

    @Autowired
    private CustomCache cache;

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        cache.loadCache();
    }
}
CustomCache.java
public class CustomCache {

    @Autowired
    private LookupCodeService lkupSer;

    @Autowired
    private LocationService locSer;

    public void loadCache() {
        List<LookupCode> lkup = lkupSer.getDropdownValues();
        List<Location> locat = locSer.getDropdownValues();
    }
}
So here, instead of calling each individual service in the loadCache() method, it should happen automatically: whatever service is created should automatically be cached. There should be a common method to cache whatever data comes in, based on the criteria of the dropdown data.
How can I implement that?
The services you want to work with have a common method. Define an interface for that method:
interface ProvidesDropdownValues<T> {
    List<T> getDropdownValues();
}
Now you can do:
class DropdownValuesService {

    @Autowired
    ApplicationContext context;

    // a cache name is required on @Cacheable; "dropdownValues" is just an example
    @Cacheable("dropdownValues")
    public List<?> getDropdownValues(String beanName) {
        ProvidesDropdownValues<?> bean = (ProvidesDropdownValues<?>) context.getBean(beanName);
        return bean.getDropdownValues();
    }
}
If your services don't have bean names you could work with class names instead.
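A hedged variant keyed by class could look like this (again, the cache name "dropdownValues" is just an example):
@Cacheable("dropdownValues")
public List<?> getDropdownValues(Class<? extends ProvidesDropdownValues<?>> serviceClass) {
    // the Class parameter acts as the cache key
    return context.getBean(serviceClass).getDropdownValues();
}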
For load on startup you could do:
class StartupWarmupService {

    @Autowired
    ApplicationContext context;

    @Autowired
    DropdownValuesService dropDowns;

    @PostConstruct
    void startup() {
        for (String n : context.getBeanNamesForType(ProvidesDropdownValues.class)) {
            dropDowns.getDropdownValues(n);
        }
    }
}
I suggest that the warm-up code only runs in the production application. That is why it makes sense to keep it separate from the general caching logic. For testing a single service you don't want to load everything, and startup times for developers should stay fast.
Disclaimer: I am not a heavy Spring user, so details may be wrong but the basic approach should work out.
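One way to keep the warm-up out of tests, assuming a "prod" profile is active in production (the profile name is just an example), is to restrict the warm-up bean to that profile:
@Service
@Profile("prod") // the warm-up bean only exists when the production profile is active
class StartupWarmupService {

    @Autowired
    ApplicationContext context;

    @Autowired
    DropdownValuesService dropDowns;

    @PostConstruct
    void startup() {
        for (String n : context.getBeanNamesForType(ProvidesDropdownValues.class)) {
            dropDowns.getDropdownValues(n);
        }
    }
}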

Using Spring Cache with Hazelcast Near Cache

I'm trying to configure Spring CacheManager with Hazelcast. Also, I want to configure Hazelcast's Near Cache so I can retrieve the (already deserialized) instance of my cached object.
Here is my configuration
@Bean
public HazelcastInstance hazelcastConfig() {
    val config = new Config().setInstanceName("instance");
    val serializationConfig = config.getSerializationConfig();
    addCacheConfig(config, "USERS");
    serializationConfig.addSerializerConfig(new SerializerConfig()
            .setImplementation(getSerializer())
            .setTypeClass(User.class));
    return Hazelcast.newHazelcastInstance(config);
}
@Bean
public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
    return new HazelcastCacheManager(hazelcastInstance);
}

@Bean
public PlatformTransactionManager chainedTransactionManager(PlatformTransactionManager jpaTransactionManager, HazelcastInstance hazelcastInstance) {
    return new ChainedTransactionManager(
            jpaTransactionManager,
            new HazelcastTransactionManager(hazelcastInstance)
    );
}
// Configure Near Cache
private void addCacheConfig(Config config, String cacheName) {
    val nearCacheConfig = new NearCacheConfig()
            .setInMemoryFormat(OBJECT)
            .setCacheLocalEntries(true)
            .setInvalidateOnChange(false)
            .setTimeToLiveSeconds(hazelcastProperties.getTimeToLiveSeconds())
            .setEvictionConfig(new EvictionConfig()
                    .setMaxSizePolicy(ENTRY_COUNT)
                    .setEvictionPolicy(EvictionPolicy.LRU)
                    .setSize(hazelcastProperties.getMaxEntriesSize()));
    config.getMapConfig(cacheName)
            .setInMemoryFormat(BINARY)
            .setNearCacheConfig(nearCacheConfig);
}
Saving and retrieving from the cache is working fine, but my object is deserialized every time I have a cache hit. I want to avoid this deserialization time using a Near Cache, but it doesn't work. I also tried the BINARY in-memory format.
Is this possible with Hazelcast? Or is this deserialization always executed even if I have a NearCache?
Thanks
So after a few changes it is working now. Here is my conclusion:
In order to have a Near Cache working with Spring Cache, all your cached objects should be immutable. This means final classes and final fields. Also, they should all implement the Serializable interface.
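For illustration, a cached value shaped like that (a hypothetical User class) might look as follows:
// immutable, Serializable value object suited to an OBJECT-format Near Cache
public final class User implements Serializable {

    private static final long serialVersionUID = 1L;

    private final Long id;
    private final String name;

    public User(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }

    public String getName() { return name; }
}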

Spring Transaction Aware cache is not working

I am using the Spring cache abstraction with Ehcache as the cache provider.
I am trying to tie cache operations to Spring JPA transactions, but I am not able to do so.
Even though the transaction fails/rolls back, the cache put still happens.
Configuration,
@Bean
public EhCacheManagerFactoryBean cacheManagerUsingSpringApi() {
    EhCacheManagerFactoryBean ehCacheManagerFactoryBean = new EhCacheManagerFactoryBean();
    // provide xml file for ehcache configuration
    ehCacheManagerFactoryBean.setConfigLocation(new ClassPathResource("spring-cache-abs-ehcache.xml"));
    return ehCacheManagerFactoryBean;
}

@Bean
public org.springframework.cache.CacheManager ehCacheCacheManager() {
    final EhCacheCacheManager ehCacheCacheManager = new EhCacheCacheManager(cacheManagerUsingSpringApi().getObject());
    ehCacheCacheManager.setTransactionAware(true); // setting transaction aware
    return ehCacheCacheManager;
}
spring-cache-abs-ehcache.xml,
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="ehcache.xsd"
updateCheck="true"
monitoring="autodetect"
dynamicConfig="true">
<cache name="EmployeeCache"
maxEntriesLocalHeap="10000"
eternal="false"
timeToIdleSeconds="300" timeToLiveSeconds="600"
memoryStoreEvictionPolicy="LFU"
transactionalMode="off">
<persistence strategy="localTempSwap" />
</cache>
</ehcache>
EmployeeRepository,
public interface EmployeeRepository extends JpaRepository<Employee, Long>, CustomEmployeeRepository {
}
Transactional Method,
@Repository
public class EmployeeRepositoryImpl implements CustomEmployeeRepository {

    @PersistenceContext
    private EntityManager entityManager;

    @Autowired
    private CacheManager cacheManager;

    // THIS METHOD SHOULD NOT PUT INTO CACHE WITH NEW NAME
    @Override
    @Transactional
    @Cacheable(cacheNames = "EmployeeCache", key = "#a0.id")
    public Employee customUpdate(Employee employee) {
        employee.setFirstName(UUID.randomUUID().toString());
        entityManager.merge(employee);
        // rolling back transaction
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        return employee;
    }
}
Test case(caller),
@Test
public void testCustomUpdate() {
    // GIVEN
    Employee employee = new Employee();
    employee.setFirstName(UUID.randomUUID().toString());
    employee.setLastName(UUID.randomUUID().toString());
    final Employee savedEmployee = employeeRepository.save(employee);

    // WHEN
    final Employee updatedEmployee = employeeRepository.customUpdate(savedEmployee);

    // THEN
    final Cache employeeCache = cacheManager.getCache("EmployeeCache");
    final Cache.ValueWrapper object = employeeCache.get(updatedEmployee.getId());
    assertNull(object);
}
The test should succeed, i.e. Spring should not put data into the cache in employeeRepository.customUpdate if the transaction was rolled back in that method.
But Spring puts data into the cache even if the transaction fails.
NOTE: The weird part is that if the entry already exists in the cache, then @CachePut does not update the entry if the transaction fails.
So if I annotate employeeRepository.save with @CachePut(cacheNames = "EmployeeCache", key = "#result.id"), the cache is not updated in the update call.
What is missing here?
Option 1: Use advice ordering.
Refer to:
https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#aop-ordering
The below declaration will cause the transactional advice to be executed first and then the cacheable advice.
<tx:annotation-driven order="0"/>
<cache:annotation-driven cache-manager="ehCacheManager" order="1"/>
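If you use Java configuration instead of XML, the same ordering can be expressed on the enabling annotations; here is a sketch, assuming annotation-driven configuration:
@Configuration
@EnableTransactionManagement(order = 0) // transactional advice runs first
@EnableCaching(order = 1)               // cacheable advice runs after it
public class AdviceOrderingConfiguration {
}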
Option 2: Annotate @Cacheable and @Transactional on different methods.
You will see the code working if you call the @Cacheable-annotated method from another method which is annotated with @Transactional. Both annotations, i.e. @Cacheable and @Transactional, on the same method won't help your cause unless you have advice ordering implemented. Please see below a crude implementation of your logic.
@Override
@Transactional
public Employee customUpdate(Employee employee) {
    Employee mergedEmployee = updateHelper(employee);
    // rolling back transaction
    TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
    return mergedEmployee;
}

@Cacheable(cacheNames = "EmployeeCache", key = "#a0.id")
public Employee updateHelper(Employee employee) {
    employee.setFirstName(UUID.randomUUID().toString());
    // on a merge(...) call on the entityManager, the returned item is the managed instance
    return entityManager.merge(employee);
}

How to set tableName dynamically using environment variable in spring boot?

I am using AWS ECS to host my application and DynamoDB for all database operations. So I'll have the same database with different table names for different environments, such as "dev_users" (for the dev env), "test_users" (for the test env), etc. (This is how our company uses the same DynamoDB account for different environments.)
So I would like to change the "tableName" of the model class using the environment variable passed through the "AWS ECS task definition" environment parameters.
For Example.
My Model Class is:
@DynamoDBTable(tableName = "dev_users")
public class User {
Now I need to replace "dev" with "test" when I deploy my container to the test environment. I know I can use
@Value("${DOCKER_ENV:dev}")
to access environment variables, but I'm not sure how to use variables outside the class. Is there any way that I can use the Docker env variable to select my table prefix?
My intent is to use it like this:
I know this is not possible as written, but is there any other way or workaround for this?
Edit 1:
I am working on Rahul's answer and facing some issues. Before writing up the issues, I'll explain the process I followed.
Process:
I have created the beans in my config class (com.myapp.users.config).
As I don't have repositories, I have given my model class package name as the "basePackage" path. (Please check the image.)
For 1) I have replaced the "table name over-rider bean injection" to avoid the error.
For 2) I printed the name that is passed on to this method, but it is null. So I am checking all possible ways to pass the value here.
Check the image for the error:
I haven't changed anything in my user model class, as the beans should replace the name of the DynamoDBTable when they are executed. But the table name overriding is not happening; data is still pulled from the table name given at the model class level only.
What am I missing here?
The table names can be altered via a customized DynamoDBMapperConfig bean.
For your case, where you have to prefix each table with a literal, you can add a bean like the one below. Here the prefix can be the environment name in your case.
@Bean
public TableNameOverride tableNameOverrider() {
    String prefix = ... // Use @Value to inject values via Spring or use any logic to define the table prefix
    return TableNameOverride.withTableNamePrefix(prefix);
}
For more details, see:
https://github.com/derjust/spring-data-dynamodb/wiki/Alter-table-name-during-runtime
I am able to achieve table names prefixed with the active profile name.
First, I added a TableNameResolver class as below:
@Component
public class TableNameResolver extends DynamoDBMapperConfig.DefaultTableNameResolver {

    private String envProfile;

    public TableNameResolver() {}

    public TableNameResolver(String envProfile) {
        this.envProfile = envProfile;
    }

    @Override
    public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
        String stageName = envProfile.concat("_");
        String rawTableName = super.getTableName(clazz, config);
        return stageName.concat(rawTableName);
    }
}
Then I set up the DynamoDBMapper bean as below:
@Bean
@Primary
public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB amazonDynamoDB) {
    DynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB,
            new DynamoDBMapperConfig.Builder().withTableNameResolver(new TableNameResolver(envProfile)).build());
    return mapper;
}
The variable envProfile holds the active profile value, accessed from the application.properties file:
@Value("${spring.profiles.active}")
private String envProfile;
We have the same issue with regard to the need to change table names at runtime. We are using spring-data-dynamodb 5.0.2 and the following configuration seems to provide the solution we need.
First, I annotated my bean accessor:
@EnableDynamoDBRepositories(dynamoDBMapperConfigRef = "getDynamoDBMapperConfig", basePackages = "my.company.base.package")
I also set up an environment variable called ENV_PREFIX, which is wired in via SpEL:
@Value("#{systemProperties['ENV_PREFIX']}")
private String envPrefix;
Then I set up a TableNameOverride bean:
@Bean
public DynamoDBMapperConfig.TableNameOverride getTableNameOverride() {
    return DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix(envPrefix);
}
Finally, I set up the DynamoDBMapperConfig bean using TableNameOverride injection. In 5.0.2, we had to set a standard DynamoDBTypeConverterFactory in the DynamoDBMapperConfig builder to avoid an NPE:
@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride) {
    DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
    builder.setTableNameOverride(tableNameOverride);
    builder.setTypeConverterFactory(DynamoDBTypeConverterFactory.standard());
    return builder.build();
}
In hindsight, I could have set up a DynamoDBTypeConverterFactory bean that returns a standard DynamoDBTypeConverterFactory and injected that into the getDynamoDBMapperConfig() method via the DynamoDBMapperConfig builder. But this will also do the job.
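That alternative wiring might look roughly like this (an untested sketch along the lines described above):
@Bean
public DynamoDBTypeConverterFactory typeConverterFactory() {
    return DynamoDBTypeConverterFactory.standard();
}

@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride,
        DynamoDBTypeConverterFactory typeConverterFactory) {
    DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
    builder.setTableNameOverride(tableNameOverride);
    builder.setTypeConverterFactory(typeConverterFactory);
    return builder.build();
}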
I upvoted the other answer, but here is an idea:
Create a base class with all your user details:
@MappedSuperclass
public abstract class AbstractUser {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String firstName;
    private String lastName;
}
Create two implementations with different table names and Spring profiles:
@Profile(value = {"dev", "default"})
@Entity(name = "dev_user")
public class DevUser extends AbstractUser {
}

@Profile(value = {"prod"})
@Entity(name = "prod_user")
public class ProdUser extends AbstractUser {
}
Create a single JPA repository that uses the mapped superclass:
public interface UserRepository extends CrudRepository<AbstractUser, Long> {
}
Then switch the implementation with the Spring profile:
@RunWith(SpringJUnit4ClassRunner.class)
@DataJpaTest
@Transactional
public class UserRepositoryTest {

    @Autowired
    protected DataSource dataSource;

    @BeforeClass
    public static void setUp() {
        System.setProperty("spring.profiles.active", "prod");
    }

    @Test
    public void test1() throws Exception {
        DatabaseMetaData metaData = dataSource.getConnection().getMetaData();
        ResultSet tables = metaData.getTables(null, null, "PROD_USER", new String[] { "TABLE" });
        tables.next();
        assertEquals("PROD_USER", tables.getString("TABLE_NAME"));
    }
}
