A very simple use case implemented using DDD and Java.
I have a FooEntity and a FooRepository. The entity has a delete method which validates certain state to check whether it is safe to delete and, if so, invokes delete on the repository, which is injected into the entity.
So far so good, but what happens if somebody invokes the delete method directly on the repository? Then the validation wouldn't be performed.
Placing the validation in the repository would solve the problem, but that would clearly be wrong, since it would require exposing the internal state of the entity.
What am I missing?
public class FooEntity {

    @Inject
    FooRepository fooRepository;

    private boolean canBeDeleted;

    public void delete() {
        if (canBeDeleted) {
            fooRepository.delete(this);
        } else {
            throw new CannotBeDeletedException();
        }
    }
}
public class FooRepository {

    @Inject
    FooDAO fooDAO;

    public void delete(FooEntity fooEntity) {
        fooDAO.delete(fooEntity.getId());
    }
}
Don't expose the internal state; expose a method like isDeletable() on the entity. The repository's delete can call entity.isDeletable() before deleting and raise an exception if you are trying to delete an entity that is not deletable. That way you separate the concerns: the entity has the domain knowledge of its "deletableness", while the repo knows how to delete the entity.
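A minimal sketch of that suggestion, reusing the names from the question (the entity owns the decision, the repository enforces it):

public class FooEntity {

    private boolean canBeDeleted;

    // The domain knowledge of "deletableness" stays inside the entity.
    public boolean isDeletable() {
        return canBeDeleted;
    }
}

public class FooRepository {

    @Inject
    FooDAO fooDAO;

    // The repository checks the entity's own verdict before deleting.
    public void delete(FooEntity fooEntity) {
        if (!fooEntity.isDeletable()) {
            throw new CannotBeDeletedException();
        }
        fooDAO.delete(fooEntity.getId());
    }
}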
The example code in the question is fine as is (except that it's strange to have a DAO inside a repository class, as "repository" is just a more abstract name for the same concept as the DAO).
You can't really prevent other developers from calling the wrong methods, except for using static analysis code inspections where available.
The repository should only concern itself with removing the given entity instance from the set of persistent entities. It cannot have logic for checking whether the entity is allowed to be deleted or not, even if the isDeletable() method is in the entity class.
I would put the delete functionality in a domain service.
public class FooService {

    @Inject
    FooRepository fooRepository;

    public void delete(Foo foo) {
        if ( /* insert validation stuff here to check if foo can be deleted */ ) {
            fooRepository.delete(foo);
        }
    }
}
The way I do it, though, is to typically use a ValueObject to represent an entity's identity. E.g.
public class FooId {

    private String fooId;

    public FooId(String fooId) {
        this.fooId = fooId;
    }
}
public class Foo {

    private FooId id;

    /* other properties */
}
I would then revise FooService to:
public class FooService {

    @Inject
    FooRepository fooRepository;

    public void delete(FooId fooId) {
        Foo foo = fooRepository.retrieve(fooId);
        if ( /* insert validation stuff here to check if foo can be deleted */ ) {
            fooRepository.delete(foo);
        }
    }
}
To delete a foo (assuming fooId was passed by a command from the UI):
fooService.delete(fooId);
I would not inject a FooRepository into a class that represents an entity. I don't think that is its rightful place. An entity, in my view, should not be able to create or delete itself. These functions should be in a Domain Service for that entity.
I have read a lot about this but can't come to a conclusion about the best way to test a method that depends on the results of other method calls to perform its actions.
Some of the questions I've read include:
Testing methods that depend on each other
Unit testing a method that calls other methods
Unit testing a method that calls another method
Some of the answers suggest that we should only test the methods that perform a single action, and then test the method that calls these methods for conditional behaviour (for example, verifying whether a given method was called or not). That's fine, I get it, but I'm struggling with another scenario.
I have a service with a REST api.
The controller has a create method that receives a DTO and calls the service class's create method with this DTO as its argument.
I'm trying to practice TDD, and for that I'm using this project, which I'm building without a database.
The code is as follows:
@Service
public class EntityService implements FilteringInterface {

    private MemoryDatabase db = MemoryDatabase.getInstance();

    // Create method called from the controller: receives a DTO to create a new
    // Entity after validating that its name and code parameters are unique
    public EntityDTO create(EntityDTO dto) throws Exception {
        validateUniqueFields(dto);
        Entity entity = Entity.toEntity(dto, "id1"); // maps DTO to Entity object
        db.add(entity);
        return new EntityDTO.Builder(entity).build(); // maps entity back to a DTO
    }

    public void validateUniqueFields(EntityDTO dto) throws Exception {
        Set<Entity> foundEntities = filterEntityByNameOrCode(dto.getName(),
                dto.getCode(), db.getEntities());
        if (!foundEntities.isEmpty()) {
            throw new Exception("Already exists");
        }
    }
}
This is the interface with methods reused by other service classes:
public interface FilteringInterface {

    default Set<Entity> filterEntityByNameOrCode(String name, String code, Set<Entity> list) {
        return list.stream()
                .filter(e -> e.getName().equals(name) || e.getCode().equals(code))
                .collect(Collectors.toSet());
    }

    default Optional<Entity> filterEntityById(String id, Set<Entity> list) {
        return list.stream().filter(e -> e.getId().equals(id)).findAny();
    }
}
So, I'm testing this service class and I need to test the create() method because it can have different behaviors:
If the received DTO has a name that already exists in the list of entities -> throws Exception
If the received DTO has a code that already exists in the list of entities -> throws Exception
If the received DTO has a name and a code that already exist in the list of entities -> throws Exception
If the name and code are unique, then everything is OK: the entity is created -> added to the existing list -> converted to a DTO and returned.
Problem:
To test any of the scenarios, take scenario 1: I need to make the filterEntityByNameOrCode() method return a set containing an Entity that has the same name as the entity I'm trying to create. This method is called inside the validateUniqueFields() method.
Problem is: I can't call Mockito's when() for any of these methods, because for that I would have to mock the service class, which is the very class I'm testing, and thus the wrong approach.
I've also read that using a Spy for this is also the wrong approach.
So, where does that leave me?
Also: if this code is not the correct approach, and that's why it can't be correctly tested, then what should the correct approach be?
This service will have other methods (delete, update, etc.). All of these methods will make use of the FilteringInterface as well, so I will have the same problems.
What is the correct way of testing a service class?
I would apply a DI pattern in your service, in order to mock and control the db variable.
@Service
public class EntityService implements FilteringInterface {

    private Persistence db;

    public EntityService(Persistence db) {
        this.db = db;
    }
}
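For completeness, a minimal sketch of the Persistence abstraction assumed above (the interface name and methods are illustrative; MemoryDatabase would implement it for production use):

public interface Persistence {

    // The two operations EntityService actually needs.
    Set<Entity> getEntities();

    void add(Entity entity);
}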
After that, you will be able to populate the Set according to your scenarios:
@ExtendWith(MockitoExtension.class)
class EntityServiceTest {

    @Mock
    private Persistence persistence;

    @InjectMocks
    private EntityService entityService;

    @BeforeEach
    void before() {
        final Set<Entity> existingEntities = Set.of(new Entity(1L, 1L, "name", "code"));
        when(persistence.getEntities()).thenReturn(existingEntities);
    }

    @Test
    void shouldThrowWhenNameAlreadyExists() {
        final EntityDTO dto = new EntityDTO(1L, "name", "anything");
        // create() throws a plain checked Exception, not a RuntimeException
        assertThrows(Exception.class, () -> entityService.create(dto));
    }

    @Test
    void shouldThrowWhenCodeAlreadyExists() {
        final EntityDTO dto = new EntityDTO(1L, "anything", "code");
        assertThrows(Exception.class, () -> entityService.create(dto));
    }

    @Test
    void shouldThrowWhenNameAndCodeAlreadyExist() {
        final EntityDTO dto = new EntityDTO(1L, "name", "code");
        assertThrows(Exception.class, () -> entityService.create(dto));
    }

    @Test
    void shouldNotThrowWhenUnique() throws Exception {
        final EntityDTO dto = new EntityDTO(1L, "diff", "diff");
        final EntityDTO entityDTO = entityService.create(dto);
        assertNotNull(entityDTO);
    }
}
I would like to have Documents stored with a UUID id and createdAt / updatedAt fields. My solution was working with Spring Boot 2.1.x. After I upgraded from Spring Boot 2.1.11.RELEASE to 2.2.0.RELEASE, my test for MongoAuditing failed with createdAt = null. What do I need to do to get the createdAt field populated again?
This is not just a test problem. I ran the application and it shows the same behaviour as my test: all auditing fields stay null.
I have a Configuration to enable MongoAuditing and UUID generation:
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public GenerateUUIDListener generateUUIDListener() {
        return new GenerateUUIDListener();
    }
}
The listener hooks into onBeforeConvert - I guess that's where the trouble starts.
public class GenerateUUIDListener extends AbstractMongoEventListener<IdentifiableEntity> {

    @Override
    public void onBeforeConvert(BeforeConvertEvent<IdentifiableEntity> event) {
        IdentifiableEntity entity = event.getSource();
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
    }
}
The document itself (I dropped the getters and setters):
@Document
public class MyDocument extends InsertableEntity {

    private String name;
}

public abstract class InsertableEntity extends IdentifiableEntity {

    @CreatedDate
    @JsonIgnore
    private Instant createdAt;
}

public abstract class IdentifiableEntity implements Persistable<UUID> {

    @Id
    private UUID id;

    @JsonIgnore
    public boolean isNew() {
        return getId() == null;
    }
}
A complete minimal example (including a test) can be found here: https://github.com/mab/auditable
With 2.1.11.RELEASE the test succeeds; with 2.2.0.RELEASE it fails.
For me the best solution was to switch from event-based UUID generation to a callback-based one. By implementing Ordered we can set the new callback to be executed after the AuditingEntityCallback.
public class IdEntityCallback implements BeforeConvertCallback<IdentifiableEntity>, Ordered {

    @Override
    public IdentifiableEntity onBeforeConvert(IdentifiableEntity entity, String collection) {
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
        return entity;
    }

    @Override
    public int getOrder() {
        return 101; // run after the AuditingEntityCallback (order 100)
    }
}
I registered the callback in the MongoConfiguration. For a more general solution you might want to take a look at how the AuditingEntityCallback is registered by the MongoAuditingBeanDefinitionParser.
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public IdEntityCallback registerCallback() {
        return new IdEntityCallback();
    }
}
MongoTemplate works in the following way in doInsert():
this.maybeEmitEvent - emits an event (onBeforeConvert, onBeforeSave and such) so any AbstractMappingEventListener can catch it and act upon it, like you did with GenerateUUIDListener
this.maybeCallBeforeConvert - calls the before-convert callbacks, such as the Mongo auditing one
as you can see in the source code of MongoTemplate.class, src (831-832):
protected <T> T doInsert(String collectionName, T objectToSave, MongoWriter<T> writer) {
    BeforeConvertEvent<T> event = new BeforeConvertEvent(objectToSave, collectionName);
    T toConvert = ((BeforeConvertEvent) this.maybeEmitEvent(event)).getSource(); // emit event
    toConvert = this.maybeCallBeforeConvert(toConvert, collectionName); // call some before-convert handlers
    ...
}
Mongo auditing sets createdAt only on new entities, by checking entity.isNew() == true.
Because your code already set the id (the UUID), the entity is not considered new, so createdAt is not populated.
You can do one of the following (ordered from best to worst):
forget about the UUID and use String for your id; let Mongo itself create and manage its entities' ids (this is how MongoTemplate actually works, lines 811-812)
keep the UUID at the code level, converting from/to String when inserting into and retrieving from the db
create a custom repository like in this post
stay with 2.1.11.RELEASE
have GenerateUUIDListener set the audit fields as well as the id (rename it NewEntityListener or something), basically implementing the auditing yourself
implement new isNew() logic that doesn't depend only on the entity id (a sketch follows below)
in version 2.1.11.RELEASE the order of these methods was flipped (MongoTemplate.class 804-805), which is why your code worked fine
as an abstract point, the nature of an event is to be fire-and-forget (async compatible), so it's very bad practice to mutate the object itself; there is NO guarantee about the order of computation, if any
this is why the auditing is built on callbacks and not events, and that's why Pivotal doesn't (need to) keep the order stable between versions
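As a sketch of the last option, here is an isNew() implementation that no longer depends on the id alone; it assumes a Spring Data @Version field, which stays null until the document has been written once:

public abstract class IdentifiableEntity implements Persistable<UUID> {

    @Id
    private UUID id;

    @Version
    private Long version; // null until the first save

    @Override
    public boolean isNew() {
        // New as long as the document was never persisted,
        // regardless of whether the UUID has already been assigned.
        return version == null;
    }

    public UUID getId() { return id; }

    public void setId(UUID id) { this.id = id; }
}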
I want to check my persistence logic and so I am running some test cases.
Repository class:
@Repository
public class MyRepository {

    public void add(Object obj) {
        /* Do some stuff here */
        getSession().persist(obj);
    }
}
Test class:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { Context.class })
public class MyTests {

    @Inject
    private MyRepository myRepository;

    @Test
    @Rollback(true)
    public void foo() {
        /* Test logic */
        myRepository.add(obj);
        Assert.assert...;
    }
}
The unit test class MyTests contains test cases which test some stuff related to persistence, but not the actual Hibernate persistence itself; that's why the getSession().persist() statement is obsolete.
For performance reasons, I want to prevent Hibernate from storing data in my database, even if the whole data interaction is rolled back. My approach would be to mock getSession().persist(). Is there a better, or more specifically an easier, way to achieve my intentions?
First of all, there are different id generators in Hibernate. If the identity generator is used (not all databases support it), then to assign an id to the entity when session.persist is called, an insert query has to be executed. But if, for example, a sequence or uuid generator is used, then the insert won't be triggered (at least not right away).
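To illustrate with a hypothetical entity, the generation strategy is what decides whether persist() must hit the database to obtain the id:

@Entity
public class Foo {

    // With GenerationType.IDENTITY the insert must run at persist() time,
    // because the id comes from the inserted row itself.
    // With GenerationType.SEQUENCE (shown here) the id comes from a sequence
    // call, and the insert can be deferred until the session is flushed.
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;
}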
After that, if session.get or session.load is called to load the object persisted in the current session, no select query is executed, because the object is taken from the Hibernate cache. But if HQL is used to select data, a select query is executed, and (by default) the insert query for the persisted object is flushed before it.
This can be changed with FlushMode. By default it is set to AUTO, which means:
The Session is sometimes flushed before query execution in order to
ensure that queries never return stale state.
But if getSession().setHibernateFlushMode(FlushMode.MANUAL) is set:
The Session is only ever flushed when Session.flush() is explicitly
called by the application.
This means the insert query won't be executed until session.flush is called explicitly. If session.get and session.load are used afterwards, your code will still work (in the current session). But a select HQL query won't find the entity, since it was never flushed. So beware.
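As a minimal sketch of how that could look in the test, assuming the test class can reach the Hibernate Session (for example through an injected SessionFactory):

@Test
@Rollback(true)
public void foo() {
    Session session = sessionFactory.getCurrentSession();
    session.setHibernateFlushMode(FlushMode.MANUAL); // queue writes, never auto-flush

    myRepository.add(obj); // persist() only registers obj in the persistence context

    // No session.flush() call, so no INSERT is ever sent to the database;
    // the rollback at the end of the test has nothing to undo.
}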
Create an interface, implement it using the Hibernate persist() method, and use it in such a way that:
the normal calls go through the implementation
the test calls go through a mock version of it
public interface MyRepository {
    public void add(Object obj);
}

public class MyRepositoryImpl implements MyRepository {

    public void add(Object obj) {
        getSession().persist(obj);
    }
}
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { Context.class })
public class MyTests {

    @Mock // we inject the mock instead of the true implementation
    private MyRepository myRepository;

    @Test
    @Rollback(true)
    public void foo() {
        /* Test logic */
        myRepository.add(obj); // the test uses the mocked version
        Assert.assert...;
    }
}
There are many Java libraries that let you mock objects, e.g.
Mockito
JMock
EasyMock
You need to be able to mock your repository object so that you can use the mock in tests, and use the real one in the rest of your application.
DAO:
#Repository(value="MockRepo")
public class MockMyRepositoryImpl implments MyRepository {
#Override
public void add(Foo foo) {
//Do nothing here
}
}
Test:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { Context.class })
public class MyTests {

    @Autowired
    @Qualifier("MockRepo")
    private MyRepository repo;

    @Test
    public void testFooSave() {
        repo.add(obj);
    }
}
The alternative is to use a mocking framework as detailed in another answer. Mocking frameworks are more flexible, but if you want something simple that's just going to work then try the above.
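For comparison, a minimal Mockito-based sketch using the same names as above; the mock replaces the hand-written stub and additionally lets you verify the interaction:

@RunWith(MockitoJUnitRunner.class)
public class MyMockitoTests {

    @Mock
    private MyRepository repo; // Mockito generates a do-nothing implementation

    @Test
    public void testFooSave() {
        Object obj = new Object();
        repo.add(obj);                 // nothing is persisted
        Mockito.verify(repo).add(obj); // but the call can still be asserted
    }
}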
I'm not sure where to open my Transaction object. Inside the service layer? Or the controller layer?
My Controller basically has two services, let's call them AService and BService. Then my code goes something like:
public class Controller {

    public AService aService = new AService();
    public BService bService = new BService();

    public void doSomething(SomeData data) {
        // Transaction transaction = HibernateUtil.getSession().openTransaction();
        if (data.getSomeCondition()) {
            aService.save(data.getSomeVar1());
            bService.save(data.getSomeVar2());
        } else {
            bService.save(data.getSomeVar2());
        }
        // transaction.commit(); or optional try-catch with rollback
    }
}
The behavior I want is that if bService#save fails, then I could invoke a transaction#rollback so that whatever was saved in aService would be rolled back as well. This only seems possible if I create one single transaction for both saves.
But looking at it from a different perspective, it looks really ugly that my Controller depends on the Transaction. It would be better to create the Transaction inside the respective services (something like how Spring's @Transactional works), but if I do it that way, then I don't know how to achieve what I want to happen...
EDIT: Fixed code, added another condition. I am not using any Spring dependencies, so the usage of @Transactional is out of the question.
You can accomplish what you're asking with another layer of abstraction and using composition.
public class CompositeABService {

    @Autowired
    private AService aservice;

    @Autowired
    private BService bservice;

    @Transactional
    public void save(Object value1, Object value2) {
        aservice.save(value1);
        bservice.save(value2);
    }
}
public class AService {

    @Transactional
    public void save(Object value) {
        // joins an existing transaction if one exists, creates a new one otherwise
    }
}

public class BService {

    @Transactional
    public void save(Object value) {
        // joins an existing transaction if one exists, creates a new one otherwise
    }
}
This same pattern is typically used when you need to interact with multiple repositories as a part of a single unit of work (e.g. transaction).
Now all your controller needs to depend upon is CompositeABService or whatever you wish to name it.
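A minimal sketch of the controller after this refactoring (field injection shown for brevity); the transaction boundary no longer leaks into it:

public class Controller {

    @Autowired
    private CompositeABService compositeABService;

    @Autowired
    private BService bService;

    public void doSomething(SomeData data) {
        if (data.getSomeCondition()) {
            // Both saves succeed or both roll back, inside one transaction.
            compositeABService.save(data.getSomeVar1(), data.getSomeVar2());
        } else {
            bService.save(data.getSomeVar2());
        }
    }
}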
I have a number of simple object types that need to be persisted to a database. I am using Spring JPA to manage this persistence. For each object type I need to build the following:
import org.springframework.data.jpa.repository.JpaRepository;

public interface FacilityRepository extends JpaRepository<Facility, Long> {
}

public interface FacilityService {
    Facility create(Facility facility);
}

@Service
public class FacilityServiceImpl implements FacilityService {

    @Resource
    private FacilityRepository facilityRepository;

    @Transactional
    public Facility create(Facility facility) {
        Facility created = facility;
        return facilityRepository.save(created);
    }
}
It occurred to me that it may be possible to replace the multiple classes for each object type with three generics-based classes, thus saving a lot of boilerplate coding. I am not exactly sure how to go about it, and in fact whether it is a good idea at all?
First of all, I know we're raising the bar here quite a bit but this is already tremendously less code than you had to write without the help of Spring Data JPA.
Second, I think you don't need the service class in the first place, if all you do is forward a call to the repository. We recommend using services in front of the repositories if you have business logic that needs orchestration of different repositories within a transaction or has other business logic to encapsulate.
Generally speaking, you can of course do something like this:
interface ProductRepository<T extends Product> extends CrudRepository<T, Long> {

    @Query("select p from #{#entityName} p where ?1 member of p.categories")
    Iterable<T> findByCategory(String category);

    Iterable<T> findByName(String name);
}
This will allow you to use the repository on the client side like this:
class MyClient {

    @Autowired
    public MyClient(ProductRepository<Car> carRepository,
                    ProductRepository<Wine> wineRepository) { … }
}
and it will work as expected. However there are a few things to notice:
This only works if the domain classes use single table inheritance. The only information about the domain class we can get at bootstrap time is that it will be Product objects. So for methods like findAll() and even findByName(…) the relevant queries will start with select p from Product p where…. This is due to the fact that the reflection lookup will never ever be able to produce Wine or Car unless you create a dedicated repository interface for it to capture the concrete type information.
Generally speaking, we recommend creating repository interfaces per aggregate root. This means you don't have a repo for every domain class per se. Even more important, a 1:1 abstraction of a service over a repository is completely missing the point as well. If you build services, you don't build one for every repository (a monkey could do that, and we're no monkeys, are we? ;). A service exposes a higher-level API, is much more use-case driven, and usually orchestrates calls to multiple repositories.
Also, if you build services on top of repositories, you usually want to force clients to use the service instead of the repository (a classical example is that a service for user management also triggers password generation and encryption, so it would by no means be a good idea to let developers use the repository directly, as they'd effectively work around the encryption). So you usually want to be selective about who can persist which domain objects, to avoid creating dependencies all over the place.
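To make the user-management example concrete, a hypothetical sketch (the names and the PasswordEncoder collaborator are illustrative, not part of the original answer):

@Service
public class UserManagement {

    private final UserRepository repository;
    private final PasswordEncoder encoder;

    public UserManagement(UserRepository repository, PasswordEncoder encoder) {
        this.repository = repository;
        this.encoder = encoder;
    }

    // Clients are meant to go through this method; handing them the
    // repository directly would let them skip the encoding step.
    public void register(User user, String rawPassword) {
        user.setPassword(encoder.encode(rawPassword));
        repository.save(user);
    }
}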
Summary
Yes, you can build generic repositories and use them with multiple domain types, but there are quite strict technical limitations. Still, from an architectural point of view, the scenario you describe above shouldn't even pop up, as it means you're facing a design smell anyway.
This is very possible! I am probably very late to the party. But this will certainly help someone in the future. Here is a complete solution that works like a charm!
Create a base entity class for your entities as follows:
@MappedSuperclass
public class AbstractBaseEntity implements Serializable {

    @Id @GeneratedValue
    private Long id;

    @Version
    private int version;

    private LocalDateTime createdAt;
    private LocalDateTime updatedAt;

    public AbstractBaseEntity() {
        this.createdAt = LocalDateTime.now();
        this.updatedAt = LocalDateTime.now();
    }

    // getters and setters
}
Create a generic JPA repository interface for your DAO persistence as follows.
NB: Remember to add @NoRepositoryBean so that JPA will not try to find an implementation for the repository!
@NoRepositoryBean
public interface AbstractBaseRepository<T extends AbstractBaseEntity, ID extends Serializable>
        extends JpaRepository<T, ID> {
}
Create a base service interface that uses the above base JPA repository. This is the one that the other service interfaces in your domain will simply extend, as follows:
public interface AbstractBaseService<T extends AbstractBaseEntity, ID extends Serializable> {

    T save(T entity);
    List<T> findAll(); // you might want a generic Collection if you prefer
    Optional<T> findById(ID entityId);
    T update(T entity);
    T updateById(T entity, ID entityId);
    void delete(T entity);
    void deleteById(ID entityId);
    // other methods you might need to be generic
}
Then create an abstract implementation of the base service, in which the basic CRUD methods are also given their implementations, as follows:
@Service
@Transactional
public abstract class AbstractBaseServiceImpl<T extends AbstractBaseEntity, ID extends Serializable>
        implements AbstractBaseService<T, ID> {

    private AbstractBaseRepository<T, ID> abstractBaseRepository;

    @Autowired
    public AbstractBaseServiceImpl(AbstractBaseRepository<T, ID> abstractBaseRepository) {
        this.abstractBaseRepository = abstractBaseRepository;
    }

    @Override
    public T save(T entity) {
        return abstractBaseRepository.save(entity);
    }

    @Override
    public List<T> findAll() {
        return abstractBaseRepository.findAll();
    }

    @Override
    public Optional<T> findById(ID entityId) {
        return abstractBaseRepository.findById(entityId);
    }

    @Override
    public T update(T entity) {
        return abstractBaseRepository.save(entity);
    }

    @Override
    public T updateById(T entity, ID entityId) {
        Optional<T> optional = abstractBaseRepository.findById(entityId);
        if (optional.isPresent()) {
            return abstractBaseRepository.save(entity);
        } else {
            return null;
        }
    }

    @Override
    public void delete(T entity) {
        abstractBaseRepository.delete(entity);
    }

    @Override
    public void deleteById(ID entityId) {
        abstractBaseRepository.deleteById(entityId);
    }
}
How to use the above abstract entity, service, repository, and implementation:
The example here will be a MyDomain entity. Create a domain entity that extends AbstractBaseEntity as follows.
NB: id, createdAt, updatedAt, version, etc. will automatically be included in the MyDomain entity from AbstractBaseEntity.
@Entity
public class MyDomain extends AbstractBaseEntity {

    private String attribute1;
    private String attribute2;

    // getters and setters
}
Then create a repository for the MyDomain entity that extends the AbstractBaseRepository as follows:
@Repository
public interface MyDomainRepository extends AbstractBaseRepository<MyDomain, Long> {
}
Also, Create a service interface for the MyDomain entity as follows:
public interface MyDomainService extends AbstractBaseService<MyDomain, Long> {
}
Then provide an implementation for the MyDomain entity that extends the AbstractBaseServiceImpl implementation as follows:
@Service
@Transactional
public class MyDomainServiceImpl extends AbstractBaseServiceImpl<MyDomain, Long>
        implements MyDomainService {

    private MyDomainRepository myDomainRepository;

    public MyDomainServiceImpl(MyDomainRepository myDomainRepository) {
        super(myDomainRepository);
        this.myDomainRepository = myDomainRepository;
    }

    // other specialized methods from the MyDomainService interface
}
Now use your `MyDomainService` service in your controller as follows:
@RestController // or @Controller
@CrossOrigin
@RequestMapping(value = "/")
public class MyDomainController {

    private final MyDomainService myDomainService;

    @Autowired
    public MyDomainController(MyDomainService myDomainService) {
        this.myDomainService = myDomainService;
    }

    @GetMapping
    public List<MyDomain> getMyDomains() {
        return myDomainService.findAll();
    }

    // other controller methods
}
NB: Make sure that the AbstractBaseRepository is annotated with @NoRepositoryBean so that JPA does not try to find an implementation for the bean.
Also, the AbstractBaseServiceImpl must be marked abstract; otherwise Spring will try to autowire all the child repositories of AbstractBaseRepository into the constructor of the class, leading to a NoUniqueBeanDefinitionException, since more than one repository bean would match when the bean is created!
Now your service, repository, and implementations are more reusable. We all hate boilerplate!
Hope this helps someone.
I am working on a project to create a generic repository for Cassandra with Spring Data.
First, create a repository interface in code:
StringBuilder sourceCode = new StringBuilder();
sourceCode.append("import org.springframework.boot.autoconfigure.security.SecurityProperties.User;\n");
sourceCode.append("import org.springframework.data.cassandra.repository.AllowFiltering;\n");
sourceCode.append("import org.springframework.data.cassandra.repository.Query;\n");
sourceCode.append("import org.springframework.data.repository.CrudRepository;\n");
sourceCode.append("\n");
sourceCode.append("public interface TestRepository extends CrudRepository<Entity, Long> {\n");
sourceCode.append("}");
Compile the code and get the class. I use org.mdkt.compiler.InMemoryJavaCompiler:
ClassLoader classLoader = org.springframework.util.ClassUtils.getDefaultClassLoader();
InMemoryJavaCompiler compiler = InMemoryJavaCompiler.newInstance();
compiler.useParentClassLoader(classLoader);
Class<?> testRepository = compiler.compile("TestRepository", sourceCode.toString());
And initialize the repository at Spring Data runtime. This was a little tricky, as I had to debug the Spring Data code to find out how it initializes a repository interface in Spring.
CassandraSessionFactoryBean bean = context.getBean(CassandraSessionFactoryBean.class);
RepositoryFragments repositoryFragmentsToUse = (RepositoryFragments) Optional.empty().orElseGet(RepositoryFragments::empty);
CassandraRepositoryFactory factory = new CassandraRepositoryFactory(
new CassandraAdminTemplate(bean.getObject(), bean.getConverter()));
factory.setBeanClassLoader(compiler.getClassloader());
Object repository = factory.getRepository(testRepository, repositoryFragmentsToUse);
Now you can invoke the repository's save method via reflection, and you can invoke other methods such as findById the same way:
Method method = repository.getClass().getMethod("save", paramTypes);
T obj = (T) method.invoke(repository, params.toArray());
Full sample code and an implementation can be found in this repo:
https://github.com/maye-msft/generic-repository-springdata
You can extend it to JPA with similar logic.
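As a rough sketch of that JPA variant, assuming the same compiled TestRepository interface and an available EntityManager (JpaRepositoryFactory plays the role that CassandraRepositoryFactory plays above):

EntityManager entityManager = context.getBean(EntityManager.class);

JpaRepositoryFactory factory = new JpaRepositoryFactory(entityManager);
factory.setBeanClassLoader(compiler.getClassloader());

// Same reflective usage as in the Cassandra example.
Object repository = factory.getRepository(testRepository);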