I have a method that uses Querydsl for query generation.
public List<EntityDAO> getObject() {
    QEntity entity = QEntity.entity;
    JPAQueryFactory queryFactory = getJPAQueryFactory();
    // predicateBuilder, order, rowCount and pageId are presumably fields
    // or parameters of the enclosing class
    JPAQuery<EntityDAO> query = queryFactory
            .select(Projections.bean(EntityDAO.class,
                    entity.propertyA,
                    entity.propertyB.count().as("count")))
            .from(entity)
            .where(predicateBuilder.build())
            .groupBy(entity.propertyA)
            .orderBy(order)
            .limit(rowCount)
            .offset(pageId * rowCount);
    return query.fetch();
}
How can I test this method using Mockito?
This method is in the data access layer. For testing data access layer code, it is best to test against an actual DB instance rather than a mock, because in the end you still need to verify that it can really get the data from the actual database correctly, and this is the most appropriate place to do it.
You can check out Testcontainers. It lets you test against a containerised instance of your DB. After starting the DB container, you simply load some test data into the related tables, call this method, and verify the correctness of the returned data directly.
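A minimal sketch of such a test, assuming JUnit 5, the Testcontainers MySQL module, and a hypothetical EntityRepository class wrapping the method above:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.MySQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class EntityRepositoryIT {

    // a throwaway MySQL instance started for this test class
    @Container
    static final MySQLContainer<?> MYSQL = new MySQLContainer<>("mysql:8.0");

    @Test
    void getObjectGroupsAndCounts() {
        // Point your persistence unit at MYSQL.getJdbcUrl(), MYSQL.getUsername()
        // and MYSQL.getPassword(), insert a few known rows, then:
        EntityRepository repository = newRepositoryFor(MYSQL); // hypothetical wiring helper
        List<EntityDAO> result = repository.getObject();
        assertEquals(2, result.size()); // should match the test rows inserted above
    }
}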
Related
I have some functional units of transactional work within service(s) which involve multiple calls to different DAOs (using JdbcTemplate or NamedParameterJdbcTemplate) that return sequence-generated IDs used to run the next select/insert, or that run some void insertion, in the order of the calls. Each DAO call represents a SQL statement, which requires a round trip to the database. For example:
@Transactional
public void create() {
    playerService.findByUserName(player.getPlayerId());
    roleService.insertRoleForPlayer(player.getId(), ROLE_MANAGER);
    leagueScoringService.createLeagueScoringForLeague(leagueScoring);
    leagueService.insertPlayerLeague(player, code);

    Standing playerStanding = new Standing();
    playerStanding.setPlayerId(player.getId());
    playerStanding.setPlayerUserName(player.getPlayerId());
    playerStanding.setLeagueId(leagueId);
    standingService.insertStandings(ImmutableList.of(playerStanding));
}
Preferably I want to batch up the calls, or at least ensure they all get executed together against the DB. Note that I do not use Hibernate and I don't want to either. What is the best approach for this?
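For illustration only (this is not the asker's code): with plain JdbcTemplate, batching a set of repeated inserts could look roughly like the sketch below. The table and column names and Standing's getters are assumptions.

// Illustration only: batching the standings inserts with plain JdbcTemplate.
// Table/column names and Standing's getters are assumed for the sketch.
public void insertStandingsBatched(JdbcTemplate jdbcTemplate, List<Standing> standings) {
    jdbcTemplate.batchUpdate(
            "INSERT INTO standing (player_id, league_id) VALUES (?, ?)",
            standings,
            100, // rows per JDBC batch
            (ps, standing) -> {
                ps.setLong(1, standing.getPlayerId());
                ps.setLong(2, standing.getLeagueId());
            });
}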
So I have a JPA entity (let's say Foo) for which there's a FooRepository defined as an extension of CrudRepository<Foo, Long>. The repository has a few custom methods, and among them is a method (let's say initFoo) that maps to a stored procedure via the @Procedure annotation. Now in the service layer there is a method that looks pretty much like this (heavy oversimplification):
Foo f = new Foo();
f.setId(5L);
f.setName("Bar");
fooRepository.save(f);
fooRepository.initFoo(f.getId());
Calling this method results in an error from the stored procedure. Upon closer inspection (constraint violation: key foo_id=5 does not exist) it appears that the entity Foo doesn't end up in the database right after FooRepository.save() completes. Most probably the EntityManager decides there is no rush and keeps the entity in memory/cache.
The question is: how do I convince the EM to flush that particular entity to the DB? I'd like to avoid wiring up the EntityManager in the service layer and calling flush() directly. I've tried annotating the stored-procedure method with @Modifying, but it appears that only works with @Query methods. Is there any sane way to resolve this?
Spring Boot (with spring-boot-starter-data-jpa) 1.3.3.RELEASE
Instead of using CrudRepository you can use JpaRepository, which provides a saveAndFlush() method.
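For example, a minimal sketch (the interface is reconstructed from the question; the stored-procedure name "init_foo" is an assumption):

// Sketch: FooRepository now extends JpaRepository so the service can force
// an immediate flush; the procedure name "init_foo" is assumed.
public interface FooRepository extends JpaRepository<Foo, Long> {

    @Procedure("init_foo")
    void initFoo(Long id);
}

// In the service layer:
Foo f = new Foo();
f.setId(5L);
f.setName("Bar");
fooRepository.saveAndFlush(f); // the INSERT hits the DB before the next call
fooRepository.initFoo(f.getId());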
I was following an example showing how to build an OData service with Olingo in Java (a Maven project). The provided example doesn't have any database interaction; it uses a Storage class which contains hard-coded data.
You can find the sample code on Git; please refer to the p0_all example at the provided URL.
Does anyone know how we can connect the Git example to a database and, furthermore, perform CRUD operations?
Please do help me with some good examples or concepts.
Thanks in advance.
I recently built an OData producer using Olingo and found myself similarly frustrated. I think part of the issue is that there really are a lot of different ways to build an OData service with Olingo, and the data access piece is entirely up to the developer to sort out in their own project.
Firstly, you need an application that has a database connection set up. So, completely disregarding Olingo, you should have an app that connects to and can query a database. If you are uncertain how to build a Java application that can query a MySQL datasource, then you should Google around for tutorials related to that problem, which has nothing to do with Olingo.
Next you need to write the methods and queries to perform CRUD operations in your application. Again, these methods have nothing to do with Olingo.
Where Olingo starts to come into play is in your implementation of the processor classes: EntityCollectionProcessor, EntityProcessor, etc. (Note that there are other concerns, such as setting up your CsdlEntityTypes and the Schema/Service Document, but those are outside the scope of your question.)
Let's start by looking at EntityCollectionProcessor. By implementing the EntityCollectionProcessor interface you need to override the readEntityCollection() function. The purpose of this function is to parse the OData URI for the entity name, fetch an EntityCollection for that entity, and then serialize the EntityCollection into an OData-compliant response. Here's the implementation of readEntityCollection() from your example link:
public void readEntityCollection(ODataRequest request, ODataResponse response,
        UriInfo uriInfo, ContentType responseFormat)
        throws ODataApplicationException, SerializerException {

    // 1st: retrieve the requested EntitySet from the uriInfo object
    // (representation of the parsed service URI)
    List<UriResource> resourcePaths = uriInfo.getUriResourceParts();
    UriResourceEntitySet uriResourceEntitySet = (UriResourceEntitySet) resourcePaths.get(0);
    // in our example, the first segment is the EntitySet
    EdmEntitySet edmEntitySet = uriResourceEntitySet.getEntitySet();

    // 2nd: fetch the data from the backend for this requested EntitySetName;
    // it has to be delivered as an EntityCollection object
    EntityCollection entitySet = getData(edmEntitySet);

    // 3rd: create a serializer based on the requested format (json)
    ODataSerializer serializer = odata.createSerializer(responseFormat);

    // 4th: serialize the content: transform from the EntitySet object to an InputStream
    EdmEntityType edmEntityType = edmEntitySet.getEntityType();
    ContextURL contextUrl = ContextURL.with().entitySet(edmEntitySet).build();

    final String id = request.getRawBaseUri() + "/" + edmEntitySet.getName();
    EntityCollectionSerializerOptions opts =
            EntityCollectionSerializerOptions.with().id(id).contextURL(contextUrl).build();
    SerializerResult serializerResult =
            serializer.entityCollection(serviceMetadata, edmEntityType, entitySet, opts);
    InputStream serializedContent = serializerResult.getContent();

    // Finally: configure the response object: set the body, headers and status code
    response.setContent(serializedContent);
    response.setStatusCode(HttpStatusCode.OK.getStatusCode());
    response.setHeader(HttpHeader.CONTENT_TYPE, responseFormat.toContentTypeString());
}
You can ignore (and reuse) everything in this example except for the "2nd" step:
EntityCollection entitySet = getData(edmEntitySet);
This line of code is where Olingo finally starts to interact with our underlying system, and the pattern that we see here informs how we should set up the rest of our CRUD operations.
The function getData(edmEntitySet) can be anything you want, in any class you want. The only restriction is that it must return an EntityCollection. So what you need to do is call a function that queries your MySQL database and returns all records for the given entity (using the string name of the entity). Then, once you have a List or Set (or whatever) of your records, you need to convert it to an EntityCollection.
As an aside, I think this is probably where the disconnect between the Olingo examples and real-world applications comes from. The code behind that getData(edmEntitySet) call can be architected in infinitely many different ways, depending on the design pattern used in the underlying system (MVC etc.), styling choices, scalability requirements and so on.
Here's an example of how I created an EntityCollection from a List returned by my query (keep in mind that I'm assuming you know how to query your MySQL datasource and have already coded a function that retrieves all records for a given entity):
private List<Foo> getAllFoos(){
    // ... code that queries the datasource and retrieves all Foo records
}

// loop over List<Foo>, converting each instance of Foo into an Olingo Entity
private EntityCollection makeEntityCollection(List<Foo> fooList){
    EntityCollection entitySet = new EntityCollection();
    for (Foo foo : fooList){
        entitySet.getEntities().add(createEntity(foo));
    }
    return entitySet;
}

// convert an instance of the Foo object into an Olingo Entity
private Entity createEntity(Foo foo){
    Entity tmpEntity = new Entity()
            .addProperty(createPrimitive(Foo.FIELD_ID, foo.getId()))
            .addProperty(createPrimitive(Foo.FIELD_FOO_NAME, foo.getFooName()));
    return tmpEntity;
}
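createPrimitive() is a small helper; a minimal sketch of it, assuming Olingo's commons-api Property and ValueType classes, looks like this:

// Minimal sketch of the createPrimitive helper referenced above:
private Property createPrimitive(final String name, final Object value) {
    return new Property(null, name, ValueType.PRIMITIVE, value);
}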
Just for added clarity, getData(edmEntitySet) might look like this (taking the EdmEntitySet, to match the call in readEntityCollection() above):
public EntityCollection getData(EdmEntitySet edmEntitySet){
    // ... code to determine which query to call based on the entity name,
    // e.g. by switching on edmEntitySet.getName()
    List<Foo> foos = getAllFoos();
    EntityCollection entitySet = makeEntityCollection(foos);
    return entitySet;
}
If you can find an Olingo example that uses a DataProvider class, there are some basic examples of how you might set up the "code to determine which query to call based on entity name" part. I ended up modifying that pattern heavily using Java reflection, but that is totally unrelated to your question.
So getData(edmEntitySet) is a function that takes an entity set, queries the datasource for all records of that entity (returning a List<Foo>), and then converts that List<Foo> into an EntityCollection. The EntityCollection is made by calling the createEntity() function, which takes an instance of my Foo object and turns it into an Olingo Entity. The EntityCollection is then returned to the readEntityCollection() function, where it can be properly serialized and returned as an OData response.
This example exposes a bit of the architecture problem Olingo has with its own examples. In my example, Foo is an object that has constants used to identify the field names, which Olingo uses to generate the OData Schema and Service Document. This object has a method to return its own CsdlEntityType, as well as a constructor, its own properties, getters/setters, etc. You don't have to set your system up this way, but for the scalability requirements of my project this is how I chose to do things.
This is the general pattern Olingo uses: override the methods of an interface, then call functions in a separate part of your system that interact with your data in the desired manner, and convert the results into Olingo-readable objects so Olingo can do whatever "OData stuff" needs to be done in the response. If you want to implement CRUD for a single entity, you need to implement EntityProcessor and its various CRUD methods, and inside those methods call the functions in your system (totally separate from any Olingo code) that create(), read() (a single entity), update(), or delete().
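A rough skeleton of that, sketched against Olingo's V4 server API (the method bodies are placeholders for your own data-layer calls):

import org.apache.olingo.commons.api.format.ContentType;
import org.apache.olingo.server.api.OData;
import org.apache.olingo.server.api.ODataApplicationException;
import org.apache.olingo.server.api.ODataLibraryException;
import org.apache.olingo.server.api.ODataRequest;
import org.apache.olingo.server.api.ODataResponse;
import org.apache.olingo.server.api.ServiceMetadata;
import org.apache.olingo.server.api.processor.EntityProcessor;
import org.apache.olingo.server.api.uri.UriInfo;

public class DemoEntityProcessor implements EntityProcessor {

    private OData odata;
    private ServiceMetadata serviceMetadata;

    @Override
    public void init(OData odata, ServiceMetadata serviceMetadata) {
        this.odata = odata;
        this.serviceMetadata = serviceMetadata;
    }

    @Override
    public void readEntity(ODataRequest request, ODataResponse response,
            UriInfo uriInfo, ContentType responseFormat)
            throws ODataApplicationException, ODataLibraryException {
        // parse the key from uriInfo, call your own read() query,
        // convert the record to an Entity and serialize it into the response
    }

    @Override
    public void createEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
            ContentType requestFormat, ContentType responseFormat)
            throws ODataApplicationException, ODataLibraryException {
        // deserialize the request body, call your own create() insert,
        // then serialize the newly created Entity into the response
    }

    @Override
    public void updateEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo,
            ContentType requestFormat, ContentType responseFormat)
            throws ODataApplicationException, ODataLibraryException {
        // parse the key, deserialize the changes, call your own update()
    }

    @Override
    public void deleteEntity(ODataRequest request, ODataResponse response, UriInfo uriInfo)
            throws ODataApplicationException {
        // parse the key and call your own delete()
    }
}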
I am trying to write a unit test for a DAO class using Mockito. I have written some unit tests before, but not for a DAO class backed by a database (in this case JDBC and MySQL).
I decided to start with this simple method, but I do not know what the good practices are or how to start.
I do not know if this matters here, but the project uses the Spring Framework.
public class UserProfilesDao extends JdbcDaoSupport {

    @Autowired
    private MessageSourceAccessor msa;

    public long getUserId(long userId, int serviceId) {
        String sql = msa.getMessage("sql.select.service_user_id");
        Object[] params = new Object[] { userId, serviceId };
        int[] types = new int[] { Types.INTEGER, Types.INTEGER };
        return getJdbcTemplate().queryForLong(sql, params, types);
    }
}
If you really want to test the DAO, create an in-memory database. Fill it with the expected values, execute the query within the DAO, and check that the result is correct for the values previously inserted into the database.
Mocking the Connection, ResultSet and PreparedStatement is too heavyweight, and the results are not meaningful, because you are not accessing a real DB.
Note: for this approach your in-memory database should use the same dialect as your physical database, so don't use functions or syntax specific to the final database; try to follow the SQL standard.
If you use an in-memory database you are "mocking" the whole database, so the resulting test is not a real unit test, but it is not an integration test either. Use a tool like DBUnit to easily configure and fill your database if you like this approach.
Consider that mocking the database classes (PreparedStatement, Statement, ResultSet, Connection) is a long process, and you have no guarantee that it works as expected, because you are not verifying that your SQL is actually valid against a SQL engine.
You can also take a look at an article by Lasse Koskela about unit testing DAOs.
To test the DAO you need to:
Empty the database (not necessary for an in-memory DB)
Fill the database with example data (automatic with DBUnit, done in the @BeforeClass or @Before method)
Run the test (with JUnit)
If you like to formally separate real unit tests from integration tests, you can move the DAO tests to a separate directory and run them only when needed, as part of the integration tests. A sketch of such a test follows the compatibility list below.
A possible in-memory database that has different compatibility modes is H2, which offers the following database compatibility modes:
IBM DB2
Apache Derby
HSQLDB
MS SQL Server
MySQL
Oracle
PostgreSQL
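For instance, here is a sketch of a DAO test against H2 in MySQL compatibility mode, using Mockito only for the MessageSourceAccessor (the table, column and message key contents are invented for the sketch):

import static org.junit.Assert.assertEquals;

import org.junit.Before;
import org.junit.Test;
import org.mockito.Mockito;
import org.springframework.context.support.MessageSourceAccessor;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.test.util.ReflectionTestUtils;

public class UserProfilesDaoTest {

    private UserProfilesDao dao;

    @Before
    public void setUp() {
        // in-memory H2 pretending to be MySQL
        DriverManagerDataSource dataSource = new DriverManagerDataSource(
                "jdbc:h2:mem:test;MODE=MySQL;DB_CLOSE_DELAY=-1", "sa", "");
        JdbcTemplate jdbc = new JdbcTemplate(dataSource);
        jdbc.execute("DROP TABLE IF EXISTS service_user");
        jdbc.execute("CREATE TABLE service_user (service_user_id BIGINT, user_id BIGINT, service_id INT)");
        jdbc.update("INSERT INTO service_user VALUES (42, 1, 7)");

        // the only mock: the accessor that resolves the SQL by message key
        MessageSourceAccessor msa = Mockito.mock(MessageSourceAccessor.class);
        Mockito.when(msa.getMessage("sql.select.service_user_id")).thenReturn(
                "SELECT service_user_id FROM service_user WHERE user_id = ? AND service_id = ?");

        dao = new UserProfilesDao();
        dao.setDataSource(dataSource);
        ReflectionTestUtils.setField(dao, "msa", msa);
    }

    @Test
    public void getUserIdReturnsTheMatchingRow() {
        assertEquals(42L, dao.getUserId(1L, 7));
    }
}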
We are using Spring and iBATIS, and I have discovered something interesting in the way a service method annotated with @Transactional handles multiple DAO calls that return the same record. Here is an example of a method that does not work.
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());

    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);

    individualDAO.updateByPrimaryKey(individual);
}
The problem with the above method is that the second execution of the line
individualDAO.selectByPrimaryKey(trans.getPartyId())
returns the exact object returned by the first call.
This means that oldIndvRecord and individual are the same object, and the line
individualHistoryDAO.insert(oldIndvRecord);
adds a row to the history table that already contains the changes (which we do not want).
In order for it to work it must look like this:
@Transactional
public void processIndividualTrans(IndvTrans trans) {
    Individual individual = individualDAO.selectByPrimaryKey(trans.getPartyId());
    individualHistoryDAO.insert(individual);

    individual.setFirstName(trans.getFirstName());
    individual.setMiddleName(trans.getMiddleName());
    individual.setLastName(trans.getLastName());

    individualDAO.updateByPrimaryKey(individual);
}
We wanted to write a service method called updateIndividual that we could use for all updates of this table, and that would store a row in the IndividualHistory table before performing the update.
@Transactional
public void updateIndividual(Individual individual) {
    Individual oldIndvRecord = individualDAO.selectByPrimaryKey(individual.getPartyId());
    individualHistoryDAO.insert(oldIndvRecord);
    individualDAO.updateByPrimaryKey(individual);
}
But it does not store the row as it was before the object changed. We can even explicitly instantiate different objects before the DAO calls, and the second one still ends up being the same object as the first.
I have looked through the Spring documentation and cannot determine why this is happening.
Can anyone explain this?
Is there a setting that can make the second DAO call return the database contents rather than the previously returned object?
This is the standard session-cache behavior of persistence frameworks. Hibernate, for example, describes it in the Transactions chapter of its documentation:
Through Session, which is also a transaction-scoped cache, Hibernate provides repeatable reads for lookup by identifier and entity queries and not reporting queries that return scalar values.
The same goes for iBATIS/MyBatis:
MyBatis uses two caches: a local cache and a second level cache. Each time a new session is created MyBatis creates a local cache and attaches it to the session. Any query executed within the session will be stored in the local cache, so further executions of the same query with the same input parameters will not hit the database. The local cache is cleared upon update, commit, rollback and close.
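If you need the second select to hit the database, one option (a sketch, assuming MyBatis 3 and access to the session factory's Configuration, e.g. via mybatis-spring) is to narrow the local cache to statement scope:

import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.session.LocalCacheScope;
import org.apache.ibatis.session.SqlSessionFactory;

public final class MyBatisCacheConfig {

    private MyBatisCacheConfig() {
    }

    public static void disableSessionCache(SqlSessionFactory sqlSessionFactory) {
        Configuration configuration = sqlSessionFactory.getConfiguration();
        // SESSION (the default) caches per session; STATEMENT clears the local
        // cache after each statement, so repeated selects re-read from the DB
        configuration.setLocalCacheScope(LocalCacheScope.STATEMENT);
    }
}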