I am using Testcontainers to load a Dockerized database for my Spring Boot application's integration tests. I am currently using an initialization script to load all of the data:
CREATE TABLE public.table1 (
...
...
);
CREATE TABLE public.table2 (
...
...
);
This is all working fine. I also have my own manual test data that I insert to test different scenarios:
-- Data for pre-existing quiz
INSERT INTO public.table1 (id, prop1, prop2) values (1, 'a', 'b');
INSERT INTO public.table2 (id, prop1, prop2) values (1, 'c', 'd');
INSERT INTO public.table2 (id, prop1, prop2) values (2, 'e', 'f');
Again, this is all working fine, and I am using a YAML file to mock these objects for use in my tests:
table1s:
  first:
    id: 1
    prop1: a
    prop2: b
table2s:
  first:
    id: 1
    prop1: c
    prop2: d
  second:
    id: 2
    prop1: e
    prop2: f
I then read these properties from the YAML file into a class so the data can be used by my test classes:
public class Table1TestData {
    @Autowired
    private Environment env;

    private UUID id;
    private String prop1;
    private String prop2;

    public UUID getId() {
        return id;
    }
    public void setId(UUID id) {
        this.id = id;
    }
    public String getProp1() {
        return prop1;
    }
    public void setProp1(String prop1) {
        this.prop1 = prop1;
    }
    ....
    public Table1TestData getFirstRowData() {
        Table1TestData ret = new Table1TestData();
        ret.setId(UUID.fromString(env.getProperty("table1s.first.id")));
        ret.setProp1(env.getProperty("table1s.first.prop1"));
        ....
        return ret;
    }
    ....
}
And I use this helper as an autowired entity in my tests (especially for my Service classes):
public class Table1ServiceTest {
    @ClassRule
    public static PostgresContainer postgresContainer = PostgresContainer.getInstance();
    @Autowired
    Table1Service table1Service;
    @Autowired
    Table1TestData table1TestData;
    @Autowired
    MockMvc mockMvc;
    @Autowired
    ObjectMapper objectMapper;

    @BeforeAll
    static void startup() {
        postgresContainer.start();
    }

    @Test
    @DisplayName("Table 1 Service Test")
    @Transactional
    public void findTable1ById() throws Exception {
        Table1TestData testData = table1TestData.getFirstRowData();
        Table1 table1 = table1Service.findTable1ById(testData.getId());
        assertNotNull(table1);
        assertEquals(table1.getId(), testData.getId());
        assertEquals(table1.getProp1(), testData.getProp1());
        ....
    }
}
However, let's say I have to add a new column to Table1 (or any table, really) and I put the new schema into the init script. I now have to go to each of these insert statements manually and add the new column with a value (assuming there's no default); the same goes for when a column is removed (even if it doesn't affect the classes). This ends up being cumbersome.
So my question really is, for someone who is using an init script to populate test data for a containerized DB, what is the best way to go about maintaining this data efficiently without much manual curating?
I think you could take advantage of the initialization-script feature of PostgreSQL: just place your SQL scripts under /docker-entrypoint-initdb.d (creating the directory if necessary) and they will be executed automatically, without any programmatic work.
You can check out an example here:
https://github.com/gmunozfe/clustered-ejb-timers-kie-server/blob/master/src/test/java/org/kie/samples/integration/ClusteredEJBTimerSystemTest.java
Define your PostgreSQL container pointing to that directory:
.withFileSystemBind("etc/postgresql", "/docker-entrypoint-initdb.d",
BindMode.READ_ONLY)
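For context, here is a minimal sketch of what a singleton container with that bind could look like (the image tag and host path are illustrative, and the class name is borrowed from the question above):
import org.testcontainers.containers.BindMode;
import org.testcontainers.containers.PostgreSQLContainer;

public class PostgresContainer extends PostgreSQLContainer<PostgresContainer> {
    private static PostgresContainer container;

    private PostgresContainer() {
        super("postgres:13");
    }

    public static PostgresContainer getInstance() {
        if (container == null) {
            container = new PostgresContainer()
                    // every *.sql (or *.sh) script in etc/postgresql runs once,
                    // on the container's first startup
                    .withFileSystemBind("etc/postgresql", "/docker-entrypoint-initdb.d",
                            BindMode.READ_ONLY);
        }
        return container;
    }
}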
If you want different scripts per test, have a look at this article:
https://www.baeldung.com/spring-boot-data-sql-and-schema-sql
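Separately, Spring Test's @Sql annotation is another way to keep per-scenario data next to the tests instead of in the global init script; a minimal sketch (the script path is hypothetical):
import org.junit.jupiter.api.Test;
import org.springframework.test.context.jdbc.Sql;

public class Table1ScenarioTest {

    @Test
    @Sql(scripts = "/test-data/table1-scenario.sql") // hypothetical classpath script
    public void findTable1WithScenarioData() {
        // assertions against the scenario-specific rows go here
    }
}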
I created a Spring Boot test:
@RunWith(SpringRunner.class)
@SpringBootTest
@TestPropertySource(locations = "classpath:application-dev.properties")
@Transactional
public class ContactTests2 {
    private Logger log = LogManager.getLogger();
    @PersistenceContext
    private EntityManager entityManager;
    @Autowired
    private ContactRepository customerRepository;
    @Autowired
    private StoreRepository storeRepository;
    @Autowired
    private NoteService noteService;
    @Autowired
    private Validator validator;
    private Store store;

    @Before
    @WithMockUser(roles = "ADMIN")
    public void setup() {
        log.debug("Stores {}", storeRepository.count());
        store = createStore();
        storeRepository.save(store);
    }

    @Test
    @WithMockUser(roles = "ADMIN")
    public void saveWithNote() {
        Contact customer = new Contact();
        customer.setPersonType(PersonType.NATURAL_PERSON);
        customer.setFirstName("Daniele");
        customer.setLastName("Rossi");
        customer.setGender(Gender.MALE);
        customer.setBillingCountry(Locale.ITALY.getCountry());
        customer.setShippingCountry(Locale.ITALY.getCountry());
        customer.setStore(store);
        Note note = new Note();
        note.setGenre(NoteGenre.GENERIC);
        note.setOperationType(AuditType.NOTE);
        note.setText("note");
        customer = customerRepository.save(customer);
        noteService.addNote(note, customer);
    }

    @Test
    @WithMockUser(roles = "ADMIN")
    public void save() {
        Contact customer = new Contact();
        customer.setPersonType(PersonType.NATURAL_PERSON);
        customer.setFirstName("Daniele");
        customer.setLastName("Rossi");
        customer.setGender(Gender.MALE);
        customer.setBillingCountry(Locale.ITALY.getCountry());
        customer.setShippingCountry(Locale.ITALY.getCountry());
        customer.setStore(store);
        customerRepository.save(customer);
        assertEquals(customer, customerRepository.findById(customer.getId()).get());
    }

    // ====================================================
    //
    // UTILITY METHODS
    //
    // ====================================================
    private Store createStore() {
        Store store = new Store();
        store.setName("Padova");
        store.setCode("PD");
        store.setCountry("IT");
        return store;
    }
}
This is the note service:
@Service
@Transactional
@PreAuthorize("isAuthenticated()")
public class NoteService {
    @PersistenceContext
    private EntityManager entityManager;
    @Autowired
    private NoteRepository noteRepository;

    /**
     * Add a note to a specific object (parent).
     *
     * @param note
     * @param parent
     * @return the added note
     */
    public Note addNote(Note note, Persistable<Long> parent) {
        // ****************************************************
        // VALIDATION CHECKS
        // ****************************************************
        Assert.notNull(note, InternalException.class, ExceptionCode.INTERNAL_ERROR);
        Assert.notNull(parent, InternalException.class, ExceptionCode.INTERNAL_ERROR);
        // ****************************************************
        // END VALIDATION CHECKS
        // ****************************************************
        note.setParentId(parent.getId());
        note.setParentType(parent.getClass().getSimpleName());
        note.setRemoteAddress(NetworkUtils.getRemoteIpFromCurrentContext());
        note = noteRepository.save(note);
        return note;
    }
}
I'm using Hibernate and MySQL 5.7. The problem is the test called saveWithNote(): when I run it, the tests that follow fail because the setup() method throws a duplicate-key exception. It seems the previous test is not rolled back.
This is what happens: if I remove the line noteService.addNote(note, customer);, everything works like a charm.
What am I doing wrong? Why is test isolation not preserved?
This is because you are using a real data store as the dependency.
When running saveWithNote(), the customer entry is persisted in the database. It is not removed in your test setup, so when you run save(), you bump into a duplicate database entry.
Solution 1:
Use teardown() method to remove database entries you created during the test.
Example:
@After
@WithMockUser(roles = "ADMIN")
public void teardown() {
    // delete the customer entry here
}
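For instance, a sketch reusing the repositories from your test class (it assumes a NoteRepository is also autowired, since notes are created outside the contact repository):
@After
@WithMockUser(roles = "ADMIN")
public void teardown() {
    // delete notes first in case of foreign-key constraints
    noteRepository.deleteAll();
    customerRepository.deleteAll();
    storeRepository.deleteAll();
}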
Reference: https://examples.javacodegeeks.com/core-java/junit/junit-setup-teardown-example/
Solution 2: Every time you run setup(), wipe the database tables clean.
Example:
@Before
@WithMockUser(roles = "ADMIN")
public void setup() {
    // wipe your database tables to make them empty
}
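A sketch using JdbcTemplate (the table names are guesses from the entity names; adjust them to your schema):
@Autowired
private JdbcTemplate jdbcTemplate;

@Before
@WithMockUser(roles = "ADMIN")
public void setup() {
    // disable FK checks so truncation order doesn't matter (MySQL-specific)
    jdbcTemplate.execute("SET FOREIGN_KEY_CHECKS = 0");
    jdbcTemplate.execute("TRUNCATE TABLE note");
    jdbcTemplate.execute("TRUNCATE TABLE contact");
    jdbcTemplate.execute("TRUNCATE TABLE store");
    jdbcTemplate.execute("SET FOREIGN_KEY_CHECKS = 1");
    store = createStore();
    storeRepository.save(store);
}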
Both solutions 1 and 2 should be done against a test database only. You DON'T want to clean up the production DB.
Solution 3 (recommended):
Use mocked repositories and mock injection (instead of autowiring repositories with real implementations).
Sample/ Reference: https://stackoverflow.com/a/36004293/5849844
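A minimal sketch of that approach with Spring Boot's @MockBean, using the classes from the question (note that NetworkUtils.getRemoteIpFromCurrentContext() may need extra stubbing outside a real request context):
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.security.test.context.support.WithMockUser;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class NoteServiceMockTest {

    // replaces the real NoteRepository bean with a Mockito mock,
    // so nothing is ever written to the database and tests stay isolated
    @MockBean
    private NoteRepository noteRepository;

    @Autowired
    private NoteService noteService;

    @Test
    @WithMockUser(roles = "ADMIN")
    public void addNoteNeverTouchesTheDatabase() {
        when(noteRepository.save(any(Note.class))).thenAnswer(inv -> inv.getArgument(0));

        noteService.addNote(new Note(), new Contact());

        verify(noteRepository).save(any(Note.class));
    }
}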
Most likely your table is using the MyISAM storage engine, which does not support transactions (as per the Table 15.2 MyISAM Storage Engine Features docs).
Redefine the table using the InnoDB storage engine. Take a look at the 14.8.1.1 Creating InnoDB Tables docs; it should be the default, but you can check it with:
SELECT @@default_storage_engine;
I am using AWS ECS to host my application and DynamoDB for all database operations, so I'll have the same database with different table names for different environments, such as "dev_users" (for the dev env), "test_users" (for the test env), etc. (This is how our company uses the same DynamoDB account for different environments.)
So I would like to change the "tableName" of the model class using the environment variable passed through "AWS ECS task definition" environment parameters.
For Example.
My Model Class is:
@DynamoDBTable(tableName = "dev_users")
public class User {
Now I need to replace the "dev" with "test" when I deploy my container in the test environment. I know I can use
@Value("${DOCKER_ENV:dev}")
to access environment variables, but I'm not sure how to use variables outside the class. Is there any way I can use the Docker env variable to select my table prefix?
My intent is to use it like this:
I know this is not possible like this, but is there any other way or workaround for this?
Edit 1:
I am working on Rahul's answer and facing some issues. Before writing the issues, I'll explain the process I followed.
Process:
I have created the beans in my config class (com.myapp.users.config).
As I don't have repositories, I have given my model class package name as the "basePackage" path. (Please check the image.)
For 1) I have replaced the table-name over-rider bean injection to avoid the error.
For 2) I printed the name that is passed to this method, but it is null. So I am checking all the possible ways to pass the value here.
Check the image for the error:
I haven't changed anything in my user model class, as the beans should replace the name of the DynamoDBTable when they are executed. But the table name overriding is not happening: data is still pulled from the table name given at the model class level.
What am I missing here?
The table names can be overridden via a custom DynamoDBMapperConfig bean.
For your case, where you have to prefix each table with a literal, you can add the bean as shown below. Here the prefix can be the environment name.
@Bean
public TableNameOverride tableNameOverrider() {
    String prefix = ... // Use @Value to inject values via Spring or use any logic to define the table prefix
    return TableNameOverride.withTableNamePrefix(prefix);
}
For more details check out the complete details here:
https://github.com/derjust/spring-data-dynamodb/wiki/Alter-table-name-during-runtime
I was able to get table names prefixed with the active profile name.
First, I added a TableNameResolver class as below:
@Component
public class TableNameResolver extends DynamoDBMapperConfig.DefaultTableNameResolver {
    private String envProfile;

    public TableNameResolver() {}

    public TableNameResolver(String envProfile) {
        this.envProfile = envProfile;
    }

    @Override
    public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
        String stageName = envProfile.concat("_");
        String rawTableName = super.getTableName(clazz, config);
        return stageName.concat(rawTableName);
    }
}
Then I set up the DynamoDBMapper bean as below:
@Bean
@Primary
public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB amazonDynamoDB) {
    DynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB,
            new DynamoDBMapperConfig.Builder().withTableNameResolver(new TableNameResolver(envProfile)).build());
    return mapper;
}
The envProfile variable holds the active-profile value, read from the application.properties file:
@Value("${spring.profiles.active}")
private String envProfile;
We have the same issue with regard to changing table names at runtime. We are using spring-data-dynamodb 5.0.2, and the following configuration provides the solution that we need.
First, I annotated my bean accessor:
@EnableDynamoDBRepositories(dynamoDBMapperConfigRef = "getDynamoDBMapperConfig", basePackages = "my.company.base.package")
I also set up an environment variable called ENV_PREFIX, which Spring wires in via SpEL:
@Value("#{systemProperties['ENV_PREFIX']}")
private String envPrefix;
Then I set up a TableNameOverride bean:
@Bean
public DynamoDBMapperConfig.TableNameOverride getTableNameOverride() {
    return DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix(envPrefix);
}
Finally, I set up the DynamoDBMapperConfig bean using TableNameOverride injection. In 5.0.2 we had to set a standard DynamoDBTypeConverterFactory in the DynamoDBMapperConfig builder to avoid an NPE:
@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride) {
    DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
    builder.setTableNameOverride(tableNameOverride);
    builder.setTypeConverterFactory(DynamoDBTypeConverterFactory.standard());
    return builder.build();
}
In hindsight, I could have set up a DynamoDBTypeConverterFactory bean that returns a standard DynamoDBTypeConverterFactory and injected it into the getDynamoDBMapperConfig() method via the builder. But this will also do the job.
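For reference, that variant would look roughly like this (a sketch, not tested against 5.0.2):
@Bean
public DynamoDBTypeConverterFactory dynamoDBTypeConverterFactory() {
    return DynamoDBTypeConverterFactory.standard();
}

@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(
        DynamoDBMapperConfig.TableNameOverride tableNameOverride,
        DynamoDBTypeConverterFactory typeConverterFactory) {
    // same configuration as above, with the factory injected instead of built inline
    return new DynamoDBMapperConfig.Builder()
            .withTableNameOverride(tableNameOverride)
            .withTypeConverterFactory(typeConverterFactory)
            .build();
}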
I upvoted the other answer, but here is an idea:
Create a base class with all your user details:
@MappedSuperclass
public abstract class AbstractUser {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String firstName;
    private String lastName;
}
Create two implementations with different table names and Spring profiles:
@Profile(value = {"dev", "default"})
@Entity(name = "dev_user")
public class DevUser extends AbstractUser {
}

@Profile(value = {"prod"})
@Entity(name = "prod_user")
public class ProdUser extends AbstractUser {
}
Create a single JPA repository that uses the mapped superclass:
public interface UserRepository extends CrudRepository<AbstractUser, Long> {
}
Then switch the implementation with the Spring profile:
@RunWith(SpringJUnit4ClassRunner.class)
@DataJpaTest
@Transactional
public class UserRepositoryTest {
    @Autowired
    protected DataSource dataSource;

    @BeforeClass
    public static void setUp() {
        System.setProperty("spring.profiles.active", "prod");
    }

    @Test
    public void test1() throws Exception {
        DatabaseMetaData metaData = dataSource.getConnection().getMetaData();
        ResultSet tables = metaData.getTables(null, null, "PROD_USER", new String[] { "TABLE" });
        tables.next();
        assertEquals("PROD_USER", tables.getString("TABLE_NAME"));
    }
}
I'm currently using Redis (3.2.100) with Spring Data Redis (1.8.9) and the Jedis connector.
When I use the save() method on an existing entity, Redis deletes my entity and recreates it.
In my case I need to keep the existing entity and only update some of its attributes. (I have another thread that reads the same entity at the same time.)
In the Spring documentation (https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#redis.repositories.partial-updates), I found the partial-update feature. Unfortunately, the example in the documentation uses the update() method of RedisTemplate, but this method does not exist.
So, have you ever used Spring Data Redis partial updates?
Is there another way to update a Redis entity without deleting it first?
Thanks
To get a RedisKeyValueTemplate, you can do:
@Autowired
private RedisKeyValueTemplate redisKVTemplate;

redisKVTemplate.update(entity);
You should use RedisKeyValueTemplate to make partial updates.
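A minimal sketch of such a partial update (the entity class, id, and attribute name are illustrative):
import org.springframework.data.redis.core.PartialUpdate;
import org.springframework.data.redis.core.RedisKeyValueTemplate;

void updateSingleAttribute(RedisKeyValueTemplate redisKVTemplate) {
    // rewrites only the named attribute; the rest of the stored hash is untouched
    PartialUpdate<MyEntity> update = new PartialUpdate<>("entity-id", MyEntity.class)
            .set("attribute", "new value");
    redisKVTemplate.update(update);
}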
Well, the docs link above and the Spring Data tests (link) actually made zero contribution to the resulting solution.
Consider the following entity:
@RedisHash(value = "myservice/lastactivity")
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class LastActivityCacheEntity implements Serializable {
    @Id
    @Indexed
    @Size(max = 50)
    private String user;
    private long lastLogin;
    private long lastProfileChange;
    private long lastOperation;
}
Let's assume that:
we don't want to do a complex read-write exercise on every update:
entity = lastActivityCacheRepository.findByUser(userId);
lastActivityCacheRepository.save(LastActivityCacheEntity.builder()
        .user(entity.getUser())
        .lastLogin(entity.getLastLogin())
        .lastProfileChange(entity.getLastProfileChange())
        .lastOperation(entity.getLastOperation()).build());
What if some 100 rows popped up? Then on each update the entity has to be fetched and saved; quite inefficient, but it would still work.
We also don't want complex exercises with the opsForHash + ObjectMapper + bean-configuration approach; it's quite hard to implement and maintain (for example, link).
So we're about to use something like:
@Autowired
private RedisKeyValueTemplate redisTemplate;

void partialUpdate(LastActivityCacheEntity update) {
    var partialUpdate = PartialUpdate
            .newPartialUpdate(update.getUser(), LastActivityCacheEntity.class);
    if (update.getLastLogin() > 0)
        partialUpdate.set("lastLogin", update.getLastLogin());
    if (update.getLastProfileChange() > 0)
        partialUpdate.set("lastProfileChange", update.getLastProfileChange());
    if (update.getLastOperation() > 0)
        partialUpdate.set("lastOperation", update.getLastOperation());
    redisTemplate.update(partialUpdate);
}
And the thing is: it doesn't really work for this case.
That is, values do get updated, but you cannot query the new property later via a repository entity lookup: e.g., lastActivityCacheRepository.findAll() will return unchanged properties.
Here's the solution:
LastActivityCacheRepository.java:
@Repository
public interface LastActivityCacheRepository extends CrudRepository<LastActivityCacheEntity, String>, LastActivityCacheRepositoryCustom {
    Optional<LastActivityCacheEntity> findByUser(String user);
}
LastActivityCacheRepositoryCustom.java:
public interface LastActivityCacheRepositoryCustom {
    void updateEntry(String userId, String key, long date);
}
LastActivityCacheRepositoryCustomImpl.java
@Repository
public class LastActivityCacheRepositoryCustomImpl implements LastActivityCacheRepositoryCustom {
    @Autowired
    private RedisKeyValueTemplate redisKeyValueTemplate;

    @Override
    public void updateEntry(String userId, String key, long date) {
        redisKeyValueTemplate.update(new PartialUpdate<>(userId, LastActivityCacheEntity.class)
                .set(key, date));
    }
}
And finally working sample:
void partialUpdate(LastActivityCacheEntity update) {
    if (lastActivityCacheRepository.findByUser(update.getUser()).isEmpty()) {
        lastActivityCacheRepository.save(LastActivityCacheEntity.builder().user(update.getUser()).build());
    }
    if (update.getLastLogin() > 0) {
        lastActivityCacheRepository.updateEntry(update.getUser(),
                "lastLogin",
                update.getLastLogin());
    }
    if (update.getLastProfileChange() > 0) {
        lastActivityCacheRepository.updateEntry(update.getUser(),
                "lastProfileChange",
                update.getLastProfileChange());
    }
    if (update.getLastOperation() > 0) {
        lastActivityCacheRepository.updateEntry(update.getUser(),
                "lastOperation",
                update.getLastOperation());
    }
}
All credits to Chris Richardson and his src.
If you don't want to type your field names as strings in the updateEntry method, you can use the Lombok annotation @FieldNameConstants on your entity class. It creates field-name constants for you, and you can then access the field names like this:
...
if (update.getLastOperation() > 0) {
    lastActivityCacheRepository.updateEntry(update.getUser(),
            LastActivityCacheEntity.Fields.lastOperation, // <- instead of "lastOperation"
            update.getLastOperation());
...
This makes refactoring the field names more bug-proof.
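For the entity above, that just means adding the annotation (a sketch; in current Lombok versions @FieldNameConstants lives in lombok.experimental and generates an inner Fields class by default):
import lombok.experimental.FieldNameConstants;

@FieldNameConstants
@RedisHash(value = "myservice/lastactivity")
public class LastActivityCacheEntity implements Serializable {
    // ... same fields as above; Lombok now generates
    // LastActivityCacheEntity.Fields.lastLogin, .lastProfileChange and .lastOperation
}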
I have a user-defined function in MS SQL Server, called from Java code, that appears to be undefined when running integration tests against an H2 database. You can find my code in the previous question.
Test code:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {H2Config.class})
@TestExecutionListeners({
        DependencyInjectionTestExecutionListener.class,
        DbUnitTestExecutionListener.class,
        TransactionalTestExecutionListener.class
})
@TransactionConfiguration(defaultRollback = true)
public class TableDaoTest {
    @Autowired
    private TableDao tableDao;

    @Test
    @DatabaseSetup("/datasets/import.xml")
    public void testMethod01() {
        tableDao.getRecordsByGroup();
        ...
The database schema is autogenerated by Hibernate. As you can see, data for the test is populated by DbUnit using an XML dataset. And this test fails because my function, which exists in the MS SQL Server DB, is undefined in the H2 database.
Application log:
Caused by: org.hibernate.exception.GenericJDBCException: could not prepare statement
...
Caused by: org.h2.jdbc.JdbcSQLException: Function "SAFE_MOD" not found; SQL statement:
select table10_.id, table10_.value, ... from Table1 table10_ where table10_.group1=dbo.safe_mod(?, ?);
...
How do I import / create a function before a DbUnit test?
The H2 database doesn't support user-defined SQL functions. However, in this database Java functions can be used as stored procedures instead.
@SuppressWarnings("unused")
public class H2Function {
    public static int safeMod(Integer n, Integer divider) {
        if (divider == null) {
            divider = 5000;
        }
        return n % divider;
    }
}
Note that only static Java methods are supported; both the class and the method must be public.
The Java function must be declared (registered in the database) by calling CREATE ALIAS ... FOR before it can be used:
CREATE ALIAS IF NOT EXISTS safe_mod DETERMINISTIC FOR "by.naxa.H2Function.safeMod";
This statement should be executed before any test, so I decided to put it inside the connection init SQL:
@Bean
public DataSource dataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName("org.h2.Driver");
    dataSource.setUrl("jdbc:h2:mem:my_db_name");
    dataSource.setUsername("sa");
    dataSource.setPassword("");
    dataSource.setConnectionInitSqls(Collections.singleton(
            "CREATE ALIAS IF NOT EXISTS safe_mod DETERMINISTIC FOR \"by.naxa.H2Function.safeMod\";"));
    return dataSource;
}
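A quick sanity check that the alias registration works (a sketch; it assumes the DataSource above is injected into a Spring test):
import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

@Autowired
private DataSource dataSource;

@Test
public void safeModAliasIsRegistered() throws Exception {
    try (Connection conn = dataSource.getConnection();
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT safe_mod(7, 3)")) {
        rs.next();
        assertEquals(1, rs.getInt(1)); // 7 % 3 == 1
    }
}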
Credit to naxa; this solution is based on theirs. It 'stubs' the WORD_SIMILARITY function from Postgres.
@RunWith(SpringRunner.class)
@SpringBootTest
@TestPropertySource(locations = "classpath:application-test.properties")
public class testServiceTests {
    @Autowired
    private MyService myService;

    @Test
    public void someTest() {
    }
}
This should be in your application-test.properties
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.url=jdbc:h2:~/my_database;MODE=PostgreSQL;INIT=CREATE ALIAS IF NOT EXISTS WORD_SIMILARITY DETERMINISTIC FOR "com.example.H2Function.wordSimilarity";TRACE_LEVEL_FILE=0;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE;
spring.datasource.username=postgres
spring.datasource.password=postgres
hibernate.dialect=org.hibernate.dialect.PostgreSQL9Dialect
hibernate.flushMode=FLUSH_AUTO
hibernate.hbm2ddl.auto=create-drop
spring.datasource.data=classpath:V1__base-schema.sql
Here is the H2 function that replaces it:
@SuppressWarnings("unused")
public class H2Function {
    public static double wordSimilarity(String string, String word) {
        if (word == null) {
            return 0;
        }
        return 0.5;
    }
}
@naXa's and @Alan's approaches work for scalar-valued functions. For table-valued functions, use a ResultSet:
package com.example.app;

import java.math.BigDecimal;
import java.sql.Date;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.sql.Types;
import java.time.LocalDateTime;

import org.h2.tools.SimpleResultSet;

@SuppressWarnings("unused")
public class H2Functions {
    // All function params go here.
    // LocalDate does not work here; we have to use java.sql.Date.
    public static ResultSet getPrice(Long recipientId, Long currencyId, Date priceDate) {
        SimpleResultSet rs = new SimpleResultSet();
        rs.addColumn("price", Types.DECIMAL, 10, 0);
        rs.addColumn("priceDate", Types.TIMESTAMP, 10, 0);
        rs.addRow(new BigDecimal("123.23"), Timestamp.valueOf(LocalDateTime.now()));
        return rs;
    }
}
Example application.yml configuration:
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    url: jdbc:h2:DB_NAME;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    name:
    username:
    password:
    hikari:
      auto-commit: false
      # create alias for functions on db connection initialization
      connection-init-sql: "CREATE ALIAS IF NOT EXISTS GET_PRICE FOR \"com.example.app.H2Functions.getPrice\";"
Then you can wrap the response with your POJO.
You can find more about user-defined functions in the H2 documentation:
http://www.h2database.com/html/features.html#user_defined_functions
I am trying to implement sharded Hibernate logic. All databases have the same table called MyTable, which is mapped to MyClass through a Hibernate POJO.
public class SessionFactoryList {
    List<SessionFactory> factories;
    int minShard;
    int maxShard;
    // getters and setters here.
}
In my DAO implementation, I have a method getAll, which is the following:
public class MyClassDao {
    @Autowired // through Spring
    private SessionFactoryList list;

    List<MyClass> getAll() {
        List<MyClass> outputList = new ArrayList<>();
        for (SessionFactory s : list.getFactories()) {
            Criteria c = s.getCurrentSession().createCriteria(MyClass.class);
            outputList.addAll(c.list());
        }
        return outputList;
    }
}
Here is my test for the corresponding getAll implementation:
public class MyClassTest {
    @Autowired
    SessionFactoryList list;
    @Autowired
    MyClassDao myClassDao;

    @Test
    void getAllTest() {
        Session session1 = list.getFactories().get(0).getCurrentSession();
        session1.beginTransaction();
        session1.save(new MyClass(/* some parameters here */));
        Session session2 = list.getFactories().get(1).getCurrentSession();
        session2.beginTransaction();
        session2.save(new MyClass(/* some parameters here */));
        // Set up done.
        assert myClassDao.getAll().size() == 2;
    }
}
I am using an HSQL in-memory database for the test cases.
I have verified that the DB connections are set up correctly, but the assert statement is failing:
the getAll method of MyClassDao is returning 3 rows, and the MyClass object inserted through SessionFactory1's session is duplicated.
Is there anything I am missing here?
I found it. The two SessionFactory configurations I used for the test had the same database URL, so the same database was queried twice, which caused the duplicates.
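For anyone hitting the same thing: each shard's SessionFactory needs its own catalog. With HSQLDB in-memory URLs that just means distinct names; a sketch (the shard names are illustrative):
import java.util.Properties;

// one Properties object per SessionFactory; the mem: catalog names must differ,
// otherwise both factories talk to the same in-memory database
Properties shard1 = new Properties();
shard1.setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:shard1");

Properties shard2 = new Properties();
shard2.setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:shard2");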