I would like to make use of prepared statements when executing CQL in my application. This functionality looks to be provided by the ReactiveCqlTemplate class, which I have passed into the ReactiveCassandraTemplate in my Cassandra configuration here:
@Configuration
@EnableReactiveCassandraRepositories(
basePackages = "com.my.app",
includeFilters = {
@ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, classes = {ScyllaPersonRepository.class})
})
public class CassandraConfiguration extends AbstractReactiveCassandraConfiguration {
@Value("${cassandra.host}")
private String cassandraHost;
@Value("${cassandra.connections}")
private Integer cassandraConnections;
@Override
public CassandraClusterFactoryBean cluster() {
PoolingOptions poolingOptions = new PoolingOptions()
.setCoreConnectionsPerHost(HostDistance.LOCAL, cassandraConnections)
.setMaxConnectionsPerHost(HostDistance.LOCAL, cassandraConnections*2);
CassandraClusterFactoryBean bean = super.cluster();
bean.setJmxReportingEnabled(false);
bean.setPoolingOptions(poolingOptions);
bean.setLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()));
return bean;
}
@Override
public ReactiveCassandraTemplate reactiveCassandraTemplate() {
return new ReactiveCassandraTemplate(reactiveCqlTemplate(), cassandraConverter());
}
@Bean
public CassandraEntityInformation getCassandraEntityInformation(CassandraOperations cassandraTemplate) {
CassandraPersistentEntity<Person> entity =
(CassandraPersistentEntity<Person>)
cassandraTemplate
.getConverter()
.getMappingContext()
.getRequiredPersistentEntity(Person.class);
return new MappingCassandraEntityInformation<>(entity, cassandraTemplate.getConverter());
}
@Override
public SchemaAction getSchemaAction() {
return SchemaAction.CREATE_IF_NOT_EXISTS;
}
public String getContactPoints() {
return cassandraHost;
}
public String getKeyspaceName() {
return "mykeyspace";
}
}
This is the ScyllaPersonRepository referenced in my Cassandra configuration filters.
public interface ScyllaPersonRepository extends ReactiveCassandraRepository<Person, PersonKey> {
@Query("select id, name from persons where id = ?0")
Flux<Object> findPersonById(@Param("id") String id);
}
After executing a few queries, the CQL Non-Prepared statements metric in my Scylla Monitoring Dashboard showed that I'm not using prepared statements at all.
I was able to use prepared statements after following the documentation here, which walked me through creating the CQL myself.
public class ScyllaPersonRepository extends SimpleReactiveCassandraRepository<Person, PersonKey> {
private final Session session;
private final CassandraEntityInformation<Person, PersonKey> entityInformation;
private final ReactiveCassandraTemplate cassandraTemplate;
private final PreparedStatementCache cache = PreparedStatementCache.create();
public ScyllaPersonRepository(
Session session,
CassandraEntityInformation<Person, PersonKey> entityInformation,
ReactiveCassandraTemplate cassandraTemplate
) {
super(entityInformation, cassandraTemplate);
this.session = session;
this.entityInformation = entityInformation;
this.cassandraTemplate = cassandraTemplate;
}
public Flux<Person> findPersonById(String id) {
return cassandraTemplate
.getReactiveCqlOperations()
.query(
findPersonByIdQuery(id),
(row, rowNum) -> convert(row)
);
}
private BoundStatement findPersonByIdQuery(String id) {
return CachedPreparedStatementCreator.of(
cache,
QueryBuilder.select()
.column("id")
.column("name")
.from("persons")
.where(QueryBuilder.eq("id", QueryBuilder.bindMarker("id"))))
.createPreparedStatement(session)
.bind()
.setString("id", id);
}
private Person convert(Row row) {
return new Person(
row.getString("id"),
row.getString("name"));
}
}
But, I would really like the ORM to handle that all for me. Is it possible to configure this behaviour out of the box, so that I don't need to manually write the CQL myself but instead just enable it as an option in my Cassandra Configuration and get the ORM to orchestrate it all behind the scenes?
Frankly, I think this is a bug (or rather a request for enhancement), and it should be filed in Spring's Jira.
It seems the repository simply doesn't support this out of the box (nor did I find any config option to flip it, but I might have missed it).
Actually, my theory was correct:
https://jira.spring.io/projects/DATACASS/issues/DATACASS-578?filter=allopenissues
so just add yourself to that issue and ask them for a resolution.
I am using reactive MongoDB with a Micronaut application:
implementation("io.micronaut.mongodb:micronaut-mongo-reactive")
I am trying to create a text index and free-text search functionality.
public class Product {
@BsonProperty("id")
private ObjectId id;
private String name;
private float price;
private String description;
}
In Spring Data we have @TextIndexed(weight = 2) to create a text index on the collection; what is the equivalent in a Micronaut application?
I'm afraid that Micronaut Data does not yet support automatic index creation based on annotations for MongoDB. At the moment, Micronaut Data only simplifies working with SQL databases.
But you can still create the index manually using MongoClient like this:
@Singleton
public class ProductRepository {
private final MongoClient mongoClient;
public ProductRepository(MongoClient mongoClient) {
this.mongoClient = mongoClient;
}
public MongoCollection<Product> getCollection() {
return mongoClient
.getDatabase("some-database")
.getCollection("product", Product.class);
}
@PostConstruct
public void createIndex() {
final var weights = new BasicDBObject("name", 10)
.append("description", 5);
getCollection()
.createIndex(
Indexes.compoundIndex(
Indexes.text("name"),
Indexes.text("description")
),
new IndexOptions().weights(weights)
)
.subscribe(new DefaultSubscriber<>() {
@Override
public void onNext(String s) {
System.out.format("Index %s was created.%n", s);
}
@Override
public void onError(Throwable t) {
t.printStackTrace();
}
@Override
public void onComplete() {
System.out.println("Completed");
}
});
}
}
You can of course use any subscriber you want; the anonymous class extending DefaultSubscriber is used here only for demonstration purposes.
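For illustration, here is the same callback shape using the JDK's built-in java.util.concurrent.Flow API. The MongoDB reactive driver uses org.reactivestreams rather than Flow, but the methods correspond one-to-one; the class below is my own demo, not part of the driver:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

// A minimal subscriber that collects every item and signals when the
// publisher has terminated (either completed or failed).
class CollectingSubscriber<T> implements Flow.Subscriber<T> {
    final List<T> received = new ArrayList<>();
    final CountDownLatch done = new CountDownLatch(1);

    @Override public void onSubscribe(Flow.Subscription subscription) {
        subscription.request(Long.MAX_VALUE); // unbounded demand for the demo
    }
    @Override public void onNext(T item) { received.add(item); }
    @Override public void onError(Throwable t) { done.countDown(); }
    @Override public void onComplete() { done.countDown(); }
}

class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        CollectingSubscriber<String> subscriber = new CollectingSubscriber<>();
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            publisher.submit("index_1");
        } // closing the publisher triggers onComplete after pending items
        subscriber.done.await(5, TimeUnit.SECONDS);
        System.out.println(subscriber.received); // [index_1]
    }
}
```

The same collect-and-await shape works with the driver's createIndex publisher when you need the index name back.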
Update: You can create indexes on startup, for example by using @PostConstruct. That means putting all the index-creation logic in a method annotated with @PostConstruct in some repository or service class annotated with @Singleton; it will then be called right after the repository/service singleton is created.
I have a class which extends SequenceStyleGenerator for generating custom primary keys for the database.
I've been asked to write test cases for the whole application. While I think that code coverage on SonarQube can't be 100%, I'm still being asked to write test cases for the whole application.
I've gone through the source code for SequenceStyleGenerator, but I'm unable to work out how to test the class.
Here is the code for the same.
public class BigIntegerSequenceGenerator extends SequenceStyleGenerator {
public static final String VALUE_PREFIX_PARAMETER = "valuePrefix";
public static final String VALUE_PREFIX_DEFAULT = "";
private String valuePrefix;
public static final String NUMBER_FORMAT_PARAMETER = "numberFormat";
public static final String NUMBER_FORMAT_DEFAULT = "%d";
private String numberFormat;
@Override
public Serializable generate(SharedSessionContractImplementor session,
Object object) {
return valuePrefix + String.format(numberFormat, super.generate(session, object));
}
@Override
public void configure(Type type, Properties params,
ServiceRegistry serviceRegistry) {
super.configure(LongType.INSTANCE, params, serviceRegistry);
valuePrefix = ConfigurationHelper.getString(VALUE_PREFIX_PARAMETER,
params, VALUE_PREFIX_DEFAULT);
numberFormat = ConfigurationHelper.getString(NUMBER_FORMAT_PARAMETER,
params, NUMBER_FORMAT_DEFAULT);
}
}
Here is the test for this file
class BigIntegerSequenceGeneratorTest {
SharedSessionContractImplementor session;
BigIntegerSequenceGenerator generator;
@BeforeEach
void setUp() {
generator = new BigIntegerSequenceGenerator();
session = mock(SharedSessionContractImplementor.class);
session = mock(Session.class);
}
@Test
void testGenerate() {
generator.generate(session,new Object());
}
}
I'm getting
java.lang.NullPointerException
at org.hibernate.id.enhanced.SequenceStyleGenerator.generate(SequenceStyleGenerator.java:520)
I checked it using breakpoints, and this is the code where it fails:
@Override
public Serializable generate(SharedSessionContractImplementor session, Object object) throws HibernateException {
return optimizer.generate( databaseStructure.buildCallback( session ) );
}
optimizer and databaseStructure are null.
From what I know, when the Spring Boot application runs, it automatically configures Hibernate, and the optimizer and databaseStructure are set up, which are then used for the models when saving data.
I know that this will require mocking the database connection, but it seems like I'm unable to do it.
NOTE: I'm using JUnit 5 for testing.
Thank you.
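The NPE happens because optimizer and databaseStructure are only wired when Hibernate calls configure(); a plain new BigIntegerSequenceGenerator() never gets them. One option (my suggestion, not the original code) is to pull the pure formatting logic out of generate() into a dependency-free helper and unit-test that without any Hibernate wiring:

```java
// Hypothetical refactoring: the prefix/format concatenation from generate(),
// extracted so it can be tested with no session, optimizer, or database.
class IdFormatter {
    static String formatId(String valuePrefix, String numberFormat, long sequenceValue) {
        return valuePrefix + String.format(numberFormat, sequenceValue);
    }
}
```

generate() would then delegate to this helper with the value returned by super.generate(session, object), and the database-dependent path can be covered separately by an integration test against an in-memory database rather than by mocks.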
We will face high data volume on our MariaDB databases. To overcome problems with backups and DDL operations, we had the idea of storing the data in multiple databases. Basically we will have a database with short-term data (e.g. the last 30 days, named short_term) and another one with the rest of the data (named long_term). Obviously the data needs to be moved from short_term to long_term, but that should be achievable.
The problem I'm currently facing on a prototype is that I can connect to short_term but am not able to switch to long_term if, for example, I want to query both of them (e.g. get() where I can't find the record in the short_term database).
I have set it up like this (both work independently, but not with switching the database context):
HistoryAwareRoutingSource:
public class HistoryAwareRoutingSource extends AbstractRoutingDataSource {
@Override
protected Object determineCurrentLookupKey() {
return ThreadLocalStorage.getDatabaseType();
}
}
ThreadLocalStorage
public class ThreadLocalStorage {
private static ThreadLocal<String> databaseType = new ThreadLocal<>();
public static void setDatabaseType(String databaseTypeName) {
databaseType.set(databaseTypeName);
}
public static String getDatabaseType() {
return databaseType.get();
}
}
DatasourceConfiguration
@Configuration
public class DatasourceConfiguration {
@Value("${spring.datasource.url}")
private String db1Url;
@Value("${spring.datasource.username}")
private String db1Username;
@Value("${spring.datasource.password}")
private String db1Password;
@Value("${spring.datasource.driver-class-name}")
private String db1DriverClassName;
@Value("${spring.datasource.connectionProperties}")
private String db1ConnectionProps;
@Value("${spring.datasource.sqlScriptEncoding}")
private String db1Encoding;
@Value("${spring.datasource2.url}")
private String db2Url;
@Value("${spring.datasource2.username}")
private String db2Username;
@Value("${spring.datasource2.password}")
private String db2Password;
@Value("${spring.datasource2.driver-class-name}")
private String db2DriverClassName;
@Value("${spring.datasource2.connectionProperties}")
private String db2ConnectionProps;
@Value("${spring.datasource2.sqlScriptEncoding}")
private String db2Encoding;
@Bean
public DataSource dataSource() {
HistoryAwareRoutingSource historyAwareRoutingSource = new HistoryAwareRoutingSource();
Map<Object, Object> dataSourceMap = new HashMap<>();
dataSourceMap.put("PRIMARY", dataSource1());
dataSourceMap.put("SECONDARY", dataSource2());
historyAwareRoutingSource.setDefaultTargetDataSource(dataSource1());
historyAwareRoutingSource.setTargetDataSources(dataSourceMap);
historyAwareRoutingSource.afterPropertiesSet();
return historyAwareRoutingSource;
}
private DataSource dataSource1() {
HikariDataSource primary = new HikariDataSource();
primary.setInitializationFailTimeout(0);
primary.setMaximumPoolSize(5);
primary.setDriverClassName(db1DriverClassName);
primary.setJdbcUrl(db1Url);
primary.setUsername(db1Username);
primary.setPassword(db1Password);
primary.addDataSourceProperty("connectionProperties", db1ConnectionProps);
primary.addDataSourceProperty("sqlScriptEncoding", db1Encoding);
return primary;
}
private DataSource dataSource2() {
HikariDataSource secondary = new HikariDataSource();
secondary.setInitializationFailTimeout(0);
secondary.setMaximumPoolSize(5);
secondary.setDriverClassName(db2DriverClassName);
secondary.setJdbcUrl(db2Url);
secondary.setUsername(db2Username);
secondary.setPassword(db2Password);
secondary.addDataSourceProperty("connectionProperties", db2ConnectionProps);
secondary.addDataSourceProperty("sqlScriptEncoding", db2Encoding);
return secondary;
}
}
Then I have a RestController class like this:
@RestController
@RequestMapping(value = "/story")
@RequiredArgsConstructor
public class MultiDBController {
@Autowired
private StoryService storyService;
@GetMapping("/create")
@UsePrimaryStorage
public ResponseEntity<StoryDTO> createEntity() {
setPrimaryDB();
return ResponseEntity.ok(storyService.createOne());
}
private void setPrimaryDB() {
// TODO destroy the current db connection or hand it back to the pool so the next time a connection is taken it is from the PRIMARY datasource?
ThreadLocalStorage.setDatabaseType("PRIMARY");
}
private void setSecondaryDB() {
// TODO destroy the current db connection or hand it back to the pool so the next time a connection is taken it is from the SECONDARY datasource?
ThreadLocalStorage.setDatabaseType("SECONDARY");
}
@GetMapping("/{storyId}")
public ResponseEntity<StoryDTO> get(@PathVariable UUID storyId) {
// try to find in primary db
setPrimaryDB();
Optional<StoryDTO> storyOptional = storyService.findOne(storyId);
if (!storyOptional.isPresent()) {
setSecondaryDB();
Optional<StoryDTO> storyOptionalSecondary = storyService.findOne(storyId);
if(storyOptionalSecondary.isPresent()) {
return ResponseEntity.ok(storyOptionalSecondary.get());
} else {
return ResponseEntity.notFound().build();
}
}
return ResponseEntity.ok(storyOptional.get());
}
}
So the question is: how do I implement the TODOs?
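Since AbstractRoutingDataSource consults determineCurrentLookupKey() every time a connection is obtained, there is normally nothing to "destroy": it is enough that the ThreadLocal key is correct before the repository call and cleared afterwards, so pooled threads never keep a stale key. A minimal sketch of that idea (the remove() helper and withDatabase() wrapper are my assumptions, not part of the original code):

```java
import java.util.function.Supplier;

// Simplified stand-in for the ThreadLocalStorage from the question,
// extended with a remove() method for cleanup.
class ThreadLocalStorage {
    private static final ThreadLocal<String> databaseType = new ThreadLocal<>();
    static void setDatabaseType(String name) { databaseType.set(name); }
    static String getDatabaseType() { return databaseType.get(); }
    static void remove() { databaseType.remove(); }
}

class RoutingHelper {
    // Run an action against a given database, always restoring a clean state,
    // even if the action throws.
    static <T> T withDatabase(String databaseType, Supplier<T> action) {
        ThreadLocalStorage.setDatabaseType(databaseType);
        try {
            return action.get();
        } finally {
            ThreadLocalStorage.remove();
        }
    }
}
```

In the controller, get() could then try withDatabase("PRIMARY", ...) first and fall back to withDatabase("SECONDARY", ...). Note this only routes where the next connection is taken from; it does not span a single transaction across both databases.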
Suppose I have the following Spring MVC controller
@RestController
@RequestMapping("/books")
public class BooksController {
@Autowired
private final BooksRepository booksRepository;
@RequestMapping(value="/search", method=RequestMethod.POST, consumes="application/json")
public Collection<Book> doSearch(@RequestBody final SearchCriteria criteria) {
return booksRepository.find(criteria);
}
}
and the following repository
@Service
public class BooksRepository {
@Autowired
private final JdbcTemplate jdbcTemplate;
@Autowired
private final SearchQueryBuilder searchQueryBuilder;
public Collection<BookLite> find(final SearchCriteria criteria) {
// TODO: will this cause race conditions?
searchQueryBuilder.clear().addKeywords(criteria.getKeywords());
final String query = searchQueryBuilder.buildQuery();
final Object[] params = searchQueryBuilder.buildParams();
return jdbcTemplate.query(query, params, new BookExtractor());
}
}
with the following SearchQueryBuilder implementation
@Component
public class SearchQueryBuilder {
private final List<String> keywords = new LinkedList<String>();
public SearchQueryBuilder clear() {
keywords.clear();
return this;
}
public SearchQueryBuilder addKeywords(final List<String> keywords) {
for (String keyword : keywords) {
add(keyword);
}
return this;
}
private SearchQueryBuilder add(final String keyword) {
keywords.add(keyword);
return this;
}
public String buildQuery() {
...
}
public Object[] buildParams() {
...
}
}
My concerns are the following. Since the SearchQueryBuilder class is not thread-safe, injecting it this way will probably cause race conditions. What is a good way to handle this? Is it enough (and good practice) to change the bean scope to e.g. request?
I would use SearchQueryBuilderFactory as a Spring bean, and create the SearchQueryBuilder instances on the fly.
I would avoid creating Spring beans that change state during the execution.
Your reliance on having them request-scoped makes your solution more fragile and error-prone, since the problem would reappear if you try to use it as Spring bean outside the web context.
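A sketch of the factory idea (the names and the toy query construction below are my assumptions, standing in for the elided buildQuery()/buildParams()): the factory is the only shared object, and every find() call gets a fresh, unshared builder, so there is no state to race on:

```java
import java.util.ArrayList;
import java.util.List;

// Stateful builder: safe because each instance is used by exactly one caller.
class SearchQueryBuilder {
    private final List<String> keywords = new ArrayList<>();

    SearchQueryBuilder addKeywords(List<String> newKeywords) {
        keywords.addAll(newKeywords);
        return this;
    }

    // Toy implementation for the demo; the real one was elided in the post.
    String buildQuery() {
        StringBuilder sql = new StringBuilder("SELECT * FROM books");
        for (int i = 0; i < keywords.size(); i++) {
            sql.append(i == 0 ? " WHERE " : " AND ").append("title LIKE ?");
        }
        return sql.toString();
    }

    Object[] buildParams() {
        return keywords.stream().map(k -> "%" + k + "%").toArray();
    }
}

// The only shared (Spring-managed) object; it holds no mutable state itself.
class SearchQueryBuilderFactory {
    SearchQueryBuilder newBuilder() {
        return new SearchQueryBuilder();
    }
}
```

The repository would then inject the factory and call searchQueryBuilderFactory.newBuilder().addKeywords(criteria.getKeywords()) inside find(), so both the clear() call and the race it worked around disappear.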
I have created my own repository like that:
public interface MyRepository extends TypedIdCassandraRepository<MyEntity, String> {
}
So the question is: how do I automatically create the Cassandra table for that? Currently Spring injects MyRepository, which tries to insert the entity into a non-existent table.
So is there a way to create Cassandra tables (if they do not exist) during Spring container startup?
P.S. It would be very nice if there were just a boolean config property, without adding lines of XML and creating something like a BeanFactory etc. :-)
Override the getSchemaAction method on the AbstractCassandraConfiguration class:
@Configuration
@EnableCassandraRepositories(basePackages = "com.example")
public class TestConfig extends AbstractCassandraConfiguration {
@Override
public String getKeyspaceName() {
return "test_config";
}
@Override
public SchemaAction getSchemaAction() {
return SchemaAction.RECREATE_DROP_UNUSED;
}
@Bean
public CassandraOperations cassandraOperations() throws Exception {
return new CassandraTemplate(session().getObject());
}
}
You can use this config in application.properties:
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
You'll also need to override the getEntityBasePackages() method in your AbstractCassandraConfiguration implementation. This will allow Spring to find any classes that you've annotated with @Table, and create the tables.
@Override
public String[] getEntityBasePackages() {
return new String[]{"com.example"};
}
You'll need to include the spring-data-cassandra dependency in your pom.xml file.
Configure your TestConfig.class as below:
@Configuration
@PropertySource(value = { "classpath:Your .properties file here" })
@EnableCassandraRepositories(basePackages = { "base-package name of your Repositories" })
public class CassandraConfig {
@Autowired
private Environment environment;
@Bean
public CassandraClusterFactoryBean cluster() {
CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
cluster.setContactPoints(environment.getProperty("contactpoints from your properties file"));
cluster.setPort(Integer.parseInt(environment.getProperty("ports from your properties file")));
return cluster;
}
@Bean
public CassandraConverter converter() {
return new MappingCassandraConverter(mappingContext());
}
@Bean
public CassandraSessionFactoryBean session() throws Exception {
CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
session.setCluster(cluster().getObject());
session.setKeyspaceName(environment.getProperty("keyspace from your properties file"));
session.setConverter(converter());
session.setSchemaAction(SchemaAction.CREATE_IF_NOT_EXISTS);
return session;
}
@Bean
public CassandraOperations cassandraTemplate() throws Exception {
return new CassandraTemplate(session().getObject());
}
@Bean
public CassandraMappingContext mappingContext() throws ClassNotFoundException {
CassandraMappingContext mappingContext = new CassandraMappingContext();
mappingContext.setInitialEntitySet(getInitialEntitySet());
return mappingContext;
}
@Override
public String[] getEntityBasePackages() {
return new String[]{"base-package name of all your entities annotated with @Table"};
}
@Override
protected Set<Class<?>> getInitialEntitySet() throws ClassNotFoundException {
return CassandraEntityClassScanner.scan(getEntityBasePackages());
}
}
}
This last getInitialEntitySet method may be optional; try without it too.
Make sure your keyspace, contact points, and port are set in the .properties file, like:
cassandra.contactpoints=localhost,127.0.0.1
cassandra.port=9042
cassandra.keyspace='Your Keyspace name here'
Actually, after digging into the source code of spring-data-cassandra:3.1.9, you can check the implementation of
org.springframework.data.cassandra.config.SessionFactoryFactoryBean#performSchemaAction
which looks like this:
protected void performSchemaAction() throws Exception {
boolean create = false;
boolean drop = DEFAULT_DROP_TABLES;
boolean dropUnused = DEFAULT_DROP_UNUSED_TABLES;
boolean ifNotExists = DEFAULT_CREATE_IF_NOT_EXISTS;
switch (this.schemaAction) {
case RECREATE_DROP_UNUSED:
dropUnused = true;
case RECREATE:
drop = true;
case CREATE_IF_NOT_EXISTS:
ifNotExists = SchemaAction.CREATE_IF_NOT_EXISTS.equals(this.schemaAction);
case CREATE:
create = true;
case NONE:
default:
// do nothing
}
if (create) {
createTables(drop, dropUnused, ifNotExists);
}
}
which means you have to assign CREATE to schemaAction if the table has never been created, and CREATE_IF_NOT_EXISTS does not work.
For more information, please check here: Why `spring-data-jpa` with `spring-data-cassandra` won't create cassandra tables automatically?
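Reading performSchemaAction requires keeping Java's switch fall-through in mind: none of the cases end with break, so each case also executes everything below it. A standalone illustration of the same mechanic (the enum and flag names mirror the Spring code above, but this is my own reconstruction with the defaults assumed false):

```java
// Demonstrates cumulative flag assignment via switch fall-through,
// mirroring the shape of performSchemaAction().
enum Action { RECREATE_DROP_UNUSED, RECREATE, CREATE_IF_NOT_EXISTS, CREATE, NONE }

class FallThroughDemo {
    static String flagsFor(Action schemaAction) {
        boolean create = false, drop = false, dropUnused = false, ifNotExists = false;
        switch (schemaAction) {
            case RECREATE_DROP_UNUSED:
                dropUnused = true;   // falls through
            case RECREATE:
                drop = true;         // falls through
            case CREATE_IF_NOT_EXISTS:
                ifNotExists = Action.CREATE_IF_NOT_EXISTS.equals(schemaAction); // falls through
            case CREATE:
                create = true;       // falls through
            case NONE:
            default:
                // do nothing
        }
        return "create=" + create + " drop=" + drop
                + " dropUnused=" + dropUnused + " ifNotExists=" + ifNotExists;
    }
}
```

For example, RECREATE falls through RECREATE, CREATE_IF_NOT_EXISTS, and CREATE, so it ends up with both drop and create set, while NONE sets nothing.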