I am using the latest spring-data-mongodb (1.1.0.M2) and the latest Mongo Driver (2.9.0-RC1). I have a situation where I have multiple clients connecting to my application and I want to give each one their own "schema/database" in the same Mongo server. This is not a very difficult task to achieve if I was using the driver directly:
Mongo mongo = new Mongo( new DBAddress( "localhost", 27017 ) );
DB client1DB = mongo.getDB( "client1" );
DBCollection client1TestCollection = client1DB.getCollection( "test" );
long client1TestCollectionCount = client1TestCollection.count();
DB client2DB = mongo.getDB( "client2" );
DBCollection client2TestCollection = client2DB.getCollection( "test" );
long client2TestCollectionCount = client2TestCollection.count();
See, easy. But spring-data-mongodb does not allow an easy way to use multiple databases. The preferred way of setting up a connection to Mongo is to extend the AbstractMongoConfiguration class:
You will see that you override the following method:
getDatabaseName()
So it forces you to use one database name. The repository interfaces that you then build use that database name inside the MongoTemplate that is passed into the SimpleMongoRepository class.
Where on earth would I stick multiple database names? I would have to create multiple database names, multiple MongoTemplates (one per database name), and multiple other config classes. And that still doesn't get my repository interfaces to use the correct template. If anyone has tried such a thing, let me know. If I figure it out I will post the answer here.
Thanks.
Here is a link to an article that I think is what you are looking for: http://michaelbarnesjr.wordpress.com/2012/01/19/spring-data-mongo/
The key is to provide multiple templates: configure a template for each database.
<bean id="vehicleTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
<constructor-arg ref="mongoConnection"/>
<constructor-arg name="databaseName" value="vehicledatabase"/>
</bean>
<bean id="imageTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
<constructor-arg ref="mongoConnection"/>
<constructor-arg name="databaseName" value="imagedatabase"/>
</bean>
Now you need to tell Spring where your repositories are so it can inject them. They must all be in the same package; I tried putting them in different sub-packages and it did not work correctly, so they all live in the one repository package.
<mongo:repositories base-package="my.package.repository">
<mongo:repository id="imageRepository" mongo-template-ref="imageTemplate"/>
<mongo:repository id="carRepository" mongo-template-ref="vehicleTemplate"/>
<mongo:repository id="truckRepository" mongo-template-ref="vehicleTemplate"/>
</mongo:repositories>
Each repository is an interface and is written as follows (yes, you can leave them blank):
@Repository
public interface ImageRepository extends MongoRepository<Image, String> {
}
@Repository
public interface TruckRepository extends MongoRepository<Truck, String> {
}
The entity class name determines the collection: Image.java will be saved to the image collection within the imagedatabase database.
Here is how you can find, insert, and delete records:
@Service
public class ImageService {
@Autowired
private ImageRepository imageRepository;
}
By autowiring, you match the variable name to the bean name (id) in your configuration.
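For completeness, here is a sketch of the find/insert/delete calls mentioned above. This is only illustrative: the service methods are mine, and MongoRepository supplies save/findAll/delete out of the box.

```java
@Service
public class ImageService {

    @Autowired
    private ImageRepository imageRepository;

    public Image insert(Image image) {
        return imageRepository.save(image); // insert or update
    }

    public List<Image> findAll() {
        return imageRepository.findAll();
    }

    public void delete(String id) {
        // delete(id) in older Spring Data; deleteById(id) in newer versions
        imageRepository.delete(id);
    }
}
```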
You may want to sub-class SimpleMongoDbFactory and control which database the default getDb() returns. One option is to use a thread-local variable to decide which DB to use, instead of using multiple MongoTemplates.
Something like this:
public class ThreadLocalDbNameMongoDbFactory extends SimpleMongoDbFactory {
private static final ThreadLocal<String> dbName = new ThreadLocal<String>();
private final String defaultName; // init in c'tor before calling super
// omitted constructor for clarity
public static void setDefaultNameForCurrentThread(String tlName) {
dbName.set(tlName);
}
public static void clearDefaultNameForCurrentThread() {
dbName.remove();
}
@Override
public DB getDb() {
String tlName = dbName.get();
return super.getDb(tlName != null ? tlName : defaultName);
}
}
Then, override mongoDbFactory() in your @Configuration class that extends from AbstractMongoConfiguration, like so:
@Bean
@Override
public MongoDbFactory mongoDbFactory() throws Exception {
if (getUserCredentials() == null) {
return new ThreadLocalDbNameMongoDbFactory(mongo(), getDatabaseName());
} else {
return new ThreadLocalDbNameMongoDbFactory(mongo(), getDatabaseName(), getUserCredentials());
}
}
In your client code (maybe a ServletFilter or some such) you will need to call:
ThreadLocalDbNameMongoDbFactory.setDefaultNameForCurrentThread()
before doing any Mongo work and subsequently reset it with:
ThreadLocalDbNameMongoDbFactory.clearDefaultNameForCurrentThread()
after you are done.
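The per-thread lookup itself is plain Java and can be sanity-checked without Spring. A minimal standalone sketch of the fallback behaviour in getDb() (the class and method names here are illustrative only, not part of Spring Data):

```java
public class TenantDbName {
    private static final ThreadLocal<String> DB_NAME = new ThreadLocal<>();
    private static final String DEFAULT_DB = "defaultDb";

    public static void setForCurrentThread(String name) { DB_NAME.set(name); }

    public static void clearForCurrentThread() { DB_NAME.remove(); }

    // Mirrors getDb(): the per-thread name if one was set, otherwise the default
    public static String current() {
        String name = DB_NAME.get();
        return name != null ? name : DEFAULT_DB;
    }
}
```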
So after much research and experimentation, I have concluded that this is not yet possible with the current spring-data-mongodb project. I tried baja's method above and ran into a specific hurdle: MongoTemplate runs its ensureIndexes() method from within its constructor. This method calls out to the database to make sure annotated indexes exist. The constructor for MongoTemplate gets called when Spring starts up, so I never even have a chance to set a ThreadLocal variable. I would have to have a default already set when Spring starts, then change it when a request comes in. That is not workable, because I neither want nor have a default database.
All was not lost though. Our original plan was to have each client running on its own application server, pointed at its own MongoDB database on the MongoDB server. Then we can provide a -Dprovider= system variable and each server runs pointing only to one database.
We were instructed to have a multi-tenant application, hence the attempt at the ThreadLocal variable. But since it did not work, we were able to run the application the way we had originally designed.
I believe there is a way to make this all work, though; it just takes more than is described in the other posts. You have to make your own RepositoryFactoryBean (there is an example in the Spring Data MongoDB reference docs). You would still have to implement your own MongoTemplate and delay or remove the ensureIndexes() call, and you would have to rewrite a few classes to make sure your MongoTemplate is called instead of Spring's. In other words, a lot of work. Work that I would like to see happen, or even do; I just did not have the time.
Thanks for the responses.
The spot to look at is the MongoDbFactory interface. The basic implementation of that takes a Mongo instance and works with it for the entire application lifetime. To achieve per-thread (and thus per-request) database usage, you'll probably have to implement something along the lines of AbstractRoutingDataSource. The idea is pretty much that you have a template method that looks up the tenant per invocation (ThreadLocal-bound, I guess) and then selects a Mongo instance from a set of predefined ones, or applies custom logic to come up with a fresh one for a new tenant, etc.
Keep in mind that MongoDbFactory usually gets used through the getDb() method. However, there are features in MongoDB that need us to provide a getDb(String name): DBRefs (something like a foreign key in the relational world) can point to documents in an entirely different database. So if you're doing the delegation, either avoid using that feature (I think DBRefs pointing to another DB are the only places calling getDb(name)) or handle it explicitly.
From a configuration point of view, you could either override mongoDbFactory() entirely, or not extend the base class at all and come up with your own Java-based configuration.
I used a different DB using Java config; this is how I did it:
@Bean
public MongoDbFactory mongoRestDbFactory() throws Exception {
MongoClientURI uri=new MongoClientURI(environment.getProperty("mongo.uri"));
return new SimpleMongoDbFactory(uri);
}
@Override
public String getDatabaseName() {
return "rest";
}
@Override
public @Bean(name = "secondaryMongoTemplate") MongoTemplate mongoTemplate() throws Exception { // the template names must differ so the bean container can tell them apart
return new MongoTemplate(mongoRestDbFactory());
}
And the other was like this:
@Bean
public MongoDbFactory restDbFactory() throws Exception {
MongoClientURI uri = new MongoClientURI(environment.getProperty("mongo.urirestaurants"));
return new SimpleMongoDbFactory(uri);
}
@Override
public String getDatabaseName() {
return "rest";
}
@Override
public @Bean(name = "primaryMongoTemplate") MongoTemplate mongoTemplate() throws Exception {
return new MongoTemplate(restDbFactory());
}
So when I need to change my database, I only select which config to use.
An example with Spring Boot v2.6.2:
Content of your "application.yml" file :
spring:
  application:
    name: myApp
  data:
    mongodb:
      host: localhost
      port: 27017
      database: FirstDatabase
    mongodbreference:
      host: localhost
      port: 27017
      database: SecondDatabase
In a class named "MultipleMongoProperties.java":
package your.packagename;
import lombok.Data;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.mongo.MongoProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
@Data
@ConfigurationProperties(prefix = "spring.data")
public class MultipleMongoProperties {
private MongoProperties mongodb = new MongoProperties();
private MongoProperties mongodbreference = new MongoProperties();
}
And finally the class "MultipleMongoConfig.java":
package your.packagename;
import com.mongodb.client.MongoClients;
import lombok.RequiredArgsConstructor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.autoconfigure.mongo.MongoProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;
@Configuration
@RequiredArgsConstructor
@EnableConfigurationProperties(MultipleMongoProperties.class)
public class MultipleMongoConfig {
private static final Logger LOG = LoggerFactory.getLogger(MultipleMongoConfig.class);
private final MultipleMongoProperties mongoProperties;
private MongoProperties mongoDestination;
@Bean("referenceMongoTemplate")
@Primary
public MongoTemplate referenceMongoTemplate() {
return new MongoTemplate(referenceFactory(this.mongoProperties.getMongodbreference()));
}
@Bean("destinationMongoTemplate")
public MongoTemplate destinationMongoTemplate() {
return new MongoTemplate(destinationFactory(this.mongoProperties.getMongodb()));
}
public MongoDatabaseFactory referenceFactory(final MongoProperties mongo) {
this.setUriToMongoProperties(mongo);
return new SimpleMongoClientDatabaseFactory(MongoClients.create(mongo.getUri()), mongo.getDatabase());
}
public MongoDatabaseFactory destinationFactory(final MongoProperties mongo) {
this.setUriToMongoProperties(mongo);
return new SimpleMongoClientDatabaseFactory(MongoClients.create(mongo.getUri()), mongo.getDatabase());
}
private void setUriToMongoProperties(MongoProperties mongo) {
mongo.setUri("mongodb://" + mongo.getUsername() + ":" + String.valueOf(mongo.getPassword()) + "@" + mongo.getHost() + ":" + mongo.getPort() + "/" + mongo.getAuthenticationDatabase());
}
}
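The string produced by setUriToMongoProperties follows the standard mongodb://user:password@host:port/authDb scheme (note the @ before the host). A standalone sketch of the same assembly, with made-up credential values:

```java
public class MongoUriBuilder {
    // Mirrors setUriToMongoProperties(): mongodb://user:password@host:port/authDb
    public static String build(String user, String password, String host,
                               int port, String authDb) {
        return "mongodb://" + user + ":" + password + "@"
                + host + ":" + port + "/" + authDb;
    }
}
```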
In another class you just have to implement :
package your.packagename;
import com.mongodb.bulk.BulkWriteResult;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.stereotype.Component;
@Component
public class CollectionRepositoryImpl implements CollectionsRepository {
@Autowired
@Qualifier("referenceMongoTemplate")
private MongoTemplate referenceMongoTemplate;
@Autowired
@Qualifier("destinationMongoTemplate")
private MongoTemplate destinationMongoTemplate;
...
As far as I understand, you want more flexibility in changing the current db on the fly.
I've linked a project that implements multi tenancy in a simple way.
It could be used as a starting point for the application.
It subclasses SimpleMongoDbFactory and provides a custom getDB method to resolve the correct db to use at a given moment. It can be improved in many ways, for example by retrieving the db details from the HttpSession via Spring Session, which could in turn be cached by Redis.
To have different mongoTemplates using different dbs at the same time, you could change the scope of your mongoDbFactory to session.
References:
multi-tenant-spring-mongodb
I work for a company that has multiple brands, therefore we have a couple MongoDB instances on some different hosts holding the same document model for our Customer on each of these brands. (Same structure, not same data)
For the sake of simplicity let's say we have an Orange brand, with database instance serving on port 27017 and Banana brand with database instance serving on port 27018
Currently I'm developing a fraud detection service which is required to connect to all databases and analyze all the customers' behavior together regardless of the brand.
So my "model" has a shared entity for Customer, annotated with @Document (org.springframework.data.mongodb.core.mapping.Document)
Next thing I have is two MongoRepositories such as:
public interface BananaRepository extends MongoRepository<Customer, String> {
List<Customer> findAllByEmail(String email);
}
public interface OrangeRepository extends MongoRepository<Customer, String> {
List<Customer> findAllByEmail(String email);
}
With some stub method for finding customers by Id, Email, and so on. Spring is responsible for generating all implementation classes for such interfaces (pretty standard spring stuff)
In order to hint each of these repositories to connect to the right mongodb instance, I need two Mongo configs, such as:
@Configuration
@EnableMongoRepositories(basePackageClasses = {Customer.class})
public class BananaConfig extends AbstractMongoConfiguration {
@Value("${database.mongodb.banana.username:}")
private String username;
@Value("${database.mongodb.banana.database}")
private String database;
@Value("${database.mongodb.banana.password:}")
private String password;
@Value("${database.mongodb.banana.uri}")
private String mongoUri;
@Override
protected Collection<String> getMappingBasePackages() {
return Collections.singletonList("com.acme.model");
}
@Override
protected String getDatabaseName() {
return this.database;
}
@Override
@Bean(name="bananaClient")
public MongoClient mongoClient() {
final String authString;
//todo: Use MongoCredential
//todo: Use ServerAddress
//(See https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#repositories) 10.3.4
if (valueIsPresent(username) || valueIsPresent(password)) {
authString = String.format("%s:%s@", username, password);
} else {
authString = "";
}
String connectionString = "mongodb://" + authString + mongoUri + "/" + database;
System.out.println("Going to connect to: " + connectionString);
return new MongoClient(new MongoClientURI(connectionString, builder()
.connectTimeout(5000)
.socketTimeout(8000)
.readPreference(ReadPreference.secondaryPreferred())
.writeConcern(ACKNOWLEDGED)));
}
@Bean(name = "bananaTemplate")
public MongoTemplate mongoTemplate(@Qualifier("bananaFactory") MongoDbFactory mongoFactory) {
return new MongoTemplate(mongoFactory);
}
@Bean(name = "bananaFactory")
public MongoDbFactory mongoFactory() {
return new SimpleMongoDbFactory(mongoClient(),
getDatabaseName());
}
private static int sizeOfValue(String value){
if (value == null) return 0;
return value.length();
}
private static boolean valueIsMissing(String value){
return sizeOfValue(value) == 0;
}
private static boolean valueIsPresent(String value){
return !valueIsMissing(value);
}
}
I also have similar config for Orange which points to the proper mongo instance.
Then I have my service like this:
public List<? extends Customer> findAllByEmail(String email) {
return Stream.concat(
bananaRepository.findAllByEmail(email).stream(),
orangeRepository.findAllByEmail(email).stream())
.collect(Collectors.toList());
}
Notice that I'm calling both repositories and then collecting back the results into one single list
What I would expect to happen is that each repository would connect to its corresponding mongo instance and query for the customer by its email.
But this didn't happen. I always got the query executed against the same mongo instance.
But in the database log I can see both connections being made by spring.
It just uses one connection to run the queries for both repositories.
This is not surprising, as both Mongo configs point to the same model package here. Right. But I also tried other approaches, such as creating a BananaCustomer extends Customer in its own model.banana package and an OrangeCustomer extends Customer in its model.orange package, along with specifying the proper basePackageClasses in each config. That didn't work either; I ended up with both queries running against the same database.
:(
After scavenging the official Spring Data MongoDB documentation for hours, and looking through thousands of lines of code here and there, I've run out of options: it seems nobody has done what I'm trying to accomplish before.
Except for this guy, who had to do the same thing but using JPA instead of MongoDB: Link to article
Well, while it's still Spring Data, it's not for MongoDB.
So here is my question:
How can I explicitly tell each repository to use a specific Mongo config?
Magical autowiring rules, except when it doesn't work and nobody understands the magic.
Thanks in advance.
Well, I had a very detailed answer, but StackOverflow complained that it looked like spam and didn't allow me to post it.
The full answer is still available as a Gist file here
The bottom line is that both MongoRepository (interface) and the model object must be placed in the same package.
I am using AWS ECS to host my application and DynamoDB for all database operations. So I'll have the same database with different table names for different environments, such as "dev_users" (for the dev env), "test_users" (for the test env), etc. (This is how our company uses the same Dynamo account for different environments.)
So I would like to change the "tableName" of the model class using the environment variable passed through "AWS ECS task definition" environment parameters.
For Example.
My Model Class is:
@DynamoDBTable(tableName = "dev_users")
public class User {
Now I need to replace the "dev" with "test" when I deploy my container in the test environment. I know I can use
@Value("${DOCKER_ENV:dev}")
to access environment variables. But I'm not sure how to use variables outside the class. Is there any way I can use the Docker env variable to select my table prefix?
My intent is to use it like this:
I know this is not possible like this, but is there any other way or workaround for this?
Edit 1:
I am working on Rahul's answer and facing some issues. Before describing the issues, I'll explain the process I followed.
Process:
I have created the beans in my config class (com.myapp.users.config).
As I don't have repositories, I have given my Model class package name as "basePackage" path. (Please check the image)
For 1), I replaced the table name over-rider bean injection to avoid the error.
For 2), I printed the name being passed to this method, but it is null, so I am checking all the possible ways to pass the value here.
Check the image for error:
I haven't changed anything in my user model class, as the beans should replace the name of the DynamoDBTable when they are executed. But the table name overriding is not happening; data is being pulled from the table name given at the model class level only.
What I am missing here?
The table names can be altered via a customized DynamoDBMapperConfig bean.
For your case, where you have to prefix each table with a literal, you can add the bean as follows; the prefix can be the environment name in your case.
@Bean
public TableNameOverride tableNameOverrider() {
String prefix = ... // Use @Value to inject values via Spring, or use any logic to define the table prefix
return TableNameOverride.withTableNamePrefix(prefix);
}
For more details check out the complete details here:
https://github.com/derjust/spring-data-dynamodb/wiki/Alter-table-name-during-runtime
I was able to achieve table names prefixed with the active profile name.
First, I added a TableNameResolver class as below:
@Component
public class TableNameResolver extends DynamoDBMapperConfig.DefaultTableNameResolver {
private String envProfile;
public TableNameResolver() {}
public TableNameResolver(String envProfile) {
this.envProfile=envProfile;
}
@Override
public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
String stageName = envProfile.concat("_");
String rawTableName = super.getTableName(clazz, config);
return stageName.concat(rawTableName);
}
}
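Stripped of the AWS SDK types, the resolver's logic is a plain prefix concatenation. An illustrative standalone version (class and method names are mine, not part of spring-data-dynamodb):

```java
public class PrefixTableNameResolver {
    private final String envProfile;

    public PrefixTableNameResolver(String envProfile) {
        this.envProfile = envProfile;
    }

    // Mirrors getTableName(): "<activeProfile>_<rawTableName>"
    public String resolve(String rawTableName) {
        return envProfile.concat("_").concat(rawTableName);
    }
}
```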
Then I set up the DynamoDBMapper bean as below:
@Bean
@Primary
public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB amazonDynamoDB) {
DynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB,new DynamoDBMapperConfig.Builder().withTableNameResolver(new TableNameResolver(envProfile)).build());
return mapper;
}
I added the variable envProfile, whose value is the active profile property read from the application.properties file:
@Value("${spring.profiles.active}")
private String envProfile;
We have the same issue with regard to the need to change table names at runtime. We are using spring-data-dynamodb 5.0.2, and the following configuration seems to provide the solution we need.
First, I annotated my bean accessor:
@EnableDynamoDBRepositories(dynamoDBMapperConfigRef = "getDynamoDBMapperConfig", basePackages = "my.company.base.package")
I also set up an environment variable called ENV_PREFIX, which is wired by Spring via SpEL:
@Value("#{systemProperties['ENV_PREFIX']}")
private String envPrefix;
Then I set up a TableNameOverride bean:
@Bean
public DynamoDBMapperConfig.TableNameOverride getTableNameOverride() {
return DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix(envPrefix);
}
Finally, I set up the DynamoDBMapperConfig bean using TableNameOverride injection. In 5.0.2, we had to set a standard DynamoDBTypeConverterFactory in the DynamoDBMapperConfig builder to avoid an NPE:
@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride) {
DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
builder.setTableNameOverride(tableNameOverride);
builder.setTypeConverterFactory(DynamoDBTypeConverterFactory.standard());
return builder.build();
}
In hindsight, I could have set up a DynamoDBTypeConverterFactory bean that returns a standard DynamoDBTypeConverterFactory and injected that into the getDynamoDBMapperConfig() method using the DynamoDBMapperConfig builder. But this also does the job.
I upvoted the other answer, but here is an idea:
Create a base class with all your user details:
@MappedSuperclass
public abstract class AbstractUser {
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
private Long id;
private String firstName;
private String lastName;
}
Create two implementations with different table names and Spring profiles:
@Profile(value= {"dev","default"})
@Entity(name = "dev_user")
public class DevUser extends AbstractUser {
}
@Profile(value= {"prod"})
@Entity(name = "prod_user")
public class ProdUser extends AbstractUser {
}
Create a single JPA repository that uses the mapped superclass:
public interface UserRepository extends CrudRepository<AbstractUser, Long> {
}
Then switch the implementation with the Spring profile:
@RunWith(SpringJUnit4ClassRunner.class)
@DataJpaTest
@Transactional
public class UserRepositoryTest {
@Autowired
protected DataSource dataSource;
@BeforeClass
public static void setUp() {
System.setProperty("spring.profiles.active", "prod");
}
@Test
public void test1() throws Exception {
DatabaseMetaData metaData = dataSource.getConnection().getMetaData();
ResultSet tables = metaData.getTables(null, null, "PROD_USER", new String[] { "TABLE" });
tables.next();
assertEquals("PROD_USER", tables.getString("TABLE_NAME"));
}
}
I am using Spring Data Solr in my project. In some cases the generated Solr queries are too big (e.g. 15KB+) and cause Solr exceptions. This solution: http://codingtricks.fidibuy.com/participant/join/54fce329b760506d5d9e7db3/Spring-Data-Solr-cannot-handle-long-queries
still fails for some queries.
Since directly sending those queries to Solr via POST works fine, I chose to work in this direction. I could not find any way in Spring Data Solr to configure the preferred method (GET/POST) for queries, so I came to the following solution: I extended SolrServer.
public class CustomSolrServer extends HttpSolrServer {
public CustomSolrServer(String home, String core) {
super(home);
setCore(core);
}
@Override
public QueryResponse query(SolrParams params) throws SolrServerException {
METHOD method = METHOD.GET;
if (isBigQuery(params)) {
method = METHOD.POST;
}
return new QueryRequest( params, method ).process( this );
}
}
(some details skipped, setCore() and isBigQuery() are trivial and skipped as well)
and use it as the SolrServer bean in SolrConfiguration.class:
@Configuration
@EnableSolrRepositories(basePackages = { "com.vvy.repository.solr" }, multicoreSupport=false)
@Import(value = SolrAutoConfiguration.class)
@EnableConfigurationProperties(SolrProperties.class)
public class SolrConfiguration {
@Autowired
private SolrProperties solrProperties;
@Value("${spring.data.solr.core}")
private String solrCore;
@Bean
public SolrServer solrServer() {
return new CustomSolrServer(solrProperties.getHost(),solrCore) ;
}
}
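The isBigQuery() check was skipped as trivial in the original. One plausible standalone version (the Map-based signature and the 4096-byte threshold are my assumptions, not from the post) simply estimates the length of the would-be GET query string:

```java
import java.util.Map;

public class QuerySizeCheck {
    // Hypothetical threshold; servlet containers commonly cap request URLs at a few KB
    private static final int MAX_GET_LENGTH = 4096;

    // Rough length of the query string: key=value pairs joined by '&'
    public static boolean isBigQuery(Map<String, String[]> params) {
        int length = 0;
        for (Map.Entry<String, String[]> entry : params.entrySet()) {
            for (String value : entry.getValue()) {
                length += entry.getKey().length() + value.length() + 2; // '=' and '&'
            }
        }
        return length > MAX_GET_LENGTH;
    }
}
```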
This works OK, but it has a couple of drawbacks. I had to set multicoreSupport to false: when Spring Data Solr implements repositories from the interfaces with multicoreSupport on, it uses MultiCoreSolrServerFactory and tries to store one server per core, which is done by cloning them into the holding map. Naturally, this crashes on a customized SolrServer, because SolrServerUtils doesn't know how to clone() it. Also, I have to set the core manually instead of letting Spring Data extract it from the @SolrDocument annotation's parameter on the entity class.
Here are the questions
1) the main and general question: is there any reasonable way to solve the problem of too long queries in Spring Data Solr (or, more specifically, to use POST instead of GET)?
2) a minor one: is there a reasonable way to customize SolrServer in Spring Data Solr and yet maintain multiCoreSupport?
Answer for Q1: Yes, you can use POST instead of GET.
Answer for Q2: Yes, and you are already halfway there. In addition:
1) You have to rename 'CustomSolrServer' to 'HttpSolrServer'; see the method
org.springframework.data.solr.server.support.SolrServerUtils#clone(T, java.lang.String)
for the reason.
2) You don't have to specify a concrete core name. You can specify the core name with the annotation
org.springframework.data.solr.core.mapping.SolrDocument
on the corresponding Solr model.
3) Set multicoreSupport = true.
According to your sample classes, they should look like the following:
package com.x.x.config;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.SolrParams;
public class HttpSolrServer extends org.apache.solr.client.solrj.impl.HttpSolrServer {
public HttpSolrServer(String host) {
super(host);
}
@Override
public QueryResponse query(SolrParams params) throws SolrServerException {
SolrRequest.METHOD method = SolrRequest.METHOD.POST;
return new QueryRequest(params, method).process(this);
}
}
@Configuration
@EnableSolrRepositories(basePackages = { "com.vvy.repository.solr" }, multicoreSupport=true)
@Import(value = SolrAutoConfiguration.class)
@EnableConfigurationProperties(SolrProperties.class)
public class SolrConfiguration {
@Autowired
private SolrProperties solrProperties;
@Bean
public SolrServer solrServer() {
return new com.x.x.config.HttpSolrServer(solrProperties.getHost()) ;
}
}
PS: the latest spring-data-solr 3.x.x already supports a custom query request method; see the linked issue.
This is related to
MongoDB and SpEL Expressions in @Document annotations
This is the way I am creating my mongo template
@Bean
public MongoDbFactory mongoDbFactory() throws UnknownHostException {
String dbname = getCustid();
return new SimpleMongoDbFactory(new MongoClient("localhost"), "mydb");
}
@Bean
MongoTemplate mongoTemplate() throws UnknownHostException {
MappingMongoConverter converter =
new MappingMongoConverter(mongoDbFactory(), new MongoMappingContext());
return new MongoTemplate(mongoDbFactory(), converter);
}
I have a tenant provider class
@Component("tenantProvider")
public class TenantProvider {
public String getTenantId() {
// custom ThreadLocal logic for getting a name
}
}
And my domain class
@Document(collection = "#{#tenantProvider.getTenantId()}_device")
public class Device {
// my fields here
}
As you can see, I have created my mongotemplate as specified in the post, but I still get the error below:
Exception in thread "main" org.springframework.expression.spel.SpelEvaluationException: EL1057E:(pos 1): No bean resolver registered in the context to resolve access to bean 'tenantProvider'
What am I doing wrong?
I finally figured out why I was getting this issue.
When using Servlet 3 initialization, make sure that you add the application context to the mongo mapping context as follows:
@Autowired
private ApplicationContext appContext;
@Bean
public MongoDbFactory mongoDbFactory() throws UnknownHostException {
return new SimpleMongoDbFactory(new MongoClient("localhost"), "apollo-mongodb");
}
@Bean
MongoTemplate mongoTemplate() throws UnknownHostException {
final MongoDbFactory factory = mongoDbFactory();
final MongoMappingContext mongoMappingContext = new MongoMappingContext();
mongoMappingContext.setApplicationContext(appContext);
// Learned from web, prevents Spring from including the _class attribute
final MappingMongoConverter converter = new MappingMongoConverter(factory, mongoMappingContext);
converter.setTypeMapper(new DefaultMongoTypeMapper(null));
return new MongoTemplate(factory, converter);
}
Check the autowiring of the context, and also the call
mongoMappingContext.setApplicationContext(appContext);
With these two lines I was able to get the component wired correctly and use it in multi-tenant mode.
The above answers just worked partially in my case.
I've been struggling with the same problem and finally realized that under some runtime execution path (when RepositoryFactorySupport relies on AbstractMongoQuery to query MongoDB, instead of SimpleMongoRepository which as far as I know is used in "out of the box" methods provided by SpringData) the metadata object of type MongoEntityMetadata that belongs to MongoQueryMethod used in AbstractMongoQuery is updated only once, in a method named getEntityInformation()
Because MongoQueryMethod object that holds a reference to this 'stateful' bean seems to be pooled/cached by infrastructure code #Document annotations with Spel not always work.
As far as I know as a developer we just have one choice, use MongoOperations directly from your #Repository bean in order to be able to specify the right collection name evaluated at runtime with Spel.
I've tried to use AOP to modify this behaviour, by setting a null collection name in MongoEntityMetadata, but that doesn't help: the inner classes of AbstractMongoQuery that implement the Execution interface would also need to be changed, so that they check whether the MongoEntityMetadata collection name is null and, if so, call a different MongoTemplate method signature.
MongoTemplate is smart enough to guess the right collection name by using its private method
private <T> String determineEntityCollectionName(T obj)
I've created a ticket in Spring's JIRA: https://jira.spring.io/browse/DATAMONGO-1043
If you have the mongoTemplate configured as in the related issue, the only thing I can think of is this:
<context:component-scan base-package="com.tenantprovider.package" />
Or if you want to use annotations:
@ComponentScan(basePackages = "com.tenantprovider.package")
You might not be scanning the tenant provider package.
Ex:
@ComponentScan(basePackages = "com.tenantprovider.package")
@Document(collection = "#{#tenantProvider.getTenantId()}_device")
public class Device {
    // my fields here
}
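For the SpEL expression above to resolve, a bean named tenantProvider has to exist in the scanned package. A minimal sketch of such a provider follows; the ThreadLocal-based design and all names here are assumptions (in a real Spring app the class would carry `@Component("tenantProvider")`, omitted here to keep the sketch dependency-free):

```java
// Hypothetical tenant provider backing the SpEL expression
// "#{#tenantProvider.getTenantId()}_device".
public class TenantProvider {

    // One tenant id per request-handling thread; a servlet filter would
    // typically call setTenantId() at the start of each request.
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    public void setTenantId(String tenantId) {
        CURRENT.set(tenantId);
    }

    public String getTenantId() {
        return CURRENT.get();
    }

    // Call at the end of the request to avoid leaking tenant ids
    // across pooled threads.
    public void clear() {
        CURRENT.remove();
    }
}
```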
When my Grails app starts up, I also begin a Spring Integration and Batch process in the background. I want to have some DB connection properties stored in the Config.groovy file, but how do I access them from a Java class used in the Integration/Batch process?
I found this thread:
Converting Java -> Grails ... How do I load these properties?
Which suggests using:
private Map config = ConfigurationHolder.getFlatConfig();
followed by something like:
String driver = (String) config.get("jdbc.driver");
This actually works fine (the properties are loaded correctly from Config.groovy), but the problem is that ConfigurationHolder is deprecated. And every thread I've found dealing with the issue seems to be Grails-specific and suggests using dependency injection, like in this thread:
How to access Grails configuration in Grails 2.0?
So is there a non-deprecated way to get access to the Config.groovy properties from a Java class file?
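For context, getFlatConfig() collapses the nested ConfigObject into a map with dot-separated keys, which is why the lookup key above is "jdbc.driver". A rough, dependency-free illustration of that flattening (a simplified model, not Grails' actual implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified model of getFlatConfig(): collapse a nested config map into
// dot-separated keys such as "jdbc.driver".
public class FlatConfig {

    public static Map<String, Object> flatten(Map<String, Object> nested) {
        Map<String, Object> flat = new LinkedHashMap<>();
        flatten("", nested, flat);
        return flat;
    }

    private static void flatten(String prefix, Map<String, Object> nested,
                                Map<String, Object> flat) {
        for (Map.Entry<String, Object> e : nested.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            Object value = e.getValue();
            if (value instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> child = (Map<String, Object>) value;
                flatten(key, child, flat);   // recurse into nested sections
            } else {
                flat.put(key, value);        // leaf value: store under dotted key
            }
        }
    }
}
```

Flattening a config like `[jdbc: [driver: "org.h2.Driver"]]` yields a map where `get("jdbc.driver")` returns `"org.h2.Driver"`.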
Just checked some of my existing code; I use the method described by Burt Beckwith:
Create a new file: src/groovy/ctx/ApplicationContextHolder.groovy
package ctx

import javax.servlet.ServletContext

import org.codehaus.groovy.grails.commons.GrailsApplication
import org.codehaus.groovy.grails.plugins.GrailsPluginManager
import org.springframework.context.ApplicationContext
import org.springframework.context.ApplicationContextAware

@Singleton
class ApplicationContextHolder implements ApplicationContextAware {
private ApplicationContext ctx
private static final Map<String, Object> TEST_BEANS = [:]
void setApplicationContext(ApplicationContext applicationContext) {
ctx = applicationContext
}
static ApplicationContext getApplicationContext() {
getInstance().ctx
}
static Object getBean(String name) {
TEST_BEANS[name] ?: getApplicationContext().getBean(name)
}
static GrailsApplication getGrailsApplication() {
getBean('grailsApplication')
}
static ConfigObject getConfig() {
getGrailsApplication().config
}
static ServletContext getServletContext() {
getBean('servletContext')
}
static GrailsPluginManager getPluginManager() {
getBean('pluginManager')
}
// For testing
static void registerTestBean(String name, bean) {
TEST_BEANS[name] = bean
}
// For testing
static void unregisterTestBeans() {
TEST_BEANS.clear()
}
}
Then, edit grails-app/config/spring/resources.groovy to include:
applicationContextHolder(ctx.ApplicationContextHolder) { bean ->
bean.factoryMethod = 'getInstance'
}
Then, in your files inside src/java or src/groovy, you can call:
GrailsApplication app = ApplicationContextHolder.getGrailsApplication();
ConfigObject config = app.getConfig();
Just to note: in Grails 2.x there's a Holders class that replaces this deprecated holder. You can use it to access grailsApplication in a static context.
I can't work out why this is not working, but I can suggest an alternative approach entirely. Grails sets up a PropertyPlaceholderConfigurer that takes its values from the grailsApplication.config, so you could declare a
public void setDriver(String driver) { ... }
on your class and then say
<bean class="com.example.MyClass" id="exampleBean">
<property name="driver" value="${jdbc.driver}" />
</bean>
This also works in resources.groovy if you're using the beans DSL, but you must remember to use single quotes rather than double:
exampleBean(MyClass) {
driver = '${jdbc.driver}'
}
Using "${jdbc.driver}" doesn't work because Groovy interprets it as a GString and tries (and fails) to resolve it when resources.groovy is processed, whereas what you need is to put a literal ${...} expression in as the property value, to be resolved later by the placeholder configurer.
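The late-substitution behaviour can be modeled in plain Java: the bean property must still contain the literal text `${jdbc.driver}` when the placeholder configurer runs, and only then is it replaced with the configured value. A simplified sketch of that substitution step (an illustration, not Spring's actual resolver):

```java
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified model of what a placeholder configurer does: replace literal
// ${...} tokens with values looked up from configuration.
public class PlaceholderResolver {

    private static final Pattern TOKEN = Pattern.compile("\\$\\{([^}]+)}");

    public static String resolve(String template, Properties props) {
        Matcher m = TOKEN.matcher(template);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Unknown keys are left as-is, keeping the literal ${...} text.
            String value = props.getProperty(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

This is why the single-quoted Groovy string works: it hands the resolver the intact token `${jdbc.driver}`, while a double-quoted GString would have consumed the token before the resolver ever saw it.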