There is a third-party application whose database is accessed by my application. Its database schema has been changed several times, so there are about four different database schemas right now (different columns, different select conditions for the same entities).
For example, there is an entity "Application". For different schemas it could be retrieved by:
1) SELECT * FROM apps WHERE cell_number < 65535 AND page_number < 65535 AND top_number = 65535
2) SELECT * FROM menu_item WHERE cell_number > -1 AND page_number > -1 AND parent_id = -1 AND component_name IS NOT NULL
And so on...
So, what design pattern (in Java) would be best suited to supporting multiple database schemas from different versions of the same application? It should, of course, be ready for future changes.
It's not an easy task, because it is difficult to map a table structure to an object properly (nowadays an ORM is often used to perform this task).
From your question, it seems that declaring Application as an abstract class or interface and providing different implementations is enough:
public abstract class Application {
    public abstract Application getAnApplication();
}

public class ConcreteApplicationOne extends Application {
    @Override
    public Application getAnApplication() {
        // Retrieve the application data from database schema 1,
        // build the object and return it.
    }
}

public class ConcreteApplicationTwo extends Application {
    @Override
    public Application getAnApplication() {
        // Retrieve the application data from database schema 2,
        // build the object and return it.
    }
}
And use some sort of factory to give the user the right concrete Application class:
public class ApplicationFactory {
    public Application getApplicationImplementation() {
        if (cond1) { // cond1: whatever check identifies schema version 1
            return new ConcreteApplicationOne();
        } else {
            return new ConcreteApplicationTwo();
        }
    }
}
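How cond1 is evaluated is up to you. A minimal sketch, assuming the schema versions can be told apart by which tables exist (as in the example queries above) and using plain JDBC (java.sql) metadata:

// Hypothetical detection: schema 1 exposes an "apps" table,
// schema 2 a "menu_item" table.
public Application getApplicationImplementation(Connection conn) throws SQLException {
    DatabaseMetaData meta = conn.getMetaData();
    try (ResultSet rs = meta.getTables(null, null, "apps", null)) {
        if (rs.next()) {
            return new ConcreteApplicationOne();
        }
    }
    return new ConcreteApplicationTwo();
}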
I believe the solution to your problem is to define your data classes in your application and use an ORM like Hibernate to generate the database tables in your DB. You will also need to look into the migration functionality. Please check out the following article, which covers this topic:
Hibernate and DB migration
By moving the data structure into your primary code base, you gain the following:
No need to maintain code and logic in two places and in two languages
Easier to test as there is no logic in DB
The migration script can be generated automatically
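As a rough illustration (a sketch, not a full migration setup), Hibernate can derive the tables from the mapped classes via its hbm2ddl setting; for production-grade migrations a dedicated tool such as Flyway or Liquibase is safer:

import org.hibernate.cfg.Configuration;

// Illustrative only: have Hibernate create/update the tables from the entities.
Configuration cfg = new Configuration().configure();
cfg.setProperty("hibernate.hbm2ddl.auto", "update");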
We are doing a data migration from one database to another using Hibernate and Spring Batch. The example below is slightly disguised.
Therefore, we are using the standard processing pipeline:
return jobBuilderFactory.get("migrateAll")
.incrementer(new RunIdIncrementer())
.listener(listener)
.flow(DConfiguration.migrateD())
and migrateD consists of three steps:
@Bean(name = "migrateDsStep")
public Step migrateDs() {
    return stepBuilderFactory.get("migrateDs")
            .<org.h2.D, org.mssql.D>chunk(100)
            .reader(dReader())
            .processor(dItemProcessor)
            .writer(dWriter())
            .listener(chunkLogger)
            .build();
}
Now assume that this table has a many-to-many relationship to another table. How can I persist that? I basically have a JPA entity class for each of my entities, and I fill those in the processor, which does the actual migration from the old database objects to the new ones.
@Component
@Import({MssqldConfiguration.class, H2dConfiguration.class})
public class ClassificationItemProcessor implements ItemProcessor<org.h2.D, org.mssql.D> {

    public ClassificationItemProcessor() {
        super();
    }

    public org.mssql.D process(org.h2.D a) throws Exception {
        org.mssql.D di = new org.mssql.D();
        di.setA(a.getA());
        di.setB(a.getB());
        // Asking for the related objects would e.g. be possible via the
        // following, but this does not work:
        // Set<E> es = eRepository.findById(a.getEs());
        // di.setEs(es);
        // ...
        // How to model an m:n relationship here?
        return di;
    }
}
So I could basically ask for the related objects via another database call (a repository) and add them to d. But when I do that, I run into LazyInitializationExceptions, or, when it does succeed, the data in the intermediate tables sometimes has not been filled in.
What is the best practice to model this?
This is not a Spring Batch issue, it is rather a Hibernate mapping issue. As far as Spring Batch is concerned, your input items are of type org.h2.D and your output items are of type org.mssql.D. It is up to you to define what an item is and how to "enrich" it in your item processor.
You need to make sure that the items received by the writer are completely "filled in", meaning that you have already set any other entities on them (be it a single entity or a set of entities, such as di.setEs(es) in your example). If this leads to lazy initialization exceptions, you need to change your model to be eagerly initialized instead, because Spring Batch cannot help at that level.
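For example (a sketch only; the entity and field names are illustrative, matching the disguised names above), the association could be mapped for eager fetching so each item reaches the writer fully populated:

import java.util.Set;
import javax.persistence.*;

@Entity
public class D {

    @Id
    private Long id;

    // Fetch the m:n association eagerly so that the writer never touches
    // an uninitialized lazy collection outside an open session.
    @ManyToMany(fetch = FetchType.EAGER)
    @JoinTable(name = "d_e") // hypothetical join table name
    private Set<E> es;
}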
Since spring-data-neo4j 6.0 the @Depth annotation for query methods has been removed (DATAGRAPH-1333, commit).
How would one migrate existing 5.3 code which uses the annotation to 6.0? There is no mention of it in the migration guide.
Example usage, documented in the 5.3.6.RELEASE reference:
public interface MovieRepo extends Neo4jRepository<Movie, Long> {

    @Depth(1) // Default: load simple properties and immediately-related objects
    Optional<Movie> findById(Long id);

    @Depth(0) // Load simple properties only
    Optional<Movie> findByProperty1(String property1);

    @Depth(2) // Load simple properties, immediately-related objects and their immediately-related objects
    Optional<Movie> findByProperty2(String property2);

    @Depth(-1) // Load the whole relationship graph
    Optional<Movie> findByProperty3(String property3);
}
Are custom queries the only option or is there a replacement?
There is no custom depth anymore in SDN. It either loads everything that is described in your Java model, or you have to supply custom Cypher statements.
Some background on this: with SDN 6 we dropped the internal session cache completely, because we want to ensure that the Java object graph is in sync with the database graph after loading and persisting. As a consequence, we cannot track a custom depth over multiple operations anymore.
A partially loaded graph now reflects the truth of the Java model, and when persisted it might remove existing (but not loaded) relationships.
Some insights can be found in the documentation section on query creation: https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#query-creation
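As an illustration of the custom-statement route (a sketch; the Cypher assumes the Movie label and the property names from the example above):

import java.util.Optional;
import org.springframework.data.neo4j.repository.Neo4jRepository;
import org.springframework.data.neo4j.repository.query.Query;
import org.springframework.data.repository.query.Param;

public interface MovieRepo extends Neo4jRepository<Movie, Long> {

    // Roughly the equivalent of the old @Depth(0): return the node itself
    // without mapping any of its relationships.
    @Query("MATCH (m:Movie) WHERE m.property1 = $property1 RETURN m")
    Optional<Movie> findByProperty1(@Param("property1") String property1);
}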
I am very new to Spring Boot and Spring Data JPA and working on a use case where I am required to create users in different databases.
The application will receive 2 inputs from a queue - username and database name.
Using this I have to provision the given user in the given database.
I am unable to work out the project architecture.
The query I need to run will be of the format: create user ABC identified by password;
What should the project look like in terms of model classes, repositories etc.? Since I do not have an actual table against which the query will be run, do I need a model class at all, given that no column mappings will be happening as such?
TL;DR: help with architecting a Spring Boot / Spring Data JPA application configured with multiple data sources to run queries of the format: create user identified by password
I have been using this GitHub repo for reference - https://github.com/jahe/spring-boot-multiple-datasources/blob/master/src/main/java/com/foobar
I'll be making some assumptions here:
your database of choice is Oracle, based on provided syntax: create user ABC identified by password
you want to create and list users
your databases are well-known and defined in JNDI
I can't just provide code unfortunately as setting it up would take me some work, but I can give you the gist of it.
Method 1: using JPA
first, create a User entity and a corresponding UserRepository. Bind the entity to the all_users table. The primary key will probably be either the USERNAME or the USER_ID column... but it doesn't really matter, as you won't be doing any inserts into that table.
to create a user, add a dedicated method to your UserRepository specifying the user creation query within a native @Query annotation. It should work out-of-the-box.
to list users you shouldn't need to do anything, as your entity at this point is already bound to the correct table. Just call the appropriate (and already existing) method on your repository.
The above in theory covers the creation and listing of users in a given database using JPA.
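A minimal sketch of what the entity from the first bullet might look like (assuming Oracle: USERNAME and USER_ID are actual columns of the ALL_USERS dictionary view; the class name is illustrative):

import javax.persistence.*;

@Entity
@Table(name = "ALL_USERS")
public class User {

    @Id
    @Column(name = "USERNAME")
    private String username;

    @Column(name = "USER_ID")
    private Long userId;

    // getters and setters omitted for brevity
}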
If you have a limited number of databases (and therefore a limited number of well-known JNDI datasources), at this point you can proceed as shown in the GitHub example you referenced, by providing different @Configuration classes for each DataSource, each with its related (identical) repository living in a separate package.
You will of course have to add some logic that allows you to appropriately select the JpaRepository to use for the operations.
This will lead to some code duplication, and it works well only if the needs remain very simple over time. That is: it works if all your "microservice" will ever have to do is this create/list (and maybe delete) of users, and if the number of datasources remains small, as each new datasource will require you to add new classes, recompile and redeploy the microservice.
Alternatively, try with the approach proposed here:
https://www.endpoint.com/blog/2016/11/16/connect-multiple-jpa-repositories-using
Personally, however, I would throw JPA out of the window completely, as it's anything but easy to dynamically configure arbitrary DataSource objects and to reconfigure the repositories to work against a different DataSource each time; the above solution will force you into constant maintenance of such a simple application.
What I would do instead is stick with NamedParameterJdbcTemplate, initialising it by using JndiTemplate. Example:
void createUser(String username, String password, String database) throws NamingException {
    DataSource ds = (DataSource) new JndiTemplate().lookup(database);
    NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);
    // Note: identifiers cannot be bound as parameters in DDL, so the
    // statement has to be assembled as a string; validate the inputs first.
    npjt.getJdbcOperations().execute("create user " + username + " identified by " + password);
}

List<Map<String, Object>> listUsers(String database) throws NamingException {
    DataSource ds = (DataSource) new JndiTemplate().lookup(database);
    NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);
    return npjt.queryForList("select * from all_users", new HashMap<>());
}
Provided that your container already has the JNDI datasources defined, the above code should cover both the creation of a user and the listing of users. No need to define entities, repositories or anything else. You don't even have to define your datasources in a Spring @Configuration. The above code (which you will have to test) is really all you need, so you could wire it into a @Controller and be done with it.
If you don't use JNDI it's no problem either: you can use HikariCP to define your datasources, providing the additional arguments as parameters.
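For instance (a sketch; the JDBC URL and the admin credentials are placeholders that would come from your queue message or configuration):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/" + database); // placeholder URL
config.setUsername(adminUser);     // placeholder credentials
config.setPassword(adminPassword);
DataSource ds = new HikariDataSource(config);
// Hand this DataSource to the NamedParameterJdbcTemplate exactly as above.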
This solution will work no matter how many different datasources you have, and it won't need redeployment unless you really have to work on its features. Plus, it doesn't require the developer to know JPA, and it doesn't spread the configuration all over the place.
I have an issue where I have only one database to use, but I have multiple servers and I want each server to use a different table name.
Right now my class is configured as:
#Entity
#Table(name="loader_queue")
class LoaderQueue
I want to be able to have dev1 server point to loader_queue_dev1 table, and dev2 server point to loader_queue_dev2 table for instance.
Is there a way I can do this, with or without using annotations?
I want to be able to have one single build and then at runtime use something like a system property to change that table name.
For Hibernate 4.x, you can use a custom naming strategy that generates the table name dynamically at runtime. The server name could be provided by a system property and so your strategy could look like this:
public class ServerAwareNamingStrategy extends ImprovedNamingStrategy {

    @Override
    public String classToTableName(String className) {
        String tableName = super.classToTableName(className);
        return resolveServer(tableName);
    }

    private String resolveServer(String tableName) {
        StringBuilder tableNameBuilder = new StringBuilder();
        tableNameBuilder.append(tableName);
        tableNameBuilder.append("_");
        tableNameBuilder.append(System.getProperty("SERVER_NAME"));
        return tableNameBuilder.toString();
    }
}
And supply the naming strategy as a Hibernate configuration property:
<property
name="hibernate.ejb.naming_strategy"
value="my.package.ServerAwareNamingStrategy"
/>
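Each server then picks its table suffix at startup through the system property, e.g. (assuming a runnable jar; the artifact name is illustrative):

java -DSERVER_NAME=dev1 -jar app.jar

With ImprovedNamingStrategy, LoaderQueue maps to loader_queue, so the strategy above resolves it to loader_queue_dev1.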
I would not do this. It is very much against the grain of JPA and very likely to cause problems down the road. I'd rather add a layer of views to the tables providing unified names to be used by your application.
But you asked, so here are some ideas how it might work:
You might be able to create the mapping for your classes completely in code. This is likely to be tedious, but it gives you full flexibility.
You can implement a NamingStrategy which translates your class names to table names and depends on the instance it is running on.
You can change your code during the build process to build two (or more) artefacts from one source.
I'm using the datasources plugin for Grails described here: http://burtbeckwith.com/blog/?p=70
I'm connecting to 2 MySQL database schemas on the same server: my_schema_1 and my_schema_2. Most of the data I need comes from my_schema_1, but one of its tables contains a column that references one of the tables in my_schema_2.
Here are my datasource definitions in my Datasources.groovy file (simplified):
datasources = {
datasource(name: 'my_schema_1') {
domainClasses([Question, Answer])
driverClassName('com.mysql.jdbc.Driver')
url('jdbc:mysql://test.myserver.com/my_schema_1')
username('***')
password('***')
}
datasource(name: 'my_schema_2') {
domainClasses([Genre])
driverClassName('com.mysql.jdbc.Driver')
url('jdbc:mysql://test.myserver.com/my_schema_2')
username('***')
password('***')
}
}
Here are my 3 class definitions:
class Question {
String text
Answer answer
Genre genre
}
class Answer {
String text
}
class Genre {
String name
}
Whenever I try to perform a criteria query on the Question class, I get the following mapping exception:
An association from the table question refers to an unmapped class: Genre
If I comment out the genre property in the Question class, everything works fine. If I perform a criteria query on the Genre class itself, everything works fine. There just seems to be a problem joining the 2 classes across schemas. (Of course, it's also very possible I missed something or did something incorrectly.)
Am I doing something wrong or is this a limitation of the datasources plugin? And, if this is a limitation of the plugin, what alternatives could I use to achieve what I need?
Any help/suggestions are much appreciated.
Thanks,
B.J.
The datasources plugin only supports weak references between databases. This means you will need to manage the integrity of the associations yourself. The best way to accomplish this is to implement a service that is capable of querying both domain instances and providing you with the composite domain instance.
The link you referenced notes this towards the bottom of the entry. Also, here is the same question posed (and answered) on the Grails mailing lists.
I found a simpler solution since the databases are on the same server.
I simply define one datasource as follows (without specifying a database):
datasources = {
datasource(name: 'my_schemas') {
domainClasses([Question, Answer, Genre])
driverClassName('com.mysql.jdbc.Driver')
url('jdbc:mysql://test.myserver.com')
username('***')
password('***')
}
}
Then I specify the database in my domain classes' mapping sections:
class Question {
String text
Answer answer
Genre genre
static mapping = {
table 'my_schema_1.question'
}
}
class Answer {
String text
static mapping = {
table 'my_schema_1.answer'
}
}
class Genre {
String name
static mapping = {
table 'my_schema_2.genre'
}
}
Again, this only works because the 2 databases are on the same server, and they use the same username/password.