I have a situation where I have only one database, but multiple servers, and I want each server to use a different table name.
Right now my class is configured as:
@Entity
@Table(name = "loader_queue")
class LoaderQueue
I want to be able to have dev1 server point to loader_queue_dev1 table, and dev2 server point to loader_queue_dev2 table for instance.
Is there a way I can do this, with or without using annotations?
I want to be able to have one single build and then at runtime use something like a system property to change that table name.
For Hibernate 4.x, you can use a custom naming strategy that generates the table name dynamically at runtime. The server name could be provided by a system property and so your strategy could look like this:
import org.hibernate.cfg.ImprovedNamingStrategy;

public class ServerAwareNamingStrategy extends ImprovedNamingStrategy {

    @Override
    public String classToTableName(String className) {
        String tableName = super.classToTableName(className);
        return resolveServer(tableName);
    }

    // Appends the current server name (e.g. "dev1") to the base table name.
    private String resolveServer(String tableName) {
        StringBuilder tableNameBuilder = new StringBuilder();
        tableNameBuilder.append(tableName);
        tableNameBuilder.append("_");
        tableNameBuilder.append(System.getProperty("SERVER_NAME"));
        return tableNameBuilder.toString();
    }
}
And supply the naming strategy as a Hibernate configuration property:
<property
name="hibernate.ejb.naming_strategy"
value="my.package.ServerAwareNamingStrategy"
/>
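Each server can then be started with its own value for the system property (the jar name here is a placeholder), e.g.:

java -DSERVER_NAME=dev1 -jar app.jar

so the dev1 instance would resolve loader_queue_dev1 while the very same build on dev2 would resolve loader_queue_dev2.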
I would not do this. It is very much against the grain of JPA and very likely to cause problems down the road. I'd rather add a layer of views to the tables providing unified names to be used by your application.
But you asked, so here are some ideas for how it might work:
You might be able to create the mapping for your classes completely in code. This is likely to be tedious, but gives you full flexibility.
You can implement a NamingStrategy which translates your class name to table names, and depends on the instance it is running on.
You can change your code during the build process to build two (or more) artefacts from one source.
I am very new to Spring Boot and Spring Data JPA and am working on a use case where I am required to create users in different databases.
The application will receive two inputs from a queue: a username and a database name.
Using this I have to provision the given user in the given database.
I am unable to understand the project architecture.
The query I need to run has the format create user ABC identified by password;
So what should the project look like in terms of model classes, repositories, etc.? Since I do not have an actual table against which the query will run, do I need a model class at all, given that there will be no column mappings?
TL;DR - I need help architecting a Spring Boot / Spring Data JPA application configured with multiple data sources, to run queries of the format: create user identified by password
I have been using this GitHub repo for reference - https://github.com/jahe/spring-boot-multiple-datasources/blob/master/src/main/java/com/foobar
I'll be making some assumptions here:
your database of choice is Oracle, based on provided syntax: create user ABC identified by password
you want to create and list users
your databases are well-known and defined in JNDI
I can't just provide code unfortunately as setting it up would take me some work, but I can give you the gist of it.
Method 1: using JPA
First, create a User entity and a corresponding UserRepository. Bind the entity to the all_users table. The primary key will probably be either the USERNAME or the USER_ID column... but it doesn't really matter, as you won't be doing any inserts into that table.
To create a user, add a dedicated method to your UserRepository, specifying the user creation query as a native @Query. It should work out-of-the-box.
To list users you shouldn't need to do anything, as your entity at this point is already bound to the correct table. Just call the appropriate (and already existing) method in your repository.
The above in theory covers the creation and listing of users in a given database using JPA.
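A minimal sketch of the entity and repository described above (the javax namespace, column names, and class names are assumptions; the two types would live in separate files):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import org.springframework.data.jpa.repository.JpaRepository;

// Read-only mapping of Oracle's ALL_USERS dictionary view.
@Entity
@Table(name = "all_users")
public class User {

    @Id
    @Column(name = "username")
    private String username;

    @Column(name = "user_id")
    private Long userId;

    // getters and setters omitted
}

// In a separate file. Listing users comes for free via the inherited findAll();
// the creation method would be added here as a native @Query, per the answer.
public interface UserRepository extends JpaRepository<User, String> {
}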
If you have a limited number of databases (and therefore a limited number of well-known JNDI datasources) at this point you can proceed as shown in the GitHub example you referenced, by providing different #Configuration classes for each different DataSource, each with the related (identical) repository living in a separate package.
You will of course have to add some logic that will allow you to appropriately select the JpaRepository to use for the operations.
This will lead to some code duplication, and it works well only if the needs remain very simple over time. That is, it works if all your "microservice" will ever have to do is this create/list (and maybe delete) of users, and the number of datasources remains small over time, as each new datasource will require you to add new classes, recompile, and redeploy the microservice.
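For reference, one of those per-datasource configuration classes might look roughly like this (package names, JNDI names, and bean names are all assumptions, following the general shape of the referenced GitHub example; a second datasource would get a near-identical twin class):

import javax.naming.NamingException;
import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.jndi.JndiTemplate;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableJpaRepositories(
        basePackages = "com.example.db1.repository",
        entityManagerFactoryRef = "db1EntityManagerFactory",
        transactionManagerRef = "db1TransactionManager")
public class Db1Config {

    @Bean
    public DataSource db1DataSource() throws NamingException {
        return new JndiTemplate().lookup("java:comp/env/jdbc/db1", DataSource.class);
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean db1EntityManagerFactory(
            EntityManagerFactoryBuilder builder) throws NamingException {
        return builder
                .dataSource(db1DataSource())
                .packages("com.example.db1.entity")
                .persistenceUnit("db1")
                .build();
    }

    @Bean
    public PlatformTransactionManager db1TransactionManager(
            @Qualifier("db1EntityManagerFactory") EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}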
Alternatively, try with the approach proposed here:
https://www.endpoint.com/blog/2016/11/16/connect-multiple-jpa-repositories-using
Personally, however, I would throw JPA out of the window completely: it's anything but easy to dynamically configure arbitrary DataSource objects and to repoint the repositories at a different DataSource each time, and the above solution will force constant maintenance on such a simple application.
What I would do instead is stick with NamedParameterJdbcTemplate, initialising it via JndiTemplate. Example:
void createUser(String username, String password, String database) throws NamingException {
    DataSource ds = new JndiTemplate().lookup(database, DataSource.class);
    NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);
    // Oracle does not accept bind variables in DDL, so the statement is
    // concatenated here; validate the inputs first to avoid SQL injection.
    npjt.getJdbcOperations().execute("create user " + username + " identified by " + password);
}

List<Map<String, Object>> listUsers(String database) throws NamingException {
    DataSource ds = new JndiTemplate().lookup(database, DataSource.class);
    NamedParameterJdbcTemplate npjt = new NamedParameterJdbcTemplate(ds);
    return npjt.queryForList("select * from all_users", new HashMap<>());
}
Provided that your container has the JNDI datasources already defined, the above code should cover both the creation of a user and the listing of users. No need to define entities or repositories or anything else. You don't even have to define your datasources in a Spring @Configuration. The above code (which you will have to test) is really all you need, so you could wire it into a @Controller and be done with it.
If you don't use JNDI it's no problem either: you can use HikariCP to define your datasources, providing the additional arguments as parameters.
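For instance, a hypothetical replacement for the JNDI lookup (the JDBC URL and credentials are placeholders, not from the original answer):

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Build a pooled DataSource on the fly instead of looking one up in JNDI.
DataSource dataSourceFor(String database) {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/" + database);
    config.setUsername("admin_user");
    config.setPassword("admin_password");
    return new HikariDataSource(config);
}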
This solution will work no matter how many different datasources you have and won't need redeployment unless you really have to work on its features. Plus, it doesn't need the developer to know JPA and it doesn't need to spread the configuration all over the place.
I'm using simple Java classes as the schema for my MongoDB collection.
There are several frameworks for serialization/deserialization to/from JSON and for CRUD operations against MongoDB (I've looked into the Jackson serializer and Morphia).
But none of them seems to provide a solution for handling schema changes:
Let's say I have this class as my schema:
class Person {
    String name;
    int age;
    String occupation;
}
In my code, I will probably use a setter in some place for age:
Person newDbEntry = new Person();
newDbEntry.setAge(45);
newDbEntry.setOccupation("Carpenter");
Now let's say that at some point in the development process, it is decided that the "age" field needs to be renamed to "theAge", and that the "occupation" field should be removed from this collection completely and moved to a new collection.
The problem that I'm faced with is that all my queries look like this:
JsonObject query = new JsonObject().put("age", new JsonObject().put("$gte", 22));
In other words, all field names appear in queries as plain strings (and likewise in all the other Mongo APIs: update, findAndModify, etc.).
I'm looking for a way to "bind" all mentions of the field "age" in my code with the POJO class- so that when something in the POJO schema changes (like renaming this field), I'll have (ideally) compiler errors in all queries that mention this field.
As it currently stands, changes to the schema cause no compiler errors and, more critically, usually no runtime errors: the old string query just quietly returns no results, or something similar. This makes schema changes very hard to implement.
How should this be done correctly?
Here's the solution that I ended up using:
Project Lombok now supports generating field name constants:
https://projectlombok.org/features/experimental/FieldNameConstants
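Annotating a class with @FieldNameConstants makes Lombok generate a nested Fields class holding each field name as a constant. A minimal sketch against the Person class from the question:

import lombok.experimental.FieldNameConstants;

@FieldNameConstants
class Person {
    String name;
    int age;
}

// Lombok generates a nested class roughly equivalent to:
//   public static final class Fields {
//     public static final String name = "name";
//     public static final String age = "age";
//   }
// so queries can reference Person.Fields.age instead of the literal "age".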
So instead of using the name hardcoded as a string:
serviceRepository.setField(id, "service.serviceName", "newName");
I use:
serviceRepository.setField(id, ConnectivityServiceDetails.Fields.service + "." + ConnectivityService.Fields.serviceName, "newName");
This way, when we search in IntelliJ for usages of this field (or try to refactor it), it will also find those places automatically.
Is it possible to create a template/live template with IntelliJ to create the full stack of usual boilerplate for a Domain Object?
Let me give you an example: a usual structure in a backend could look something like this:
Define a functional domain object: Foobar
Create the entity FoobarEntity:
@Entity
@Table(name = "foobar")
@Getter
@Setter
public class FoobarEntity implements Persistable<Long> {

    @Id
    private Long id;

    @Column
    private String someData;

    @Column
    private String someMoreData;
}
Now the boilerplate party starts: create data transfer objects, data access objects, services, and so on:
Create FoobarDto (to get started)
Interface FoobarDao (CRUD) and its default implementation FoobarDaoJpa
Interface FoobarService (CRUD) and its default implementation FoobarServiceImpl
A mapper to map from entity to DTO: FoobarDtoMapper
Maybe a Spring config FoobarConfig
Maybe a filter object for searching: FoobarSearchFilter
Maybe some more classes for a REST API, like FoobarResource, FoobarController, ...
Some further considerations: more annotations (like @Service or something like that) would be somewhat useless, since all the classes start from the same code base (like add, delete, edit, and load methods for a service and a DAO) but will grow over the further course of development.
Is this somehow possible with IntelliJ (or another tool you know)?
You can create entities like that with the Hibernate plugin. It creates entities according to your table structure. Just add the Hibernate framework to your project (on Linux, press Ctrl+Shift+A, then type "hibernate" and select "Add Hibernate framework"); you'll then get a Hibernate tool window.
Now right-click on your project's name (it will be different in your case) and select Generate Persistence Mapping > By Database Schema.
Now a window will open and you can select the tables you want to create an entity for.
Note that you need to have set up your database in IDEA for this to work.
For your third point, use file templates. Again, press Ctrl+Shift+A, but this time type "file template": create the templates once and just use them...
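As an illustration, a file template for one of the boilerplate classes might look like this (IntelliJ file templates use Velocity syntax; the DAO shape here is an assumption based on the question):

#if (${PACKAGE_NAME} && ${PACKAGE_NAME} != "")package ${PACKAGE_NAME};#end

## IntelliJ substitutes ${NAME} with the name entered when the file is created.
public interface ${NAME}Dao {
    ${NAME} load(Long id);
    void save(${NAME} entity);
    void delete(${NAME} entity);
}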
I have some legacy tables with the same structure in MySQL like:
my_table_01
my_table_02
my_table_03
...
Is there a way I can configure the jOOQ code generator to generate only one table/record class shared by all those tables?
There are two steps you have to take in order to achieve this:
1. Configure the code generator
You'll probably have to exclude my_table_02 and my_table_03 from being generated. You can do this by specifying the <excludes/> tag as documented here.
Optionally, you could use generator strategies (programmatic config) or matcher strategies (XML config) to rename my_table_01 to my_table.
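A hypothetical excerpt of the codegen configuration for the exclusion (the regular expression is illustrative):

<generator>
  <database>
    <includes>my_table_01</includes>
    <excludes>my_table_0[2-9]</excludes>
  </database>
</generator>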
2. Configure your runtime
While running queries against MY_TABLE, you can specify runtime table mapping in order to map MY_TABLE back to my_table_01 or my_table_02 or my_table_03. This mapping works on a per-configuration basis, i.e. it will have the scope of a single query if you're using one configuration per query.
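A sketch of what that could look like with the Settings API (the schema name is an assumption, and connection is an existing java.sql.Connection):

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.conf.MappedSchema;
import org.jooq.conf.MappedTable;
import org.jooq.conf.RenderMapping;
import org.jooq.conf.Settings;
import org.jooq.impl.DSL;

// Map the generated my_table back to one of the physical tables for this query.
Settings settings = new Settings()
    .withRenderMapping(new RenderMapping()
        .withSchemata(new MappedSchema()
            .withInput("my_schema")
            .withTables(new MappedTable()
                .withInput("my_table")
                .withOutput("my_table_02"))));

DSLContext ctx = DSL.using(connection, SQLDialect.MYSQL, settings);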
Another option is, of course, to abstract over these suffixes in your client code, e.g. via a table selection method:
public static Table<?> myTable() {
    if (something)
        return DSL.table("{0}_01", MY_TABLE);
    else if (somethingElse)
        return DSL.table("{0}_02", MY_TABLE);
    ...
}
Imagine you have four MySQL database schemas across two environments:
foo (the prod db),
bar (the in-progress restructuring of the foo db),
foo_beta (the test db),
and bar_beta (the test db for new structures).
Further, imagine you have a Spring Boot app with Hibernate annotations on the entities, like so:
@Table(name = "customer", schema = "bar")
public class Customer { ... }

@Table(name = "customer", schema = "foo")
public class LegacyCustomer { ... }
When developing locally it's no problem. You mimic the production database table names in your local environment. But then you try to demo functionality before it goes live and want to upload it to the server. You start another instance of the app on another port and realize this copy needs to point to "foo_beta" and "bar_beta", not "foo" and "bar"! What to do!
Were you using only one schema in your app, you could've left off the schema altogether and specified hibernate.default_schema, but... you're using two. So that's out.
Spring EL, e.g. @Table(name="customer", schema="${myApp.schemaName}"), isn't an option (the idea has even drawn some snooty "no-one needs this" comments), so if dynamically defining schemas is absurd, what does one do? Other than, you know, not getting into this ridiculous scenario in the first place.
I have fixed this kind of problem by adding support for my own schema annotation to Hibernate. It is not very hard to implement, by extending LocalSessionFactoryBean (or AnnotationSessionFactoryBean for Hibernate 3). The annotation looks like this:
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

@Target(TYPE)
@Retention(RUNTIME)
public @interface Schema {
    String alias() default "";
    String group() default "";
}
Example of use:
@Entity
@Table
@Schema(alias = "em", group = "ref")
public class SomePersistent {
}
And a schema name for every combination of alias and group is specified in the Spring configuration.
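The original answer doesn't show that configuration, but purely as an illustration, the mapping might be a simple set of properties keyed by group and alias, which the extended LocalSessionFactoryBean resolves at startup:

import java.util.Properties;

// Hypothetical only: (group.alias) -> physical schema name, e.g. loaded from
// a per-environment properties file by the extended LocalSessionFactoryBean.
Properties schemas = new Properties();
schemas.setProperty("ref.em", "foo");  // on the beta instance: "foo_beta"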
You can try with interceptors:
public class CustomInterceptor extends EmptyInterceptor {

    @Override
    public String onPrepareStatement(String sql) {
        String preparedStatement = super.onPrepareStatement(sql);
        // Rewrite the schema name in the generated SQL before it is executed.
        preparedStatement = preparedStatement.replaceAll("schema", "Schema1");
        return preparedStatement;
    }
}
Add this interceptor to the session object like so:
Session session = sessionFactory.withOptions().interceptor(new CustomInterceptor()).openSession();
Whenever onPrepareStatement is executed, this block of code is called and the schema name in the generated SQL is changed from schema to Schema1.
You can override the settings you declare in the annotations using an orm.xml file. Configure Maven, or whatever you use to generate your deployable build artifacts, to create that override file for the test environment.
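A minimal sketch of such an override for the test environment (the entity class names come from the question above; the package name is an assumption):

<?xml version="1.0" encoding="UTF-8"?>
<!-- orm.xml generated only for the test build: remaps the entities to the beta schemas -->
<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm"
                 version="2.1">
    <entity class="com.example.Customer">
        <table name="customer" schema="bar_beta"/>
    </entity>
    <entity class="com.example.LegacyCustomer">
        <table name="customer" schema="foo_beta"/>
    </entity>
</entity-mappings>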