I am dynamically creating and using physical DB tables for which I only have one metamodel object. An example: I have one JOOQ class Customer in my metamodel, but I have CUSTOMER1, CUSTOMER2, etc. at runtime. I'd like to write strongly typed JOOQ queries for these dynamic tables. The following seems to do the trick:
Customer CUSTOMER1 = Customer.CUSTOMER.rename("CUSTOMER1");
Of course, there's a whole bunch of tables for which I need to do this. Unfortunately, I cannot leverage the rename method generically, because it is not part of the Table<R> interface. Is this an oversight or a deliberate measure against something I am missing?
Is there a robust and elegant way to achieve what I want, i.e. without resorting to reflection?
EDIT: the two tables are never used jointly in the same query. The concrete usage pattern is the following: at any given moment, a DB synonym CUSTOMER points to one of them (the active copy), while the other (the shadow copy) is being modified. Once the modification is complete, the roles are swapped by pointing the synonym at the other table, and we start over. We do this to minimize "downtime" of heavy reporting result tables.
The answer to your question
The question being in the title:
why is there no Table.rename(String)?
The feature was implemented in jOOQ 3.3 (#2921). The issue reads:
As table renaming is not really a SQL DSL feature, the rename() method should be generated onto generated tables only, instead of being declared in org.jooq.Table
In fact, this argument doesn't really make much sense. There are other "non-DSL" utilities on the DSL types. I don't see why rename() is special. The method should be declared on Table as you suggested. This will be done in jOOQ 3.9 (#5242).
The answer to your problem
From the update of your question, I understand that you don't really need that fine-grained renaming control. jOOQ's out of the box multi-tenancy feature will do. This is called "table mapping" in the manual:
http://www.jooq.org/doc/latest/manual/sql-building/dsl-context/runtime-schema-mapping
For the scope of a Configuration (the most fine-grained scope: per-query level), you can rewrite any matching table name as follows:
Settings settings = new Settings()
    .withRenderMapping(new RenderMapping()
        .withSchemata(new MappedSchema()
            .withInput("MY_SCHEMA")
            .withOutput("MY_SCHEMA")
            .withTables(new MappedTable()
                .withInput("CUSTOMER")
                .withOutput("CUSTOMER1"))));
Related
We have a (possibly large) custom data structure implemented in Java (8+). It has a simple and optimal API for querying pieces of data. The logical structure is roughly similar to an RDBMS (it has e.g. relations, columns, primary keys, and foreign keys), but there is no SQL driver.
The main goal is to access the data via ORM (mapping logical entities to JPA annotated beans). It would be nice if we could use JPQL. Hibernate is preferred but other alternatives are welcome too.
What is the simplest way to achieve this? Which are the key parts of such an implementation?
(P.S. Directly implementing SessionImplementor, EntityManagerImplementor, etc. seems to be too complicated.)
You have two possibilities.
Implement a JDBC compliant driver for your system, so you can use a JPA implementation such as Hibernate "directly" (although you may need to create a custom dialect for your system).
Program directly against the JPA specification like ObjectDB does, which bypasses the need to go through SQL and JPA implementations completely.
The latter one is probably easier, but you'd still need to implement the full JPA API. If it's a custom in-house-only system, there's very little sense in doing either one.
One idea I thought up just now, that I feel may work is this:
Use an existing database implementation like H2 and use its JPA integration. H2 already has JPA integration libraries, so it should be easy.
In this database, create a Java stored procedure or function and call it from your current application through JPA. See the H2 documentation on how to create a Java stored procedure or function. (You may want to explore the section "Using a Function as a Table" as well; a sketch follows after these steps.)
Define a protocol for the service methods and encapsulate it in a model class. An instance of this model class may be passed to the function/SP and responses retrieved.
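A hedged sketch of steps 2 and 3; class, method, and column names are assumptions, and the call into your custom data structure is stubbed out:

package com.example;

import java.sql.ResultSet;
import java.sql.Types;
import org.h2.tools.SimpleResultSet;

public class ModelFunctions {

    // Registered once in H2, e.g.:
    //   CREATE ALIAS CUSTOMERS FOR "com.example.ModelFunctions.customers";
    // and then queried (also via JPA native queries) as:
    //   SELECT * FROM CUSTOMERS('someFilter');
    public static ResultSet customers(String filter) {
        SimpleResultSet rs = new SimpleResultSet();
        rs.addColumn("ID", Types.BIGINT, 19, 0);
        rs.addColumn("NAME", Types.VARCHAR, 255, 0);

        // Here you would iterate your in-memory structure and add one row per
        // matching entity, for example:
        // for (Customer c : customDataStructure.query(filter)) {
        //     rs.addRow(c.getId(), c.getName());
        // }
        return rs;
    }
}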
Caveat: I have never done this myself but I think it will work.
Edit: Here is a diagram representing the thought. Though the diagram shows H2 separately, it will most probably be in the same JVM as "Your Java/JEE application". However, since it is not necessary to use H2, I have shown it as a separate entity.
So the question: I have a lot of tables in the database and almost all of them have ON DELETE CASCADE. What is the best way to inform the user what will be deleted in the entire database if he deletes one particular row? What algorithms/patterns should I read about? Ideally with an implementation in Java. Thank you.
First off, this seems like a wrong design when you consider the fact that the cascade paths through the object graph are known at compile time and you are asking how to construct them, on demand, at runtime. You could build them and store them once.
That said, there probably isn't much reason for a design pattern. Mostly you are going to need Reflection, including the ability to find annotations on either properties or methods.
Then as you navigate the graph, you will look for the target annotations and either add or not add, and of course, if you don't find a cascade, you can stop going down that branch of the graph.
If there were some reason to handle types differently, Visitor would apply, but there isn't. The annotation processing tool from Sun used visitor, but that was for compile time processing.
You probably don't have the ability to do this, but it would be interesting to do it in Java 8, because you could more cleanly separate the navigation code from the test code by defining a Predicate (as a lambda) and having it evaluated at each node. Your predicate would simply check for the presence of the cascade annotation. It sounds like you may not be using an ORM, so you might not have annotations in your code for the cascades; all the more reason to have a separate predicate, because then you could have a metadata version that actually looks at the specific database (Postgres), and if you later wanted to use it with an ORM, you'd literally be changing a few lines of code.
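To illustrate, here is a minimal sketch of that idea, assuming JPA annotations are available on the entities. The CascadeWalker class and its REMOVE check are illustrative, not part of any library; it only inspects @OneToMany and simplifies collection element types to the raw field type:

import javax.persistence.CascadeType;
import javax.persistence.OneToMany;
import java.lang.reflect.Field;
import java.util.*;
import java.util.function.Predicate;

// Walk the entity graph and collect classes reachable through associations
// whose JPA mapping declares a REMOVE cascade.
public class CascadeWalker {

    // Predicate deciding whether a field propagates deletes (assumption:
    // JPA annotations are present; swap in your own metadata check if not).
    static final Predicate<Field> CASCADES_DELETE = field -> {
        OneToMany mapping = field.getAnnotation(OneToMany.class);
        return mapping != null
            && Arrays.asList(mapping.cascade()).contains(CascadeType.REMOVE);
    };

    public static Set<Class<?>> affectedEntities(Class<?> root) {
        Set<Class<?>> visited = new LinkedHashSet<>();
        walk(root, visited);
        return visited;
    }

    private static void walk(Class<?> entity, Set<Class<?>> visited) {
        if (!visited.add(entity)) {
            return;                               // already seen: stop this branch
        }
        for (Field field : entity.getDeclaredFields()) {
            if (CASCADES_DELETE.test(field)) {
                // For collections you would resolve the element type via the
                // generic signature; simplified here to the raw field type.
                walk(field.getType(), visited);
            }
        }
    }
}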
I have a number of domain/business objects which, when used in a Hibernate criteria, are referenced by field name as a string, for example:
Criteria crit = session.createCriteria(User.class);
Order myOrdering = Order.desc("firstname");
crit.addOrder(myOrdering);
Where firstname is a field/property of User.class.
I could manually create an Enum and store all the strings in there; is there any other way that I am missing that requires less work? (I'll probably forget to maintain the Enum.)
I'm afraid there is no good way to do that.
Even if you decide to use reflection, you'll discover the problem only when the query runs.
But there is a slightly better way to discover the problem early: if you use named queries (javax.persistence.NamedQueries), all your queries are compiled as soon as your entities are processed by Hibernate, which basically happens during the server's start-up. So if some object was changed in a way that breaks a query, you'll know about it the next time you start the server, not when the query is actually run.
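For example, a named query referencing the entity's properties is parsed when Hibernate processes the entity metadata, so a renamed property fails at start-up rather than at the first execution. The entity and query names below are only illustrative:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

@Entity
@NamedQuery(name  = "User.byFirstnameDesc",
            query = "select u from User u order by u.firstname desc")
public class User {

    @Id
    private Long id;

    private String firstname;

    // getters/setters omitted
}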
Hope it helps.
This is one of the things that irritates me about Hibernate.
In any case, I've solved this in the past using one of two mechanisms: either customizing the templates used to generate base classes from the Hibernate config files, or interrogating my Hibernate classes for annotations/properties and generating appropriate enums, classes, constants, etc. from them. It's pretty straightforward. (A sketch of what the generated output might look like follows below.)
It adds a step to the build process, but IMO it was exactly what I needed when I did it. (The last few projects I haven't done it, but for large, multi-dev things I really like it.)
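As an illustration only (the class and constant names are assumptions), the generated output could be as simple as a constants holder, which removes the magic string from the Criteria call:

public final class UserProperties {

    public static final String FIRSTNAME = "firstname";
    public static final String LASTNAME  = "lastname";

    private UserProperties() {
    }
}

// Usage with the Criteria API from the question:
// Criteria crit = session.createCriteria(User.class);
// crit.addOrder(Order.desc(UserProperties.FIRSTNAME));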
I'm working on a project using Play Framework that requires me to create a multi-user application. I've a central panel where we add a certain workshop for a team. Thing is, I don't know if this is the best way, but I want to generate the tables like
team1_tablename
team1_secondtable..
Then when a certain request hits using the virtual host (e.x. http://teamawesome.workshop.com) I would need to maneuver the query to THAT certain table.
The problem is not generating the tables, but working with the models. All the workshops are going to have the same generic tables. In the model I would have to state the table, etc. If this were PHP with Doctrine, I would have a template create them after creating the workshop team1; but in Java, even if I generate them, I would have to compile them too, which requires me to do more research.
My question is more Hibernate oriented before jumping the gun here and giving up on possible solutions. I'm all ears
I've thought of using NamedQueries. I don't know if I misread, but I read in a Hibernate book that you could run a query and map the result onto a generic model, and then use that model to hold all my results...
If there are any doubts let me know, thanks (note this is not a multi database question, just using different sets of tables with unique prefixes)
I wonder if you could use one single set of tables, but have something like TEAM_ID as a foreign key in each table.
You would need one single TEAM table, where TEAM_ID is the primary key. This key would then get propagated into the other tables as part of their foreign keys.
For instance, if you have a Player entity with a collection of HighScores, then in the DB the Player table will have a TEAM_ID (foreign key from the Team table) and the HighScores table will have a composite foreign key (Player_id, Team_id) coming from the Player table.
So, bottom line, I am suggesting a logical partitioning of your database rather than a physical one (as you've considered initially).
Hope this makes sense, it definitely needs more thought, but if you think it's an interesting idea, I can think it through in more detail.
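A minimal sketch of this idea, with entity and column names assumed for illustration (each class would normally live in its own file):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Every entity carries a reference to its Team instead of living in a
// team-prefixed table.
@Entity
class Team {
    @Id
    @GeneratedValue
    Long id;

    String name;
}

@Entity
class Player {
    @Id
    @GeneratedValue
    Long id;

    // Becomes the TEAM_ID foreign key on the PLAYER table; every query filters on it.
    @ManyToOne(optional = false)
    @JoinColumn(name = "TEAM_ID")
    Team team;

    @OneToMany(mappedBy = "player")
    List<HighScore> highScores;
}

@Entity
class HighScore {
    @Id
    @GeneratedValue
    Long id;

    @ManyToOne(optional = false)
    Player player;

    int score;
}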
I am familiar with Hibernate and another web framework, here is how I would handle it:
I would create a single set of tables for one team that would address all my needs. Then I would:
Using DB2: Create a schema for each team copying the set of tables into each schema.
Using MySQL: Create a new database for each team, copying the set of tables into each one.
Note: A 'database' in MySQL is more like a schema in other databases. (Sorry I'd rather keep things too simple than miss the point)
Now you can set up a separate hibernate.cfg.xml file for each connection (this isn't exactly the best way, but it's perhaps the best way to start because it's so easy). Now you can specify the connection parameters... including the schema/db. Now your entity table, let's say it's called "team", will use the "team" table wherever it is connected...
To get started very quickly, when a user logs on create a user object in their session.
The user object will have a Hibernate SessionFactory which will be used for all database requests built from the correct hibernate.cfg.xml file as determined by parsing the URL used in the login.
Once the above is working... there are some serious efficiency concerns to address, namely that each logged-on user is creating a SessionFactory. Maybe it isn't an issue if there isn't a lot of concurrent use, but you probably want to look into Spring at that point and use a connection pool per team. That way there is one SessionFactory per team and no major object creation when a user signs in (see the sketch below).
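A hedged sketch of the "one SessionFactory per team" idea; the per-team hibernate.cfg.xml file names and the caching class are assumptions, not an existing API:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

class TeamSessionFactories {

    private static final Map<String, SessionFactory> FACTORIES = new ConcurrentHashMap<>();

    // team is parsed from the login URL, e.g. "teamawesome"
    static SessionFactory forTeam(String team) {
        return FACTORIES.computeIfAbsent(team, t ->
            new Configuration()
                .configure(t + ".hibernate.cfg.xml")   // per-team config on the classpath
                .buildSessionFactory());
    }
}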
The benefit of this solution is that it should be easier to create new sets of tables because each table set lives in its own world. There will only be one set of entity classes, as opposed to one for every combination of team and table. The database schema stays rather simple, not being complicated by adding team names and the required constraints. If the teams require data ownership and privacy, it will be rather easy to move a database to a different location.
The down side is that if the model needs to be changed for a team it must be done for each team (as opposed to a single table set using teamName as a foreign key).
The idea of using different tables for each team (despite the fact that some successful apps may use it) is honestly quite naïve, and has serious pitfalls when you take maintenance into account...
Just think what you will be forced to do if you discover you need a new table or even just an index... you'll end up having to write DDL scripts as templates and to use some (custom) software to run them against all the teams...
As mentioned in the other answers (Quaternion's and Octav's), I think you have two viable options:
Bring the "team" into your data model
Split the data in different databases/schemas
To choose the option that works best for you, you must decide if the "team" is really something you can partition your dataset into, or if it is really one more entity you want to bring into your datamodel.
You may have noticed that I'm using "splitting" here instead of "partitioning" - that's because the latter term is generally used by DBAs to indicate what we could call "sharding" - "splitting" is intended to be a stronger term.
Splitting is only viable if:
entities in different partitions do not ever need to reference each other
no query will ever need to access data from different partitions (this applies to queries used for reporting too)
As you might well see, splitting in this sense is not very attractive (maybe it could be ok now, but what when you find yourself wanting to add new features?), so my advice is to go for the "the Team is an entity" solution.
Also note that maintaining a set of databases/schemas is actually harder than maintaining a single (albeit maybe a bit more complex) database... again, think of what steps you should take to add an index in a production system...
The only downside of the single-database solution manifests if you end up having multiple front-ends (maybe due to customizations for particular customers): changes to a shared database have the potential to affect all the applications using it, so you may need to coordinate upgrades to the different webapps to minimize risks (note, however, that in most cases you'll be able to change the database without breaking compatibility).
After all, it's a little bit frustrating to get no further information and to have to shoot in the dark. Nevertheless, now that I have started the work, I will try to finish.
I think you could do the job with the following solution:
Write a PlayPlugin and make sure you add the team to the request args for every request. Then write your own NamingStrategy. In the NamingStrategy you can read request.args and put the team into your table name. Depending on how you add it, Team_ or Team., you get either your preferred prefix solution or something schema-based. It sounds like you already have a DB schema, so it would probably be best to stay with these tables and not migrate. (A sketch of such a NamingStrategy follows at the end of this answer.)
Next time, please make your question more concrete and provide some information, such as how many tables there are, whether "team" is an entity, and how many records a table has (max, avg, min). How stable is your table model? These are all questions that help to give a clear recommendation with arguments.
You can try the vhost module, but it does not seem to be very well maintained. I think the idea of putting the name of the team into the table name is really weird. Postgres and Oracle have schemas for that, so you would use myTeam.myTable. But then you must handle the persistence yourself.
Another approach would be different databases, but again you don't have good support for that in Play. I would try one of these:
Run a separate Play server for each team, if you don't have too many teams.
Add a reference to a Team table in every model. Then you can use Hibernate filters, or add it manually as an additional parameter to each query. Of course this can hurt performance; you can address that issue with Oracle partitions.
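For what it's worth, here is a minimal sketch of the NamingStrategy idea from the first suggestion (Hibernate 3/4 style, as used by Play 1.x). The team name is an assumption passed in at construction time: the strategy is consulted when the mappings are built, so in practice you would build one Configuration/SessionFactory per team rather than switching the prefix per request.

import org.hibernate.cfg.ImprovedNamingStrategy;

public class TeamNamingStrategy extends ImprovedNamingStrategy {

    private final String team;

    public TeamNamingStrategy(String team) {
        this.team = team;
    }

    @Override
    public String tableName(String tableName) {
        // Produces e.g. team1_tablename; use team + "." + tableName for a schema-style name.
        return team + "_" + super.tableName(tableName);
    }
}

// Wiring it up (assumption):
// Configuration cfg = new Configuration().setNamingStrategy(new TeamNamingStrategy("team1"));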
I'm trying to write a program with Hibernate. My domain is now complete and I'm writing the database.
I got confused about what to do. Should I write my classes and let Hibernate create the SQL tables from them, or create the tables in the database, reverse engineer them, and let Hibernate create my classes?
I heard about the first option from someone and read about the second option on the NetBeans site.
Does anyone know which approach is correct?
It depends on how you best conceptualize the program you are writing. When I am designing my system, I usually think in terms of entities and their relationships to each other, so for me, I start with my business objects, then write my Hibernate mappings and let Hibernate create the database.
Other people are able to think better in terms of database tables, in which case that approach is best for them. So you have to decide which one works for you based on your experience.
I believe you can do either, so it's down to preference.
Personally, I write the lot by hand. While Hibernate does a reasonable job of creating a database for you it doesn't do it as well as I can do myself. I'd assume the same goes for the Java classes it produces although I've never used that feature.
With regard to the generated classes (if you went the class-generation route), I'm betting every field has a getter/setter whether the field should be read-only or not (did somebody say thread safety and mutability?), and that you can't add behavior, because it gets overridden if you regenerate the classes.
Definitely write the Java objects first, then add the persistence mappings and let Hibernate generate the tables.
If you go the other way you lose the benefit of OOD and all that good stuff.
I'm in favor of writing Java first. It can be a personal preference though.
If you analyse your domain, you will probably find that there is some duplication.
For example, the audit columns (user creator and editor, time created and edited) are often common to most tables.
The id is often a common field.
Look at your domain to see your duplication.
The duplication is an opportunity to reuse.
You could use inheritance or composition (see the sketch after the list of advantages below).
Advantages:
less time: you will have much less to write,
logical: each logical field is written once (instead of many similar fields),
reuse: in the client code for your entities, you can write reusable code. For example, if all your entities have the same id field, called ident, because of their superclass, client code can make the generic call object.getIdent() without having to find out the exact class of the object, so it will be more reusable.
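As a hedged sketch of the inheritance approach (field names are assumptions), a mapped superclass can hold the id and the audit columns so that each entity only declares its own fields:

import java.util.Date;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

// Holds the id and the audit columns that would otherwise be duplicated in every entity.
@MappedSuperclass
public abstract class BaseEntity {

    @Id
    @GeneratedValue
    private Long ident;

    private String createdBy;

    @Temporal(TemporalType.TIMESTAMP)
    private Date createdOn;

    private String editedBy;

    @Temporal(TemporalType.TIMESTAMP)
    private Date editedOn;

    // Client code can rely on this without knowing the concrete entity class.
    public Long getIdent() {
        return ident;
    }

    // remaining getters/setters omitted
}

// Every concrete entity then just extends it, e.g.:
// @Entity public class Customer extends BaseEntity { ... }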