Create multiple tables of same class in hibernate - java

In my Java project using Hibernate, I have a class named Employee.java.
I want to create an employee table every month, with the table name 'Employee_MMYYYY' (MM = month, YYYY = year). I have tried creating my own naming strategy and configuring it in the Configuration object of org.hibernate.cfg, but the issue I am facing is that my effort results in the creation of only one table. I am not able to create multiple tables. Can anyone throw some light on this?

I don't know why you want to do that, but the best way is to use an Oracle enterprise database and partition your table by range or list; that is the way to get better performance. If you need millions of rows, do that. But if you just want practice, or really need some way to create a new table per month in Hibernate, be aware that Hibernate does not have this feature, so you would have to write a shell program or something similar (or use Java to generate new Hibernate XML configurations).
In that case, don't use annotations in your JavaBeans; use XML mapping files instead.
Look at this: https://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin001.htm
CREATE TABLE sales
( prod_id NUMBER(6)
, cust_id NUMBER
, time_id DATE
, channel_id CHAR(1)
, promo_id NUMBER(6)
, quantity_sold NUMBER(3)
, amount_sold NUMBER(10,2)
)
PARTITION BY RANGE (time_id) SUBPARTITION BY HASH (cust_id)
SUBPARTITIONS 8 STORE IN (ts1, ts2, ts3, ts4)
( PARTITION sales_q1_2006 VALUES LESS THAN (TO_DATE('01-APR-2006','dd-MON-yyyy'))
, PARTITION sales_q2_2006 VALUES LESS THAN (TO_DATE('01-JUL-2006','dd-MON-yyyy'))
, PARTITION sales_q3_2006 VALUES LESS THAN (TO_DATE('01-OCT-2006','dd-MON-yyyy'))
, PARTITION sales_q4_2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','dd-MON-yyyy'))
);
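If you do want to stay in Hibernate, note that a naming strategy is consulted only once, when the configuration is built, which is why you only ever get one table. Below is a minimal sketch of the idea, assuming a Hibernate 3.x/4.x-style Configuration; the class and method names for the monthly job are my own:
import java.text.SimpleDateFormat;
import java.util.Date;

import org.hibernate.cfg.Configuration;
import org.hibernate.cfg.ImprovedNamingStrategy;
import org.hibernate.tool.hbm2ddl.SchemaUpdate;

public class MonthlyTableCreator {

    // Appends the current _MMyyyy suffix to every table name.
    static class MonthlyNamingStrategy extends ImprovedNamingStrategy {
        private final String suffix = new SimpleDateFormat("MMyyyy").format(new Date());

        @Override
        public String classToTableName(String className) {
            return super.classToTableName(className) + "_" + suffix;
        }
    }

    // Run this from a scheduler at the start of each month: a fresh
    // Configuration is built so the new suffix is picked up, and
    // SchemaUpdate adds the missing table without touching old ones.
    public static void createTableForCurrentMonth() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        cfg.setNamingStrategy(new MonthlyNamingStrategy());
        new SchemaUpdate(cfg).execute(true, true); // print script, apply to DB
    }
}
Keep in mind that entities still map to whichever table name the strategy produced when the SessionFactory was built, so you would also have to rebuild the factory (or fall back to native SQL) to write to the new month's table.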

Related

Multiple sequence number generator for a single column while saving to the postgresql db

I have a column (certificate number) in the DB, where I have to save certificate number values for 3 different products. Each product's series starts from a different number (the 1st product's series starts from 001, the 2nd product's starts with 2000, and so forth), and I have to update the same column in the table from the previously saved certificate number of that particular product series. Is there any way I can achieve this using Spring Boot and JPA with PostgreSQL?
Thanks in advance.
It's quite easy to do with PostgreSQL SEQUENCE.
First you create the set of required sequences, e.g.
CREATE SEQUENCE product1 START 1; -- you may need to define a max value as well
CREATE SEQUENCE product2 START 2000;
etc.
Then you can use nextval(sequence_name) in your SQL statements, e.g.
INSERT INTO table_name VALUES (nextval('product1'), 'bla-bla');
You can find more info here https://www.postgresql.org/docs/13/sql-createsequence.html
If you want to use JPA, then you need to look at two annotations.
In order to create a sequence for an entity you use
@SequenceGenerator(name="product2", sequenceName="product2", initialValue=2000, allocationSize=1)
and on the field definition you use
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="product2")
Edit: I do see one issue you may have with JPA: the generator is fixed per field, so you can't choose which sequence to use for a given operation.
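As a workaround under that limitation, here is a minimal sketch that picks the sequence yourself before persisting. It assumes Spring's JdbcTemplate is available alongside JPA, and the Certificate entity and its setter are made-up names:
import javax.persistence.EntityManager;
import org.springframework.jdbc.core.JdbcTemplate;

public class CertificateService {

    private final JdbcTemplate jdbcTemplate;
    private final EntityManager entityManager;

    public CertificateService(JdbcTemplate jdbcTemplate, EntityManager entityManager) {
        this.jdbcTemplate = jdbcTemplate;
        this.entityManager = entityManager;
    }

    // Chooses the sequence by product at runtime, then persists the entity
    // with the fetched number instead of relying on @GeneratedValue.
    public void save(Certificate certificate, int productNo) {
        String sequence = "product" + productNo; // product1, product2, ...
        Long next = jdbcTemplate.queryForObject(
                "SELECT nextval('" + sequence + "')", Long.class);
        certificate.setCertificateNumber(next);
        entityManager.persist(certificate);
    }
}
This trades the convenience of @GeneratedValue for the ability to pick the series per call.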

How to fill a closure table using JPA?

I am trying to model a hierarchy of objects (actually, domain groups) in a database. I decided to use a closure table, so that I can gain high flexibility in querying the hierarchy. Basically, my schema looks something like this:
CREATE TABLE group (
id INT -- primary key
... -- other fields here
)
CREATE TABLE groupHierarchy (
idAncestor INT,
idGroup INT,
hierarchyLevel INT
)
So, when a group with an id of 1 contains a group with an id of 2, which in turn contains a group with an id of 3, I will need the following rows in the groupHierarchy table:
idAncestor  idGroup  hierarchyLevel
1           1        0
2           2        0
3           3        0
1           2        1
2           3        1
1           3        2
I am also OK with not having the rows with a hierarchyLevel of 0 (self-reference).
Now I would like to have a JPA entity that maps to the group table. My question is: what would be a good way to manage the groupHierarchy table?
What I already considered is:
1) Having the group hierarchy mapped as an element collection, like:
@ElementCollection
@CollectionTable(name = "groupHierarchy")
@MapKeyJoinColumn(name = "idAncestor")
@Column(name = "hierarchyLevel")
Map<Group, Integer> ancestors;
This would require handling the hierarchy entirely in the application, and I am afraid that this may become very complex.
2) Make the application unaware of the hierarchyLevel column and handle it in the database using a trigger (when a record is added, check whether the parent already has ancestors and, if so, add any other required rows; this is also where the hierarchyLevel of 0 would come in handy). It seems to me that the database trigger would be simpler, but I'm not sure if that would be good for overall readability.
Can anyone suggest other options? Or maybe point out any pros or cons of the solutions I have mentioned?
May I suggest using JpaTreeDao? I think it's complete and very well documented. I'm going to try to port the closure table code to a Groovy implementation...
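For what it's worth, option 1 from the question is less daunting than it looks. Here is a minimal sketch of the insert logic in application code, assuming plain JPA native queries against the schema above (the class, method, and parameter names are made up):
import javax.persistence.EntityManager;

public class GroupHierarchyDao {

    // Attaches child under parent and maintains the closure table:
    // the child gets a self-reference at level 0, plus one row per
    // ancestor of the parent, each one level deeper.
    public void attach(EntityManager em, int parentId, int childId) {
        em.createNativeQuery(
                "INSERT INTO groupHierarchy (idAncestor, idGroup, hierarchyLevel) "
              + "VALUES (?1, ?1, 0)")
          .setParameter(1, childId)
          .executeUpdate();

        em.createNativeQuery(
                "INSERT INTO groupHierarchy (idAncestor, idGroup, hierarchyLevel) "
              + "SELECT idAncestor, ?2, hierarchyLevel + 1 "
              + "FROM groupHierarchy WHERE idGroup = ?1")
          .setParameter(1, parentId)
          .setParameter(2, childId)
          .executeUpdate();
    }
}
Because the parent's own level-0 self-reference row is included in the SELECT, the direct parent-child link at level 1 falls out of the second statement automatically.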

Pass multiple rows or an array to a DB2 stored procedure from Java

We need to update a DB2 database with the following kind of data, via a stored procedure called from a Java application:
ManId  ManFirstName  ManLastName  CubicleId  Unit  EmpId  EmpFirstName  EmpLastName
2345   Steeven       Rodrigue     12345RT    HR    2456   John          Graham
                                                   45464  Peter         Black
Here, the columns related to Emp (EmpId, EmpFirstName and EmpLastName) actually form an array; there can be any number of employees for one manager coming from my front-end application.
We need to pass these values to the stored procedure and process them in the SP. However, I am not able to find any array-like datatype in DB2.
I know I can take the following two approaches:
1. Delimited values: have a varchar parameter, append all the values with the help of a delimiter, and split them in the SP:
2456,John,Graham|45464,Peter,Black
2. Have a separate SP and call it in a batch.
However, I am looking for an approach where I can pass them in a single go using some more structured datatype. Does DB2 have a datatype like an array to support this, or any way to create a custom datatype?
I am using Spring JdbcTemplate on the Java side to call the SP (I am flexible to change that) and DB2 as the database.
P.S.: Open queries are not an option for me; I need to call an SP only.
This SP is going to be called from Java directly, so if I have to use a custom datatype, the only scope is to define it in the stored procedure being called.
Since the data types of EmpId, EmpFirstName and EmpLastName are probably different, you should use a DB2 ROW type to contain them, not ARRAY. In Java that would be represented by java.sql.Struct. You can also pass an ARRAY of ROW types to the stored procedure. Check the manual for details and examples.
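As a rough illustration of that shape (heavily hedged: EMP_ROW and PROCESS_EMPS are hypothetical names you would have to define in DB2 first, and whether createStruct/createArrayOf is supported depends on your DB2 JDBC driver version):
import java.sql.Array;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Struct;
import javax.sql.DataSource;

public class EmpProcCaller {

    // Builds an ARRAY of ROW values client-side and passes it to the SP.
    public void call(DataSource dataSource) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            // EMP_ROW / PROCESS_EMPS are hypothetical, defined on the DB2 side
            Struct emp1 = con.createStruct("EMP_ROW", new Object[]{2456, "John", "Graham"});
            Struct emp2 = con.createStruct("EMP_ROW", new Object[]{45464, "Peter", "Black"});
            Array emps = con.createArrayOf("EMP_ROW", new Object[]{emp1, emp2});

            try (CallableStatement cs = con.prepareCall("CALL PROCESS_EMPS(?, ?)")) {
                cs.setInt(1, 2345);    // manager id
                cs.setArray(2, emps);  // array of employee rows
                cs.execute();
            }
        }
    }
}
If you want to stay with Spring, JdbcTemplate can hand you the raw Connection for this via a ConnectionCallback.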

How to retrieve only the information that got changed from Cassandra?

I am working on designing the Cassandra column family schema for my use case below, and I am not sure what the best way to design it is. I will be using CQL with the DataStax Java driver.
Below are my use case and the sample schema that I have designed for now:
SCHEMA_ID  RECORD_NAME  SCHEMA_VALUE       TIMESTAMP
1          ABC          some value         t1
2          ABC          some_other_value   t2
3          DEF          some value again   t3
4          DEF          some other value   t4
5          GHI          some new value     t5
6          IOP          some values again  t6
Now what I will be looking for from the above table is something like this:
The first time my application runs, I will ask for everything from the above table, meaning: give me every row.
Then every 5 or 10 minutes, a background thread will check this table and ask only for what has changed (the full row if anything in that row changed); that is the reason I am using a timestamp column here.
But I am not sure how to design the query pattern so that both of my use cases are satisfied easily, and what the proper way of designing the table is. I am thinking of using SCHEMA_ID as the primary key.
Update:
If I use something like this, is there any problem with the approach?
CREATE TABLE TEST (SCHEMA_ID TEXT, RECORD_NAME TEXT, SCHEMA_VALUE TEXT, LAST_MODIFIED_DATE TIMESTAMP, PRIMARY KEY (SCHEMA_ID));
INSERT INTO TEST (SCHEMA_ID, RECORD_NAME, SCHEMA_VALUE, LAST_MODIFIED_DATE) VALUES ('1', 't26', 'SOME_VALUE', 1382655211694);
Because, in this use case, I don't want anybody to insert the same SCHEMA_ID twice; SCHEMA_ID should be unique whenever we insert a new row into this table. So with your example (#omnibear), it might be possible for somebody to insert the same SCHEMA_ID twice? Am I correct?
Also, regarding the type you have taken as an extra column: that type column can be record_name in my example.
Regarding 1)
Cassandra is built for heavy writing of lots of data across multiple nodes. Retrieving ALL data from this kind of set-up is daring, since it might involve huge amounts that have to be handled by one client. A better approach would be to use pagination, which is natively supported in 2.0.
Regarding 2)
The point is that partition keys only support EQ or IN queries. For LT or GT (< / >) you use clustering (column) keys. So if it makes sense to group your entries by some ID like "type", you can use that for your partition key and a timeuuid as a clustering key. This allows you to query for all entries newer than X, like so:
create table test
(type int, SCHEMA_ID int, RECORD_NAME text,
SCHEMA_VALUE text, TIMESTAMP timeuuid,
primary key (type, timestamp));
select * from test where type IN (0,1,2,3) and timestamp > 58e0a7d7-eebc-11d8-9669-0800200c9a66;
Update:
You asked:
somebody can insert the same SCHEMA_ID twice? Am I correct?
Yes, you can always make an insert with an existing primary key; the values at that primary key will simply be updated. Therefore, to preserve uniqueness, a UUID is often used in the primary key, for instance a timeuuid, which is a unique value containing a timestamp and the MAC address of the client. There is excellent documentation on this topic.
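For illustration, a timeuuid can be generated client-side with the driver's helper class (a small sketch, assuming the DataStax Java driver 2.x):
import java.util.UUID;
import com.datastax.driver.core.utils.UUIDs;

public class TimeuuidExample {
    public static void main(String[] args) {
        // Version 1 UUID: embeds the current timestamp plus a node identifier,
        // so two clients inserting in the same millisecond still get distinct keys.
        UUID id = UUIDs.timeBased();
        System.out.println(id);
    }
}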
General advice:
Write down your queries first, then design your model. (Use case!)
Your queries define your data model, which in turn is primarily shaped by your primary keys.
So, in your case, I'd just adapt my schema above, like so:
CREATE TABLE TEST (SCHEMA_ID TEXT, RECORD_NAME TEXT, SCHEMA_VALUE TEXT,
LAST_MODIFIED_DATE TIMEUUID, PRIMARY KEY (RECORD_NAME, LAST_MODIFIED_DATE));
Which allows this query:
select * from test where RECORD_NAME IN ('componentA','componentB')
and LAST_MODIFIED_DATE > 1688f180-4141-11e3-aa6e-0800200c9a66;
The timeuuid corresponds to Wednesday, October 30, 2013 8:55:55 AM GMT,
so you would fetch everything modified after that.
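Tying that back to the Java side, here is a small sketch of the polling query with the DataStax Java driver (2.x API assumed; the contact point and keyspace name are placeholders):
import java.util.UUID;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ChangePoller {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace"); // placeholder keyspace

        // The last timeuuid seen by the previous poll (here, the example value above).
        UUID lastSeen = UUID.fromString("1688f180-4141-11e3-aa6e-0800200c9a66");

        ResultSet rs = session.execute(
                "SELECT * FROM test WHERE record_name IN ('componentA','componentB') "
              + "AND last_modified_date > ?", lastSeen);
        for (Row row : rs) {
            System.out.println(row.getString("schema_value"));
        }
        cluster.close();
    }
}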

How to add arbitrary columns to Cassandra using CQL with Datastax Java driver?

I have recently started taking much interest in CQL as I am thinking of using the DataStax Java driver. Previously I was using a column family instead of a table, with the Astyanax driver. I need to clarify something here:
I am using the below column family definition in my production cluster, and I can insert any arbitrary columns (with their values) on the fly, without actually modifying the column family schema.
create column family FAMILY_DATA
with key_validation_class = 'UTF8Type'
and comparator = 'UTF8Type'
and default_validation_class = 'BytesType'
and gc_grace = 86400;
But after going through this post, it looks like I would need to alter the schema every time I get a new column to insert, which is not what I want to do, as I believe CQL3 requires column metadata to exist...
Is there any other way I can still add arbitrary columns and their values if I go with the DataStax Java driver?
Any code samples/examples would help me understand better. Thanks.
I believe in CQL you solve this problem using collections.
You can define the data type of a field to be a map, and then insert arbitrary numbers of key-value pairs into the map; that should mostly behave the way dynamic columns did in traditional Thrift.
Something like:
CREATE TABLE data ( data_id int PRIMARY KEY, data_time bigint, data_values map<text, float> );
INSERT INTO data (data_id, data_time, data_values) VALUES (1, 21341324, {'sum': 2134, 'avg': 44.5 });
Here is more information.
Additionally, you can find the mapping between the CQL3 types and the Java types used by the DataStax driver here.
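For completeness, once the table exists you can add arbitrary new keys to the map from the Java driver without any schema change (a small sketch; the cluster address and keyspace name are placeholders):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DynamicColumnExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace"); // placeholder keyspace

        // Adds (or overwrites) a single entry in the map column; the key 'max'
        // never had to be declared anywhere in the schema.
        session.execute("UPDATE data SET data_values['max'] = 99.9 WHERE data_id = 1");

        cluster.close();
    }
}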
If you enable compact storage for the table, it will be backwards compatible with Thrift and CQL 2.0, both of which allow you to enter dynamic column names.
You can have as many columns with whatever names you want with this approach. The primary key is composed of two parts: the first element, which is the row key, and the remaining elements, which combined form a single column name.
See the tweets example here.
Though you've said this is in production already, note that it may not be possible to alter a table with existing data to use compact storage.
