I have a requirement where I need to generate an account number and insert it into a table column in the following format.
"TBA2222011300000001" = where "TBA" is the value of another column or user sent data "22220113" implies the current date and "00000001" is a seven digit sequence that needs to be incremented and appended for every insert.
How can I append the sequence to the column, Should I do it in java or can it be done at DB end. I am currently using postgres with java and spring boot.
https://www.postgresql.org/docs/current/ddl-generated-columns.html
A generated column is a special column that is always computed from
other columns.
Several restrictions apply to the definition of
generated columns and tables involving generated columns:
The generation expression can only use immutable functions and cannot use subqueries or reference anything other than the current row
in any way.
now() is not an immutable function (it is only stable), so you cannot use a generated column here.
I am not sure why DEFAULT is not working for you; note that a DEFAULT expression cannot reference other columns of the row, so it could not include the account type or the id anyway.
https://www.postgresql.org/docs/current/ddl-default.html
So now the only option is a trigger.
CREATE TABLE account_info(
    account_id INT GENERATED ALWAYS AS IDENTITY,
    account_type text NOT NULL,
    account_number text );
So what you want to automate is:
UPDATE account_info set
account_number =
concat(
account_type,
to_char(CURRENT_DATE, 'yyyymmdd'),
to_char(account_id, 'FM00000000'));
Create the function:
create or replace function update_account_number() returns trigger as $$
BEGIN
UPDATE account_info SET
    account_number =
        concat(
            NEW.account_type,
            to_char(CURRENT_DATE, 'yyyymmdd'),
            to_char(NEW.account_id, 'FM00000000'))
WHERE account_id = NEW.account_id;  -- only touch the row that was just inserted
RETURN NULL;
end;
$$ LANGUAGE plpgsql;
Create the trigger:
CREATE OR REPLACE TRIGGER update_account_number
AFTER INSERT ON account_info
FOR EACH ROW
EXECUTE FUNCTION update_account_number();
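A quick sanity check, assuming the table and trigger above are in place (the date part of the result depends on CURRENT_DATE at insert time):
INSERT INTO account_info (account_type) VALUES ('TBA');
SELECT account_id, account_type, account_number
FROM account_info;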
Have an id column which is an identity column in Postgres, with the start value and range as required.
For your reference, to create an identity column as desired:
https://www.postgresqltutorial.com/postgresql-identity-column/
Have one more column for createdDate.
Then the account number is simply a derived value: TBA + formatted(DATE) + formatted(Id).
Ex: "TBA" + "22220113" + "00000001" = "TBA2222011300000001" (the format from the question).
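A minimal sketch of such a table, assuming Postgres identity columns (the table and column names here are illustrative, not from the question):
CREATE TABLE accounts (
    id           BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 1) PRIMARY KEY,
    account_type TEXT NOT NULL,
    created_date DATE NOT NULL DEFAULT CURRENT_DATE
);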
No, not a trigger, just a function. There won't be an account number column in your table at all: it will simply be a function that takes the date and the identity value as input and gives the account number as output. Since the account number depends only on the id and the date, there is no need to store this value at all; whenever you need the account number, just call that function. The account number will never exist as stored data, it will always be calculated from the id and the date. Simple.
Refer to this part of the article:
Method 1: Derived Value called "markup"
The first method we may want to add to this table is an accountNumber function, for calculating our account number based on the current date and the id. Since this value will always be based on two other stored values, there is no sense in storing it (other than possibly in a pre-calculated index). To do this, we:
CREATE FUNCTION accountNumber(id int, created date) RETURNS varchar AS
$$ SELECT 'TBA' || to_char(created, 'yyyymmdd') || to_char(id, 'FM00000000')
$$ LANGUAGE SQL IMMUTABLE;
You need to adjust the formatting of the id and the date (and the prefix) as per your requirement.
There is no point in storing a value that can easily be derived from the other two columns. It would unnecessarily consume space, and maintaining data integrity and checks would be an overhead.
Creating a function for derived values:
https://ledgersmbdev.blogspot.com/2012/08/postgresql-or-modelling-part-2-intro-to.html
You can use the function in output columns as well as in search conditions.
An index can also be utilized where required, as sketched below.
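For illustration, usage could look like this (assuming the accountNumber function above and a table like the earlier sketch with id and created_date columns; names are illustrative):
-- derive the account number on the fly
SELECT accountNumber(id, created_date) AS account_number
FROM accounts;
-- an expression index lets you search by the derived value efficiently
CREATE INDEX accounts_account_number_idx
    ON accounts (accountNumber(id, created_date));
SELECT * FROM accounts
WHERE accountNumber(id, created_date) = 'TBA2222011300000001';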
I did the following to generate the desired account number.
Created a new sequence and formatted its value with leading zeros:
select to_char(nextval('finance_accounts_id_seq'), 'fm00000000')
Got the current date in Java using DateTimeFormatter:
DateTimeFormatter dmf = DateTimeFormatter.ofPattern("yyyyMMdd");
String date = LocalDate.now().format(dmf);
Got the "TBA" prefix from the user's request param.
I posted a similar question previously, but have opened another question to be more specific, as the previous one gave me a solution but I have now encountered another problem.
We have an existing Oracle database that had boolean columns defined like so
CREATE TABLE MY_TABLE
(
SOME_TABLE_COLUMN NUMBER(1) DEFAULT 0 NOT NULL,
etc..
And the corresponding Java field is defined as private boolean someTableColumn;. I've come to understand this is because Oracle does not have a boolean datatype, but under the hood it converts boolean to integer when inserting data, and does the reverse when retrieving data.
This has caused an issue while I have been migrating our database from Oracle to Postgres. Thanks to answers on my previous question, I have migrated the column type from NUMBER(1) to BOOLEAN. This has solved the issue with inserting data. However, our codebase uses JdbcTemplate and we unfortunately have hundreds of hardcoded queries in the code like SELECT * FROM MY_TABLE WHERE TABLE_COLUMN=1.
When these run against the Postgres DB, I get the following error: ERROR: operator does not exist: boolean = integer. We have a requirement to keep backward compatibility with Oracle, so I can't simply update these queries to replace 1 and 0 with TRUE and FALSE respectively.
Is there a way I can configure Postgres so it does a conversion behind the scenes to resolve this? I have looked at casts, but I don't really understand the documentation and the examples given don't seem to match my use case. Any help is appreciated.
Can you try to use '0' and '1' instead of 0 and 1 in your queries?
I used to work on apps compliant with both Oracle and PostgreSQL using this syntax. The apps were using JPA, but I can say with certainty that we were also using this syntax with nativeQuery = true.
Note: I would have posted this as a comment, but I don't have the required reputation to do so, hence posting it as an answer.
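A minimal illustration with the query from the question; the quoted literal is resolved to boolean by Postgres and implicitly converted to NUMBER by Oracle, so the same text should work on both (assuming the column is boolean on Postgres and NUMBER(1) on Oracle):
SELECT * FROM MY_TABLE WHERE TABLE_COLUMN = '1';
SELECT * FROM MY_TABLE WHERE TABLE_COLUMN = '0';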
Update:
Duh! Brain freeze. On later thought, there is a way to get this conversion in both directions: process it via a view.
Steps:
Rename your table.
Create a view having the same name as the old table. In this view,
translate the boolean column to the appropriate integer.
Create a trigger function and an INSTEAD OF trigger on the view for insert/update DML. In the trigger function, translate the column value back to boolean as appropriate.
See the revised demo.
alter table testb rename to testb_tab;
create or replace view testb (id, name, is_ok)
as
select id
     , name
     , is_ok::int
  from testb_tab;
create or replace
function testb_act_dml()
returns trigger
language plpgsql
as $$
begin
if tg_op = 'INSERT' then
insert into testb_tab(name,is_ok)
values (new.name, new.is_ok::boolean) ;
else
update testb_tab
set name = new.name
, is_ok = new.is_ok::boolean
where id = old.id;
end if;
return new;
end;
$$;
create trigger testb_biuri -- before insert update row instead of
instead of insert or update on testb
for each row execute function testb_act_dml();
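A quick check through the view, assuming testb_tab has an identity id column as in the demo (values are illustrative):
INSERT INTO testb (name, is_ok) VALUES ('first', 1);
UPDATE testb SET is_ok = 0 WHERE name = 'first';
SELECT * FROM testb;                  -- is_ok comes back as an integer
SELECT * FROM testb WHERE is_ok = 1;  -- integer comparisons work again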
Finally, there is another option, which probably requires the least work: do not change the column definition to boolean at all. Either leave it as an integer or define it as a smallint. Either way, a check constraint may come in useful. So something along the lines of:
create table tests( id int generated always as identity primary key
, name text
, is_ok smallint
, constraint is_ok_ck
check ( is_ok in (0,1) or is_ok is null)
);
This is one of the issues I have with the concept of "database independence": it simply does not exist, vendors' implementations often just vary too much. In this case Postgres to the rescue, perhaps, but also perhaps extreme: create your own operators. Proceed with caution:
-- function to compare boolean = integer
create or replace
function "__bool=int"( b boolean, i int)
returns boolean
language sql
as $$
select (b=i::boolean);
$$;
-- create the Operator for boolean = integer
create operator = (
leftarg = boolean
, rightarg = int
, function = "__bool=int"
, commutator = =
);
The above will now allow your code: SELECT * FROM MY_TABLE WHERE TABLE_COLUMN=1 (see demo here).
However, this road may lead to unexpected twists and lots of function/operator pairs. For example, the above does not support SELECT * FROM MY_TABLE WHERE TABLE_COLUMN<>1; that requires another function/operator combination. Further, I do not see a retrieval function that converts a boolean back to an integer. If you follow this path, be sure to massively test your boolean-to-integer (and integer-to-boolean) operations. It may be better to just bite the bullet and update those queries (you did say hundreds, not thousands) as #mlogario suggests.
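For completeness, the extra pair for <> mentioned above might look like the following sketch (same caveats apply, and it is untested here):
-- function to compare boolean <> integer
create or replace
function "__bool<>int"( b boolean, i int)
returns boolean
language sql
as $$
select (b <> i::boolean);
$$;
-- create the operator for boolean <> integer
create operator <> (
  leftarg = boolean
, rightarg = int
, function = "__bool<>int"
, commutator = <>
);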
I am unable to grasp the concept of the lookup table.
I am currently working on a project wherein I am using two tables.
The first table consists of two columns- name(varchar) and value(varchar).
The second table also has two columns: Result(varchar) and value(varchar).
Result is used to store the values which are obtained from a Java program. Whenever the Result of the Java code matches the name in the first table, I need to update the second table with the corresponding value from the first table.
Does using a lookup table help in any way? If it does, can it be explained with an example? If not, is there any other way?
Just imagine a table person with a column GenderIsMale BIT. You can set this value to 1 (yes, it is a boy) or to 0 (no, a girl). This was easy in earlier days.
Now we have more categories. According to this link, Facebook offers more than 50 differing categories...
That is where the lookup table comes into play: you create a table which has, as a minimum, a unique key and a value. In most cases this is an ID INT IDENTITY and a Content VARCHAR(100) NOT NULL. You can add more columns like Abbreviation or any other additional content (e.g. other languages, or codes of external code systems; read about mapping tables also) directly bound to this value.
The next step is to take the GenderIsMale column away and replace it with a
GenderID INT NOT NULL
CONSTRAINT FK_Person_GenderID FOREIGN KEY REFERENCES GenderLookUpTable(GenderID)
The person table will store the GenderID only, the related values are stored in the side table and can be looked up.
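A minimal sketch of the two tables (written with Postgres-style identity columns; the names simply follow the prose above):
CREATE TABLE GenderLookUpTable (
    GenderID     INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    Content      VARCHAR(100) NOT NULL,
    Abbreviation VARCHAR(10)
);
CREATE TABLE Person (
    PersonID INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    Name     VARCHAR(100) NOT NULL,
    GenderID INT NOT NULL
        CONSTRAINT FK_Person_GenderID REFERENCES GenderLookUpTable (GenderID)
);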
The simple lookup table is the basic construct for creating a relational database model in at least 3NF or BCNF (which should be a minimum requirement for professional database design).
Whenever the Result of the Java code matches the name in the first
table, I need to update the second table with the corresponding value
in the first table.
That's a perfect use case for a database trigger, which can be used to perform various actions when a change (insert, update, delete) happens in a table.
Assuming you're inserting the values of your Java calculations into your (result, value) table (let's call it foo, and the other table bar), you can write a trigger that replaces the value being written with the value from the lookup table. The example is given for Postgres; if you are using another DB, refer to your particular RDBMS manual for the syntax.
CREATE FUNCTION get_value_from_lookup_table() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM bar WHERE name = NEW.result) THEN
        SELECT value INTO NEW.value FROM bar WHERE name = NEW.result;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER lookup_value
BEFORE INSERT ON foo
FOR EACH ROW
EXECUTE PROCEDURE get_value_from_lookup_table();
Every time an INSERT is done on foo, a check is made to see if a row exists in bar where name matches the incoming result. If so, the value from bar is written instead of the supplied one; otherwise the insert goes on unchanged. That's the basic gist of it. The actual solution depends on table constraints, whether you need to handle updates as well as inserts, etc.
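A hypothetical walk-through, assuming foo(result, value) and bar(name, value) exist as described:
INSERT INTO bar (name, value) VALUES ('OK', 'looked-up value');
-- the trigger rewrites value before the row is stored
INSERT INTO foo (result, value) VALUES ('OK', 'raw java output');
SELECT * FROM foo;   -- value now holds 'looked-up value'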
I will be storing the archival users' passwords in the ArchivalPassword table:
CREATE TABLE public.ArchivalPassword (
id SERIAL,
userid INTEGER NOT NULL,
content VARCHAR(100) NOT NULL,
CONSTRAINT archivalpassword_pkey PRIMARY KEY(id),
CONSTRAINT archivalpassword_user FOREIGN KEY (userid)
REFERENCES public.user(id)
ON DELETE CASCADE
ON UPDATE CASCADE
NOT DEFERRABLE
)
WITH (oids = false);
CREATE INDEX fki_archivalpassword_user ON public.archivalpassword
USING btree (userid);
For each user I store a limited number of passwords (based on the archived.passwords.limit property). If the user changes their password, I fetch the number of archived passwords from the ArchivalPassword table, and if it is greater than the limit, I calculate how many have to be deleted and delete them.
The requirement is that I delete the oldest passwords. The question is whether I can assume that a password with a lower ID is older than one with a greater ID, or whether I need to add an EXPIREDAT column (date) to determine which password needs to be deleted (the one with the oldest date in the EXPIREDAT column).
Here is the hypothetical EXPIREDAT column definition:
expiredat TIMESTAMP(0) WITH TIME ZONE DEFAULT '2017-03-20 00:00:00+01' NOT NULL;
And the ID sequence definition:
CREATE SEQUENCE public.archivalpassword_id_seq
INCREMENT 1 MINVALUE 1
MAXVALUE 9223372036854775807 START 1
CACHE 1;
Can you see any drawbacks of using the ID column in the described case?
Assuming your id column is something like a BIGSERIAL, it has a sequence definition from which the next id is auto-allocated. Under normal circumstances the ids will reliably be allocated in order as users change their passwords. The sequence definition can, however, be changed manually so that it starts at a different number, and if anyone did that then the id numbers would no longer represent chronological order.
I would personally opt for the EXPIREDAT column though, as that will always be accurate and the intention is clear. Not sure why you say "but then I would have to sort the dates instead of the integers" - assuming you are letting Postgres do the sorting, I'm not sure why you think there is much difference.
If you have many users, then an integer (serial data type in Postgres) is faster than a date and time (timestamp data type in Postgres) column for accessing the records. Also, I am not sure a date column would be good if the password changes multiple times on the same day.
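Whichever column you order by, the pruning itself can be done in one statement. A sketch assuming the EXPIREDAT column exists (swap the ORDER BY to id if you decide to rely on the sequence); :userid and :limit are illustrative bind parameters:
DELETE FROM archivalpassword
WHERE id IN (
    SELECT id
    FROM archivalpassword
    WHERE userid = :userid
    ORDER BY expiredat DESC, id DESC
    OFFSET :limit          -- keep only the newest :limit rows for this user
);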
I am a beginner PL/SQL user, and I have what might be a rather simple question.
I have created the following SQL function, which returns the created date of the process whose corporate ID matches the corporate ID that I pass in. I have this connected to my JDBC code, and it returns values just fine.
However, I just realized I overlooked an important issue: it's entirely possible that more than one process will have a corporate ID that matches the ID value I've inputted, and in cases like that I will need to be able to access all of the created-date values where the IDs match.
CREATE OR REPLACE FUNCTION FUNCTION_1(
    c_id IN INT)
RETURN DATE
AS
    p_date process.date_created%TYPE;
BEGIN
    SELECT process.date_created
      INTO p_date
      FROM process
     WHERE process.corporate_id = c_id
     ORDER BY process.corporate_id;
    RETURN p_date;
END FUNCTION_1;
/
Is there a way that I can modify my function to return multiple rows from the same column, and then call that function to return some sort of array using JDBC? Or, if that isn't possible, is there a way I can return what I need using PL/SQL procedures or just plain SQL combined with JDBC? I've looked through other questions here, but none seemed to be about quite what I need to know.
Thanks to anyone who can help!
You need to make some changes to your function; on the Java side it will then be a simple select.
You need to change the return type of your function from int to a collection.
Read about table functions here: Table Functions
Use the Oracle table() function to convert the result of your function to a table;
it lets you use your function in queries. Read more about the syntax here: Table Collections: Examples
Here is an example of how to call your function from Java:
select t.column_value as process_id
  from table(FUNCTION_1(1)) t
--result
-- PROCESS_ID
-- ----------
--          1
--          2
--we need to create a new type: a table of integers
CREATE OR REPLACE TYPE t_process_ids IS TABLE OF int;
--and make the corresponding changes in the function
CREATE OR REPLACE FUNCTION FUNCTION_1(
c_id IN INT)
RETURN t_process_ids
AS
l_ids t_process_ids := t_process_ids();
BEGIN
--here I populate the result of the select into the local variable
SELECT process.id
bulk collect into l_ids
FROM PROCESS
WHERE process.corporate_id = c_id
ORDER BY process.corporate_id;
--return the local var
return l_ids;
END FUNCTION_1;
--the script that I used for testing
create table process(id int, corporate_id int, date_created date);
insert into process values(1, 1, sysdate);
insert into process values(2, 1, sysdate);
insert into process values(3, 2, sysdate);
Here I am using MySQL, and I want my primary key to start with a letter, like D000. Then every time I enter a new record the primary key auto-increments, like so:
D001
D002
D003
How can I do this?
You can't AUTO_INCREMENT a column whose type is VARCHAR.
What you could do is make it BIGINT and AUTO_INCREMENT, and whenever you need it as String, you can prepend it with your letter 'D' like:
Long dbKey = ...;
String key = "D" + dbKey;
You could create a stored procedure to set an "auto-incremented" string as the default value for this column, but it just isn't worth the hassle. Plus, working with numbers is always faster and more efficient than working with strings.
I'm not sure whether I get your question right, but shouldn't the following work?
CREATE TRIGGER myTrigger
BEFORE INSERT
ON myTable
FOR EACH ROW
BEGIN
SET NEW.myCustomId = CONCAT('D', LPAD(NEW.id, 3, '0'));
END
For this case you NEED a "normal" primary key column as well.
Two ideas.
(Useless IMHO) I think MariaDB has virtual columns, though MySQL I think does not. But you do have views, so you could make a normal INT AUTO_INCREMENT column and have a calculated column in a view concatenating your key (a sketch follows after the second idea).
One can use different number ranges for different tables.
ALTER TABLE debtors AUTO_INCREMENT=10000;
ALTER TABLE creditors AUTO_INCREMENT=30000;
ALTER TABLE guests AUTO_INCREMENT=50000;
This is admittedly a lame solution, but it might do. I think such a distinction might be what you are aiming at.
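A sketch of the view idea from the first point (table and column names are illustrative):
CREATE TABLE items (
    id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100)
);
CREATE VIEW items_v AS
SELECT id,
       CONCAT('D', LPAD(id, 3, '0')) AS display_id,
       name
FROM items;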
Not sure why you need it, but you can add the "D" AFTER you fetch the data (String id = "D" + autoIncId;).
You can't insert a string or anything else into an auto-increment field, and I can't see any way this can be useful (all the records will have a "D", so effectively none is distinguished by it).
If you want to mark a row as the default, you can add a boolean column named DEFAULT.
while (rs.next()) {
    String id = rs.getBoolean("DEFAULT") ? "D" : "ND";
    id += rs.getLong(1);
}
EDIT
As per your comment, I understand that you want to select the max ID and add 1 to it. Then it's fine to use an auto-increment field in your DB, and it must be a numeric type (INTEGER, BIGINT...).
Please FORGET about adding the "D" to your primary key; it is simply not going to work the way you want. The auto-increment takes the last inserted ID and adds 1 to it. If your last id is "D3", adding 1 to it has the same meaning as adding 4 to "apple": you are using different types.
There is no way for SQL or any other programming language to understand that if you add 1 to "D3" it should become "D4". What you need to do is get rid of that D (whose purpose I still don't understand).
You may try this aberration at your own risk:
INSERT INTO table (id, a, b, c)
VALUES ( fn_get_key( LAST_INSERT_ID("table_name") + 1), "a", "b", "c");
Where fn_get_key is a function that converts the number into your desired string AND also executes:
ALTER TABLE table_name AUTO_INCREMENT = start_value;
Anyway, I do not recommend your approach. Numeric keys are faster and easier to sort than strings. You could always create a view that transforms the ID, or use logic to change from the "D001" key to "1". Enforcing foreign keys and uniqueness of ids will be harder and more expensive.