Select the first duplicated element - java

I have 2 classes in Java (MODEL1 and MODEL2), as you can see here:
 MODEL1_ID ACTN_DTE   MODEL2_ID
---------- --------- ----------
         1 14/11/19          18
      1000 14/11/19           4
      1001 14/11/19          19
      1002 14/11/19           4
      1003 14/11/19           4
      1004 14/11/19          18
      2000 14/11/19           5
I am trying to find a way, with SQL or HQL, to get all elements from MODEL1 whose MODEL2_ID is in a given list, keeping only the first (minimum MODEL1_ID) MODEL1 per MODEL2_ID in case it is duplicated.
Example:
Input: MODEL2_ID in (18, 4, 19, 5)
 MODEL1_ID ACTN_DTE   MODEL2_ID
---------- --------- ----------
         1 14/11/19          18
      1000 14/11/19           4
      1001 14/11/19          19
      2000 14/11/19           5

select MIN(MODEL1_ID) FROM table GROUP BY (MODEL2_ID)

It is possible that by "first" you mean the minimum ACTN_DTE and the question just has an unhelpful sample of data (because all the values are the same).
If so, you can use aggregation with KEEP to get the first value by ACTN_DTE:
select model2_id, min(actn_dte) as actn_dte,
       min(model1_id) keep (dense_rank first order by actn_dte) as model1_id
from t
group by model2_id;
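If "first" simply means the smallest MODEL1_ID, a minimal sketch using the same KEEP technique, but ordered by MODEL1_ID and with the requested IN filter applied, could look like this (the table name model1 is an assumption; adjust it to your schema):
-- sketch: the row with the smallest MODEL1_ID per MODEL2_ID,
-- restricted to the requested MODEL2_ID values (table name assumed)
select model2_id,
       min(actn_dte) keep (dense_rank first order by model1_id) as actn_dte,
       min(model1_id) as model1_id
from model1
where model2_id in (18, 4, 19, 5)
group by model2_id;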

Related

How to update two tables together in Oracle?

Query :
UPDATE TABLE_ONE
SET DATE=?, URL=?, TYPE=?, STATE=?, FEE=?, NAME=?, STATUS=?
WHERE ID=?
Another table, TABLE_TWO, has the columns NAME, ID, FEE, STATUS, TOTAL. I want that, when this update runs, all the fields specified in the query above plus FEE and STATUS of TABLE_TWO get updated together. I'm using Spring.
You can use a trigger: write a trigger on TABLE_ONE so that any update on TABLE_ONE also updates the corresponding TABLE_TWO data.
It starts like this:
create or replace trigger trigger_name
after insert or update on table_one
for each row
...
For a complete trigger, refer to https://www.tutorialspoint.com/plsql/plsql_triggers.htm
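As a rough sketch only, a complete version of such a trigger might look like the following; the synced columns (FEE, STATUS) come from the question, while matching TABLE_TWO rows by ID is an assumption:
-- sketch: keep TABLE_TWO's FEE and STATUS in sync with TABLE_ONE
-- (matching rows by ID is an assumption)
create or replace trigger trg_sync_table_two
after update of fee, status on table_one
for each row
begin
  update table_two
     set fee    = :new.fee,
         status = :new.status
   where id = :new.id;
end;
/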
You can't update two tables (more than one) in one statement; instead, use two statements and call them inside a method annotated with @Transactional:
@Transactional
public void updateTables() {
    // each method runs its own UPDATE statement
    updateTableOne();
    updateTableTwo();
}
The commit happens for both tables when the method exits.
See more about using @Transactional in Spring.
Another option is calling an Oracle procedure. Spring provides various abstractions on top of JDBC for calling database stored procedures.
Example:
SimpleJdbcCall call = new SimpleJdbcCall(jdbcTemplate)
        .withProcedureName("MOVE_TO_HISTORY");
call.execute();  // pass IN parameters to execute(...) if the procedure requires them
Solution 1: Put both UPDATE statements in a PL/SQL procedure or function and execute it from the Java code.
Solution 2: Put both UPDATE statements in a function and invoke that.
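For instance, a minimal sketch of such a procedure might look like the one below; the procedure name and the parameter list (ID, FEE, STATUS) are assumptions, so extend them to the remaining columns as needed:
-- sketch: one procedure that updates both tables in a single call
-- (procedure name and parameter list are assumptions)
create or replace procedure update_both_tables (
    p_id     in table_one.id%type,
    p_fee    in table_one.fee%type,
    p_status in table_one.status%type
) as
begin
  -- both updates commit or roll back together with the caller's transaction
  update table_one set fee = p_fee, status = p_status where id = p_id;
  update table_two set fee = p_fee, status = p_status where id = p_id;
end update_both_tables;
/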
The Oracle way of doing that is an INSTEAD OF trigger on a view.
Here's an example based on Scott's schema.
First, I'll create a view as a join of two tables, EMP and DEPT (that's where updating two tables together comes into play):
SQL> create or replace view v_ed as
2 select d.deptno, e.empno, d.dname, e.ename, e.sal
3 from emp e join dept d on e.deptno = d.deptno;
View created.
Now, an INSTEAD OF trigger. I'm handling INSERT and UPDATE; you can add DELETE as well. It means that when you insert into the view or update it, the underlying tables will be the final target of those commands.
SQL> create or replace trigger trg_io_ed
2 instead of insert or update on v_ed
3 for each row
4 begin
5 if inserting then
6 insert into emp (deptno, empno, ename, sal)
7 values (:new.deptno, :new.empno, :new.ename, :new.sal);
8 insert into dept (deptno, dname)
9 values (:new.deptno, :new.dname);
10 elsif updating then
11 update emp set
12 deptno = :new.deptno,
13 ename = :new.ename,
14 sal = :new.sal
15 where empno = :new.empno;
16 update dept set
17 dname = :new.dname
18 where deptno = :new.deptno;
19 end if;
20 end;
21 /
Trigger created.
Some testing: insert:
SQL> insert into v_ed (deptno, empno, dname, ename, sal)
2 values (99, 100, 'test dept', 'Littlefoot', 1000);
1 row created.
SQL> select * From dept;
    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
        99 test dept
SQL> select * From emp;
     EMPNO ENAME      JOB              MGR HIREDATE        SAL       COMM     DEPTNO
---------- ---------- --------- ---------- -------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17.12.80        800                    20
      7499 ALLEN      SALESMAN        7698 20.02.81       1600        300         30
      7521 WARD       SALESMAN        7698 22.02.81       1250        500         30
      7566 JONES      MANAGER         7839 02.04.81       2975                    20
      7654 MARTIN     SALESMAN        7698 28.09.81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01.05.81       2850                    30
      7782 CLARK      MANAGER         7839 09.06.81       2450                    10
      7788 SCOTT      ANALYST         7566 09.12.82       3000                    20
      7839 KING       PRESIDENT            17.11.81       5000                    10
      7844 TURNER     SALESMAN        7698 08.09.81       1500          0         30
      7876 ADAMS      CLERK           7788 12.01.83       1100                    20
      7900 JAMES      CLERK           7698 03.12.81        950                    30
      7902 FORD       ANALYST         7566 03.12.81       3000                    20
      7934 MILLER     CLERK           7782 23.01.82       1300                    10
       100 Littlefoot                                     1000                    99
15 rows selected.
SQL>
Update:
SQL> update v_ed set ename = 'Bigfoot' where empno = 100;
1 row updated.
SQL> select * From emp;
     EMPNO ENAME      JOB              MGR HIREDATE        SAL       COMM     DEPTNO
---------- ---------- --------- ---------- -------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17.12.80        800                    20
      7499 ALLEN      SALESMAN        7698 20.02.81       1600        300         30
      7521 WARD       SALESMAN        7698 22.02.81       1250        500         30
      7566 JONES      MANAGER         7839 02.04.81       2975                    20
      7654 MARTIN     SALESMAN        7698 28.09.81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01.05.81       2850                    30
      7782 CLARK      MANAGER         7839 09.06.81       2450                    10
      7788 SCOTT      ANALYST         7566 09.12.82       3000                    20
      7839 KING       PRESIDENT            17.11.81       5000                    10
      7844 TURNER     SALESMAN        7698 08.09.81       1500          0         30
      7876 ADAMS      CLERK           7788 12.01.83       1100                    20
      7900 JAMES      CLERK           7698 03.12.81        950                    30
      7902 FORD       ANALYST         7566 03.12.81       3000                    20
      7934 MILLER     CLERK           7782 23.01.82       1300                    10
       100 Bigfoot                                        1000                    99
15 rows selected.
SQL>
See if it helps.

organizing query result in sql developer (oracle 11g)

Currently I have a table called SCHEDULE in the DB (SQL Developer).
Assuming the available IDs are 1, 3, 7, 8, the table consists of something like this:
Stud  Title  Supervisor  Examiner  availableID
abc   Hello  1024        1001      1
def   Hi     1024        1001      1
ghi   Hey    1002        1004      1
xxx   hhh    1020        1011      1
jkl   hhh    1027        1010      1
try   ttt    1001        1011      1
654   bbb    1007        1012      1
gyg   888    1027        1051      1
yyi   333    1004        1022      3
fff   111    1027        1041      3
ggg   222    1032        1007      3
hhh   444    1007        1001      3
ppp   444    1005        1072      7
ooo   555    1067        1009      7
uuu   666    1030        1010      7
yyy   777    1004        1001      7
qqq   yhh    1015        1072      8
www   767    1017        1029      8
eee   566    1030        1020      8
rrr   888    1004        1031      8
abc   5555   1045        1051      8
As you can see, I have sorted these values using ORDER BY availableID ASC.
However, I would like to organize them again into something like this:
Stud  Title  Supervisor  Examiner  availableID
abc   Hello  1024        1001      1
def   Hi     1024        1001      1
ghi   Hey    1002        1004      1
xxx   hhh    1020        1011      1
yyi   333    1004        1022      3
fff   111    1027        1041      3
ggg   222    1032        1007      3
hhh   444    1007        1001      3
ppp   444    1005        1072      7
ooo   555    1067        1009      7
uuu   666    1030        1010      7
yyy   777    1004        1001      7
qqq   yhh    1015        1072      8
www   767    1017        1029      8
eee   566    1030        1020      8
rrr   888    1004        1031      8
jkl   hhh    1027        1010      1
try   ttt    1001        1011      1
654   bbb    1007        1012      1
gyg   888    1027        1051      1
........
abc   5555   1045        1051      8
Every availableID should appear four times before proceeding to the next availableID; after the last one, it iterates back to the lowest ID, but with different rows. Stud must be distinct.
Is it possible to achieve this with a SQL query?
You can do this with row_number() and some arithmetic. Something like:
select t.*
from (select t.*,
             row_number() over (partition by availableid order by stud) as seqnum
      from t
     ) t
order by trunc((seqnum - 1) / 4), availableid
A slightly different but equivalent approach to the one above, using floor, and partitioning by the same availableID column:
select a.stud,
       a.title,
       a.supervisor,
       a.examiner,
       a.availableID
from (select s.*,
             row_number() over (partition by availableID order by availableID) rn
      from student s) a
order by floor((rn - 1) / 4), availableID

Can I use values from Spark dataframe to create a new one dynamically?

I have a Spark dataframe (oldDF) that looks like:
Id     | Category | Count
898989 | 5        | 12
676767 | 12       | 1
334344 | 3        | 2
676767 | 13       | 3
And I want to create a new dataframe whose column names come from Category and whose values are the Count, grouped by Id.
The reason why I can't specify a schema (or would rather not) is that the categories change a lot. Is there any way to do it dynamically?
The output I would like to see as a DataFrame from the one above:
Id     | V3 | V5 | V12 | V13
898989 | 0  | 12 | 0   | 0
676767 | 0  | 0  | 1   | 3
334344 | 2  | 0  | 0   | 0
With Spark 1.6:
oldDF.groupBy("Id").pivot("Category").sum("Count")
You need to do your groupBy operation first; then you can implement a pivot operation as explained here.

Select single row for column value

Here is sample table data which is dynamic.
ColId  Name       JobId  Instance
1      aaaaaaaaa  1      2dc757b
2      bbbbbbbbb  1      2dc757b
3      aaaaaaaaa  1      010dbb8
4      bbbbbbbbb  1      010dbb8
5      bbbbbbbbb  1      faa2733
6      aaaaaaaaa  1      faa2733
7      aaaaaaaaa  1      bc13d69
8      aaaaaaaaa  1      9428f4d
I want output like
ColId  Name       JobId  Instance
1      aaaaaaaaa  1      2dc757b
3      aaaaaaaaa  1      010dbb8
5      bbbbbbbbb  1      faa2733
7      aaaaaaaaa  1      bc13d69
8      aaaaaaaaa  1      9428f4d
What should the JPA query be so that I can retrieve the entire row, having only a single 'Instance' (there is no max/min condition involved)?
I need one row for each 'Instance' value.
FROM table t GROUP BY t.instance should suit your needs.
Something like JPQL "Select entity from Entity entity where entity.id in (select min(subinstance.id) from Entity subinstance group by subinstance.instance)"
Functions like count, min, avg, etc. are allowed over columns not included in the GROUP BY statement, so any such function should work if it returns a single id value per group.
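In plain SQL, the same idea (keep the row with the smallest ColId per Instance) could be sketched like this; the table name sample_table is an assumption:
-- sketch: one row per Instance, chosen by the smallest ColId
-- (table name sample_table is assumed)
select *
from sample_table t
where t.colid = (select min(s.colid)
                 from sample_table s
                 where s.instance = t.instance);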

select where.... electrical status is required in ms sql 2005

I have a database table, planning sheet, as follows:
SONo.  LineNo.  ElectricalStatus
1      10       Required
1      20       Required
2      10       NotRequired
2      20       Required
2      30       Required
3      10       NotRequired
4      10       NotRequired
I want to display all records plus, beside the SONo., whether the electrical status is required or not.
For example:
SONo.  ElectricalStatus
1      Required
2      Required
because SONo. 3 and 4 have no records with electrical status 'Required', while SONo. 2 does.
You can simply do this:
SELECT DISTINCT SONO, ElectricalStatus
FROM tablename
WHERE ElectricalStatus = 'Required';
SQL Fiddle Demo
This will give you:
| SONO | ELECTRICALSTATUS |
|------|------------------|
|    1 | Required         |
|    2 | Required         |
