Given the following SQL structure of MY_TABLE:
GROUP_LABEL | FILE | TOPIC
-----------------------------
group A | 1.pdf | topic A
group A | 1.pdf | topic B
group A | 2.pdf | topic A
group B | 2.pdf | topic B
My task is to count the distinct files per GROUP_LABEL, ignoring the different TOPICs of a file. So my expected result is:
GROUP_LABEL | COUNT(*)
----------------------
group A | 2 -- two different files 1.pdf and 2.pdf here
group B | 1 -- only one file here
In pure SQL I would do it like this:
SELECT GROUP_LABEL, COUNT(*) FROM (
SELECT DISTINCT GROUP_LABEL, FILE FROM MY_TABLE
);
Is it possible to transform this into a JPA Criteria API query? I have no idea how to get my inner query into the from construct of the Criteria query; section 9.3.1 of https://docs.jboss.org/hibernate/entitymanager/3.5/reference/en/html/querycriteria.html suggests this is not possible.
But I just can't believe it ;-) Has anyone done this before? The inner query would be enriched with various well-tested filter Predicates that I would like to reuse.
I'm using spring-boot-starter-data 1.5.6.RELEASE with a mostly standard configuration.
Try this.
Query:
select label, count(distinct file) from tableName group by label;
Criteria (using Hibernate's legacy Criteria API, where Projections.countDistinct maps to COUNT(DISTINCT ...)):
criteria.setProjection(Projections.projectionList()
    .add(Projections.groupProperty("label"))
    .add(Projections.countDistinct("file")));
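If you need the standard JPA Criteria API instead (which the question asks for), the subquery in FROM can be avoided entirely with countDistinct. A minimal sketch, assuming an entity MyTable with fields groupLabel and file and an available EntityManager:

// javax.persistence.criteria imports assumed
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<Object[]> query = cb.createQuery(Object[].class);
Root<MyTable> root = query.from(MyTable.class);
// COUNT(DISTINCT file) grouped by groupLabel -- no subquery in FROM needed
query.multiselect(root.get("groupLabel"), cb.countDistinct(root.get("file")))
     .groupBy(root.get("groupLabel"));
List<Object[]> result = entityManager.createQuery(query).getResultList();

Your reusable filter Predicates can then be applied to the same Root via query.where(...).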
Firstly, your SQL query can be reduced to this:
SELECT GROUP_LABEL, COUNT(DISTINCT FILE) FROM MY_TABLE GROUP BY GROUP_LABEL
Secondly, it is good practice not to name columns after reserved words (FILE, for example), to avoid problems.
Finally, you can use this as your HQL query (no magic involved):
SELECT ge.groupLabel, COUNT(DISTINCT ge.file) FROM MyTable ge GROUP BY ge.groupLabel
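For completeness, a hedged sketch of running that HQL through an EntityManager (entity and field names assumed as above):

List<Object[]> rows = entityManager.createQuery(
        "select ge.groupLabel, count(distinct ge.file) "
      + "from MyTable ge group by ge.groupLabel", Object[].class)
    .getResultList();
// each row is { groupLabel, Long count of distinct files }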
Yes, it is possible using the JPA javax.persistence.criteria API. Take a look at this example in the official documentation.
I have two tables (Leave, CompOff) and I want to show their data to the user (frontend) in the form of requests from employees. An employee can request leave and comp-off. Both tables have created_on and empid columns. I don't understand how to fetch data from both tables and return a single list that contains data from both.
Leave table:
empid | created_on
------------------
1 | 09-07-2022
2 | 05-07-2022
3 | 02-07-2022
CompOff table:
empid | created_on
------------------
1 | 08-07-2022
2 | 06-07-2022
3 | 01-07-2022
In Spring Boot I have created three classes named Leave, CompOff, and Request, and they have some create/update operations. Now in the Request class I want data from both (Leave, CompOff) to send to the user.
Use a union, not a join.
Create a native query, something like:
select 'leave' as type, empid, created_on, <other columns>
from leave
union all
select 'compoff', empid, created_on, <same other columns as above>
from compoff
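If you are using Spring Data JPA, a hedged sketch of wiring that up as a native query (the repository name, ID type, and returned columns are assumptions for illustration):

public interface RequestRepository extends Repository<Leave, Long> {

    // the literal 'leave'/'compoff' column tells the caller which table each row came from
    @Query(value = "select 'leave' as type, empid, created_on from leave "
                 + "union all "
                 + "select 'compoff', empid, created_on from compoff",
           nativeQuery = true)
    List<Object[]> findAllRequests();
}

Each returned Object[] holds { type, empid, created_on } and can be mapped onto your Request class before sending it to the frontend.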
I'm currently trying to insert many records (~2000) in batch, and jOOQ's batchInsert is not doing what I want.
I'm transforming POJOs into UpdatableRecords and then performing batchInsert, which executes an insert for each record. So jOOQ is doing ~2000 queries per batch insert, and it's killing database performance.
It's executing this code (jOOQ's internal batch insert logic):
for (int i = 0; i < records.length; i++) {
    Configuration previous = ((AttachableInternal) records[i]).configuration();
    try {
        records[i].attach(local);
        executeAction(i);
    }
    catch (QueryCollectorSignal e) {
        Query query = e.getQuery();
        String sql = e.getSQL();

        // Aggregate executable queries by identical SQL
        if (query.isExecutable()) {
            List<Query> list = queries.get(sql);
            if (list == null) {
                list = new ArrayList<Query>();
                queries.put(sql, list);
            }
            list.add(query);
        }
    }
    finally {
        records[i].attach(previous);
    }
}
I could just do it like this (because jOOQ does the same thing internally):
records.forEach(UpdatableRecord::insert);
instead of:
jooq.batchInsert(records).execute();
How can I tell jOOQ to create new records in batch mode? Should I transform the records into bind queries and then call batchInsert? Any ideas? ;)
jOOQ's DSLContext.batchInsert() creates one JDBC batch statement per set of consecutive records with identical generated SQL strings (the Javadoc doesn't formally define this, unfortunately).
This can turn into a problem when your records look like this:
+------+--------+--------+
| COL1 | COL2 | COL3 |
+------+--------+--------+
| 1* | {null} | {null} |
| 2* | B* | {null} |
| 3* | {null} | C* |
| 4* | D* | D* |
+------+--------+--------+
.. because in that case, the generated SQL strings will look like this:
INSERT INTO t (col1) VALUES (?);
INSERT INTO t (col1, col2) VALUES (?, ?);
INSERT INTO t (col1, col3) VALUES (?, ?);
INSERT INTO t (col1, col2, col3) VALUES (?, ?, ?);
The reason for this default behaviour is the fact that this is the only way to guarantee ... DEFAULT behaviour. As in SQL DEFAULT. I gave a rationale of this behaviour here.
With this in mind, and as each consecutive SQL string is different, the inserts unfortunately aren't batched as a single batch as you intended.
Solution 1: Make sure all changed flags are true
One way to enforce that all INSERT statements are the same is to set all changed flags of each individual record to true:
for (Record r : records)
r.changed(true);
Now, all SQL strings will be the same.
Solution 2: Use the Loader API
Instead of batching, you could import the data (and specify batch sizes there). For details, see the manual's section about importing records:
https://www.jooq.org/doc/latest/manual/sql-execution/importing/importing-records
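A hedged sketch of that approach, assuming a generated table T (the t from the SQL above) with fields COL1..COL3 and a DSLContext ctx; note that execute() declares IOException because the Loader API also imports files:

try {
    ctx.loadInto(T)
       .batchAfter(500)                 // one JDBC batch per 500 records
       .loadRecords(records)
       .fields(T.COL1, T.COL2, T.COL3)
       .execute();
}
catch (IOException e) {
    throw new RuntimeException(e);
}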
Solution 3: Use a batch statement instead
Your usage of batchInsert() is a convenience that works with TableRecords. But of course, you can generate an INSERT statement manually and batch the individual bind variables by using jOOQ's batch statement API:
https://www.jooq.org/doc/latest/manual/sql-execution/batch-execution
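A hedged sketch, again with the column names from above: one INSERT template with bind placeholders, executed as a single JDBC batch:

ctx.batch(
       ctx.insertInto(T, T.COL1, T.COL2, T.COL3)
          .values((Integer) null, null, null))  // template row; yields a single SQL string
   .bind(1, null, null)
   .bind(2, "B", null)
   .bind(3, null, "C")
   .bind(4, "D", "D")
   .execute();

The trade-off: every row now binds all three columns, so you give up the per-record DEFAULT semantics described above.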
A note on performance
There are a couple of open issues regarding DSLContext.batchInsert() and similar APIs. The client-side algorithm that generates SQL strings for each individual record is inefficient and might be changed in the future to rely on changed() flags directly. Some relevant issues:
https://github.com/jOOQ/jOOQ/issues/4533
https://github.com/jOOQ/jOOQ/issues/6294
In my application each user can have multiple authorization roles. Depending on their roles, users should only be allowed to see certain subsets of the data. I want to provide this data from my relational database via a REST API.
For example:
table "Role"
UserName | Role
---------------------------
Anne | ViewFreshFruits
Mike | ViewFreshFruits
Mike | ViewTinySoft
table "Company"
Name | Address | Role
--------------------------------------------
FreshFruits | 123 America | ViewFreshFruits
TinySoft | 543 Britain | ViewTinySoft
table "Contract"
ID | CompanyName | Dollar
---------------------------
147 | FreshFruits | 15549
148 | FreshFruits | 16321
149 | TinySoft | 2311
To implement the REST resource http://api:8080/Application/Contracts/getAll, the data (without any permission check) could simply be fetched with:
SELECT Contract.* FROM Contract
But Anne is only allowed to see 147 and 148. Mike can see 147, 148, and 149. And Tomy must not get any results at all.
I started to implement the permission check like this:
SELECT Contract.* FROM Contract
INNER JOIN Company ON Contract.CompanyName = Company.Name
INNER JOIN Role ON Role.Role = Company.Role
WHERE Role.UserName = #CurrentlyAuthenticatedUser
This kind of SQL gains complexity with the number of tables in my database. I'm looking for an easier approach: less complex and easier to maintain. Performance is not my primary concern.
How can I filter certain rows of data, depending on the user, as simple as possible?
I'm using a Microsoft SQL Server 2012, Java Tomcat 8 and Connection Pooling.
That join-based approach seems the easiest to create and maintain.
If you want a faster SQL query that doesn't need joins at request time, you could create a materialized view based on that query, but without the WHERE clause and with the user name added to the column selection.
SELECT Role.UserName, Contract.* FROM Contract
INNER JOIN Company ON Contract.CompanyName = Company.Name
INNER JOIN Role ON Role.Role = Company.Role
That way you would have a virtual table that keeps the query's results saved and up to date, for fast retrieval of authorization-filtered data. To select, you would only need:
SELECT * FROM MaterializedView
WHERE UserName = #CurrentlyAuthenticatedUser
How about stored procedures where you pass the role? Or, to simplify your SELECT, use a view to hide some of the details.
Edit:
Create a view over the Company and Role tables (or a stored procedure or CTE):
CREATE VIEW CompanySecurity AS
SELECT Company.*, Role.UserName
FROM Company
INNER JOIN Role ON Role.Role = Company.Role
If you want contract details, join the Contract table to CompanySecurity and filter on UserName.
If you want to see sales, join the Sales table to CompanySecurity and filter on UserName.
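A hedged sketch of the contract query on top of that view, using plain JDBC (the Connection and the current user's name are assumed to come from your pool and security context):

String sql = "SELECT c.* FROM Contract c "
           + "INNER JOIN CompanySecurity cs ON c.CompanyName = cs.Name "
           + "WHERE cs.UserName = ?";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setString(1, currentUserName);  // the currently authenticated user
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // map each row to a Contract DTO for the REST response
        }
    }
}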
I would like to know how to create custom setups/teardowns, mostly to fix cyclic reference issues, where I can insert custom SQL commands with Spring Test DBUnit (http://springtestdbunit.github.io/spring-test-dbunit/index.html).
Is there an annotation I can use, or how else can this be customized?
There isn't currently an annotation that you can use, but you might be able to create a subclass of DbUnitTestExecutionListener and add custom logic in beforeTestMethod. Alternatively, you might get away with creating your own TestExecutionListener and simply ordering it before DbUnitTestExecutionListener.
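A hedged sketch of the subclassing approach (the DataSource lookup and the SQL statement are illustrative assumptions; imports from spring-test-dbunit, spring-test, javax.sql, and java.sql assumed):

public class CustomSqlDbUnitListener extends DbUnitTestExecutionListener {

    @Override
    public void beforeTestMethod(TestContext testContext) throws Exception {
        DataSource dataSource = testContext.getApplicationContext()
                                           .getBean(DataSource.class);
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement()) {
            // custom SQL to run before DbUnit inserts the data set,
            // e.g. relaxing constraints (H2 syntax, example only)
            statement.execute("SET REFERENTIAL_INTEGRITY FALSE");
        }
        super.beforeTestMethod(testContext);  // then the normal DbUnit setup
    }
}

You would then register this class via @TestExecutionListeners in place of DbUnitTestExecutionListener.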
Another, potentially better, solution would be to redesign your database to remove the cycle. You could probably drop the reference from company to company_config and add a unique index on company_id in the company_config table:
+------------+ 1 0..1 +--------------------------------+
| company |<---------| company_config |
+------------+ +--------------------------------+
| company_id | | config_id |
| ... | | company_id (fk, notnull, uniq) |
+------------+ +--------------------------------+
Rather than looking at company.config_id to get the config, you would run select * from company_config where company_id = :id.
DbUnit needs the insert statements (XML lines) in order, because they are executed sequentially. There is no magic parameter or annotation that lets DbUnit resolve your cyclic references or foreign keys automatically.
The most automated way I could achieve, when a data set contains many tables with foreign keys, is this:
Populate your database with a few records. In your example: Company and CompanyConfig, making sure the foreign keys are satisfied.
Extract a sample of your database using DbUnit's export tool.
Here is a snippet you could use:
IDatabaseConnection connection = new DatabaseConnection(conn, schema);
configConnection((DatabaseConnection) connection);

// Dependent tables database export: export table X and all tables that have a
// PK which is a FK on X, in the right order for insertion
String[] depTableNames = TablesDependencyHelper.getAllDependentTables(connection, "company");
IDataSet depDataset = connection.createDataSet(depTableNames);
FlatXmlWriter datasetWriter = new FlatXmlWriter(new FileOutputStream("target/dependents.xml"));
datasetWriter.write(depDataset);
After running this code, you will have your DbUnit data set in "dependents.xml", with all your cyclic references resolved.
I pasted the full code here; also have a look at the DbUnit documentation on how to export data.
Following is my Cassandra table structure.
Advertisement
AdvertisementId | Ad_Language | Ad_Caption | Others
----------------------------------------------------------------------
A01(UUID) | EN_US (text)| englishCaption (text) | Other Info(text)
A01(UUID) | FR_CA (text)| French Caption (text) | Other Info (text)
Primary key is (AdvertisementId, Ad_Language);
I am using Java to integrate with Cassandra. There is a Java API call to fetch a List<Advertisement>.
Is there a possibility to fetch the rows like this?
Query: SELECT * FROM ad_details ORDER BY advertisementId; (unfortunately I cannot specify a column to use in a WHERE or IN clause)
I cannot make the advertisement ID a clustering key, as I need to keep the UUID as the partition key of the composite primary key in Cassandra.
The following query works: SELECT * FROM ad_details WHERE advertisementId = xxx ORDER BY language ASC;
Can someone please help me achieve the ORDER BY advertisementId?
You cannot order by the partition key when using the Murmur3Partitioner or RandomPartitioner. If you are using an ordered partitioner, the results will be in the order of the type specified for the partition key when creating the table.
You can't use ORDER BY unless the partition key is restricted in a WHERE clause (with = or IN).
If you are not limiting your search with a WHERE clause, you probably need to redesign your table; it is not an efficient design, and once the table gets big you won't be able to query it efficiently.
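If the table stays small enough to scan at all, one hedged workaround (not covered by the answers above) is to sort client-side; a sketch with the DataStax Java driver 3.x, with Session setup and the exact column name assumed:

ResultSet rs = session.execute("SELECT * FROM ad_details");
List<Row> rows = new ArrayList<>(rs.all());
// Cassandra returns partitions in token order, so impose the desired order here
rows.sort(Comparator.comparing((Row row) -> row.getUUID("advertisementid")));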