How can I batch select from several tables with jdbctemplate? - java

I have the following query:
select name_table.name, age_table.age, address_table.address
from
name_table name_table,
age_table age_table,
address_table address_table
where
name_table.id = age_table.id
and age_table.id = address_table.id
and name_table.new_date between ?(start_date) and ?(end_date)
How can I use JdbcTemplate to get the needed data in batches? The start_date and end_date are variables.

First, your query is not correct:
from a syntax point of view, "date" is a reserved word (so a column should not be named that), and
from a logic point of view, the comma-separated FROM list is error-prone: leave one join condition out of the WHERE clause and you silently get a cross join over the 3 tables (you would be better off using standard ANSI JOIN syntax).
Second, if your intent is to have the table names be parameters of the template, the answer is no: table names and column names cannot be bind parameters of a query.
This has been answered several times on Stack Overflow, and also on the AskTOM site, which explains in detail why: mainly, the data structure of the result must be known at parse time.
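For the part that can be parameterized, here is a minimal sketch with Spring's JdbcTemplate (the PersonRow holder class is an invented illustration; only the two date values are bound, never table or column names):
import java.sql.Date;
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

public class PersonDao {

    public static class PersonRow {
        public final String name;
        public final int age;
        public final String address;

        public PersonRow(String name, int age, String address) {
            this.name = name;
            this.age = age;
            this.address = address;
        }
    }

    private final JdbcTemplate jdbcTemplate;

    public PersonDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<PersonRow> findBetween(Date startDate, Date endDate) {
        // ANSI-join rewrite of the query from the question; the two ? placeholders
        // are bound positionally from the method arguments.
        String sql =
              "select n.name, a.age, ad.address "
            + "from name_table n "
            + "join age_table a on a.id = n.id "
            + "join address_table ad on ad.id = a.id "
            + "where n.new_date between ? and ?";
        return jdbcTemplate.query(sql,
                (rs, rowNum) -> new PersonRow(
                        rs.getString("name"),
                        rs.getInt("age"),
                        rs.getString("address")),
                startDate, endDate);
    }
}
If the result set is too large to read in one go, the same parameterized SQL can be paged with the database's LIMIT/OFFSET (or FETCH FIRST) clause, or processed row by row with a RowCallbackHandler instead of collecting a List.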

Related

How to auto generate product SKU in MySQL? [duplicate]

I'm trying to make a blog system of sorts and I ran into a slight problem.
Simply put, there are three columns in my article table:
id SERIAL,
category VARCHAR FK,
category_id INT
id column is obviously the PK and it is used as a global identifier for all articles.
category column is well .. category.
category_id is used as a UNIQUE ID within a category so currently there is a UNIQUE(category, category_id) constraint in place.
However, I also want category_id to auto-increment.
I want it so that every time I execute a query like
INSERT INTO article(category) VALUES ('stackoverflow');
I want the category_id column to be filled in automatically according to the latest category_id of the 'stackoverflow' category.
Achieving this in my application logic is quite easy: I just select the latest category_id and insert that value plus one, but that involves two separate queries.
I am looking for a SQL solution that can do all this in one query.
This has been asked many times and the general idea is bound to fail in a multi-user environment - and a blog system sounds like exactly such a case.
So the best answer is: Don't. Consider a different approach.
Drop the column category_id completely from your table - it does not store any information the other two columns (id, category) wouldn't store already.
Your id is a serial column and already auto-increments in a reliable fashion.
Auto increment SQL function
If you need some kind of category_id without gaps per category, generate it on the fly with row_number():
Serial numbers per group of rows for compound key
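A rough sketch of that on-the-fly approach, under the assumption that the data is read through Spring's JdbcTemplate (the question does not say how the table is accessed):
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;

public class ArticleQueries {

    private final JdbcTemplate jdbcTemplate;

    public ArticleQueries(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<Map<String, Object>> listWithPerCategoryNumbers() {
        // The per-category number is derived at read time with row_number(),
        // so nothing gap-free has to be maintained on write.
        String sql =
              "select id, category, "
            + "       row_number() over (partition by category order by id) as category_id "
            + "from article "
            + "order by category, category_id";
        return jdbcTemplate.queryForList(sql);
    }
}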
Concept
There are several ways to approach this. The first one that comes to my mind:
Assign a value for category_id column inside a trigger executed for each row, by overwriting the input value from INSERT statement.
Action
Here's the SQL Fiddle to see the code in action
For a simple test, I'm creating an article table holding categories and their IDs, which should be unique within each category. I have omitted constraint creation - that's not relevant to the point.
create table article ( id serial, category varchar, category_id int )
Inserting some values for two distinct categories using generate_series() function to have an auto-increment already in place.
insert into article(category, category_id)
select 'stackoverflow', i from generate_series(1,1) i
union all
select 'stackexchange', i from generate_series(1,3) i
Creating a trigger function that selects MAX(category_id) for the category being inserted, increments it by 1, and overwrites the value right before the actual INSERT into the table happens (a BEFORE INSERT trigger takes care of that).
CREATE OR REPLACE FUNCTION category_increment()
RETURNS trigger
LANGUAGE plpgsql
AS
$$
DECLARE
    v_category_inc int := 0;
BEGIN
    SELECT MAX(category_id) + 1
      INTO v_category_inc
      FROM article
     WHERE category = NEW.category;

    IF v_category_inc IS NULL THEN
        NEW.category_id := 1;
    ELSE
        NEW.category_id := v_category_inc;
    END IF;

    RETURN NEW;
END;
$$
Using the function as a trigger.
CREATE TRIGGER trg_category_increment
BEFORE INSERT ON article
FOR EACH ROW EXECUTE PROCEDURE category_increment()
Inserting some more values (now that the trigger is in place) for already existing categories and for a non-existing one.
INSERT INTO article(category) VALUES
('stackoverflow'),
('stackexchange'),
('nonexisting');
Query used to select data:
select category, category_id From article order by 1,2
Result for initial inserts:
category category_id
stackexchange 1
stackexchange 2
stackexchange 3
stackoverflow 1
Result after final inserts:
category category_id
nonexisting 1
stackexchange 1
stackexchange 2
stackexchange 3
stackexchange 4
stackoverflow 1
stackoverflow 2
PostgreSQL uses sequences to achieve this; it's a different approach from what you are used to in MySQL. Take a look at http://www.postgresql.org/docs/current/static/sql-createsequence.html for the complete reference.
Basically you create a sequence (a database object) by:
CREATE SEQUENCE serials;
And then when you want to add to your table you will have:
INSERT INTO mytable (name, id) VALUES ('The Name', NEXTVAL('serials'));

CLOB and CriteriaQuery

I have an entity that has a CLOB attribute:
public class EntityS {
...
@Lob
private String description;
}
To retrieve certain EntityS from the DB we use a CriteriaQuery where we need the results to be unique, so we do:
query.where(builder.and(predicates.toArray(new Predicate[predicates.size()]))).distinct(true).orderBy(builder.asc(root.<Long> get(EntityS_.id)));
If we do that we get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
I know that's because you cannot use distinct when selecting a CLOB. But we need the CLOB. Is there a workaround for this using CriteriaQuery with Predicates and so on?
We are using an ugly workaround: getting rid of the .distinct(true) and then filtering the results, but that's crap. We are using it only to be able to keep developing the app, but we need a better solution and I don't seem to find one...
In case you are using Hibernate as persistence provider, you can specify the following query hint:
query.setHint(QueryHints.HINT_PASS_DISTINCT_THROUGH, false);
This way, "distinct" is not passed through to the SQL command, but Hibernate will take care of returning only distinct values.
See here for more information: https://thoughts-on-java.org/hibernate-tips-apply-distinct-to-jpql-but-not-sql-query/
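For completeness, a sketch of where that hint goes when the query is built with the Criteria API (this assumes Hibernate 5.2.21 or later as the provider, where the constant exists; the repository class is illustrative):
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaQuery;
import org.hibernate.jpa.QueryHints;

public class EntitySRepository {

    private final EntityManager em;

    public EntitySRepository(EntityManager em) {
        this.em = em;
    }

    public List<EntityS> findDistinct(CriteriaQuery<EntityS> criteria) {
        // distinct(true) stays on the CriteriaQuery; the hint only keeps
        // "DISTINCT" out of the generated SQL, so the CLOB column is never
        // part of a SQL-level DISTINCT. De-duplication happens in memory.
        TypedQuery<EntityS> typedQuery = em.createQuery(criteria);
        typedQuery.setHint(QueryHints.HINT_PASS_DISTINCT_THROUGH, false);
        return typedQuery.getResultList();
    }
}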
Thinking outside the box - I have no idea if this will work, but perhaps it is worth a shot. (I tested it and it seems to work, but I created a table with just one column, CLOB data type, and two rows, both with the value to_clob('abcd') - of course it should work on that setup.)
To de-duplicate, compute a hash of each clob, and instruct Oracle to compute a row number partitioned by the hash value and ordered by nothing (null). Then select just the rows where the row number is 1. Something like below (t is the table I created, with one CLOB column called c).
I expect that execution time should be reasonably good. The biggest concern, of course, is collisions. How important is it that you not miss ANY of the CLOBs, and how many rows do you have in the base table in the first place? Is something like "one chance in a billion" of having a collision acceptable?
select c
from (
select c, row_number() over (partition by dbms_crypto.hash(c, 3) order by null) as rn
from t
)
where rn = 1;
Note - the user (your application, in your case) must have EXECUTE privilege on SYS.DBMS_CRYPTO. A DBA can grant it if needed.

Search DB entries for a match when table has eight columns

I have to work with a POJO "Order" that has 8 fields, and each of these fields is a column in the "order" table. The DB schema is denormalized (and, worse, deemed final and unchangeable), so now I have to write a search module that can execute a search with any combination of the above 8 fields.
Are there any approaches on how to do this? Right now I get the input in a new POJO and go through eight IF statements looking for values that are not NULL. Each time I find such a value I add it to the WHERE condition in my SELECT statement.
Is this the best I can hope for? Is it arguably better to select on some minimum of criteria and then iterate over the received collection in memory, only keeping the entries that match the remaining criteria? I can provide pseudo code if that would be useful. Working on Java 1.7, JSF 2.2 and MySQL.
Each time I find such a value I add it to the WHERE condition in my SELECT statement.
This is a prime target for Sql Injection attacks!
Would something like the following work with MySQL? (Note that the groups are combined with AND, so each non-null parameter narrows the result; combined with OR, a single NULL parameter would match every row.)
SELECT *
FROM SomeTable
WHERE (#param1 IS NULL OR SomeTable.SomeColumn1 = #param1) AND
      (#param2 IS NULL OR SomeTable.SomeColumn2 = #param2) AND
      (#param3 IS NULL OR SomeTable.SomeColumn3 = #param3) AND
      /* .... */
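If the WHERE clause is instead assembled in application code, the key point is that only the SQL text varies while every value is bound as a parameter, which closes the injection hole. A rough sketch with plain JDBC, using invented column names (customer, status) to stand in for two of the eight criteria:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class OrderSearchDao {

    public List<Long> search(Connection conn, String customer, Integer status) throws SQLException {
        // "WHERE 1 = 1" lets every criterion be appended uniformly with "AND ...".
        StringBuilder sql = new StringBuilder("SELECT id FROM `order` WHERE 1 = 1");
        List<Object> params = new ArrayList<>();

        if (customer != null) {
            sql.append(" AND customer = ?");
            params.add(customer);
        }
        if (status != null) {
            sql.append(" AND status = ?");
            params.add(status);
        }
        // ... repeat for the remaining criteria fields

        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            for (int i = 0; i < params.size(); i++) {
                ps.setObject(i + 1, params.get(i));
            }
            List<Long> ids = new ArrayList<>();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong("id"));
                }
            }
            return ids;
        }
    }
}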

Mapping partial column values using Toplink

Suppose I have a list of IDs as follows:
EmployeeID
-------
ABCD
AECD
ABDF
ACDF
ACDE
I have a need to read the distinct values from a list of codes, while selecting only the first two characters of the column.
In other words, it's similar to using the following query:
SELECT DISTINCT LEFT (EmployeeID,2) FROM TABLE1
My question is: how do I map such a field in TopLink?
Note: I have created a class for the EmployeeID, but I don't know how to map a partial field.
OK... After looking at many workarounds, I seem to have found a more suitable solution.
I created an object for this particular scenario (the POJO has only the field holding the 2-character ID, plus its getter and setter methods).
During the mapping, I mapped the above field to the DB column in question (EmployeeID in the table described above).
Now I selected "Custom Queries" for the above object and entered the following query for "Read all" tab.
SELECT DISTINCT LEFT (EmployeeID,2) AS EmployeeID FROM TABLE1
All the read all operations on the object will now return the list of distinct first 2 characters of IDs.
Welcome anyone's opinion on this.

How to execute a SQL statement in Java with many values in a single variable in where in clause

I have to execute below query through JDBC call
select primaryid from data where name in ('abc', 'adc', 'anx');
The issue is that inside the IN clause I have to pass 11,000 strings. Can I use a prepared statement here? Or can anyone suggest another solution? I don't want to execute the query once per record, as that takes too much time; I need to run this query in very little time.
I am reading the strings from an XML file using DOMParser, and I am using a SQL Server database.
I'm just wondering why you would need to have a manual set of 11,000 items where you need to specify each item. It sounds like you need to bring the data into a staging table (surely it's not been selected from the UI..?), then join to that to get your desired result set.
Using an IN clause with 11k literal values is a really bad idea - off the top of my head, I know one major RDBMS (Oracle) that doesn't support more than 1k values in the IN list.
What you can do instead:
create some kind of (temporary) table T_NAMES to hold your names; if your RDBMS doesn't support "real" (session-specific) temporary tables, you'll have to add some kind of session ID
fill this table with the names you're looking for
modify your query to use the temporary table instead of the IN list: select primaryid from data where name in (select name from T_NAMES where session_id = ?session_id) or (probably even better) select primaryid from data join t_names on data.name = t_names.name and t_names.session_id = ?session_id (here, ?session_id denotes the bind variable used to pass your session id)
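Putting those steps together, a rough sketch with plain JDBC against a pre-created T_NAMES(session_id, name) staging table (the staging table layout and the JDBC batch size of 1000 are assumptions):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PrimaryIdLookup {

    public List<Long> findPrimaryIds(Connection conn, List<String> names, String sessionId)
            throws SQLException {
        // 1) Fill the staging table, sending the inserts to the server in batches.
        try (PreparedStatement insert =
                conn.prepareStatement("INSERT INTO T_NAMES (session_id, name) VALUES (?, ?)")) {
            int count = 0;
            for (String name : names) {
                insert.setString(1, sessionId);
                insert.setString(2, name);
                insert.addBatch();
                if (++count % 1000 == 0) {
                    insert.executeBatch();
                }
            }
            insert.executeBatch();
        }

        // 2) Join against the staging table instead of using a huge IN list.
        String sql = "SELECT d.primaryid FROM data d "
                   + "JOIN T_NAMES t ON t.name = d.name AND t.session_id = ?";
        List<Long> ids = new ArrayList<>();
        try (PreparedStatement select = conn.prepareStatement(sql)) {
            select.setString(1, sessionId);
            try (ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong(1));
                }
            }
        }
        return ids;
    }
}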
A prepared statement will need to know the number of arguments in advance - something along the lines of:
PreparedStatement stmt = conn.prepareStatement(
    "select id, name from users where id in (?, ?, ?)");
// example values for the three placeholders
stmt.setInt(1, 100);
stmt.setInt(2, 200);
stmt.setInt(3, 300);
11,000 is a large number of parameters. It may be easiest to use a 'batch' approach as described here (in summary: looping over your parameters, using a prepared statement each time).
Note - if your 11,000 strings are the result of an earlier database select, then the best approach is to write a stored procedure to do the whole calculation in the database (avoiding passing the 11,000 strings back and forth with your code)
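For the looping ('batch') approach mentioned above, a rough sketch that walks the 11,000 names in chunks and prepares one IN (...) statement per chunk (the chunk size of 1000 is an arbitrary choice; table and column names follow the question):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.StringJoiner;

public class ChunkedInQuery {

    public List<Long> findPrimaryIds(Connection conn, List<String> names) throws SQLException {
        final int chunkSize = 1000;
        List<Long> ids = new ArrayList<>();
        for (int from = 0; from < names.size(); from += chunkSize) {
            List<String> chunk = names.subList(from, Math.min(from + chunkSize, names.size()));

            // Build "(?, ?, ..., ?)" with one placeholder per value in this chunk.
            StringJoiner placeholders = new StringJoiner(", ", "(", ")");
            chunk.forEach(n -> placeholders.add("?"));

            String sql = "SELECT primaryid FROM data WHERE name IN " + placeholders;
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < chunk.size(); i++) {
                    ps.setString(i + 1, chunk.get(i));
                }
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getLong(1));
                    }
                }
            }
        }
        return ids;
    }
}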
You can merge all your parameter strings into one big string, separated by the ';' character:
bigStrParameter=";abc;adc;anx;"
and use LOCATE to find the substring. Note that LOCATE returns 0 when there is no match, so the test should be > 0:
select primaryid from data where LOCATE(concat(';',name,';'),?) > 0;
