How to sum values from a database column in AnyLogic?

As a newbie, I want to sum the values of a column pv from a database table evm in my model and store the result in a variable. I have tried the SQL code SELECT SUM(pv) FROM evm; but that doesn't seem to work. I would be grateful for any help with how to pull this off.

You can always write a native query and read the result set into the fields of a POJO. Once you have the result set mapped to a list of POJOs/DTOs, compute the sum of the field by iterating over the list.
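For instance, a minimal plain-JDBC sketch of that idea, assuming an open java.sql.Connection and the table/column names from the question (in practice you would map each row to a POJO first and sum afterwards):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical helper: fetch every pv value and sum it in Java,
// mirroring the "iterate the result list" approach described above.
static double sumPv(Connection conn) throws SQLException {
    double sum = 0;
    try (PreparedStatement ps = conn.prepareStatement("SELECT pv FROM evm");
         ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            sum += rs.getDouble("pv"); // accumulate each row's value
        }
    }
    return sum;
}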

You can use exactly the SQL you suggested. (The database in an AnyLogic model is a standard HSQLDB database, which supports this SQL syntax.)
The simplest way to execute it is to use AnyLogic's in-built functions for such queries (as would be produced by the Insert Database Query wizard), so
mySumVariable = selectFirstValue("SELECT SUM(pv) FROM evm;");
You didn't say what errors you got; obviously the table and column have to exist (and the column you're summing needs to be numeric, though NULLs are OK), as does the variable you're assigning the sum to.
If you wanted to do this in a way which more easily fits one of the standard query 'forms' suggested by the wizard (i.e., not having to know particular SQL syntax) you could just adapt the "Iterate over returned rows and do something" code to 'explicitly' sum the columns; e.g., (using the Query DSL format this time):
List<Tuple> rows = selectFrom(evm).list();
for (Tuple row : rows) {
    mySumVariable += row.get(evm.pv);
}

Related

How to get List<Object[]> from a query using Spring Jdbc

I use Spring JDBC for queries. I need to execute a select against the database, but I don't know how many columns the table has, so I can't map the result set (for example, with a RowMapper). I would like to get a List<Object[]>. Is that possible? And how can I get the data without knowing the number of columns in the table?
If you have a ResultSet at hand, you can look into the result using metadata:
ResultSetMetaData metaData = resultSet.getMetaData();
Then you can get a column count with metaData.getColumnCount(), and poke at specific ones with various methods like this:
int count = metaData.getColumnCount();
for (int i = 1; i <= count; i++) { // yeah, SQL indexes from 1
    System.out.println(metaData.getColumnName(i));
    System.out.println(metaData.isNullable(i));
    // ... see the ResultSetMetaData JavaDoc for the rest
}
If you want a List<Object[]> as the result, you should definitely question the conceptual approach to what you are trying to achieve.
Ask yourself whether you really can't, or don't want to, know in advance what columns you will receive, and how you will handle the unknown types of those columns.
You would make your life much easier with static model classes whose statically typed fields map neatly to your result set; and if you do that, you can even skip JDBC altogether and let Spring Data do all the query and result-set mapping work.
But if that really is not an option for you, then at least don't map your result rows to an Object array; map them to a simple JSON object or a Map instead, unless you don't care which value belonged to which column.
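That said, if you do need a List<Object[]>, a sketch along these lines should work, combining JdbcTemplate with the metadata approach above (the JdbcTemplate setup and SQL string are assumed to come from the caller):

import java.sql.ResultSetMetaData;
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

// Sketch: map each row to an Object[] sized by the metadata-driven column count.
static List<Object[]> queryForArrays(JdbcTemplate jdbcTemplate, String sql) {
    return jdbcTemplate.query(sql, (rs, rowNum) -> {
        ResultSetMetaData meta = rs.getMetaData();
        int count = meta.getColumnCount();
        Object[] row = new Object[count];
        for (int i = 1; i <= count; i++) { // JDBC columns are 1-based
            row[i - 1] = rs.getObject(i);
        }
        return row;
    });
}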

CLOB and CriteriaQuery

I have an entity that has a CLOB attribute:
public class EntityS {
    ...
    @Lob
    private String description;
}
To retrieve certain EntityS from the DB we use a CriteriaQuery where we need the results to be unique, so we do:
query.where(builder.and(predicates.toArray(new Predicate[predicates.size()])))
     .distinct(true)
     .orderBy(builder.asc(root.<Long> get(EntityS_.id)));
If we do that we get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
I know that's because you cannot use distinct when selecting a CLOB. But we need the CLOB. Is there a workaround for this using CriteriaQuery with Predicates and so on?
We are using an ugly workaround, getting rid of the .distinct(true) and then filtering the results ourselves, but that's crap. We are only using it to be able to keep developing the app, but we need a better solution and I don't seem to find one...
In case you are using Hibernate as persistence provider, you can specify the following query hint:
query.setHint(QueryHints.HINT_PASS_DISTINCT_THROUGH, false);
This way, "distinct" is not passed through to the SQL command, but Hibernate will take care of returning only distinct values.
See here for more information: https://thoughts-on-java.org/hibernate-tips-apply-distinct-to-jpql-but-not-sql-query/
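Note that the hint is set on the executable TypedQuery created from the CriteriaQuery, not on the CriteriaQuery itself. Roughly, with the entity from the question and an EntityManager assumed to be available:

import java.util.List;
import javax.persistence.TypedQuery;
import org.hibernate.jpa.QueryHints;

// Sketch: DISTINCT stays in the query semantics, but Hibernate drops it
// from the generated SQL and de-duplicates the results in memory instead.
TypedQuery<EntityS> typedQuery = entityManager.createQuery(query); // query is the CriteriaQuery<EntityS> built above
typedQuery.setHint(QueryHints.HINT_PASS_DISTINCT_THROUGH, false);
List<EntityS> results = typedQuery.getResultList();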
Thinking outside the box - I have no idea if this will work, but perhaps it is worth a shot. (I tested it and it seems to work, but I created a table with just one column, CLOB data type, and two rows, both with the value to_clob('abcd') - of course it should work on that setup.)
To de-duplicate, compute a hash of each clob, and instruct Oracle to compute a row number partitioned by the hash value and ordered by nothing (null). Then select just the rows where the row number is 1. Something like below (t is the table I created, with one CLOB column called c).
I expect that execution time should be reasonably good. The biggest concern, of course, is collisions. How important is it that you not miss ANY of the CLOBs, and how many rows do you have in the base table in the first place? Is something like "one chance in a billion" of having a collision acceptable?
select c
from (
    select c,
           row_number() over (partition by dbms_crypto.hash(c, 3) order by null) as rn -- 3 = HASH_SH1 (SHA-1)
    from t
)
where rn = 1;
Note - the user (your application, in your case) must have EXECUTE privilege on SYS.DBMS_CRYPTO. A DBA can grant it if needed.

Search DB entries for a match when table has eight columns

I have to work with a POJO "Order" that has 8 fields, each of which is a column in the "order" table. The DB schema is denormalized (and, worse, deemed final and unchangeable), so now I have to write a search module that can execute a search with any combination of the above 8 fields.
Are there any established approaches for doing this? Right now I get the input in a new POJO and go through eight IF statements looking for values that are not NULL. Each time I find such a value I add it to the WHERE condition in my SELECT statement.
Is this the best I can hope for? Is it arguably better to select on some minimal set of criteria and then iterate over the returned collection in memory, keeping only the entries that match the remaining criteria? I can provide pseudocode if that would be useful. Working with Java 1.7, JSF 2.2 and MySQL.
Each time I find such a value I add it to the WHERE condition in my SELECT statement.
This is a prime target for SQL injection attacks!
Would something like the following work with MySQL?
SELECT *
FROM SomeTable
WHERE (#param1 IS NULL OR SomeTable.SomeColumn1 = #param1) AND
      (#param2 IS NULL OR SomeTable.SomeColumn2 = #param2) AND
      (#param3 IS NULL OR SomeTable.SomeColumn3 = #param3) AND
      /* .... */
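Alternatively, if you do build the WHERE clause dynamically as in the question, keep the values out of the SQL string and bind them as parameters. A hedged sketch with a PreparedStatement (method and variable names are illustrative; the column names must come from the fixed list of 8 known columns, never from user input):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Sketch: append a condition only for the criteria that are non-null,
// binding each value as a parameter. The caller executes and closes
// the returned statement.
static PreparedStatement buildSearch(Connection conn, String[] columns, Object[] criteria)
        throws SQLException {
    StringBuilder sql = new StringBuilder("SELECT * FROM `order` WHERE 1=1");
    List<Object> params = new ArrayList<>();
    for (int i = 0; i < columns.length; i++) {
        if (criteria[i] != null) {                // only filter on supplied fields
            sql.append(" AND ").append(columns[i]).append(" = ?");
            params.add(criteria[i]);
        }
    }
    PreparedStatement ps = conn.prepareStatement(sql.toString());
    for (int i = 0; i < params.size(); i++) {
        ps.setObject(i + 1, params.get(i));       // JDBC parameters are 1-based
    }
    return ps;
}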

Efficient way to check if record (from a large set of data) is existing in the Database (JPA/Hibernate)

We have a large set of data (bulk data) that needs to be checked if the record is existing in the database.
We are using SQL Server 2012/JPA/Hibernate/Spring.
What would be an efficient or recommended way to check if a record exists in the database?
Our entity ProductCodes has the following fields:
private Integer productCodeId // this is the PK
private Integer refCode1 // ref code 1-5 has a unique constraint
private Integer refCode2
private Integer refCode3
private Integer refCode4
private Integer refCode5
... other fields
The service that we are creating will be given a file where each line is a combination of refCode1-5.
The task of the service is to check and report all lines in the file that are already existing in the database.
We are looking at approaching this in two ways.
Approach 1: the usual approach.
Loop through each line and call the DAO to query whether the refCode1-5 combination exists in the DB.
// pseudocode
for each line in the file:
    call the DAO, passing refCode1-5 to the query
    (SELECT * FROM ProductCodes WHERE refCode1=? AND refCode2=? AND refCode3=? AND refCode4=? AND refCode5=?)
Given a large list of lines to check, this might be inefficient, since we will be invoking the DAO once per line. If the file consists of, say, 1000 lines to check, this means 1000 calls to the DB.
Approach 2: query all records in the DB.
We will query all records in the DB.
Create a hash map with the concatenated refCode1-5 as keys.
Loop through each line in the file, validating it against the hash map.
We think this is more efficient in terms of DB connections, since it will not create 1000 connections to the DB. However, if the DB table has, for example, 5000 records, then Hibernate/JPA will create 5000 entities in memory and probably crash the application.
We are thinking of going with the first approach, since refCode1-5 have a unique constraint and the query will benefit from the implicit index.
But is there a better way of approaching this problem aside from the first approach?
Try something like a batch select for, say, 100 refCodes at a time instead of a single select per refCode.
Construct a query like:
select <whatever you want> from <table> where ref_code in (.....)
Construct the select projection so that it gives you not just what you want but also the ref_code details. Then, in code, you can do a count, or a multi-threaded scan of the result set, if the DB returned fewer refCodes than the number you put into the query.
You can try using the concat operator:
select <your cols> from <your table> where concat(refCode1, refCode2, refCode3, refCode4, refCode5) IN (<set of concatenations from your file>);
I think this will be quite efficient, and it may be worth experimenting to see whether pre-sorting the lines, and varying the number of concatenations taken each time, brings any benefit.
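Combining the two suggestions above (batched IN lists plus concatenation), a hedged JDBC sketch might look like the following. Method and variable names are illustrative; CONCAT is available from SQL Server 2012, and the '|' delimiter avoids ambiguous concatenations such as (1, 23) vs (12, 3). Note that the computed key prevents the refCode index from being used, so this trades index use for fewer round trips.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: check the file's refCode combinations in chunks of 100.
// Each key is built as refCode1|refCode2|refCode3|refCode4|refCode5.
static Set<String> findExistingKeys(Connection conn, List<String> keys) throws SQLException {
    Set<String> existing = new HashSet<>();
    final int chunkSize = 100;
    String concat = "CONCAT(refCode1, '|', refCode2, '|', refCode3, '|', refCode4, '|', refCode5)";
    for (int from = 0; from < keys.size(); from += chunkSize) {
        List<String> chunk = keys.subList(from, Math.min(from + chunkSize, keys.size()));
        String placeholders = String.join(",", Collections.nCopies(chunk.size(), "?"));
        String sql = "SELECT " + concat + " AS k FROM ProductCodes WHERE " + concat
                   + " IN (" + placeholders + ")";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < chunk.size(); i++) {
                ps.setString(i + 1, chunk.get(i));   // one bound key per placeholder
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    existing.add(rs.getString("k")); // keys already present in the DB
                }
            }
        }
    }
    return existing;
}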
I would suggest you create a temp table in your application, store all the records from the file in it with a batch save, and then run a query joining the new temp table and the ProductCodes table to do the filtering however you like. This way you are not locking the ProductCodes table many times to check individual rows, since SQL Server takes locks on select statements as well.
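A rough plain-JDBC sketch of that idea (SQL Server local temp table syntax; the method and table names are illustrative, and fileLines is assumed to hold the parsed refCode1-5 values per line):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

// Sketch: bulk-load the file's codes into a connection-scoped temp table
// with one JDBC batch, then let the database find the matches in one join.
static void reportExisting(Connection conn, List<int[]> fileLines) throws SQLException {
    try (Statement create = conn.createStatement()) {
        create.execute("CREATE TABLE #fileCodes (refCode1 INT, refCode2 INT, "
                     + "refCode3 INT, refCode4 INT, refCode5 INT)");
    }
    try (PreparedStatement insert = conn.prepareStatement(
            "INSERT INTO #fileCodes VALUES (?, ?, ?, ?, ?)")) {
        for (int[] line : fileLines) {       // one refCode1-5 array per file line
            for (int i = 0; i < 5; i++) {
                insert.setInt(i + 1, line[i]);
            }
            insert.addBatch();
        }
        insert.executeBatch();               // one round trip for all rows
    }
    String match = "SELECT f.* FROM #fileCodes f JOIN ProductCodes p"
                 + " ON p.refCode1 = f.refCode1 AND p.refCode2 = f.refCode2"
                 + " AND p.refCode3 = f.refCode3 AND p.refCode4 = f.refCode4"
                 + " AND p.refCode5 = f.refCode5";
    try (Statement query = conn.createStatement();
         ResultSet rs = query.executeQuery(match)) {
        while (rs.next()) {
            // each row here is a file line that already exists in the DB
        }
    }
}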

How to update sparse values in a database table?

What is the best practice for updating a table record most efficiently (in my case via a primary key) when not all values are present?
Imagine:
PRIMARY_KEY1, COLUMN_2, COLUMN_3, COLUMN_4, COLUMN_5, COLUMN_6, ...
I always get tuples like (PRIMARY_KEY1, COLUMN_5, COLUMN_4) or (PRIMARY_KEY1, COLUMN_2, COLUMN_6, COLUMN_3) and want to just update them in the fastest way possible without having a database lookup for all other values.
Since I have to do this very fast, I would like to use something like batches of prepared statements in order to avoid massive numbers of database requests.
Thanks for all replies!
You can 'cheat' by letting SQL fill in the missing values at row-access time. E.g., with this type of statement:
UPDATE MyTable
SET column_1 = COALESCE(#suppliedValue1, column_1),
    column_2 = COALESCE(#suppliedValue2, column_2),
    ...,
    column_6 = COALESCE(#suppliedValue6, column_6)
WHERE primary_Key1 = #primaryKey
Then, when filling out the parameters, just leave anything unsupplied null... and you should be good.
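Combined with a JDBC batch, as the question suggests, that pattern might look roughly like this (table and column names are taken from the example; each tuple is assumed to be an Object[] of {key, column_2, ..., column_6} with nulls for unsupplied values):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Sketch: one prepared COALESCE update reused for every sparse tuple;
// an unsupplied column is bound as null, so COALESCE keeps the old value.
static void updateSparse(Connection conn, List<Object[]> tuples) throws SQLException {
    String sql = "UPDATE MyTable SET "
               + "column_2 = COALESCE(?, column_2), "
               + "column_3 = COALESCE(?, column_3), "
               + "column_4 = COALESCE(?, column_4), "
               + "column_5 = COALESCE(?, column_5), "
               + "column_6 = COALESCE(?, column_6) "
               + "WHERE primary_key1 = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        for (Object[] t : tuples) {
            for (int i = 1; i <= 5; i++) {
                // some drivers may need setNull with an explicit SQL type here
                ps.setObject(i, t[i]);
            }
            ps.setObject(6, t[0]);           // the primary key value
            ps.addBatch();
        }
        ps.executeBatch();                   // one round trip for the whole batch
    }
}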
You are not required to update the entire row in SQL; just list the columns you want to change in the UPDATE statement's SET clause:
UPDATE table SET COLUMN_5 = 'foo', COLUMN_4 = 'goo' WHERE PRIMARY_KEY1 = 'hoo';
See this post:
JDBC batch insert performance
Read it, then follow its related links for other similar posts.
You should find all the answers you need in no time.
