How to use SQL UNION with three separate Ignite Caches - java

I have an Ignite cache and run SQL queries against it as follows.
IgniteCache<String, ClassName> cache = ignite.cache(CACHE_NAME);
private static final String sql = "Select timestamp from cache1 where orderId = ? and timestamp <= ? and timestamp >= ? ";
SqlFieldsQuery sqlQ = new SqlFieldsQuery(sql).setArgs(id, t1, t2);
try (QueryCursor<List<?>> cursor = cache.query(sqlQ)) {
    for (List<?> row : cursor) {
        timestamps.add((Long) row.get(0));
    }
}
Now I want to query three different caches and get the union. I was able to execute the following SQL successfully in a SQL engine and get results.
Select starttime from "unconfirmed_event_mc_79".unconfirmedevent union all Select starttime from "unconfirmed_event_urgent_mc_79".unconfirmedevent union all Select starttime from "confirmed_event_mc_79".confirmedevent order by starttime desc limit 1;
I want to add the timestamp results from the three separate caches to a single ArrayList.
I tried the following method and it was not successful.
EVENT_GET_RHYTHM_BY_ID = "Select timestamp from ConfirmedEvent where orderId = ? and startTime < ? and endTime > ? and type = %s UNION ALL " +
"Select timestamp from " + String.format(UNCONFIRMED_NON_URGENT_EVENT_CACHE,mcId) + ".UnconfirmedEvent where orderId = ? and startTime < ? and endTime > ? and type = %s " +
"Select timestamp from " + String.format(UNCONFIRMED_URGENT_EVENT_CACHE,mcId) + ".UnconfirmedEvent where orderId = ? and startTime < ? and endTime > ? and type = %s " + " order by startTime";
sql_Afib = new SqlFieldsQuery(String.format(EVENT_GET_RHYTHM_BY_ID, AnnotationConverter.StringToRhythmValue(AFIB), AnnotationConverter.StringToRhythmValue(AFIB),
AnnotationConverter.StringToRhythmValue(AFIB))).setArgs(orderId, endTimestamp, startTimestamp,
orderId, endTimestamp, startTimestamp, orderId, endTimestamp, startTimestamp);
try (QueryCursor<List<?>> cursor = confirmedEventCache.query(sql_Afib)) {
for (List<?> row : cursor) {
EventsEndTime.add(row.get(0));
}
}
I want to know how to use the query cursor in this case. Now that there are three caches, how do I handle this part, QueryCursor<List<?>> cursor = cache.query(sqlThreeCaches), and how do I write the SQL in the Java code?
Or is there any other way to do this?
Here is how I define the caches in the Java code. There are three different caches, but the column names in the cache tables are the same.
public static final String EVENT_VIEW_RESERVED_EVENT_CACHE = "event_view_reserved_event_mc_%d";
public static final String UNCONFIRMED_NON_URGENT_EVENT_CACHE = "unconfirmed_event_mc_%d";
public static final String UNCONFIRMED_URGENT_EVENT_CACHE = "unconfirmed_event_urgent_mc_%d";
IgniteCache<String, ConfirmedEvent> confirmedEventCache = ignite.cache(String.format(CONFIRMED_EVENT_CACHE, mcId));
IgniteCache<String, UnconfirmedEvent> unconfirmedEventCache = ignite.cache(String.format(UNCONFIRMED_NON_URGENT_EVENT_CACHE,mcId));
IgniteCache<String, UnconfirmedEvent> unconfirmedUrgentEventCache = ignite.cache(String.format(UNCONFIRMED_URGENT_EVENT_CACHE,mcId));
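For what it's worth, one way the combined query could be written is a single SqlFieldsQuery that unions the three schema-qualified tables and is submitted through any one of the caches. This is only a sketch: it assumes the schema names follow the String.format patterns above and that all three tables expose the same timestamp column, and note that the attempted query string above is also missing a UNION ALL before its third SELECT.
// Sketch: one UNION ALL query across the three cache schemas, run through any one cache.
String unionSql =
    "select timestamp from \"" + String.format(CONFIRMED_EVENT_CACHE, mcId) + "\".ConfirmedEvent" +
    " where orderId = ? and timestamp <= ? and timestamp >= ?" +
    " union all" +
    " select timestamp from \"" + String.format(UNCONFIRMED_NON_URGENT_EVENT_CACHE, mcId) + "\".UnconfirmedEvent" +
    " where orderId = ? and timestamp <= ? and timestamp >= ?" +
    " union all" +
    " select timestamp from \"" + String.format(UNCONFIRMED_URGENT_EVENT_CACHE, mcId) + "\".UnconfirmedEvent" +
    " where orderId = ? and timestamp <= ? and timestamp >= ?" +
    " order by timestamp";

SqlFieldsQuery unionQuery = new SqlFieldsQuery(unionSql)
        .setArgs(orderId, endTimestamp, startTimestamp,
                 orderId, endTimestamp, startTimestamp,
                 orderId, endTimestamp, startTimestamp);

List<Long> timestamps = new ArrayList<>();
// The query names its schemas explicitly, so it can be submitted through any one cache instance.
try (QueryCursor<List<?>> cursor = confirmedEventCache.query(unionQuery)) {
    for (List<?> row : cursor) {
        timestamps.add((Long) row.get(0));
    }
}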

Related

Get PostgreSQL table size on disk (bytes) in ORMLite

I use ORMLite in a Java application to work with a PostgreSQL database.
I want to get the disk space used by one of the database tables.
It seems that ORMLite doesn't have a specific method for this, so I tried the following, without success:
final String TABLE_NAME = "a_table_name_of_db";
//1)
String SQL = "SELECT pg_relation_size('" + TABLE_NAME + "');" ;
int result = OGGETTO_DAO.executeRaw(SQL);
//2)
String SQL = "SELECT pg_table_size('" + TABLE_NAME + "');" ;
int result = OGGETTO_DAO.executeRaw(SQL);
//3 - it was just a try...)
SQL = "SELECT pg_table_size('" + TABLE_NAME + "');" ;
GenericRawResults<String> ARRAY = OGGETTO_DAO.queryRaw(SQL);
String result= ARRAY.getFirstResult();
With 1) and 2) I always get -1; with 3) I get a cast exception.
If I use the command 'pg_relation_size' or 'pg_table_size' from the command line (Linux, psql prompt), it works properly.
What am I doing wrong?
Thank you
UPDATE - WORKING SOLUTION:
Now it works! Solution, as per accepted answer below, is:
final String TABLE_NAME = "a_table_name_of_db";
String SQL = "SELECT pg_table_size('" + TABLE_NAME + "');"
final long RESULT = OGGETTO_DAO.queryRawValue(SQL); //in bytes
int result = OGGETTO_DAO.executeRaw(SQL);
Yeah, that's not right. Looking at the javadocs for executeRaw(...), it returns the number of rows affected, not the result of the query.
SQL = "SELECT pg_table_size('" + TABLE_NAME + "');" ;
GenericRawResults<String> ARRAY = OGGETTO_DAO.queryRaw(SQL);
String result= ARRAY.getFirstResult();
Looking at the javadocs for queryRaw(...), the problem here is that it returns a GenericRawResults<String[]>, not GenericRawResults<String>. It returns a collection of raw results, with each row represented by a String array. I'm really surprised that your code even compiles.
It should be:
GenericRawResults<String[]> ARRAY = OGGETTO_DAO.queryRaw(SQL);
String result= ARRAY.getFirstResult()[0];
Probably the best way to do this is to use queryRawValue(...), which performs a raw query and returns a single long value.
// throws an exception if there are no results or if the first one isn't a number
long size = OGGETTO_DAO.queryRawValue(SQL);
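Putting that approach into a small helper, something like this should work. It's only a sketch: dao stands in for OGGETTO_DAO, and the table name is assumed to be trusted, since it is concatenated directly into the SQL.
import java.sql.SQLException;

import com.j256.ormlite.dao.Dao;

public final class PgTableSize {
    private PgTableSize() {}

    /** Returns the on-disk size of the given table in bytes via pg_table_size(). */
    public static long tableSizeBytes(Dao<?, ?> dao, String tableName) throws SQLException {
        // queryRawValue() runs the query and returns the single long value of the first column;
        // it throws if there is no result or the value is not a number.
        return dao.queryRawValue("SELECT pg_table_size('" + tableName + "')");
    }
}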

Is it possible to return data set instead of any mapped model class in MyBatis?

I have a scenario where I want to build a dynamic query, which MyBatis is supposed to support, like below:
<select id="someRecords" resultMap="someRecordMap">
DROP TABLE IF EXISTS TEMP_TABLE;
CREATE TEMPORARY TABLE TEMP_TABLE(some_stub UUID);
INSERT INTO TEMP_TABLE (some_stub)
select regexp_split_to_table(#{someIds},',')::uuid;
SELECT wil.some_identifier_stub as identifier_stub
,wil.x
,wil.y
,wil.z
,wil.u
,wil.o
,msg.p
FROM TABLE_A msg
INNER JOIN TABLE_B wil ON msg.a_id = wil.b_id
INNER JOIN TABLE_C est ON est.c_stub = wil.b_stub
WHERE wil.unique_id = #{uniqueId} AND wil.b_type_id = #{b_TypeId}
<if test="environment != null">
<include refid="environmentCondition"></include>
</if>
</select>
<sql id="environmentCondition">
AND environment = #{environment}
</sql>
But instead of someRecordMap I want to return a plain data set, so that it stays backward compatible with my existing code.
So instead of using the MyBatis XML approach, I made a custom approach using reflection and annotations, like below.
Section: dynamic query based on conditions such as IGNORE_SOME_JOIN, IGNORE_SOME_STUB, IGNORE_SOME_EXT_FLAG:
@SqlQueries({@SqlQuery(name = "query1",
query = "select a,b," +
"c,d,e,f,g,wil.h," +
" j,h,i " +
START_DELIMITER + " " + IGNORE_SOME_JOIN + " " +
" ,some_message" + END_DELIMITER +
" FROM A_TABLE wil " +
START_DELIMITER + " " + IGNORE_SOME_JOIN + " " +
"LEFT OUTER JOIN B_TABLE wim on" +
" wil.unique_id = wim.unique_id" +
" and wim.created_date >= ? and wim.created_date <= ? " + END_DELIMITER +
" WHERE ( wil.created_date >= ? AND wil.created_date <= ? AND wil.primary_id = ? " +
START_DELIMITER + " " + IGNORE_SOME_STUB + " " +
" AND wil.unique_identifier_stub = ?::uuid " + END_DELIMITER +
START_DELIMITER + " " + IGNORE_SOME_EXT_FLAG +
" AND wil.some_ext_success_flag = ANY(?) " + END_DELIMITER + ")" +
"ORDER BY wil.created_date OFFSET ? LIMIT ? ")}
)
The parsing logic for the dynamic query looks like this:
abstract class ReportingQuery {
private static final Logger LOG = LoggerFactory.getLogger(ReportingQuery.class);
static final String START_DELIMITER = "#~";
static final String END_DELIMITER = "#~";
static final String REPLACEABLE_DELIMITER = "--";
/**
* Responsible for preparing the final query after applying the dynamic query criteria.
*
* @param className : ReportingQuery class reference
* @param methodName : query method name
* @param queryName : dynamic query name
* @param ignoreStrings : criteria to be applied to the dynamic query
* @return the final static query after applying the dynamic query criteria (ignoreStrings)
*/
static Optional<String> getQuery(Class<?> className, String methodName, String queryName, List<String> ignoreStrings) {
    StringBuilder builder = new StringBuilder();
    try {
        Method[] methods = className.getDeclaredMethods();
        Optional<String> queryString = Optional.empty();
        if (Arrays.stream(methods).anyMatch(x -> x.getName().equals(methodName))) {
            ReportingQuery.SqlQuery[] sqlQueries = Arrays.stream(methods)
                    .filter(x -> x.getName().equals(methodName))
                    .findFirst().get().getAnnotation(ReportingQuery.SqlQueries.class).value();
            if (Arrays.stream(sqlQueries).anyMatch(x -> x.name().equals(queryName))) {
                queryString = Optional.of(Arrays.stream(sqlQueries).filter(x -> x.name()
                        .equals(queryName)).findFirst().get().query());
            }
        }
        String[] token = new String[0];
        if (queryString.isPresent()) {
            token = queryString.get().split(START_DELIMITER);
        }
        // ... some logic to build the final query into `builder` based on the dynamic conditions ...
        return Optional.of(builder.toString());
    } catch (Exception e) {
        LOG.error("Failed to build query '{}' for method '{}'", queryName, methodName, e);
        return Optional.empty();
    }
}
/**
*
*/
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
@interface SqlQuery {
String name();
String query();
}
/**
*
*/
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
@interface SqlQueries {
SqlQuery[] value();
}
}
So if I have the condition IGNORE_SOME_JOIN, the final query produced by this logic would look like:
select a,b,c,d,e,f,g,wil.h,j,h,i FROM A_TABLE wil WHERE ( wil.created_date >= '2018-08-29T15:15:42.42'
AND wil.created_date <= '2018-08-30T15:15:42.42' AND wil.acct_id = 2000017
AND wil.unique_identifier_stub = 'a004f322-1003-40a7-a54b-f3b979744fd2'
AND wil.some_ext_success_flag = ANY('{"0","1"}')) ORDER BY wil.created_date OFFSET 0 LIMIT 500;
With the above I get the query as a string; then I run the code below and get a ResultSet:
try (PreparedStatement ps = con.prepareStatement(query)) {
prepareStatement(prepareStatementAndQueryList, ps, con);
try (ResultSet rs = ps.executeQuery()) {
// ... do something with the result set ...
}
}
But I want to do this with MyBatis instead of my own custom solution, which is a little error-prone and may not be efficient, since it relies heavily on reflection.
Yes, this is possible and is directly supported by MyBatis. Here is an example from the documentation:
#SelectProvider(type = UserSqlBuilder.class, method = "buildGetUsersByName")
List<User> getUsersByName(String name);
class UserSqlBuilder {
public static String buildGetUsersByName(final String name) {
return new SQL(){{
SELECT("*");
FROM("users");
if (name != null) {
WHERE("name like #{value} || '%'");
}
ORDER_BY("id");
}}.toString();
}
}
In the example above, getUsersByName is a mapper method and UserSqlBuilder is used by the mapper to generate the SQL query text dynamically based on the mapper parameters. It's exactly what you need and very similar to what ReportingQuery does.
You need to adapt your code so that your existing query generator conforms to the SelectProvider API, but this seems rather straightforward.
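As for getting a plain data set back instead of a mapped model class: a mapper method can declare List<Map<String, Object>> as its return type, and MyBatis maps each row to a column-label/value map, so no resultMap is needed. A rough sketch combining that with a provider follows; ReportingMapper, ReportingSqlBuilder and the column list are illustrative names, not taken from your code.
import java.util.List;
import java.util.Map;

import org.apache.ibatis.annotations.SelectProvider;
import org.apache.ibatis.jdbc.SQL;

public interface ReportingMapper {

    // Each row is returned as a Map keyed by column label, so no mapped model class is required.
    @SelectProvider(type = ReportingSqlBuilder.class, method = "buildReport")
    List<Map<String, Object>> selectReport(long uniqueId);
}

class ReportingSqlBuilder {

    // Builds the SQL text dynamically, analogous to what ReportingQuery.getQuery() produces.
    public static String buildReport(final long uniqueId) {
        return new SQL() {{
            SELECT("wil.some_identifier_stub AS identifier_stub, wil.x, wil.y, wil.z");
            FROM("TABLE_B wil");
            WHERE("wil.unique_id = #{value}");
            ORDER_BY("wil.created_date");
        }}.toString();
    }
}
The XML mapper equivalent is to declare resultType="map" on the <select>, which yields the same List<Map<String, Object>> result.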

Improve speed of insertion of big amount of data

I have a REST service that takes XML with 400_000 records; each record contains the following fields: code, type, price.
In the DB (MySQL) I have a table named PriceData with 2_000_000 rows. The purpose of this REST call is: select all PriceData rows from the DB matching the code,type pairs from the XML, replace the price of each PriceData with the price from the XML, and if there is no PriceData with that code,type, create a new one with the provided price.
Right now it works like this: select one PriceData from the DB according to the first record from the XML, set the new price or create a new PriceData, save it, and repeat these steps 400_000 times. (It takes about 5 minutes.)
I want to speed up this process.
First try:
Select 1000 elements at a time from PriceData, and when all elements have been selected, update them:
Code:
private void updateAll(final List<XmlData> prices/*data from xml*/) {
int end= 1000;
int begin= 0;
final List<PriceData> models = new ArrayList<>();
while(end != prices.size() || begin !=end){
models.addAll(dao.findByPrices(prices.subList(begin,end)));
begin = end;
end +=1000;
}
final Map<String,XmlData> xmlData= prices.stream()
.collect(Collectors.toMap(this::keyForPriceDate,e->e));
final Map<String,PriceData> modelMap = models.stream()
.collect(Collectors.toMap(this::keyForRowModel,e->e));
final List<PriceData> modelsToSave = new ArrayList<>();
for(final String key : xmlData.keySet()){
final XmlData price = xmlData.get(key);
PriceData model = modelMap.get(key);
if(model == null){
model = onEmptyPriceData(price);
}
model.setPrice(price.getPrice());
modelsToSave.add(model);
}
modelService.saveAll(modelsToSave);
}
I convert the two lists to maps so I can tell whether a PriceData already exists (the keys for xmlData and modelMap are built as code+type).
The findByPrices method creates a query in the following format:
select * from PriceData where (code = 123 and type = 'qwe') or (...) -- and this OR repeats 1000 times
Now it takes 2 minutes.
Second try:
Select all PriceData from the DB (2 million rows)
and use the algorithm above.
It takes 3 minutes. The first try is better, but in the future this service may receive 500_000 records, and I want to know which approach will be better in that scenario, or whether there is a better way to do this task.
My select method
public List<PriceData> findByPrices(final List<XmlData> selectData) {
final StringBuilder query = new StringBuilder("SELECT * from PriceData ");
query.append("WHERE \n");
final Iterator<XmlData> selectDataIterator = selectData.iterator();
while(selectDataIterator.hasNext()){
final XmlData data = selectDataIterator.next();
query.append("( \n")
.append("productCode = " + data.getProductId() + " \n")
.append(" AND type = " + data.getPriceind() + " \n")
.append(" ) \n");
if(selectDataIterator.hasNext()){
query.append("OR \n");
}
}
final SearchResult<PriceData> searchRes = search(query.toString());
// here I use a custom mapper that maps the result list to my objects
return searchRes.getResult();
}
You should use the MySQL INSERT ... ON DUPLICATE KEY UPDATE statement, combined with JDBC batch processing. This of course assumes that code,type is the primary key, or at least a unique index.
private void updateAll(final List<XmlData> prices) throws SQLException {
String sql = "INSERT INTO PriceData (code, type, price)" +
" VALUES (?,?,?)" +
" ON DUPLICATE KEY" +
" UPDATE price = ?";
try (PreparedStatement stmt = this.conn.prepareStatement(sql)) {
int batchSize = 0;
for (XmlData price : prices) {
if (batchSize == 1000) { // flush batch every 1000
stmt.executeBatch();
batchSize = 0;
}
stmt.setInt (1, price.getCode());
stmt.setString (2, price.getType());
stmt.setBigDecimal(3, price.getPrice());
stmt.setBigDecimal(4, price.getPrice());
stmt.addBatch();
batchSize++;
}
if (batchSize != 0)
stmt.executeBatch();
}
}
You can twiddle the batch size, but not flushing will use a lot of memory. I think 1000 statements per batch is good, but I have no numbers backing that.
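Two side notes that usually help with this kind of load, both assumptions since the connection setup isn't shown: run the whole batch in a single transaction instead of auto-commit, and, depending on the Connector/J version and statement shape, the rewriteBatchedStatements URL flag may let the driver collapse the batch into multi-row statements. A sketch:
// Sketch: run the whole batched upsert in one transaction instead of per-statement commits.
// The rewriteBatchedStatements flag is optional; whether the driver rewrites
// INSERT ... ON DUPLICATE KEY UPDATE into multi-row form depends on the Connector/J version.
String url = "jdbc:mysql://localhost:3306/pricedb?rewriteBatchedStatements=true";
try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
    conn.setAutoCommit(false);
    try {
        // ... run the batched INSERT ... ON DUPLICATE KEY UPDATE from updateAll(prices) on conn ...
        conn.commit();
    } catch (SQLException e) {
        conn.rollback();   // don't leave a half-applied batch behind
        throw e;
    }
}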

How to use AND SqlRestriction which contains OR condition inside?

I'm facing problems using Restrictions. I have a table employee with the following structure:
id : int (primary key)
create_date : datetime
modified_date: datetime
I'm using the following code to list an employee if it was created or modified within a particular time interval:
Criteria criteria = getSession().createCriteria(Employee.class);
criteria.add(Restrictions.eq("id", employeeId));
if (interval > 0) {
String sql = "{alias}.create_date > DATE_SUB(NOW(), INTERVAL " + interval + " SECOND) OR {alias}.modified_date > DATE_SUB(NOW(), INTERVAL " + interval + " SECOND)";
criteria.add(Restrictions.sqlRestriction(sql));
}
List<Employee> employeeList = criteria.list();
Please note that there is an OR condition inside the sqlRestriction.
Now suppose employeeId = 10 and interval = 3600: employeeList contains other employees along with id = 10, which should not happen.
Should I use Restrictions.and or Restrictions.conjunction to solve it? Or am I missing something else?
There's no magic here. Use the Restrictions.and method to group the two Criterion objects. Hibernate will group (parenthesize) the sub-expressions to achieve the desired result.
Criteria criteria = getSession().createCriteria(Employee.class);
Criterion whereClause = Restrictions.eq("id", employeeId);
if (interval > 0) {
String sql = "{alias}.create_date > DATE_SUB(NOW(), INTERVAL " + interval + " SECOND) OR {alias}.modified_date > DATE_SUB(NOW(), INTERVAL " + interval + " SECOND)";
Criterion andConjunction = Restrictions.and(
whereClause,
Restrictions.sqlRestriction(sql)
);
whereClause = andConjunction;
}
criteria.add(whereClause);
List<Employee> employeeList = criteria.list();
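For what it's worth, the same grouping can also be written without concatenating the interval into the SQL, by binding it as a parameter of the sqlRestriction. This is a sketch using org.hibernate.type.StandardBasicTypes and org.hibernate.type.Type; pick the type that matches your interval variable, and note the explicit parentheses so the surrounding AND groups correctly.
Criteria criteria = getSession().createCriteria(Employee.class);
criteria.add(Restrictions.eq("id", employeeId));
if (interval > 0) {
    // One parenthesized OR restriction; the interval is bound twice instead of being concatenated.
    criteria.add(Restrictions.sqlRestriction(
            "({alias}.create_date > DATE_SUB(NOW(), INTERVAL ? SECOND)"
                    + " OR {alias}.modified_date > DATE_SUB(NOW(), INTERVAL ? SECOND))",
            new Object[] { interval, interval },
            new Type[] { StandardBasicTypes.INTEGER, StandardBasicTypes.INTEGER }));
}
List<Employee> employeeList = criteria.list();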

Multiple updates in one sql query

I have a table in an H2 DB:
Order
--------
id (key)
MarketId1
MarketId2
MarketId3
ListName1
ListName2
ListName3
From XML I'm getting a list of ListOrder objects:
public final class ListOrder
{
public long listId;
public String Name;
}
So I have 3 prepared statements
"UPDATE Order set " + ListName1 + " = ? WHERE " + MarketId1 + " = ?"
"UPDATE Order set " + ListName2 + " = ? WHERE " + MarketId2 + " = ?"
"UPDATE Order set " + ListName3 + " = ? WHERE " + MarketId3 + " = ?"
Then, in a method, I prepare a list of PreparedStatements to execute:
final PreparedStatement statement1 = connection.prepareStatement(QUERY1);
final PreparedStatement statement2 = connection.prepareStatement(QUERY2);
final PreparedStatement statement3 = connection.prepareStatement(QUERY3);
for (ListOrder listOrder: listOrders)
{
statement1.setString(1, listOrder.Name);
statement1.setLong(2, listOrder.listId);
statement1.addBatch();
statement2.setString(1, listOrder.Name);
statement2.setLong(2, listOrder.listId);
statement2.addBatch();
statement3.setString(1, listOrder.Name);
statement3.setLong(2, listOrder.listId);
statement3.addBatch();
}
return new ArrayList<PreparedStatement>(){{add(statement1); add(statement2); add(statement3);}};
I'm a SQL noob. Is there any better way of doing this? I assume that MarketId1, MarketId2 and MarketId3 could be the same. The ListNames could be null (there will be at least one).
UPDATE:
In code I would write something like this (probably changing to a HashMap):
for (ListOrder listOrder: listOrders)
{
for(Order order : orders)
{
if(order.marketID1 == listOrder.listID)
order.listName1 = listOrder.Name; //break if no dups
if(order.marketID2 == listOrder.listID)
order.listName2 = listOrder.Name;
if(order.marketID3 == listOrder.listID)
order.listName3 = listOrder.Name;
}
}
You can use an UPDATE with comma-separated assignments:
UPDATE <TABLE>
SET COL1 = <VAL1>,
COL2= <VAL2>
WHERE <CONDITION>
Is this what you expect as one update query?
Unless you are trying to update the same record, there is no way to do this easily or efficiently in a single query. If it is the same record, you could use an OR (or an AND, if that is what you want), such as:
UPDATE Order
SET ListName1=?, ListName2=?, ListName3=?
WHERE MarketId1=? OR MarketId2=? OR MarketId3=?
You might also consider restructuring your table to use a one-to-many relationship, which might make your queries easier. For example:
Order
--------
id (key)
name
etc
Market_List
--------
id (key)
order_id (fk)
market
listname
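If you do move to that one-to-many layout, the three per-column statements collapse into a single batched UPDATE. A sketch: Market_List and its columns are the example schema above, and connection/listOrders are the variables from the question.
// Sketch: one batched UPDATE against the normalized Market_List table.
final String sql = "UPDATE Market_List SET listname = ? WHERE market = ?";
try (PreparedStatement statement = connection.prepareStatement(sql)) {
    for (ListOrder listOrder : listOrders) {
        statement.setString(1, listOrder.Name);
        statement.setLong(2, listOrder.listId);
        statement.addBatch();
    }
    statement.executeBatch();
}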
