I have a request to schedule some reports one after another in Oracle BI Publisher.
The scheduler would start running at 7:00 PM and finish all the reports by about 7:00 AM the next day.
My idea was to create a trigger that checks the BI Publisher database for whether another specific report has already run, and only then runs the next report.
The trigger query is as below:
select "XMLP_SCHED_JOB"."STATUS" as "STATUS",
"XMLP_SCHED_JOB"."CREATED" as "CREATED",
"XMLP_SCHED_JOB"."USER_JOB_NAME" as "USER_JOB_NAME",
"XMLP_SCHED_JOB"."JOB_TYPE" as "JOB_TYPE"
from "DEV1_BIPLATFORM"."XMLP_SCHED_JOB" "XMLP_SCHED_JOB"
where "XMLP_SCHED_JOB"."STATUS" !='R'
and "XMLP_SCHED_JOB"."CREATED" BETWEEN
    (SELECT CASE WHEN TRUNC(SYSDATE, 'HH24') < TRUNC(SYSDATE) + 7/24
                 THEN TRUNC(SYSDATE - 1) + 7/24
                 ELSE TRUNC(SYSDATE) + 7/24
            END
     FROM DUAL)
    AND SYSDATE
and "XMLP_SCHED_JOB"."USER_JOB_NAME" ='test'
and "XMLP_SCHED_JOB"."JOB_TYPE" ='I'
When I run it in the Oracle database I get results normally, but when I enter it as a BI Publisher trigger query, the logs show this error:
oracle.xdo.XDOException: oracle.xdo.XDOException: oracle.xml.parser.v2.XMLParseException: Expected name instead of .
I only get the error when the TRUNC( SYSDATE, 'HH24' ) < TRUNC(SYSDATE) + 7/24 comparison is present in the query.
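A note on the error (my reading, not confirmed in the thread): BI Publisher stores trigger queries inside an XML data model, and oracle.xml.parser.v2.XMLParseException suggests the bare < in the comparison is being parsed as the start of an XML tag. Escaping it as &lt; is one option; another sketch is to flip the comparison so the query contains no < character at all:

```sql
-- Same condition as in the question, rewritten with > instead of <
-- so the data-model XML never contains a literal '<' (a sketch).
SELECT CASE WHEN TRUNC(SYSDATE) + 7/24 > TRUNC(SYSDATE, 'HH24')
            THEN TRUNC(SYSDATE - 1) + 7/24
            ELSE TRUNC(SYSDATE) + 7/24
       END
FROM DUAL
```

The rewritten CASE is logically equivalent to the original, since a < b and b > a are the same condition.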
You can create a standard request set and then schedule that; it will run the reports one after another, in order. One limitation of this approach, however, is that request sets allow only one report layout per request/data definition.
I have a table Order_Status in Oracle DB 11, which stores an order id and all of its statuses, for example:

order id    status                  date
100         at warehouse            01/01/18
100         dispatched              02/01/18
100         shipped                 03/01/18
100         at customer doorstep    04/01/18
100         delivered               05/01/18
A few days back, some orders were stuck at the warehouse, but it is not feasible to check the status of each order every day, so no one noticed until we received a big escalation mail from the business. This raised the requirement for a system, or a daily report, that shows all orders along with their present status, with a condition such as: if more than 2 days have passed and no new status has been recorded in the DB for an order, mark it in red or highlight it.
We already have some of our reports scheduled via cron, but even if I create a SQL query for the status report, it will not highlight the pending orders.
Note: SQL, Java, or other tool suggestions are all welcome, but the order of preference is SQL, then a tool, then Java.
I am assuming that your requirement is: "the status should change at least every 2 days; if it does not, something is wrong".
select *
from (
    select order_id,
           status,
           update_date,
           -- latest row per order; 'rnk' avoids using the keyword RANK as an alias
           RANK() OVER (PARTITION BY order_id ORDER BY update_date DESC) as rnk
    from Order_Status
    where status != 'delivered'
)
where update_date < sysdate - 2
  and rnk = 1
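If the report also needs the "mark it in red" signal from the question, one option (a sketch, assuming the same Order_Status columns as above) is to compute a flag column in SQL and let the report layout color the rows where it is set:

```sql
-- Latest status per order, plus a stale flag the report layout can use
-- for highlighting; 'stale_flag' is an illustrative name, not from the question.
select order_id,
       status,
       update_date,
       case when update_date < sysdate - 2 then 'Y' else 'N' end as stale_flag
from (
    select order_id,
           status,
           update_date,
           rank() over (partition by order_id order by update_date desc) as rnk
    from Order_Status
    where status != 'delivered'
)
where rnk = 1
```

This returns every undelivered order (not only the stuck ones), so the daily report shows all current statuses and the highlighting condition stays in one place.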
In my application I use a native query to fetch data:
SELECT
time_month,
CCC.closing_strike_value AS closing_strike_value,
CC.opening_strike_value AS opening_strike_value,
C.closing_time AS closing_time,
C.max_ask AS max_ask,
C.max_bid AS max_bid,
C.max_point_value AS max_point_value,
C.max_strike_value AS max_strike_value,
C.min_ask AS min_ask,
C.min_bid AS min_bid,
C.min_point_value AS min_point_value,
C.min_strike_value AS min_strike_value,
C.opening_time AS opening_time,
C.option_name AS option_name,
C.opening_time AS id
FROM
(SELECT
DATE(DATE_FORMAT(FROM_UNIXTIME(opening_time / 1000), '%Y-%m-01')) AS time_month,
MAX(closing_time) AS closing_time,
MAX(max_ask) AS max_ask,
MAX(max_bid) AS max_bid,
MAX(max_point_value) AS max_point_value,
MAX(max_strike_value) AS max_strike_value,
MIN(min_ask) AS min_ask,
MIN(min_bid) AS min_bid,
MIN(min_point_value) AS min_point_value,
MIN(min_strike_value) AS min_strike_value,
MIN(opening_time) AS opening_time,
option_name
FROM
candle_option
WHERE
option_name LIKE CONCAT('%', :optionName, '%')
AND opening_time BETWEEN :from AND :to
GROUP BY DATE(DATE_FORMAT(FROM_UNIXTIME(opening_time / 1000), '%Y-%m-01'))) C
JOIN
candle_option CC ON CC.opening_time = C.opening_time
AND CC.option_name = C.option_name
JOIN
candle_option CCC ON CCC.closing_time = C.closing_time
AND CCC.option_name = C.option_name
ORDER BY C.opening_time
Every column listed in the first select statement corresponds to a field within the Entity which I retrieve.
This native query works fine and returns valid results. However, there is one crucial problem: it is dramatically slow even on small amounts of data when executed through Hibernate, yet fast when I run the same query directly. For instance, executed by Hibernate it takes about 1 minute to return results; run directly from MySQL Workbench, it takes only about 100 milliseconds.
I wonder why this is happening and would appreciate any hint or help.
Thanks!
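No answer is given in the thread, but a common first diagnostic step for "fast in Workbench, slow through Hibernate" is to confirm exactly what SQL Hibernate sends and where the time goes. Assuming a standard hibernate.properties or persistence.xml setup, these settings (a sketch, not specific to this application) log the generated SQL and per-session timings:

```properties
# Log the exact SQL Hibernate executes, pretty-printed
hibernate.show_sql=true
hibernate.format_sql=true
# Log per-session statistics (JDBC prepare/execute time, entities loaded)
hibernate.generate_statistics=true
```

Comparing the logged statement and its bind parameters against the Workbench version usually shows whether the difference is in the query itself (e.g. bound parameters producing a different execution plan) or in result mapping.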
I have a SQL query that counts rows per date truncation (month, day, hour); think of a history graph. The query works fine when executed in pgAdmin, but fails in Java using EclipseLink.
pgAdmin query:
SELECT date_trunc( 'hour', delivered_at ),
COUNT(date_trunc( 'hour', delivered_at )) AS num
FROM messages
WHERE channel_type='EMAIL'
AND created_at>='2016-02-28 16:01:08.882'
AND created_at<='2016-02-29 16:01:08.882'
GROUP BY date_trunc( 'hour', delivered_at );
JPQL Named query
SELECT FUNCTION('date_trunc', 'hour', m.deliveredAt ),
COUNT(FUNCTION('date_trunc', 'hour', m.deliveredAt )) AS num
FROM Message m
WHERE m.channelType = :channelType
AND m.createdAt >= :fromDate
AND m.createdAt <= :toDate
GROUP BY FUNCTION('date_trunc', 'hour', m.deliveredAt )
EclipseLink debugging log:
SELECT date_trunc(?, delivered_at), COUNT(date_trunc(?, delivered_at)) FROM messages
WHERE (((channel_type = ?) AND (created_at >= ?)) AND (created_at <= ?)) GROUP BY date_trunc(?, delivered_at)
bind => [hour, hour, EMAIL, 2015-12-27 00:00:00.0, 2015-12-27 00:00:00.0, hour]
Error:
ERROR: column "messages.delivered_at" must appear in the GROUP BY
clause or be used in an aggregate function Position: 23
PostgreSQL log:
2016-03-01 13:22:08 CET ERROR: column "messages.delivered_at" must
appear in the GROUP BY clause or be used in an aggregate function at
character 23 2016-03-01 13:22:08 CET STATEMENT: SELECT date_trunc($1,
delivered_at), COUNT(delivered_at) FROM messages WHERE (((channel_type
= $2) AND (created_at >= $3)) AND (created_at <= $4)) GROUP BY date_trunc($5, delivered_at) 2016-03-01 13:22:08 CET LOG: execute
S_2: SELECT 1
If I take the query EclipseLink logged, substitute the bound variables inline, and execute it in pgAdmin, it works. What is going on here?
Edit: Just to clarify, it also works using em.createNativeQuery.
PostgreSQL cannot tell that the bound parameter used in the SELECT list and the one used in the GROUP BY clause carry the same value, so it rejects the grouped query. That is why native SQL with inline parameters works, while the JPA-generated SQL, which binds parameters by default, fails.
One solution is to turn off parameter binding by passing "eclipselink.jdbc.bind-parameters" with a value of "false", either as a query hint on the specific query, or as a persistence-unit property to turn off parameter binding by default for all queries.
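For reference, the persistence-unit form of that property looks like this (a sketch; "myUnit" is a placeholder name, while the property key is EclipseLink's documented "eclipselink.jdbc.bind-parameters"):

```xml
<!-- persistence.xml: disable parameter binding for all queries in this unit -->
<persistence-unit name="myUnit">
  <properties>
    <property name="eclipselink.jdbc.bind-parameters" value="false"/>
  </properties>
</persistence-unit>
```

The per-query alternative is to call query.setHint("eclipselink.jdbc.bind-parameters", "false") on just the named query that fails, which avoids giving up bound parameters everywhere.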
On PostgreSQL you can also use the following syntax:
GROUP BY 1
This groups the aggregate by the first attribute in the SELECT clause, so the expression (and its bound parameter) is not repeated in the GROUP BY clause. That might be helpful here.
I am trying to execute a SQL query through Hibernate because of its complexity. To do so, I am using the following method:
session.createSQLQuery(sSql).list();
And the SQL query is:
String sSql = "select timestamp, value, space_name, dp_id, dp_description from "+sTable+
" inner join space_datapoint on id = dp_id and timestamp between "+
" (select max(timestamp)-30 day from "+sTable+") and (select max(timestamp) day from "+sTable+")"+
" order by space_name";
The SQL query retrieves a set of values by cross-referencing multiple tables/views; the result is a list of objects (different fields from the tables). I have tested the query in the database's SQL manager and it works. However, when I run it inside the Hibernate framework, it takes far too long (I had to stop the debugger after several minutes, whereas according to the tests it should take about 5 seconds). Do you know what the mistake could be, or a possible solution?
Thanks a lot in advance,
Using the Java client, I insert a series like this:
Serie serie1 =
new Serie.Builder(perfStat.pointCut).columns("time", "value").values(perfStat.start, perfStat.end - perfStat.start).build();
influxDB.write("pointcut_performance", TimeUnit.MICROSECONDS, serie1);
Grafana tries to run this query, which fails... It also fails in the influxdb admin tool:
select mean(value) from "com.xxx.databunker.salesforce.processing.jms.SalesForceLeadMessageListener.onMessage(Message)" where time > now() - 6h group by time(1s) order asc
I get this error: ERROR: Couldn't look up columns. If I take out the where clause, it runs:
select value from "com.springventuregroup.databunker.salesforce.processing.jms.SalesForceLeadMessageListener.onMessage(Message)"
I can't find this in the documentation. Any help much appreciated!
EDIT
The problem is: there is clearly data in the database that is queryable, as long as the query does not have a where clause. Why am I getting that error?
I had the exact same issue. After several tests I found out that the problem was how I was sending the time column.
If I had this data:
1326219450000 132850001 request http://yahoo.com 1
1326219449000 132840001 response http://tranco.com 1
then the error was thrown only when I added the "where time > now() - 1d" part; I could add other where clauses, but not that one. After dividing the time values I was sending by 1000, there were no more errors:
1412218778912 132830001 response http://google.com 1
1412218778720 132820001 request http://cranka.com 1
Now, if I leave both sets:
1412133515000 132870001 request http://penca.com 1
1412133515000 132860001 request http://cranka.com 1
1326219450000 132850001 request http://yahoo.com 1
1326219449000 132840001 response http://tranco.com 1
I can run the query just fine and Grafana works again:
select * from requests where time > now() - 1d
Here is a comment about InfluxDB taking time in seconds instead of milliseconds: https://github.com/influxdb/influxdb/issues/572
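The fix described above is just a unit conversion; a minimal sketch (the class and method names are illustrative, and the constant is one of the example timestamps above):

```java
public class TimestampFix {
    // The answer above found that dividing the millisecond epoch timestamps
    // by 1000 (i.e. converting to epoch seconds) made the time-range
    // queries work; this helper performs that conversion.
    static long millisToSeconds(long epochMillis) {
        return epochMillis / 1000L;
    }

    public static void main(String[] args) {
        System.out.println(millisToSeconds(1326219450000L)); // prints 1326219450
    }
}
```

Applied at write time, this means converting each point's time before handing it to the client, rather than changing the queries.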
You insert into "pointcut_performance" but select from "com.xxx.databunker.salesforce.processing.jms.SalesForceLeadMessageListener.onMessage(Message)" ...
Use list series to see the available series names.