I'm trying to run a select statement using NamedParameterJdbcTemplate.
select statement:
SELECT currency, SUM(amount) AS total
FROM table_name
WHERE user_id IN (:userIdList)
GROUP BY currency
DB Table has three columns:
user_id
currency
amount
Example table:
user_id | currency | amount
1       | EUR      | 9000
2       | EUR      | 1000
3       | USD      | 124
When I try to run this code:
namedParamJDBCTemplate.query(query,
        new MapSqlParameterSource("userIdList", userIdList),
        new ResultSetExtractor<Map>() {
            @Override
            public Map extractData(ResultSet resultSet) throws SQLException, DataAccessException {
                HashMap<String, Object> mapRet = new HashMap<String, Object>();
                while (resultSet.next()) {
                    mapRet.put(resultSet.getString("currency"), resultSet.getString("total"));
                }
                return mapRet;
            }
        });
I'm getting the result set as a map, but the amount values look like this:
EUR -> 10000.0E0
USD -> 124.0E0
When I run the same query directly in the DB (not via code), the result is fine, without the '0E0'.
How can I get just EUR -> 10000 and USD -> 124, without the '0E0'?
The .0E0 is the exponent part of the number, I think. So 124.0E0 stands for 124.0 multiplied by ten raised to the power of 0 (written 124 x 10^0). Anything raised to the power of 0 is 1, so you've got 124 x 1, which, of course, is the right value.
(If it was, e. g., 124.5E3, this would mean 124500.)
This notation is used more commonly to work with large numbers, because 5436.7E20 is much more readable than 543670000000000000000000.
Without knowing your database background, I can only suppose that this notation arises from the conversion of the numeric field to a string (in resultSet.getString("total")). Therefore, you should ask yourself whether you really need the result as a string (or whether you could just use .getFloat or so, also changing your HashMap's value type). If you do need a string, you still have some possibilities:
Convert the value to a string later → e. g. String.valueOf(resultSet.getFloat("total"))
Truncate the .0E0 → e. g. resultSet.getString("total").replace(".0E0", "") (Attention, of course this won't work if, for some reason, you get another suffix like .5E3; it will also cut off any positions after the decimal point)
Perhaps find a database, JDBC or driver setting that suppresses the E-Notation.
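To make the first option concrete, here is a minimal sketch of the extractor reading the aggregate as a number instead of a string. I'm using getBigDecimal rather than getFloat so the exact value is preserved; call toPlainString() on it later if you still need a string without E notation:
import java.math.BigDecimal;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.ResultSetExtractor;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;

// ...

Map<String, BigDecimal> totals = namedParamJDBCTemplate.query(query,
        new MapSqlParameterSource("userIdList", userIdList),
        new ResultSetExtractor<Map<String, BigDecimal>>() {
            @Override
            public Map<String, BigDecimal> extractData(ResultSet resultSet)
                    throws SQLException, DataAccessException {
                Map<String, BigDecimal> mapRet = new HashMap<String, BigDecimal>();
                while (resultSet.next()) {
                    // getBigDecimal keeps the exact numeric value of SUM(amount);
                    // totals.get("EUR").toPlainString() later gives a plain decimal
                    // string without any E notation.
                    mapRet.put(resultSet.getString("currency"), resultSet.getBigDecimal("total"));
                }
                return mapRet;
            }
        });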
I am trying to do a windowed aggregation query on a data stream that contains over 40 attributes in Flink. The stream's schema contains an epoch timestamp which I want to use for the WatermarkStrategy so I can actually define tumbling windows over it.
I know from the docs that you can define a timestamp using the SQL API in a CREATE TABLE query by first using TO_TIMESTAMP_LTZ on the epochs to convert them to a proper timestamp, which can then be used in the following WATERMARK FOR statement. Since I have a really huge schema though, I want to deserialise and provide the schema NOT by writing out the complete CREATE TABLE statement containing all columns BUT by using a custom class derived from the proto file that contains the schema. As far as I know, this is only possible by providing a deserializer for the KafkaSourceBuilder and calling the returns function of the stream with the class generated from the proto file by protoc. This means that I have to define the table using the Stream API.
Inspired by the answer to this question, I do it like this:
WatermarkStrategy<Row> watermarkStrategy = WatermarkStrategy
        .<Row>forBoundedOutOfOrderness(Duration.ofSeconds(10))
        .withTimestampAssigner((event, ts) -> (Long) event.getField("ts"));

tableEnv.createTemporaryView(
        "bidevents",
        stream
                .returns(BiddingEvent.BidEvent.class)
                .map(e -> Row.of(
                        e.getTracking().getCampaign().getId(),
                        e.getTracking().getAuction().getId(),
                        Timestamp.from(Instant.ofEpochSecond(e.getTimestamp().getMilliseconds() / 1000))
                ))
                .returns(Types.ROW_NAMED(new String[] {"campaign_id", "auction_id", "ts"}, Types.STRING, Types.STRING, Types.SQL_TIMESTAMP))
                .assignTimestampsAndWatermarks(watermarkStrategy)
);
tableEnv.executeSql("DESCRIBE bidevents").print();
Table resultTable = tableEnv.sqlQuery("" +
"SELECT " +
" TUMBLE_START(ts, INTERVAL '1' DAY) AS window_start, " +
" TUMBLE_END(ts, INTERVAL '1' DAY) AS window_end, " +
" campaign_id, " +
" count(distinct auction_id) auctions " +
"FROM bidevents " +
"GROUP BY TUMBLE(ts, INTERVAL '1' DAY), campaign_id");
DataStream<Row> resultStream = tableEnv.toDataStream(resultTable);
resultStream.print();
env.execute();
I get this error:
Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Window aggregate can only be defined over a time attribute column, but TIMESTAMP(9) encountered.
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372) ~[flink-dist-1.15.1.jar:1.15.1]
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist-1.15.1.jar:1.15.1]
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) ~[flink-dist-1.15.1.jar:1.15.1]
at org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap.runApplicationEntryPoint(ApplicationDispatcherBootstrap.java:291) ~[flink-dist-1.15.1.jar:1.15.1]
This seems kind of logical, since in line 3 I cast a java.sql.Timestamp to a Long value, which it is not (though the stack trace does not indicate that an error occurred during the cast). But when I do not convert the epoch (a Long) to a Timestamp in the map statement, I get this exception:
"Cannot apply '$TUMBLE' to arguments of type '$TUMBLE(<BIGINT>, <INTERVAL DAY>)'"
How can I assign the watermark AFTER the map-statement and use the column in the later SQL Query to create a tumbling window?
======UPDATE=====
Thanks to a comment from David, I understand that I need the column to be of type TIMESTAMP(p) with precision p <= 3. To my understanding this means that my timestamp may not be more precise than full milliseconds. So I tried different ways to create Java timestamps (java.sql.Timestamp and java.time.LocalDateTime) that correspond to the Flink timestamps.
Some examples are:
1. Trying to convert epochs into a LocalDateTime by setting the nanoseconds (the second parameter of ofEpochSecond) to 0:
LocalDateTime.ofEpochSecond(e.getTimestamp().getMilliseconds() / 1000, 0, ZoneOffset.UTC )
2. After reading the answer from Svend in this question, who uses LocalDateTime.parse on timestamps that look like "2021-11-16T08:19:30.123", I tried this:
LocalDateTime.parse(
        DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss").format(
                LocalDateTime.ofInstant(
                        Instant.ofEpochSecond(e.getTimestamp().getMilliseconds() / 1000),
                        ZoneId.systemDefault()
                )
        )
)
As you can see, the timestamps even have only second granularity (which I checked by looking at the printed output of the stream I created), which I assume should mean they have a precision of 0. But when I use this stream to define a table/view, the column once again has the type TIMESTAMP(9).
3. I also tried it with SQL timestamps:
new Timestamp(e.getTimestamp().getMilliseconds())
This also did not change anything. I somehow always end up with a precision of 9.
Can somebody please help me how I can fix this?
OK, I found the solution to the problem. If you have a stream containing a timestamp that you want to define as the event-time column for watermarks, you can use this approach:
Table inputTable = tableEnv.fromDataStream(
        stream,
        Schema.newBuilder()
                .column("campaign_id", "STRING")
                .column("auction_id", "STRING")
                .column("ts", "TIMESTAMP(3)")
                .watermark("ts", "SOURCE_WATERMARK()")
                .build()
);
The important part is that you can "cast" the timestamp ts "down" from TIMESTAMP(9) to TIMESTAMP(3) (or any other precision below 4), and that you can declare this column as the one carrying the watermark.
Another point that seems important to me: only timestamps of type java.time.LocalDateTime actually worked for later use as watermarks for tumbling windows.
Any other attempts to influence the precision of the timestamps by creating java.sql.Timestamp or java.time.LocalDateTime differently failed. This seemed to be the only viable way.
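For completeness, a rough sketch of how the map step can produce that java.time.LocalDateTime value for ts before the table is created with fromDataStream above. The event accessors and field names are the ones from my question, and rowStream is just an illustrative variable name; treat this as a sketch rather than a drop-in snippet:
// Emit java.time.LocalDateTime for "ts" and declare it as Types.LOCAL_DATE_TIME,
// so the table created via fromDataStream sees a timestamp column that can be
// narrowed to TIMESTAMP(3) in the Schema.
DataStream<Row> rowStream = stream
        .returns(BiddingEvent.BidEvent.class)
        .map(e -> Row.of(
                e.getTracking().getCampaign().getId(),
                e.getTracking().getAuction().getId(),
                LocalDateTime.ofInstant(
                        Instant.ofEpochMilli(e.getTimestamp().getMilliseconds()),
                        ZoneOffset.UTC)))
        .returns(Types.ROW_NAMED(
                new String[] {"campaign_id", "auction_id", "ts"},
                Types.STRING, Types.STRING, Types.LOCAL_DATE_TIME));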
I have a database with 300,000 rows, and I need to filter some rows with an algorithm.
protected boolean validateMatch(DbMatch m) throws MatchException, NotSupportedSportException {
    // expensive part
    List<DbMatch> hh = sd.getMatches(DateService.beforeDay(m.getStart()), m.getHt(), m.getCountry(), m.getSportID());
    List<DbMatch> ah = sd.getMatches(DateService.beforeDay(m.getStart()), m.getAt(), m.getCountry(), m.getSportID());
    ....
My Hibernate DAO function for loading data from MySQL is called twice for every element of the initial list:
public List<DbMatch> getMatches(Date before, String team, String country, int sportID) throws NotSupportedSportException {
    // Match_soccer where date between :start and :end
    Criteria criteria = session.createCriteria(DbMatch.class);
    criteria.add(Restrictions.le("start", before));
    criteria.add(Restrictions.disjunction()
            .add(Restrictions.eq("ht", team))
            .add(Restrictions.eq("at", team)));
    criteria.add(Restrictions.eq("country", country));
    criteria.add(Restrictions.eq("sportID", sportID));
    criteria.addOrder(Order.desc("start"));
    return criteria.list();
}
Example of how I try to filter the data:
List<DbMatch> filter(List<DbMatch> mSet) throws MatchException, NotSupportedSportException {
    List<DbMatch> filtered = new ArrayList<>();
    for (DbMatch m : mSet) {
        if (validateMatch(m)) {
            filtered.add(m);
        }
    }
    return filtered;
}
(1) I tried different criteria settings and timed the function with a stopwatch. When I run filter(matches) with a list of 1,000 matches, my program takes 3 min 21 s 659 ms.
(2) When I remove criteria.addOrder(Order.desc("start")); the program finishes filtering after 3 min 12 s 811 ms.
(3) But if I remove criteria.addOrder(Order.desc("start")); and add criteria.setMaxResults(1); the result is 22 s 311 ms.
Using the last configuration, filtering all 300,000 records would take about 22.3 * 300 ≈ 6,700 s (~1.9 h), while with the first configuration it would be roughly 200 * 300 ≈ 60,000 s (~17 h).
If I want to use the criteria without the order and limit, I must be sure that my table is sorted by date in the database, because it is important to get the latest match.
All data is stored in the matches table.
Table indexes:
Table, Non_unique, Key_name, Seq_in_index, Column_name, Collation, Cardinality, Sub_part, Packed, Null, Index_type, Comment, Index_comment
matches, 0, PRIMARY, 1, mid, A, 220712, , , , BTREE, ,
matches, 0, UK_kcenwf4m58fssuccpknl1v25v, 1, beid, A, 220712, , , YES, BTREE, ,
UPDATED
After adding ALTER TABLE matches ADD INDEX (sportID, country); the program time decreased to 15 s for 1,000 matches. But if I do not use the order by and add the limit, I only need to wait 4 s for 1,000 matches.
What should I do in this situation to improve execution speed?
Your first order of business is to figure out how long each component takes to process the request.
Find out the SQL query generated by the ORM and run it manually in MySQL Workbench to see how long it takes (non-cached). You can also use EXPLAIN on it to check the index usage.
If it's fast enough, then it's your Java code that's taking the time and you need to optimize your algorithm. You can use JConsole to dig further into that.
If you identify which component is taking longer you can post here with your analysis and we can make suggestions accordingly.
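For example, to get at the SQL that Hibernate generates (so you can paste it into Workbench and EXPLAIN it), you could switch on SQL logging. A minimal sketch, assuming a classic hibernate.cfg.xml-based configuration; adapt the property wiring to however you build your SessionFactory:
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Enable Hibernate SQL logging so the generated query can be copied into
// MySQL Workbench and analysed there with EXPLAIN.
Configuration cfg = new Configuration()
        .configure()                                  // loads hibernate.cfg.xml
        .setProperty("hibernate.show_sql", "true")    // print each SQL statement
        .setProperty("hibernate.format_sql", "true"); // pretty-print it
SessionFactory sessionFactory = cfg.buildSessionFactory();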
I want to retrieve the record whose date field value is closest to a given date. How should I proceed?
Below is the table:
id | employeeid | region | startdate  | enddate
1  | 1234       | abc    | 2014-11-24 | 2015-01-17
2  | 1234       | xyz    | 2015-01-18 | 9999-12-31
Here, I should retrieve the record whose enddate is closest to the startdate of another record, say '2015-01-18', so it should retrieve the 1st record. I tried the following queries:
1.
SELECT l.region
FROM ABC.location l where l.EmployeeId=1234
ORDER BY ABS( DATEDIFF('2015-01-18',l.Enddate) );
2.
SELECT l.region
FROM ABC.location l where l.EmployeeId=1234
ORDER BY ABS( DATEDIFF(l.Enddate,'2015-01-18') );
But none of them is working. Kindly help me with this.
Thanks,
Poorna.
You might want to try this:
Query query = session.createQuery(
        "SELECT l.region, ABS(DATEDIFF('2015-01-18', l.Enddate)) as resultDiff " +
        "FROM ABC.location l " +
        "WHERE l.EmployeeId = 1234 " +
        "ORDER BY resultDiff");
query.setFirstResult(0);
query.setMaxResults(1);
List result = query.list();
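Since the query selects two expressions (the region and the computed difference), Hibernate returns each row as an Object[]. A small usage sketch:
// Index 0 is the region, index 1 is the resultDiff value.
if (!result.isEmpty()) {
    Object[] row = (Object[]) result.get(0);
    String region = (String) row[0];
    System.out.println("Closest region: " + region);
}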
Well, Unix timestamps are expressed as a number of seconds since 01 Jan 1970, so if you subtract one from the other you get the difference in seconds. The difference in days is then simply a matter of dividing by the number of seconds in a day:
(date_modified - date_submitted) / (24*60*60)
or
(date_modified - date_submitted) / 86400
Then take the minimum of them.
Refer to this question; it may be helpful: Selecting the minimum difference between two dates in Oracle when the dates are represented as UNIX timestamps
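As a quick illustration of the arithmetic in plain Java (the epoch values are made up):
// Two Unix timestamps in seconds since 1970-01-01 (illustrative values).
long dateSubmitted = 1420070400L; // 2015-01-01 00:00:00 UTC
long dateModified  = 1420416000L; // 2015-01-05 00:00:00 UTC

// Subtracting gives the difference in seconds; dividing by 86400 (24*60*60)
// gives the difference in whole days.
long diffDays = (dateModified - dateSubmitted) / 86400;
System.out.println(diffDays); // prints 4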
Using Google's "electric meter" example from a few years back, we would have:
MeterID (Datastore Key) | MeterDate (Date) | ReceivedDate (Date) | Reading (double)
Presuming we received updated info (say, an out-of-calibration or busted meter, etc.) and put in a new row with the same MeterID and MeterDate, using a window function to grab the newest ReceivedDate for each ID+MeterDate pair would only cost more if there are multiple records for that pair, right?
Sadly, we are flying without a SQL expert, but it seems like the query should look like:
SELECT
meterDate,
NTH_VALUE(reading, 1) OVER (PARTITION BY meterDate ORDER BY receivedDate DESC) AS reading
FROM [BogusBQ:TableID]
WHERE meterID = {ID}
AND meterDate BETWEEN {startDate} AND {endDate}
Am I missing anything else major here? Would adding 'AND NOT IS_NAN(reading)' cause the Window Function to return the next row, or nothing? (Then we could use NaN to signify "deleted".)
Your SQL looks good. A couple of pieces of advice:
- I would use FIRST_VALUE to be a bit more explicit, but otherwise it should work.
- If you can, use NULL instead of NaN. Or better yet, add a new BOOLEAN column to mark deleted rows.
I use MongoTemplate from Spring to access a MongoDB.
final Query query = new Query(Criteria.where("_id").exists(true));
query.with(new Sort(Direction.ASC, "FIRSTNAME", "LASTNAME", "EMAIL"));
if (count > 0) {
    query.limit(count);
}
query.skip(start);
query.fields().include("FIRSTNAME");
query.fields().include("LASTNAME");
query.fields().include("EMAIL");
return mongoTemplate.find(query, User.class, "users");
I generated 400,000 records in my MongoDB.
When asking for the first 25 users without the sort line written above, I get the result in less than 50 milliseconds.
With sort it lasts over 4 seconds.
I then created indexes for FIRSTNAME, LASTNAME, and EMAIL (single indexes, not a combined one):
mongoTemplate.indexOps("users").ensureIndex(new Index("FIRSTNAME", Order.ASCENDING));
mongoTemplate.indexOps("users").ensureIndex(new Index("LASTNAME", Order.ASCENDING));
mongoTemplate.indexOps("users").ensureIndex(new Index("EMAIL", Order.ASCENDING));
After creating these indexes the query again lasts over 4 seconds.
What was my mistake?
-- edit
MongoDB writes this on the console...
Thu Jul 04 10:10:11.442 [conn50] query mydb.users query: { query: { _id: { $exists: true } }, orderby: { LASTNAME: 1, FIRSTNAME: 1, EMAIL: 1 } } ntoreturn:25 ntoskip:0 nscanned:382424 scanAndOrder:1 keyUpdates:0 numYields: 2 locks(micros) r:6903475 nreturned:25 reslen:3669 4097ms
You have to create a compound index on FIRSTNAME, LASTNAME, and EMAIL, in this order and with all of them ascending.
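For example, a sketch using the same Index API as in the question (chaining the on(...) calls is what makes it one compound index instead of three separate ones):
// One compound index whose key order matches the query's sort order.
mongoTemplate.indexOps("users").ensureIndex(
        new Index()
                .on("FIRSTNAME", Order.ASCENDING)
                .on("LASTNAME", Order.ASCENDING)
                .on("EMAIL", Order.ASCENDING));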
Thu Jul 04 10:10:11.442 [conn50] query mydb.users query:
{ query: { _id: { $exists: true } }, orderby: { LASTNAME: 1, FIRSTNAME: 1, EMAIL: 1 } }
ntoreturn:25 ntoskip:0 nscanned:382424 scanAndOrder:1 keyUpdates:0 numYields: 2
locks(micros) r:6903475 nreturned:25 reslen:3669 4097ms
Possible bad signs:
Your scanAndOrder is coming up true (scanAndOrder:1); correct me if I am wrong.
It only has to return 25 documents (ntoreturn:25), but it is scanning 382,424 documents (nscanned:382424).
For indexed queries, nscanned is the number of index keys in the range that Mongo scanned, and nscannedObjects is the number of documents it looked at to get to the final result. nscannedObjects includes at least all the documents returned, even if Mongo could tell just by looking at the index that the document was definitely a match. Thus, you can see that nscanned >= nscannedObjects >= n always.
Context of Question:
Case 1: When asking for the first 25 Users without using the above written sort line, I get the result within less then 50 milliseconds.
Case 2: With sort it lasts over 4 seconds.
query.with(new Sort(Direction.ASC, "FIRSTNAME", "LASTNAME", "EMAIL"));
Since in this case there is no suitable index, it is doing what is described here:
This means MongoDB had to batch up all the results in memory, sort them, and then return them. Infelicities abound. First, it costs RAM and CPU on the server. Also, instead of streaming my results in batches, Mongo just dumps them all onto the network at once, taxing the RAM on my app servers. And finally, Mongo enforces a 32MB limit on data it will sort in memory.
Case 3: created indexes for FIRSTNAME, LASTNAME, EMAIL. Single indexes, not combined ones
I guess it is still not fetching data from an index. You have to tune your indexes according to the sort order:
Sort Fields (ascending / descending only matters if there are multiple sort fields)
Add sort fields to the index in the same order and direction as your query's sort
For more details, check this
http://emptysqua.re/blog/optimizing-mongodb-compound-indexes/
Possible Answer:
In the query, the sort order orderby: { LASTNAME: 1, FIRSTNAME: 1, EMAIL: 1 } is different from the order you have specified in:
mongoTemplate.indexOps("users").ensureIndex(new Index("FIRSTNAME", Order.ASCENDING));
mongoTemplate.indexOps("users").ensureIndex(new Index("LASTNAME", Order.ASCENDING));
mongoTemplate.indexOps("users").ensureIndex(new Index("EMAIL", Order.ASCENDING));
I guess Spring API might not be retaining order:
https://jira.springsource.org/browse/DATAMONGO-177
When I try to sort on multiple fields the order of the fields is not maintained. The Sort class is using a HashMap instead of a LinkedHashMap so the order they are returned is not guaranteed.
Could you mention your Spring jar version?
Hope this answers your question.
Correct me where you feel I might be wrong, as I am a little rusty.