I have a CSV datasource that looks something like this:
User,Site,Requests
user01,www.facebook.com,54220
user01,plusone.google.com,2015
user01,www.twitter.com,33564
user01,www.linkedin.com,54220
user01,weibo.com,2015
user02,www.twitter.com,33564
user03,www.facebook.com,54220
user03,plusone.google.com,2015
user03,www.twitter.com,33564
In the report I want to display at most the first 3 rows for each user, while the remaining rows only contribute to the group total. How do I limit the report to printing only 3 rows per group?
e.g.
User   | Site               | Requests
user01 | www.facebook.com   | 54220
       | plusone.google.com | 2015
       | www.twitter.com    | 33564
       |                    | 146034
user02 | www.twitter.com    | 33564
       |                    | 33564
user03 | www.facebook.com   | 54220
       | plusone.google.com | 2015
       | www.twitter.com    | 33564
       |                    | 89799
It is really just the line limiting I am struggling with; the rest is working just fine.
I found a way to do it. If anyone can come up with a more elegant answer I would be happy to see it, as this feels a bit hacky!
For each item in the detail band:
<reportElement ... isRemoveLineWhenBlank="true">
    <printWhenExpression><![CDATA[$V{userGroup_COUNT} < 4]]></printWhenExpression>
</reportElement>
where userGroup is the group I am grouping by. I only seemed to need the isRemoveLineWhenBlank attribute on the first element.
You may consider using a subreport: query the grouping fields in the main report, then pass the grouping fields as parameters into the subreport. The merit of this method is that it stops the report engine from actually looping through all the unneeded rows (even though they are not shown) and from spending unnecessary server-to-server or server-to-client bandwidth, especially when the returned dataset is large.
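For example, assuming a MySQL datasource, the subreport's query could push the limiting into SQL itself; the requests table and the $P{user} parameter here are illustrative, and the group total over all rows would then come from a separate aggregate query:
SELECT Site, Requests FROM requests WHERE User = $P{user} LIMIT 3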
I have the following test case where I want to pass '00:00:00.0' (date_suffix) in one example and not in the other.
However, with this approach it also appends a space in the first example, which has no date_suffix,
so it results in something like this:
// I need to get rid of the last space (after /17) in example 1.
example1. "1996/06/17 "
example2. "1996/06/17 00:00:00.0"
--
Then Some case:
| birthdate |
| 1996/06/17 <date_suffix> |
| 1987-11-08 <date_suffix> |
| 1998-07-20 <date_suffix> |
#example1
Examples:
| date_suffix |
| |
#example2
Examples:
| date_suffix |
| 00:00:00.0 |
What you want to do is not possible in Gherkin.
However, it seems like you are testing a date parser or validation tool through some other component.
By adding the time stamp to the date, you're adding incidental details to your scenario. It is not immediately apparent what they test, and they may be overlooked in the future.
Consider instead testing the parser/validator separately and directly.
Once you have confidence that the date parser works correctly, use a list of mixed dates for your current scenario, some with and some without the suffix.
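If you do test the parser directly, here is a minimal sketch of such a test subject, assuming java.time and the yyyy/MM/dd form from example 1 (the class and method names are illustrative):
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class BirthdateParser {

    // The bracketed section of the pattern is optional, so both
    // "1996/06/17" and "1996/06/17 00:00:00.0" parse successfully.
    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyy/MM/dd[ HH:mm:ss.S]");

    public static LocalDate parse(String input) {
        // trim() also disposes of the trailing space that an empty
        // <date_suffix> leaves behind in example 1.
        return LocalDate.parse(input.trim(), FORMAT);
    }
}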
Use a trim function to eliminate the leading and trailing spaces:
example = yourString.trim();
Imagine there are two entities, Children and Gifts. Let's say there are 1000 children and 10 gifts. Children will always try to grab available gifts, and gifts are tagged to children on a "first come, first served" basis.
Table structure
children table
+----+------+
| id | name |
+----+------+
| 1 | Sam |
| 2 | John |
| 3 | Sara |
+----+------+
gift table
+----+---------+-------------+
| id | gift | children_id |
+----+---------+-------------+
| 1 | Toy Car | 2 |
| 2 | Doll | 3 |
+----+---------+-------------+
Here children_id is the id of the child who grabbed the gift first.
In my application, I want to update the children_id in such a way that only the first child who initiated the request gets the gift and the rest get a GiftUnavailableException.
How will I ensure that, even if 1000 requests come in at a time for a specific gift, only the first one gets it? In other words, how do I ensure there are no race conditions on this update?
Are there any Spring-specific features that I can make use of, or are there other ways?
I'm using spring-boot as my backend.
I can't post a comment, so here I go!
I assume you are using Spring Data JPA.
Then you should use the @Transactional annotation. This means that every time you hit your database, you do a transaction:
Begin Transaction
Execute Transaction
Commit Transaction
Lots of useful information about this (read it!): Spring @Transactional - isolation, propagation
You will need to set your transaction isolation to SERIALIZABLE and maybe change the propagation behaviour.
And if you are not using Spring Data JPA... well, there is the synchronized keyword, but I think it would be awful to use it here.
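To make that concrete, here is a minimal sketch of the serializable-transaction idea; Gift, GiftRepository and GiftUnavailableException are assumed to be your own types (the names come from your question, the rest is illustrative):
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class GiftService {

    private final GiftRepository giftRepository; // assumed Spring Data JPA repository

    public GiftService(GiftRepository giftRepository) {
        this.giftRepository = giftRepository;
    }

    // SERIALIZABLE isolation makes the read-check-update below behave as if
    // the 1000 concurrent requests ran one after another.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void grabGift(long giftId, long childId) {
        Gift gift = giftRepository.findById(giftId)
                .orElseThrow(GiftUnavailableException::new);
        if (gift.getChildrenId() != null) {
            throw new GiftUnavailableException(); // someone grabbed it first
        }
        gift.setChildrenId(childId);
        giftRepository.save(gift);
    }
}
Note that under SERIALIZABLE the losing transactions may also surface as serialization/deadlock errors from the database, which you would then translate into your GiftUnavailableException.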
I have a problem with my job when I want to run a query with 2 context variables. I attached photos of my job and my components; when I run the job, it gives me this error:
Exception in component tMysqlInput_1 (facebook_amazon_us)
java.lang.NullPointerException
at mava.facebook_amazon_us_0_1.facebook_amazon_us.tWaitForFile_1Process(facebook_amazon_us.java:2058)
at mava.facebook_amazon_us_0_1.facebook_amazon_us.tMysqlConnection_1Process(facebook_amazon_us.java:798)
at mava.facebook_amazon_us_0_1.facebook_amazon_us.runJobInTOS(facebook_amazon_us.java:5363)
at mava.facebook_amazon_us_0_1.facebook_amazon_us.main(facebook_amazon_us.java:5085)
What I want to do in this job: I have a CSV file with multiple columns. The first one is called Reporting_Starts. I want to get the first record from that column and put it in the query for a select like:
SELECT * FROM my_table WHERE MONTH(my_table.Reporting_Starts)='"+context.month+"'
I cannot work out why my tJava_4 sees the variables but tMysqlInput doesn't.
In my tJava_4 I have the following code:
System.out.println(context.month);
Please let me know if you need any additional information about the job.
Thanks!
With all the iterate links you have, I'm guessing the code isn't executing in the order you expect. Could you please make the following changes:
Remove all the iterate links from tFileList_1
Reorganize your job as:
tMysqlConnection_1
|
OnSubjobOk
|
tWaitForFile_1
|
Iterate
|
tFileList_1 -- Iterate -- tJava_3
|
OnSubjobOk
|
tFileInputDelimited_1 -- Main -- tJavaRow_1
|
OnSubjobOk
|
tMysqlInput -- tMap -- tMysqlOutput (delete mode, set a column as delete key)
|
tFileInputDelimited -- tMap -- tMysqlOutput (insert csv)
|
OnSubjobOk
|
tFileCopy
First test with just this part. Then if it works, you can add the rest of your job.
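With that ordering, by the time tMysqlInput_1 runs, context.month has already been assigned in tJavaRow_1, so its query field (a sketch using the names from your question) no longer sees a null context variable:
"SELECT * FROM my_table WHERE MONTH(my_table.Reporting_Starts)='" + context.month + "'"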
I have a database with a table called Car. The car table looks like this:
+----+------+-------------+-----------+----------+------+
| Id | Name | Description | Make      | Model    | Year |
+----+------+-------------+-----------+----------+------+
| 1  | A    | something1  | Ford      | Explorer | 2010 |
| 2  | B    | something2  | Nissan    | Ultima   | 2005 |
| 3  | C    | something3  | Chevrolet | Malibu   | 2012 |
+----+------+-------------+-----------+----------+------+
Different pages on my website want to display different information. Some pages only want to display the name, others want to display the make and model, etc.
I have an API that the web calls to retrieve all this information. The API uses JPA and QueryDSL to communicate with the database and fetch information. I want to fetch only the information needed for a particular page. I'm thinking about adding some sort of builder pattern to my repository so that I only retrieve what I want, but I'm not quite sure how to go about it.
For example, my home page only wants to display the name of the car. So it'll call the HomeController, the controller will call the HomeService, and that will call the repository layer, something like this:
carRepository.getCarById(1).withName().build();
Some other page that wants to display the make and model would make a repo call like this:
carRepository.getCarById(1).withMake().withModel().build();
What is the best way to implement something like this in Java/JPA?
If I understand the question correctly, you want queries for different projections of your entities to be built dynamically.
In that case, dynamic entity graphs are what you want (see e.g. here: https://www.thoughts-on-java.org/jpa-21-entity-graph-part-2-define/). You start with an empty entity graph, and each call to one of your with() methods simply adds a field to the graph.
The base query remains unchanged; you just need to set the fetch graph hint (javax.persistence.fetchgraph) when calling build(). (Note that the samples in the above link use load graphs instead of fetch graphs; the subtle difference between the two is described here: What is the difference between FETCH and LOAD for Entity graph of JPA?)
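As a rough sketch of that idea with a plain JPA EntityManager (the builder type and its methods are illustrative, not an existing API; Car is your entity):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityGraph;
import javax.persistence.EntityManager;

public class CarQueryBuilder {

    private final EntityManager em;
    private final long id;
    private final EntityGraph<Car> graph; // starts out empty

    public CarQueryBuilder(EntityManager em, long id) {
        this.em = em;
        this.id = id;
        this.graph = em.createEntityGraph(Car.class);
    }

    public CarQueryBuilder withName()  { graph.addAttributeNodes("name");  return this; }
    public CarQueryBuilder withMake()  { graph.addAttributeNodes("make");  return this; }
    public CarQueryBuilder withModel() { graph.addAttributeNodes("model"); return this; }

    public Car build() {
        Map<String, Object> hints = new HashMap<>();
        hints.put("javax.persistence.fetchgraph", graph); // fetch only what was added
        return em.find(Car.class, id, hints);
    }
}
A call like new CarQueryBuilder(em, 1L).withMake().withModel().build() then matches the API you sketched. Be aware that some providers (e.g. Hibernate) still load basic attributes eagerly regardless of the fetch graph unless lazy basic fetching (bytecode enhancement) is enabled.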
I am working on a log analyzer system, which reads Tomcat's logs and displays them in a chart/table on a web page.
(I know there are existing log analyzer systems; I am reinventing the wheel. But this is my job, my boss wants it.)
Our Tomcat logs are saved by day. For example:
2011-01-01.txt
2011-01-02.txt
......
The following is how I export the logs to the DB and read them:
1 The DB structure
I have three tables:
1) log_current: saves the logs generated today.
2) log_past: saves the logs generated before today.
The above two tables have the SAME schema.
+-------+-----------+----------+----------+--------+-----+----------+----------+--------+---------------------+---------+----------+-------+
| Id | hostip | username | datasend | method | uri | queryStr | protocol | status | time | browser | platform | refer |
+-------+-----------+----------+----------+--------+-----+----------+----------+--------+---------------------+---------+----------+-------+
| 44359 | 127.0.0.1 | - | 0 | GET | / | | HTTP/1.1 | 404 | 2011-02-17 08:08:25 | Unknown | Unknown | - |
+-------+-----------+----------+----------+--------+-----+----------+----------+--------+---------------------+---------+----------+-------+
3) log_record: saves bookkeeping information for log_past; it records the days whose logs have been exported to the log_past table.
+-----+------------+
| Id | savedDate |
+-----+------------+
| 127 | 2011-02-15 |
| 128 | 2011-02-14 |
..................
+-----+------------+
The table shows that the logs of 2011-02-15 have been exported.
2 Export (to DB)
I have two scheduled jobs.
1) The daily job.
At 00:05:00, check the Tomcat log directory (/tomcat/logs) to find the log files of the latest 30 days (this of course includes yesterday's logs).
Check the log_record table to see whether the logs of a given day have been exported; for example, if 2011-02-16 is not found in log_record, I read 2011-02-16.txt and export it to log_past.
After exporting yesterday's logs, I start the file monitor for today's log (2011-02-17.txt), whether it exists yet or not.
2) The file monitor.
Once the monitor is started, it reads the file hour by hour. Each log it reads is saved in the log_current table.
3 Tomcat server restart
Sometimes we have to restart Tomcat, so once Tomcat is started, I delete all the logs in log_current and then run the daily job.
4 My problem
1) Two tables (log_current and log_past).
If I saved today's logs straight to log_past, I could not make sure all the log files (xxxx-xx-xx.txt) had been exported to the DB, since the check at 00:05:00 every day only makes sure that logs from before today have been exported.
But this split makes it difficult to query logs across yesterday and today.
For example, to query from 2011-02-14 00:00:00 to 2011-02-15 00:00:00, those logs must be in log_past.
But what about from 2011-02-14 00:00:00 to 2011-02-17 08:00:00 (supposing it is 2011-02-17 09:00:00 now)?
It is complex to query across tables.
Also, I keep thinking my design for the tables and the working manner (scheduled export/read jobs) is not perfect, so can anyone give a good suggestion?
I just need to export and read logs and do almost real-time analysis, where real-time means I have to make the logs of the current day visible in the chart/table etc.
First of all, IMO you don't need 2 different tables, log_current and log_past. You can insert all the rows into the same table, say logs, and retrieve a day's logs using
select * from logs where date(time) = 'YOUR_DATE'
This will give you all the logs of that particular day.
Now, once you remove the current/past distinction between the tables this way, I think the problem you are asking about is solved: a query across days simply becomes a range condition on the time column. :)
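A small JDBC sketch of what a cross-day read then looks like (the logs table is the merged table suggested above; the DataSource wiring is assumed):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import javax.sql.DataSource;

public class LogDao {

    private final DataSource dataSource;

    public LogDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // One range scan covers e.g. 2011-02-14 00:00:00 to 2011-02-17 08:00:00,
    // regardless of whether the rows were written today or on a past day.
    public void printLogs(Timestamp from, Timestamp to) throws Exception {
        String sql = "SELECT hostip, uri, status, time FROM logs WHERE time >= ? AND time < ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setTimestamp(1, from);
            ps.setTimestamp(2, to);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getTimestamp("time") + " " + rs.getString("uri"));
                }
            }
        }
    }
}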