I'm developing a chat application on Android using Firebase and my own database.
This is my database structure:
id | text | user | time       | seen | conversation_id
---+------+------+------------+------+----------------
1  | Hi!  | 8    | 1567443254 | 0    | 1
In the seen column, 0 means delivered and 1 means the message has been seen.
So for indicating message status, I fetch the whole data set every second and set a new adapter on the existing RecyclerView.
It works and updates the message status every second, but there is a problem: when the new data loads, the RecyclerView's scroll position changes. I also tried .notifyDataSetChanged(), and that changes the scroll position too.
I think this is technically not the right way. Is there a better way to do this?
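One way to avoid the scroll jump (a minimal sketch, assuming a Message model with getId()/getSeen() and a setMessages() method on the adapter; all of these names are illustrative, not from the original code) is to keep the existing adapter and dispatch only the differences between the old and new lists with DiffUtil:

import androidx.recyclerview.widget.DiffUtil;
import java.util.List;

void applyUpdate(final List<Message> oldMessages, final List<Message> newMessages,
                 final MessageAdapter adapter) {
    DiffUtil.DiffResult diff = DiffUtil.calculateDiff(new DiffUtil.Callback() {
        @Override public int getOldListSize() { return oldMessages.size(); }
        @Override public int getNewListSize() { return newMessages.size(); }

        @Override public boolean areItemsTheSame(int oldPos, int newPos) {
            // Rows represent the same message if the ids match.
            return oldMessages.get(oldPos).getId() == newMessages.get(newPos).getId();
        }

        @Override public boolean areContentsTheSame(int oldPos, int newPos) {
            // Re-bind only rows whose seen status changed.
            return oldMessages.get(oldPos).getSeen() == newMessages.get(newPos).getSeen();
        }
    });
    adapter.setMessages(newMessages); // hypothetical setter that swaps the adapter's list
    diff.dispatchUpdatesTo(adapter);  // applies fine-grained updates; scroll position stays put
}

This way the polling can stay in place while only the rows whose status actually changed are re-bound.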
I have a database with a table called Car. The car table looks like this:
+----+------+-------------+-----------+----------+------+
| Id | Name | Description | Make      | Model    | Year |
+----+------+-------------+-----------+----------+------+
| 1  | A    | something1  | Ford      | Explorer | 2010 |
| 2  | B    | something2  | Nissan    | Altima   | 2005 |
| 3  | C    | something3  | Chevrolet | Malibu   | 2012 |
+----+------+-------------+-----------+----------+------+
Different pages on my website want to display different information. Some pages only want to display the name, others want to display the make and model, etc.
I have an API that the web pages call to retrieve this information. The API uses JPA and QueryDSL to communicate with the database and fetch data, and I want to fetch only the information needed for a particular page. I'm thinking about implementing some sort of builder pattern in my repository layer to let me retrieve only what I want, but I'm not quite sure how to go about it.
For example, my home page only wants to display the name of the car. So it'll call the HomeController, the controller will call the HomeService, and that will call the repository layer, something like this:
carRepository.getCarById(1).withName().build();
Some other page that wants to display the make and model would make a repo call like this:
carRepository.getCarById(1).withMake().withModel().build();
What is the best way to implement something like this in Java/JPA?
If I understand the question correctly, you want queries for different projections of your entities to be built dynamically.
In that case, dynamic entity graphs are what you want (see e.g. here: https://www.thoughts-on-java.org/jpa-21-entity-graph-part-2-define/). You start with an empty entity graph, and each call to one of your with() methods simply adds a field to the graph.
The base query remains unchanged; you just need to set the fetch graph hint (javax.persistence.fetchgraph) when build() is called. (Note that the samples in the link above use load graphs instead of fetch graphs; the subtle difference between the two is described here: What is the difference between FETCH and LOAD for Entity graph of JPA?)
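A minimal sketch of what such a builder could look like, assuming a Car entity and direct access to an EntityManager (CarQueryBuilder and its methods are illustrative names, not an established API):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityGraph;
import javax.persistence.EntityManager;

public class CarQueryBuilder {
    private final EntityManager em;
    private final long id;
    private final EntityGraph<Car> graph; // starts empty; each with() call adds a field

    public CarQueryBuilder(EntityManager em, long id) {
        this.em = em;
        this.id = id;
        this.graph = em.createEntityGraph(Car.class);
    }

    public CarQueryBuilder withName()  { graph.addAttributeNodes("name");  return this; }
    public CarQueryBuilder withMake()  { graph.addAttributeNodes("make");  return this; }
    public CarQueryBuilder withModel() { graph.addAttributeNodes("model"); return this; }

    public Car build() {
        Map<String, Object> hints = new HashMap<>();
        hints.put("javax.persistence.fetchgraph", graph); // fetch only the graph's fields
        return em.find(Car.class, id, hints);
    }
}

A call like new CarQueryBuilder(em, 1L).withMake().withModel().build() then mirrors the API sketched in the question. One caveat: for basic (non-association) attributes the fetch graph is only a hint; Hibernate, for example, will still load basic fields eagerly unless bytecode enhancement is enabled.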
I want to verify that all the needed elements exist on the page.
I can list them in the Examples section of a Scenario Outline. For example:
Scenario Outline: I am able to see all elements on My Page
When I am on my page
Then I should see the following <element> in My Menu
Examples:
| element |
| MENU button |
| MY logo |
| MY_1 link |
| MY_2 link |
| Button_1 button |
| Button_2 button |
| Login button |
Each row runs a separate method to verify an element's presence on the page. The problem is that the page is reloaded for every row.
How can the problem be solved in a more appropriate way?
You don't need a scenario outline. You just need a step that verifies all the elements in the table.
Scenario: I am able to see all elements on My Page
When I am on my page
Then I should see the following elements in My Menu
| MENU button |
| MY logo |
| MY_1 link |
| MY_2 link |
| Button_1 button |
| Button_2 button |
| Login button |
Where you can use the table as an array of arrays:
Then(/^I should see the following elements in My Menu$/) do |table|
  # table.raw is an array of one-element rows; flatten it to plain strings
  table.raw.flatten.each do |menu_item|
    expect(@my_page_object.menu(menu_item)).to be true
  end
end

When(/^I am on my page$/) do
  # note the instance variable (@), so later steps can reuse the page object
  @my_page_object = MyPageObject.new(browser)
end
First of all, using a scenario outline will result in one scenario for each element you want to test. This has huge run-time costs and is not the way to go.
Secondly, putting all this information in the scenario is also very expensive and unproductive. Gherkin scenarios are supposed to talk at the business level, not the developer level, so I would rewrite this as
Scenario: I am able to see all elements on Foo page
When I am on foo page
Then I should see all the foo elements
and implement it with something like
Then "I should see all the foo elements" do
expect(should_see_all_foo_elements).to be true
end
and now you can make a helper module to make this work
module FooPageStepHelper
  # Returns a truthy value only if every expected element is on the page
  def should_see_all_foo_elements
    find('h1', text: /foo/) &&
    ...
  end
end

World(FooPageStepHelper)
Now, when the foo page gets a new element, you only have to change one line in one file. Notice how the business need (that all the elements should appear on the page) doesn't change when you add or remove elements.
(N.B. you could improve the helper function in a number of ways to get better info when something goes wrong, and even output a listing of the elements that are present.)
I have two machines, each of which will run an application server.
Machine X serves dynamic content; machine Y serves static content.
Thus, the user is always connected to "x.com".
When the user uploads an image, I need to send it to "y.com".
How can I pass the image bytes (at upload time) from x.com so they are saved on y.com?
Here is what I started doing:
http://forum.primefaces.org/viewtopic.php?f=3&t=30239&p=96776#p96776
BalusC answered very well here:
Simplest way to serve static data from outside the application server in a Java web application
But my case is slightly different.
I appreciate any help!
Thank you!
I think the simplest way is to create a database table on X.com to track all the images your users store on Y.com, for example:
+----------+-------------------------+
| user_id | image_path |
+----------+-------------------------+
| 0 | /images/image_xxxxx.jpg |
| 0 | /images/image_xxxxx.jpg |
| 2 | /images/image_xxxxx.jpg |
| 2 | /images/image_xxxxx.jpg |
| 3 | /images/image_xxxxx.jpg |
+----------+-------------------------+
and then, on X.com, serve all your images by pointing the browser at Y.com:
X.com:
<img src="Y.com/images/image.xxxxx.jpg" />
Use a shared network disk, such as Samba or NFS.
Optionally, you can consider setting up rsync if you have Linux/Unix hosts.
I have a CSV datasource something like this:
User,Site,Requests
user01,www.facebook.com,54220
user01,plusone.google.com,2015
user01,www.twitter.com,33564
user01,www.linkedin.com,54220
user01,weibo.com,2015
user02,www.twitter.com,33564
user03,www.facebook.com,54220
user03,plusone.google.com,2015
user03,www.twitter.com,33564
In the report I want to display at most the first 3 rows for each user, while the remaining rows contribute only to the group total. How do I limit the report to printing only 3 rows per group?
e.g.
User     Site                 Requests
user01 | www.facebook.com   | 54220
       | plusone.google.com | 2015
       | www.twitter.com    | 33564
       |                    | 146034
user02 | www.twitter.com    | 33564
       |                    | 33564
user03 | www.facebook.com   | 54220
       | plusone.google.com | 2015
       | www.twitter.com    | 33564
       |                    | 89799
It is really just the line limiting I am struggling with; the rest is working just fine.
I found a way to do it; if anyone can come up with a more elegant answer I would be happy to see it, as this feels a bit hacky!
For each item in the detail band:
<reportElement... isRemoveLineWhenBlank="true">
    <printWhenExpression><![CDATA[$V{userGroup_COUNT} < 4]]></printWhenExpression>
</reportElement>
where userGroup is the group I am grouping by. I only seemed to need the isRemoveLineWhenBlank attribute on the first element.
You may consider using a subreport: query the grouping fields in the main report and pass them as parameters into the subreport. The merit of this method is that it stops the report engine from actually looping through all the un-required rows (even though they are not shown) and spending unnecessary server-to-server or server-to-client bandwidth, especially when the returned dataset is large.
I am working on a log analyzer system, which reads Tomcat logs and displays them in charts/tables on a web page.
(I know there are some existing log analyzer systems; I am reinventing the wheel. But this is my job; my boss wants it.)
Our Tomcat logs are saved by day. For example:
2011-01-01.txt
2011-01-02.txt
......
The following is how I export the logs to the DB and read them:
1 The DB structure
I have three tables:
1) log_current: stores the logs generated today.
2) log_past: stores the logs generated before today.
The above two tables share the SAME schema:
+-------+-----------+----------+----------+--------+-----+----------+----------+--------+---------------------+---------+----------+-------+
| Id | hostip | username | datasend | method | uri | queryStr | protocol | status | time | browser | platform | refer |
+-------+-----------+----------+----------+--------+-----+----------+----------+--------+---------------------+---------+----------+-------+
| 44359 | 127.0.0.1 | - | 0 | GET | / | | HTTP/1.1 | 404 | 2011-02-17 08:08:25 | Unknown | Unknown | - |
+-------+-----------+----------+----------+--------+-----+----------+----------+--------+---------------------+---------+----------+-------+
3) log_record: records which days' logs have been exported to the log_past table.
+-----+------------+
| Id | savedDate |
+-----+------------+
| 127 | 2011-02-15 |
| 128 | 2011-02-14 |
..................
+-----+------------+
The table shows that the logs of 2011-02-15 have been exported.
2 Export (to DB)
I have two scheduled jobs.
1) The daily job.
At 00:05:00, it checks the Tomcat log directory (/tomcat/logs) to find the log files of the latest 30 days (which of course include yesterday's log).
It checks the log_record table to see whether each day's logs have already been exported; for example, if 2011-02-16 is not found in log_record, I read 2011-02-16.txt and export its entries to log_past. (A sketch of this job appears after the list.)
After exporting yesterday's log, I start the file monitor for today's log (2011-02-17.txt), whether or not it exists yet.
2) The file monitor.
Once the monitor is started, it reads the file hour by hour. Each log entry it reads is saved in the log_current table.
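A minimal sketch of the daily job described above (class and method names are illustrative; log-line parsing is left abstract because it depends on your access-log format):

import java.io.File;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.time.LocalDate;

public abstract class DailyExportJob {

    // Runs at 00:05:00: export any of the last 30 days not yet in log_record.
    public void run(Connection conn) throws Exception {
        LocalDate today = LocalDate.now();
        for (int i = 1; i <= 30; i++) {
            LocalDate day = today.minusDays(i);
            if (alreadyExported(conn, day)) {
                continue; // this day is already in log_past
            }
            File logFile = new File("/tomcat/logs/" + day + ".txt"); // e.g. 2011-02-16.txt
            if (logFile.exists()) {
                exportFile(conn, logFile); // parse lines, INSERT into log_past
                recordExport(conn, day);   // remember the day in log_record
            }
        }
    }

    private boolean alreadyExported(Connection conn, LocalDate day) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT 1 FROM log_record WHERE savedDate = ?")) {
            ps.setDate(1, java.sql.Date.valueOf(day));
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    private void recordExport(Connection conn, LocalDate day) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO log_record (savedDate) VALUES (?)")) {
            ps.setDate(1, java.sql.Date.valueOf(day));
            ps.executeUpdate();
        }
    }

    // Parsing depends on the Tomcat access-log pattern, so it is elided here.
    protected abstract void exportFile(Connection conn, File logFile) throws Exception;
}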
3 Tomcat server restarts
Sometimes we have to restart Tomcat. Once Tomcat starts, I delete all rows from log_current and then run the daily job.
4 My problem
1) Two tables (log_current and log_past).
If I saved today's logs to log_past, I could not be sure that every log file (xxxx-xx-xx.txt) had been exported to the DB; only the check at 00:05:00 every day guarantees that logs from before today have been exported.
But this split makes it difficult to query logs across yesterday and today.
For example, a query from 2011-02-14 00:00:00 to 2011-02-15 00:00:00 only needs log_past.
But what about a query from 2011-02-14 00:00:00 to 2011-02-17 08:00:00 (supposing it is 2011-02-17 09:00:00 now)?
It is complex to query across tables.
Also, I suspect my design for the tables and the workflow (the scheduled export/read jobs) is not ideal, so can anyone give a good suggestion?
I just need to export and read logs and do almost-real-time analysis, where real-time means the logs of the current day must be visible in charts/tables and so on.
First of all, IMO you don't need two different tables, log_current and log_past. You can insert all the rows into the same table, say logs, and retrieve one day with something like
select * from logs where date(time) = 'YOUR_DATE'
This will give you all the logs of that particular day.
Now, once you remove the current/past distinction between the tables this way, I think the problem you are asking about here would be solved. :)
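With a single logs table, the cross-day query from the question also collapses into one range scan. A minimal JDBC sketch, assuming the logs table keeps the schema shown in the question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class LogQuery {

    // Prints all log rows in [from, to); one query regardless of how many days it spans.
    public static void printLogs(Connection conn, Timestamp from, Timestamp to)
            throws SQLException {
        String sql = "SELECT hostip, uri, status, time FROM logs "
                   + "WHERE time >= ? AND time < ? ORDER BY time";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, from);
            ps.setTimestamp(2, to);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s %s %d %s%n",
                            rs.getString("hostip"), rs.getString("uri"),
                            rs.getInt("status"), rs.getTimestamp("time"));
                }
            }
        }
    }
}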