Is there a way to convert AbstractBackEndDataProvider to ListDataProvider? - java

I have written a custom CallbackDataProvider based on this example: https://mindbug.in/vaadin/vaadin-dataprovider-example/. It is used with a multi-select combo box (the addon https://github.com/bonprix/vaadin-combobox-multiselect from Vaadin's addon directory) to provide lazy loading of items.
The addon's clear() and selectAll() methods expect a ListDataProvider. Since I have set the component's data provider to the custom data provider above, a ClassCastException is thrown whenever clear or selectAll is triggered, because the addon expects a ListDataProvider.
The straightforward workaround is to disable the clear and selectAll buttons by setting their boolean flags to false, but from the user's point of view that is not flexible.
I also tried converting the stream into a collection (List), but that did not work; the same error is still thrown.
This is the custom CallbackDataProvider, extending AbstractBackEndDataProvider:
public class ItemDataProvider extends AbstractBackEndDataProvider<SkusSelectBox, String> {

    private final ReceiptService receiptService;
    private Stream<SkusSelectBox> stream;

    public ItemDataProvider(ReceiptService receiptService) {
        if (receiptService != null) {
            this.receiptService = receiptService;
        } else {
            this.receiptService = new ReceiptService();
        }
    }

    @Override
    protected Stream<SkusSelectBox> fetchFromBackEnd(Query<SkusSelectBox, String> query) {
        stream = receiptService.fetchSkus(query.getFilter().orElse(null), query.getLimit(),
                query.getOffset(), query.getSortOrders()).stream();
        return stream;
    }

    @Override
    protected int sizeInBackEnd(Query<SkusSelectBox, String> query) {
        return receiptService.countSkus(query.getFilter().orElse(null));
    }

    @Override
    public Object getId(SkusSelectBox item) {
        return item.getItemId();
    }

    public Stream<SkusSelectBox> getStream() {
        return stream;
    }
}
SkusSelectBox is a simple object with two String attributes holding the id and the name.
For this component, I have set up the following in the view:
ItemDataProvider itemDataProvider = new ItemDataProvider(receiptService);
ComboBoxMultiselect<SkusSelectBox> skuSelect = new ComboBoxMultiselect<>("Items");
skuSelect.setPlaceholder("Choose Items");
skuSBox.add(new SkusSelectBox("0", "No data found"));
skuSelect.setWidth(80, Unit.PERCENTAGE);
skuSelect.setRequiredIndicatorVisible(true);
skuSelect.setItemCaptionGenerator(SkusSelectBox::getItemName);
skuSelect.setSelectAllButtonCaption("Select All");
skuSelect.setClearButtonCaption("Clear");
skuSelect.showSelectAllButton(true);
skuSelect.showClearButton(true);
skuSelect.setDataProvider(itemDataProvider);
skuSelect.getDataProvider().refreshAll();
skuSelect.isReadOnly();
skuSelect.setPageLength(20);
if (skuSBox.size() <= 1) {
    skuSelect.showSelectAllButton(false);
    //skuSelect.showClearButton(false);
}
skuSelect.setResponsive(true);
The addon's selectAll and clear methods are very similar except for the very end:
@Override
public void selectAll(final String filter) {
    final ListDataProvider<T> listDataProvider = ((ListDataProvider) getDataProvider());
    final Set<String> addedItems = listDataProvider.getItems()
            .stream()
            .filter(t -> {
                final String caption = getItemCaptionGenerator().apply(t);
                if (t == null) {
                    return false;
                }
                return caption.toLowerCase()
                        .contains(filter.toLowerCase());
            })
            .map(t -> itemToKey(t))
            .collect(Collectors.toSet());
    updateSelection(addedItems, new HashSet<>(), true);
    // the clear method instead ends with:
    // updateSelection(new HashSet<>(), removedItems, true);
}
The ClassCastException shows up in this error message, referring to either clear or selectAll, whichever method was invoked:
java.lang.ClassCastException: com.igi.sycarda.dashboard.hib.utils.ItemDataProvider cannot be cast to com.vaadin.data.provider.ListDataProvider
at org.vaadin.addons.ComboBoxMultiselect$1.clear(ComboBoxMultiselect.java:224)
I want selectAll and clear, when invoked, to work as they would if I were not using a CallbackDataProvider.
Until the next patch release of the addon, I need a workaround for this problem. How can I convert a custom data provider to a ListDataProvider, either in a quick and dirty way or a cleaner way if required?
UPDATE: Normally I would fetch directly from the service class, but when tested with a tenant that has about 20K item records, the page and this specific component are quite slow to load. The CallbackDataProvider was introduced to verify that lazy loading works for that many records.

The idea with a list data provider is that all items are loaded into memory. It is possible to load all items from a database into memory and then use that to create a list data provider. This does, on the other hand, defeat the purpose of having a callback data provider.
It's probably more straightforward for you to fetch the items into a list directly from your receiptService rather than going through the existing data provider.
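If you go that route, a minimal sketch could look like the following, reusing the fetchSkus signature from the question; the "fetch everything" arguments and the Collection type are assumptions, not from the original code:

// Load all items into memory and wrap them in a ListDataProvider.
// Assumes receiptService.fetchSkus(filter, limit, offset, sortOrders) as in the question;
// Integer.MAX_VALUE / 0 are placeholders meaning "fetch every row".
Collection<SkusSelectBox> allItems =
        receiptService.fetchSkus(null, Integer.MAX_VALUE, 0, Collections.emptyList());
ListDataProvider<SkusSelectBox> listProvider = DataProvider.ofCollection(allItems);
skuSelect.setDataProvider(listProvider);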

Since there are restrictions that make the approach I was taking fail, someone suggested creating a database view derived from the required tables/columns and using that instead of the normal tables.
After creating the view, I reverted to the usual implementation and removed the lines below:
skuSelect.setDataProvider(itemDataProvider);
skuSelect.getDataProvider().refreshAll();
skuSelect.isReadOnly();
skuSelect.setPageLength(20);
if (skuSBox.size() <= 1) {
    skuSelect.showSelectAllButton(false);
    //skuSelect.showClearButton(false);
}
At the time of writing, we tested this an hour ago and it solves the problem without sacrificing performance or creating an additional component. In terms of timing, a 20K result set from the view loads in less than 10 seconds vs. 7-9 minutes previously.

Related

What is the clean way to check a SELECT query result's length in MVVM architecture?

I am creating an app that will use a SearchView to let the user make queries to filter data. I am using Room, and I am trying to follow the Model-View-ViewModel architecture as recommended in the Android Developers' guidelines.
I have one entity and one DAO (for now my DB has only one table). I have a method in the DAO that looks like this:
#Query("SELECT * FROM table WHERE column1 = :search OR column2 = :search")
LiveData<List<TableRow>> filteredSearch(String search);
So, from an AsyncTask I can use the RoomDatabase instance and obtain the results of a user's query, like this, right?
// Let's assume search already contains user's input
String search;
// DatabaseClient is a singleton that holds MyRoomDatabase instance
// I am using it as Repository for now... bad call?
LiveData<List<TableRow>> user_query = DatabaseClient
        .getInstance(getApplicationContext())
        // my DatabaseClient has this method that
        // I made to call DAO's query method
        .getFilteredList(search);
So, I want to load one fragment or another in my MainActivity depending on the length of this user query's result (if 0 results, fragmentA, else fragmentB). Is this business logic to some extent? I wonder... should I read the query result's length from the ViewModel, or from the View (AKA the Activity)? As you can see, I am still struggling with Room and ViewModels.
My plan was to make a method in the ViewModel that returns the LiveData<List<TableRow>> with the query results, using a code snippet similar to the one above, and then, from the MainActivity:
search_view.setOnQueryTextListener(new SearchView.OnQueryTextListener() {
    @Override
    public boolean onQueryTextSubmit(String query) {
        // Should I delegate the AsyncTask to the Repository AKA
        // DatabaseClient? Maybe... but please bear with me
        class UserSearchTask extends AsyncTask<Void, Void, List<TableRows>> {
            @Override
            protected List<TableRows> doInBackground(Void... voids) {
                TableRowsViewModel my_viewmodel = new TableRowsViewModel(getApplication());
                LiveData<List<TableRows>> search_results;
                search_results = my_viewmodel.getUserSearch(query);
                // TODO I will care about type mismatches later
                return search_results;
            }

            @Override
            protected void onPostExecute(List<TableRows> found_elements) {
                super.onPostExecute(found_elements);
                // TODO So, I want to check user's search results from here
                if (found_elements.size() > 0) {
                    showFragmentB();
                } else {
                    showFragmentA();
                }
            }
        }
        new UserSearchTask().execute();
        return true;
    }
    // [...]
});
I have been without coding for more than half a year, and I am a little rusty; I have forgotten concepts I used to know about ViewModels, LiveData and related topics. Am I breaking the MVVM architecture with my approach? What is the role of LiveData in this kind of logic? Can I read the SQL query result's length directly from some LiveData method, or should I retrieve the list from it to do so?
I guess the actual question is: is my approach wrong? What would be the cleanest way to implement my fragment's logic depending on user search's length?
EDIT: I am not asking only about good practices (which are still welcome); I have barely dedicated 7 hours to this app and I still couldn't build a first alpha version to start testing it. My short-term priority is putting this thing together (and myself together, I might add); in other words, first I want to make it work, even if it is not clean. Right now I am thinking about this as if it were a normal PC app where I don't have to deal with app lifecycles, multithreading and related concerns, and where I can just retrieve the SQL query's result right away. I beg your pardon for my ignorance.
To add more context about what I am trying to do: if the search results are zero, fragmentA would be a form for adding a new row to the table; fragmentB would show just the data of the first row, not a list yet (I will get there eventually, but not yet).
Here is the way I ended up implementing the logic I had in mind when I asked the question. That doesn't mean this is the clean way to do it.
To get the size of the user query, I ended up using an Observer (as Teo said in his comment). I am not sure whether using an Observer and LiveData is overkill for a database that is merely local to the phone (and therefore only modified by the app's user) just to obtain query results each time the user hits the "Search" button. As for the aberration of using DatabaseClient (the RoomDatabase singleton) as a Repository: not anymore... I have created a dedicated Repository class to handle the DatabaseClient and the DAOs.
That said, the relevant part of my Repository class:
public class Repository {

    private final TableRowDao tablerow_dao;

    public Repository(Application application) {
        AppDatabase app_db = DatabaseClient.getInstance(application).getAppDatabase();
        tablerow_dao = app_db.tableRowDao();
    }

    public LiveData<List<TableRow>> getFilteredList(String search) {
        return tablerow_dao.filteredSearch(search);
    }

    // [...]
}
...here, the ViewModel:
public class TableRowsViewModel extends AndroidViewModel {

    private Repository repository;

    public TableRowsViewModel(@NonNull Application application) {
        super(application);
        repository = new Repository(application);
    }

    public LiveData<List<TableRow>> getUserSearch(String search) {
        return repository.getFilteredList(search);
    }
}
No AsyncTasks were used for this purpose.
In MainActivity, within OnCreate method:
search_view.setOnQueryTextListener(new SearchView.OnQueryTextListener() {
    @Override
    public boolean onQueryTextSubmit(String query) {
        my_viewmodel = new TableRowsViewModel(getApplication());
        search_results = my_viewmodel.getUserSearch(query);
        observeSearchResults(search_results);
        return true;
    }
    // [...]
});
observeSearchResults is a private method that I declared in MainActivity as well:
// Having an observer is good and stuff, but am I overdoing it?
private void observeSearchResults(LiveData<List<TableRow>> search_results) {
    search_results.observe(this, new Observer<List<TableRow>>() {
        @Override
        public void onChanged(List<TableRow> rows) {
            if (rows.size() > 0) {
                // TODO I don't list the results yet, I show the first one right away
                profileFragment = ProfileFragment.newInstance(rows.get(0));
                transaction = getSupportFragmentManager().beginTransaction();
                transaction.replace(R.id.fragmentContainer, profileFragment);
                transaction.addToBackStack(null);
                transaction.commit();
            } else {
                transaction = getSupportFragmentManager().beginTransaction();
                transaction.replace(R.id.fragmentContainer, insertFragment);
                transaction.addToBackStack(null);
                transaction.commit();
            }
        }
    });
}
This worked for me, but that doesn't mean that I am doing this in a clean way at all.

Vaadin grid - filtering with lazy loading

I have a Vaadin grid, and it's great that it has lazy data loading out of the box. But for some reasons I have custom filters, which I use via
CallbackDataProvider<> dataProvider.fetch(Query query)
The Query object has parameters for loading in portions (offset and limit), so I need to set them dynamically (?) and somehow listen for the grid's scrolling event to load the next portion of data when the user scrolls down (?).
Grid.dataCommunicator has a field Range pushRows, but there are no public methods to get it. So all I have is either a grid with lazy loading but unfiltered data, or a grid with eager loading and filtered data.
So, is there any way to implement filtering data with lazy loading in vaadin grid element?
OK, problem solved by using ConfigurableFilterDataProvider<> as a wrapper over CallbackDataProvider<>.
When I filter the table, this wrapper adds the filtering conditions to all queries, and data loads lazily as usual.
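A minimal sketch of that wrapping; personService, grid and filterField are placeholder names, not from the original answer:

CallbackDataProvider<Person, String> callbackProvider = DataProvider.fromFilteringCallbacks(
        query -> personService.fetch(query.getFilter().orElse(null),
                query.getOffset(), query.getLimit()).stream(),
        query -> personService.count(query.getFilter().orElse(null)));

// Wrap the lazy provider so a filter can be injected into every query.
ConfigurableFilterDataProvider<Person, Void, String> filterable =
        callbackProvider.withConfigurableFilter();
grid.setDataProvider(filterable);

// Whenever the filter field changes, push the value into the wrapper;
// each subsequent lazy-loading query then carries the filter.
filterField.addValueChangeListener(e -> filterable.setFilter(e.getValue()));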
I arrived here using Vaadin 22. The answer probably isn't in the same context as the question, but given that I arrived here I suspect others will too.
To create a grid that uses lazy loading and is able to inject a filter into the query, use:
class SearchableGrid<E> {

    Grid<E> entityGrid = new Grid<>();

    private SearchableGrid(DaoDataProvider daoProvider)
    {
        var view = entityGrid.setItems(query ->
        {
            // add the filter to the query
            var q = new Query<E, String>(query.getOffset(), query.getLimit(), query.getSortOrders(), null,
                    getSearchField().getValue());
            return daoProvider.fetchFromBackEnd(q);
        });
        view.setItemCountCallback(query ->
        {
            // add the filter to the query
            var q = new Query<E, String>(query.getOffset(), query.getLimit(), query.getSortOrders(), null,
                    getSearchField().getValue());
            return daoProvider.sizeInBackEnd(q);
        });
    }
}
I've packaged the methods into a BackEndDataProvider, as the same class can also be used as a provider for combo boxes.
public class DaoDataProvider<E extends CrudEntity>
        extends AbstractBackEndDataProvider<E, String>
{
    JpaBaseDao<E> dao;
    GetFilterBuilder<E> getFilterBuilder;

    public DaoDataProvider(JpaBaseDao<E> daoProvider, GetFilterBuilder<E> getFilterBuilder)
    {
        this.dao = daoProvider;
        this.getFilterBuilder = getFilterBuilder;
    }

    @Override
    public int sizeInBackEnd(Query<E, String> query)
    {
        var q = getFilterBuilder.builderFilter(query);
        return (int) q.count().intValue();
    }

    @Override
    public Stream<E> fetchFromBackEnd(Query<E, String> query)
    {
        var q = getFilterBuilder.builderFilter(query);
        q.startPosition(query.getOffset()).limit(query.getLimit());
        return q.getResultList().stream();
    }
}
The filterBuilder is where you construct your query for your back end data provider.

Realm - multiple operations on the same object. Does my implementation hit the performance?

I need to check some data, whether or not to send a tracking info. This data is saved inside the Realm database. Here is the model:
public class RealmTrackedState extends RealmObject {

    @PrimaryKey
    private int id = 1;

    private RealmList<RealmChat> realmChatsStarted;
    private boolean isSupportChatOpened;
    private boolean isSupportChatAnswered;

    /* getters and setters */
}
The idea is that every chat that is not inside realmChatsStarted should be tracked and then added to this list. A similar thing applies to the isSupportChatOpened boolean, although because of the business logic it is a special case.
So I've wrapped this inside one Realm object, and I've wrapped the checks into a few shouldTrack() methods, like this:
@Override
public void insertOrUpdateAsync(@NonNull final RealmModel object, @Nullable OnInsertListener listener) {
    Realm instance = getRealmInstance();
    instance.executeTransactionAsync(realm -> realm.insertOrUpdate(object),
            () -> notifyOnSuccessNclose(listener, instance),
            error -> notifyOnErrorNclose(listener, error, instance));
}

@Override
public RealmTrackedState getRealmTrackedState() {
    try (Realm instance = getRealmInstance()) {
        RealmResults<RealmTrackedState> trackedStates = instance.where(RealmTrackedState.class).findAll();
        if (!trackedStates.isEmpty()) {
            return instance.copyFromRealm(trackedStates.first());
        }
        RealmTrackedState trackedState = new RealmTrackedState();
        trackedState.setRealmChatsStarted(new RealmList<>());
        insertOrUpdateAsync(trackedState, null);
        return trackedState;
    }
}

@Override
public boolean shouldTrackChatStarted(@NonNull RealmChat chat) {
    if (getCurrentUser().isRecruiter()) {
        return false;
    }
    RealmList<RealmChat> channels = getRealmTrackedState().getRealmChatsStarted();
    for (RealmChat trackedChats : channels) {
        if (trackedChats.getId() == chat.getId()) {
            return false;
        }
    }
    getRealmInstance().executeTransaction(realm -> {
        RealmTrackedState realmTrackedState = getRealmTrackedState();
        realmTrackedState.addChatStartedChat(chat);
        realm.insertOrUpdate(realmTrackedState);
    });
    return true;
}
The same happens for every other field inside the RealmTrackedState model.
So, within the presenter class, where I'm firing the tracking, I have this:
private void trackState() {
    if (dataManager.shouldTrackChatStarted(chatCache)) {
        //track data
    }
    if (dataManager.shouldTrackSupportChatOpened(chatCache)) {
        //track data
    }
    if (dataManager.shouldTrackWhatever(chatCache)) {
        //track data
    }
    ...
}
And I wonder:
a. How much of a performance impact would this have?
I'm new to Realm, but to me opening and closing a DB looks... heavy.
I like that in this implementation each should(...) method is standalone. Even though I'm launching three of them in a row, in other cases I'd probably use only one.
However, would it be wiser to get this main object once and then operate on it? It sounds like it.
b. I see that I can operate with either synchronous or asynchronous transactions. I'm afraid that stacking a series of synchronous transactions may clog the CPU, and that using a series of asynchronous ones may cause unexpected behaviour.
c. @PrimaryKey - I used this as a result of a wild copy-paste session. Assuming that this class should have only one instance, is this the correct way to do it?
ad a.
Realm caches instances, so opening and closing them is not as expensive as it sounds. The first time an app opens a Realm file, a number of consistency checks are performed (primarily whether the model classes match the classes on disk), but the next time you open an instance this check is skipped.
ad b.
If your transactions depend on each other, you might have to be careful. On the other hand, why have multiple transactions? An async transaction will notify you when it has completed, which can help you get the behaviour you expect.
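For example, a minimal sketch of chaining work off the completion callback, mirroring the insertOrUpdateAsync pattern from the question (trackChatStarted and trackedState are placeholders):

Realm instance = getRealmInstance();
instance.executeTransactionAsync(
        realm -> realm.insertOrUpdate(trackedState),   // the write itself
        () -> {                                         // OnSuccess: the transaction has committed
            trackChatStarted(chat);                     // safe point for follow-up work
            instance.close();
        },
        error -> instance.close());                     // OnError: still close the instance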
ad c.
Primary keys are useful when you update objects (using insertOrUpdate()), as the value is used to decide whether you are creating/inserting or updating an object.
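As a small illustration, reusing RealmTrackedState from the question (its id defaults to 1; the setter name is assumed from the "getters and setters" placeholder):

realm.executeTransaction(r -> {
    RealmTrackedState state = new RealmTrackedState(); // id is already 1
    state.setSupportChatOpened(true);
    // Because a row with primary key 1 already exists, insertOrUpdate()
    // updates it instead of inserting a second instance.
    r.insertOrUpdate(state);
});

Note that insertOrUpdate() replaces all fields of the existing object, so in practice you would fetch and modify the managed instance (as the question's shouldTrackChatStarted does) rather than build a fresh one.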

How to set TTL for a specific Couchbase document using spring-data-couchbase?

How to set TTL (Time to Live) for a specific couchbase document using spring-data-couchbase?
I know there is a way to set the expiry time using the Document annotation, as follows:
@Document(expiry = 10)
http://docs.spring.io/spring-data/couchbase/docs/1.1.1.RELEASE/reference/html/couchbase.entity.html
It will set the TTL for all documents saved through that entity class.
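For reference, a minimal entity using that annotation looks like this (the class and field names are illustrative only):

@Document(expiry = 10) // every document saved through this entity expires after 10 seconds
public class UserSession {
    @Id
    private String id;
    @Field
    private String username;
}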
But it seems there is a way to set the expiration (TTL) time for a specific document:
"Get and touch: Fetch a specified document and update the document expiration."
mentioned in
http://docs.couchbase.com/developer/dev-guide-3.0/read-write.html
How can I achieve the above functionality through spring-data-couchbase? Even if I can only achieve it using the Java SDK, that would be fine.
Any help is appreciated.
Using Spring Data Couchbase, this is a simple way to configure the TTL per document:
public class CouchbaseConfig extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> bootstrapHosts() {
        return Arrays.asList("localhost");
    }

    @Override
    protected String getBucketName() {
        return "default";
    }

    @Override
    protected String getBucketPassword() {
        return "password1";
    }

    @Bean
    public MappingCouchbaseConverter mappingCouchbaseConverter() throws Exception {
        MappingCouchbaseConverter converter = new ExpiringDocumentCouchbaseConverter(couchbaseMappingContext());
        converter.setCustomConversions(customConversions());
        return converter;
    }

    class ExpiringDocumentCouchbaseConverter extends MappingCouchbaseConverter {

        /**
         * Create a new {@link MappingCouchbaseConverter}.
         *
         * @param mappingContext the mapping context to use.
         */
        public ExpiringDocumentCouchbaseConverter(MappingContext<? extends CouchbasePersistentEntity<?>, CouchbasePersistentProperty> mappingContext) {
            super(mappingContext);
        }

        // Setting custom TTL on documents.
        @Override
        public void write(final Object source, final CouchbaseDocument target) {
            super.write(source, target);
            if (source instanceof ClassContainingTTL) {
                target.setExpiration(((ClassContainingTTL) source).getTimeToLive());
            }
        }
    }
}
Using Spring-Data-Couchbase, you cannot set a TTL on a particular instance. Inserting (mutating) and setting the TTL in one go would be quite complicated given the transcoding steps that are hidden away in the CouchbaseTemplate save method.
However, if what you want to do is just update the TTL of an already persisted document (which is what getAndTouch does), there is a way that doesn't involve any transcoding and so can be applied easily:
From the CouchbaseTemplate, get access to the underlying SDK client via getCouchbaseClient() (note that for now spring-data-couchbase is built on top of the previous generation of the SDK, 1.4.x, but there'll be a preview of sdc-2.0 soon ;) )
Using the SDK, perform a touch operation on the document's ID, giving it the new TTL.
The touch() method returns an OperationFuture (it is async), so make sure to either block on it or consider the touch done only when notified so in the callback.
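A rough sketch of those steps against the 1.4.x-generation SDK might look like this (the document id and TTL value are placeholders):

CouchbaseClient client = couchbaseTemplate.getCouchbaseClient();
OperationFuture<Boolean> touch = client.touch("user::42", 600); // new TTL in seconds
// touch() is asynchronous: block on the future (or use a callback) before
// assuming the new expiration has been applied.
if (!touch.get()) {
    throw new IllegalStateException("touch failed for user::42");
}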
As of spring-data-couchbase 4.3.0, the code should look like yourRepository.getOperations().getCouchbaseClientFactory().getCollection(null).touch(id, ttl), or alternatively this can be done through CouchbaseTemplate as couchbaseTemplate.getCollection(null).touch(id, ttl).
findById() has a withExpiry() method that results in getAndTouch() being used and the expiration being set:
User foundUser = couchbaseTemplate.findById(User.class).withExpiry(Duration.ofSeconds(1)).one(id);

How to refresh an entity in a Future?

I am not really sure where my problem lies, as I am experimenting in two areas that I don't have much experience with: JPA and Futures (using Play! Framework's Jobs and Promises).
I have the following bit of code, which I want to return a Meeting object once one of that object's fields has been given a value by another thread handling another HTTP request. Here is what I have:
Promise<Meeting> meetingPromise = new Job<Meeting>() {
    @Override
    public Meeting doJobWithResult() throws Exception {
        Meeting meeting = Meeting.findById(id);
        while (meeting.bbbMeetingId == null) {
            Thread.sleep(1000);
            meeting = meeting.refresh();        // I tried each of these
            meeting = meeting.merge();          // lines but to no avail; I
            meeting = Meeting.findById(id);     // get the same result
        }
        return meeting;
    }
}.now();
Meeting meeting = await(meetingPromise);
As I note in the comments, there are three lines in there, any one of which I think should allow me to refresh the contents of my object from the database. From the debugger, it seems that the many-to-one relationships are refreshed by these calls, but the single values are not.
My Meeting object extends Play! Framework's Model, and for convenience, here is the refresh method:
/**
 * Refresh the entity state.
 */
public <T extends JPABase> T refresh() {
    em().refresh(this);
    return (T) this;
}
and the merge method:
/**
 * Merge this object to obtain a managed entity (useful when the object comes from the Cache).
 */
public <T extends JPABase> T merge() {
    return (T) em().merge(this);
}
So, how can I refresh my model from the database?
I ended up cross-posting this question on the play-framework group, and I got an answer there; for the discussion, check out that thread.
In the interest of having the answer come up in a web search for anyone who hits this problem in the future, here is what the code snippet I pasted earlier now looks like:
Promise<Meeting> meetingPromise = new Job<Meeting>() {
    @Override
    public Meeting doJobWithResult() throws Exception {
        Meeting meeting = Meeting.findById(id);
        while (meeting.bbbMeetingId == null) {
            Thread.sleep(1000);
            if (JPA.isInsideTransaction()) {
                JPAPlugin.closeTx(false);
            }
            JPAPlugin.startTx(true);
            meeting = Meeting.findById(id);
            JPAPlugin.closeTx(false);
        }
        return meeting;
    }
}.now();
Meeting meeting = await(meetingPromise);
I am not using the @NoTransaction annotation, because that messes up some other code that checks if the request is coming from a valid user.
I'm not sure about it but JPA transactions are managed automatically by Play in the request/controller context (the JPAPlugin opens a transaction before invocation and closes it after invocation).
But I'm not sure at all what happens within jobs, and I don't think transactions are auto-managed there (or it's a feature I don't know about). So, is your entity attached to an EntityManager or still transient? Is there a transaction somewhere? I don't really know, but if not, it may explain some weird behavior...
