Is it possible to request fields by specific pattern in solr? - java

I can use fl=fld1,fld2,fld3 to return specific fields from Solr. But sometimes I generate dynamic field names like ".*_attribute_group1" and want Solr to return the whole group.
Is it possible to extend Solr's 'fl' param with a regexp? Where should I look in the Solr codebase?

Solr doesn't support wildcard patterns in field names (the "fl" param). But you could write your own component to process the request and identify the matching fields present in the index.
Pseudo-code for extending a search component to implement the custom field list:
// PSEUDO CODE: expand a wildcard fl (e.g. fl=field_*) into concrete field names
import java.io.IOException;
import java.util.Collection;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

public class FLPatternCustomComponent extends SearchComponent {

    @Override
    // prepare() is guaranteed to run before any SearchComponent.process
    public void prepare(ResponseBuilder rb) throws IOException {
        SolrParams params = rb.req.getParams();
        // Input: fl=field_*
        String fl = params.get(CommonParams.FL);
        Collection<String> existingFields = rb.req.getSearcher().getFieldNames();
        // Turn the glob into a regex and keep only the index fields that match
        Pattern pattern = Pattern.compile(fl.replace("*", ".*"));
        String expandedFl = existingFields.stream()
                .filter(f -> pattern.matcher(f).matches())
                .collect(Collectors.joining(","));
        // e.g. expandedFl = "field_1,field_2,field_3,field_4"
        ModifiableSolrParams newParams = new ModifiableSolrParams(params);
        newParams.set(CommonParams.FL, expandedFl);
        rb.req.setParams(newParams);
    }

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        // Nothing extra to do here; QueryComponent picks up the rewritten fl
    }

    // getDescription()/getSource() omitted for brevity
}
You could chain this component into your existing request handler, or create your own request handler, and perhaps also add any additional invariants there.
You may want to weigh the performance overhead of a custom component and its processing. I have created a couple of custom components for custom ranking and custom request handlers and use them without much issue.
You might want to check Solr Plugin Development.

Related

LazyList is not defined in ZUL Page

I'm trying to pass data from a DAO with a flexible search query through to the zul page using the widgetModel. But when I print widgetModel.orders it says...
(index):93 Uncaught ReferenceError: LazyList is not defined
at window.onload ((index):93)
zul page
window.onload = function () {
    const myChart = new Chart(
        document.getElementById('myChart'),
        config
    );
    const test = [[${widgetModel.orders}]];
    console.log(test);
};
controller class
public class customGraphController extends DefaultWidgetController {
    private static final long serialVersionUID = 7954736389190109887L;

    @WireVariable
    private transient customGraphService customGraphService;

    @Override
    public void preInitialize(Component comp) {
        super.preInitialize(comp);
        WidgetModel model = getWidgetInstanceManager().getModel();
        model.put("orders", customGraphService.getAllOrders());
    }
}
service class
public class customGraphService {
    @Autowired
    private OrdersDataDao ordersDataDao;

    public List<OrderModel> getAllOrders() {
        return ordersDataDao.getAllOrders();
    }
}
dao class
public class OrdersDataDao {
    @Resource
    private FlexibleSearchService flexibleSearchService;

    public List<OrderModel> getAllOrders() {
        final String stringQuery = "select {o.pk} from {order as o}";
        final FlexibleSearchQuery query = new FlexibleSearchQuery(stringQuery);
        final SearchResult<OrderModel> result = flexibleSearchService.search(query);
        if (null != result.getResult()) {
            return result.getResult();
        } else {
            return null;
        }
    }
}
Does someone know a solution?
window.onload is always too early to manipulate ZK widgets. The onload callback triggers once the initial document has been loaded, but that document itself will load the ZK libraries, initialize the client engine, etc.
If you need to do something "after the libraries have been loaded", you can use zk.afterLoad. This hook is good if you need to modify a framework function before the widgets actually use it.
However, this is still before widget instantiation by the client engine, so if your goal is to access a widget, it is still too early.
If you want to do something to a widget after that widget has been added to the page and initialized, what you actually need is a client-side onBind listener.
You can set that listener in zul or in java, but the simplest way to do that is like this: https://zkfiddle.org/sample/1v9phuk/1-Another-new-ZK-fiddle
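In Java, one way to attach that client-side listener is ZK's setWidgetListener (a sketch; the chartComponent reference and the script body are assumptions, not code from the question):
// In the composer: run a script at the client once the widget is bound.
// 'this' inside the script is the ZK widget; $n() returns its DOM node.
chartComponent.setWidgetListener("onBind",
        "var chart = new Chart(this.$n(), config);");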
Lastly, if you are going to use ZK's client-side API to access or modify content at the client side, I'd recommend looking into the ZK client-side selectors. They are way easier than manually looking up elements by ID, and way more robust in the long run.
document.getElementById('myChart') will only work if you have a DOM element with the actual ID "myChart", which is not how ZK works (not with the default UUID generator, anyway).
Instead, you can select a widget by its ZK ID.
Assuming you have <charts id="myChart" /> in your zul, you can get the ZK widget directly as an argument to the onBind listener, or you can get it with the ZK selector:
zk.$("$myChart"), and from there you can get the DOM node: zk.$("$myChart").$n()
Make sure you know what is client-side (JavaScript) and what is server-side.
(Lastly)^2, keep in mind that in a ZK architecture, the server is the source of the state, and the client only updates the state by sending client commands back to the server.
If you use JS to modify the client-side state, you can create desynchronization between the server-side state and the client-side state, so proceed with caution.
I found a solution: I used GSON to stringify the list and then pass it to the zul page :-)
private String convertAllOrderModelsToJSON() {
    // get all the models
    List<OrderModel> list = customGraphService.getAllOrdersModels();
    // convert the models to JSON
    Gson gson = new GsonBuilder().setPrettyPrinting().create();
    return gson.toJson(list);
}
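Presumably the controller then puts the JSON string into the widget model instead of the raw model list, along these lines (a sketch based on the controller shown above):
@Override
public void preInitialize(Component comp) {
    super.preInitialize(comp);
    WidgetModel model = getWidgetInstanceManager().getModel();
    // pass the pre-serialized JSON string so the zul page can consume it directly
    model.put("orders", convertAllOrderModelsToJSON());
}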

How can I create dynamic rule in apache-flink?

I'm using Flink with Java and I succeeded in defining a static pattern as follows:
Pattern<Event, ?> pattern = Pattern.<Event>begin("first")
    .where(new SimpleCondition<Event>() {
        @Override
        public boolean filter(Event event) {
            return event.getTemperature() > 50;
        }
    })
    .within(Time.seconds(10L));
Is there a way in apache-flink to create patterns in a dynamic way?
I need to define the pattern according to the user's input.
Thanks
You might be interested in the "Dynamic Updates of Application Logic" pattern.
Use a BroadcastStream for your rules and connect it to the main stream.
With the example in the article you could even have dynamic aggregation definitions:
// Streams setup
DataStream<Transaction> transactions = [...]
DataStream<Rule> rulesUpdateStream = [...]
BroadcastStream<Rule> rulesStream = rulesUpdateStream.broadcast(RULES_STATE_DESCRIPTOR);

// Processing pipeline setup
DataStream<Alert> alerts = transactions
    .connect(rulesStream)
    .process(new DynamicKeyFunction())
    .keyBy((keyed) -> keyed.getKey())
    .connect(rulesStream)
    .process(new DynamicAlertFunction());
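For completeness, a minimal sketch of what a rule-evaluating function such as DynamicAlertFunction could look like under this pattern (the Rule, Keyed and Alert types follow the article's naming, and matches() is a hypothetical method, not a definitive implementation):
public class DynamicAlertFunction
        extends KeyedBroadcastProcessFunction<String, Keyed<Transaction, String, Integer>, Rule, Alert> {

    @Override
    public void processBroadcastElement(Rule rule, Context ctx, Collector<Alert> out) throws Exception {
        // Rule updates arrive on the broadcast side; store them in broadcast state by id
        ctx.getBroadcastState(RULES_STATE_DESCRIPTOR).put(rule.getRuleId(), rule);
    }

    @Override
    public void processElement(Keyed<Transaction, String, Integer> value,
                               ReadOnlyContext ctx,
                               Collector<Alert> out) throws Exception {
        // Look up the rule that produced this element's key and evaluate it
        Rule rule = ctx.getBroadcastState(RULES_STATE_DESCRIPTOR).get(value.getId());
        if (rule != null && rule.matches(value.getWrapped())) { // matches() is hypothetical
            out.collect(new Alert(rule.getRuleId(), value.getWrapped()));
        }
    }
}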

Spring Data Rest: Limit sending values on Update method

I have implemented my project using Spring Data REST. I am trying to update an existing record in a table, but when I send only a few of the fields (rather than all the fields present in the entity class) in my request, Spring Data REST treats the missing ones as null/empty values. When I then look at the database, the fields I did not send in the request have been overridden with null/empty values. So my understanding is that even though I am not sending these values, Spring Data REST sees them on the entity class and writes them as null/empty. My question: is there a way to exclude the fields I am not sending in the request when doing an UPDATE? Appreciate any help.
Update: I was using the PUT method. After reading the comments, I changed it to PATCH and it's working perfectly now. Appreciate all the help.
Before the update, load the object from the database using the JPA method findById; call the returned object target.
Then copy all fields that are not null/empty from the object you want to update onto target, and finally save target.
This is a code example:
// YourEntity stands in for your actual entity type
public void update(YourEntity objectWantToUpdate) {
    // in Spring Data 2.x, findById returns an Optional
    YourEntity target = repository.findById(objectWantToUpdate.getId())
            .orElseThrow(IllegalArgumentException::new);
    copyNonNullProperties(objectWantToUpdate, target);
    repository.save(target);
}

public void copyNonNullProperties(Object source, Object target) {
    BeanUtils.copyProperties(source, target, getNullPropertyNames(source));
}

public String[] getNullPropertyNames(Object source) {
    final BeanWrapper src = new BeanWrapperImpl(source);
    PropertyDescriptor[] propDescList = src.getPropertyDescriptors();
    Set<String> emptyNames = new HashSet<>();
    for (PropertyDescriptor propDesc : propDescList) {
        Object srcValue = src.getPropertyValue(propDesc.getName());
        if (srcValue == null) {
            emptyNames.add(propDesc.getName());
        }
    }
    return emptyNames.toArray(new String[0]);
}
You can write a custom update query that updates only particular fields:
@Override
public void saveManager(Manager manager) {
    Query query = sessionFactory.getCurrentSession().createQuery(
            "update Manager set username = :username, password = :password where id = :id");
    query.setParameter("username", manager.getUsername());
    query.setParameter("password", manager.getPassword());
    query.setParameter("id", manager.getId());
    query.executeUpdate();
}
As some of the comments pointed out, using PATCH instead of PUT resolved the issue. Appreciate all the inputs. The following is from the Spring Data REST documentation:
"The PUT method replaces the state of the target resource with the supplied request body.
The PATCH method is similar to the PUT method but partially updates the resources state."
https://docs.spring.io/spring-data/rest/docs/current/reference/html/#customizing-sdr.hiding-repository-crud-methods
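To see the difference from a client's perspective, here is a sketch using Java 11's built-in HttpClient (the endpoint URL and payload are made up): a PATCH body carries only the fields to change, while a PUT body is taken as the complete new state of the resource.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PatchExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Only "username" is sent; fields omitted from a PATCH keep their stored values
        HttpRequest patch = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/managers/1"))
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString("{\"username\":\"newName\"}"))
                .build();
        HttpResponse<String> response = client.send(patch, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}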
Also, I like @Tran Quoc Vu's answer but am not implementing it for now since I don't have to use a custom controller. If there is some logic (e.g. validation) involved when updating the entity, I am in favor of using a custom controller.

Programmatically grouping and typehinting different classes

Given I have a class that uses some kind of searcher to get and display a list of URLs, like this:
package com.acme.displayer;

import java.util.List;

import com.acme.searcher.SearcherInterface;

class AcmeDisplayer {
    private SearcherInterface searcher;

    public AcmeDisplayer(SearcherInterface searcher) {
        this.searcher = searcher;
    }

    public void display() {
        List<String> urls = searcher.getUrls();
        for (String url : urls) {
            System.out.println(url);
        }
    }
}
Whereas the SearcherInterface looks like the following:
package com.acme.searcher;

import java.util.List;

public interface SearcherInterface {
    List<String> getUrls();
}
There are multiple implementations of these searchers. (One, for instance, only returns a hardcoded list of Strings for testing purposes, as sketched below.)
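Such a stub might look like this (a sketch; the class name and URLs are made up):
package com.acme.searcher;

import java.util.Arrays;
import java.util.List;

public class StubSearcher implements SearcherInterface {
    @Override
    public List<String> getUrls() {
        // hardcoded URLs, useful for testing the displayer without any HTTP
        return Arrays.asList("https://example.com/a", "https://example.com/b");
    }
}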
Another one, however, performs HTTP Requests to whatever API and parses the response for URLs, like so:
package com.acme.searcher.http;

import java.io.InputStream;
import java.util.List;

import com.acme.searcher.SearcherInterface;

public class HttpSearcher implements SearcherInterface {
    private RequestPerformerInterface requestPerformer;
    private ParserInterface parser;

    public HttpSearcher(RequestPerformerInterface requestPerformer, ParserInterface parser) {
        this.requestPerformer = requestPerformer;
        this.parser = parser;
    }

    @Override
    public List<String> getUrls() {
        InputStream stream = requestPerformer.performRequest();
        return parser.parse(stream);
    }
}
The splitting of such an HTTP request is done for separation of concerns.
However, this leads to a problem: a Parser might only be built for a certain API, which is represented by a certain RequestPerformer, so the two need to be compatible. I've fiddled around with generic types for such a structure, i.e. having a TypeInterface that both arguments of HttpSearcher's constructor should implement, but I didn't get it working... Another approach would be to have one class check whether the other is compatible with it, but that seems ugly.
Is there any way to achieve such a grouping of RequestPerformers and Parsers by the API they're handling? Or is there something wrong with the architecture itself?
Your HttpSearcher seems like such a device to group these two together. You could create a factory class that returns HttpSearcher and other classes like it, and code that factory to group the compatible RequestPerformers and Parsers together.
The reason why I wouldn't advise leveraging the type system, e.g. through generics, is that the type InputStream can guarantee nothing about the format/type of data it holds. Separating the responsibilities of getting the raw data and parsing it seems like a good idea, but you will still have to 'manually' group the compatible types together, because only you know what format/type of data the InputStream will hold.
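A minimal sketch of such a factory (the concrete performer/parser class names are hypothetical; the point is that the pairing of compatible types lives in exactly one place):
package com.acme.searcher.http;

import com.acme.searcher.SearcherInterface;

public final class SearcherFactory {

    private SearcherFactory() {
    }

    // Pairs a RequestPerformer with the only Parser that understands
    // that API's response format. Both classes here are made up.
    public static SearcherInterface acmeApiSearcher() {
        return new HttpSearcher(new AcmeApiRequestPerformer(), new AcmeApiResponseParser());
    }
}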

How to set TTL for a specific Couchbase document using spring-data-couchbase?

How to set TTL (Time to Live) for a specific couchbase document using spring-data-couchbase?
I know there is a way to set the expiry time using the Document annotation, as follows:
@Document(expiry = 10)
http://docs.spring.io/spring-data/couchbase/docs/1.1.1.RELEASE/reference/html/couchbase.entity.html
It will set the TTL for all documents saved through the entity class.
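For example (a sketch; the entity and its fields are made up):
@Document(expiry = 10) // every document saved through this entity expires after 10 seconds
public class UserSession {

    @Id
    private String id;

    @Field
    private String username;
}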
But it seems there is also a way to set the expiration (TTL) for a specific document:
"Get and touch: Fetch a specified document and update the document expiration."
mentioned in
http://docs.couchbase.com/developer/dev-guide-3.0/read-write.html
How can I achieve the above functionality through spring-data-couchbase?
Even if I can only achieve the functionality using the Java SDK, that would be fine.
Any help is appreciated.
Using Spring Data Couchbase, this is a simple way to configure the TTL per document:
public class CouchbaseConfig extends AbstractCouchbaseConfiguration {

    @Override
    protected List<String> bootstrapHosts() {
        return Arrays.asList("localhost");
    }

    @Override
    protected String getBucketName() {
        return "default";
    }

    @Override
    protected String getBucketPassword() {
        return "password1";
    }

    @Bean
    public MappingCouchbaseConverter mappingCouchbaseConverter() throws Exception {
        MappingCouchbaseConverter converter = new ExpiringDocumentCouchbaseConverter(couchbaseMappingContext());
        converter.setCustomConversions(customConversions());
        return converter;
    }

    class ExpiringDocumentCouchbaseConverter extends MappingCouchbaseConverter {
        /**
         * Create a new {@link MappingCouchbaseConverter}.
         *
         * @param mappingContext the mapping context to use.
         */
        public ExpiringDocumentCouchbaseConverter(MappingContext<? extends CouchbasePersistentEntity<?>, CouchbasePersistentProperty> mappingContext) {
            super(mappingContext);
        }

        // Setting a custom TTL on documents that carry one.
        @Override
        public void write(final Object source, final CouchbaseDocument target) {
            super.write(source, target);
            if (source instanceof ClassContainingTTL) {
                target.setExpiration(((ClassContainingTTL) source).getTimeToLive());
            }
        }
    }
}
Using Spring-Data-Couchbase, you cannot set a TTL on a particular instance. Inserting (mutating) and setting the TTL in one go would be quite complicated given the transcoding steps that are hidden away in the CouchbaseTemplate save method.
However, if what you want to do is just update the TTL of an already persisted document (which is what getAndTouch does), there is a way that doesn't involve any transcoding and so can be applied easily:
From the CouchbaseTemplate, get access to the underlying SDK client via getCouchbaseClient() (note for now sdc is built on top of the previous generation of SDK, 1.4.x, but there'll be a preview of sdc-2.0 soon ;) )
Using the SDK, perform a touch operation on the document's ID, giving it the new TTL.
The touch() method returns an OperationFuture (it is asynchronous), so make sure to either block on it or consider the touch done only once notified in the callback.
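A minimal sketch against the 1.4.x SDK (the document ID and TTL here are made up; touch() and OperationFuture come from the legacy CouchbaseClient API):
CouchbaseClient client = couchbaseTemplate.getCouchbaseClient();
// touch() rewrites only the TTL (in seconds) without transcoding the value
OperationFuture<Boolean> touched = client.touch("user::123", 60);
if (!touched.get()) { // get() blocks until the async operation completes
    // handle failure: missing document, timeout, ...
}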
As of spring-data-couchbase:4.3.0 the code should look like yourRepository.getOperations().getCouchbaseClientFactory().getCollection(null).touch(id, ttl) or alternatively this can be done through CouchbaseTemplate as couchbaseTemplate.getCollection(null).touch(id, ttl)
findById() has a withExpiry() method that results in getAndTouch() being used and the expiration being set
User foundUser = couchbaseTemplate.findById(User.class).withExpiry(Duration.ofSeconds(1)).one(id);
