I need to implement faceted search with dynamic boundaries in hybris.
I have no idea how to do it.
Can you help me?
Facet boundaries (I am going to assume you mean "Ranges", in the terminology used in hybris) are pushed into Solr if you are using a default hybris Accelerator site.
To make these "dynamic" you will probably need to implement https://issues.apache.org/jira/browse/SOLR-1581 and integrate this via the solrfacetsearch extension.
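For context, this is roughly what the static range faceting that stock Solr already supports looks like from a SolrJ client. A minimal sketch only: the `price` field, the URL, and the client setup are my assumptions, not part of the hybris integration.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RangeFacetExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

        SolrQuery q = new SolrQuery("*:*");
        q.setFacet(true);
        // Out of the box, Solr range facets use fixed boundaries like these;
        // SOLR-1581 is about computing the boundaries dynamically instead.
        q.add("facet.range", "price");
        q.add("facet.range.start", "0");
        q.add("facet.range.end", "1000");
        q.add("facet.range.gap", "100");

        QueryResponse rsp = solr.query(q);
        System.out.println(rsp.getFacetRanges());
    }
}
```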
I'm new to JSF.
I have two use cases.
1: URL: https://site.com/context/part/{partId}
2: URL: https://site.com/context/register-token?tokenType=xxxxxx&token=xxxxx
In each of these cases I'd like to extract the variable information, execute code in a Java class (a scoped bean with @PostConstruct, I presume), and then display appropriate content based on the values.
I'm sure this is pretty straightforward in JSF, and I have seen quite a few nice suggestions on how to do pieces of this, but they seem not to be without controversy, so I can't say they're the correct way, due to my ignorance. Additionally, there seem to be significant enough changes in 2.2 that the older posts could be out of date as far as "correctness" goes. Lastly, there doesn't seem to be a guide (that I can find) that speaks specifically to these workflows in 2.2.
Can anyone provide a semi-comprehensive "correct" way to do these things in Java EE 7 / JSF 2.2?
"Correct" can be subjective, I know, but this seems rudimentary enough that a vanilla happy-path suggestion would be enough.
Much appreciated, thanks.
Finishing comment from above as the last issue has been resolved...
For workflow 1: I found this and it worked: http://www.oracle.com/technetwork/articles/java/jsf22-1377252.html
But it seemed limited to query params.
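For anyone landing here, this is a minimal sketch of the query-param route with a scoped bean and @PostConstruct, assuming CDI and JSF 2.2; the parameter names come from use case 2 above, and the bean name is made up:

```java
import java.io.Serializable;
import java.util.Map;

import javax.annotation.PostConstruct;
import javax.faces.context.FacesContext;
import javax.faces.view.ViewScoped;
import javax.inject.Named;

@Named
@ViewScoped // the CDI-compatible view scope introduced in JSF 2.2
public class RegisterTokenBean implements Serializable {

    private String tokenType;
    private String token;

    @PostConstruct
    public void init() {
        // Pull the raw query parameters from the request;
        // <f:viewParam> in the page is the more declarative alternative.
        Map<String, String> params = FacesContext.getCurrentInstance()
                .getExternalContext().getRequestParameterMap();
        tokenType = params.get("tokenType");
        token = params.get("token");
        // ...validate the token and prepare whatever the view needs...
    }

    public String getTokenType() { return tokenType; }
    public String getToken() { return token; }
}
```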
For workflow 2: I'm using PrettyFaces, and I was able to get it to work (sort of) by using this (Section 3.6): http://ocpsoft.org/docs/prettyfaces/3.3.3/en-US/html/Configuration.html#config.actions
My issue with web assets not resolving was fixed by this tip from @chkal: "PrettyFaces using mapped urls and actions, i lose all my stylings".
This suggestion pushed me over the edge to abandon a pure JSF solution and use PrettyFaces, especially since I'm going to lean towards workflow 2 more often than not: "Should I use f:event or action element in PrettyFaces?"
I intend to make a niche search engine. I am using apache-nutch-1.6 as the crawler and apache-solr-3.6.2 as the searcher. I must say there is very little up-to-date information on the web about these technologies.
I followed this tutorial http://wiki.apache.org/nutch/NutchTutorial and have successfully installed Nutch and Solr on my Ubuntu system. I was also successful in injecting the seed URL into the webdb and performing the crawl.
Using the Solr interface at http://localhost:8983/solr/admin, I can also query the crawled results. But the output I receive is just the raw response (screenshot omitted).
Am I missing something here? The earlier apache-nutch-0.7 had a WAR which generated clear HTML output (screenshot omitted). How do I achieve this? If anyone could point me to an up-to-date tutorial or guidebook, it would be highly appreciated.
A couple of things:
If you are just starting, do not use Solr 3.6; go straight to the latest 4.1+. A bunch of things have changed and a lot of new features have been added.
You seem to be saying that you will expose Solr + UI directly to the general web - that's a really bad idea, as Solr is completely unsecured and allows web-based delete queries. You really want a business layer in the middle (see the sketch after this list).
With Solr 4.1, there is a pretty Admin UI, and there is also a /browse page that shows how to use Velocity to build pages backed by Solr. Or have a look at something like Project Blacklight for an example of how to put a UI over Solr.
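A rough sketch of what that middle layer can look like with SolrJ (the class name, host, and core name are made up): browsers only ever talk to your app, and only this code talks to Solr.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SearchService {

    // Solr stays on an internal host that only this layer can reach.
    private final HttpSolrServer solr =
            new HttpSolrServer("http://internal-solr:8983/solr/collection1");

    public QueryResponse search(String userInput) throws SolrServerException {
        SolrQuery q = new SolrQuery();
        // Only read-style parameters are ever built here, so end users
        // never get a path to /update or to delete-by-query.
        q.setQuery(userInput);
        q.setRows(10);
        return solr.query(q);
    }
}
```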
I found the link below, which answered my query:
http://cmusphinx.sourceforge.net/2012/06/building-a-java-application-with-apache-nutch-and-solr/
I agree; after reading the content at the above link, I felt very angry at myself.
The Solr package provides all the required objects to query Solr.
In fact, the essential jars are just solr-solrj-3.4.0.jar, commons-httpclient-3.1.jar, and slf4j-api-1.6.4.jar.
Anyone can build a Java search engine using these objects to query the index and put a fancy UI on top.
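To show how little is needed, here is a minimal sketch using just those jars; the core URL and the field names assume a default Nutch-into-Solr setup and may differ in yours:

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class MinimalSolrClient {
    public static void main(String[] args) throws Exception {
        // CommonsHttpSolrServer is the SolrJ 3.x client backed by
        // commons-httpclient, which is why that jar is on the list.
        CommonsHttpSolrServer solr =
                new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery("title:nutch");
        query.setRows(10);

        QueryResponse response = solr.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("url") + " -> "
                    + doc.getFieldValue("title"));
        }
    }
}
```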
Thanks again.
We have an in-house webapp running over the intranet for internal use at our firm.
Recently we decided to implement an efficient search facility, and we would like input from experts here about which APIs are available and which one would be most useful for the following use cases:
The objects are divided into business groups (BGs) in our firm, i.e., an object can have various attributes, and the attributes are not common between any two objects from different BGs.
Users might want to search for a specific attribute within an object.
Users are from a business group, so they have an idea of the kinds of attributes related to their group.
The API should be generic enough to do full-text/partial-text search when a list of objects is passed to it, along with the name of the attribute and the search text. More importantly, it should be able to index these results.
As this is an internal app, there are no restrictions on space as such, but we need a fast and generic API.
I am sure Java already has something which suits our needs.
More info on the technology stack:
Language: Java
Server: Apache Tomcat
Stack: Spring, iBatis, Struts
Cache in place: ECache
Other API: Shindig API
Thanks
Neeraj
You can use Solr, from Apache Lucene, if text-based search has priority. It might be more than what you want, though; have a look.
http://lucene.apache.org/solr/
http://lucene.apache.org/
Solr is a great tool for search. The downside is that it may require some work to get it the way you want it.
With it, you can set different fields for a document and give them custom priority in each query.
You can create facets easily from those fields, like on Amazon. Sorting is easy and quick. And it has a spellchecker and suggestions engine built in.
Documents are matched using the dismax query parser, which you can customize.
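For example, the per-field priorities, facets, and sorting described above translate into query parameters roughly like this via SolrJ (the field names are invented for illustration):

```java
import org.apache.solr.client.solrj.SolrQuery;

public class DismaxExample {
    public static SolrQuery build(String userInput) {
        SolrQuery q = new SolrQuery(userInput);
        q.set("defType", "dismax");          // use the dismax query parser
        q.set("qf", "name^2.0 description"); // custom per-field priority
        q.setFacet(true);                    // facet on a field, Amazon-style
        q.addFacetField("businessGroup");
        q.setFacetMinCount(1);
        q.addSortField("price", SolrQuery.ORDER.asc); // quick sorting
        return q;
    }
}
```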
We have recently installed a Google Search Appliance in order to power our internal search (via the Java API), and all seems to be well; however, I have a question regarding 'automatic' sitemap generation that I'm hoping you guys may know the answer to.
We are aware of the GSA's ability to auto-generate sitemaps for each of its collections; however, this process is rather manual, and considering that we have around 10 regional sites that need to be updated as often as possible, it's not ideal to have to log into the admin interface on a regular basis in order to export them to the site root where search engines can find them.
Unfortunately there doesn't seem to be any API support for this, at least none that I can find, so I was wondering if anyone had any ideas for a solution/workaround or, if all else fails, the best alternative.
At present I'm thinking that if we can get the full index back from the API in the form of a list, we can write an XML file out the old-fashioned way using a cron job or similar. However, this seems like a bit of a clumsy solution - any better ideas?
You could try the GSA Admin Toolkit, or simply write some code yourself which just logs in on the administration page and then uses that session to invoke the sitemap export URL (which is basically what the Admin Toolkit does).
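Something along these lines, for example. Note that every endpoint, form field, and path below is a placeholder; inspect your GSA's admin pages (or the Admin Toolkit source) for the real ones.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class GsaSitemapExport {
    public static void main(String[] args) throws Exception {
        // 1. Log in to the admin console; endpoint and form fields are
        //    placeholders -- capture the real ones from your GSA's login form.
        URL loginUrl = new URL("https://gsa.example.com:8443/admin/login");
        HttpURLConnection login = (HttpURLConnection) loginUrl.openConnection();
        login.setRequestMethod("POST");
        login.setDoOutput(true);
        try (OutputStream out = login.getOutputStream()) {
            out.write("userName=admin&password=secret"
                    .getBytes(StandardCharsets.UTF_8));
        }
        String sessionCookie = login.getHeaderField("Set-Cookie");

        // 2. Reuse the authenticated session to invoke the sitemap export
        //    URL for one collection, then drop the result in the site root.
        URL exportUrl = new URL(
                "https://gsa.example.com:8443/admin/sitemap?collection=mysite");
        HttpURLConnection export = (HttpURLConnection) exportUrl.openConnection();
        export.setRequestProperty("Cookie", sessionCookie);
        try (InputStream in = export.getInputStream()) {
            Files.copy(in, Paths.get("/var/www/html/sitemap.xml"),
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```

Run from a cron job with a loop over your collection names, this would cover all ~10 regional sites without anyone touching the admin interface.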
I posted this in the Nabble group also, but figured I may get some advice here too.
Is there a way to get Solr to search whatever index I tell it to at search time, without using multiple cores?
I don't build my indexes with Solr; I build them with my own Java class, but I do use Solr to search them later. It would be nice to tell Solr at search time which index to access.
I have combined them as well, and this works, but there are a few issues in my particular case that make it easier to solve by sending the index name/path at search time.
thanks
I don't really think you can do what you are looking for here. Part of the simplicity of Solr comes from having the core (and therefore the index) in the URL. What you could do is hack how Solr works to add another parameter to the URL, and then, when Solr goes to do a search, use that to determine which index it uses. I think you might end up throwing out all the auto-warming of caches, etc., though.
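To illustrate the point about the core living in the URL: from a SolrJ client, picking an index normally looks like this (the core names are hypothetical):

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class CoreSelection {
    // With multiple cores, the target index is simply part of the URL;
    // stock Solr has no per-request "index" parameter.
    SolrServer productsCore = new HttpSolrServer("http://localhost:8983/solr/products");
    SolrServer articlesCore = new HttpSolrServer("http://localhost:8983/solr/articles");
}
```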
Out of curiosity, why do you NOT want to use multiple cores? Is it that you expect to have thousands and thousands, or that each index is incredibly transient?
Eric
You might want to look into LucidWorks' Enterprise Solr release.
http://www.lucidimagination.com/products/lucidworks-search-platform/enterprise
They implement Collections in a way that is similar to your use case.