Hippo CMS tutorial and MVC - Java

I'm new to Hippo CMS and went through the tutorial. Everything went smoothly. But, I have a couple of questions and was hoping to get answers.
1) Do I need to create a new controller for every document I create? Or can I simply repeat the following lines of code for every document in one controller:
Simpledocument document = (Simpledocument) ctx.getContentBean();
if (document != null) {
    // Put the document on the request
    request.setAttribute("document", document);
}
It just doesn't make total sense to me that I should have to create a new controller for every single document. This could get messy.
2) Regarding the steps used to create the dynamic Hello World document in the Hippo CMS Console: do I have to follow all those steps for every document? I have a feeling I do...
public class SimpleComponent extends BaseHstComponent {

    public static final Logger log = LoggerFactory.getLogger(SimpleComponent.class);

    @Override
    public void doBeforeRender(final HstRequest request, final HstResponse response) throws HstComponentException {
        super.doBeforeRender(request, response);
        final HstRequestContext ctx = request.getRequestContext();
        // Retrieve the document based on the URL
        HelloWorldTut document = (HelloWorldTut) ctx.getContentBean();
        HelloWorldList docList = (HelloWorldList) ctx.getContentBean();
        if (document != null) {
            // Put the document on the request
            request.setAttribute("doc", document);
            request.setAttribute("docList", docList);
        }
    }
}
Of course, HelloWorldTut and HelloWorldList are two different document types.

1) Every component needs a controller, and a page can have multiple components, but of course you can reuse code and components. A page is rendered based on which sitemap item is matched from the URL. That sitemap item is attached to a page configuration, which defines the components (or, for pages used in the Channel Manager, containers for components). You don't even need a sitemap item per document: using wildcards you can match URLs based on patterns.
2) For every document type, not for every document. If you had to configure each document individually it would quickly become unmanageable. If your documents are all of one type, you can match them to the same page configuration each time. By using wildcards in the sitemap item, and assuming that the URL matches the name of the document, you can match every document.
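To illustrate, here is a minimal sketch of a single reusable controller serving any document type matched by a wildcard sitemap item (the class name GenericDocumentComponent is made up; the HST calls are the standard ones from the tutorial):
// One controller reused across sitemap items: whatever content bean the
// matched sitemap item resolves to is put on the request under one name.
public class GenericDocumentComponent extends BaseHstComponent {

    @Override
    public void doBeforeRender(final HstRequest request, final HstResponse response)
            throws HstComponentException {
        super.doBeforeRender(request, response);
        HippoBean document = request.getRequestContext().getContentBean();
        if (document != null) {
            request.setAttribute("document", document);
        }
    }
}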

I had a similar question that was answered today at https://community.bloomreach.com/t/controller-for-every-view/744/3
You don't have to have a controller if you don't need custom processing. You can use
<#assign document=hstRequestContext.contentBean />
in your view template to get the content.

Related

Adding links to POST/PUT/PATCH operations in REST Web Controllers

I have a Web API written in Java, with web controllers handling HTTP requests. I'm trying to implement a RESTful architecture with HATEOAS, using Spring Boot. When adding HATEOAS links in methods, I can easily add links for GET/DELETE requests, but I'm having trouble with POST/PUT/PATCH requests, mostly because those require me to supply a body for the thing I want to post, usually in JSON format. I've been googling for a while and can't find out how to do it.
Here's how I'm adding links to GET / DELETE operations.
/**
 * Shows all the Rooms present in the database.
 *
 * @return OK status and a list of Room Minimal DTOs.
 */
@GetMapping(path = "/", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<Object> getRooms() {
    List<RoomDTOMinimal> roomDTOList = roomRepository.getAllDTOWebInformation();
    for (RoomDTOMinimal r : roomDTOList) {
        if (userService.getUsernameFromToken().equals("ADMIN")) {
            Link roomSensors = linkTo(methodOn(RoomsWebController.class).getSensors(r.getName()))
                    .withRel("Get Room Sensors");
            Link deleteRoom = linkTo(methodOn(RoomsWebController.class).deleteRoom(r)).withRel("Delete this Room");
            r.add(roomSensors);
            r.add(deleteRoom);
        } else if (userService.getUsernameFromToken().equals("REGULAR_USER")) {
            Link roomTemp = linkTo(methodOn(RoomsWebController.class).getCurrentRoomTemperature(r.getName()))
                    .withRel("Get Room Temperature");
            r.add(roomTemp);
        }
    }
    return new ResponseEntity<>(roomDTOList, HttpStatus.OK);
}
I want to add a Link to an "editRoom" request, something like:
Link editRoom = linkTo(methodOn(RoomsWebController.class).configureRoom(r.getName(), WHAT GOES HERE???)).withSelfRel();
But configureRoom takes the roomName and a RoomDTO in its signature. The RoomDTO is a @RequestBody, so I can't give it to the Link. How should I add the link to the objects in a way that then allows me to call that method?
I'd like to have something like:
ROOM | Delete | Edit
On the client side, where if I click DELETE the room is deleted, and if I click Edit the client side expands, with text boxes, allowing me to insert the required parameters to edit the room. I have the client-side code implemented for the Edit function, with appropriate front-end; but I can't link to it on the server-side without already providing data that should come later, from the user input. What's the best way to do this?
I've since solved it after talking with a team lead. Apparently it's acceptable to pass either null or an empty DTO object as a parameter in the scenario above; the HATEOAS implementation cares specifically about the parameters that are part of the path and, roughly speaking, ignores the others. Those can then be filled in as needed on the client side when a user performs an action / inserts input.
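In code, the accepted workaround looks roughly like this (a sketch against the controller above; the null stands in for the @RequestBody DTO, which the link builder ignores):
// Pass null for the @RequestBody parameter: link expansion only needs the
// path variables (here, the room name). The client supplies the real body
// later, when it actually performs the PUT/PATCH.
Link editRoom = linkTo(methodOn(RoomsWebController.class)
        .configureRoom(r.getName(), null))
        .withRel("Edit this Room");
r.add(editRoom);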

Programmatically render template area in Magnolia CMS

I am using Magnolia CMS 5.4 and I want to build a module that will render some content of a page and expose it over a REST API. The task sounds simple, but I am not sure how to approach it or where to start.
I want my module to generate a partial template, or an area of a template, for a given reference, let's say "header". I need to render the header template/area, get the HTML, and return that as a response to another system.
So the questions are: is this possible at all, and where do I start?
OK, after asking here and on the Magnolia forum without getting an answer, I dug into the source code and found a way to do it.
First of all, rendering is based on different renderers, which can be JCR, plain-text, or Freemarker renderers. In Magnolia these are selected and used by RenderingEngine and its implementation, DefaultRenderingEngine. The rendering engine will allow you to render a whole page node, which is one step closer to what I am trying to achieve. So let's see how this can be done.
I'll skip some steps, but I've added a command and made it work over REST, so I can see what's happening when I send a request to the endpoint. The command extends BaseRepositoryCommand to allow access to the JCR repositories.
@Inject
public void setDefaultRenderingEngine(
        final RendererRegistry rendererRegistry,
        final TemplateDefinitionAssignment templateDefinitionAssignment,
        final RenderableVariationResolver variationResolver,
        final Provider<RenderingContext> renderingContextProvider
) {
    renderingEngine = new DefaultRenderingEngine(rendererRegistry, templateDefinitionAssignment,
            variationResolver, renderingContextProvider);
}
This creates your rendering engine, and from here you can start rendering nodes, with a few small gotchas. I tried injecting the rendering engine directly, but that didn't work, as all of its internals were empty/null, so I decided to grab all the constructor parameters and initialise my own instance.
The next step is to render a page node. The rendering engine is built around the idea of rendering to an HttpServletResponse and ties into the request/response flow really well, but we need to capture the generated markup in a variable, so I've added a new implementation of FilteringResponseOutputProvider:
public class AppendableFilteringResponseOutputProvider extends FilteringResponseOutputProvider {

    private final FilteringAppendableWrapper appendable;
    private OutputStream outputStream = new ByteArrayOutputStream();

    public AppendableFilteringResponseOutputProvider(HttpServletResponse aResponse) {
        super(aResponse);
        OutputStreamWriter writer = new OutputStreamWriter(outputStream);
        appendable = Components.newInstance(FilteringAppendableWrapper.class);
        appendable.setWrappedAppendable(writer);
    }

    @Override
    public Appendable getAppendable() throws IOException {
        return appendable;
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        ((Writer) appendable.getWrappedAppendable()).flush();
        return outputStream;
    }

    @Override
    public void setWriteEnabled(boolean writeEnabled) {
        super.setWriteEnabled(writeEnabled);
        appendable.setWriteEnabled(writeEnabled);
    }
}
The idea of the class is to expose the output stream while still preserving the FilteringAppendableWrapper that allows us to filter the content we want to write. This is not needed in the general case: you can stick to using AppendableOnlyOutputProvider with a StringBuilder appendable and easily retrieve the entire page markup.
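For that general case, a short sketch (assuming AppendableOnlyOutputProvider accepts any Appendable, and that pageNode and renderingEngine are set up as shown below):
// Full-page markup, no filtering: render straight into a StringBuilder.
StringBuilder markup = new StringBuilder();
OutputProvider outputProvider = new AppendableOnlyOutputProvider(markup);
renderingEngine.render(pageNode, outputProvider);
String html = markup.toString();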
// here I needed to create a fake HttpServletResponse
OutputProvider outputProvider = new AppendableFilteringResponseOutputProvider(new FakeResponse());
Once you have the output provider, you need a page node, and since we are faking the request we have to set up the Magnolia global environment to be able to retrieve the JCR node:
// populate repository and root node as those are not set for commands
super.setRepository(RepositoryConstants.WEBSITE);
super.setPath(nodePath); // this can be any existing path like: "/home/page"
Node pageNode = getJCRNode(context);
Now that we have the output provider and the node we want to render, the next thing is actually running the rendering engine:
renderingEngine.render(pageNode, outputProvider);
String html = outputProvider.getOutputStream().toString();
And that's it, you should have your content rendered and you can use it as you wish.
Now we come to my special case, where I want to render just one area of the whole page, in this case the header. This is all handled by the same renderingEngine, but you need to add a rendering listener that overrides the writing process. First, inject it in the command:
@Inject
public void setAreaFilteringListener(final AreaFilteringListener aAreaFilteringListener) {
    areaFilteringListener = aAreaFilteringListener;
}
This is where the magic happens: the AreaFilteringListener checks whether you are currently rendering the requested area; if you are, it enables the output provider for writing, otherwise it keeps the provider locked and skips all unrelated areas. You need to add the listener to the rendering engine like so:
// add the area filtering listener that generates specific area HTML only
LinkedList<AbstractRenderingListener> listeners = new LinkedList<>();
listeners.add(areaFilteringListener);
renderingEngine.setListeners(listeners);
// we need to provide the exact same Response instance that the WebContext is using
// otherwise the voters against the AreaFilteringListener will skip the execution
renderingEngine.initListeners(outputProvider, MgnlContext.getWebContext().getResponse());
I hear you ask: "But where do we specify the area to be rendered?" Aha, here it comes:
// enable the area filtering listener through a global flag
MgnlContext.setAttribute(AreaFilteringListener.MGNL_AREA_PARAMETER, areaName);
MgnlContext.getAggregationState().setMainContentNode(pageNode);
The area filtering listener checks for a specific Magnolia context property, "mgnlArea": if it is found, the listener reads its value, uses it as the area name, checks whether that area exists in the node, and enables writing once we hit that area. This can also be used through URLs like https://demopublic.magnolia-cms.com/~mgnlArea=footer~.html, which will give you just the footer area rendered as an HTML page.
Here is the full solution: http://yysource.com/2016/03/programatically-render-template-area-in-magnolia-cms/
Just use the path of the area and make an HTTP request to that URL, e.g. http://localhost:9080/magnoliaAuthor/travel/main/0.html
As far as I can see, there is no need to go through everything programmatically as you did.
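With that approach the whole task reduces to a plain HTTP fetch of the area's URL, for example (a sketch reusing the example URL above):
// Fetch the rendered area markup directly from Magnolia over HTTP.
URL areaUrl = new URL("http://localhost:9080/magnoliaAuthor/travel/main/0.html");
try (BufferedReader in = new BufferedReader(
        new InputStreamReader(areaUrl.openStream(), StandardCharsets.UTF_8))) {
    String html = in.lines().collect(Collectors.joining("\n"));
    // html now contains just that area's markup
}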
Direct component rendering

How can I use the Wikipedia API to extract/parse the link I am looking for?

On Wikipedia, 95% of the links lead to the Philosophy page. I am trying to write a program in Java that takes any Wikipedia link and clicks the first link (one that is not a citation/sound/extraneous link, also ignoring parenthesized links).
For example, if you start with the URL http://en.wikipedia.org/wiki/Dutch_people, it should click Ethnic group http://en.wikipedia.org/wiki/Ethnic_group, and so on until it reaches Philosophy.
You should see this: Getting_to_Philosophy
Check http://xefer.com/wikipedia (type any word) to see how it works.
I already wrote the back end that stores the data in a database in 3 columns:
Unique_URL_Id | URL_Link | Next_URL_Id
so that printing the whole path later will be easier.
The back end works fine (if I give it just a list of links to follow). However, extracting and finding the first link is not working as it should.
Here is sample code I wrote just for extracting links from a URL using the jsoup API:
public static void extractWikiPage(String title) throws IOException {
    Document doc = Jsoup.connect("http://en.wikipedia.org/wiki/Europe").get();
    // Get the first paragraph, where the main body content starts
    String body = doc.getElementsByTag("p").first().toString();
    System.out.println(body);
    // Parse just that paragraph and print all of its links
    Document doc2 = Jsoup.parse(body);
    Elements href = doc2.getElementsByTag("a");
    for (Element h : href) {
        System.out.println(h.toString());
    }
    System.exit(1);
}
I am just finding the first occurrence of <p>, since that's where 95% of the links to the next page start. Within that paragraph I am getting all the links, but I need the first one that satisfies the condition I wrote above.
How can I use the Wikipedia API to extract the data I am looking for? I appreciate your help.
/w/api.php?action=query&prop=revisions&format=json&rvprop=content&rvlimit=1&rawcontinue=&titles=Dutch_people is the query that returns the wikitext for that page.
You'll have to parse that result to get the data you want. You'll be looking for the first thing inside [[double square brackets]] (probably after /\{\{Infobox(.*?)\}\}/i or something like that, to exclude links in the infobox and any maintenance tags that might be on the page) that doesn't start with "something:", to eliminate all interwiki links, categories, and file/media pages.
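A rough sketch of that approach in Java (the regexes here are illustrative only; robust wikitext parsing needs to handle nested templates, which these do not):
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.jsoup.Jsoup;

public class FirstLinkFinder {
    public static void main(String[] args) throws IOException {
        String api = "https://en.wikipedia.org/w/api.php?action=query&prop=revisions"
                + "&format=json&rvprop=content&rvlimit=1&rawcontinue=&titles=Dutch_people";
        // jsoup is already used elsewhere in this project; here it just fetches raw JSON.
        String wikitext = Jsoup.connect(api).ignoreContentType(true).execute().body();
        // Crudely drop {{...}} templates (infobox, maintenance tags), then take the
        // first [[link]] whose target has no ":" (skips interwiki/Category:/File: links).
        String stripped = wikitext.replaceAll("(?s)\\{\\{.*?\\}\\}", "");
        Matcher m = Pattern.compile("\\[\\[([^\\]|:]+)(?:\\|[^\\]]*)?\\]\\]").matcher(stripped);
        if (m.find()) {
            System.out.println("First link: " + m.group(1));
        }
    }
}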

Getting data from the web using Android?

When using Eclipse for Java, I'm able to access data from websites and fill out online forms using Selenium. All I have to do is WebDriver driver = new HtmlUnitDriver();, driver.get("www.google.com");, and driver.findElement(). To accomplish this, I go into the Java Build Path, access Libraries, and add the external JAR file selenium-server-standalone-2.39.0.jar.
I'd like to do the same for Android but am having difficulty. I understand there was something called Selenium for Android, but it's no longer supported. Now there's Selendroid. But while its code is vaguely similar to that of Eclipse for Java (i.e., SelendroidCapabilities capa = new SelendroidCapabilities("io.selendroid.testapp:0.12.0");, WebDriver driver = new SelendroidDriver(capa);, WebElement inputField = driver.findElement(By.id("my_text_field"));), I don't think it is actually the same as what I am looking for. I even tried to add selendroid-standalone-0.12.0-with-dependencies.jar to the Android library, and all I got back was this error in the console:
Dx warning: Ignoring InnerClasses attribute for an anonymous inner class
(org.apache.xalan.lib.sql.SecuritySupport12$8) that doesn't come with an
associated EnclosingMethod attribute. This class was probably produced by a
compiler that did not target the modern .class file format. The recommended
solution is to recompile the class from source, using an up-to-date compiler
and without specifying any "-target" type options. The consequence of ignoring
this warning is that reflective operations on this class will incorrectly
indicate that it is *not* an inner class.
So my question is: where can I go to learn about using Android to go to a web page and retrieve some data (without actually opening a web page on the screen; this is strictly background stuff)? Or, what are the steps to getting data from a website via Android using identifiers such as id, name, XPath, etc.?
Use jsoup for this. I think that's what you are looking for.
jsoup is a Java library for working with real-world HTML. It provides a very convenient API for extracting and manipulating data, using the best of DOM, CSS, and jQuery-like methods.
Download the jar and include it in your project.
Simple example:
Document doc = Jsoup.connect("http://example.com/").get();
String title = doc.title();
Read the API docs for more info.
Also make sure to put network calls in an AsyncTask and not on the main UI thread.
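For example, a minimal sketch of wrapping the jsoup call in an AsyncTask (the view name in the comment is hypothetical):
// Run the network call off the UI thread; the result comes back on it.
new AsyncTask<Void, Void, String>() {
    @Override
    protected String doInBackground(Void... params) {
        try {
            Document doc = Jsoup.connect("http://example.com/").get();
            return doc.title();
        } catch (IOException e) {
            return null;
        }
    }

    @Override
    protected void onPostExecute(String title) {
        // Safe to touch views here, e.g. titleView.setText(title)
    }
}.execute();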
I eventually found something that is exactly what I wanted: HtmlCleaner. There's a good guide here.
Download the JAR file here and include it in the project's library.
Then use the following code to get your element from the XPath:
public class Main extends Activity {

    // HTML page
    static final String URL = "https://www.yourpage.com/";
    // XPath query
    static final String XPATH = "//some/path/here";

    @Override
    public void onCreate(Bundle savedInstanceState) {
        // init view layout
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        // decide output (note: run this off the main thread in real code,
        // e.g. in an AsyncTask, since it performs a network call)
        String value = getData();
    }

    public String getData() {
        String data = "";
        // config cleaner properties
        HtmlCleaner htmlCleaner = new HtmlCleaner();
        CleanerProperties props = htmlCleaner.getProperties();
        props.setAllowHtmlInsideAttributes(false);
        props.setAllowMultiWordAttributes(true);
        props.setRecognizeUnicodeChars(true);
        props.setOmitComments(true);
        try {
            // create URL object
            URL url = new URL(URL);
            // get HTML page root node
            TagNode root = htmlCleaner.clean(url);
            // query XPath
            Object[] statsNode = root.evaluateXPath(XPATH);
            // process data if any node was found
            if (statsNode.length > 0) {
                // I already know there's only one node, so pick index 0
                TagNode resultNode = (TagNode) statsNode[0];
                // get text data from the HTML node
                data = resultNode.getText().toString();
            }
        } catch (Exception e) {
            // handle malformed URL / IO / XPath errors
            e.printStackTrace();
        }
        // return value
        return data;
    }
}

wicket: mapping different paths to the same class on request to generate different content in markup

I developed a shop system. There is a product page which lists the available items, filtered by some select menus, and an item detail page to view some content about each product. The content of that page is loaded out of an XML property file. If one clicks the link in the list view of an item to view some details, an item-specific GET parameter is set. With the parameter's value, I can dynamically load the content for that specific item from my properties by altering the loaded key's name.
So far so good, but not really good. So much for the background; let's get to some details.
Most of all, this is SEO-motivated stuff. So far there is also a problem with the page instance ID in the URL for stateful pages, not only because of the non-stable URL, but also because Wicket does 302 redirects to manipulate the URL. Maybe I will remove the stateful components of the item detail page to solve that problem.
So now there are some QR codes on the products being sold that contain a link to my detail page. These links were not designed by me and, as you can imagine, they look a whole lot different from the actual URL. Let's say the QR code URL path is "/shop/item1", where item1 is the product name. My page class would be ItemDetailPage.
I wrote an IRequestMapper that I mount in my WebApplication#init(), which resolves the incoming request's URL and checks whether it needs to be handled by this IRequestMapper. If so, I build my page with a PageProvider and return a request handler for it.
public IRequestHandler mapRequest(Request request) {
    if (compatibilityScore > 0) {
        PageProvider provider = new PageProvider(ItemDetailPage.class,
                new ItemIDUrlParam(request.getUrl().getPath().split("/")[1]));
        provider.setPageSource(Application.get().getMapperContext());
        return new RenderPageRequestHandler(provider);
    }
    return null;
}
So as you can see, I build up a parameter that my detail page can handle. But the resulting URL is not very nice. I'd like to keep the original URL by mapping the bookmarkable content to it, without any redirect.
My first thought was to implement a URLCodingStrategy to rebuild the URL with its parameters in the form of a path. I think the HybridUrlCodingStrategy does something like that.
After resolving the URL path "/shop/item1/" with the IRequestMapper, it would look like "/shop/item?1?id=item1", where the first parameter of course is the Wicket page instance ID, which will most likely go away as I rebuild the detail page to be stateless :(
After applying a HybridUrlCodingStrategy it might look like "/shop/item/1/id/item1", or "/shop/item/id/item1" without the page instance ID. Another idea would be to remove the second path part and the parameter name and only use the parameter's value, so the URL would look like "/shop/item1", which is then the same URL as in the request.
Do you guys have any experience with that or any smart ideas?
The requirements are:
one fixed URL for each product that the search engine bot can index
no parameters
stateless and bookmarkable
no 302 redirects in any way
the identity of the requested item must be available to the detail page
With kind regards from Germany,
Marcel
As Bert stated, your use case should be covered by normal page mounting; see also the MountedMapper wiki page. For your case, a concrete example:
mountPage("/shop/${id}", ShopDetailPage.class);
Given that "item1" is the ID of the item (which is not very clear to me), you can retrieve it now as the named page parameter id in Wicket. Another example often seen in SEO links, containing both the unique ID and the (non-unique, changing) title:
mountPage("/shop/${id}/${title}", ShopDetailPage.class);
Regarding the page instance ID, there are some ways to get rid of it. Perhaps the best way is to make the page stateless, as you said; another easy way is to configure IRequestCycleSettings.RenderStrategy.ONE_PASS_RENDER as the render strategy (see the API doc for its consequences).
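Put together, a sketch of where the mount and the render strategy would live (the application and home page class names are made up):
public class ShopApplication extends WebApplication {

    @Override
    public void init() {
        super.init();
        mountPage("/shop/${id}", ShopDetailPage.class);
        // ONE_PASS_RENDER skips the redirect-to-buffer step, avoiding 302s;
        // check the API doc for the trade-offs.
        getRequestCycleSettings().setRenderStrategy(
                IRequestCycleSettings.RenderStrategy.ONE_PASS_RENDER);
    }

    @Override
    public Class<? extends Page> getHomePage() {
        return ShopHomePage.class; // hypothetical home page
    }
}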
