Adding content to Liferay via API - java

I am starting to use Liferay Portal and I have two basic needs which I would like to achieve with Liferay.
Is there a possibility to add content to the CMS at the API level? I would like to insert some data "from code".
More importantly: how can I arrange that every newly created user gets their own homepage, generated with some predefined template elements on it?
I have tried to Google for this, but I did not find anything helpful. Maybe some keywords?
After some analysis of the documentation devoted to services and ServiceBuilder I realized that it is not what I want.
Let me show an example based on WebSphere.
In WebSphere we have a bunch of EJB components available to perform actions and exchange information with the portal, and they are easy to use. Isn't there a similar mechanism in Liferay that does not involve web services?

My recommendation for this kind of question is to take a look at the sevencogs-hook source code. The structure of this hook is basically just a long script that runs once, setting up a complete demo site with users, sites, pages, content, etc. The code runs once (after the first deployment) and then never again. There are no (obvious) conditionals, no context to understand, etc.
You can basically just step through everything and - in that process - understand how content (and pages, images, blog posts, etc.) is created and positioned on pages in Liferay.
This hook accesses the Java API; a very similar API is available through web services. Basically all of Liferay's portlets use the same API to do their business.
Edit: Additional information to keep this answer valuable/current: Sevencogs is discontinued, but still available in old releases (source & binary). The API has slightly changed, so compiling/running it will need a bit of work. James Falkner has blogged about the leftovers and lessons learnt - those snippets are extracted from sevencogs and contain the relevant code pieces to work with the API.

Looking at this page from the documentation: it smells like a SOAP interface (they mention some sort of document uploader service, and I've seen Axis mentioned).
You'll find some URL examples there that should give a list of the available web services.

For number 1, you can use one of the:
JournalArticleLocalServiceUtil.addArticle()
methods to programmatically add Liferay Web Content from a portlet. If you download the Liferay Portal Source you can see the structure of these methods.
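As a rough sketch of the surrounding code (the exact addArticle() overload differs between Liferay versions - the 6.x variants take localized title/description maps, the web-content XML, structure/template IDs, display dates and a ServiceContext - so the parameters below are illustrative, not the exact signature):

// Illustrative only: check JournalArticleLocalServiceUtil in your portal source for the exact overload.
ServiceContext serviceContext = new ServiceContext();
serviceContext.setScopeGroupId(groupId);          // the site (group) the article belongs to
serviceContext.setAddGroupPermissions(true);
serviceContext.setAddGuestPermissions(true);

Map<Locale, String> titleMap = new HashMap<Locale, String>();
titleMap.put(Locale.US, "My programmatically created article");

// The content argument is the web-content XML Liferay stores for a journal article:
// JournalArticleLocalServiceUtil.addArticle(userId, groupId, ..., titleMap, descriptionMap,
//         content, ..., serviceContext);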
For number 2, you can create page templates with preconfigured portlets on them (through the Plugins SDK), and then use the API to programmatically create the pages using one of the:
LayoutLocalServiceUtil.addLayout()
methods.
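A minimal sketch of the page-creation part, assuming a Liferay 6.x-era addLayout() overload (verify the signature in your version; this could run, for example, from a post-login hook or a ModelListener on User):

// Sketch only: creates a "Home" page on a user's personal site.
ServiceContext serviceContext = new ServiceContext();

Layout layout = LayoutLocalServiceUtil.addLayout(
        user.getUserId(),
        user.getGroupId(),                          // the user's personal site (assumption: 6.x behaviour)
        true,                                       // privateLayout
        LayoutConstants.DEFAULT_PARENT_LAYOUT_ID,   // top-level page
        "Home",                                     // name
        "Home",                                     // title
        "Auto-created home page",                   // description
        LayoutConstants.TYPE_PORTLET,               // a regular portlet page
        false,                                      // hidden
        "/home",                                    // friendlyURL
        serviceContext);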
If you have any more specific questions about these, comment back. I hope this helps!

Related

How to modify search result page given by Solr?

I intend to make a niche search engine. I am using apache-nutch-1.6 as the crawler and apache-solr-3.6.2 as the searcher. I must say there is very little up-to-date information on the web about these technologies.
I followed the tutorial at http://wiki.apache.org/nutch/NutchTutorial and have successfully installed Nutch and Solr on my Ubuntu system. I was also able to inject the seed URL into the webdb and perform the crawl.
Using the Solr interface at http://localhost:8983/solr/admin, I can also query the crawled results, but this is the output I receive.
Am I missing something here? The earlier apache-nutch-0.7 had a war which generated a clear HTML output. How do I achieve this? If anyone could point me to a more recent tutorial or guidebook, it would be highly appreciated.
A couple of things:
If you are just starting, do not use Solr 3.6; go straight to the latest 4.1+. A bunch of things have changed and a lot of new features have been added.
You seem to be saying that you will expose Solr + its UI directly to the general web - that's a really bad idea, as Solr is completely unsecured and allows web-based delete queries. You really want a business layer in the middle.
With Solr 4.1, there is a pretty Admin UI and there is also a /browse page that shows how to use Velocity to build pages backed by Solr. Or have a look at something like Project Blacklight for an example of how to put a UI over Solr.
I found the link below, which answered my query:
http://cmusphinx.sourceforge.net/2012/06/building-a-java-application-with-apache-nutch-and-solr/
I agree; after reading the content available at the above link, I was quite annoyed with myself.
The Solr package provides all the required objects to query Solr.
In fact, the essential jars are just solr-solrj-3.4.0.jar, commons-httpclient-3.1.jar and slf4j-api-1.6.4.jar.
Anyone can build a Java search engine using these objects to query Solr and put a fancy UI on top.
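For reference, a minimal SolrJ query along those lines (using the 3.x-era API matching the jars above; the field names and query term are just placeholders for whatever your Nutch schema uses):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class SimpleSearch {
    public static void main(String[] args) throws Exception {
        // The Solr instance that Nutch pushes its crawl data into
        CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery("content:nutch"); // placeholder field and term
        query.setRows(10);

        QueryResponse response = server.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("url") + " : " + doc.getFieldValue("title"));
        }
    }
}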
Thanks again.

Service similar to Airbrake.io for java applications?

We made our own API client for airbrake.io in Java. This works fine, but Airbrake displays parameters and stack traces in some kind of Rails style, which is somewhat annoying. Does anyone know of similar services made for Java?
Example of how data is displayed:
Parameters
{"controller"=>"", "action"=>""}
Stacktrace
/testapp/app/models/user.rb:53:in `public'
/testapp/app/controllers/users_controller.rb:14:in `index'
UPDATE 2015-02-13: This service no longer exists. The GitHub account linked below is gone, as is the company website.
Have you tried using Coalmine? https://github.com/coalmine/coalmine_java It's meant to be used with the Coalmine service: https://getcoalmine.com/
I work at Coalmine and we have been using this internally for some time now. We just open-sourced the Java connector this week, and I would be happy to help you get started with it. You can send me an email at brad#builtfromsource.com.
Have you tried using http://code.google.com/p/hoptoad/ ? It's a little out of date, but it should just need an endpoint update to http://api.airbrake.io .
A quick Google search led me to http://logdigger.com/ , which is designed specifically for Java sites.
I work at Airbrake, and I would be happy to work with you to make our site more Java-friendly. Please get in touch at ben#airbrake.io, and I'll see how we can better display Java-specific information.
Just adding to the others suggested here, but Raygun (http://raygun.io) has first-class support for Java.
Read more here: http://raygun.io/java
I work for Mindscape, who built Raygun, so I can answer any questions you may have about it: jd#mindscape.co.nz. We already have a large number of organizations using Raygun with their Java apps, and Raygun also supports other platforms (.NET, Node, Rails, PHP, etc.).

Should web service be separate from web site? [closed]

I am building a website and also want to build a REST web service for accessing a lot of the same functionality (using Google App Engine and Spring MVC 3), and I'm not sure of the best practices for how integrated or separate the two parts should be.
For example, if I want to view a resource I can provide a URL of the form:
{resourcetype}/{resourceid}
A GET request to this URL can be directed at a view which generates a web page when the client is HTML/browser based. Spring has (from what I read - I have not tried it yet) the ability to use this same resource URL to serve up a view which returns HTML/XML/JSON depending on the content type. This all seems great.
POST requests to a URL to create new resources in REST should return 201 CREATED (or so I read) along with the URL of the created resource, which seems fine for the API but is a little different from what would be expected on a web page (where you would likely be redirected to a page showing the resource you created, or to a page saying it was created successfully, or similar). Should I handle this by serving up a page at a different URL which contains the form for creating the resource, which then submits to the API URL via AJAX, gets the response, and redirects to the resource URL included in the response?
This pattern seems like it would work (and should work for DELETE too), but is this a good approach, or am I better off keeping the REST API URLs and the website URLs separate? That seems like it could introduce a fair bit of duplication and extra work, but having a single URL could mean that you are dependent on JavaScript being available in the client's HTML5-capable browser.
I suggest you keep them separate. By doing this you gain several benefits.
First, you decouple your web URLs from your API URLs so they can each change independently. For example, you may need to release a backwards-incompatible change to your API, in which case you can create a /v2/ directory. Meanwhile, you might want an /about page on the website but don't need one for your API.
By using different URLs, you simplify your implementation. Now every method doesn't have to determine whether it's fronting JSON/XML or HTML. This is true even if you have a framework like Spring doing the heavy lifting; you still have to do extra things for the website with respect to the current user.
It also eliminates a whole class of bugs. For example, users won't get JSON or XML output back while browsing the site, even if they have custom browser anonymity settings.
You can easily separate the logic for authentication. With a website, you need a login page and cookies. With an API, those aren't required, but extra authentication headers are (for example, an HMAC-SHA256 signature).
Finally, by separating the site from the API, you allow for different scaling needs. If your API is getting hit hard but not the website, you can throw more hardware at the API while keeping the minimal needed for the website.
Update:
To clarify, I'm not suggesting you code everything up twice. There are two different ways to look at this that remove the duplication.
First, in MVC parlance, you have one model and two different views on that model. This is the whole point of MVC: the view and model are not tied together. The code to get a particular resource is the same for both clients, so you could write your model such that only one line of code gets that resource from the database or wherever it comes from. In short, your model is an easy-to-use library with two clients.
Another way to look at it is that your website is the very first client of your public REST API; the web server actually calls your RESTful API to get its information. This is the whole Eat Your Own Dog Food principle.
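As a rough illustration of the "one model, two views" idea (all class and method names here are made up, and this assumes Spring MVC 3 as mentioned in the question), the shared model can sit behind two thin controllers:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Service;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

class Widget {                        // hypothetical resource
    public long id;
    public String name;
}

@Service
class WidgetService {                 // the shared "model": the only code that loads widgets
    Widget findWidget(long id) {
        Widget w = new Widget();      // stand-in for a real DAO/database lookup
        w.id = id;
        w.name = "widget-" + id;
        return w;
    }
}

@Controller
class WidgetPageController {          // website client: renders an HTML view
    @Autowired private WidgetService widgets;

    @RequestMapping(value = "/widgets/{id}", method = RequestMethod.GET)
    public String view(@PathVariable("id") long id, Model model) {
        model.addAttribute("widget", widgets.findWidget(id));
        return "widget";              // resolved to an HTML template by the view resolver
    }
}

@Controller
class WidgetApiController {           // API client: the same model, serialized as JSON/XML
    @Autowired private WidgetService widgets;

    @RequestMapping(value = "/api/widgets/{id}", method = RequestMethod.GET)
    @ResponseBody
    public Widget get(@PathVariable("id") long id) {
        return widgets.findWidget(id);
    }
}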
I disagree with Michael's answer and use it as a basis for my own:
To "decouple your web urls from you API urls so they can each change indepentently." is to spit in the face of REST. Cool URIs don't change. Don't concern yourself with changing your URLs. Don't version your API using URIs. REST uses links to espouse the OCP - a client operating on the previous version of your API should happily follow the links that existed when it went live, unknowing of new links that you've added to enhance your API.
If you insist on versioning your API, I'd ask that you do so using the media type, not the URI.
Next, " you simplify your implementation. Now every method doesn't have to determine if it's fronting JSON/XML or HTML."
If you're doing it this way, you're doing it wrong. Coming from Jersey, I return the same damn object from every method whether or not I produce HTML, XML or JSON. That's a completely cross-cutting concern that a marshaller takes care of. I use a VelocityMessageBodyWriter to emit HTML templates surrounding my REST representations.
Every resource in my system follows the same basic behavior:
class FooResource extends Resource {

    @GET
    public FooRepresentation get() {
        return new FooRepresentation();
    }

    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Response postForm(MultivaluedMap<String, String> form) {
        return post(buildRepresentationFromForm(form));
    }

    @POST
    @Consumes(MediaType.APPLICATION_XML)
    public Response post(FooRepresentation representation) {
        // return ok, see other, whatever
    }
}
The GET method may need to build Links to other resources. These are used by consumers of the REST API as well as by the HTML template. A form may have to POST, and I use some particular Link (defined by its relation) for the "action" attribute.
The path through the system may be different for a user in a browser and for a user that is a machine, but this is not a separation of implementation - it is REST! The state transfer happens how it needs to, how you define it, and how users (in a browser) proceed.
"Finally, by separating the site from the API, you allow for different scaling needs. If your API is getting hit hard but not the website, you can throw more hardware at the API while keeping the minimal needed for the website." - your scaling should not depend on who uses what here. Odds are you're doing some work behind the scenes that is more intense than just serving up HTML. By using the same implementation, scaling is even easier. The API itself (the marshalling between XML and domain objects and back) is not going to cost more than the business logic, processing, database access, etc.
Lastly, by keeping them the same, it is much easier to think about your system as a whole. In fact, start with the HTML. Define the relationships. If you're having a hard time expressing a particular action or user story in HTML anchors and forms, you're probably straying from REST.
Remember, you're expressing things as links (of a particular relation) to other things. Those URIs can even be different depending on whether you're producing XML or HTML - the HTML page might POST to URI some/uri/a and the API might POST to some/uri/b. That's irrelevant, and being concerned with what the actual URI contents are is the dark path to POX and RPC.
Another nifty feature is that if you do it this way, you're not dependent on JavaScript. You've defined your system to work with basic HTML, and you can 'flip on' JavaScript when it is available. Then you're really working with your "API" anyway (I cringe at referring to them as different things, but I'm also trying to bridge my response into your wording).
One final comment: when producing HTML, I use 303 instead of 201 in order to facilitate POST-then-GET. If JS is enabled, you're actually talking XML (or JSON) and you're back to 201.
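A minimal sketch of that 303-versus-201 behaviour in JAX-RS terms (the store() helper and FooRepresentation.getId() are made up for the example):

@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
public Response postForm(MultivaluedMap<String, String> form, @Context UriInfo uriInfo) {
    FooRepresentation created = store(buildRepresentationFromForm(form)); // hypothetical persistence helper
    URI location = uriInfo.getAbsolutePathBuilder().path(created.getId()).build();
    // Plain HTML form post: 303 See Other, so the browser follows up with a GET.
    return Response.seeOther(location).build();
}

@POST
@Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
public Response post(FooRepresentation representation, @Context UriInfo uriInfo) {
    FooRepresentation created = store(representation);
    URI location = uriInfo.getAbsolutePathBuilder().path(created.getId()).build();
    // API client: 201 Created, with the canonical URI in the Location header.
    return Response.created(location).build();
}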
In support of Michael, and in contrast to Doug, you should keep them separate.
It turns out that the browser, in its basic form, is not a particularly good REST client. From its lack of full HTTP support, to lousy authentication support, to weak support for media types, the browser is actually quite limited. If you're limited to simply consuming content, and that content happens to be HTML, then the browser is fine, but beyond that the API suffers due to poor support in the browser.
JavaScript can improve the capability of the browser and make it a better REST citizen, but few things work better in the browser than a static HTML page. Portable, performant, scalable to different devices with some CSS fun. Everyone loves the snap of a static page, especially one that isn't hosting a zillion images and whatnot from other slow providers. Click, BANG, a fast-appearing, fast-scrolling page.
Since the browser is a sad citizen, you shouldn't limit your API to its weak capabilities. By separating them, you can write a nice, rich, animated, bouncy, exciting interface in HTML + JS, or Flash, or Java, or Obj-C for iOS, or Android, or whatever.
You could also write a nice front end in PHP, hosted on your server, and pushing the results out to browser clients. The PHP app doesn't have to be REST at all, it can be just a generic web app, working in the domain and constraints of a web app (stateful sessions, non-semantic markup, etc. etc.). Browser talks to PHP, PHP talks to your REST service. The PHP app lets you segregate the demands of the browser from the semantics of a REST service.
You can write more RESTful HTML apps, even with pure HTML. They just turn out to be pretty crummy apps that folks don't like to use.
Obviously there is a lot of possible overlap between a generic web app and a REST service, but overlap is not equality, and they are different. HTTP != REST, using HTTP does not mean you're using REST. HTTP is well suited to REST applications, but you can certainly use HTTP in non-RESTful ways. People do it all day long.
So, if you want to use REST as a service layer, then do that. Leverage REST for REST sake, and build up your service. Then, start working on clients and interfaces that leverage that service. Don't let your initial client choice color the REST service itself. Focus on use cases of functionality, then build your clients around that.
As the needs of each component change, they can grow in congruence with, or separately from, each other as necessary. You don't have to punish the mobile apps for a change that a browser requires, or vice versa. Let each piece be its own master.
Addenda:
Sam -
There's nothing wrong with offering a hybrid approach where some of the requests are served directly by your REST service layer while others are handled through a server side proxy. As long as the semantics are the same, it doesn't really matter. Your REST service certainly doesn't care. But it does potentially become problematic if the REST service returns link rels that are specific to the "raw" REST service rather than the hybrid. Now you have an issue of translating the representation, etc. My basic point is to not let the browser limitations wag your REST API, you can use a separate facade and let the browser influence that.
And this is a logical separation; whether that's manifested in the URL patterns, I have no opinion - that's more of a development/maintenance/deployment call. I find that logical separations that can be manifested physically have some benefits in terms of clarity and understanding, but that's me.
Doug -
The raw HTML user experience is a crummy one. If it weren't, there wouldn't be an entire industry devoted to making the browser application experience un-crummy. Of course it can be functional, and HTML is an excellent media type for REST applications BECAUSE of the tooling around browsers and such that makes working with the interface, viewing artifacts, and interacting with the service easier to do. But you don't design your service API around your debugger, and the raw browser is an incomplete tool for fully exploiting HTTP. As a host for JS through XHR, it gets more capable, but then we're talking about a "rich client" rather than just straight HTML in a browser.
While you can make POST facades for DELETE et al., as in your example, the only reason you ARE doing that is because of the limitations of the browser, not for the sake of the API itself. If anything, it clutters and complicates the API. "More than one way to do it."
Now, obviously you can tunnel everything through POST, simply because you like to tunnel everything through POST. But by hiding things this way you bypass other aspects of the protocol. If you POST foo/1 to /deleteFoos, you won't be able to leverage things like caches. A DELETE on foo/1 would invalidate any caches that see the operation, but a POST would slip right through, leaving the old, now deleted, resource behind.
So, there's reasons why there are other verbs than POST and GET in the protocol, even if the native browser chooses not to use them.
I think it is reasonable of you to require JavaScript so you can use AJAX techniques to do what you suggested in the third paragraph of your question.
Also, to clarify: no matter the client, use the Location header of the 201 response to indicate the canonical URI of the newly created resource. It can be checked by your JavaScript, for clients that use it.
For 'dumber' clients (like a browser with JS disabled), a somewhat ugly way to have them do the redirect is to include an HTML meta refresh in the head section of the HTML representation returned from the POST. The body of the response can then just briefly describe the new resource, so long as you also send the Location header.
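A rough sketch of that fallback, again in JAX-RS terms (the markup and the helper name are illustrative only):

// Returns 201 Created with a Location header, plus an HTML body whose
// meta refresh sends non-JS browsers on to the newly created resource.
private Response createdWithMetaRefresh(URI location, String summary) {
    String html = "<html><head>"
            + "<meta http-equiv=\"refresh\" content=\"0; url=" + location + "\"/>"
            + "</head><body>" + summary + "</body></html>";
    return Response.created(location)
            .type(MediaType.TEXT_HTML)
            .entity(html)
            .build();
}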

What is the best strategy for a log analysis application?

I will develop a web application to view and analyze log files, both from remote machines and locally, and I am planning to use Java. At first glance it seems the application must work with big data sets effectively. For example, to list a log file in the browser I should implement a paginated list working with AJAX (the server will return data according to the current page number). I would also like to use AJAX.
My question is how I should design an application like this. I have three possibilities:
AJAX with RESTful service.
JSP and servlet
JSF with AJAX
I would suggest you have a look at Chainsaw - http://logging.apache.org/chainsaw/index.html - and Lilith - http://lilith.huxhorn.de/ - to see how others have approached this.
The released version of Chainsaw is pretty old - a MAJOR update will be released shortly. If you want to try out a pre-release version, you can see a screenshot and get the tarball or Mac DMG here:
http://people.apache.org/~sdeboy/

Automatic sitemap generation

We have recently installed a Google Search Appliance in order to power our internal search (via the Java API), and all seems to be well. However, I have a question regarding 'automatic' sitemap generation that I'm hoping you may know the answer to.
We are aware of the GSA's ability to auto-generate sitemaps for each of its collections; however, this process is rather manual, and considering that we have around 10 regional sites that need to be updated as often as possible, it's not ideal to have to log into the admin interface on a regular basis in order to export them to the site root where search engines can find them.
Unfortunately there doesn't seem to be any API support for this, at least none that I can find, so I was wondering if anyone had any ideas for a solution or workaround or, failing that, the best alternative.
At present I'm thinking that if we can get the full index back from the API in the form of a list, then we can write out an XML file the old-fashioned way using a cron job or similar, but this seems like a bit of a clumsy solution - any better ideas?
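For what it's worth, the "write an XML file out" part is straightforward; here is a rough sketch that turns a list of URLs into a sitemap.xml in the standard sitemap protocol format (the URL list itself would still have to come from the GSA, however you manage to extract it):

import java.io.PrintWriter;
import java.util.Arrays;
import java.util.List;

public class SitemapWriter {

    public static void writeSitemap(List<String> urls, String outputPath) throws Exception {
        PrintWriter out = new PrintWriter(outputPath, "UTF-8");
        try {
            out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
            out.println("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">");
            for (String url : urls) {
                // Note: URLs containing &, < or > would need XML escaping here.
                out.println("  <url><loc>" + url + "</loc></url>");
            }
            out.println("</urlset>");
        } finally {
            out.close();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder data; in practice this list would be built from the GSA index.
        writeSitemap(Arrays.asList("http://example.com/", "http://example.com/about"), "sitemap.xml");
    }
}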
You could try the GSA Admin Toolkit, or simply write some code yourself which just logs in on the administration page and then uses that session to invoke the sitemap export URL (which is basically what the Admin Toolkit does).
