JSP/Tomcat: Navigation system with sub-folders but one page - java

My JSP project is the back-end of a fairly simple site whose purpose is to display many submissions. They are organized in categories, much like a typical forum.
The content is loaded entirely from a database since making separate files for everything would be extremely redundant.
However, I want to let users navigate the site properly and also give each submission a unique link.
So for example a link can be: site.com/category1/subcategory2/submission3.jsp
I know how to generate those links, but is there a way to automatically route all the theoretically possible links to the main site.com/index.jsp?
The Java code of the JSP needs access to the original link of course.
Hope someone has an idea..
Big thanks in advance! :)

Alright, in case someone stumbles across this one day...
The way I've been able to solve this was by using a Servlet. Eclipse lets you create one directly in the project, and the wizard even lets you set the URL mapping, for example /main/*, so you don't have to edit web.xml yourself.
The doGet method simply contains the forward as follows:
request.getRequestDispatcher("/index.jsp").forward(request,response);
This kind of forwarding unfortunately causes all relative links in the webpage to fail. One workaround is to reference those resources with absolute paths from the context root. See the answers here for alternatives: Browser can't access/find relative resources like CSS, images and links when calling a Servlet which forwards to a JSP
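To sketch the whole approach (class and method names here are illustrative, not from the original project): the servlet mapped to /main/* forwards to index.jsp, and the JSP reads the original URI back and splits it into category/subcategory/submission segments. The servlet-API wiring is shown in comments; the path parsing itself is plain Java.

```java
// In the servlet's doGet (sketch, assuming a mapping of /main/*):
//   request.getRequestDispatcher("/index.jsp").forward(request, response);
// After the forward, index.jsp can still read request.getRequestURI()
// and split it into segments, e.g. with a helper like this:
public class PathParser {

    /** Splits "/main/category1/subcategory2/submission3.jsp" into
     *  {"category1", "subcategory2", "submission3"}. */
    public static String[] parse(String uri, String mappingPrefix) {
        String path = uri.startsWith(mappingPrefix)
                ? uri.substring(mappingPrefix.length())
                : uri;
        if (path.endsWith(".jsp")) {
            path = path.substring(0, path.length() - ".jsp".length());
        }
        if (path.startsWith("/")) {
            path = path.substring(1);
        }
        return path.split("/");
    }

    public static void main(String[] args) {
        String[] parts = parse("/main/category1/subcategory2/submission3.jsp", "/main");
        System.out.println(String.join("|", parts));
    }
}
```

The JSP can then look the segments up in the database and render the right submission.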

Related

Confused about integration of Java file, JSPs, servlets?

This is my first time working with Java and tomcat and I'm a little confused about how everything fits together - I've googled endlessly but can't seem to wrap my head around a few concepts.
I have completed a Java program that outputs bufferedImages. My goal is to eventually get these images to display on a webpage.
I'm having trouble understanding how my java file (.java) which is currently running in NetBeans interacts with a servlet and/or JSP.
Ideally, a servlet or JSP (I'm not 100% clear on how either of those works; I mostly understand the syntax from looking at various examples) could get my output (the BufferedImages) when the program runs, and the HTML file could somehow interact with whatever they are doing so that the images could be displayed on the webpage. I'm not sure if this is possible. If anyone could suggest a general order of going about things, that would be awesome.
In every example/tutorial I find, no one uses .java files - there are .class files in the WEB-INF folder -- it doesn't seem like people are using full-on Java programs. However, I need my .java program to run so that I can retrieve the output and use it in the webapp.
Any general guidance would be greatly appreciated!
I think this kind of documentation is sadly lacking; too many think that an example is an explanation, and for all the wonderful things you can get out of an example, sometimes an explanation is not one of them. I'm going to attempt to explain some of the overall concepts you mentioned; they aren't going to help you solve your buffered image display problem directly, unfortunately.
Tomcat and other programs like it are "web servers"; these are programs that accept internet connections from other computers and return information in a particular format. When you enter a "www" address in a browser, the string in that address eventually ends up (as a "request") at a web server, which then returns you a web page (also called a "response"). Tomcat, Apache, Jetty, JBoss, and WebSphere are all similar programs that do this sort of thing. In the original form of the world-wide-web, the request string represented a file on the server machine, and the web server's job was to return that (html) file for display in the browser.
A Servlet is a kind of java program that runs on some web servers. The servlet itself is a java class with methods defined by the javax.servlet.Servlet interface. In webservers that handle servlets, someone familiar with the configuration files can instruct the web server program to accept certain requests and, instead of returning an HTML file (or whatever) from the server, to instead execute the servlet code. A servlet, by its nature, returns content itself - think of a program that outputs HTML and you're on the right track.
But it turns out to be a pain to output complete HTML from a program -- there's a tedious amount of HTML that doesn't have much to do with the "heavy lifting" for which you need a programming language of some sort. You have to have Java (or some language) to make database inquiries, filter results, etc., but you don't really need Java to emit the <html>, <head>, <body> and the hundreds of other tags that a modern web page needs.
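To make that concrete, here is a toy sketch (all names are illustrative) of what a servlet effectively does: build markup as a string and write it out. Even for a trivial page, the boilerplate dominates the dynamic part.

```java
public class PageRenderer {

    /** Builds the kind of markup a servlet writes to its response stream.
     *  Only the title and body are dynamic; everything else is boilerplate. */
    public static String render(String title, String body) {
        return "<html><head><title>" + title + "</title></head>"
             + "<body><p>" + body + "</p></body></html>";
    }

    public static void main(String[] args) {
        // In a real servlet this string would go to response.getWriter()
        System.out.println(render("Results", "42 products found"));
    }
}
```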
So a JavaServer Page (JSP) is a special kind of hybrid: a combination of HTML and servlet machinery. You CAN put Java code directly in a JSP file, but it is usually considered better to use HTML-like 'tags', which are then interpreted by a "JSP compiler" and turned into a servlet. The creator of the JSP page learns to use these tags, which are (if correctly constructed) more natural for web page creators than the Java programming language, and in fact doesn't have to be a programmer at all. So a programmer, working with this content-oriented person, creates tags that describe how the page should look; the programmer does the programming, and the content person creates the web pages with the tags.
For your specific problem, we'll need more detail to help you. Do you envision this program running and using some information provided by the user as part of his request to generate the images? Or are the images generated once and now you just need to display them? I think that's a topic for another question, actually.
This ought to be enough to get you started. I would now suggest the wikipedia articles on these things to get more details, and good luck getting your head around the concepts. I hope this has helped.
This addendum provided after a comment you made about wanting to do a slideshow.
An important web programming concept is the client-server and request-response nature of it. In the traditional, non-Javascript web environment, the client (read browser) sends a request to the server, and the server sends back bytes. There is no ongoing connection between the two computers after the stream of bytes finishes, and there are restrictions on how long that stream of bytes can continue. Additionally, outside of this request and response, the server usually has no capability to send anything to the client unless the client requests it; the client 'drives' the exchange of data.
So a 'slideshow', for instance, where the server periodically sends bytes representing an additional image, is not the way HTML works (or was meant to work). You could do one under the user's control: the user presses a button for each next picture, the browser sends a request for the next picture and it appears in the place where the previous one was. That fits the request-response paradigm.
Now, the effect of an automatic slideshow is possible using Javascript. Javascript, which despite the name is unrelated to Java, is a scripting language; it is part of an HTML page, is downloaded with the page to the browser, and it runs in the browser's environment (as opposed to a JSP/servlet, which executes on the server). You can write a timer in Javascript, and it can wait N seconds and send another request to the server (for another picture or whatever). Javascript has its own rules, etc., but even so I think it a good idea to keep in mind that you aren't just doing HTML any more.
If a slideshow is what you are after, then you don't need JSP at all. You can create an HTML page with places for the picture being displayed, labels and text and etc., buttons for stopping the slideshow and so forth, in HTML, and Javascript for requesting additional pictures.
You COULD use JSP to create the page, and it might help you depending on how complex the page is, but it isn't going to help you with an essential function: getting the next picture for the slideshow. When the browser requests a JSP page:
the request goes to the server,
the server determines the page you want and that it is a JSP page,
the server compiles that page to a servlet if it hasn't already,
the servlet runs, producing HTML output according to the tags now compiled into Java,
the server returns HTML to the browser.
Then the server is done, and more bytes won't go to the browser until another request is made.
Again, I hope this has helped. Your example of a slideshow has revealed some basic concepts that need to be understood about web programming, servers, HTML, JSPs, and Javascript, and I wish you luck on your journey through them all. And if you come to think of it all as a bit more convoluted than it seems it needed to be, well, you won't be the first.
You can create a JSP that invokes a method in your Java class to retrieve the BufferedImage. Then you must set the content type to the appropriate image type:
response.setContentType("image/png")
The tricky part is that you must print the image from the JSP, so you have to call:
response.getOutputStream()
from your JSP, and with that OutputStream you must pass the bytes of your BufferedImage.
Note that in that JSP you'll not be able to print out HTML, only the image.
I'm not sure where you need more clarification, as it seems you're a bit confused about the concepts.
BTW.: A JSP is just a servlet that has an easier syntax to write HTML and Java code together.
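A minimal sketch of the image-serving part (the class name and dimensions are illustrative): the encoding is done by javax.imageio.ImageIO. In a real servlet you would set the content type and pass response.getOutputStream() instead of the in-memory stream used here for demonstration.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.imageio.ImageIO;

public class ImageResponder {

    /** Encodes the image as PNG onto the given stream. In a servlet, call
     *  response.setContentType("image/png") first, then pass
     *  response.getOutputStream() here. */
    public static void writePng(BufferedImage image, OutputStream out) throws IOException {
        ImageIO.write(image, "png", out);
    }

    public static void main(String[] args) throws IOException {
        BufferedImage image = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writePng(image, out);
        byte[] bytes = out.toByteArray();
        // Every PNG starts with the fixed signature 0x89 'P' 'N' 'G'
        System.out.println((bytes[0] & 0xFF) == 0x89
                && bytes[1] == 'P' && bytes[2] == 'N' && bytes[3] == 'G');
    }
}
```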

Make GWT Crawlable (SEO)

I'd like to make my GWT app crawlable by the Google bot. I found this article (https://developers.google.com/webmasters/ajax-crawling/). It states there should be a servlet filter that serves a different view to the Google bot. But how can this work? If I use, for example, the Activities and Places pattern, then the page changes happen on the client side only and no servlet is involved -> a servlet filter does not work here.
Can someone give me an explanation? Or is there another good tutorial tailored to gwt how to do this?
If you use Activities&Places your "pages" will have a bookmarkable URL (usually composed of the HTML host page, a #, and some tokens separated by ! or other character).
Thus, you can place links (<a> elements) in your application to make it crawlable. If a link contains the proper structure (the one with # and tokens), it will navigate to the proper Place.
Have a look at https://developers.google.com/web-toolkit/doc/latest/DevGuideMvpActivitiesAndPlaces
So here is the solution to the actual problem:
I wanted to make my GWT app (running on Google App Engine) crawlable by the Google bot and followed this documentation: "https://developers.google.com/webmasters/ajax-crawling/". I was trying to apply a servlet filter that filters every request to my app, checks for the special fragment that the Google bot adds to the escaped URL, and presents a special view to the bot using a headless browser.
But the filter was not applied to the "MyApp.html" file. I found out that all such files are treated as static files, which bypass the filter chain. I had to exclude the ".html" files from the static files, which I did by adding an exclude entry for them to the static-files section in "appengine-web.xml".
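The exact line was lost above; based on App Engine's documented appengine-web.xml syntax, the exclude entry presumably looked something like this (the path pattern is an assumption):

```xml
<static-files>
  <!-- Serve .html through the app (and its filters) instead of as static files -->
  <exclude path="**.html" />
</static-files>
```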
I hope this will help some people with the same problem to save some time :)
Thanks and best regards
jan

Interacting with an AJAX site from Java

I am trying to download the contents of a site. The site is a Magento site where one can filter results by selecting properties in the sidebar. See zennioptical.com for a good example.
Using zennioptical.com as an example, I need to download all the rectangular glasses, or all the plastic ones, etc.
So how do I send a request to the server to display only the rectangular frames, etc.?
Thanks so much
The basic answer is that you need to do an HTTP GET request with the correct query params. I'm not totally sure how you are trying to do this based on your question, so here are two options.
If you are trying to do this from javascript you can look at this question. It has a bunch of answers that show how to perform AJAX GETs with the built in XMLHttpRequest or with jQuery.
If you are trying to download the page from a java application, this really doesn't involve AJAX at all. You'll still need to do a GET request but now you can look at this other question for some ideas.
Whether you are using JavaScript or Java, the hard part is going to be figuring out the right URLs to query. If you are scraping someone else's site you will have to see what URLs your browser requests when you filter the results. One of the easiest ways to see that info is in Firefox with the Web Console, found at Tools->Web Developer->Web Console. You could also download something like Wireshark, which is a good tool to have around, but probably overkill for what you need.
EDIT
For example, when I clicked the "rectangle frames" option at zenni optical, this is the query that fired off in the Web Console:
[16:34:06.976] GET http://www.zennioptical.com/?prescription_type=single&frm_shape%5B%5D=724&nav_cat_id=2&isAjax=true&makeAjaxSearch=true [HTTP/1.1 200 OK 2328ms]
You'll have to do a sufficient number of these to figure out how to generate the URLs to get the results you want.
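In Java, assembling such a query URL can be sketched like this (the parameter names are copied from the logged request above, but treat them as examples since the site can change them at any time):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class QueryBuilder {

    /** Builds a GET URL from a base address and query parameters,
     *  URL-encoding each key and value. */
    public static String build(String base, Map<String, String> params)
            throws UnsupportedEncodingException {
        StringBuilder url = new StringBuilder(base);
        String sep = "?";
        for (Map.Entry<String, String> e : params.entrySet()) {
            url.append(sep)
               .append(URLEncoder.encode(e.getKey(), "UTF-8"))
               .append('=')
               .append(URLEncoder.encode(e.getValue(), "UTF-8"));
            sep = "&";
        }
        return url.toString();
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("prescription_type", "single");
        params.put("frm_shape[]", "724"); // note: [] gets encoded as %5B%5D
        params.put("isAjax", "true");
        System.out.println(build("http://www.zennioptical.com/", params));
    }
}
```

The resulting URL can then be fetched with java.net.HttpURLConnection or any HTTP client library.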
DISCLAIMER
If you are downloading someone else's data, it would be best to check with them first. The owner of the server may not appreciate what they might consider stealing their data/work. And then, depending on how you use the data you pull down, you could be venturing into all sorts of ethical issues... Then again, if you are downloading from your own site, go for it.

How to edit HTML file's content dynamically

Up until now, when I needed to update the content of any pages, I have always had to update the source code directly and re-deploy the whole application. Right now, I want to implement a feature such that I can update the content of any HTML pages dynamically without having to re-deploy the application.
I tried to implement the feature with PrimeFaces's <p:editor> component but it does not work. To be more precise, my functions do correctly update the required page: when I go to the source code folder, I can actually see my changes. However, subsequent requests for the page still render the old content.
I'd be very grateful if you could show me what I have done wrong. I'd also appreciate it very much if you could show me any other ways to achieve the same goal.
I think you are editing your work-space from your deployment. :)
You have 2 places with the code. One is deployed, and the other in your "working space".
First, it sounds to me like you want your working space to be the deployment. This way whenever you are editing something, you will be changing the deployment directly. For that, simply create a new project in your IDE and point it to the deployment folder.
I bet that :
C:\\Users\\James\\Documents\\NetBeansProjects\\MyProject\\MyProject-war\\web\\
points to your work-space and not the deployment. So, effectively, your deployment is editing your work-space.
I think you are looking for this one:
FacesContext.getCurrentInstance().getExternalContext().getRealPath("/")
and if you want the location of the WEB-INF
use the following
String fullpath = FacesContext.getCurrentInstance().getExternalContext().getRealPath("/")+File.separator+"WEB-INF";
and so on...
My code actually was working perfectly. Based on user1068746's answer above, I did some research and found this article. The solution is very simple: create a virtual directory mapping to a directory on my hard disk. As a result, any updates to the files on the hard disk are immediately visible to future requests.

Crawlers in JSP/Struts/Session controlled Webapps

I have a Struts web application (running on Tomcat 6) in which all files, except the first one that invokes a starting action, are located in WEB-INF, and you always need a session to use it; otherwise you are redirected to the starting action and starting page.
The app's main function is a search that returns products from a database. How does a crawler navigate my app? Does it trigger the search, which could lead it to error pages? Or can it only follow links that are not embedded in forms? (Struts turns nearly everything into forms, so there are only a few links, and mostly onclick redirects and form actions.)
How can I provide useful, indexable information to a crawler like this?
thanks for advice :)
Sounds like you would be best off reading up on some SEO guidelines: http://www.google.com.au/search?q=seo+guidelines&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-GB:official&client=firefox-a&safe=high
To answer your questions:
Crawlers will generally navigate to your app from external links on the web, or after you submit your site to the search engine.
The crawler won't fill in inputs and submit forms; it will follow hyperlinks between your pages.
If you want the crawler to index your search results (can't really see why you would want this) you can put links to common searches on one of your already indexed pages.
You should make sure that your product pages are SEO friendly and are indexed instead of your search results.
