How to create google like instant search using JSP and servlets? [closed] - java

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am working on a basic instant search function that basically searches the database and displays the results instantly just like google instant. This here http://woorkup.com/2010/09/13/how-to-create-your-own-instant-search/ looks promising but I want to know if there is a way to implement this using JSP, java/servlets.

Java and servlets alone will not be sufficient; you will need JavaScript on the client side. Basically you attach a listener to the input field and send an AJAX request to a JSP or servlet that does the search and returns the results, which you then only have to format and display in a drop-down box below the input field.
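To make the server side concrete, here is a minimal sketch of such a search endpoint as a servlet. The servlet name, URL pattern and the in-memory item list are my own assumptions for illustration; in a real application you would query your database instead, ideally with a prepared statement:
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint the client-side JavaScript would call on every keystroke,
// e.g. GET /search?q=ja
@WebServlet("/search")
public class InstantSearchServlet extends HttpServlet {

    // Stand-in for a database query; replace with a query against your own table.
    private static final List<String> ITEMS =
            Arrays.asList("Java", "JavaScript", "JSP", "Servlets", "jQuery");

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String query = request.getParameter("q");
        response.setContentType("text/html;charset=UTF-8");
        if (query == null || query.trim().isEmpty()) {
            return; // nothing typed yet, return an empty suggestion list
        }
        String q = query.trim().toLowerCase();
        StringBuilder html = new StringBuilder("<ul>");
        for (String item : ITEMS) {
            if (item.toLowerCase().startsWith(q)) {
                html.append("<li>").append(item).append("</li>");
            }
        }
        html.append("</ul>");
        // The JavaScript on the page drops this fragment into the drop-down below the input field.
        response.getWriter().write(html.toString());
    }
}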

This is also a very good tutorial about instant search:
http://www.w3schools.com/php/php_ajax_livesearch.asp
It uses JavaScript and PHP. By reading/doing this tutorial you should get an idea of how instant search works, so I hope this helps even if you want to use JSP.

You can do this using jQuery. The jQuery UI autocomplete is nice and easy to implement:
http://jqueryui.com/demos/autocomplete/

As previous posters have pointed out, you will have to use JavaScript to do this. The least painful way to use JavaScript here is jQuery UI.
There is a fairly straightforward walkthrough here: http://blog.comperiosearch.com/2012/06/make-an-instant-search-application-using-json-ajax-and-jquery/

This is an oldie-but-goodie:
http://lab.abhinayrathore.com/autocomplete/
It combines Google, Bing, Yahoo, Wiki, Amazon, etc. all in one instant autocomplete, and allows you to easily add/remove websites.

Related

every day xpath are changing [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 years ago.
driver.findElement(By.xpath("//*[@id=\"__box23-arrow\"]")).click(); // dropdown
driver.manage().timeouts().implicitlyWait(50, TimeUnit.SECONDS);
Thread.sleep(5000);
driver.findElement(By.xpath("//*[@id=\"__item1283-__box23-2\"]")).click();
Every time my xpath changes, for example //*[@id="__box23-arrow"] becomes //*[@id="__box24-arrow"]. I'm doing automation for SAP. Can you please give any other solution?
If your xpath will always be changing, then for your Selenium code to work there should at least be some pattern in how it changes; for example, it may depend on the current date. You can then generate your xpath dynamically every time you run your script (see the sketch below). If there is no such pattern and no static content that would let you use contains() in the xpath, you should check out other tools like Sikuli, which uses image recognition to identify your element. This again assumes that the visible appearance of your element stays the same.
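For example, a rough sketch of generating the locator at runtime. The starting index and the rule for computing it are pure assumptions; substitute whatever pattern you actually observe:
// Assumption: the numeric suffix in the id can be computed from some observable pattern.
int boxIndex = 23; // hypothetical: derive this from your pattern, e.g. the date or the run count
String arrowXpath = String.format("//*[@id='__box%d-arrow']", boxIndex);
driver.findElement(By.xpath(arrowXpath)).click();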
There is also a wave of new testing products powered by AI, like Testim, which are "self healing", meaning they adapt to changes in the source code. I haven't used them, but they are probably what you want.
If you know the beginning of your id, which is static throughout, you can use starts-with():
//*[starts-with(@id, '__box')]
This will give you the element(s) whose id starts with '__box'.
Hope this helps!
You can also write a dynamic xpath using the contains() function. Please refer to the example below:
//a[contains(@id, 'ctl00_btnAircraftMapCell')]
As per the HTML you have shared with us, you can try this xpath:
//span[@role='button' and contains(@class, 'sapMComboBoxArrow sapMComboBoxBaseArrow sapMComboBoxTextFieldArrow')]
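Plugging those partial-match locators into the question's own Selenium code looks roughly like this; the exact locators are assumptions based on the snippets above, so verify them against your actual page:
// Match on the stable parts of the id instead of the changing number.
// findElement returns the first match, so make sure this is unique enough on your page.
driver.findElement(By.xpath("//*[starts-with(@id, '__box') and contains(@id, '-arrow')]")).click();

// Or ignore the id entirely and use stable attributes such as role/class.
driver.findElement(By.xpath("//span[@role='button' and contains(@class, 'sapMComboBoxArrow')]")).click();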

Best way to run Java code then send an email in JSP? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am creating a website for work using a Tomcat Web Application. One of the pages requires the user to enter their name, email, and phone number. From that point I need to grab the values from each field, generate a PDF, and email it to the user.
I suspect this is going to be a large chunk of Java code, so I'd rather do it by calling methods in a .class file rather than using
<% /* code here */ %>
What is the best way of going about something like this?
I am currently using an MVC approach, found here http://simple.souther.us/ar01s06.html,
although I believe I am overcomplicating the process.
I just need to simply grab text fields, run a Java method (lots of Java code), then display "The PDF has been sent to EMAIL".
Thank you for the help.
You can let a servlet process the data the user entered on the first JSP page. From there you can call your PDF and email methods, which can live in a utility class. Then just redirect (or forward) the user to another JSP page displaying "The PDF has been sent to EMAIL", or to an error page if something goes wrong.
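A minimal sketch of that flow is below. The servlet name, form field names, JSP paths and the PdfMailUtil class are all assumptions; the actual PDF generation and mail-sending code would live in your own utility class:
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/sendPdf")
public class SendPdfServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Field names must match the <input name="..."> values in your form JSP.
        String name  = request.getParameter("name");
        String email = request.getParameter("email");
        String phone = request.getParameter("phone");
        try {
            byte[] pdf = PdfMailUtil.generatePdf(name, email, phone); // hypothetical utility method
            PdfMailUtil.sendEmail(email, pdf);                        // hypothetical utility method
            request.setAttribute("email", email);
            request.getRequestDispatcher("/confirmation.jsp").forward(request, response);
        } catch (Exception e) {
            request.setAttribute("error", e.getMessage());
            request.getRequestDispatcher("/error.jsp").forward(request, response);
        }
    }
}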

Algorithm of crawling Top10 PR/Alexa sites [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm trying to write a script which will crawl the current top 10 PR/Alexa sites. Since PR/Alexa rankings change frequently, my script should take care of this; a site that is not in the top 10 today could be there tomorrow.
I don't know where to start. I know crawling concepts, but here I'm stuck. There could be a top 50 or even a top 500, which I can configure of course.
I read about the Google spider, but it's very complicated for this simple task. How do Google, Yahoo and Bing crawl billions of sites around the web? I'm just curious what the starting point is, I mean how Google can identify a newly launched site.
OK, those are very deep details and I can read about them later. Right now I'm more concerned about my problem: how can I crawl the top 10 PR sites?
Can you provide a sample program so that I can understand better?
It's rather simple to fetch the top 25 sites (if I understood correctly what you wanted to do).
Code:
from bs4 import BeautifulSoup
from urllib.request import urlopen

# Fetch the Alexa top sites page and print the site names.
b = BeautifulSoup(urlopen("http://www.alexa.com/topsites").read())
paragraphs = b.find_all('p', {'class': 'desc-paragraph'})
for p in paragraphs:
    print(p.a.text)
Output:
Google.com
Facebook.com
Youtube.com
Yahoo.com
Baidu.com
Wikipedia.org
(...)
But bear in mind that scraping may be restricted by law in some countries or by the site's terms. Do it at your own risk.
Alexa has a paid API you can use.
There is also a free API (though I haven't been able to find any documentation for it anywhere):
http://data.alexa.com/data?cli=10&url=%YOUR_URL%
You can also query for more data the following way:
http://data.alexa.com/data?cli=10&dat=snbamz&url=%YOUR_URL%
The letters in dat determine which info you get; this dat string is the one I've found that seems to offer the most options. Also, cli changes the output completely: this option makes it return an XML document with quite a lot of information.
EDIT: This API is the one used by the Alexa toolbar.
Fetching Alexa data
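If you prefer to stay in Java, here is a minimal sketch that fetches the free endpoint mentioned above and prints the raw XML. The element names inside the response are not documented anywhere I know of, so inspect the output before parsing anything specific:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class AlexaDataFetcher {
    public static void main(String[] args) throws Exception {
        // Same endpoint as above; replace google.com with the site you want to look up.
        URL url = new URL("http://data.alexa.com/data?cli=10&dat=snbamz&url=google.com");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw XML; parse it once you know which elements you need
            }
        }
    }
}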

Selenium Webdriver (Java): What are the benefits (if any) of using an objectmap.properties file instead of Page Objects classes? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I'm implementing Selenium WebDriver 2 automated testing for our website, and I am unable to find a clear assessment of the benefits of using an objectmap.properties file to store all the element locators, versus storing them in Page Object Java classes.
Also, it seems that using Java classes for Page Objects lets you expose and abstract page operations in those Page Object classes too, whereas I'm not clear how this would be done with an objectmap.properties file.
Or have I missed the point and the two are best used in conjunction?
Thanks in advance!
This is purely subjective. Some people prefer the simplicity of my_object=something and then just fetching it with objectmap.get('my_object'), while others prefer using objects in Java, e.g. LoginPage.TXT_USERNAME.
Depending on your personal preference and philosophy, you should determine which way is easier for you.
Personally, I think Java page objects are much more efficient because of the auto-complete that Eclipse provides. I could do
LoginPage.TXT_USERNAME
LoginPage.TXT_PASSWORD
instead of having the possibility of misspelling your object if you use a properties file like this:
objectmap.getProperty("TXT_USRNAME") // oops! forgot the E, and I wouldn't have known it until runtime.
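To make the trade-off concrete, here is a rough sketch of both styles; the property keys, locator values and class name are assumptions for illustration:
// Properties-file style: locators live in objectmap.properties, e.g. a line such as
//   TXT_USERNAME=username
// Keys and values are plain strings, so a typo is only discovered at runtime.
Properties objectmap = new Properties();
objectmap.load(new FileInputStream("objectmap.properties"));
driver.findElement(By.id(objectmap.getProperty("TXT_USERNAME"))).sendKeys("bob");

// Page Object style: locators are compile-time constants, so the IDE autocompletes
// them, a misspelling is a compile error, and the class can also expose page operations.
public class LoginPage {
    public static final By TXT_USERNAME = By.id("username");
    public static final By TXT_PASSWORD = By.id("password");
}
driver.findElement(LoginPage.TXT_USERNAME).sendKeys("bob");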

Turn HTML into XML and parse it -- Android Apps [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have been learning how to build android apps this summer. I am currently trying to work on xml parsing which falls under java in this case. I have a few questions that are mostly conceptual and one specific one.
First, most of the examples I have seen use pages that are already in XML. Can I take a page in regular HTML format, have the program turn it into XML, and then parse it? Or is that what is normally done anyway?
Secondly, I could use a little explanation on how the parser actually works and saves the data so I will better know how to use it (extract it from whatever it is saved in), when the parsing is done.
So for my specific example I am trying to work with some weather data from the NWS. My program will take the data from this page, and after some user input take you to a page like this, which sometimes will have various alerts. I want to select certain ones. This is what I could use help with. I haven't really coded anything on that yet because I don't know what I am doing.
If I need to clarify or rephrase anything in here I am happy to; just let me know. I am trying to be a good contributor on here!
Yes, you can parse HTML, and there are many parsers available too; there is a question about it here: Parse HTML in Android, and an answer about parsing HTML here: https://stackoverflow.com/a/7114346/826657
Although it's generally a bad idea: HTML tag names aren't meaningful for data extraction, so you end up writing lots of code searching attributes for a specific data tag. Always prefer XML when it's available; it saves a lot of code and time.
Here is a post from Coding Horror which explains why parsing HTML is, in general, a bad idea:
http://www.codinghorror.com/blog/2009/11/parsing-html-the-cthulhu-way.html
Here is something that explains parsing an XML document using XmlPullParser: http://www.ibm.com/developerworks/library/x-android/
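For the XML route, a minimal sketch with Android's built-in XmlPullParser is below. The tag name "title" and the inline input string are placeholders; the NWS feeds use their own element names, so check the actual document you download:
import java.io.StringReader;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserFactory;

// Walks the document event by event and prints the text of every <title> element.
String xml = "<feed><title>Heat Advisory</title><title>Flood Watch</title></feed>";
XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
parser.setInput(new StringReader(xml));
int event = parser.getEventType();
while (event != XmlPullParser.END_DOCUMENT) {
    if (event == XmlPullParser.START_TAG && "title".equals(parser.getName())) {
        System.out.println(parser.nextText()); // text content of the element, e.g. "Heat Advisory"
    }
    event = parser.next();
}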
