Last year I made an Android application that scraped the information from my train company in Belgium (the application is BETrains: http://www.cyrket.com/p/android/tof.cv.mpp/).
This application was really cool and allowed users to talk with other people on the train (a messaging server is run by me), and the conversations were also on Twitter: http://twitter.com/betrains
Everybody in Belgium loved it. The company tried to stop us from using their data and had some users' websites closed, but some lawyers took on the company, and in the end we have no more problems and the websites are open again: http://blog.tuinslak.org/2010/07/irail-is-back
So, legally, my application is (for now) totally correct, but I get no help from the train company.
So my question is really a request for a little help getting the data. I am an Android/Java beginner and have spent some weeks trying to find a solution, but maybe people like you will find it in a few minutes.
The problem is the following. If you have a look at the URL below, you will find two city names within it, Mons and Tournai, as well as information on the date and time. That was the old method, and it worked for a year:
http://hari.b-holding.be/Hafas/bin/query.exe/en?&REQ0JourneyStopsS0A=1&REQ0JourneyStopsS0G=MONS%20[b]&REQ0JourneyStopsZ0A=1&REQ0JourneyStopsZ0G=TOURNAI%20[b]&REQ0JourneyDate=27.010.10&REQ0JourneyTime=19:030&Timesel=depart&ViaName=&ViaMode=NEE&DateMode=ANDERS&PLANNER=TRUE&start=1&queryPageDisplayed=yes
But now the URL brings me to a confirmation page, and I have to click the confirm button to get to the next page.
So my code won't work anymore; I need to click this button programmatically to arrive at the correct web page.
Do you have any idea how to simulate a click on this button? For now my code is the classic scraping code using the URL given a few lines above. I assumed that the URL would give me the result page, and that was the case until last week.
DefaultHttpClient httpclient = new DefaultHttpClient();
HttpGet get = new HttpGet(mon_url);
HttpResponse response;
try {
    // Fetch the page and read the returned HTML so it can be scraped
    response = httpclient.execute(get);
    HttpEntity entity = response.getEntity();
    BufferedReader buf = new BufferedReader(new InputStreamReader(entity.getContent()));
    // etc...
Do you have any idea how to improve the code?
As the software is free, I cannot send PayPal money, but a whole country would be really thankful to the person who can help!
Thanks a lot.
Instead of trying to automate clicking the JavaScript button, try monitoring what request is sent when you click it and then replicate that request in your app. There are various Firefox extensions that will help you do this, such as TamperData, Firebug, and LiveHttp.
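Once you can see the confirmation request, you can replay it with the HttpClient you already use. Below is only a sketch: the action URL and the parameters are guesses lifted from your old GET URL, so replace them with whatever the captured request actually contains.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicNameValuePair;

public class ConfirmRequestSketch {
    public static void main(String[] args) throws Exception {
        DefaultHttpClient httpclient = new DefaultHttpClient();

        // The action URL and parameter names below are placeholders;
        // copy the real ones from the request captured with TamperData/Firebug.
        HttpPost post = new HttpPost("http://hari.b-holding.be/Hafas/bin/query.exe/en");
        List<NameValuePair> params = new ArrayList<NameValuePair>();
        params.add(new BasicNameValuePair("REQ0JourneyStopsS0G", "MONS"));
        params.add(new BasicNameValuePair("REQ0JourneyStopsZ0G", "TOURNAI"));
        params.add(new BasicNameValuePair("start", "1"));
        post.setEntity(new UrlEncodedFormEntity(params, "UTF-8"));

        // Send the replayed request and scrape the result page as before
        HttpResponse response = httpclient.execute(post);
        BufferedReader buf = new BufferedReader(
                new InputStreamReader(response.getEntity().getContent()));
        String line;
        while ((line = buf.readLine()) != null) {
            System.out.println(line);
        }
    }
}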
Related
Hello, I just want to get some advice.
My problem is a little complicated.
To summarize:
I am developing in Java with the Spring Framework.
The client program is a third-party solution, a kind of image viewer (so I can't modify its code), that fetches images from our NAS.
The problem is the search condition of that solution.
The more images the NAS holds, the longer the user has to wait (because the result list has no paging function).
When the search condition is too broad, my server (service) dies (out of memory).
So I want to show an alert or popup to the users of the solution (without modifying the solution).
Here is my hypothetical scenario:
The user clicks the search button (with too broad a condition).
The request calls @RequestMapping("/***") and goes to the controller.
The server sends the user a message that the search condition is too broad and stops the search.
Is that impossible?
If it is possible, how can I solve it?
I found code on Google and adapted it, but it doesn't work, because 'HttpServletResponse response' causes an exception.
The code below is not exactly what I tried, just the idea.
My core question is this:
Is it possible for the server to send the user a message without modifying the client program?
@RequestMapping("/****")
public String loginaction(
        @RequestParam(value = "XXXX", required = true) String XXXX,
        @RequestParam(value = "XXXX", required = true) String XXXX,
        HttpServletResponse response) throws Exception {
    if (rough condition search) {
        response.setContentType("text/html; charset=UTF-8");
        PrintWriter out = response.getWriter();
        out.println("<script>alert('check your Search Condition');</script>");
        out.flush();
    }
    return XXXX;
}
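Here is a fuller sketch of what I have in mind (the path, the parameter name, and the isTooRough helper are all made up for illustration). I am guessing the exception comes from writing to the response and then also returning a view name, so in this sketch I return null after writing the alert; depending on the Spring version, the method may need to be declared void instead.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class SearchGuardController {

    // Path, parameter, and helper names are placeholders for the sketch.
    @RequestMapping("/search")
    public String searchAction(@RequestParam("keyword") String keyword,
                               HttpServletResponse response) throws IOException {
        if (isTooRough(keyword)) {
            // Write the alert directly and return null so Spring does not also
            // try to render a view on a response that is already committed.
            response.setContentType("text/html; charset=UTF-8");
            PrintWriter out = response.getWriter();
            out.println("<script>alert('check your search condition');</script>");
            out.flush();
            return null;
        }
        return "resultView";   // normal flow: let the usual view show the results
    }

    // Placeholder check; replace with whatever defines "too rough" for you.
    private boolean isTooRough(String keyword) {
        return keyword == null || keyword.trim().length() < 2;
    }
}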
I usually don't like to post questions because I would rather figure things out myself, but I am ready to pull my hair out with this one. I am trying to interface with a Sony IP camera using Java. One of the products of the company I work for uses a Sony IP camera (IPela EP550), and I have been tasked with writing the new interface. I can connect to the stream using the embedded VLC ActiveX control, but I can't manipulate the camera's PTZ from Java. If I type "http://xxx.xxx.xxx.xxx/command/ptzf.cgi?Move=left,0" in a web browser, the camera moves, but I have tried every bit of code I can find with Google and cannot get it to move. This is the last thing I tried (because a page on Oracle said all I should have to do is open the connection):
URL url1 = new URL("http://xxx.xxx.xxx.xxx/command/ptzf.cgi?Move=left,0&t="+new Date().getTime());
HttpURLConnection con = (HttpURLConnection)url1.openConnection();
Any help will be appreciated. Thank you.
Joe
Check whether the camera needs a login.
Type the URL in the browser, capture the HTTP request headers, and put that header data into your code!
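For instance (only a sketch; the IP address, credentials, and header values are placeholders, so copy whatever your browser actually sends). Also note that HttpURLConnection sends nothing until you read from the connection, e.g. by calling getResponseCode():

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class PtzHeaderTest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://xxx.xxx.xxx.xxx/command/ptzf.cgi?Move=left,0");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();

        // Copy the headers your browser sends; these values are only examples
        con.setRequestProperty("User-Agent", "Mozilla/5.0");
        // If the camera asks for a login, basic auth might look like this:
        String auth = Base64.getEncoder().encodeToString("admin:password".getBytes("UTF-8"));
        con.setRequestProperty("Authorization", "Basic " + auth);

        // Nothing is sent until the response is read, so always do this
        int status = con.getResponseCode();
        System.out.println("Camera answered with HTTP " + status);
        con.disconnect();
    }
}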
I figured out how to do this. I am posting the solution in case anybody is looking to fix a similar problem. I took the basic idea in this Dr. Dobb's article and used it to get movement from the camera. I don't yet know why I can't get the camera to respond with URLConnection or HttpURLConnection, but using a Socket and a PrintWriter to write the GET request to the socket directly did the trick.
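Roughly like this (a sketch of the approach I described; the camera address is a placeholder):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class PtzSocket {
    public static void main(String[] args) throws Exception {
        String host = "xxx.xxx.xxx.xxx";   // camera address (placeholder)
        try (Socket socket = new Socket(host, 80)) {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            // Write the raw GET request exactly as the browser would
            out.print("GET /command/ptzf.cgi?Move=left,0 HTTP/1.1\r\n");
            out.print("Host: " + host + "\r\n");
            out.print("Connection: close\r\n");
            out.print("\r\n");
            out.flush();

            // Read and dump the camera's answer
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}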
I'm trying to write a little application that blocks sites (by IP) in the browser (Chrome, IE, Firefox). It could also redirect to another site; as long as the user is unable to use the blocked site, I would be satisfied with the result.
The problem is that I've searched a few hours for a solution on Google and I still can't find a good one. There have been two candidate solutions so far:
Use the hosts file - this would be a little problematic for my application, because I want to block a site only for a period of time. If the application crashes, it won't undo the hosts file.
Use the "Windows Filtering Platform" - it's written in C++, so it will be harder for me to do. I would love to use Java. I could still use C++ from a Java application, but that still isn't a satisfying solution.
I would appreciate any help.
I think I have found a solution:
Blocking a website from access for all browsers
Well, I will try :). But still, if anybody has better ideas, don't hesitate to answer this post :).
I did some similar work a while ago; I used the hosts file to block all the entries that Spybot Search & Destroy marked as "dangerous" sites. If you want to make sure the site is freed again when the app crashes, you could use a second program or thread (I don't know how complex your application is) that checks whether the program is still running.
Microsoft has the following entry on displaying task names:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa446864(v=vs.85).aspx
Maybe try that and check whether your application is still alive.
However, the user will notice a second task in his task manager!
To patch the hosts file I used this Java method, which saves the entries from a DefaultListModel:
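A rough sketch of that watchdog idea, assuming the blocker runs as "Blocker.exe" and you kept a clean copy of the hosts file at "C:\hosts.backup" (both names are made up): it polls the Windows task list and restores the hosts file once the blocker is gone.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class HostsWatchdog {
    public static void main(String[] args) throws Exception {
        while (isRunning("Blocker.exe")) {          // hypothetical process name of the blocker
            Thread.sleep(5000);                     // poll every 5 seconds
        }
        // Blocker is gone (crashed or closed): put the original hosts file back
        Files.copy(Paths.get("C:\\hosts.backup"),   // hypothetical backup location
                   Paths.get("C:\\Windows\\System32\\drivers\\etc\\hosts"),
                   StandardCopyOption.REPLACE_EXISTING);
    }

    // Ask the Windows task list whether a process with this image name exists
    private static boolean isRunning(String imageName) throws Exception {
        Process tasklist = new ProcessBuilder("tasklist").start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(tasklist.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith(imageName)) {
                    return true;
                }
            }
        }
        return false;
    }
}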
try {
    // Overwrite the Windows hosts file with the entries from the list model
    BufferedWriter out = new BufferedWriter(
            new FileWriter("C:\\Windows\\System32\\drivers\\etc\\hosts"));
    for (int save = 0; save < Blocker.model.size(); save++) {
        out.write((String) Blocker.model.getElementAt(save));
        out.newLine();
    }
    out.close();
} catch (IOException fail) {
    JOptionPane.showMessageDialog(null, "Saving could not be completed",
            "About", JOptionPane.ERROR_MESSAGE);
}
I'm not sure if this will really help you; anyway, good luck with your project.
(Note that you have to run as administrator to get write access to the hosts file.)
I am completely new to this site. I was searching for an answer to my problem and saw the same problem asked by someone on this website. The question is here.
I am using Windows 7. I didn't get an answer at that link, so I am asking the same question again. I want to open a Gmail account link in a browser from a Java application. Yes, I do know about the browse() method in the Desktop class. The thing is that I can open the Gmail website, but I need to open the specified Gmail account directly, with the username and password provided. Any ideas?
Okay, so take this with a couple of caveats: 1. the last time I played with the Google APIs was an older version, so this may be quite different now; 2. this code isn't tested, I'm writing it up partially from memory and partially from an old project of mine, so think of it more like pseudo-code; and 3. if this does by chance work, it's a pretty dirty solution. Hopefully it can set you on the track of finding a better way to do this with the API.
GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();
oauthParameters.setOAuthConsumerKey( [insert consumer key here] );
oauthParameters.setOAuthConsumerSecret( [insert consumer secret here] );
OAuthSigner signer = new OAuthHmacSha1Signer();
GoogleOAuthHelper oauthHelper = new GoogleOAuthHelper(signer);
oauthParameters.setScope("https://mail.google.com/mail"); // no clue if this is a valid scope or not
oauthHelper.getUnauthorizedRequestToken(oauthParameters);
String requestUrl = oauthHelper.createUserAuthorizationUrl(oauthParameters);

Desktop desktop = Desktop.getDesktop();
// this will make the user log in to authorize your app
desktop.browse(new URI(requestUrl));

// auth token response from Google; you can use this to authenticate your app
// if there are other requests you want to make against the user account
String token = oauthHelper.getAccessToken(oauthParameters);

// since you made sure the user is logged into their account to authorize your app,
// their Gmail can now just be opened. Yes, very dirty, I know. (If it all works.)
desktop.browse(new URI("https://www.gmail.com/"));
I need to programmatically create topics on my board. I use Java and HtmlUnit for this.
But there is one problem: if the program posts once, everything is okay (the forum responds with HTTP 200), but if I start the program again, phpBB responds with HTTP 304 and redirects to the category where the new topic should be located, yet the topic is not added. The question is how to fix this.
Here is the Wireshark dump of the first successful topic addition (login, posting):
http://a2k.in/2ai
And here is the same request, but with the 304 redirect:
http://a2k.in/2aj
Posting is done from an admin account with no posting time limitations.
Here is a log of posting from the browser (Chrome):
http://a2k.in/2ak
What is the problem? The difference between my request and the browser request is in the headers "Cache-Control: max-age=0" and "Origin: http://localhost". Maybe the problem is in the cache control?
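To test the header theory, I could make HtmlUnit send the same headers on every request; here is a small sketch (the forum URL is a placeholder):

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class HeaderTest {
    public static void main(String[] args) throws Exception {
        WebClient webClient = new WebClient();
        // Mirror the headers the browser sends, to see whether they matter
        webClient.addRequestHeader("Cache-Control", "max-age=0");
        webClient.addRequestHeader("Origin", "http://localhost");

        // Placeholder URL: the posting page of the forum
        HtmlPage page = webClient.getPage("http://localhost/forum/posting.php?mode=post&f=2");
        System.out.println(page.getTitleText());
    }
}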
Maybe a bit late, but I just saw this.
I had the same problem when posting more than one thread.
It looks like phpBB has some kind of flood protection.
At least for my implementation, it helped to simply add a timer/delay between posts (I think I have it set to somewhere around 3 seconds; it may work with one or two as well, not sure, I wasn't in a hurry).
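Something like this around whatever posting code you already have (postTopic is a stand-in for your HtmlUnit login/posting routine):

// Sketch: pause between posts so phpBB's flood protection does not kick in.
// postTopic(...) is a placeholder for the existing HtmlUnit code that submits the form.
for (String title : topicsToCreate) {
    postTopic(title);
    try {
        Thread.sleep(3000);   // ~3 seconds worked for me; 1-2 seconds may be enough
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}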