Thanks to this thread: How to download and save a file from Internet using Java?
I know how to download a file; now my problem is that I need to authenticate on the server from which I'm downloading. It's an HTTP interface to a Subversion server. Which field do I need to set?
Using the code posted in the last comment, I get this exception:
java.io.IOException: Server returned HTTP response code: 401 for URL: http://myserver/systemc-2.0.1.tgz
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1305)
at java.net.URL.openStream(URL.java:1009)
at mypackage.Installer.installSystemc201(Installer.java:29)
at mypackage.Installer.main(Installer.java:38)
Thanks,
You extend the Authenticator class and register it. The javadocs at the link explain how.
I don't know whether this works with the NIO method from the accepted answer to that question, but it certainly works with the old-fashioned approach in the answer below it.
Within your Authenticator subclass, you will probably create a PasswordAuthentication and override the getPasswordAuthentication() method of your Authenticator implementation to return it. That is the class that carries the user name and password you need.
Per your request, here is some sample code:
import java.net.Authenticator;
import java.net.PasswordAuthentication;
import java.util.Properties;

public class MyAuthenticator extends Authenticator {

    public static final String USERNAME_KEY = "username";
    public static final String PASSWORD_KEY = "password";

    private final PasswordAuthentication authentication;

    public MyAuthenticator(Properties properties) {
        String userName = properties.getProperty(USERNAME_KEY);
        String password = properties.getProperty(PASSWORD_KEY);
        if (userName == null || password == null) {
            authentication = null;
        } else {
            authentication = new PasswordAuthentication(userName, password.toCharArray());
        }
    }

    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        return authentication;
    }
}
And you register it in the main method (or somewhere along the line before you call the URL):
Authenticator.setDefault(new MyAuthenticator(properties));
The usage is simple, but I find the API convoluted and kind of backwards for how you typically think about these things. Pretty typical of singleton design.
This is some code I wrote that fetches a website and displays the contents to System.out. It uses Basic authentication:
import java.net.*;
import java.io.*;

public class foo {
    public static void main(String[] args) throws Exception {
        URL yahoo = new URL("http://www.MY_URL.com");
        String passwdstring = "USERNAME:PASSWORD";
        String encoding = new sun.misc.BASE64Encoder().encode(passwdstring.getBytes());

        URLConnection uc = yahoo.openConnection();
        uc.setRequestProperty("Authorization", "Basic " + encoding);
        InputStream content = uc.getInputStream();
        BufferedReader in = new BufferedReader(new InputStreamReader(content));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
Problems with the above code:
This code isn't production-ready (but it gets the point across.)
The code yields this compiler warning:
foo.java:11: warning: sun.misc.BASE64Encoder is Sun proprietary API and may be removed in a future release
sun.misc.BASE64Encoder().encode(passwdstring.getBytes());
^ 1 warning
One really should use the Authenticator class, but for the life of me, I could not figure out how and I couldn't find any examples either, which just goes to show that the Java people don't actually like it when you use their language to do cool things. :-P
So the above isn't a good solution, but it does work and could easily be modified later.
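If you are on Java 8 or newer, one way to get rid of that proprietary-API warning is java.util.Base64. Here is a minimal sketch of the same fetch with it (the URL and credentials are the same placeholders as above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.MY_URL.com");
        String passwdstring = "USERNAME:PASSWORD";
        // java.util.Base64 (Java 8+) replaces the proprietary sun.misc.BASE64Encoder
        String encoding = Base64.getEncoder()
                .encodeToString(passwdstring.getBytes(StandardCharsets.UTF_8));
        URLConnection uc = url.openConnection();
        uc.setRequestProperty("Authorization", "Basic " + encoding);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(uc.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}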
Write your overriding class for Authenticator:
import java.net.Authenticator;
import java.net.PasswordAuthentication;
public class MyAuthenticator extends Authenticator {
private static String username = "";
private static String password = "";
protected PasswordAuthentication getPasswordAuthentication() {
return new PasswordAuthentication (MyAuthenticator.username,
MyAuthenticator.password.toCharArray());
}
public static void setPasswordAuthentication(String username, String password) {
MyAuthenticator.username = username;
MyAuthenticator.password = password;
}
}
Write your main class:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.Authenticator;
import java.net.MalformedURLException;
import java.net.URL;
public class MyMain{
public static void main(String[] args) {
URL url;
InputStream is = null;
BufferedReader br;
String line;
// Install Authenticator
MyAuthenticator.setPasswordAuthentication("Username", "Password");
Authenticator.setDefault (new MyAuthenticator ());
try {
url = new URL("Your_URL_Here");
is = url.openStream(); // throws an IOException
br = new BufferedReader(new InputStreamReader(is));
while ((line = br.readLine()) != null) {
System.out.println(line);
}
} catch (MalformedURLException mue) {
mue.printStackTrace();
} catch (IOException ioe) {
ioe.printStackTrace();
} finally {
try {
if (is != null) is.close();
} catch (IOException ioe) {
// nothing to see here
}
}
}
}
Have you tried building your URL in the form http://user:password@domain/path?
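As far as I know, plain HttpURLConnection does not automatically turn the embedded user:password into an Authorization header, so even with that URL form you would likely still set the header yourself. A minimal sketch (placeholder URL, Java 8+ for java.util.Base64):

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class UserInfoAuth {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://user:password@domain/path");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String userInfo = url.getUserInfo();   // the "user:password" part of the URL
        if (userInfo != null) {
            String basic = Base64.getEncoder()
                    .encodeToString(userInfo.getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + basic);
        }
        System.out.println(conn.getResponseCode());
    }
}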
I would suggest checking out HttpClient from Apache (http://hc.apache.org/httpclient-3.x/); it makes downloading and authenticating very easy.
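In case it helps, here is a minimal sketch of Basic authentication with Commons HttpClient 3.x; the file URL is the one from the question and the credentials are placeholders:

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.auth.AuthScope;
import org.apache.commons.httpclient.methods.GetMethod;

public class HttpClientBasicAuth {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        // Register credentials for any host/port/realm (placeholders)
        client.getState().setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("username", "password"));
        GetMethod get = new GetMethod("http://myserver/systemc-2.0.1.tgz");
        get.setDoAuthentication(true);
        try {
            int status = client.executeMethod(get);
            System.out.println("HTTP status: " + status);
            // read get.getResponseBodyAsStream() here and write it to a file
        } finally {
            get.releaseConnection();
        }
    }
}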
This open source library, http://spnego.sourceforge.net, also has some examples of how to use its SpnegoHttpURLConnection class.
One of the constructors in the class allows you to pass in a username and password.
Take a look at the class's Javadoc for the examples.
My Java code is:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class celebGrepper {
static class CelebData {
URL link;
String name;
CelebData(URL link, String name) {
this.link=link;
this.name=name;
}
}
public static String grepper(String url) {
URL source;
String data = null;
try {
source = new URL(url);
HttpURLConnection connection = (HttpURLConnection) source.openConnection();
connection.connect();
InputStream is = connection.getInputStream();
/**
* Attempting to fetch an entire line at a time instead of just a character each time!
*/
StringBuilder str = new StringBuilder();
BufferedReader br = new BufferedReader(new InputStreamReader(is));
while((data = br.readLine()) != null)
str.append(data);
data=str.toString();
} catch (IOException e) {
e.printStackTrace();
}
return data;
}
public static ArrayList<CelebData> parser(String html) throws MalformedURLException {
ArrayList<CelebData> list = new ArrayList<CelebData>();
Pattern p = Pattern.compile("<td class=\"image\".*<img src=\"(.*?)\"[\\s\\S]*<td class=\"name\"><a.*?>([\\w\\s]+)<\\/a>");
Matcher m = p.matcher(html);
while(m.find()) {
CelebData current = new CelebData(new URL(m.group(1)),m.group(2));
list.add(current);
}
return list;
}
public static void main(String... args) throws MalformedURLException {
String html = grepper("https://www.forbes.com/celebrities/list/");
System.out.println("RAW Input: "+html);
System.out.println("Start Grepping...");
ArrayList<CelebData> celebList = parser(html);
for(CelebData item: celebList) {
System.out.println("Name:\t\t "+item.name);
System.out.println("Image URL:\t "+item.link+"\n");
}
System.out.println("Grepping Done!");
}
}
It's supposed to fetch the entire HTML content of https://www.forbes.com/celebrities/list/. However, when I compare the actual result below to the original page, I find the entire table that I need is missing! Is it because the page isn't completely loaded when I start getting the bytes from the page via the input stream? Please help me understand.
The Output of the page:
https://jsfiddle.net/e0771aLz/
What can I do to just extract the image links and the names of the celebs?
I know it's extremely bad practice to try to parse HTML using regex, and it is the stuff of nightmares, but on a certain video training course for Android that's exactly what the guy did, and I just want to follow along, since it's just in this one lesson.
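For what it's worth, a parser like jsoup is the usual, saner way to pull those cells out. Below is a minimal sketch; the td.image / td.name selectors simply mirror the regex above and are an assumption about the markup, and note that the Forbes table appears to be filled in by JavaScript after the initial page load, in which case no plain HTML fetch (jsoup included) will ever see it:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class CelebJsoup {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the page in one step
        Document doc = Jsoup.connect("https://www.forbes.com/celebrities/list/")
                .userAgent("Mozilla/5.0")
                .get();
        // Selectors assumed from the regex in the question: image cell and name cell
        for (Element row : doc.select("tr")) {
            Element img = row.selectFirst("td.image img");
            Element name = row.selectFirst("td.name a");
            if (img != null && name != null) {
                System.out.println(name.text() + " -> " + img.attr("src"));
            }
        }
    }
}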
I'm writing a basic application in Android; the application will connect to a MySQL server through a PHP request. In Android, Internet connections have to be made on a different thread, so I created a class which implements the Runnable interface.
package com.company.opax.loginmysql;
import android.util.Log;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.util.ArrayList;
/**
* Created by opax on 30.08.2015.
*/
public class HttpPostMethod implements Runnable{
private String fileInHost;
private ArrayList<PostParameters> postParameterses;
private StringBuffer postResult;
public HttpPostMethod(String fileInHost, ArrayList<PostParameters> postParameterses){
this.fileInHost = fileInHost;
this.postParameterses = new ArrayList<PostParameters>(postParameterses);
}
public String getResult() {
return postResult.toString();
}
@Override
public void run() {
try {
String urlParameters = generateParameters();
URLConnection conn = initializeUrlConnection();
OutputStreamWriter writer = new OutputStreamWriter(conn.getOutputStream());
writer.write(urlParameters);
writer.flush();
String line;
BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
while ((line = reader.readLine()) != null) {
postResult.append(line);
}
writer.close();
reader.close();
} catch (IOException e) {
Log.e("Exception", this.getClass().getName() + " name: " + e.toString());
}
}
private URLConnection initializeUrlConnection() throws MalformedURLException {
URL url = new URL(fileInHost);
URLConnection conn;
try {
conn = url.openConnection();
conn.setDoOutput(true);
}catch(IOException e){
throw new MalformedURLException();
}
return conn;
}
private String generateParameters(){
StringBuffer finishPostQuery = new StringBuffer();
for(PostParameters p : postParameterses){
finishPostQuery.append(p.getNameParam());
finishPostQuery.append("=");
finishPostQuery.append(p.getValueParam());
finishPostQuery.append("&");
}
if(!finishPostQuery.toString().equals("login=seba&password=pass&"))
throw new AssertionError("blad generatora zapytania: " + finishPostQuery);
return finishPostQuery.toString();
}
}
and login class:
public class Login {
private User user;
private final String paramLogin = "login";
private final String paramPass = "password";
public Login(User user){
this.user = user;
}
public boolean tryLogin(){
try{
ArrayList<PostParameters> postParameterses = new ArrayList<>();
postParameterses.add(new PostParameters(paramLogin, user.getUserName()));
postParameterses.add(new PostParameters(paramPass, user.getUserPass()));
HttpPostMethod httpPostMethod = new HttpPostMethod("http://blaba.php", postParameterses);
httpPostMethod.run();
Log.i("bla", httpPostMethod.getResult());
} catch (Exception e) {
Log.i("Exception", e.toString());
}
return false;
}
}
I'm trying to connect on another thread, but I still get this error: 'android.os.NetworkOnMainThreadException'.
I would be grateful for any suggestion about what I am doing wrong.
Instead of:
httpPostMethod.run();
do:
new Thread(httpPostMethod).start();
In case your login call fails for some reason (timeout, wrong login), you should report that to the user somehow; this is what the AsyncTask class is for. It allows you to run background code in doInBackground, and after the network operation ends, onPostExecute runs on the UI thread, where you can show errors or results.
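A minimal sketch of that idea; the LoginTask wrapper class is made up for illustration, but it reuses the Login class from the question:

import android.os.AsyncTask;
import android.util.Log;

// Hypothetical wrapper around the existing Login class: runs the network
// call off the UI thread and reports the result back on it.
public class LoginTask extends AsyncTask<Void, Void, Boolean> {

    private final Login login;

    public LoginTask(Login login) {
        this.login = login;
    }

    @Override
    protected Boolean doInBackground(Void... params) {
        // Runs on a background thread, so no NetworkOnMainThreadException
        return login.tryLogin();
    }

    @Override
    protected void onPostExecute(Boolean success) {
        // Runs on the UI thread: safe to update views, show a Toast, etc.
        Log.i("LoginTask", "login success: " + success);
    }
}

Then, from your Activity, you would start it with new LoginTask(new Login(user)).execute(); instead of calling run() yourself.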
I suggest two things.
First, use AsyncTask instead of plain Java threads.
But the main advice is to use a library that handles HTTP requests.
I like to use Retrofit; it can handle all the request and threading parts for you, but there are others.
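For completeness, a rough sketch of what the same login POST might look like with Retrofit 2; the endpoint name, base URL, and service interface are made up, and enqueue() already performs the request on a background thread:

import okhttp3.ResponseBody;
import retrofit2.Call;
import retrofit2.Callback;
import retrofit2.Response;
import retrofit2.Retrofit;
import retrofit2.http.Field;
import retrofit2.http.FormUrlEncoded;
import retrofit2.http.POST;

public class RetrofitLoginSketch {

    // Hypothetical service interface for the PHP login script
    interface LoginService {
        @FormUrlEncoded
        @POST("login.php")
        Call<ResponseBody> login(@Field("login") String login,
                                 @Field("password") String password);
    }

    static void attemptLogin() {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("http://example.com/")   // placeholder base URL
                .build();
        LoginService service = retrofit.create(LoginService.class);
        // enqueue() runs the request asynchronously, off the main thread
        service.login("seba", "pass").enqueue(new Callback<ResponseBody>() {
            @Override
            public void onResponse(Call<ResponseBody> call, Response<ResponseBody> response) {
                android.util.Log.i("Login", "HTTP " + response.code());
            }

            @Override
            public void onFailure(Call<ResponseBody> call, Throwable t) {
                android.util.Log.e("Login", "login failed", t);
            }
        });
    }
}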
I've tried googling this, but every question seems to address web development. I'm simply wondering whether it is even possible to fetch data from the Internet (game results and in-game events) that is updated every second, or every 10 seconds and so on, from a website that's not mine, and to display it in a Java desktop client with a Swing interface. And if yes, what is the best method?
Thank you
Yes, you can do it. You should use the java.net package to work with the network.
How you fetch the data depends on the site you are fetching it from, for example:
If the site has an API, like Stack Overflow, you should use it.
If the data is only presented on the page, you can use a parser like jsoup (if the page is HTML, of course).
A minimal sketch of the polling and Swing side of this follows below.
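Since the goal is a Swing client that refreshes every few seconds, here is a minimal sketch of the usual pattern (the URL is a placeholder): poll on a background thread and hand the result to the Event Dispatch Thread.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class ScorePoller {

    public static void main(String[] args) {
        SwingUtilities.invokeLater(ScorePoller::createAndStart);
    }

    private static void createAndStart() {
        JLabel scoreLabel = new JLabel("waiting for data...");
        JFrame frame = new JFrame("Live scores");
        frame.add(scoreLabel);
        frame.setSize(300, 100);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);

        // Poll every 10 seconds on a background thread, never on the EDT
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                URL url = new URL("http://example.com/scores"); // placeholder URL
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream()))) {
                    String latest = in.readLine();
                    // Swing components may only be touched on the EDT
                    SwingUtilities.invokeLater(() -> scoreLabel.setText(latest));
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 10, TimeUnit.SECONDS);
    }
}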
I get stock data when requested, rather than on a timer, but you can look at my code and see how I get the stock data.
Here's what the JPanel looks like. It's the panel on the right.
This is the HistoricalDataRunnable class.
package com.ggl.stock.picker.controller;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import javax.swing.SwingUtilities;
import com.ggl.stock.picker.model.Company;
import com.ggl.stock.picker.model.StockDay;
import com.ggl.stock.picker.model.StockHistory;
import com.ggl.stock.picker.model.StockPickerModel;
import com.ggl.stock.picker.view.StockPickerFrame;
public class HistoricalDataRunnable implements Runnable {
private static final String URLString =
"http://www.google.com/finance/historical?output=csv&q=";
private Company company;
private StockPickerFrame frame;
private StockPickerModel model;
public HistoricalDataRunnable(StockPickerFrame frame,
StockPickerModel model, Company company) {
this.frame = frame;
this.model = model;
this.company = company;
}
@Override
public void run() {
InputStream is = null;
BufferedReader br = null;
try {
String symbol = company.getStockSymbol();
URL url = new URL(URLString + symbol);
URLConnection hc = url.openConnection();
hc.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; U; "
+ "Intel Mac OS X 10.4; en-US; rv:1.9.2.2) "
+ "Gecko/20100316 Firefox/3.6.2");
is = hc.getInputStream();
br = new BufferedReader(new InputStreamReader(is));
processCSVFile(br);
} catch (MalformedURLException e) {
e.printStackTrace();
String message = e.getMessage();
message = "<html>" + message;
setMessageLabel(message);
} catch (IOException e) {
String message = e.getMessage();
message = "<html>" + message;
setMessageLabel(message);
} finally {
closeReaders(is, br);
}
}
private void processCSVFile(BufferedReader br) throws IOException {
String line = "";
int count = 0;
StockHistory history = new StockHistory(company);
while ((line = br.readLine()) != null) {
if (count > 0) {
StockDay stockDay = createStockDay(line);
if (stockDay != null) {
history.addStockDay(stockDay);
}
}
count++;
}
if (history.calculateNumbers()) {
model.addStockHistory(history);
addStockHistory();
setMessageLabel(" ");
} else {
String message = "<html>There is no data for "
+ company.getCompanyName();
setMessageLabel(message);
}
}
private StockDay createStockDay(String line) {
String[] parts = line.split(",");
if (parts[1].equals("-"))
return null;
double open = Double.valueOf(parts[1]);
double high = Double.valueOf(parts[2]);
double low = Double.valueOf(parts[3]);
double close = Double.valueOf(parts[4]);
long volume = Long.valueOf(parts[5]);
return new StockDay(parts[0], open, high, low, close, volume);
}
private void addStockHistory() {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
frame.addStockHistory();
}
});
}
private void setMessageLabel(final String text) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
    frame.setMessageLabel(text);
}
});
}
private void closeReaders(InputStream is, BufferedReader br) {
try {
if (br != null)
br.close();
if (is != null)
is.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
This class makes a URL connection with Google, and returns the historical stock information. This information is captured in a StockHistory instance, and stored in the StockPickerModel instance.
The User-Agent property is set to simulate a browser. Some web sites don’t allow programs to access their web server. By setting the User-Agent property, you can pretend to be a web browser. Your program should respect the web server, and not submit too many requests. How much is too many depends on the web server.
This class also updates the view. The only way we'll know the request is complete is when HistoricalDataRunnable's run method receives a response from Google. It's up to this class to update the model and the view.
Since this class is run in a separate thread, we have to call the SwingUtilities invokeLater method to execute any Swing GUI commands. That’s why the addStockHistory and setMessageLabel methods are enclosed in an invokeLater method.
This class displays any errors in the message JLabel. The stock might not be tracked by Google, or the stock may not have any stock day values; in those cases, the error messages are displayed.
To see the rest of the code, take a look at my Stock Picker Using Java Swing article.
I'm trying to get, in JSON format, all the websites found when querying Google.
Code:
import java.io.FileWriter;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
/**
* Created by Vlad on 19/03/14.
*/
public class Query {
public static void main(String[] args){
try{
String arg;
arg = "random";
URL url = new URL("GET https://www.googleapis.com/customsearch/v1?key=&cx=017576662512468239146:omuauf_lfve&q=" + arg);
InputStreamReader reader = new InputStreamReader(url.openStream(),"UTF-8");
int ch;
while((ch = reader.read()) != -1){
System.out.print((char) ch);
}
}catch(Exception e)
{
System.out.println("This ain't good");
System.out.println(e);
}
}
}
Exception:
java.net.MalformedURLException: no protocol: GET https://www.googleapis.com/customsearch/v1?key=AIzaSyCS26VtzuCs7bEpC821X_l0io_PHc4-8tY&cx=017576662512468239146:omuauf_lfve&q=random
You should delete the GET at the beginning ;)
You should replace your code with:
URL url = new URL("https://www.googleapis.com/customsearch/v1?key=AIzaSyCS26VtzuCs7bEpC821X_l0io_PHc4-8tY&cx=017576662512468239146:omuauf_lfve&q=" + arg);
URLs never start with GET or POST or anything like that ;)
URLs are supposed to start with a transfer protocol, and GET https://www.googleapis.com/customsearch/v1?key=AIzaSyCS26VtzuCs7bEpC821X_l0io_PHc4-8tY&cx=017576662512468239146:omuauf_lfve&q=random starts with GET; that is why the exception is thrown.
Change it to https://www.googleapis.com/customsearch/v1?key=AIzaSyCS26VtzuCs7bEpC821X_l0io_PHc4-8tY&cx=017576662512468239146:omuauf_lfve&q=random
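For reference, a minimal sketch of building that request correctly, with the API key as a placeholder and the query term URL-encoded:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class CustomSearchQuery {
    public static void main(String[] args) throws Exception {
        String arg = URLEncoder.encode("random", "UTF-8"); // encode the search term
        URL url = new URL("https://www.googleapis.com/customsearch/v1"
                + "?key=YOUR_API_KEY"                      // placeholder key
                + "&cx=017576662512468239146:omuauf_lfve"
                + "&q=" + arg);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // raw JSON response
            }
        }
    }
}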
First, the revised code, which throws javax.swing.text.ChangedCharSetException:
import java.io.*;
import java.net.*;
public class Main
{
public static void main(String[] args) throws IOException, Exception
{
String query = "#pragma";
Socket s = new Socket("google.com",80);
PrintStream p = new PrintStream(s.getOutputStream());
p.print("GET /search?q="+query+" HTTP/1.0\r\n");
p.print("User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)\r\n");
p.print("Connection: close\r\n\r\n");
InputStreamReader in = new InputStreamReader(s.getInputStream());
BufferedReader buffer = new BufferedReader(in);
// String line;
//
// while ((line = buffer.readLine()) != null)
// { System.out.println(line); }
HTMLUtils.ParseLinks (buffer);
in.close();
}
}
import java.io.BufferedReader;
import java.io.IOException;
//import java.io.FileReader;
import java.io.Reader;
import java.util.List;
import java.util.ArrayList;
import javax.swing.text.html.parser.ParserDelegator;
import javax.swing.text.html.HTMLEditorKit.ParserCallback;
import javax.swing.text.html.HTML.Tag;
import javax.swing.text.html.HTML.Attribute;
import javax.swing.text.MutableAttributeSet;
public class HTMLUtils
{
private HTMLUtils() {}
public static List<String> extractLinks(Reader reader) throws IOException
{
final ArrayList<String> list = new ArrayList<String>();
ParserDelegator parserDelegator = new ParserDelegator();
ParserCallback parserCallback = new ParserCallback()
{
public void handleText(final char[] data, final int pos) { }
public void handleStartTag(Tag tag, MutableAttributeSet attribute, int pos)
{
if (tag == Tag.A) {
String address = (String) attribute.getAttribute(Attribute.HREF);
list.add(address);
}
}
public void handleEndTag(Tag t, final int pos) { }
public void handleSimpleTag(Tag t, MutableAttributeSet a, final int pos) { }
public void handleComment(final char[] data, final int pos) { }
public void handleError(final java.lang.String errMsg, final int pos) { }
};
parserDelegator.parse(reader, parserCallback, false);
return list;
}
public static void ParseLinks(BufferedReader buffer) throws Exception{
//FileReader reader = new FileReader("buffer");
List<String> links = HTMLUtils.extractLinks(buffer);
for (String link : links) {
System.out.println(link);
}
}
}
Notice that the user agent is for IE in this example.
Now I have 3 problems:
How can I pass the HTMLUtils.ParseLinks method a "raw" buffer instead of the HTML file it expects? (I could write the buffer to a file, but I guess that is unnecessary.)
I don't know how to put quotation marks (" ") inside the query string in order to search for the whole phrase, i.e.: query = " "New York Yankees" "
Is it so complicated to get the User-Agent string from the host machine?
I have to say that HTMLUtils is an imported class and I don't really understand what's going on in there. I'll try to understand it once it works. [-8
THNX
Have a read of http://code.google.com/apis/ajaxsearch/; it's going to be a lot easier to get the data out of a JSON string than digging through acres of HTML. There's an open source Java library for digesting JSON: http://www.json.org/java/. Transferring the JSON will require a lot less bandwidth, too!
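A minimal sketch of digesting a JSON reply with the org.json classes from that link; the field names below are hypothetical, so check the actual response format of whatever API you call:

import org.json.JSONArray;
import org.json.JSONObject;

public class JsonDigest {
    public static void main(String[] args) throws Exception {
        // Hypothetical response shape, just to show the API
        String json = "{\"results\":[{\"url\":\"http://example.com\",\"title\":\"Example\"}]}";
        JSONObject root = new JSONObject(json);
        JSONArray results = root.getJSONArray("results");
        for (int i = 0; i < results.length(); i++) {
            JSONObject result = results.getJSONObject(i);
            System.out.println(result.getString("title") + " -> " + result.getString("url"));
        }
    }
}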
If you want to do it in Java, you should consider using XPath to extract all the links from the response. To do this, you first have to convert the response to XML. Then you can apply an XPath query like
//a/@href
to extract all href attributes for links. You can modify the query to only include links from the Google results and not from advertisements etc.
Here is another Tutorial to get you started.
Happy coding.
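A minimal sketch of that XPath step, assuming you have already turned the response into an org.w3c.dom.Document (for example by running it through an HTML-to-XML cleaner such as TagSoup or JTidy first):

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class LinkExtractor {
    // Prints every href attribute of <a> elements from an already-parsed document
    public static void printLinks(Document doc) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList hrefs = (NodeList) xpath.evaluate("//a/@href", doc, XPathConstants.NODESET);
        for (int i = 0; i < hrefs.getLength(); i++) {
            System.out.println(hrefs.item(i).getNodeValue());
        }
    }
}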
BTW: To avoid mistakes when you create your HTTP request and (even more important) to avoid unnecessary work, you could use a library like Apache Commons HttpClient. This would reduce your work to:
// Needs Commons HttpClient 3.x on the classpath: org.apache.commons.httpclient.HttpClient,
// HttpMethod, HttpStatus and org.apache.commons.httpclient.methods.GetMethod
HttpClient client = new HttpClient();
HttpMethod method = new GetMethod("http://www.google.com/search?q=" + query);
int statusCode = client.executeMethod(method);
if (statusCode != HttpStatus.SC_OK) {
    System.err.println("Method failed: " + method.getStatusLine());
}
String response = new String(method.getResponseBody());
method.releaseConnection();