ImageIO.read IIOException while I can open it in Chrome - java

I can open this image in my browser, but it won't load in my Java application. Why? It is supposed to be a free-to-use database, so I can't see why I can't use it.
I'm using this piece of code:
public static String getContentsFromURL(String address){
    String contents = "";
    try{
        URL url = new URL(address);
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(url.openStream()));
        String line;
        while((line = bufferedReader.readLine()) != null){
            contents += line;
        }
        bufferedReader.close();
    }catch(IOException e){
        e.printStackTrace();
    }
    return contents;
}
And I'm getting an IIOException: "Can't find input file!"

Try this code:
URL url = new URL("http://ddragon.leagueoflegends.com/cdn/9.20.1/img/champion/Gragas.png");
Image image1 = ImageIO.read(url);
[Screenshot from my debugger.]
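ImageIO.read(URL) is the right entry point here: the question's getContentsFromURL pulls the PNG bytes through a Reader line by line, which is only safe for text and corrupts binary data. A minimal, self-contained sketch (the ImageFetch/fetchImage names are made up for illustration; the Gragas URL is the one from the answer above):
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.URL;
import javax.imageio.ImageIO;

public class ImageFetch {
    // Downloads and decodes an image in one step instead of reading it as text.
    public static BufferedImage fetchImage(String address) throws IOException {
        URL url = new URL(address);
        BufferedImage image = ImageIO.read(url); // opens, decodes and closes the stream
        if (image == null) {
            // ImageIO.read returns null when no registered reader recognises the format
            throw new IOException("Unsupported image format: " + address);
        }
        return image;
    }

    public static void main(String[] args) throws IOException {
        BufferedImage img = fetchImage(
                "http://ddragon.leagueoflegends.com/cdn/9.20.1/img/champion/Gragas.png");
        System.out.println("Loaded " + img.getWidth() + "x" + img.getHeight());
    }
}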

Related

How to get a connection with an API using JFrame

I am working on an English dictionary using a free online API: https://api.dictionaryapi.dev/api/v2/entries/en_US/
The code was working fine just one month ago, but now the returned data is always "error". It is not getting response code 200 back. Could anyone tell me what's wrong with the code?
public String getOnlineData(String word){
    String data = "";
    String decodeData = "";
    try{
        URL url = new URL("https://api.dictionaryapi.dev/api/v2/entries/en_US/" + word); // store the url link in the variable url
        HttpURLConnection con = (HttpURLConnection) url.openConnection(); // start a new connection
        if(con.getResponseCode() == 200){
            InputStream im = con.getInputStream(); // store the text result in the variable im
            BufferedReader br = new BufferedReader(new InputStreamReader(im)); // read the result using bufferedreader
            String line = br.readLine(); // read each line
            while(line != null){ // stop when there is no line left to read
                data = data + line;
                line = br.readLine();
            }
            br.close(); // close the buffered reader
            decoder jd = new decoder(); // decode the result
            decodeData = jd.Decoder(data); // store the decoded result in decodeData
        }
        else{
            decodeData = "error"; // if the system doesn't get a 200 response, the result will be "error"
        }
    }
    catch(Exception e){
        decodeData = "error"; // if the connection failed, the result is "error"
        System.out.println(e);
    }
    return decodeData;
}
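One way to narrow this down is to print the actual response code and the body the server returns, instead of collapsing every non-200 case to "error". A minimal, self-contained sketch (the word "hello" and the User-Agent header are illustrative assumptions, not part of the original code):
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ApiProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://api.dictionaryapi.dev/api/v2/entries/en_US/hello");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestProperty("User-Agent", "Mozilla/5.0"); // some services reject Java's default agent
        int code = con.getResponseCode();
        System.out.println("Response code: " + code);
        // on an error status the body is on the error stream, not the input stream
        InputStream body = (code >= 200 && code < 300) ? con.getInputStream() : con.getErrorStream();
        if (body != null) {
            BufferedReader br = new BufferedReader(new InputStreamReader(body));
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
            br.close();
        }
    }
}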

How to download/read html file via ftp url?

I am having trouble getting the HTML text from this file via FTP. I use Beautiful Soup to read an HTML file via http/https, but for some reason I cannot download/read from an FTP URL. Please help!
Here is the URL:
ftp://ftp.legis.state.tx.us/bills/832/billtext/html/house_resolutions/HR00001_HR00099/HR00014I.htm
Here is my code so far.
BufferedReader reader = null;
String total = "";
String line;
String ur = "ftp://ftp.legis.state.tx.us/bills/832/billtext/html/house_resolutions/HR00001_HR00099/HR00014I.htm";
try {
    URL url = new URL(ur);
    URLConnection urlc = url.openConnection();
    InputStream is = urlc.getInputStream(); // To download
    reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
    while ((line = reader.readLine()) != null)
        total += reader.readLine();
} finally {
    if (reader != null)
        try {
            reader.close();
        } catch (IOException logOrIgnore) {}
}
This code works for me on Java 1.7.0_25. Notice that you were storing only one of every two lines, because you call reader.readLine() both in the condition and in the body of the while loop.
public static void main(String[] args) throws MalformedURLException, IOException {
    BufferedReader reader = null;
    String total = "";
    String line;
    String ur = "ftp://ftp.legis.state.tx.us/bills/832/billtext/html/house_resolutions/HR00001_HR00099/HR00014I.htm";
    try {
        URL url = new URL(ur);
        URLConnection urlc = url.openConnection();
        InputStream is = urlc.getInputStream(); // To download
        reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
        while ((line = reader.readLine()) != null) {
            total += line;
        }
    } finally {
        if (reader != null) {
            try {
                reader.close();
            } catch (IOException logOrIgnore) {
            }
        }
    }
}
At first I thought this was related to wrong path resolution, as discussed here, but that does not help.
I don't know exactly what is going wrong here, but I can only reproduce this error on this FTP server and with the macOS Java 1.6.0_33-b03-424; I can't reproduce it with Java 1.7.0_25. So perhaps check for a Java update.
Or you could use Commons Net's FTPClient to retrieve the file:
FTPClient client = new FTPClient();
client.connect("ftp.legis.state.tx.us");
client.enterLocalPassiveMode();
client.login("anonymous", "");
client.changeWorkingDirectory("bills/832/billtext/html/house_resolutions/HR00001_HR00099");
InputStream is = client.retrieveFileStream("HR00014I.htm");
// ... read from is as above, then finish the transfer and clean up:
client.completePendingCommand();
client.logout();
client.disconnect();

Android java read html content

I have a problem with this code to display HTML content. When I try it on my smartphone, it prints "Error", which means an exception is being caught. Where am I going wrong?
String a2="";
try {
URL url = new URL("www.google.com");
InputStreamReader isr = new InputStreamReader(url.openStream());
BufferedReader in = new BufferedReader(isr);
String inputLine;
while ((inputLine = in.readLine()) != null){
a2+=inputLine;
}
in.close();
tx.setText("OUTPUT \n"+a2);
} catch (Exception e) {
tx.setText("Error");
}
URL requires a correctly formed URL, including the protocol. You should use:
URL url = new URL("http://www.google.com");
Update:
As you are getting a NetworkOnMainThreadException, it appears that you are attempting to make the connection on the main thread.
The solution is to run the code in an AsyncTask.
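A minimal sketch of that, assuming tx is a TextView field on the enclosing Activity (the class name FetchPageTask is made up for illustration):
private class FetchPageTask extends AsyncTask<String, Void, String> {
    @Override
    protected String doInBackground(String... urls) {
        // runs on a background thread, so network access is allowed here
        StringBuilder sb = new StringBuilder();
        try {
            URL url = new URL(urls[0]);
            BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                sb.append(inputLine);
            }
            in.close();
            return sb.toString();
        } catch (Exception e) {
            return "Error";
        }
    }

    @Override
    protected void onPostExecute(String result) {
        // runs back on the UI thread, so updating the TextView is safe here
        tx.setText("OUTPUT \n" + result);
    }
}

// started from the Activity, for example in onCreate:
new FetchPageTask().execute("http://www.google.com");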

Web content is different using Java than in browser

I have a strange problem with BufferedReader reading from the web.
The content of this URL is different in browsers than in the pasted Java code.
In the content fetched using Java, the first element's result is empty; in the browser it is not.
My code:
public static void main(String[] args) {
    try {
        String url = "https://api.freebase.com/api/service/mqlread?queries={\"q1\":{\"query\":[{\"name\":\"Pulp Fiction\",\"*\":null,\"type\":\"/film/film\"}]},\"q3\":{\"query\":[{\"name\":\"Portal\",\"*\":null,\"type\":\"/cvg/computer_videogame\"}]}}";
        URL u = new URL(url);
        System.out.println(u.toString());
        URLConnection urlConn = u.openConnection();
        InputStreamReader is = new InputStreamReader(urlConn.getInputStream());
        BufferedReader br = new BufferedReader(is);
        String line = null;
        String data = "";
        while ((line = br.readLine()) != null) {
            data += line + "\n";
        }
        br.close();
        System.out.println(data);
    } catch (Exception ex) {
        System.err.println(ex);
    }
}
EDIT: Ahh. Figured it out. No space characters in URLs. Just replace them with %20.
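The spaces inside the JSON values (for example in "Pulp Fiction") are what make the URL malformed for Java, while browsers silently escape them. A minimal sketch of the fix from the EDIT above (the shortened query string is illustrative):
String raw = "https://api.freebase.com/api/service/mqlread?queries="
        + "{\"q1\":{\"query\":[{\"name\":\"Pulp Fiction\",\"*\":null,\"type\":\"/film/film\"}]}}";
// percent-encode the spaces so the URL is well formed
String escaped = raw.replace(" ", "%20");
URL u = new URL(escaped);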

Extract HTML from URL

I'm using Boilerpipe to extract text from a URL, using this code:
URL url = new URL("http://www.example.com/some-location/index.html");
String text = ArticleExtractor.INSTANCE.getText(url);
The String text contains just the text of the HTML page, but I need the whole HTML code.
Is there anyone who has used this library and knows how to extract the HTML code?
You can check the demo page for more info on the library.
For something as simple as this you don't really need an external library:
URL url = new URL("http://www.google.com");
InputStream is = (InputStream) url.getContent();
BufferedReader br = new BufferedReader(new InputStreamReader(is));
String line = null;
StringBuffer sb = new StringBuffer();
while((line = br.readLine()) != null){
sb.append(line);
}
String htmlContent = sb.toString();
Just use the KeepEverythingExtractor instead of the ArticleExtractor.
But this is using the wrong tool for the job. What you want is just to download the HTML content of a URL (right?), not to extract content. So why use a content extractor?
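Assuming the same call shape as the ArticleExtractor line in the question, that swap would be:
String text = KeepEverythingExtractor.INSTANCE.getText(url); // keeps all the page text, though still not the raw HTML markup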
With Java 7 and a Scanner trick, you can do the following:
public static String toHtmlString(URL url) throws IOException {
    Objects.requireNonNull(url, "The url cannot be null.");
    try (InputStream is = url.openStream(); Scanner sc = new Scanner(is)) {
        sc.useDelimiter("\\A");
        if (sc.hasNext()) {
            return sc.next();
        } else {
            return null; // or empty
        }
    }
}
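Usage is then a one-liner (the Google URL is just the example used earlier in this thread):
String html = toHtmlString(new URL("http://www.google.com"));
System.out.println(html);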
