I am having trouble getting the HTML text from this HTML file via FTP. I use Beautiful Soup to read an HTML file via HTTP/HTTPS, but for some reason I cannot download/read from an FTP URL. Please help!
Here is the URL:
ftp://ftp.legis.state.tx.us/bills/832/billtext/html/house_resolutions/HR00001_HR00099/HR00014I.htm
Here is my code so far.
BufferedReader reader = null;
String total = "";
String line;
ur = "ftp://ftp.legis.state.tx.us/bills/832/billtext/html/house_resolutions/HR00001_HR00099/HR00014I.htm"
try {
URL url = new URL(ur);
URLConnection urlc = url.openConnection();
InputStream is = urlc.getInputStream(); // To download
reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
while ((line = reader.readLine()) != null)
total += reader.readLine();
} finally {
if (reader != null)
try { reader.close();
} catch (IOException logOrIgnore) {}
}
This code works for me on Java 1.7.0_25. Notice that you were storing only one of every two lines, because you call reader.readLine() both in the condition and in the body of the while loop.
public static void main(String[] args) throws MalformedURLException, IOException {
BufferedReader reader = null;
String total = "";
String line;
String ur = "ftp://ftp.legis.state.tx.us/bills/832/billtext/html/house_resolutions/HR00001_HR00099/HR00014I.htm";
try {
URL url = new URL(ur);
URLConnection urlc = url.openConnection();
InputStream is = urlc.getInputStream(); // To download
reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
while ((line = reader.readLine()) != null) {
total += line;
}
} finally {
if (reader != null) {
try {
reader.close();
} catch (IOException logOrIgnore) {
}
}
}
}
My first thought was that this is related to wrong path resolution, as discussed in a similar question, but that does not help here.
I don't know what exactly is going wrong, but I can only reproduce this error on this FTP server and with the macOS Java 1.6.0_33-b03-424; I can't reproduce it with Java 1.7.0_25. So perhaps check for a Java update.
Or you could use Apache Commons Net's FTPClient to retrieve the file:
FTPClient client = new FTPClient();
client.connect("ftp.legis.state.tx.us");
client.enterLocalPassiveMode();
client.login("anonymous", "");
client.changeWorkingDirectory("bills/832/billtext/html/house_resolutions/HR00001_HR00099");
InputStream is = client.retrieveFileStream("HR00014I.htm");
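For completeness, a sketch of the full retrieval with cleanup (an assumption-laden version: it presumes Apache Commons Net on the classpath, and the class name FtpHtmlFetch is made up; the host and path come from the question):

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.commons.net.ftp.FTPClient;

public class FtpHtmlFetch {
    public static void main(String[] args) throws Exception {
        FTPClient client = new FTPClient();
        client.connect("ftp.legis.state.tx.us");
        client.enterLocalPassiveMode();
        client.login("anonymous", "");
        client.changeWorkingDirectory("bills/832/billtext/html/house_resolutions/HR00001_HR00099");

        StringBuilder total = new StringBuilder();
        // retrieveFileStream() returns null if the file could not be opened
        try (InputStream is = client.retrieveFileStream("HR00014I.htm");
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(is, StandardCharsets.UTF_8))) {
            for (String line; (line = reader.readLine()) != null; ) {
                total.append(line).append('\n');
            }
        }
        client.completePendingCommand(); // finish the transfer before issuing further commands
        client.logout();
        client.disconnect();
        System.out.println(total);
    }
}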
Two ways of making the same HTTP request give different results: one via Java code, the other by copying the element from the website in the browser.
I am trying to get them to produce the same result!
When I use this Java code:
private static boolean isValid(URL url, HttpURLConnection connection) {
BufferedReader reader;
String line;
StringBuffer responseContent = new StringBuffer();
try {
connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("GET");
connection.setConnectTimeout(4000);
connection.setReadTimeout(4000);
int status = connection.getResponseCode();
if (status > 299) {
reader = new BufferedReader(new InputStreamReader(connection.getErrorStream()));
while ((line = reader.readLine()) != null)
responseContent.append(line);
reader.close();
} else {
reader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
while ((line = reader.readLine()) != null)
responseContent.append(line);
reader.close();
return true;
}
} catch (Exception e) {
e.printStackTrace();
}
return false;
}
It gives me a different value than what I get in the browser via Inspect > Copy > Copy element.
How is that possible? And how do I fix this issue?
Thanks, guys!!
Hey, I have a file of nearly 110MB on an Apache server. I am reading that file into an input stream and then converting the input stream to a List of Strings, based on all the suggestions I found on Stack Overflow. But I am still facing an out-of-memory issue.
Below is my code.
private List<String> readFromHttp(String url, PlainDiff diff) throws Exception {
HttpUrlConnection con = new HttpUrlConnection(); // custom wrapper class, not java.net.HttpURLConnection
con.setGetUrl(url);
List<String> lines = new ArrayList<String>();
final String PREFIX = "stream2file";
final String SUFFIX = ".tmp";
final File tempFile = File.createTempFile(PREFIX, SUFFIX);
tempFile.deleteOnExit();
StringBuilder sb = new StringBuilder();
try {
InputStream data = con.sendGetInputStream();
if(data==null)
throw new UserAuthException("diff is not available at the location");
else {
try (FileOutputStream out = new FileOutputStream(tempFile)) {
IOUtils.copy(data, out);
LineIterator it = FileUtils.lineIterator(tempFile, "UTF-8");
try {
while (it.hasNext()) {
String line = it.nextLine();
lines.add(line);
sb.append(line);
}
} finally {
LineIterator.closeQuietly(it);
}
}
data.close();
diff.setLineAsString(sb.toString());
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
//System.out.println(lines);
return lines;
}
// Method of the custom HttpUrlConnection wrapper used above:
public InputStream sendGetInputStream() throws IOException {
String encoding = Base64.getEncoder().encodeToString(("abc:$xyz$").getBytes("UTF-8"));
URL obj = new URL(getGetUrl());
// Setup the connection
HttpURLConnection con = (HttpURLConnection) obj.openConnection();
// Set the parameters from the headers
con.setRequestMethod("GET");
con.setDoOutput(true);
con.setRequestProperty ("Authorization", "Basic " + encoding);
InputStream is;
int responseCode = con.getResponseCode();
logger.info("GET Response Code :: " + responseCode);
if (responseCode == HttpURLConnection.HTTP_OK) {
is = con.getInputStream();
}
else {
is = null;
}
return is;
}
Am I doing something that is keeping a lot of data in memory and consuming the heap? Is there a better way to do this?
Your code has multiple issues. I am not going to solve each and every one, but I will point them out so that you can review your code and learn to write better code.
In the method readFromHttp(..):
There is no need to copy the response to a new file via IOUtils.copy(data, out);
there is no use for the StringBuilder sb, which keeps a second full copy of the file in memory;
and there is no use for the LineIterator.
There are multiple other memory-related issues, but for the time being correct these points and test with the code mentioned below.
After correcting the above mistakes, change your line reading to this very simple form:
try(BufferedReader reader = new BufferedReader(new InputStreamReader(data, StandardCharsets.UTF_8))) {
for (String line; (line = reader.readLine()) != null;) {
lines.add(line);
}
}
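Putting those corrections together, a minimal sketch of the whole method (hedged: it talks to java.net.HttpURLConnection directly instead of the custom HttpUrlConnection wrapper from the question, the credentials are placeholders, and the class name DiffDownloader is made up):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;

public class DiffDownloader {

    // Streams the response line by line: no temp file, no second full copy
    // of the content in a StringBuilder.
    static List<String> readFromHttp(String url) throws IOException {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestMethod("GET");
        // Placeholder credentials; the question sets basic auth the same way.
        String encoding = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
        con.setRequestProperty("Authorization", "Basic " + encoding);

        if (con.getResponseCode() != HttpURLConnection.HTTP_OK) {
            throw new IOException("diff is not available at the location");
        }
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
            for (String line; (line = reader.readLine()) != null; ) {
                lines.add(line);
            }
        } finally {
            con.disconnect();
        }
        return lines;
    }
}

Note that the returned List still holds the whole file in memory; if the caller only needs to process each line once, processing inside the read loop instead of collecting the lines would cut the footprint further.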
I'm doing a simple JSON grab from two links with the same code. I'm doing it at two separate times, so my issue isn't caused by the two requests running into each other.
Here is my code:
@Override
protected String doInBackground(Object... params) {
try {
URL weatherUrl = new URL("my url goes here");
HttpURLConnection connection = (HttpURLConnection) weatherUrl
.openConnection();
connection.connect();
responseCode = connection.getResponseCode();
if (responseCode == HttpURLConnection.HTTP_OK) {
InputStream inputStream = connection.getInputStream();
Reader reader = new InputStreamReader(inputStream);
int contentLength = connection.getContentLength();
char[] charArray = new char[contentLength];
reader.read(charArray);
String responseData = new String(charArray);
Log.v("test", responseData);
When I try this with:
http://www.google.com/calendar/feeds/developer-calendar@google.com/public/full?alt=json
I get an error about an array length of -1.
For this link:
http://api.openweathermap.org/data/2.5/weather?id=5815135
It returns fine and I get a log of all of the JSON. Does anyone have any idea why?
Note: I tried stepping through my code in debug mode, but I couldn't catch anything. I also downloaded a Google Chrome extension for parsing JSON in the browser, and both URLs look completely valid. I'm out of ideas.
Log this: int contentLength = connection.getContentLength();
I don't see the Google URL returning a Content-Length header, and getContentLength() returns -1 when the header is absent, so allocating new char[contentLength] fails.
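A sketch of that check, falling back to reading until end-of-stream (hedged: connection is the HttpURLConnection from the question, Log.v is the Android logger it already uses, and readBody is a hypothetical helper name):

// getContentLength() returns -1 when no Content-Length header is sent
// (e.g. chunked transfer encoding), and new char[-1] throws
// NegativeArraySizeException, so read to end-of-stream instead.
static String readBody(java.net.HttpURLConnection connection) throws java.io.IOException {
    int contentLength = connection.getContentLength();
    Log.v("test", "Content-Length: " + contentLength); // -1 for the Google feed
    StringBuilder sb = new StringBuilder();
    java.io.Reader reader = new java.io.InputStreamReader(connection.getInputStream());
    char[] buffer = new char[4096];
    for (int n; (n = reader.read(buffer)) != -1; ) {
        sb.append(buffer, 0, n); // append only the chars actually read
    }
    reader.close();
    return sb.toString();
}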
If you just want String output from a url, you can use Scanner and URL like so:
Scanner s = new Scanner(new URL("http://www.google.com").openStream(), "UTF-8").useDelimiter("\\A");
out = s.next();
s.close();
(don't forget try/finally block and exception handling)
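For instance, the same one-liner wrapped in try-with-resources (Java 7+), so the stream is closed even on failure; fetch is a made-up helper name:

static String fetch(String address) throws java.io.IOException {
    try (java.util.Scanner s = new java.util.Scanner(
            new java.net.URL(address).openStream(), "UTF-8")) {
        s.useDelimiter("\\A"); // \A = beginning of input, so next() returns the whole stream
        return s.hasNext() ? s.next() : "";
    }
}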
The longer way (which allows for progress reporting and such):
String convertStreamToString(InputStream is) throws UnsupportedEncodingException {
BufferedReader reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
StringBuilder sb = new StringBuilder();
String line = null;
try {
while ((line = reader.readLine()) != null)
sb.append(line + "\n");
} catch (IOException e) {
// Handle exception
} finally {
try {
is.close();
} catch (IOException e) {
// Handle exception
}
}
return sb.toString();
}
and then call String response = convertStreamToString( inputStream );
How do I retrieve the contents of a file and assign it to a string?
The file is located on a https server and the content is plain text.
I suggest Apache HttpClient: easy, clean code and it handles the character encoding sent by the server -- something that java.net.URL/java.net.URLConnection force you to handle yourself:
String url = "http://example.com/file.txt";
HttpClient client = new DefaultHttpClient();
HttpResponse response = client.execute(new HttpGet(url));
String contents = EntityUtils.toString(response.getEntity());
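For reference, a compilable version of the same snippet. This assumes HttpClient 4.x, where these classes live in the org.apache.http packages (DefaultHttpClient was later deprecated in favor of HttpClientBuilder), and the URL is a placeholder:

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class FetchOverHttps {
    public static void main(String[] args) throws Exception {
        String url = "https://example.com/file.txt"; // placeholder for the https file
        HttpClient client = new DefaultHttpClient();
        HttpResponse response = client.execute(new HttpGet(url));
        // EntityUtils.toString() decodes using the charset from the Content-Type header
        String contents = EntityUtils.toString(response.getEntity());
        System.out.println(contents);
    }
}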
Look at the URL class in the Java API. Pretty sure all you need is there.
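A minimal sketch of that approach (an assumption here: the server sends UTF-8, since java.net.URLConnection leaves charset handling to the caller; UrlToString and fetch are made-up names):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class UrlToString {
    // Reads the whole resource into a String, assuming UTF-8 content.
    static String fetch(String address) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                new URL(address).openStream(), StandardCharsets.UTF_8))) {
            for (String line; (line = reader.readLine()) != null; ) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }
}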
First download the file from the server using the URL class of Java:
String url = "http://url";
java.io.BufferedInputStream in = new java.io.BufferedInputStream(
        new java.net.URL(url).openStream());
java.io.FileOutputStream fos = new java.io.FileOutputStream("file.txt");
java.io.BufferedOutputStream bout = new java.io.BufferedOutputStream(fos, 1024);
byte[] data = new byte[1024];
int count;
while ((count = in.read(data, 0, 1024)) >= 0)
{
    bout.write(data, 0, count); // write only the bytes actually read, not the whole buffer
}
bout.close();
in.close();
Then read the downloaded file using the FileInputStream class of Java:
File file = new File("file.txt");
int ch;
StringBuffer strContent = new StringBuffer("");
FileInputStream fin = null;
try {
fin = new FileInputStream(file);
while ((ch = fin.read()) != -1)
strContent.append((char) ch);
fin.close();
} catch (Exception e) {
System.out.println(e);
}
System.out.println(strContent.toString());
Best answer I found:
public static String readPage(String url, String delimiter)
{
    try
    {
        URLConnection connection = new URL(url).openConnection();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()));
        StringBuilder lines = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null)
        {
            if (lines.length() > 0) // compare content, not references: lines != "" was a bug
            {
                lines.append(delimiter);
            }
            lines.append(line);
        }
        reader.close();
        return lines.toString();
    }
    catch (Exception e)
    {
        return null;
    }
}
I have a strange problem with BufferedReader reading from the web.
The content of this URL is different in browsers than in the pasted Java code: in the content fetched using Java, the first query's result is empty, while in the browser it is not.
My code:
public static void main(String[] args) {
try {
String url = "https://api.freebase.com/api/service/mqlread?queries={\"q1\":{\"query\":[{\"name\":\"Pulp Fiction\",\"*\":null,\"type\":\"/film/film\"}]},\"q3\":{\"query\":[{\"name\":\"Portal\",\"*\":null,\"type\":\"/cvg/computer_videogame\"}]}}";
URL u = new URL(url);
System.out.println(u.toString());
URLConnection urlConn = u.openConnection();
InputStreamReader is = new InputStreamReader(urlConn.getInputStream());
BufferedReader br = new BufferedReader(is);
String line = null;
String data = "";
while ((line = br.readLine()) != null) {
data += line + "\n";
}
br.close();
System.out.println(data);
} catch (Exception ex) {
System.err.println(ex);
}
}
EDIT: Ahh, figured it out. No space characters in URLs. Just replace them with %20.
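For reference, a sketch of that fix (the plain replace produces the literal %20 mentioned above; alternatively java.net.URLEncoder encodes the whole parameter value, turning spaces into +, which is equally valid in a query string; FixUrl is a made-up class name):

import java.net.URLEncoder;

public class FixUrl {
    public static void main(String[] args) throws Exception {
        // Simplest fix: percent-encode the spaces by hand before calling new URL(...).
        String url = "https://api.freebase.com/api/service/mqlread?queries="
                + "{\"q1\":{\"query\":[{\"name\":\"Pulp Fiction\",\"*\":null,\"type\":\"/film/film\"}]}}";
        String fixed = url.replace(" ", "%20");

        // Or encode the parameter value properly; this also escapes the quotes and braces.
        String queries = "{\"q1\":{\"query\":[{\"name\":\"Pulp Fiction\",\"*\":null,\"type\":\"/film/film\"}]}}";
        String encoded = "https://api.freebase.com/api/service/mqlread?queries="
                + URLEncoder.encode(queries, "UTF-8");

        System.out.println(fixed);
        System.out.println(encoded);
    }
}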