I've modified the following code from an example on the internet. Currently it POSTs and prints the response line by line. How can I modify the code so it returns the entire response as one line, so I can parse it more easily?
static void updateIp() throws MalformedURLException, IOException {
    String urlParameters = "name=sub&a=rec_edit&id=9001";
    URL url = new URL("http://httpbin.org/post");
    URLConnection con = url.openConnection();
    con.setDoOutput(true);
    BufferedReader reader;
    try (OutputStreamWriter writer = new OutputStreamWriter(con.getOutputStream())) {
        writer.write(urlParameters);
        writer.flush();
        String line;
        reader = new BufferedReader(new InputStreamReader(con.getInputStream()));
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
    }
    reader.close();
}
Any help would be greatly appreciated!
You can't know in advance how many lines the response will span, so you need to join them together yourself using a StringBuilder:
static void updateIp() throws MalformedURLException, IOException {
    String urlParameters = "name=sub&a=rec_edit&id=9001";
    URL url = new URL("http://httpbin.org/post");
    URLConnection con = url.openConnection();
    con.setDoOutput(true);
    BufferedReader reader;
    try (OutputStreamWriter writer = new OutputStreamWriter(con.getOutputStream())) {
        writer.write(urlParameters);
        writer.flush();
        String line;
        StringBuilder urlResponse = new StringBuilder();
        reader = new BufferedReader(new InputStreamReader(con.getInputStream()));
        while ((line = reader.readLine()) != null) {
            urlResponse.append(line);
        }
        String response = urlResponse.toString();
        System.out.println(response);
    }
    reader.close();
}
The response variable now contains the entire output as a single line.
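As an aside, on Java 8+ the read-and-append loop can be replaced by BufferedReader.lines() with Collectors.joining(). A minimal sketch, using an in-memory reader in place of the connection stream so it runs standalone:

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.stream.Collectors;

public class JoinLines {
    public static void main(String[] args) {
        // Stand-in for new BufferedReader(new InputStreamReader(con.getInputStream()))
        BufferedReader reader = new BufferedReader(new StringReader("line one\nline two\nline three"));
        // Concatenate every line into a single string, like the StringBuilder loop above
        String response = reader.lines().collect(Collectors.joining());
        System.out.println(response); // prints: line oneline twoline three
    }
}
```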
I am trying to create a simple command-line program that determines whether a playlist is a media playlist or a master playlist based on the tag returned. Unfortunately, both types of playlist have the same first-line tag, so I was wondering: is there a way I could adjust my code to start reading the text at the second line?
private static String getPlaylistUrl(String theUrl) throws FileNotFoundException, MalformedURLException, IOException {
    String content = "";
    // Creates a URL variable
    URL url = new URL(theUrl);
    // Creates a urlConnection variable
    URLConnection urlConnection = (HttpURLConnection) url.openConnection();
    // Wraps the urlConnection in a BufferedReader
    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(urlConnection.getInputStream()));
    String line;
    while ((line = bufferedReader.readLine()) != null) {
        content += line + "\n";
    }
    bufferedReader.close();
    return content;
}
Just consume the first line before reading the rest. With the Java 8 Stream API, that is a skip(1):
private static String getPlaylistUrl(String theUrl) throws IOException {
    try (InputStream is = new URL(theUrl).openConnection().getInputStream();
         BufferedReader reader = new BufferedReader(new InputStreamReader(is));
         Stream<String> stream = reader.lines()) {
        return stream
                // skip the first line
                .skip(1)
                // join the remaining lines with a newline delimiter
                .collect(Collectors.joining("\n"));
    }
}
Skip the header like this:
String line;
boolean isHeader = true;
while ((line = bufferedReader.readLine()) != null) {
    if (isHeader) {
        isHeader = false; // skip the header line
    } else {
        content += line + "\n";
    }
}
bufferedReader.close();
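Both approaches amount to consuming the shared first-line tag and keeping the rest. A standalone sketch of the read-one-line-first variant, with a StringReader standing in for the playlist stream (the sample playlist content is made up for illustration):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class SkipFirstLine {
    public static void main(String[] args) throws IOException {
        // Stand-in for the reader wrapping urlConnection.getInputStream()
        BufferedReader bufferedReader = new BufferedReader(
                new StringReader("#EXTM3U\n#EXT-X-STREAM-INF:BANDWIDTH=1280000\nchunklist.m3u8"));
        bufferedReader.readLine(); // read and discard the shared first-line tag
        StringBuilder content = new StringBuilder();
        String line;
        while ((line = bufferedReader.readLine()) != null) {
            content.append(line).append('\n');
        }
        System.out.print(content); // everything from the second line onward
    }
}
```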
I am new to web services. I am calling a web service that should return JSON with the following code, but I am getting the response in XML format. When I try the same parameters using the Google REST API, the response is in JSON.
Any ideas what I am doing wrong?
public static String getSFData(String urlSuffix) throws MalformedURLException, ProtocolException, IOException {
    String header = "Basic XXXXX";
    URL url = new URL("https://api2.successfactors.eu/odata/v2/" + urlSuffix);
    HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
    connection.setRequestMethod("GET");
    connection.setRequestProperty("authorization", header);
    connection.setRequestProperty("Content-Type", "application/json");
    BufferedReader bf = new BufferedReader(new InputStreamReader(connection.getInputStream()));
    StringBuffer stringBuffer = new StringBuffer();
    String line;
    while ((line = bf.readLine()) != null) {
        stringBuffer.append(line);
    }
    String response = stringBuffer.toString();
    System.out.println("response" + response);
    return response;
}
UPDATE
You could try an API URL like http://api2.successfactors.eu/odata/v2/User?$format=json to get the data in JSON.
Use StringBuilder instead of StringBuffer; StringBuffer's synchronization is unnecessary here.
Try the following after setting the content type:
connection.connect();
int status = connection.getResponseCode();
switch (status) {
    case 200:
        BufferedReader bf = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        StringBuilder stringBuilder = new StringBuilder();
        String line;
        while ((line = bf.readLine()) != null) {
            stringBuilder.append(line);
        }
        String response = stringBuilder.toString();
        System.out.println("response : " + response);
        break;
}
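Another thing worth trying, assuming the server honors content negotiation, is sending an Accept header: Content-Type describes the request body you send, while Accept states the response format you want back. A sketch (the connection is only configured here, never actually opened):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class AcceptJson {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://api2.successfactors.eu/odata/v2/User");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        // Accept asks for JSON in the response; Content-Type would only describe a request body
        connection.setRequestProperty("Accept", "application/json");
        System.out.println(connection.getRequestProperty("Accept"));
    }
}
```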
I'm trying to get a JSON object from the URL:
http://www.alfanous.org/jos2?action=search&unit=aya&fuzzy=True&query=حم
However, when I run my code with that URL, I get an empty JSON response, while requesting the same URL from my browser returns the JSON filled in.
What is wrong with my code?
URL url = new URL("http://www.alfanous.org/jos2?action=search&unit=aya&fuzzy=True&query=حم");
URLConnection conn = url.openConnection();
InputStream is = conn.getInputStream();
Scanner scan = new Scanner(is);
while (scan.hasNextLine()) {
    System.out.println(scan.nextLine());
}
I also tried:
// Create URL object
URL obj = new URL("http://www.alfanous.org/jos2?action=search&unit=aya&fuzzy=True&query=حم");
// Communicate with the URL over HTTP
HttpURLConnection con = (HttpURLConnection) obj.openConnection();
// Optional; the default is GET
con.setRequestMethod("GET");
// Add a request header
con.setRequestProperty("User-Agent", "Mozilla/5.0");
// Read the response data
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
String inputLine;
StringBuffer response = new StringBuffer();
while ((inputLine = in.readLine()) != null) {
    response.append(inputLine);
}
in.close();
System.out.println(response.toString());
The solution was to encode the query parameter before building the URL and passing it to the URL constructor:
String query = URLEncoder.encode("حم", "UTF-8");
String urlstring = "http://www.alfanous.org/jos2?action=search&unit=aya&fuzzy=True&query=" + query;
URL url = new URL(urlstring);
Then continue with the previous code shown in the original post.
URLConnection conn = url.openConnection();
InputStream is = conn.getInputStream();
Scanner scan = new Scanner(is);
while (scan.hasNextLine()) {
System.out.println(scan.nextLine());
}
And the moral is: encode URL parameters before using them!
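For reference, URLEncoder.encode returns a new string (it does not modify its argument), and it is best applied to the parameter value only, since encoding the whole URL would also escape the :// and ? separators. The Arabic characters become percent-encoded UTF-8 bytes:

```java
import java.net.URLEncoder;

public class EncodeQueryParam {
    public static void main(String[] args) throws Exception {
        // Encode only the query value, then append it to the rest of the URL
        String query = URLEncoder.encode("حم", "UTF-8");
        System.out.println(query); // prints: %D8%AD%D9%85
        String urlstring = "http://www.alfanous.org/jos2?action=search&unit=aya&fuzzy=True&query=" + query;
        System.out.println(urlstring);
    }
}
```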
Try using a BufferedReader like this:
URL url = new URL("http://www.alfanous.org/jos2?action=search&unit=aya&fuzzy=True&query=حم");
URLConnection conn = url.openConnection();
BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream()));
String thisLine;
while ((thisLine = br.readLine()) != null) {
    System.out.println(thisLine);
}
I am having trouble getting the HTML text from this HTML file via FTP. I use Beautiful Soup to read HTML files via HTTP/HTTPS, but for some reason I cannot download/read from an FTP URL. Please help!
The URL appears in the code below.
Here is my code so far.
BufferedReader reader = null;
String total = "";
String line;
String ur = "ftp://ftp.legis.state.tx.us/bills/832/billtext/html/house_resolutions/HR00001_HR00099/HR00014I.htm";
try {
    URL url = new URL(ur);
    URLConnection urlc = url.openConnection();
    InputStream is = urlc.getInputStream(); // To download
    reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
    while ((line = reader.readLine()) != null)
        total += reader.readLine();
} finally {
    if (reader != null)
        try {
            reader.close();
        } catch (IOException logOrIgnore) {}
}
This code works for me on Java 1.7.0_25. Notice that you were storing only every second line, because you call reader.readLine() both in the loop condition and in the loop body.
public static void main(String[] args) throws MalformedURLException, IOException {
    BufferedReader reader = null;
    String total = "";
    String line;
    String ur = "ftp://ftp.legis.state.tx.us/bills/832/billtext/html/house_resolutions/HR00001_HR00099/HR00014I.htm";
    try {
        URL url = new URL(ur);
        URLConnection urlc = url.openConnection();
        InputStream is = urlc.getInputStream(); // To download
        reader = new BufferedReader(new InputStreamReader(is, "UTF-8"));
        while ((line = reader.readLine()) != null) {
            total += line;
        }
    } finally {
        if (reader != null) {
            try {
                reader.close();
            } catch (IOException logOrIgnore) {
            }
        }
    }
}
At first I thought this was related to the wrong path resolution discussed here, but that does not help.
I don't know exactly what is going wrong, but I can only reproduce this error against this FTP server with the macOS Java 1.6.0_33-b03-424; I can't reproduce it with Java 1.7.0_25. So perhaps check for a Java update.
Or you could use the Commons Net FTPClient to retrieve the file:
FTPClient client = new FTPClient();
client.connect("ftp.legis.state.tx.us");
client.enterLocalPassiveMode();
client.login("anonymous", "");
client.changeWorkingDirectory("bills/832/billtext/html/house_resolutions/HR00001_HR00099");
InputStream is = client.retrieveFileStream("HR00014I.htm");
// ... read the stream, then finish the transfer and clean up
is.close();
client.completePendingCommand();
client.logout();
client.disconnect();
I'm using the Stack Overflow JSON API to retrieve questions marked with a given tag.
I have this small program in Java which retrieves questions marked with the "Java" tag.
public static void main(String[] args) throws Exception {
    String urlString = "https://api.stackexchange.com/2.1/questions?order=desc&sort=votes&tagged=java&site=stackoverflow";
    URL url = new URL(urlString);
    BufferedReader reader = null;
    StringBuffer buffer = new StringBuffer();
    try {
        URLConnection connection = url.openConnection();
        InputStream isConn = connection.getInputStream();
        reader = new BufferedReader(new InputStreamReader(new GZIPInputStream(isConn)));
        String inputLine;
        while ((inputLine = reader.readLine()) != null) {
            buffer.append(inputLine);
        }
    } finally {
        if (reader != null) {
            reader.close();
        }
    }
    JSONObject jsonObject = new JSONObject(buffer.toString());
    JSONArray jsonArray = jsonObject.getJSONArray("items");
    System.out.println(buffer);
    System.out.println(jsonArray.length());
}
My problem is that it returns only 30 questions. Since my goal is to build a dataset for further textual analysis, I need to access way more than just 30 questions.
Is there a way to adjust the size of the returned list?
If not, how can I workaround this situation?
Notice the has_more property in the returned JSON; it indicates that more results are available. You can page through the results using the page and pagesize parameters in the URL. The issue I foresee is that this will pull a very large number of questions, since it iterates through all Java questions, so you may want to add a condition that stops after a certain number of pages. Here is a quick example:
public static void main(String[] args) throws Exception {
    BufferedReader reader = null;
    int page = 1;
    JSONObject jsonObject = null;
    try {
        while (jsonObject == null || jsonObject.getBoolean("has_more")) {
            String urlString = "https://api.stackexchange.com/2.1/questions?order=desc&sort=votes&tagged=java&site=stackoverflow&pagesize=100";
            urlString += "&page=" + page++;
            URL url = new URL(urlString);
            URLConnection connection = url.openConnection();
            InputStream isConn = connection.getInputStream();
            StringBuffer buffer = new StringBuffer();
            reader = new BufferedReader(new InputStreamReader(new GZIPInputStream(isConn)));
            String inputLine;
            while ((inputLine = reader.readLine()) != null) {
                buffer.append(inputLine);
            }
            jsonObject = new JSONObject(buffer.toString());
            JSONArray jsonArray = jsonObject.getJSONArray("items");
            System.out.println(buffer);
            System.out.println(jsonArray.length());
        }
    } finally {
        if (reader != null) {
            reader.close();
        }
    }
}
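The paging itself is just URL construction; a standalone sketch of how the page and pagesize parameters combine, with a fixed page cap standing in for the has_more check (no network call is made):

```java
public class PagedUrls {
    static String pagedUrl(int page) {
        return "https://api.stackexchange.com/2.1/questions"
                + "?order=desc&sort=votes&tagged=java&site=stackoverflow"
                + "&pagesize=100&page=" + page;
    }

    public static void main(String[] args) {
        // In the real program, loop while has_more is true (or until a page cap)
        int maxPages = 3;
        for (int page = 1; page <= maxPages; page++) {
            System.out.println(pagedUrl(page));
        }
    }
}
```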