I get a random error (not on every connection): java.lang.IllegalArgumentException: byteCount < 0: -291674 from this code:
URL connurl = new URL(newUrl);
conn = (HttpURLConnection) connurl.openConnection();
conn.setRequestProperty("User-Agent", "Android");
conn.setConnectTimeout(5000);
conn.setRequestMethod("GET");
if (finish != 0L)
    conn.setRequestProperty("Range", "bytes=" + finish + "-" + size);
conn.connect();

int responseCode = conn.getResponseCode();
String contentType = conn.getContentType();
int contentLength = conn.getContentLength();

if (!StringUtils.startsWith(String.valueOf(responseCode), "20")) {
    // not a 2xx response
}
else if (StringUtils.startsWithIgnoreCase(contentType, "text/html")) {
    // server returned an HTML page instead of the file
}
else if (contentLength < (size - finish)) {
    // server sent less data than the requested range
}
else {
    ReadableByteChannel channel = Channels.newChannel(conn.getInputStream());
    accessFile = new RandomAccessFile(filePath, "rwd");
    accessFile.seek(finish);
    fileStream = new FileOutputStream(filePath, true);
    ByteBuffer buffer = ByteBuffer.allocate(1024);
    int temp;
    while ((temp = channel.read(buffer)) > 0) { // <-- error on this line
    }
}
Error:
java.lang.IllegalArgumentException: byteCount < 0: -1647333
at com.android.okhttp.okio.RealBufferedSource.read(RealBufferedSource.java:46)
at com.android.okhttp.internal.http.HttpConnection$FixedLengthSource.read(HttpConnection.java:418)
at com.android.okhttp.okio.RealBufferedSource$1.read(RealBufferedSource.java:349)
at java.io.InputStream.read(InputStream.java:162)
at java.nio.channels.Channels$InputStreamChannel.read(Channels.java:306)
How can I fix it?
Thank you.
This is probably linked to this issue: https://github.com/square/okhttp/issues/3104
As suggested there, consider using okhttp-urlconnection, like:
OkHttpClient okHttpClient = new OkHttpClient();
URL.setURLStreamHandlerFactory(new OkUrlFactory(okHttpClient));
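Note that URL.setURLStreamHandlerFactory can only be called once per JVM; a second call throws an Error, so register the OkUrlFactory once, at application startup.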
Locally it works perfectly, but when I deploy it I get this error:
nested exception is java.net.ConnectException: Connection timed out (Connection timed out)
With https everything works normally, but http does not work and gives me the timeout error.
I also ran the same tests with RestTemplate and OkHttpClient and got the same result.
What am I doing wrong, or what should I configure to make it work? I would be very grateful for your help.
public String getFile(String baseName, String extensionFile) {
String rpt;
int BUFFER_SIZE = 4096;
String urlDownload = "http://datos.susalud.gob.pe/node/223/download";
try {
URL url = new URL(urlDownload);
HttpURLConnection httpConn = (HttpURLConnection) url.openConnection();
httpConn.setRequestMethod("GET");
httpConn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729)");
httpConn.setConnectTimeout(900000);
httpConn.setReadTimeout(7200000);
int responseCode = httpConn.getResponseCode();
if (responseCode == HttpURLConnection.HTTP_OK) {
String fileName = "";
String disposition = httpConn.getHeaderField("Content-Disposition");
if (disposition != null) {
// extracts file name from header field
int index = disposition.indexOf("filename=");
if (index > 0) {
// skip the 10 chars of 'filename="' and drop the trailing quote
fileName = disposition.substring(index + 10, disposition.length() - 1);
}
} else {
// extracts file name from URL
// fileName = urlCamaUci.substring(urlCamaUci.lastIndexOf("/") + 1,
// urlCamaUci.length());
LocalDateTime currentDate = LocalDateTime.now(ZoneOffset.of("-05:00"));
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
String formatDateTime = currentDate.format(formatter);
fileName = baseName + "_" + formatDateTime.replace(" ", "_").replace(":", "-") + "." + extensionFile;
}
InputStream inputStream = httpConn.getInputStream();
// String saveFilePath = PATH + File.separator + fileName;
File pathSave = new File(fileName);
FileOutputStream outputStream = new FileOutputStream(pathSave.getCanonicalPath());
int bytesRead = -1;
byte[] buffer = new byte[BUFFER_SIZE];
while ((bytesRead = inputStream.read(buffer)) != -1) {
outputStream.write(buffer, 0, bytesRead);
}
outputStream.close();
inputStream.close();
rpt = pathSave.getCanonicalPath();
} else {
rpt = "FAILED";
}
httpConn.disconnect();
} catch (Exception e) {
System.out.println("error search path");
System.out.println(e.getMessage());
rpt = "FAILED";
}
return rpt;
}
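One way to narrow this down (a hedged diagnostic sketch, not from the original code): a connect() timeout happens below the HTTP layer, so check whether plain TCP to the host is reachable from the deployment environment at all. If port 443 connects and port 80 times out, an egress firewall, proxy, or security group is blocking http, which would explain why the same code works locally:
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReachabilityCheck {
    public static void main(String[] args) throws Exception {
        // Try a raw TCP connect to both the http and https ports.
        for (int port : new int[] { 80, 443 }) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("datos.susalud.gob.pe", port), 5000);
                System.out.println("port " + port + ": reachable");
            } catch (Exception e) {
                System.out.println("port " + port + ": " + e);
            }
        }
    }
}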
I want to code a way to resume a file download in Java, and show the progress if possible.
With the following code the file ends up containing only the remaining part (totalSize - downloaded) instead of the complete download.
URL url = new URL(urlFile);
HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
File SDCardRoot = Environment.getExternalStorageDirectory();
file = new File(SDCardRoot,"/MySchool/"+Folder+"/"+nameBook.getText().toString()+".pdf");
urlConnection.setRequestProperty("Range", "bytes=" + file.length() + "-");
urlConnection.setDoOutput(true);
urlConnection.connect();
outputStream = new FileOutputStream(file);
InputStream inputStream = urlConnection.getInputStream();
//NB: with a Range request, getContentLength() is the size of the remaining part, not of the whole file
totalSize = urlConnection.getContentLength();
byte[] buffer = new byte[1024];
int bufferLength = 0;
while ( (bufferLength = inputStream.read(buffer)) > 0 ) {
outputStream.write(buffer, 0, bufferLength);
downloadedSize += bufferLength;
}
outputStream.close();
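For what it's worth, a likely reason the file holds only the remaining part: new FileOutputStream(file) without the append flag truncates the existing file before writing. A hedged one-line change:
outputStream = new FileOutputStream(file, true); // append to the partial file instead of truncating it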
This might be useful for pause and resume, but please specify the exact problem.
if (outputFileCache.exists())
{
    // resume: ask the server only for the bytes we do not have yet
    connection.setAllowUserInteraction(true);
    connection.setRequestProperty("Range", "bytes=" + outputFileCache.length() + "-");
}

connection.setConnectTimeout(14000);
connection.setReadTimeout(20000);
connection.connect();

if (connection.getResponseCode() / 100 != 2)
    throw new Exception("Invalid response code!");
else
{
    // Content-Range looks like "bytes 200-1000/67589"
    String connectionField = connection.getHeaderField("content-range");

    if (connectionField != null)
    {
        String[] connectionRanges = connectionField.substring("bytes ".length()).split("-");
        downloadedSize = Long.valueOf(connectionRanges[0]);
    }

    // no Content-Range: the server ignored the Range header, so start over
    if (connectionField == null && outputFileCache.exists())
        outputFileCache.delete();

    fileLength = connection.getContentLength() + downloadedSize;
    input = new BufferedInputStream(connection.getInputStream());
    output = new RandomAccessFile(outputFileCache, "rw");
    output.seek(downloadedSize);

    byte[] data = new byte[1024];
    int count = 0;
    int progress = 0;

    while ((count = input.read(data, 0, 1024)) != -1
            && progress != 100)
    {
        downloadedSize += count;
        output.write(data, 0, count);
        progress = (int) ((downloadedSize * 100) / fileLength);
    }

    output.close();
    input.close();
}
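One caveat to add (my addition, not in the original answer): this assumes the server honors Range whenever it answers 2xx; checking explicitly for 206 Partial Content is safer than relying on the content-range field alone. A small sketch:
// a 200 means the server ignored Range and is resending the whole
// file, so restart the local offset in that case
if (connection.getResponseCode() != HttpURLConnection.HTTP_PARTIAL) {
    downloadedSize = 0;
}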
I have coded a web crawler, but when crawling it downloads too many GBs of data.
I want to read only the text (avoiding images, etc.).
I use Boilerpipe to extract the content from the HTML.
Here is how I find the final redirected URL:
public String getFinalRedirectedUrl(String url) throws IOException{
HttpURLConnection connection;
String finalUrl = url;
int redirectCount = 0;
do {
connection = (HttpURLConnection) new URL(finalUrl)
.openConnection();
connection.setConnectTimeout(Config.HTTP_CONNECTION_TIMEOUT_TIME);
connection.setReadTimeout(Config.HTTP_READ_TIMEOUT_TIME);
connection.setInstanceFollowRedirects(false);
connection.setUseCaches(false);
connection.setRequestMethod("GET");
connection.connect();
int responseCode = connection.getResponseCode();
if (responseCode >= 300 && responseCode < 400) {
String redirectedUrl = connection.getHeaderField("Location");
if (null == redirectedUrl)
break;
finalUrl = redirectedUrl;
redirectCount++;
if(redirectCount > Config.MAX_REDIRECT_COUNT){
throw new java.net.ProtocolException("Server redirected too many times ("+Config.MAX_REDIRECT_COUNT+")");
}
} else{
break;
}
} while (connection.getResponseCode() != HttpURLConnection.HTTP_OK);
connection.disconnect();
return finalUrl;
}
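A side note on the resolver above (my observation, not from the original post): because it issues GET, every hop in the redirect chain downloads a full response body. Probing with HEAD would resolve the chain with headers only, e.g.:
connection.setRequestMethod("HEAD"); // headers only; if a server rejects HEAD (405), fall back to GET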
This is how I fetch the URL:
private HTMLDocument fetch(URL url) throws IOException{
final HttpURLConnection httpcon = (HttpURLConnection) url.openConnection();
httpcon.setInstanceFollowRedirects(true); // per-connection; setFollowRedirects is the static, JVM-wide switch
httpcon.setConnectTimeout(Config.HTTP_CONNECTION_TIMEOUT_TIME);
httpcon.setReadTimeout(Config.HTTP_READ_TIMEOUT_TIME);
httpcon.addRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:10.0.2) Gecko/20100101 Firefox/10.0.2");
final String ct = httpcon.getContentType();
Charset cs = Charset.forName("Cp1252");
if (ct != null) {
if(!ct.contains("text/html")){
System.err.println("Content type is:"+ct);
return new HTMLDocument("");
}
Matcher m = PAT_CHARSET.matcher(ct);
if(m.find()) {
final String charset = m.group(1);
try {
cs = Charset.forName(charset);
} catch (UnsupportedCharsetException | IllegalCharsetNameException e) {
// keep default
}
}
}
InputStream in = httpcon.getInputStream();
final String encoding = httpcon.getContentEncoding();
if(encoding != null) {
if("gzip".equalsIgnoreCase(encoding)) {
in = new GZIPInputStream(in);
} else {
System.err.println("WARN: unsupported Content-Encoding: "+encoding);
}
}
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] buf = new byte[4096];
int r;
while ((r = in.read(buf)) != -1) {
bos.write(buf, 0, r);
}
in.close();
final byte[] data = bos.toByteArray();
return new HTMLDocument(data, cs);
}
And to get the body using Boilerpipe:
HTMLDocument htmlDoc = fetch(new URL(url));
String body = ArticleExtractor.INSTANCE.getText(htmlDoc.toInputSource());
How can I reduce the amount of data downloaded?
I reduced the GBs downloaded and increased efficiency by using jsoup:
public HashMap<String, String> fetchWithJsoup(String url, String iniUrl, int redirCount)
throws IOException
{
HashMap<String, String> returnObj = new HashMap<>();
Connection con;
try{
con = Jsoup.connect(url);
}catch(IllegalArgumentException ex){
if(ex.getMessage() != null && ex.getMessage().contains("Malformed URL")){
System.err.println("Malformed URL:: "
+ex.getClass().getName()+": "+ex.getMessage()+" > "+iniUrl);
}else{
Logger.getLogger(ContentGetter.class.getName()).log(Level.SEVERE, null, ex);
}
returnObj.put(RETURN_FINAL_URL, url);
returnObj.put(RETURN_BODY, "");
return returnObj;
}
con.userAgent("Mozilla/5.0 (Windows NT 6.1; WOW64; rv:10.0.2) Gecko/20100101 Firefox/10.0.2");
con.timeout(Config.HTTP_CONNECTION_TIMEOUT_TIME);
Document doc = con.get();
String uri = doc.baseUri();
returnObj.put(RETURN_FINAL_URL, uri);
Elements redirEle = doc.head().select("meta[http-equiv=refresh]");
if(redirEle.size() > 0){
String content = redirEle.get(0).attr("content");
Pattern pattern = Pattern.compile("^.*URL=(.+)$", Pattern.CASE_INSENSITIVE);
Matcher matcher = pattern.matcher(content);
if (matcher.matches() && matcher.groupCount() > 0) {
String redirectUrl = matcher.group(1);
if(redirectUrl.startsWith("'")){
/*removes single quotes of urls within single quotes*/
redirectUrl = redirectUrl.replaceAll("(^')|('$)","");
}
if(redirectUrl.startsWith("/")){
String[] splitedUrl = url.split("/");
redirectUrl = splitedUrl[0]+"//"+splitedUrl[2]+redirectUrl;
}
if(!redirectUrl.equals(url)){
redirCount++;
if(redirCount < Config.MAX_REDIRECT_COUNT){
return fetchWithJsoup(redirectUrl, iniUrl, redirCount);
}
}
}
}
HTMLDocument htmlDoc = new HTMLDocument(doc.html());
String body = "";
try{
if(htmlDoc != null){
body = ArticleExtractor.INSTANCE.getText(htmlDoc.toInputSource());
}
}catch(OutOfMemoryError ex){
System.err.println("OutOfMemoryError while extracting text !!!!!!!!");
System.gc();
} catch (BoilerpipeProcessingException ex) {
Logger.getLogger(ContentGetter.class.getName()).log(Level.SEVERE, null, ex);
}
returnObj.put(RETURN_BODY, body);
return returnObj;
}
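Part of why jsoup downloads less (hedged, and version-dependent): Connection.get() rejects non-HTML content types with UnsupportedMimeTypeException unless ignoreContentType(true) is set, and it caps each response body via maxBodySize (a 1-2 MB default, depending on the jsoup version). The cap can be tightened further, for example:
con.maxBodySize(512 * 1024); // stop reading a page after 512 KB (illustrative value)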
I am trying to download images into a byte array, but it gives me an error message.
What should I do? Please help me, guys.
05-29 12:28:13.324: D/ImageManager(6527): Error: java.lang.IllegalArgumentException: Buffer capacity may not be negative
byte[] bg1 = getLogoImage("http://onlinemarketingdubai.com/hotelmenu/images/874049310_gm.png");
private byte[] getLogoImage(String url){
    try {
        Log.d("Url", url);
        URL imageUrl = new URL(url);
        // a single connection: read the length and the stream from the same request
        HttpURLConnection conn = (HttpURLConnection) imageUrl.openConnection();
        conn.setDoInput(true);
        conn.connect();
        int length = conn.getContentLength(); // -1 when the server sends no Content-Length
        InputStream is = conn.getInputStream();
        BufferedInputStream bis = new BufferedInputStream(is);
        ByteArrayBuffer baf = new ByteArrayBuffer(length);
        int current = 0;
        while ((current = bis.read()) != -1) {
            baf.append((byte) current);
        }
        return baf.toByteArray();
    } catch (Exception e) {
        Log.d("ImageManager", "Error: " + e.toString());
    }
    return null;
}
Take a look at the ByteArrayBuffer class:
public final class ByteArrayBuffer {
private byte[] buffer;
private int len;
public ByteArrayBuffer(int capacity) {
super();
if (capacity < 0) {
throw new IllegalArgumentException("Buffer capacity may not be negative");
}
You are initializing it by passing the length value as the buffer's capacity, which you acquired from:
int length = conn.getContentLength();
So the problem comes from the content length, which I believe is -1 since the length is not known: the server may not be setting a "Content-Length" header in the response message.
Take a look at this answer for solving this.
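A minimal sketch of that approach (mine, not from the linked answer): let the buffer grow as bytes arrive instead of pre-sizing it from Content-Length, which may be -1:
InputStream is = conn.getInputStream();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] chunk = new byte[4096];
int n;
// read until end of stream; works whether or not Content-Length was sent
while ((n = is.read(chunk)) != -1) {
    baos.write(chunk, 0, n);
}
byte[] imageBytes = baos.toByteArray();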
I implemented a Java network program that reads a txt file over HTTP. I was able to handle the 200 OK scenario, but
when I try a "Partial GET" request, the connection again returns 200 OK instead of 206 Partial Content. I could not find out why this happens; I searched the internet but could not find an answer. The code I wrote is:
import java.io.*;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
public class FileDownloader {
public static void main(String[] args){
try{
int bufSize = 8 * 1024;
URL url = null;
BufferedInputStream fromURL = null;
BufferedOutputStream toFile = null;
/*
if(args[1].charAt(0) == 'h' && args[1].charAt(1) == 't' &&
args[1].charAt(2) == 't' && args[1].charAt(3) == 'p'){
url = new URL(args[1]);
}
else{
url = new URL("http://" + args[1]);
}
*
*/
// to be removed
url = new URL("http://www.cs.bilkent.edu.tr/~morhan/cs421/file_2.txt");
// Connecting to the URL with HttpURLConnection
HttpURLConnection con = (HttpURLConnection) url.openConnection();
// Setting up the outputfileName
String outputfileName = url.getPath().substring(url.getPath().lastIndexOf("/") + 1);
File outputFile = new File(outputfileName);
// Scenario - 1 (200 OK response from the server)
if(args.length == 3){ // should be 3
con.setRequestMethod("GET");
System.out.println("Size of the file is: " + con.getContentLength());
fromURL = new BufferedInputStream(con.getInputStream(), bufSize);
toFile = new BufferedOutputStream(new FileOutputStream(outputFile), bufSize);
if(con.getResponseCode() == HttpURLConnection.HTTP_OK){
// READING BYTE BY BYTE HERE
int read = -1;
byte[] buf = new byte[bufSize];
while ((read = fromURL.read(buf, 0, bufSize)) >= 0) {
toFile.write(buf, 0, read);
}
toFile.close();
System.out.println("ok");
}
// Scenario - 2 (206 Partial Content response from the server)
}else if(args.length == 0){ // should be 5
con.setRequestMethod("HEAD");
if(con.getResponseCode() == HttpURLConnection.HTTP_OK){
System.out.println("Size of the file is: " + con.getContentLength());
int startRange = 0;   // Byte.parseByte(args[3]);
int finishRange = 24; // Byte.parseByte(args[4]);
if(startRange < 0 || finishRange > con.getContentLength() - 1
|| startRange > finishRange){
System.out.println("Range is not OK.");
}else{
con.setRequestMethod("GET");
// I am Setting the range here, however the program
// always returns 200 OK message instead of a 206 one
con.setRequestProperty("Range: ", "bytes=0-20");
System.out.println(con.getRequestMethod());
fromURL = new BufferedInputStream(con.getInputStream(), bufSize);
toFile = new BufferedOutputStream(new FileOutputStream(outputFile), bufSize);
System.out.println(con.getResponseCode());
if(con.getResponseCode() == HttpURLConnection.HTTP_PARTIAL){
// NOT DOING THE IF STATEMENT
System.out.println("aaaa");
}
System.out.println("bbbb");
}
}
}else{
System.out.println("Wrong argument count.");
}
}catch(MalformedURLException mue){
mue.printStackTrace();
}catch (IOException ioe) {
ioe.printStackTrace();
}
}
}
Can anyone help me with this?
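For comparison, a hedged sketch of a partial GET that servers will recognize: the header name must be "Range" with no colon or trailing space, and it must be set on a fresh connection, since an HttpURLConnection cannot issue a second request once a response has been read:
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeProbe {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://www.cs.bilkent.edu.tr/~morhan/cs421/file_2.txt");
        // a new connection, separate from the one that served the HEAD request
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        con.setRequestProperty("Range", "bytes=0-20"); // header name without ": "
        int code = con.getResponseCode();
        System.out.println(code == HttpURLConnection.HTTP_PARTIAL
                ? "206 Partial Content" : "Got " + code);
        con.disconnect();
    }
}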