I’m using java.net.URL.openStream() to access an HTTPS resource. The returned stream is incomplete for some URLs: for the example below it yields a 1,105,724-byte file, whereas the same URL accessed from a browser yields a 5,755,858-byte file (even when "disabling" Content-Encoding).
And it doesn’t even throw an exception.
What am I missing?
import static java.nio.file.Files.copy;

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Paths;

public class Test {
    public static void main(String... args) throws IOException {
        try (final InputStream in = new URL(
                "https://upload.wikimedia.org/wikipedia/commons/9/95/Germany_%28orthographic_projection%29.svg").openStream()) {
            copy(in, Paths.get("germany.svg"));
        }
    }
}
Edit
I’ve tested this code many times (on different networks, but always on JRE 1.8.0_60 / Mac OS X 10.11.4), and sometimes it suddenly "starts to work".
However, switching to another of my problematic URLs (e.g. "https://upload.wikimedia.org/wikipedia/commons/c/ce/Andorra_in_Europe_%28zoomed%29.svg") lets me reproduce the issue.
Does this mean it is a server-side issue? I’ve never seen it happen in a browser, though.
It's working fine for me.
As others have suggested, there may be a problem with your network; try connecting to a different network.
package test;

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class TestMain2 {
    public static void main(String[] args) {
        System.out.println("Started");
        try (final InputStream in = new URL(
                "https://upload.wikimedia.org/wikipedia/commons/9/95/Germany_%28orthographic_projection%29.svg")
                .openStream()) {
            Path outputFile = Paths.get("test.svg");
            Files.copy(in, outputFile, StandardCopyOption.REPLACE_EXISTING);
            System.out.println("Output file size : " + outputFile.toFile().length());
            System.out.println("Finished");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
Output
Started
Output file size : 5755858
Finished
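If you want to detect a truncated transfer instead of silently ending up with a short file, one approach (just a sketch) is to compare the server's Content-Length header with the number of bytes actually copied:

import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class VerifiedDownload {
    public static void main(String[] args) throws IOException {
        URLConnection conn = new URL(
                "https://upload.wikimedia.org/wikipedia/commons/9/95/Germany_%28orthographic_projection%29.svg")
                .openConnection();
        long expected = conn.getContentLengthLong(); // -1 if the server sent no Content-Length
        try (InputStream in = conn.getInputStream()) {
            long copied = Files.copy(in, Paths.get("germany.svg"), StandardCopyOption.REPLACE_EXISTING);
            if (expected >= 0 && copied != expected) {
                throw new IOException("Truncated download: got " + copied + " of " + expected + " bytes");
            }
        }
    }
}

That way a short read at least fails loudly rather than producing a partial SVG.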
Related
documents4j looks like a great API and I'd love to use it. I just want to bulk-convert docx to pdf on my Mac (with Microsoft Office installed).
I have written this, but I get the error that LocalConverter cannot be resolved. What am I doing wrong? Have I imported the correct jars?
package Input;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URISyntaxException;
import java.sql.SQLException;
import java.text.ParseException;
import java.util.concurrent.Future;

import org.xml.sax.SAXException;

import com.documents4j.api.DocumentType;
import com.documents4j.api.IConverter;

public class TBB {
    public static FileInputStream convert(InputStream docxInputStream) throws FileNotFoundException {
        FileInputStream inputStream = null;
        try (OutputStream outputStream = new FileOutputStream(new File("/Users/sebastianzeki/mydoc.docx"))) {
            IConverter converter = LocalConverter.builder().build();
            converter
                .convert(docxInputStream).as(DocumentType.DOCX)
                .to(outputStream).as(DocumentType.PDF)
                .prioritizeWith(1000).schedule();
            inputStream = new FileInputStream("/Users/sebastianzeki/mydoc.docx");
        } catch (Exception e) {
            e.getMessage();
        }
        return inputStream;
    }
}
documents4j's local conversion is not compatible with macOS. Look at your stack trace and you will find something like:

com.documents4j.throwables.ConverterAccessException: Unable to run script: /Applications/Tomcat-9.0.1/temp/1564683313105-0/word_start775650809.vbs

documents4j runs a generated VBScript under the hood, and as far as I know there is no way for macOS to run VBScript. I had to install a Windows VM with Word on my server and use the documents4j remote API to call into the Windows VM to do the conversions.
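The remote setup looks roughly like this (a sketch only; the server URI, timeout, and file paths are placeholders, and a documents4j conversion server must already be running on the Windows machine):

import java.io.File;
import java.util.concurrent.TimeUnit;

import com.documents4j.api.DocumentType;
import com.documents4j.api.IConverter;
import com.documents4j.job.RemoteConverter;

public class RemoteConversionSketch {
    public static void main(String[] args) {
        // Points at a documents4j conversion server running on the Windows VM (placeholder URI).
        IConverter converter = RemoteConverter.builder()
                .baseUri("http://windows-vm:9998")
                .requestTimeout(10, TimeUnit.MINUTES)
                .build();
        try {
            boolean ok = converter
                    .convert(new File("/Users/sebastianzeki/mydoc.docx")).as(DocumentType.DOCX)
                    .to(new File("/Users/sebastianzeki/mydoc.pdf")).as(DocumentType.PDF)
                    .execute();
            System.out.println("Conversion succeeded: " + ok);
        } finally {
            converter.shutDown();
        }
    }
}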
My code here compiles correctly, but I am running into the problem that my ArrayList of BufferedImages is always empty. Honestly, I don't have any knowledge of ImageIO or the like!
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;

import javax.imageio.IIOException;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;

import net.sf.javavp8decoder.imageio.WebPImageReader;
import net.sf.javavp8decoder.imageio.WebPImageReaderSpi;

class MyProj {
    public static void main(String[] args) throws IOException {
        System.out.println("Main");
        ArrayList<BufferedImage> collectedImg = getFrames();
    }

    static ArrayList<BufferedImage> getFrames() throws IIOException {
        File MyWebM = new File("/users/case3/mcclusm4/workspace/LineTech/src/goal.webm");
        ArrayList<BufferedImage> frames = new ArrayList<BufferedImage>();
        try {
            ImageReader ir = new WebPImageReader(new WebPImageReaderSpi());
            ir.setInput(ImageIO.createImageInputStream(MyWebM));
            for (int i = 0; i < ir.getNumImages(true); i++)
                frames.add(ir.read(i));
        } catch (IOException e) {}
        return frames;
    }
}
First of all, don't catch an exception and do nothing:
catch (Exception e) {}
When an exception is caught like this, the code silently fails without any information.
Change the catch block to print the stack trace (e.printStackTrace()) and post it.
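Applied to your getFrames() method, that is just (a minimal sketch):

try {
    ImageReader ir = new WebPImageReader(new WebPImageReaderSpi());
    ir.setInput(ImageIO.createImageInputStream(MyWebM));
    for (int i = 0; i < ir.getNumImages(true); i++)
        frames.add(ir.read(i));
} catch (IOException e) {
    // Print the full stack trace instead of swallowing the error.
    e.printStackTrace();
}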
Disclaimer: I have never tested the code in question myself. But...
From looking at the source code of net.sf.javavp8decoder.imageio.WebPImageReader, it cannot decode WebM files; it only supports single-frame WebP files.
If you stop swallowing and ignoring the exception, as suggested by @robocoder, you should get an IIOException with the message "Bad WEBP signature!".
I'm learning Java and sometimes I have trouble retrieving the information I need from objects...
When I debug my code I can see a path property in targetFile, but I don't know how to get it in my code.
[Debugger screenshot showing the targetFile field and its path property (source: toile-libre.org)]
This is my complete code:
package com.example.helloworld;

import com.github.axet.wget.WGet;
import com.github.axet.wget.info.DownloadInfo;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.io.File;
import java.io.IOException;
import java.net.URL;

public class HelloWorld {
    public static void main(String[] args) throws IOException {
        nodejs();
    }

    public static void nodejs() throws IOException {
        // Scrape the download URL.
        Document doc = Jsoup.connect("http://nodejs.org/download").get();
        Element link = doc.select("div.interior:nth-child(2) > table:nth-child(2) > tbody:nth-child(1) > tr:nth-child(1) > td:nth-child(3) > a:nth-child(1)").first();
        String url = link.attr("abs:href");

        // Print the download URL.
        System.out.println(url);

        // Download the file via the scraped URL.
        URL download = new URL(url);
        File target = new File("/home/lan/Desktop/");
        WGet w = new WGet(download, target);
        w.download();

        // Get the targetFile property
        // ???
    }
}
How can I get this value?
I do not know your code, but the field you are interested in may be encapsulated and thus not accessible from your code, even though the debugger can see it at runtime :)
Update:
https://github.com/axet/wget/blob/master/src/main/java/com/github/axet/wget/WGet.java
The field has default (package-private) access, so you can only access it from within that package.
This can be frustrating at times, but you should ask yourself why the designers of this class decided to hide this field.
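If you really do need the value, reflection can read a package-private field. A rough sketch (assuming the field is named targetFile and holds a java.io.File, as your screenshot suggests; it is not public API and may change between versions):

import java.io.File;
import java.lang.reflect.Field;

import com.github.axet.wget.WGet;

public class TargetFileSketch {
    // Reads WGet's package-private "targetFile" field via reflection (field name and type assumed).
    static File targetFileOf(WGet w) throws ReflectiveOperationException {
        Field f = WGet.class.getDeclaredField("targetFile");
        f.setAccessible(true);
        return (File) f.get(w);
    }
}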
I want to know the JUnit test cases for the url() method in the following program; please help. I have not included the main method here. This code reads HTML from a website and saves it to a file on the local machine.
package Java3;

import java.io.BufferedReader;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.logging.Level;
import java.util.logging.Logger;

public class Urltohtml {
    private String str;

    public void url() throws IOException {
        try {
            FileOutputStream f = new FileOutputStream("D:/File1.txt");
            PrintStream p = new PrintStream(f);
            URL u = new URL("http://www.google.com");
            BufferedReader br = new BufferedReader(new InputStreamReader(u.openStream()));
            //str = br.readLine();
            while ((str = br.readLine()) != null) {
                System.out.println(str + "\n");
                p.println(str);
            }
        } catch (MalformedURLException ex) {
            Logger.getLogger(Urltohtml.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
I would rename that class to UrlToHtml and write a single JUnit test class UrlToHtmlTest.
Part of the reason why you're having problems testing this is that the class is poorly designed and implemented:
You should pass in the URL you want to scrape, not hard code it.
You should return the content as a String or List, not print it to a file.
You might want to throw that exception rather than catch it. Your logging isn't exactly "handling" the exceptional situation. Let it bubble out and have clients log if they wish.
You don't need that private data member; return the contents. That lets you make this method static.
Good names matter. I don't like what you have for the class or the method.
Why are you writing this when you could use a library to do it?
Here's what the test class might look like:
import org.junit.Assert;
import org.junit.Test;

public class UrlToHtmlTest {

    @Test
    public void testUrlToHtml() {
        try {
            String testUrl = "http://www.google.com";
            String expected = "";
            String actual = UrlToHtml.url(testUrl);
            Assert.assertEquals(expected, actual);
        } catch (Exception e) {
            e.printStackTrace();
            Assert.fail();
        }
    }
}
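For reference, a refactored UrlToHtml along the lines above might look like this (just a sketch; it takes the URL as a parameter, returns the page contents as a String, and lets IOException bubble out):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class UrlToHtml {

    // Fetches the given URL and returns its contents as a String.
    public static String url(String address) throws IOException {
        StringBuilder content = new StringBuilder();
        URL u = new URL(address);
        try (BufferedReader br = new BufferedReader(new InputStreamReader(u.openStream()))) {
            String line;
            while ((line = br.readLine()) != null) {
                content.append(line).append(System.lineSeparator());
            }
        }
        return content.toString();
    }
}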
I have an Axis M1114 IP camera.
I want to display live streaming as well as record the stream using Java. I tried the following code, but the frame rate is very low.
Please suggest an API that gives me a higher frame rate and recording functionality.
import java.io.File;
import java.net.URL;

import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.*;
import com.googlecode.javacv.cpp.*;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;
import static com.googlecode.javacv.cpp.opencv_calib3d.*;
import static com.googlecode.javacv.cpp.opencv_objdetect.*;

import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.InputStream;

import javax.imageio.ImageIO;

import demo.authenticator.MyAuthenticator;

import java.net.Authenticator;
import java.net.MalformedURLException;

import org.jcodec.api.SequenceEncoder;

public class Demo {
    public static void main(String[] args) throws IOException {
        CanvasFrame CamWindow = new CanvasFrame("Camera");
        int i = 0, j = 0;
        URL url = null;
        SequenceEncoder encoder = new SequenceEncoder(new File("video.mp4"));
        try {
            // Create a new URL object from the URL-string of our camera.
            url = new URL("http://192.168.168.92/axis-cgi/jpg/image.cgi");
        } catch (MalformedURLException m) {
            m.getMessage();
        }

        // Check if this camera requires authentication.
        // If so, then create and set the authenticator object.
        MyAuthenticator authenticator = new MyAuthenticator("root", "pass");
        Authenticator.setDefault(authenticator);

        Long stime = System.currentTimeMillis();
        while (true) {
            i++;
            //InputStream is = url.openStream();
            BufferedImage image = ImageIO.read(url);
            CamWindow.showImage(image);
            // is.close();
            /* if (i < 100) {
                encoder.encodeImage(image);
            } else {
                if (j == 0) {
                    encoder.finish();
                    j++;
                    System.out.println("video saved");
                    System.out.println((System.currentTimeMillis() - stime) / 1000 + "seconds");
                }
            } */
            System.out.println((System.currentTimeMillis() - stime));
        }
    }
}
The Axis camera API is here: http://www.axis.com/files/manuals/vapix_video_streaming_52937_en_1307.pdf
You need to use this:
http://<camera-ip>/axis-cgi/mjpg/video.cgi
instead of the image URL you have now. Getting a still image from the Axis camera will be very choppy. You need to use the Motion JPEG feed it spits out.
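A rough, untested sketch of reading that MJPEG feed with the javacv classes you already import (the grabber API differs slightly between javacv versions, and the camera address and credentials are placeholders):

import com.googlecode.javacv.CanvasFrame;
import com.googlecode.javacv.FFmpegFrameGrabber;
import com.googlecode.javacv.cpp.opencv_core.IplImage;

public class MjpegDemo {
    public static void main(String[] args) throws Exception {
        CanvasFrame window = new CanvasFrame("Camera");
        // MJPEG feed of the Axis camera (placeholder address; credentials embedded in the URL).
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(
                "http://root:pass@192.168.168.92/axis-cgi/mjpg/video.cgi");
        grabber.start();
        while (window.isVisible()) {
            // Pulls frames from the continuous stream instead of requesting one snapshot per iteration.
            IplImage image = grabber.grab();
            if (image != null) {
                window.showImage(image);
            }
        }
        grabber.stop();
        window.dispose();
    }
}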
I have also looked into these solutions, and one good API I found is Webcam Capture. I rate it good for a few reasons:
It is actively updated
It is open source
It has many examples
And, most important, support from its developers is fast and accurate compared with other camera APIs.
Hope this helps.
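A minimal sketch of grabbing a frame with Webcam Capture (this uses the default local webcam; for your Axis camera you would plug in the project's IP camera driver add-on instead):

import java.awt.image.BufferedImage;
import java.io.File;

import javax.imageio.ImageIO;

import com.github.sarxos.webcam.Webcam;

public class WebcamCaptureSketch {
    public static void main(String[] args) throws Exception {
        // Grabs a single frame from the default webcam and writes it to disk.
        Webcam webcam = Webcam.getDefault();
        webcam.open();
        BufferedImage image = webcam.getImage();
        ImageIO.write(image, "PNG", new File("frame.png"));
        webcam.close();
    }
}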