Apache FOP: fox:external-document and fo:external-graphic - java

I've got a problem with Apache FOP when creating PDF files. I'm trying to include an image as a header and another PDF as an attachment.
Running my application (compiled with Java 8) on Windows creates a PDF with the image header and the PDF attachment; running the same jar on an AS400 machine produces a PDF with only the image header and no attachment.
My jar is built with the shade plugin; is there a particular order of pom dependencies needed to obtain the same result on Windows and AS400?
Thanks in advance.
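One thing worth checking (an assumption, since the pom isn't shown): FOP registers its fox: extensions through META-INF/services provider files, and the Maven shade plugin by default lets one jar's copy of a same-named service file overwrite another's, which is why dependency order can appear to matter. Merging the service files with the ServicesResourceTransformer usually removes that sensitivity; a minimal sketch of the shade configuration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <!-- Merge META-INF/services files from all jars instead of keeping only one -->
      <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
    </transformers>
  </configuration>
</plugin>
```

If the extension provider file was being dropped from the shaded jar, that could explain the attachment silently disappearing on one platform.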

Related

File content is blank when a raw file is downloaded from GitLab using the GitLab API

Using the GitLab API, I am able to download a raw file and write it to a new file using Angular 8. After writing, when I extract that file it shows blank content. Please advise: my requirement is to download a zip file from a GitLab repository into a file explorer. I have used the API below.
Get raw file from repository
GET /projects/:id/repository/files/:file_path/raw
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb/raw?ref=master"
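A common cause of a "blank" archive after download is handling the response as text instead of raw bytes, which corrupts the zip structure. As an illustration (in Java, since this page is mostly Java; the class and method names here are mine, not from any library), a binary-safe copy looks like:

```java
import java.io.*;

public class BinaryCopy {
    // Copies an InputStream to a file byte-for-byte. Downloaded archives must be
    // written this way; decoding the response body as text corrupts the zip.
    public static long copy(InputStream in, File out) throws IOException {
        try (OutputStream os = new FileOutputStream(out)) {
            byte[] buf = new byte[8192];
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                os.write(buf, 0, n);
                total += n;
            }
            return total;
        }
    }

    public static void main(String[] args) throws IOException {
        // Zip magic bytes plus arbitrary binary content, stood in for a real download.
        byte[] data = {0x50, 0x4B, 0x03, 0x04, (byte) 0xFF, 0x00};
        File f = File.createTempFile("raw", ".zip");
        long written = copy(new ByteArrayInputStream(data), f);
        System.out.println(written); // prints 6
    }
}
```

The same principle applies in Angular: request the raw endpoint with a binary response type rather than the default text.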

Apache PDFBox Removes Horizontal Lines When Converting to PNG

I have a PDF from which the horizontal and vertical lines disappear when I render it to a PNG. This is the PDF and what it should look like: https://drive.google.com/file/d/1sAXwnaoZ-QJn1Kbpw85hhzV_X5zwgfkA/view?usp=sharing
And here is the PNG of the PDF, rendered using PDFBox 2.0.13:
Why are those lines removed, and how can I get them to be rendered in the PNG?
The problem (most likely) is that you have no Java ImageIO plugin for the JBIG2 image format installed, as the missing lines and headings are actually JBIG2 images.
When I run the PDFBox PDF Debugger without such a plugin and open your PDF in it, it does not display the missing parts either; having added such a plugin to its classpath, it suddenly does display them.
For more details on the PDFBox dependencies, please read the PDFBox 2.0 Dependencies page. In particular:
JAI Image I/O
PDF supports embedded image files, however support for some formats require third party libraries which are distributed under terms incompatible with the Apache 2.0 license:
Reading JBIG2 images: JBIG2 ImageIO
Reading JPEG 2000 (JPX) images: JAI Image I/O Tools Core
Writing TIFF images requires JAI Image I/O Tools Core also.
These libraries are optional and will be loaded if present on the classpath, otherwise support for these image formats will be disabled and a warning will be logged when an unsupported image is encountered.
Maven dependencies for these components can be found in parent/pom.xml. Change the scope of the components if needed. Please make sure that any third party licenses are suitable for your project.
To include the JBIG2 library the following part can be included in your project pom.xml:
<dependency>
  <groupId>org.apache.pdfbox</groupId>
  <artifactId>jbig2-imageio</artifactId>
  <version>3.0.0</version>
</dependency>
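A quick way to verify at runtime whether the plugin is actually on the classpath is to ask ImageIO for a JBIG2 reader. A small diagnostic sketch (the format name "JBIG2" is an assumption about what the plugin registers):

```java
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import java.util.Iterator;

public class Jbig2Check {
    // Returns true if an ImageIO reader for the given format name is registered.
    public static boolean hasReader(String format) {
        Iterator<ImageReader> it = ImageIO.getImageReadersByFormatName(format);
        return it.hasNext();
    }

    public static void main(String[] args) {
        // A plain JRE ships no JBIG2 reader; this prints "false" unless a
        // plugin such as jbig2-imageio is on the classpath.
        System.out.println(hasReader("JBIG2"));
    }
}
```

If this prints false in the environment where the PNGs are produced, the dependency above is not reaching the runtime classpath.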

JasperReports Server: How to export a report as an HTML file using a URL

I need to export a report to a plain HTML file without the JasperReports viewport.
If I use &output=pdf in the URL I get a PDF file, and with &output=xls an XLS file, but &output=html shows the HTML inside the JasperReports viewport. I want a plain HTML file as well.
How can I achieve that? I am using the HTTP API.
From the documentation:
The following example executes the same report as shown in the previous section, but also passes 4012 as an input control parameter and exports to PDF instead of HTML:
http://<host>:<port>/<context>/flow.html?_flowId=viewReportFlow&reportUnit=
/supermart/Details/CustomerDashboard&customerID=4012&output=pdf
How to export JasperReport to HTML?
Related question, however not helpful to me.
You would have to use the REST API to export:
http://localhost:8080/jasperserver-pro/rest_v2/reports/repo/path/to/report.html
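The rest_v2 URL is just the server base, the fixed reports path, the repository path of the report, and the desired format as the file extension. A tiny helper making that pattern explicit (the class and method names here are illustrative, not from the JasperReports API):

```java
public class JasperExportUrl {
    // Builds a JasperReports Server rest_v2 export URL: the output format
    // (html, pdf, xls, ...) is appended as the extension of the report path.
    static String exportUrl(String base, String reportPath, String format) {
        return base + "/rest_v2/reports" + reportPath + "." + format;
    }

    public static void main(String[] args) {
        System.out.println(
            exportUrl("http://localhost:8080/jasperserver-pro", "/repo/path/to/report", "html"));
    }
}
```

Swapping "html" for "pdf" or "xls" in the extension selects the other export formats the same way.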

Using Solr CELL's ExtractingRequestHandler to index/extract files from package formats

Can you use ExtractingRequestHandler and Tika with any of the compressed file formats (zip, tar, gz, etc.) to extract the content for indexing?
I am sending Solr the archived.tar file using curl:
curl "http://localhost:8983/solr/update/extract?literal.id=doc1&fmap.content=body_texts&commit=true" \
  -H 'Content-type:application/octet-stream' --data-binary "@/home/archived.tar"
The result I get when I query the document is that the file names inside the archive are indexed as the "body_texts", but the content of those files is not extracted or included. This is not the behavior I expected. Ref:
http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Content-Extraction-Tika#article.tika.example.
When I send one of the actual documents inside the archive using the same curl command, the extracted content is then stored in the "body_texts" field. Am I missing a step for the compressed files?
I have added all the extraction dependencies as indicated by mat in
http://outoftime.lighthouseapp.com/projects/20339/tickets/98-solr-cell and
am able to successfully extract data from MS Word, PDF, and HTML documents.
I'm using the following library versions:
Solr 1.4.0, Solr Cell 1.4.1, with Tika Core 0.4
Given everything I have read, this version of Tika should support extracting
data from all files within a compressed file. Any help or suggestions would
be appreciated.
The short answer: Solr Cell 1.4.1 and Tika Core 0.6.
The long answer: After a lot of headaches I was able to get this working. I'll answer it both for people using Solr directly and for people using Solr with the Ruby library Sunspot (which was my situation).
Here is what I did: I used the https://github.com/tomasc/sunspot_cell plugin to extend Sunspot and give it the attachment feature. (Skip this step if you're not using Ruby/Sunspot.)
Solr Cell v1.4.1 works for individual files but not with compressed files, so I had to explore a bit. I downloaded the v1.4.1 codebase from http://lucene.apache.org/solr/ and grabbed dist/apache-solr-cell-1.4.1.jar; then I had to pull down the Tika libraries from the 1.5 branch: http://svn.apache.org/viewvc/lucene/solr/branches/branch-1.5-dev/contrib/extraction/lib/.
You can download each file individually, or use svn to check out the whole branch:
svn co http://svn.apache.org/repos/asf/lucene/solr/branches/branch-1.5-dev
Or just check out the library folder:
svn co http://svn.apache.org/repos/asf/lucene/solr/branches/branch-1.5-dev/contrib/extraction/lib/

Java applet error

I am doing a project on applets. I designed the applet using NetBeans. After building the project in NetBeans, I took the "classes" directory and an .html file from the "build" directory and moved them to a new directory. This .html file includes the applet, and it displays the applet correctly when viewed from my desktop.
I uploaded the "classes" folder and the .html file to my free server (host4ufree.com) using FileZilla. If I try to view the webpage online, I get the following error instead of the applet being displayed:
java.lang.ClassFormatError: Extra bytes at the end of class file
I am using JDK 1.6.0 update 18, and I have uploaded the files with FileZilla in both ASCII and binary transfer modes, yet I am still unable to resolve the error. Does anybody know the solution? Is there something wrong with the manner in which I'm trying to add the applet to my webpage?
The question is quite unclear :S Anyway...
I uploaded the "classes" folder and the .html file to my free server
(host4ufree.com) using FileZilla.
If your applet contains more than one class, I do not recommend uploading the project's "classes" folder itself; instead, wrap your applet classes in a jar file before deploying it.
Report back if that helped.
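As a sketch of that suggestion (file and class names below are placeholders, not from the question): package the build output with `jar cvf MyApplet.jar -C classes .`, upload the jar in binary transfer mode, and reference it from the page via the archive attribute:

```html
<!-- MyApplet.class and MyApplet.jar are placeholder names -->
<applet code="MyApplet.class" archive="MyApplet.jar" width="400" height="300">
  Your browser does not support Java applets.
</applet>
```

Serving a single jar also avoids the per-class corruption that ASCII-mode FTP transfers can introduce, which is one plausible source of the "Extra bytes at the end of class file" error.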
