We're running a Spring application inside a Docker container. The application takes SVG files and transcodes them to PDF so they can be embedded in a larger PDF.
The application works correctly on OS X and transcodes as expected. However, when run inside the Docker container, which has a different file system, the transcoder gets stuck and thrashes the CPU in some bizarre recursive file-searching loop.
java.lang.Thread.State: RUNNABLE
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:242)
at java.io.File.isFile(File.java:882)
at org.apache.commons.io.filefilter.FileFileFilter.accept(FileFileFilter.java:59)
at org.apache.commons.io.filefilter.AndFileFilter.accept(AndFileFilter.java:122)
at org.apache.commons.io.filefilter.AndFileFilter.accept(AndFileFilter.java:122)
at org.apache.commons.io.filefilter.OrFileFilter.accept(OrFileFilter.java:118)
at java.io.File.listFiles(File.java:1291)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:357)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
at org.apache.commons.io.DirectoryWalker.walk(DirectoryWalker.java:364)
Above is the stack trace of a thread that ran the PDFTranscoder. walk is called recursively for a while, and then eventually getBooleanAttributes0 is called and everything blocks.
After some further research, we took a closer look at what was happening with the strace command and saw that the system is essentially spamming the following in an endless loop.
stat("/./sys/devices/pci0000:00/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/PNP0103:00/subsystem/devices/pcspkr/input/input1/subsystem/input0/subsystem/input0/uniq", {st_mode=S_IFREG|0444, st_size=4096, ...}) = 0 <0.000224>
We seem to be getting blocked or hanging in the stat call, but we've delved so deep into system calls now that it's proving hard to debug. Does anyone have any ideas?
I was getting the same error. After trying many things to fix it, I came to the conclusion that the issue is that fonts are available on Mac OS X, while the (headless) Docker container OS has no fonts, and the transcoder doesn't fail gracefully while searching for fonts all over the place. I solved it by forcing the transcoder to use the default fonts (and not to automatically look for other fonts) like this:
...
PDFTranscoder transcoder = new PDFTranscoder();
// Skip the automatic filesystem font scan; only the built-in fonts are used
transcoder.addTranscodingHint(PDFTranscoder.KEY_AUTO_FONTS, false);
...
transcoder.transcode(transcoderInput, transcoderOutput);
...
Note this has the downside, of course, that it falls back to one of its 14 known fonts whenever it encounters a font outside that set. I tried a few things to fix that, but so far no luck.
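For completeness, here's roughly what the full flow looks like with that hint set - a sketch, where the file names and stream handling are mine, not from the original code:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.batik.transcoder.TranscoderInput;
import org.apache.batik.transcoder.TranscoderOutput;
import org.apache.fop.svg.PDFTranscoder;

public class SvgToPdf {
    public static void main(String[] args) throws Exception {
        PDFTranscoder transcoder = new PDFTranscoder();
        // Skip the filesystem font scan entirely
        transcoder.addTranscodingHint(PDFTranscoder.KEY_AUTO_FONTS, false);
        try (FileInputStream in = new FileInputStream("input.svg");
             FileOutputStream out = new FileOutputStream("output.pdf")) {
            transcoder.transcode(new TranscoderInput(in), new TranscoderOutput(out));
        }
    }
}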
I hope this helps someone.
I had the same issue and solved it in my case. This thread helped a lot. Now I'd like to put all the parts together - maybe also for other people who come across this.
The reason for this is the directory in which you start your Java application. I recognized that this problem occurs under the following circumstances:
The Java application was started in the filesystem root.
Auto scanning for fonts is enabled in Apache FOP.
I found a similar post in Infinite scan for fonts in Apache FOP on CentOS. The explanation by Fyodor Sherstobitov sounds plausible.
Apache FOP uses the working directory of your Java application to scan for fonts. In this case that is the filesystem root, so the whole filesystem will be scanned.
The following code is copied from PDFDocumentGraphics2DConfigurator. It shows that new File(".").getAbsoluteFile().toURI() is used - which is the working directory, i.e. the directory in which the Java application was started.
/**
 * Creates the {@link FontInfo} instance for the given configuration.
 * @param cfg the configuration
 * @param useComplexScriptFeatures true if complex script features enabled
 * @return the font collection
 * @throws FOPException if an error occurs while setting up the fonts
 */
public static FontInfo createFontInfo(Configuration cfg, boolean useComplexScriptFeatures)
        throws FOPException {
    FontInfo fontInfo = new FontInfo();
    final boolean strict = false;
    if (cfg != null) {
        URI thisUri = new File(".").getAbsoluteFile().toURI();
        InternalResourceResolver resourceResolver
                = ResourceResolverFactory.createDefaultInternalResourceResolver(thisUri);
        //TODO The following could be optimized by retaining the FontManager somewhere
        FontManager fontManager = new FontManager(resourceResolver, FontDetectorFactory.createDefault(),
                FontCacheManagerFactory.createDefault());
        //TODO Make use of fontBaseURL, font substitution and referencing configuration
        //Requires a change to the expected configuration layout
        DefaultFontConfig.DefaultFontConfigParser parser
                = new DefaultFontConfig.DefaultFontConfigParser();
        DefaultFontConfig fontInfoConfig = parser.parse(cfg, strict);
        DefaultFontConfigurator fontInfoConfigurator
                = new DefaultFontConfigurator(fontManager, null, strict);
        List<EmbedFontInfo> fontInfoList = fontInfoConfigurator.configure(fontInfoConfig);
        fontManager.saveCache();
        FontSetup.setup(fontInfo, fontInfoList, resourceResolver, useComplexScriptFeatures);
    } else {
        FontSetup.setup(fontInfo, useComplexScriptFeatures);
    }
    return fontInfo;
}
You can solve this in two ways:
Disable auto scanning for fonts in Apache FOP, as Bob Schultz mentioned. If you do that, you will have to configure the fonts for Apache FOP manually.
Don't start the Java application in the filesystem root, as snyman mentioned. In this case you can keep using the auto scanning for fonts.
Disable Auto Scanning
This is a snippet of the code that configures Apache FOP with a config file. If you don't enable the auto scan in that file, you don't have to disable it programmatically.
// Load configuration for manually configuring fonts
DefaultConfigurationBuilder cfgBuilder = new DefaultConfigurationBuilder();
Configuration cfg = cfgBuilder.build(ResourceUtil.getResourceStream("path/to/config"));
PDFTranscoder transcoder = new PDFTranscoder();
transcoder.configure(cfg);
// Disable auto scanning for fonts programmatically - not necessary if you
// don't enable auto scan in your config file
// transcoder.addTranscodingHint(PDFTranscoder.KEY_AUTO_FONTS, false);
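For the manual font configuration itself, the file follows the standard FOP font configuration format. A rough sketch of what it might look like - the font path and triplet values here are placeholders, not from the original post:
<cfg>
    <fonts>
        <!-- Register one embedded font explicitly instead of auto-detecting -->
        <font embed-url="file:///app/fonts/DejaVuSans.ttf">
            <font-triplet name="DejaVu Sans" style="normal" weight="normal"/>
        </font>
        <!-- Note: adding <auto-detect/> here would re-enable the scan -->
    </fonts>
</cfg>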
Start the Application in a Separate Folder
By specifying WORKDIR, everything happens in that folder. The auto scan runs there and finishes quickly.
FROM openjdk:8-jre-alpine
WORKDIR /app
ARG JAR_FILE=target/myapp-0.0.1-SNAPSHOT.jar
COPY ${JAR_FILE} app.jar
...
ENTRYPOINT ["java","-jar","app.jar"]
I had the same issue.
Solved it by setting the WORKDIR instruction in the Dockerfile.
I set it to my deployment dir, where I copy the Spring jar file, i.e.:
WORKDIR ${DEPLOYMENT_DIR}
I'm also using the latest Batik libraries in the pom:
<dependency>
    <groupId>org.apache.xmlgraphics</groupId>
    <artifactId>batik-all</artifactId>
    <version>1.9.1</version>
</dependency>
<dependency>
    <groupId>org.apache.xmlgraphics</groupId>
    <artifactId>fop</artifactId>
    <version>2.2</version>
</dependency>
I had the same problem on my project. I solved it by downgrading Batik to version 1.7.
I hope this will work for you.
Try adding the parameter '-Duser.dir=/%CATALINA_HOME/' to your CATALINA_OPTS. I encountered the same issue on my CentOS server.
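The parameter above mixes Windows %VAR% and Unix path syntax; on a Linux Tomcat the equivalent line in bin/setenv.sh would presumably be:
export CATALINA_OPTS="$CATALINA_OPTS -Duser.dir=$CATALINA_HOME"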
Related
We have recently upgraded DataStage from 9.1 to 11.7 on an AIX 7.1 server.
I'm trying to use the new "File Connector" stage to write to a Parquet file. I created a simple job that takes Teradata as a source and writes to a Parquet file as a target.
Image of the job
But I'm facing the error below:
> File_Connector_20,0: java.lang.NoClassDefFoundError: org.apache.hadoop.fs.FileSystem
at java.lang.J9VMInternals.prepareClassImpl (J9VMInternals.java)
at java.lang.J9VMInternals.prepare (J9VMInternals.java: 304)
at java.lang.Class.getConstructor (Class.java: 594)
at com.ibm.iis.jis.utilities.dochandler.impl.OutputBuilder.<init> (OutputBuilder.java: 80)
at com.ibm.iis.jis.utilities.dochandler.impl.Registrar.getBuilder (Registrar.java: 340)
at com.ibm.iis.jis.utilities.dochandler.impl.Registrar.getBuilder (Registrar.java: 302)
at com.ibm.iis.cc.filesystem.FileSystem.getBuilder (FileSystem.java: 2586)
at com.ibm.iis.cc.filesystem.FileSystem.writeFile (FileSystem.java: 1063)
at com.ibm.iis.cc.filesystem.FileSystem.process (FileSystem.java: 935)
at com.ibm.is.cc.javastage.connector.CC_JavaAdapter.run (CC_JavaAdapter.java: 444)
I followed the steps in the link below:
https://www.ibm.com/support/knowledgecenter/SSZJPZ_11.7.0/com.ibm.swg.im.iis.conn.s3.usage.doc/topics/amaze_file_formats.html
1- I uploaded the jar files into "/ds9/IBM/InformationServer/Server/DSComponents/jars"
2- I added them to the CLASSPATH in agent.sh, then restarted DataStage.
3- I set the environment variable CC_USE_LATEST_FILECC_JARS to the value parquet-1.9.0.jar:orc-2.1.jar.
I also tried adding the CLASSPATH as an environment variable in the job, but that didn't work.
Note that I'm using Local as the File System mode.
Any hint is appreciated, as I've been searching for a long time.
Thanks in advance,
Which File System mode are you using? If you are using Native HDFS as the File System mode, then you would need to configure the CLASSPATH to include some third-party jars.
Perhaps these links will provide you with some guidance.
https://www.ibm.com/support/pages/node/301847
https://www.ibm.com/support/pages/steps-required-configure-file-connector-use-parquet-or-orc-file-format
Note: depending on the Hadoop distribution and version you are using, the versions of the jars could differ.
If the above information does not help resolve the issue, you may have to reach out to IBM Support to get this addressed.
To use the File Connector, there is no need to add the CLASSPATH in agent.sh unless you want to import HDFS files from IMAM.
If your requirement is reading Parquet files, then set
$CC_USE_LATEST_FILECC_JARS=parquet-1.9.0.jar
$FILECC_PARQUET_AVRO_COMPAT_MODE=TRUE
If you are still seeing the issue, run the job with $CC_MSG_LEVEL=2 and open an IBM support case along with the job design, the full job log, and the Version.xml file from the engine tier.
My laptop was just upgraded from Windows 7 to Windows 10, and a piece of code stopped working. A large application using Velocity templates, which used to work fine on Windows 7, now cannot find the template files.
The templates are kept at the path WebContent\WEB-INF\config\templates under the project directory. An EngineInitializer class is used to load them. The code for the class is as follows:
private static Logger logger = Logger.getLogger(EngineInitializer.class);
private static String RELATIVE_PATH_FOR_TEMPLATES = "/WEB-INF/config/templates";

public void initializeEngine() throws Exception {
    if (logger.isDebugEnabled())
        logger.debug("About to initialize the Velocity Engine");
    Properties p = new Properties();
    // this goes to the webapps directory
    String absolutePath = new File(Thread.currentThread().getContextClassLoader()
            .getResource("").getFile()).getParentFile().getParentFile().getPath();
    // configure the velocity logger to use the default logging
    p.put(RuntimeConstants.RUNTIME_LOG_LOGSYSTEM_CLASS, "org.apache.velocity.runtime.log.Log4JLogChute");
    p.put("runtime.log.logsystem.log4j.logger", "defaultLog");
    p.put("file.resource.loader.path", absolutePath + RELATIVE_PATH_FOR_TEMPLATES);
    p.put("file.resource.loader.cache", "true");
    p.put("file.resource.loader.modificationCheckInterval", "-1");
    p.put("parser.pool.size", "30");
    Velocity.init(p);
    if (logger.isInfoEnabled())
        logger.info("The velocity engine is now initialized..");
}
The following lines in the applicationBeans.xml file initialize the engine:
<!-- initialize the velocity engine before the listener thread starts -->
<bean id="engineInitializer" class="com.file.myprogram.template.processor.EngineInitializer"
init-method="initializeEngine" />
At startup, the debug log line is printed. Inside the individual classes, the templates are loaded using the Velocity.getTemplate() method. This now throws org.apache.velocity.exception.ResourceNotFoundException: Unable to find resource 'MediationZone.vm'. Nothing other than the underlying OS has been changed. This code runs fine on an RHEL server as a web app. The code was checked out from Subversion and run on the Windows 10 laptop using Eclipse 4.9.
What is going wrong here?
I had the same issue and nothing worked (file:/ prefix variants, C: or c:).
Finally I tried it with a relative path (like the one used in the production environment) and voilà, it worked.
I figured out the current run dir using System.getProperty("user.dir"):
vel.getTemplate( "foo/bar/some.vm", UTF_8.name() )
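Putting that together, a minimal sketch of the relative-path setup - the loader path and template name are illustrative, and the VelocityEngine wiring is my assumption, not from the original answer:
import java.util.Properties;
import org.apache.velocity.Template;
import org.apache.velocity.app.VelocityEngine;

public class RelativeTemplateDemo {
    public static void main(String[] args) {
        // Check where the JVM was actually started from
        System.out.println("user.dir = " + System.getProperty("user.dir"));
        Properties p = new Properties();
        // Resolve templates relative to user.dir instead of an absolute path
        p.put("file.resource.loader.path", ".");
        VelocityEngine engine = new VelocityEngine(p);
        Template t = engine.getTemplate("foo/bar/some.vm", "UTF-8");
        System.out.println("Loaded template: " + t.getName());
    }
}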
I want to write logging messages to a defined file in Tomcat's log folder, using Eclipse, Maven, and tinylog.
Problem: there is no webapp.log as soon as I run the app in Tomcat.
In Eclipse everything works fine.
What I did:
added the Maven dependency tinylog-1.2.jar
set a configuration parameter in the Run Configuration (Main tab) so the tinylog properties file can be found:
name: -Dtinylog.configuration
value: C:\Program Files\Tomcat\apache-tomcat-9.0.0.M13\webapps\folder\subfolder\tinylog.properties
in Java-Class:
import org.pmw.tinylog.Logger;
...
Logger.info(message);
tinylog.properties looks like:
tinylog.writer = file
tinylog.writer.filename = webapp.log
tinylog.writer.buffered = true
tinylog.writer.append = true
tinylog.level = info
I also tried different file references, but none of them worked:
tinylog.writer.file = C:\Program Files\Tomcat\apache-tomcat-9.0.0.M13\logs\webapp.log
tinylog.writer.file= "C:\Program Files\Tomcat\apache-tomcat-9.0.0.M13\logs\webapp.log"
Does anybody know how to write the logs to the named path and file?
Thanks for any valuable hint.
I propose using the tinylog-jul artifact instead of the usual tinylog artifact. tinylog-jul provides the tinylog API but uses the Tomcat logging back end, so you don't need to configure tinylog. All log entries will automatically be output as you are used to with other logging APIs on Tomcat.
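Alternatively, if you stay with the plain tinylog artifact, the writer can be configured programmatically at startup instead of via the properties file - a sketch using the tinylog 1.x Configurator API, where the catalina.base lookup and the init-method placement are my assumptions:
import org.pmw.tinylog.Configurator;
import org.pmw.tinylog.Level;
import org.pmw.tinylog.Logger;
import org.pmw.tinylog.writers.FileWriter;

public class LogSetup {
    public static void init() {
        // Resolve Tomcat's log folder from the standard catalina.base property
        String logFile = System.getProperty("catalina.base") + "/logs/webapp.log";
        Configurator.defaultConfig()
                .writer(new FileWriter(logFile)) // illustrative target path
                .level(Level.INFO)
                .activate();
        Logger.info("tinylog initialized, writing to " + logFile);
    }
}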
When I call PDField.setValue to set the value for a form field, I get the following stacktrace:
FileSystemFontProvider.saveDiskCache(349) | Could not write to font cache
java.io.FileNotFoundException: /.pdfbox.cache (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
at java.io.FileWriter.<init>(FileWriter.java:73)
at org.apache.pdfbox.pdmodel.font.FileSystemFontProvider.saveDiskCache(FileSystemFontProvider.java:290)
at org.apache.pdfbox.pdmodel.font.FileSystemFontProvider.<init>(FileSystemFontProvider.java:226)
at org.apache.pdfbox.pdmodel.font.FontMapperImpl$DefaultFontProvider.<clinit>(FontMapperImpl.java:130)
at org.apache.pdfbox.pdmodel.font.FontMapperImpl.getProvider(FontMapperImpl.java:149)
at org.apache.pdfbox.pdmodel.font.FontMapperImpl.findFont(FontMapperImpl.java:413)
at org.apache.pdfbox.pdmodel.font.FontMapperImpl.findFontBoxFont(FontMapperImpl.java:376)
at org.apache.pdfbox.pdmodel.font.FontMapperImpl.getFontBoxFont(FontMapperImpl.java:350)
at org.apache.pdfbox.pdmodel.font.PDType1Font.<init>(PDType1Font.java:145)
at org.apache.pdfbox.pdmodel.font.PDType1Font.<clinit>(PDType1Font.java:79)
at org.apache.pdfbox.pdmodel.font.PDFontFactory.createFont(PDFontFactory.java:62)
at org.apache.pdfbox.pdmodel.PDResources.getFont(PDResources.java:143)
at org.apache.pdfbox.pdmodel.interactive.form.PDDefaultAppearanceString.processSetFont(PDDefaultAppearanceString.java:164)
at org.apache.pdfbox.pdmodel.interactive.form.PDDefaultAppearanceString.processOperator(PDDefaultAppearanceString.java:131)
at org.apache.pdfbox.pdmodel.interactive.form.PDDefaultAppearanceString.processAppearanceStringOperators(PDDefaultAppearanceString.java:107)
at org.apache.pdfbox.pdmodel.interactive.form.PDDefaultAppearanceString.<init>(PDDefaultAppearanceString.java:85)
at org.apache.pdfbox.pdmodel.interactive.form.PDVariableText.getDefaultAppearanceString(PDVariableText.java:93)
at org.apache.pdfbox.pdmodel.interactive.form.AppearanceGeneratorHelper.<init>(AppearanceGeneratorHelper.java:94)
at org.apache.pdfbox.pdmodel.interactive.form.PDTextField.constructAppearances(PDTextField.java:262)
at org.apache.pdfbox.pdmodel.interactive.form.PDTerminalField.applyChange(PDTerminalField.java:228)
at org.apache.pdfbox.pdmodel.interactive.form.PDTextField.setValue(PDTextField.java:218)
I am running PDFBox 2.0.4, which is the newest version. My webserver most likely does not have write access to .pdfbox.cache in the default location (which seems to be the JVM property user.home). Is there any way to disable the disk caching or change the location of the cache file?
I did notice that I can set a JVM-wide system property called pdfbox.fontcache, but my webapp shares a JVM with other applications, so this isn't an optimal solution. I also tried that approach and set pdfbox.fontcache to /tmp, but it didn't actually create a file (although it now only throws the stacktrace once per boot).
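For reference, one way to set that property early - a sketch assuming a servlet environment; the listener placement is my assumption:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class FontCacheInitializer implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Must run before the first PDFBox font lookup: the cache location is
        // read when the font provider initializes (see the
        // FontMapperImpl$DefaultFontProvider.<clinit> frame in the trace above).
        // Assumes /tmp is writable by the JVM user.
        System.setProperty("pdfbox.fontcache", "/tmp");
    }
    @Override
    public void contextDestroyed(ServletContextEvent sce) { }
}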
I looked into the code in FileSystemFontProvider, and the problematic code seems to be in the saveDiskCache method. In that method it first tries to open the file for writing, but a FileNotFoundException, which inherits from IOException, is thrown instead of the SecurityException that the code catches:
File file = getDiskCacheFile();
try
{
    writer = new BufferedWriter(new FileWriter(file));
}
catch (SecurityException e)
{
    // never reached when the problem is permissions: FileWriter throws
    // FileNotFoundException (an IOException), not SecurityException
    return;
}
When you set pdfbox.fontcache to a temporary folder like /tmp that your JVM can write new files into, a cache file called .pdfbox.cache is created when you generate a PDF with PDFBox (I also use PDFBox 2.0.4).
Maybe your JVM cannot create a new file inside your /tmp directory? To check this, try to create a new file from an interactive shell as the user running your JVM.
With the command ls -lA /tmp you should see a .pdfbox.cache file in the temporary folder that you configured (example with a Tomcat JVM and user):
-rw-r--r-- 1 tomcat tomcat 2050 Dec 29 16:13 .pdfbox.cache
It's not an optimal solution, because you can't set multiple pdfbox.fontcache system properties on a single JVM.
I would like to set up performance monitoring with New Relic on an application running on Railo 4. I have consulted the Java docs on Railo, the Railo Google Groups, etc., but no one seems to have a complete step-by-step guide.
Here is what I have done so far:
Extracted newrelic into Railo's install folder.
Added this line to setenv.sh
export JAVA_OPTS="$JAVA_OPTS -javaagent:c/railo/newrelic/newrelic.jar"
Restarted the Railo-Tomcat service
Added this line to the onApplicationStart function
application.NewRelic = createObject( "java", "com.newrelic.api.agent.NewRelic" );
Added this line to the onRequestStart function
if ( structKeyExists( application, "NewRelic" ) ) {
    application.NewRelic.setTransactionName( "CFML", CGI.SCRIPT_NAME );
}
My application is still not sending metrics to New Relic. I would appreciate step-by-step instructions, as I can't seem to find them anywhere else and I have no idea what to do.
You can't use setenv.sh on Windows. Instead, modify the catalina.bat file, or use the Configure Tomcat utility in the Start Menu to set the javaagent option. These steps can be found in more detail in the New Relic documentation.
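For reference, the catalina.bat route would look something like this - the jar path below just mirrors the install folder from the question and is an assumption:
set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:C:\railo\newrelic\newrelic.jar"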
We have more detailed instructions for installing on our supported platforms and frameworks in our documentation. To see the list of frameworks we are compatible with, check out https://docs.newrelic.com/docs/java/new-relic-for-java#h2-compatibility
We may be able to work with the Tomcat portion of your environment, and you can find helpful installation information at https://docs.newrelic.com/docs/java/java-agent-manual-installation
Should you run into any obstacles, I suggest opening a ticket at http://support.newrelic.com