Xtext ecore file size limited to 50kb?

My Xtext project's ecore file exceeds 50kb.
The workflow generation always works fine, but when I start the editor it crashes.
If I comment out some grammar rules, reducing the ecore file size to less than 50kb, everything works well. But as soon as it exceeds that limit, the following exception arises:
!MESSAGE com.sample.mydsl.ui.internal.MyDslActivator - Failed to create injector for com.sample.mydsl.MyDsl
...
Caused by: java.lang.RuntimeException: Missing serialized package: myDsl.ecore
at com.sample.mydsl.myDsl.impl.MyDslPackageImpl.loadPackage(MyDslPackageImpl.java:5897)
at com.sample.mydsl.myDsl.impl.MyDslPackageImpl.init(MyDslPackageImpl.java:1084)
at com.sample.mydsl.myDsl.MyDslPackage.<clinit>(MyDslPackage.java:58)
I am pretty sure that it's not the rule logic itself, because I also tested reducing the grammar to a working state and then extended it with mock rules just to increase the file size. It crashed anyway...
I suspect the problem lies deeper than the exception message shows.
My workflow is configured as follows:
fragment = parser.antlr.XtextAntlrGeneratorFragment auto-inject {
    options = {
        classSplitting = true
        fieldsPerClass = "500"
        methodsPerClass = "500"
    }
}
The same settings are used for XtextAntlrUiGeneratorFragment, as shown below.
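For completeness, a sketch of the UI fragment carrying the same options (this simply mirrors the block above):

fragment = parser.antlr.XtextAntlrUiGeneratorFragment auto-inject {
    options = {
        classSplitting = true
        fieldsPerClass = "500"
        methodsPerClass = "500"
    }
}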
Has anyone gathered experience with this problem already? I would be very grateful for some suggestions.

Related

Why does java.awt.Font.getStringBounds give different result on different machines?

I have an application that generates PDF reports (using JasperReports). However, when I run my application on my development laptop, the text fields have a slightly different size than when I generate the exact same report on the server. I eventually reduced the issue to the following code:
final Font font = Font.createFont(
        Font.TRUETYPE_FONT,
        MyTest.class.getResourceAsStream("/fonts/europa/Europa-Bold.otf")
).deriveFont(10f);
System.out.println(font);
System.out.println(font.getStringBounds(
        "Text",
        0,
        4,
        new FontRenderContext(null, true, true)
));
On my laptop this prints:
java.awt.Font[family=Europa-Bold,name=Europa-Bold,style=plain,size=10]
java.awt.geom.Rectangle2D$Float[x=0.0,y=-9.90999,w=20.080002,h=12.669988]
On the server this prints:
java.awt.Font[family=Europa-Bold,name=Europa-Bold,style=plain,size=10]
java.awt.geom.Rectangle2D$Float[x=0.0,y=-7.6757812,w=20.06897,h=10.094452]
As you can see, I ship the font file with the application, so I believe there is no chance that the two machines are working with different fonts.
I would have guessed that under these conditions the output of getStringBounds is system-independent. Obviously it is not. What could possibly cause the difference?
Disclaimer: I'm not a font dev expert, just sharing my experience.
Yes, it's partly native. Even the newer JavaFX WebView, for example, depends on WebKit.
If you dive into debugging getStringBounds, you will see that it eventually reaches a point where a concrete font manager has to be loaded; its class name is taken from the system property sun.font.fontmanager.
Source code of sun.font.FontManagerFactory:
...
private static final String DEFAULT_CLASS;
static {
    if (FontUtilities.isWindows) {
        DEFAULT_CLASS = "sun.awt.Win32FontManager";
    } else if (FontUtilities.isMacOSX) {
        DEFAULT_CLASS = "sun.font.CFontManager";
    } else {
        DEFAULT_CLASS = "sun.awt.X11FontManager";
    }
}
...
public static synchronized FontManager getInstance() {
    ...
    String fmClassName = System.getProperty("sun.font.fontmanager", DEFAULT_CLASS);
}
Those DEFAULT_CLASS values already go a long way toward answering your "Obviously it is not. What could possibly cause the difference?" question.
The value of sun.font.fontmanager may be sun.awt.X11FontManager on some *nix systems, but could be null on Windows, for instance, so the manager there will be sun.awt.Win32FontManager.
Each manager may depend on a different underlying shaping/rendering engine or implementation (this may help).
The main reason could be the nature of fonts: they are mostly vector data, so depending on the platform/environment, rendered text can come out bigger or smaller. For example, Windows may apply desktop ClearType and the screen text size (DPI) to the requested text rendering.
It seems that even with exactly the same sun.awt.X11FontManager on two machines, the results will vary (this may help too).
If you try out the sample code on online compilers, you will get varying results for sure; see the diagnostic sketch below.
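A minimal diagnostic sketch, assuming a snippet like this produced the "fnt manager" lines in the results below (the font path is the one from the question):

import java.awt.Font;
import java.awt.font.FontRenderContext;

public class FontManagerProbe {
    public static void main(String[] args) throws Exception {
        // Which concrete font manager will FontManagerFactory pick?
        // (null means the platform default, e.g. sun.awt.Win32FontManager on Windows)
        System.out.println("fnt manager: " + System.getProperty("sun.font.fontmanager"));

        Font font = Font.createFont(
                Font.TRUETYPE_FONT,
                FontManagerProbe.class.getResourceAsStream("/fonts/europa/Europa-Bold.otf")
        ).deriveFont(10f);
        System.out.println(font);
        System.out.println(font.getStringBounds(
                "Text", 0, 4, new FontRenderContext(null, true, true)));
    }
}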
The ideone run (https://ideone.com/AuQvMV) could not even produce a result; stderr has some interesting info:
java.lang.UnsatisfiedLinkError: /opt/jdk/lib/libfontmanager.so: libfreetype.so.6: cannot open shared object file: No such file or directory
at java.base/java.lang.ClassLoader$NativeLibrary.load0(Native Method)
at java.base/java.lang.ClassLoader$NativeLibrary.load(ClassLoader.java:2430)
at java.base/java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2487)
at java.base/java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2684)
at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2638)
at java.base/java.lang.Runtime.loadLibrary0(Runtime.java:827)
at java.base/java.lang.System.loadLibrary(System.java:1902)
at java.desktop/sun.font.FontManagerNativeLibrary$1.run(FontManagerNativeLibrary.java:57)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
at java.desktop/sun.font.FontManagerNativeLibrary.<clinit>(FontManagerNativeLibrary.java:32)
at java.desktop/sun.font.SunFontManager$1.run(SunFontManager.java:270)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
at java.desktop/sun.font.SunFontManager.<clinit>(SunFontManager.java:266)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:415)
at java.desktop/sun.font.FontManagerFactory$1.run(FontManagerFactory.java:82)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:310)
at java.desktop/sun.font.FontManagerFactory.getInstance(FontManagerFactory.java:74)
at java.desktop/java.awt.Font.getFont2D(Font.java:497)
at java.desktop/java.awt.Font.getFamily(Font.java:1410)
at java.desktop/java.awt.Font.getFamily_NoClientCode(Font.java:1384)
at java.desktop/java.awt.Font.getFamily(Font.java:1376)
at java.desktop/java.awt.Font.toString(Font.java:1869)
at java.base/java.lang.String.valueOf(String.java:3042)
at java.base/java.io.PrintStream.println(PrintStream.java:897)
at Ideone.main(Main.java:19)
Note the failed load of libfreetype, which is a native C font rendering library.
Result of coding ground (https://www.tutorialspoint.com/compile_java_online.php)
fnt manager: sun.awt.X11FontManager
java.awt.Font[family=Dialog,name=tahoma,style=plain,size=10]
java.awt.geom.Rectangle2D$Float[x=0.0,y=-9.282227,w=22.09961,h=11.640625]
Result of jdoodle (https://www.jdoodle.com/online-java-compiler/)
fnt manager: sun.awt.X11FontManager
java.awt.Font[family=Dialog,name=tahoma,style=plain,size=10]
java.awt.geom.Rectangle2D$Float[x=0.0,y=-9.839991,w=24.0,h=12.569988]
My machine
fnt manager: null
java.awt.Font[family=Tahoma,name=tahoma,style=plain,size=10]
java.awt.geom.Rectangle2D$Float[x=0.0,y=-10.004883,w=19.399414,h=12.0703125]
My story (it may help; you may want to try it)
I had a similar issue some years ago, where text rendering using exactly the same embedded font failed on macOS/JDK 8 for complex text rendering (lots of ligatures). Not just sizes, but also broken ligatures, kerning, etc...
I was able to fix my issue (I cannot remember if it fixed the sizing, but there were no broken ligatures for sure) using another workaround, as follows:
InputStream is = Main.class.getResourceAsStream(fontFile);
Font newFont = Font.createFont(Font.TRUETYPE_FONT, is);
GraphicsEnvironment.getLocalGraphicsEnvironment().registerFont(newFont);
// later, load the font by constructing a Font instance
Font f = new Font(name /* name of the embedded font */, style, size);
Registering the font via GraphicsEnvironment and then instantiating it with the Font constructor fixed our issue, so you may also give it a try.
Solution
In the end, I stepped away from the JDK font stack for good (it's a real pain in the neck) and moved to a native HarfBuzz (shaping) + FreeType (rendering) implementation, which was indeed peace of mind.
So...
• You may consider your production server (the easy way) as the reference for font advances and rendering, and validate results against it (rather than against your dev machine).
• Or use a cross-platform, standalone (and probably native) shaping/rendering font engine to make sure dev and production results are the same.

Getting OutOfMemoryError with PDFBox Annotation constructAppearances() method

In a Nutshell
I've been working on a program that takes a PDF, highlights some words (via PDFBox markup annotations), and saves the new PDF.
For these annotations to be visible in some viewers like pdf.js, pdAnnotationTextMarkup.constructAppearances() has to be called before adding the markup annotation to the page's annotation list.
However, by doing so, I get an OutOfMemoryError when dealing with huge documents that contain thousands of markup annotations.
I'd like to know if there's a way to prevent this from happening.
(This is a kind of sequel to this ticket, but that's not very relevant here.)
Technical Specification:
PDFBox 2.0.17
Java 11.0.6+10, AdoptOpenJDK
macOS Catalina 10.15.2, 16 GB, x86_64
My Code
// my pdf has 216 pages
for (int pageIndex = 0; pageIndex < numberOfPages; pageIndex++) {
    PDPage page = document.getPage(pageIndex);
    List<PDAnnotation> annotations = page.getAnnotations();
    // each coordinate obj represents a highlight annotation; crashing with 7,816 elements
    for (CoordinatePoint coordinate : coordinates) {
        PDAnnotationTextMarkup txtMark = new PDAnnotationTextMarkup(PDAnnotationTextMarkup.SUB_TYPE_HIGHLIGHT);
        txtMark.setRectangle(pdRectangle);
        txtMark.setQuadPoints(quadPoints);
        txtMark.setColor(getColor());
        txtMark.setTitlePopup(coordinate.getHintDescription());
        txtMark.setReadOnly(true);
        // this is what makes everything visible on pdf.js and what causes the Java heap space error
        txtMark.constructAppearances();
        annotations.add(txtMark);
    }
}
Current Result
This is the heavy PDF document that is causing the issue:
https://pdfhost.io/v/I~nu~.6G_French_Intensive_Care_Society_International_congress_Ranimation_2016.pdf
My program tries to add 7,816 annotations to it across 216 pages.
And here is the stack trace:
[main] INFO highlight.PDFAnnotation - Highlighting 13613_2016_Article_114.pdf...
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.apache.pdfbox.io.ScratchFile.<init>(ScratchFile.java:128)
at org.apache.pdfbox.io.ScratchFile.getMainMemoryOnlyInstance(ScratchFile.java:143)
at org.apache.pdfbox.cos.COSStream.<init>(COSStream.java:61)
at org.apache.pdfbox.pdmodel.interactive.annotation.handlers.PDAbstractAppearanceHandler.createCOSStream(PDAbstractAppearanceHandler.java:106)
at org.apache.pdfbox.pdmodel.interactive.annotation.handlers.PDHighlightAppearanceHandler.generateNormalAppearance(PDHighlightAppearanceHandler.java:136)
at org.apache.pdfbox.pdmodel.interactive.annotation.handlers.PDHighlightAppearanceHandler.generateAppearanceStreams(PDHighlightAppearanceHandler.java:59)
at org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationTextMarkup.constructAppearances(PDAnnotationTextMarkup.java:175)
at org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationTextMarkup.constructAppearances(PDAnnotationTextMarkup.java:147)
at highlight.PDFAnnotation.drawHLAnnotations(PDFAnnotation.java:288)
I've already tried increasing my JVM -Xmx and -Xms parameters to values like -Xmx10g -Xms10g, which only postponed the crash a little.
What I Want
I want to prevent this memory issue and still be able to see my annotations in the pdf.js viewer. Without calling constructAppearances the process is much faster and I don't have this issue, but the annotations can only be seen in some PDF viewers, like Adobe's.
Any suggestions? Am I doing anything wrong here or missing something?
In the upcoming version 2.0.19, construct the appearances like this:
annotation.constructAppearances(document);
In 2.0.18 and earlier, you need to initialize the appearance handler yourself:
setCustomAppearanceHandler(new PDHighlightAppearanceHandler(annotation, document));
That line can be removed in 2.0.19, as this is the default appearance handler.
Why all this? So that the document's common memory space ("scratch file") is used in the annotation handler instead of creating a new (large) one each time. The latter happens when new COSStream() is called instead of document.getDocument().createCOSStream().
All this is of course only important when creating many annotations.
Related PDFBox issues: PDFBOX-4772 and PDFBOX-4080
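Applied to the loop from the question, a minimal sketch of the 2.0.19 variant (pdRectangle, quadPoints, coordinates, and getColor() are as in the question):

for (int pageIndex = 0; pageIndex < numberOfPages; pageIndex++) {
    PDPage page = document.getPage(pageIndex);
    List<PDAnnotation> annotations = page.getAnnotations();
    for (CoordinatePoint coordinate : coordinates) {
        PDAnnotationTextMarkup txtMark = new PDAnnotationTextMarkup(PDAnnotationTextMarkup.SUB_TYPE_HIGHLIGHT);
        txtMark.setRectangle(pdRectangle);
        txtMark.setQuadPoints(quadPoints);
        txtMark.setColor(getColor());
        txtMark.setTitlePopup(coordinate.getHintDescription());
        txtMark.setReadOnly(true);
        // passing the document makes the handler reuse the document's
        // scratch file instead of allocating a new in-memory one per annotation
        txtMark.constructAppearances(document);
        annotations.add(txtMark);
    }
}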

How to stop execution of the staging process in KIT DataManager?

I would like to know how to stop a Processor in the Staging phase (e.g. this processor, in the member performPreTransferProcessing).
Repo on GitHub: https://github.com/kit-data-manager/base
I am not sure how to find out what should be changed so that processing continues with the next Processor (if any).
I imagine something like Context.status = CONSTANTS.stopped, but I cannot find such a context.
The question was answered by the main contributor of the project … I will still give a somewhat more extensive answer here.
// signature in AbstractStagingProcessor
performPreTransferProcessing(TransferTaskContainer pContainer) throws StagingProcessorException;
The exception StagingProcessorException is located in the package
import edu.kit.dama.staging.exceptions.StagingProcessorException;
and can be raised e.g. simply with a message:
throw new StagingProcessorException("this ends the processor and renders it unsuccessful");
Of course, this should only be raised conditionally, e.g. from one of the two pre-processing members, as in the sketch below.
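A minimal sketch of such a conditional abort (the shouldAbort check is a hypothetical placeholder; other members required by the base class are omitted):

import edu.kit.dama.staging.exceptions.StagingProcessorException;

public class MyStagingProcessor extends AbstractStagingProcessor {
    @Override
    public void performPreTransferProcessing(TransferTaskContainer pContainer)
            throws StagingProcessorException {
        // hypothetical condition; throwing ends this processor and marks it unsuccessful
        if (shouldAbort(pContainer)) {
            throw new StagingProcessorException("aborting staging for this transfer");
        }
    }
}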
But:
• the data is still uploaded first, before the Processor runs
• there is no cleanup wiping it from the cache

What to do for a memory-related exception while working with NLP Stanford?

I am trying to run the class Word2VecSentimentRNN from the following link:
https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/word2vecsentiment/Word2VecSentimentRNN.java
The example is a big one, hence the link rather than the full code.
I have also downloaded the sample vector file from the following link:
https://github.com/mmihaltz/word2vec-GoogleNews-vectors
I am getting the following error:
Exception in thread "main" java.lang.OutOfMemoryError: Cannot allocate 3103474 + 3600000000 bytes (> Pointer.maxBytes)
at org.bytedeco.javacpp.Pointer.deallocator(Pointer.java:484)
at org.bytedeco.javacpp.Pointer.init(Pointer.java:118)
at org.bytedeco.javacpp.FloatPointer.allocateArray(Native Method)
at org.bytedeco.javacpp.FloatPointer.<init>(FloatPointer.java:68)
at org.nd4j.linalg.api.buffer.BaseDataBuffer.<init>(BaseDataBuffer.java:457)
at org.nd4j.linalg.api.buffer.FloatBuffer.<init>(FloatBuffer.java:57)
at org.nd4j.linalg.api.buffer.factory.DefaultDataBufferFactory.createFloat(DefaultDataBufferFactory.java:238)
at org.nd4j.linalg.factory.Nd4j.createBuffer(Nd4j.java:1201)
at org.nd4j.linalg.factory.Nd4j.createBuffer(Nd4j.java:1176)
at org.nd4j.linalg.api.ndarray.BaseNDArray.<init>(BaseNDArray.java:230)
at org.nd4j.linalg.cpu.nativecpu.NDArray.<init>(NDArray.java:111)
at org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory.create(CpuNDArrayFactory.java:247)
at org.nd4j.linalg.factory.Nd4j.create(Nd4j.java:4261)
at org.nd4j.linalg.factory.Nd4j.create(Nd4j.java:4227)
at org.nd4j.linalg.factory.Nd4j.create(Nd4j.java:3501)
at org.deeplearning4j.models.embeddings.loader.WordVectorSerializer.readBinaryModel(WordVectorSerializer.java:219)
at org.deeplearning4j.models.embeddings.loader.WordVectorSerializer.loadGoogleModel(WordVectorSerializer.java:118)
at com.nyu.sentimentanalysis.core.Word2VecSentimentRNN.run(Word2VecSentimentRNN.java:77)
I have tried to launch the application with the parameters -Xmx2g and -Xms2g, and changed the values from time to time to check whether it helps. It does not.
Kindly let me know what I should do; I am stuck here.
I had this problem running standard Word2vec code; the system died after a while with an OutOfMemoryError.
The following settings worked for me for sustaining long-term production load in a DL4J/ND4J-based app using a pre-trained Word2vec model (a programmatic variant is sketched below):
java -Xmx2G -Dorg.bytedeco.javacpp.maxbytes=6G -Dorg.bytedeco.javacpp.maxphysicalbytes=6G
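The javacpp limits govern ND4J's off-heap memory, which is why raising -Xmx alone does not help here. If changing the launch command is not an option, the same properties can presumably be set programmatically, as long as this happens before any ND4J/JavaCPP class is loaded (the wrapper class is an assumption for illustration):

public class LaunchWithOffHeapLimits {
    public static void main(String[] args) throws Exception {
        // JavaCPP reads these properties when its Pointer class initializes,
        // so set them before touching any ND4J/DL4J class
        System.setProperty("org.bytedeco.javacpp.maxbytes", "6G");
        System.setProperty("org.bytedeco.javacpp.maxphysicalbytes", "6G");

        // delegate to the example class from the question
        org.deeplearning4j.examples.recurrent.word2vecsentiment.Word2VecSentimentRNN.main(args);
    }
}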

XML-22900: (Fatal Error) An internal error condition occurred

I am facing a weird issue. I was working on a project and had written some code to generate an XML document using an XML parser. The thing is, when I run the code on my local system, it runs fine; but when I deploy the code to the environment, it doesn't. I suspect some sort of JAR issue, but I can't quite place it.
XML-22900: (Fatal Error) An internal error condition occurred.
Caused by: java.lang.NullPointerException
at oracle.xml.xslt.XSLEventHandler.characters(XSLEventHandler.java:866)
at oracle.xml.xslt.XSLTContext.reportNode(XSLTContext.java:426)
at oracle.xml.xslt.XSLTContext.reportNode(XSLTContext.java:390)
at oracle.xml.xslt.XSLTContext.reportNode(XSLTContext.java:390)
at oracle.xml.xslt.XSLTContext.reportNode(XSLTContext.java:1340)
at oracle.xml.xslt.XSLCopyOf.processAction(XSLCopyOf.java:136)
at oracle.xml.xslt.XSLNode.processChildren(XSLNode.java:480)
at oracle.xml.xslt.XSLTemplate.processAction(XSLTemplate.java:205)
at oracle.xml.xslt.XSLStylesheet.execute(XSLStylesheet.java:581)
at oracle.xml.xslt.XSLStylesheet.execute(XSLStylesheet.java:548)
at oracle.xml.xslt.XSLProcessor.processXSL(XSLProcessor.java:339)
at oracle.xml.jaxp.JXTransformer.transform(JXTransformer.java:454)
... 3 more
The input is the same and the code is the same; I am not sure what else I can provide. If you need more info, let me know.
I had the same error. It seems to be related to the transformer implementation being used. Try the Xalan factory:
TransformerFactory factory = new org.apache.xalan.processor.TransformerFactoryImpl();
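For context, a minimal usage sketch with that factory (the file names are placeholders):

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XalanTransform {
    public static void main(String[] args) throws Exception {
        // explicitly pick Xalan instead of whatever implementation JAXP discovers on the server
        TransformerFactory factory = new org.apache.xalan.processor.TransformerFactoryImpl();
        Transformer transformer = factory.newTransformer(new StreamSource(new File("stylesheet.xsl")));
        transformer.transform(new StreamSource(new File("input.xml")),
                new StreamResult(new File("output.xml")));
    }
}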
