I am using StAX to create a fairly large XML document. Until now I was using the IndentingXMLStreamWriter class to get a well-formatted document (see also this answer). A few days ago we set up a Jenkins server with an older JDK version (6.26), on which I get build errors:
package com.sun.xml.internal.txw2.output does not exist
I assume the package cannot be found because of the installed JDK version, which for various reasons cannot be changed.
(By the way, does anyone know in which JDK version the com.sun.xml.internal.txw2.output package was added?)
Therefore I am looking for an alternative way to do the indenting. I would prefer a solution similar to the one I was using, i.e. one that does not reparse the document. Any ideas or suggestions?
Thanks
Lars
Instead of com.sun.xml.internal.txw2.output.IndentingXMLStreamWriter, use com.sun.xml.txw2.output.IndentingXMLStreamWriter, which can be found in:
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>txw2</artifactId>
    <version>2.2.11</version>
</dependency>
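For completeness, a minimal usage sketch (the element names are arbitrary; the class simply wraps any StAX XMLStreamWriter):

import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

import com.sun.xml.txw2.output.IndentingXMLStreamWriter;

public class IndentingExample {

    public static void main(String[] args) throws Exception {
        // Wrap a plain StAX writer; everything written through the
        // wrapper comes out indented.
        XMLStreamWriter writer = new IndentingXMLStreamWriter(
                XMLOutputFactory.newInstance().createXMLStreamWriter(System.out));
        writer.writeStartDocument();
        writer.writeStartElement("root");
        writer.writeStartElement("child");
        writer.writeEndElement();
        writer.writeEndElement();
        writer.writeEndDocument();
        writer.close();
    }
}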
Just expanding on Michael Kay's answer (https://stackoverflow.com/a/10108591/2722227).
Maven dependency:
<dependency>
    <groupId>net.sf.saxon</groupId>
    <artifactId>Saxon-HE</artifactId>
    <version>9.6.0-5</version>
</dependency>
Java code:
import java.io.OutputStream;

import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

import net.sf.saxon.Configuration;
import net.sf.saxon.s9api.Processor;
import net.sf.saxon.s9api.SaxonApiException;
import net.sf.saxon.s9api.Serializer;
import net.sf.saxon.s9api.Serializer.Property;

public class Main {

    public static void main(String[] args) throws Exception {
        OutputStream outputStream = System.out;
        writeXmlDocument(outputStream);
    }

    private static void writeXmlDocument(OutputStream outputStream) {
        Configuration config = new Configuration();
        Processor processor = new Processor(config);

        Serializer serializer = processor.newSerializer();
        serializer.setOutputProperty(Property.METHOD, "xml");
        serializer.setOutputProperty(Property.INDENT, "yes");
        serializer.setOutputStream(outputStream);

        try {
            XMLStreamWriter writer = serializer.getXMLStreamWriter();
            try {
                writer.writeStartDocument();
                {
                    writer.writeStartElement("root_element_name");
                    {
                        writer.writeStartElement("child_element");
                        writer.writeEndElement();
                    }
                    writer.writeEndElement();
                }
                writer.writeEndDocument();
                writer.flush();
                writer.close();
            } catch (XMLStreamException e) {
                e.printStackTrace();
            }
        } catch (SaxonApiException e) {
            e.printStackTrace();
        }
    }
}
If other suggestions don't work, you can get an indenting XMLStreamWriter from Saxon like this:
Processor p = new net.sf.saxon.s9api.Processor(false); // false: Saxon-HE, no licensed features
Serializer s = p.newSerializer();
s.setOutputProperty(Property.METHOD, "xml");
s.setOutputProperty(Property.INDENT, "yes");
s.setOutputStream(....);
XMLStreamWriter writer = s.getXMLStreamWriter();
One advantage is that this allows you a lot of control over the serialization using other serialization properties.
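For example (a small sketch; s is the Serializer from the snippet above, and these Property constants are part of Saxon's s9api Serializer):

s.setOutputProperty(Property.OMIT_XML_DECLARATION, "no");
s.setOutputProperty(Property.ENCODING, "UTF-8");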
There is an alternative implementation of IndentingXMLStreamWriter, which is provided as part of the open source stax-utils project here: http://java.net/projects/stax-utils/pages/Home
stax-utils is a project set up to provide utilities based around the JSR-173 streaming XML API for Java.
You'd need to add the stax-utils jar as a dependency for your project. Then you can import javanet.staxutils.IndentingXMLStreamWriter.
Since stax-utils is in the maven central repository, if you use maven for your dependencies you can get it with:
<dependency>
    <groupId>net.java.dev.stax-utils</groupId>
    <artifactId>stax-utils</artifactId>
    <version>20070216</version>
    <exclusions>
        <exclusion>
            <groupId>com.bea.xml</groupId>
            <artifactId>jsr173-ri</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Functionality seems very similar, if not equivalent, to the txw2 class.
I have excluded jsr173-ri since I am using JDK 1.7. I think 1.6+ includes the JSR-173 API as a standard feature, but if you are using 1.5 or lower you'd need the extra JSR-173 jar.
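Usage follows the same wrap-the-writer pattern as the txw2 class (a minimal sketch; note the javanet.staxutils package):

import javanet.staxutils.IndentingXMLStreamWriter;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

// Wrap the underlying StAX writer; output is indented as it is written.
XMLStreamWriter writer = new IndentingXMLStreamWriter(
        XMLOutputFactory.newInstance().createXMLStreamWriter(System.out));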
If you are using Maven and Java 8, you can import the following to use this class:
<!-- https://mvnrepository.com/artifact/com.sun.xml.bind/jaxb-impl -->
<dependency>
    <groupId>com.sun.xml.bind</groupId>
    <artifactId>jaxb-impl</artifactId>
    <version>2.1.17</version>
</dependency>
And then you import it as: import com.sun.xml.txw2.output.IndentingXMLStreamWriter;
Related
I'm trying to upgrade dependencies for a Java application that uses com.vividsolutions.jts. I have removed all the references to this library from pom.xml and replaced them with the ones from org.locationtech.jts.
I have updated all the imports to use org.locationtech version. However, in my function I'm still getting an error related to com.vividsolutions object not being imported.
import org.locationtech.spatial4j.context.jts.JtsSpatialContext;
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.LinearRing;
import org.locationtech.spatial4j.shape.jts.JtsGeometry;
// ... other stuff
public static void myFunc() {
    GeometryFactory gf = new GeometryFactory();
    // "coordinates" is a Coordinate[] built elsewhere in the class
    LinearRing linear = gf.createLinearRing(coordinates);
    JtsGeometry poly = new JtsGeometry(gf.createPolygon(linear), JtsSpatialContext.GEO, true, true);
}
Here's the error that I get for the last line of the above code:
[ERROR] cannot access com.vividsolutions.jts.geom.Geometry
[ERROR] class file for com.vividsolutions.jts.geom.Geometry not found
I'm clearly importing JtsGeometry from the new library at org.locationtech; however, the build still thinks the old library should be used.
The old library isn't in the dependency tree or the code anymore, as the following commands don't return anything:
mvn dependency:tree | grep vivid
rg vivid
Any idea what I'm missing here or how I should troubleshoot this?
I'm not too sure what was wrong with the vividsolutions library inclusion. However, I was able to resolve my issue by including both of these in pom.xml:
<dependency>
    <groupId>org.locationtech.jts</groupId>
    <artifactId>jts-core</artifactId>
    <version>1.18.2</version>
</dependency>
<dependency>
    <groupId>org.locationtech.spatial4j</groupId>
    <artifactId>spatial4j</artifactId>
    <version>0.8</version>
</dependency>
Initially I didn't have the second dependency in the pom.xml file. (My guess is that an older spatial4j was being pulled in transitively; spatial4j only switched from com.vividsolutions to org.locationtech JTS in version 0.7, so pinning 0.8 ensures the new package names are used throughout.)
I'm trying out full-text search engine frameworks for a Java EE application with JPA. I don't want to switch to Hibernate, which offers the quite neat Hibernate Search feature, so I'm starting with Apache Lucene now.
I'd like to search through the String fields of JPA entities (after creating an index for them, i.e. the usual writer/reader example). I'll use an EJB wrapping the persistence layer to keep the index up to date. I assume it's irrelevant that I'm using JPA and Java EE.
Since Apache projects don't seem to have a policy of keeping their documentation up to date, or at least marking it as outdated, most of the examples at https://wiki.apache.org/lucene-java/TheBasics and similar sites don't work, because classes and methods have been removed. The same goes for blog posts found via search engines. It's possible to find examples, but anything one finds needs to be tried out, because there's about a 90% chance that it refers to classes or methods which no longer exist.
I'm looking for an example of the above use case with an up-to-date version of Lucene, which is 6.5.0 afaik.
I am not sure what has changed in 6.5, but the code below was written for Lucene 6.0.0 and compiles and runs with 6.5.0 too.
IndexCreation
import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.store.FSDirectory;

public class IndexCreator {

    public static IndexWriter getWriter() throws IOException {
        File indexDir = new File("D:\\Experiment");
        SimpleAnalyzer analyzer = new SimpleAnalyzer();
        IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
        indexWriterConfig.setOpenMode(OpenMode.CREATE_OR_APPEND);
        IndexWriter indexWriter = new IndexWriter(
                FSDirectory.open(indexDir.toPath()), indexWriterConfig);
        indexWriter.commit();
        return indexWriter;
    }
}
Now you can use this writer to index your documents with the writer.updateDocument(...) and writer.addDocument(...) methods.
Fields can be added to a document like this:
doc.add(new Field("NAME", "Tom", new FieldType(TextField.TYPE_STORED)));
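Putting it together, a minimal indexing sketch (the NAME field and the Tom value are just example data):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;

public class Indexer {

    public static void main(String[] args) throws Exception {
        IndexWriter writer = IndexCreator.getWriter();
        Document doc = new Document();
        // Store the value so it can be read back at search time.
        doc.add(new Field("NAME", "Tom", new FieldType(TextField.TYPE_STORED)));
        writer.addDocument(doc);
        writer.commit(); // make the document visible to new readers
        writer.close();
    }
}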
Searching
import java.io.IOException;
import java.nio.file.Paths;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.store.FSDirectory;

public class LuceneSearcher {

    public static void searchIndex() throws IOException {
        IndexReader reader = DirectoryReader.open(
                FSDirectory.open(Paths.get("D:\\Experiment")));
        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs hits = searcher.search(new WildcardQuery(new Term("NAME", "*")), 20);
        if (null == hits.scoreDocs || hits.scoreDocs.length <= 0) {
            System.out.println("No hits found");
            return;
        }
        System.out.println(hits.scoreDocs.length + " docs found!");
        for (ScoreDoc hit : hits.scoreDocs) {
            Document doc = searcher.doc(hit.doc);
            System.out.println(doc.get("NAME")); // read back the stored field
        }
        reader.close();
    }
}
The searcher code assumes that you have indexed documents with NAME as a field name.
I think this should be enough to get you started.
Let me know if you need anything else.
I have these Maven dependencies:
<dependencies>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-core</artifactId>
        <version>6.5.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-analyzers-common</artifactId>
        <version>6.5.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-queryparser</artifactId>
        <version>6.5.0</version>
    </dependency>
</dependencies>
I just want to do some 2D matrix operations using JavaRDD, and looked into this link: https://spark.apache.org/docs/latest/mllib-data-types.html. I tried exactly the same sample code that is given there, but Eclipse doesn't seem to recognize mllib in the first place. Here is my code snippet (same as in the above link):
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.linalg.Matrices;
import org.apache.spark.mllib.linalg.Matrix;
import org.apache.spark.mllib.linalg.QRDecomposition;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.distributed.RowMatrix;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.util.MLUtils;
JavaRDD<Vector> rows = ... // a JavaRDD of local vectors
// Create a RowMatrix from an JavaRDD<Vector>.
RowMatrix mat = new RowMatrix(rows.rdd());
// Get its size.
long m = mat.numRows();
long n = mat.numCols();
// QR decomposition
QRDecomposition<RowMatrix, Matrix> result = mat.tallSkinnyQR(true);
I am using Spark 2.0.2. Where am I going wrong? Do I need any Maven dependency? I checked my Spark home directory, and I have the mllib and mllib-local directories in my Spark directory.
Check your pom.xml to see if there is a spark-mllib dependency. If not, get the right version from here: https://mvnrepository.com/artifact/org.apache.spark/spark-mllib_2.11
At the point of my answering, the latest version is:
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-mllib_2.11 -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
Also make sure that the spark-mllib dependency in your pom.xml is not declared with <scope>runtime</scope>; it has to be on the compile classpath for Eclipse to resolve the imports.
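Once the dependency is on the compile classpath, a self-contained sketch like this should compile and run (it also assumes spark-core is on the classpath; the local master and tiny data set are just for illustration):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.linalg.distributed.RowMatrix;

public class MatrixExample {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("matrix-example").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // A tiny 2x3 matrix as a JavaRDD of local vectors.
        JavaRDD<Vector> rows = sc.parallelize(Arrays.asList(
                Vectors.dense(1.0, 2.0, 3.0),
                Vectors.dense(4.0, 5.0, 6.0)));

        RowMatrix mat = new RowMatrix(rows.rdd());
        System.out.println(mat.numRows() + " x " + mat.numCols());

        sc.stop();
    }
}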
I am trying to use an open source tool built on Batik and I am running into trouble with one of the dependencies when I try to build it. I'm pretty sure this is something to do with classpaths and library locations, but I can't figure out what is happening.
The project I am working with (SVG2EMF) uses the FreeHep EMF Driver, which in turn uses the FreeHep GraphicsIO project. Because these three have not been playing nicely on my system (Ubuntu 14.04), I've downloaded the source for all three to try and step through the problem.
Everything builds correctly and I can step through the code successfully, but the unit tests on SVG2EMF fail at the point where the EMF Driver makes a call to something from GraphicsIO. The relevant part of the code in question is here:
import org.freehep.graphicsio.ImageGraphics2D;
import org.freehep.graphicsio.ImageConstants;

// ...snip...

public class AlphaBlend extends EMFTag implements EMFConstants
{
    // ...snip...

    public void write(int tagID, EMFOutputStream emf) throws IOException
    {
        emf.writeRECTL(bounds);
        emf.writeLONG(x);
        emf.writeLONG(y);
        emf.writeLONG(width);
        emf.writeLONG(height);
        dwROP.write(emf);
        emf.writeLONG(xSrc);
        emf.writeLONG(ySrc);
        emf.writeXFORM(transform);
        emf.writeCOLORREF(bkg);
        emf.writeDWORD(usage);
        emf.writeDWORD(size); // bmi follows this record immediately
        emf.writeDWORD(BitmapInfoHeader.size);
        emf.writeDWORD(size + BitmapInfoHeader.size); // bitmap follows bmi

        emf.pushBuffer();
        int encode;
        // plain
        encode = BI_RGB;
        ImageGraphics2D.writeImage(
                (RenderedImage) image,
                ImageConstants.RAW.toLowerCase(),
                ImageGraphics2D.getRAWProperties(bkg, "*BGRA"),
                new NoCloseOutputStream(emf));
        // emf.writeImage(image, bkg, "*BGRA", 1);
        // png
        // encode = BI_PNG;
        // ImageGraphics2D.writeImage(image, "png", new Properties(), new
        // NoCloseOutputStream(emf));
        // jpg
        // encode = BI_JPEG;
        // ImageGraphics2D.writeImage(image, "jpg", new Properties(), new
        // NoCloseOutputStream(emf));
        int length = emf.popBuffer();

        emf.writeDWORD(length);
        emf.writeLONG(image.getWidth());
        emf.writeLONG(image.getHeight());

        BitmapInfoHeader header = new BitmapInfoHeader(image.getWidth(),
                image.getHeight(), 32, encode, length, 0, 0, 0, 0);
        bmi = new BitmapInfo(header);
        bmi.write(emf);

        emf.append();
    }
This throws a NoClassDefFoundError specifically relating to org.freehep.graphicsio.ImageGraphics2D on that writeImage call. When I step through in the debugger, a watch on ImageConstants.RAW has the value of Unknown type "org.freehep.graphicsio.ImageConstants" even though the application built quite happily with those references. Any references to ImageGraphics2D behave in exactly the same way.
The dependency in the SVG2EMF pom.xml looks like this:
<dependencies>
    <!-- some other dependencies -->
    <dependency>
        <groupId>org.freehep</groupId>
        <artifactId>freehep-graphicsio-emf</artifactId>
        <version>2.1.1</version>
    </dependency>
</dependencies>
The dependencies from the FreeHEP EMF Driver look like this:
<dependencies>
    <!-- necessary because transitive deps seem to go above inherited deps -->
    <dependency>
        <groupId>org.freehep</groupId>
        <artifactId>freehep-util</artifactId>
        <version>2.0.2</version>
    </dependency>
    <dependency>
        <groupId>org.freehep</groupId>
        <artifactId>freehep-graphicsio</artifactId>
        <version>2.1.1</version>
    </dependency>
    <!-- Other dependencies -->
</dependencies>
Can anybody shed any light on what is actually going on here or what I need to be doing in order to enable this to work?
EDIT: I think I have found where the problem is coming from. Way down the stack trace I see a "Caused by: ExceptionInInitializerError", which appears to mark the class as inaccessible from then on. So the dependency does exist, but an exception is being thrown by its static initializer, which causes the JRE to mark the class as unusable.
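For anyone unfamiliar with this failure mode, a small self-contained sketch reproduces it: the first use of a class whose static initializer throws fails with ExceptionInInitializerError, and every later use fails with NoClassDefFoundError:

public class InitFailureDemo {

    static class Broken {
        // The method call forces a static initializer, which throws.
        static final int VALUE = compute();
        static int compute() { throw new RuntimeException("boom"); }
    }

    public static void main(String[] args) {
        try {
            System.out.println(Broken.VALUE); // triggers ExceptionInInitializerError
        } catch (Throwable t) {
            System.out.println("first: " + t);
        }
        try {
            System.out.println(Broken.VALUE); // class is now unusable: NoClassDefFoundError
        } catch (Throwable t) {
            System.out.println("second: " + t);
        }
    }
}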
Further edit: To solve these problems it can be useful to know (although it is not mentioned anywhere on the freehep.org website) that the project is now hosted on GitHub, so you can find newer versions there. In my case, going straight to the latest version solved the problem.
I have some really simple Java code which reads data from HDFS:
try {
    InputStream s = new GzipCompressorInputStream(hdfsFileSystem.open(filePath), false);
    ByteStreams.copy(s, outputStream);
    s.close();
} catch (Exception ex) {
    logger.error("Problem with file " + filePath, ex);
}
Sometimes (not always) it throws this exception:
java.lang.NoSuchMethodError: org.apache.commons.io.IOUtils.closeQuietly(Ljava/io/Closeable;)V
at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:1099)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:533)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
at java.io.DataInputStream.read(DataInputStream.java:149)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.init(GzipCompressorInputStream.java:136)
at org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream.<init>(GzipCompressorInputStream.java:129)
[...]
The exception is thrown on this line:
InputStream s = new GzipCompressorInputStream(hdfsFileSystem.open(filePath), false);
I am using the Maven dependency below to load the Hadoop client:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.2.0</version>
</dependency>
Does anybody know how to fix this problem? Of course I could change catch (Exception e) to catch (Error e), but that isn't a solution, just a workaround.
It looks like there are several versions of commons-io.jar on your classpath.
The method closeQuietly(java.io.Closeable) appeared in commons-io version 2.0.
Sometimes a commons-io.jar with an older version is loaded first, and then the exception appears.
You need to fix your classpath.
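A quick way to check which jar the class is actually loaded from (a small diagnostic sketch; run it inside the affected application):

import org.apache.commons.io.IOUtils;

public class ClasspathCheck {

    public static void main(String[] args) {
        // Prints the location of the jar that IOUtils was loaded from,
        // which reveals whether an old commons-io version wins.
        System.out.println(IOUtils.class.getProtectionDomain()
                .getCodeSource().getLocation());
    }
}

With Maven, mvn dependency:tree -Dincludes=commons-io shows which dependency pulls in each version, so you can add an exclusion or pin the version you need.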