Geotools PolygonExtractionProcess is not working - java

I want to convert a raster file into a shapefile. I am creating a GridCoverage2D object and want to use the execute method of Geotools PolygonExtractionProcess class but this method is not executing. I sadly cannot find any useful example usages of this class. This is my code:
try {
    File rasterFile = new File("C:\\Data\\mytif.tif");
    AbstractGridFormat format = GridFormatFinder.findFormat(rasterFile);
    Hints hints = new Hints();
    if (format instanceof GeoTiffFormat) {
        hints = new Hints(Hints.FORCE_LONGITUDE_FIRST_AXIS_ORDER, Boolean.TRUE);
    }
    reader = format.getReader(rasterFile, hints);
    GridCoverage2D coverage = reader.read(null); // It all works until this point
    final PolygonExtractionProcess process = new PolygonExtractionProcess();
    //System.out.println("This gets printed");
    SimpleFeatureCollection sfColl = process.execute(coverage, null, Boolean.TRUE, null, null, null, null);
    //System.out.println("This does not get printed anymore");
    Style style = SLD.createPolygonStyle(Color.RED, null, 0.0f);
    Layer layer = new FeatureLayer(sfColl, style);
    map.addLayer(layer);
} catch (Exception e) {
    System.err.println(e);
}

First, if all you want to do is extract vectors from a raster, you could probably skip using the process API completely by looking inside PolygonExtractionProcess to see how it works using JAI.
But if you want to run a process then you should read the process tutorial, which describes how processes work and how to call them from your own code (this is usually used for testing). Essentially, the issue you are having comes from not understanding that processes are called from the processing engine (ProcessExecutor), which manages the input and output, threading, etc. for you.
So your code should look something like this:
public class RasterToVector {

    public static void main(String[] args)
            throws IllegalArgumentException, IOException, InterruptedException, ExecutionException {
        RasterToVector rtv = new RasterToVector();
        SimpleFeatureCollection features = rtv.extract(args[0]);

        Style style = SLD.createPolygonStyle(Color.RED, null, 0.0f);
        Layer layer = new FeatureLayer(features, style);
        MapContent map = new MapContent();
        map.addLayer(layer);
        JMapFrame.showMap(map);
    }

    org.geotools.process.Process process;

    public RasterToVector() {
        Name name = new NameImpl("ras", "PolygonExtraction");
        process = Processors.createProcess(name);
    }

    private SimpleFeatureCollection extract(String filename)
            throws IllegalArgumentException, IOException, InterruptedException, ExecutionException {
        File rasterFile = new File(filename);
        AbstractGridFormat format = GridFormatFinder.findFormat(rasterFile);
        Hints hints = new Hints();
        if (format instanceof GeoTiffFormat) {
            hints = new Hints(Hints.FORCE_LONGITUDE_FIRST_AXIS_ORDER, Boolean.TRUE);
        }
        AbstractGridCoverage2DReader reader = format.getReader(rasterFile, hints);
        GridCoverage2D coverage = reader.read(null);

        ProcessExecutor engine = Processors.newProcessExecutor(2);
        Map<String, Object> input = new KVP("data", coverage);
        Progress working = engine.submit(process, input);
        Map<String, Object> result = working.get();
        SimpleFeatureCollection features = (SimpleFeatureCollection) result.get("result");
        return features;
    }
}
For a local terrain file I get the following result:

Unable to open pdf after creation

I'm writing a program that takes a template PDF with a bunch of blank form fields, makes a copy of it, fills in the forms, then flattens the fields.
One of these templates has a ton of fields, so a single method that fills in all of them fails to compile with a "code too large" error, due to the size limit on methods.
To work around this, I closed the destination file and then tried to open it in another method where I could continue to fill in the fields, but this results in an error that says "(The requested operation cannot be performed on a file with a user-mapped section open)".
I ended the first method by closing the PDF, so I'm not sure what the issue is. The program executes the first method and fills the fields, but throws the error when it gets to the second method. Sample code below.
public void E2fill(String srcE2, String destE2) throws IOException
{
    try
    {
        PdfDocument pdf2 = new PdfDocument(new PdfReader(destE2), new PdfWriter(dest2E2));
        PdfAcroForm form2 = PdfAcroForm.getAcroForm(pdf2, true);
        Map<String, PdfFormField> fields2 = form2.getFormFields();
        PdfFormField field2;
        fields2.get("fieldname1").setValue(stringname1);
        //lots more field fills
        pdf2.close();
    }
    catch(Exception x)
    {
        System.out.println(x.getMessage());
    }
}

public void E2fill2(String destE2, String dest2E2) throws IOException
{
    try
    {
        PdfDocument pdf2 = new PdfDocument(new PdfReader(destE2), new PdfWriter(dest2E2));
        PdfAcroForm form2 = PdfAcroForm.getAcroForm(pdf2, true);
        Map<String, PdfFormField> fields2 = form2.getFormFields();
        PdfFormField field2;
        fields2.get("fieldname546").setValue(stringname546);
        //more field fills
        form2.flattenFields();
        pdf2.close();
    }
    catch(Exception x)
    {
        System.out.println(x.getMessage());
    }
}
I suggest you try this:
public void fill(String srcE2, String destE2) throws IOException {
    // Open the document once: read the template, write the filled copy
    PdfDocument pdf2 = new PdfDocument(new PdfReader(srcE2), new PdfWriter(destE2));
    PdfAcroForm form2 = PdfAcroForm.getAcroForm(pdf2, true);
    Map<String, PdfFormField> fields2 = form2.getFormFields();
    E2fill(fields2);
    E2fill2(fields2);
    form2.flattenFields();
    pdf2.close();
}

public void E2fill(Map<String, PdfFormField> fields2) throws IOException
{
    fields2.get("fieldname1").setValue(stringname1);
    //lots more field fills
}

public void E2fill2(Map<String, PdfFormField> fields2) throws IOException {
    fields2.get("fieldname546").setValue(stringname546);
    //more field fills
}
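The point of the restructuring is that the PDF is opened exactly once, and the field filling is merely split across several smaller methods (each safely under the JVM's 64 KB per-method bytecode limit) that all work on the same fields2 map. A minimal usage sketch, with placeholder file and class names (E2Form is hypothetical and stands for whatever class holds the methods above):
public static void main(String[] args) throws IOException {
    // Hypothetical call site: the first path is the blank template,
    // the second the filled, flattened copy.
    new E2Form().fill("E2_template.pdf", "E2_filled.pdf");
}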

GeoTools: insert custom Polygons into existiong .shp file

I'm new to GeoTools. Now I want to insert a custom area (a Polygon) into a shapefile of Austria.
My code:
public static void main(String[] args) throws IOException {
    File file = new File("src/main/java/org/geotools/austria.shp");
    Map<String, Object> map = new HashMap<>();
    map.put("url", file.toURI().toURL());
    DataStore dataStore = DataStoreFinder.getDataStore(map);
    String typeName = dataStore.getTypeNames()[0];
    FeatureSource<SimpleFeatureType, SimpleFeature> source =
            dataStore.getFeatureSource(typeName);

    MapContent showmap = new MapContent();
    showmap.setTitle("Austria");
    Style style = SLD.createSimpleStyle(source.getSchema());
    Layer layer = new FeatureLayer(source, style);
    showmap.addLayer(layer);

    // display the map
    JMapFrame.showMap(showmap);
}
My current result:
This image shows my current output; I drew a red hexagon on it to show what I want to have in the future.
How can I insert and display this Polygon into a Shapefile?
First you need to create a new Shapefile (you could overwrite the old one but it is easy to lose your data that way).
SimpleFeatureType TYPE = dataStore.getSchema(typeName);
File newFile = new File("output.shp");
ShapefileDataStoreFactory dataStoreFactory = new ShapefileDataStoreFactory();
Map<String, Serializable> params = new HashMap<String, Serializable>();
params.put("url", URLs.fileToURL(newFile));
params.put("create spatial index", Boolean.TRUE);
ShapefileDataStore newDataStore = (ShapefileDataStore) dataStoreFactory.createNewDataStore(params);
newDataStore.createSchema(TYPE);
Then you need to copy the existing polygons to the new file (I'm assuming they are in a SimpleFeatureCollection called collection) followed by the new feature(s):
Transaction transaction = new DefaultTransaction("create");
String typeName = newDataStore.getTypeNames()[0];
SimpleFeatureSource featureSource = newDataStore.getFeatureSource(typeName);

if (featureSource instanceof SimpleFeatureStore) {
    SimpleFeatureStore featureStore = (SimpleFeatureStore) featureSource;
    featureStore.setTransaction(transaction);
    try {
        featureStore.addFeatures(collection);
        // Now add the hexagon
        featureStore.addFeatures(DataUtilities.collection(hexagon));
        transaction.commit();
    } catch (Exception problem) {
        problem.printStackTrace();
        transaction.rollback();
        System.exit(-1);
    } finally {
        transaction.close();
    }
} else {
    System.out.println(typeName + " does not support read/write access");
    System.exit(1);
}
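The snippet above assumes a SimpleFeature named hexagon already exists (and that collection holds the existing features, e.g. read via source.getFeatures()). For completeness, here is a minimal sketch of building such a feature against the schema copied into the new shapefile (TYPE above); the coordinates are invented placeholders, and the polygon is wrapped in a MultiPolygon because shapefile schemas usually declare that geometry type:
// Build a hexagon geometry (coordinates are placeholders in the data's CRS).
GeometryFactory gf = JTSFactoryFinder.getGeometryFactory();
Coordinate[] ring = new Coordinate[] {
    new Coordinate(14.0, 47.5), new Coordinate(14.5, 47.8), new Coordinate(15.0, 47.5),
    new Coordinate(15.0, 47.0), new Coordinate(14.5, 46.7), new Coordinate(14.0, 47.0),
    new Coordinate(14.0, 47.5) // first point repeated to close the ring
};
Polygon polygon = gf.createPolygon(ring);
MultiPolygon geometry = gf.createMultiPolygon(new Polygon[] { polygon });

// Build the feature against the copied schema; non-geometry attributes are left null here.
SimpleFeatureBuilder builder = new SimpleFeatureBuilder(TYPE);
builder.set(TYPE.getGeometryDescriptor().getLocalName(), geometry);
SimpleFeature hexagon = builder.buildFeature(null); // null = auto-generated feature id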

reading datamatrix with zxing lib in java

My test case is very simple: I'm generating a Data Matrix code and then I want to read it again, both with zxing v3.0.0. I'm doing this the same way with QR code and PDF417, and it works perfectly.
This is my code:
@Test
public void testDataMatrix() throws Exception {
    writeDataMatrix();
    String result = readDataMatrix("out/data_matrix.png", "UTF-8", new EnumMap<DecodeHintType, Object>(DecodeHintType.class));
    assertEquals("my message", result);
}

public static void writeDataMatrix() throws IOException {
    DataMatrixWriter writer = new DataMatrixWriter();
    BitMatrix matrix = writer.encode("my message", BarcodeFormat.DATA_MATRIX, 100, 100);
    MatrixToImageWriter.writeToPath(matrix, "PNG", Paths.get("out/data_matrix.png"));
}

public static String readDataMatrix(String filePath, String charset, Map hintMap)
        throws FileNotFoundException, IOException, NotFoundException {
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(
            new BufferedImageLuminanceSource(
                    ImageIO.read(new FileInputStream(filePath)))));
    Result qrCodeResult = new MultiFormatReader().decode(binaryBitmap, hintMap);
    return qrCodeResult.getText();
}
If I run the test above, a Data Matrix image is generated in out. This file is readable by the zxing online decoder, but it does not work in my own code:
com.google.zxing.NotFoundException
Any ideas? Thanks in advance.
I had the same problem, but this worked for me. I think by default the library expects margins around the barcode, so if you don't have them, use the PURE_BARCODE hint.
public static String readDataMatrix(String filePath, String charset)
        throws FileNotFoundException, IOException, NotFoundException
{
    HashMap<DecodeHintType, Object> decodeHintMap = new HashMap<DecodeHintType, Object>();
    decodeHintMap.put(DecodeHintType.PURE_BARCODE, Boolean.TRUE);
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(
            new BufferedImageLuminanceSource(ImageIO.read(new FileInputStream(filePath)))));
    Result codeResult = new DataMatrixReader().decode(binaryBitmap, decodeHintMap);
    return codeResult.getText();
}
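The same PURE_BARCODE hint also works with the MultiFormatReader used in the question, if you would rather keep format auto-detection. A sketch under that assumption (the method name readDataMatrixMultiFormat is mine):
public static String readDataMatrixMultiFormat(String filePath)
        throws IOException, NotFoundException {
    Map<DecodeHintType, Object> hints = new EnumMap<DecodeHintType, Object>(DecodeHintType.class);
    hints.put(DecodeHintType.PURE_BARCODE, Boolean.TRUE);
    // Optional: restrict the search to Data Matrix only.
    hints.put(DecodeHintType.POSSIBLE_FORMATS, Collections.singletonList(BarcodeFormat.DATA_MATRIX));
    BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(
            new BufferedImageLuminanceSource(ImageIO.read(new File(filePath)))));
    return new MultiFormatReader().decode(bitmap, hints).getText();
}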

Hadoop - Writing to HBase directly from the Mapper

I have a Hadoop job whose output should be written to HBase. I do not really need a reducer; the kind of row I would like to insert is determined in the Mapper.
How can I use TableOutputFormat to achieve this? From all the examples I have seen, the assumption is that the reducer is the one creating the Put, and that TableMapper is just for reading from an HBase table.
In my case the input is HDFS and the output is a Put to a specific table. I cannot find anything in TableMapReduceUtil that can help me with that either.
Is there any example out there that can help me with that?
BTW, I am using the new Hadoop API
This is an example of reading from a file and putting all lines into HBase. The example is from "HBase: The Definitive Guide" and you can find it in the book's repository. To get it, just clone the repo onto your computer:
git clone git://github.com/larsgeorge/hbase-book.git
The book also explains the code in detail, but if something is unclear, feel free to ask.
public class ImportFromFile {
    public static final String NAME = "ImportFromFile";
    public enum Counters { LINES }

    static class ImportMapper
            extends Mapper<LongWritable, Text, ImmutableBytesWritable, Writable> {
        private byte[] family = null;
        private byte[] qualifier = null;

        @Override
        protected void setup(Context context)
                throws IOException, InterruptedException {
            String column = context.getConfiguration().get("conf.column");
            byte[][] colkey = KeyValue.parseColumn(Bytes.toBytes(column));
            family = colkey[0];
            if (colkey.length > 1) {
                qualifier = colkey[1];
            }
        }

        @Override
        public void map(LongWritable offset, Text line, Context context)
                throws IOException {
            try {
                String lineString = line.toString();
                byte[] rowkey = DigestUtils.md5(lineString);
                Put put = new Put(rowkey);
                put.add(family, qualifier, Bytes.toBytes(lineString));
                context.write(new ImmutableBytesWritable(rowkey), put);
                context.getCounter(Counters.LINES).increment(1);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    private static CommandLine parseArgs(String[] args) throws ParseException {
        Options options = new Options();
        Option o = new Option("t", "table", true,
                "table to import into (must exist)");
        o.setArgName("table-name");
        o.setRequired(true);
        options.addOption(o);
        o = new Option("c", "column", true,
                "column to store row data into (must exist)");
        o.setArgName("family:qualifier");
        o.setRequired(true);
        options.addOption(o);
        o = new Option("i", "input", true,
                "the directory or file to read from");
        o.setArgName("path-in-HDFS");
        o.setRequired(true);
        options.addOption(o);
        options.addOption("d", "debug", false, "switch on DEBUG log level");
        CommandLineParser parser = new PosixParser();
        CommandLine cmd = null;
        try {
            cmd = parser.parse(options, args);
        } catch (Exception e) {
            System.err.println("ERROR: " + e.getMessage() + "\n");
            HelpFormatter formatter = new HelpFormatter();
            formatter.printHelp(NAME + " ", options, true);
            System.exit(-1);
        }
        return cmd;
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        String[] otherArgs =
                new GenericOptionsParser(conf, args).getRemainingArgs();
        CommandLine cmd = parseArgs(otherArgs);
        String table = cmd.getOptionValue("t");
        String input = cmd.getOptionValue("i");
        String column = cmd.getOptionValue("c");
        conf.set("conf.column", column);

        Job job = new Job(conf, "Import from file " + input + " into table " + table);
        job.setJarByClass(ImportFromFile.class);
        job.setMapperClass(ImportMapper.class);
        job.setOutputFormatClass(TableOutputFormat.class);
        job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, table);
        job.setOutputKeyClass(ImmutableBytesWritable.class);
        job.setOutputValueClass(Writable.class);
        job.setNumReduceTasks(0);
        FileInputFormat.addInputPath(job, new Path(input));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
You just need to make the mapper output the (ImmutableBytesWritable, Put) pair. The OutputFormat only specifies how to persist the output key-values; it does not necessarily mean that the key-values come from the reducer.
You would need to do something like this in the mapper:
... extends TableMapper<ImmutableBytesWritable, Put> {
    ...
    ...
    context.write(<some key>, <some Put or Delete object>);
}
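Since the question's input is a plain HDFS file rather than an HBase table, the mapper does not actually need to extend TableMapper; an ordinary Mapper works as well. A minimal sketch under that assumption, with placeholder row key, column family, and qualifier (none of these names come from the question), and assuming the driver configures TableOutputFormat, OUTPUT_TABLE and setNumReduceTasks(0) exactly as in the ImportFromFile example above:
static class HdfsToHBaseMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        byte[] rowKey = Bytes.toBytes(offset.get());            // placeholder row key choice
        Put put = new Put(rowKey);
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("line"),     // placeholder family/qualifier
                Bytes.toBytes(line.toString()));
        context.write(new ImmutableBytesWritable(rowKey), put); // TableOutputFormat persists the Put
    }
}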

Running OpenOffice Macro from Java API

I'm trying to write a Java program that will run an OpenOffice Macro. I'm getting this error:
java.lang.RuntimeException:
com.sun.star.script.provider.ScriptFrameworkErrorException: Incorrect
format for Script URI: vnd.sun.star.script:Name of macro
I believe it has something to do with the way that I'm calling the macro (String cmd)
I've searched high and low but can't seem to find any information on this. There are a few posts on the OO forums but none of them seemed to help. Here is some of the code:
public static void main(String[] args) throws BootstrapException {
    if (args.length == 0)
    {
        System.out.println("Must enter a filename");
        System.exit(1);
    }
    try
    {
        String param = args[0];
        //String cmd = "Standard.Conversion.ConvertHTMLToWord?langauge=Basic&location=application";
        String cmd = "Name.Of.Macro?langauge=Basic&location=Document";
        System.out.println("Running macro on " + param);
        Macro macObj = new Macro();
        macObj.executeMacro(cmd, new Object[]{param});
        System.out.println("Completed");
    }
    catch(Exception e)
    {
        System.out.println(e.toString());
        //e.printStackTrace();
    }
}
Macro Class:
class Macro {
    private static final String ooExecPath = "C:/Program Files/OpenOffice.org 3/program";

    public Object executeMacro(String strMacroName, Object[] aParams) throws BootstrapException
    {
        try
        {
            com.sun.star.uno.XComponentContext xContext;
            System.out.println("Connecting to OpenOffice");
            xContext = BootstrapSocketConnector.bootstrap(ooExecPath);
            System.out.println("Connected to a running instance of OpenOffice");
            System.out.println("Trying to execute macro...");
            com.sun.star.text.XTextDocument mxDoc = openWriter(xContext);
            XScriptProviderSupplier xScriptPS = (XScriptProviderSupplier) UnoRuntime.queryInterface(XScriptProviderSupplier.class, mxDoc);
            XScriptProvider xScriptProvider = xScriptPS.getScriptProvider();
            XScript xScript = xScriptProvider.getScript("vnd.sun.star.script:" + strMacroName);
            short[][] aOutParamIndex = new short[1][1];
            Object[][] aOutParam = new Object[1][1];
            return xScript.invoke(aParams, aOutParamIndex, aOutParam);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static com.sun.star.text.XTextDocument openWriter(com.sun.star.uno.XComponentContext xContext)
    {
        com.sun.star.frame.XComponentLoader xCLoader;
        com.sun.star.text.XTextDocument xDoc = null;
        com.sun.star.lang.XComponent xComp = null;
        try {
            // get the remote office service manager
            com.sun.star.lang.XMultiComponentFactory xMCF =
                    xContext.getServiceManager();
            Object oDesktop = xMCF.createInstanceWithContext(
                    "com.sun.star.frame.Desktop", xContext);
            xCLoader = (com.sun.star.frame.XComponentLoader)
                    UnoRuntime.queryInterface(com.sun.star.frame.XComponentLoader.class,
                            oDesktop);
            com.sun.star.beans.PropertyValue[] szEmptyArgs =
                    new com.sun.star.beans.PropertyValue[0];
            /*
            ArrayList<PropertyValue> props = new ArrayList<PropertyValue>();
            PropertyValue p = new PropertyValue();
            p.Name = "Hidden";
            p.Value = new Boolean(true);
            props.add(p);
            PropertyValue[] properties = new PropertyValue[props.size()];
            props.toArray(properties);
            String strDoc = "private:factory/swriter";
            xComp = xCLoader.loadComponentFromURL(strDoc, "_blank", 0, properties);
            */
            String strDoc = "private:factory/swriter";
            xComp = xCLoader.loadComponentFromURL(strDoc, "_blank", 0, szEmptyArgs);
            xDoc = (com.sun.star.text.XTextDocument)
                    UnoRuntime.queryInterface(com.sun.star.text.XTextDocument.class,
                            xComp);
        } catch (Exception e) {
            System.err.println(" Exception " + e);
            e.printStackTrace(System.err);
        }
        return xDoc;
    }
}
I suppose your problem is in the "Name.Of.Macro": it must be: Library.Module.NameOfMacro.
"langauge=Basic" of course sets the language name, and "location=application" means the macro library should be searched in the opened document, and not in global OO libraries.
As far as parameters are involved, I use:
XScriptProviderSupplier xScriptPS = (XScriptProviderSupplier) UnoRuntime.queryInterface(XScriptProviderSupplier.class, xComponent);
XScriptProvider xScriptProvider = xScriptPS.getScriptProvider();
XScript xScript = xScriptProvider.getScript("vnd.sun.star.script:"+macroName);
short[][] aOutParamIndex = new short[1][1];
Object[][] aOutParam = new Object[1][1];
Object[] aParams = new String[2];
aParams[0] = myFirstParameterName;
aParams[1] = mySecondParameterName;
@SuppressWarnings("unused")
Object result = xScript.invoke(aParams, aOutParamIndex, aOutParam);
System.out.println("xScript invoke macro " + macroName);
Hope it can be useful, after such long time... :-(
XScriptProviderSupplier xScriptPS = (XScriptProviderSupplier) UnoRuntime.queryInterface(XScriptProviderSupplier.class, xComponent);
What is xComponent in the above code?
Compare: ?langauge=Basic&location=Document
to: ?language=Basic&location=Document
What's wrong is "langauge": swap "au" to "ua". :)
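Putting both answers together, the script URI from the question would end up looking something like this (the library, module, and macro name below are placeholders taken from the commented-out line in the question):
// Hypothetical corrected URI: Library.Module.MacroName, "language" spelled correctly,
// and location=application for a macro stored in the global Basic libraries
// (use location=document if the macro is embedded in the loaded document).
String cmd = "Standard.Conversion.ConvertHTMLToWord?language=Basic&location=application";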
