I have a C# application that performs mail merges with MS Office (using the Interop API).
I am now trying to have it support OpenOffice as well, using the OpenOffice SDK:
http://www.openoffice.org/api/docs/common/ref/com/sun/star/text/MailMerge.html#Command
The documentation does not look crystal clear to me right now...
I somehow managed to get the mail merge code itself to work.
The thing is, we need to create a "DataSource" before actually performing the MailMerge, and that is where I am running into difficulties.
I can get a sample in Java here:
https://wiki.openoffice.org/wiki/Documentation/DevGuide/Database/The_DataSource_Service
I would need to convert this into C#.
My difficulty is that the Java code uses UnoRuntime.queryInterface to perform its casts:
XStorable store = (XStorable)UnoRuntime.queryInterface(XStorable.class, xDs);
There is no direct equivalent in C#.
I converted the code this way:
public static void CreateDataSource(string dataSourceProvidedFilePath, string dataSourceSavedFilePath)
{
    XComponentContext oStrap = uno.util.Bootstrap.bootstrap();
    XMultiServiceFactory _rMSF = (XMultiServiceFactory)oStrap.getServiceManager();

    // the XSingleServiceFactory of the database context creates new generic
    // com.sun.star.sdb.DataSources (!)
    // retrieve the database context at the global service manager and get its
    // XSingleServiceFactory interface
    XSingleServiceFactory xFac = (XSingleServiceFactory)_rMSF.createInstance("com.sun.star.sdb.DatabaseContext");
    //(XSingleServiceFactory)UnoRuntime.queryInterface(XSingleServiceFactory.class, _rMSF.createInstance("com.sun.star.sdb.DatabaseContext"));

    // instantiate an empty data source at the XSingleServiceFactory
    // interface of the DatabaseContext
    Object xDs = xFac.createInstance();

    // register it with the database context
    XNamingService xServ = (XNamingService)xFac;
    //(XNamingService)UnoRuntime.queryInterface(XNamingService.class, xFac);
    XStorable store = (XStorable)xDs;
    //(XStorable)UnoRuntime.queryInterface(XStorable.class, xDs);
    XModel model = (XModel)xDs;
    //(XModel)UnoRuntime.queryInterface(XModel.class, xDs);

    // determine the file where the data source will be saved
    string dataSourcePathURL = Path.Combine(Path.GetDirectoryName(dataSourceProvidedFilePath), dataSourceSavedFilePath + ".odb").ConvertToOpenOfficeURL();
    store.storeAsURL(/*"file:///c:/test.odb"*/dataSourcePathURL, model.getArgs());
    xServ.registerObject("NewDataSourceName", xDs);

    // setting the necessary data source properties
    XPropertySet xDsProps = (XPropertySet)xDs;
    //(XPropertySet)UnoRuntime.queryInterface(XPropertySet.class, xDs);
    // Adabas D URL
    xDsProps.setPropertyValue("URL", new uno.Any("sdbc:adabas::MYDB1"));
    // force password dialog
    //xDsProps.setPropertyValue("IsPasswordRequired", new uno.Any(true));
    // suggest dsadmin as user name
    xDsProps.setPropertyValue("User", new uno.Any("dsadmin"));
    store.store();
}
Some casts work fine:
XNamingService xServ = (XNamingService)xFac;
//(XNamingService)UnoRuntime.queryInterface(XNamingService.class, xFac);
But other casts throw an exception:
XStorable store = (XStorable)xDs;
//(XStorable)UnoRuntime.queryInterface(XStorable.class, xDs);
->
Unable to cast transparent proxy to type 'unoidl.com.sun.star.frame.XStorable'.
Is there a way to convert this code correctly to C#?
Otherwise, do you know of any other resource showing how to create an OpenOffice DataSource in Java?
Thanks.
First I tried using C# and encountered the same error you described.
Then I tried the example using Java and ended up with a null value for XStorable. So I think your problem is not due to C#, but because for some reason the empty data source is not getting created properly.
In "Create a libreoffice text-based datasource and set settings with java", the poster seems to have had success, so I'm not sure what went wrong when I tried it.
This code to print data sources does work for me: https://wiki.openoffice.org/wiki/Documentation/DevGuide/Database/Data_Sources_in_OpenOffice.org_API.
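For reference, here is roughly what that listing looks like in Java, adapted from the wiki page (a minimal sketch; it assumes a local office installation that Bootstrap.bootstrap() can start and connect to):
import com.sun.star.comp.helper.Bootstrap;
import com.sun.star.container.XNameAccess;
import com.sun.star.lang.XMultiComponentFactory;
import com.sun.star.uno.UnoRuntime;
import com.sun.star.uno.XComponentContext;

public class PrintDataSources {
    public static void main(String[] args) throws Exception {
        // connect to (or start) a local office instance
        XComponentContext ctx = Bootstrap.bootstrap();
        XMultiComponentFactory mcf = ctx.getServiceManager();

        // the DatabaseContext service knows all registered data sources
        Object dbContext = mcf.createInstanceWithContext("com.sun.star.sdb.DatabaseContext", ctx);
        XNameAccess names = (XNameAccess) UnoRuntime.queryInterface(XNameAccess.class, dbContext);

        // print the name of every registered data source
        for (String name : names.getElementNames()) {
            System.out.println(name);
        }
    }
}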
I am new to OWL 2, and I want to parse a ".ttl" file with the OWL API, but I found that the OWL API is not the same as the APIs I have used before. It seems that I should write a "visitor" if I want to get the content within an OWLAxiom or OWLEntity, and so on. I have read some tutorials, but I didn't find the proper way to do it. Also, the tutorials I found use older versions of the OWL API. So I would like a detailed example that parses an instance and stores the content in a Java class.
I have made some attempts; my code is as follows, but I don't know how to go on.
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
File file = new File("./source.ttl");
OWLOntology localAcademic = manager.loadOntologyFromOntologyDocument(file);
Stream<OWLNamedIndividual> namedIndividualStream = localAcademic.individualsInSignature();
Iterator<OWLNamedIndividual> iterator = namedIndividualStream.iterator();
while (iterator.hasNext()) {
    OWLNamedIndividual namedIndividual = iterator.next();
}
An example instance is shown below. In particular, I want to store the "@en" language tag attached to the object of "ecrm:P3_has_note".
<http://data.doremus.org/performance/4db95574-8497-3f30-ad1e-f6f65ed6c896>
    a mus:M42_Performed_Expression_Creation ;
    ecrm:P3_has_note "Créée par Teodoro Anzellotti, son commanditaire, en novembre 1995 à Rotterdam"@en ;
    ecrm:P4_has_time-span <http://data.doremus.org/performance/4db95574-8497-3f30-ad1e-f6f65ed6c896/time> ;
    ecrm:P9_consists_of [ a mus:M28_Individual_Performance ;
        ecrm:P14_carried_out_by "Teodoro Anzellotti"
    ] ;
    ecrm:P9_consists_of [ a mus:M28_Individual_Performance ;
        ecrm:P14_carried_out_by "à Rotterdam"
    ] ;
    efrbroo:R17_created <http://data.doremus.org/expression/2fdd40f3-f67c-30a0-bb03-f27e69b9f07f> ;
    efrbroo:R19_created_a_realisation_of <http://data.doremus.org/work/907de583-5247-346a-9c19-e184823c9fd6> ;
    efrbroo:R25_performed <http://data.doremus.org/expression/b4bb1588-dd83-3915-ab55-b8b70b0131b5> .
The contents I want are as follows:
class Instance {
    String subject;
    Map<String, Set<Object>> predicateToObject = new HashMap<String, Set<Object>>();
}

class Object {
    String value;
    String type;
    String language = null;
}
The version of the OWL API I am using is 5.1.0. I downloaded the jar and the doc from there. I just want to know how to get the content I need into the Java classes above.
If there are any tutorials that describe the way to do it, please tell me.
Thanks a lot.
Update: I have figured out how to do it. When I finish, I will write an answer; I hope it can help latecomers to the OWL API.
Thanks again.
What you need, once you have the individual, is to retrieve the data property assertion axioms and collect the literals asserted for each property.
So, inside the loop in your code:
// Let's rename your Object class to Literal so we don't get confused with java.lang.Object
Instance instance = new Instance();
localAcademic.dataPropertyAssertionAxioms(namedIndividual)
    .forEach(ax -> instance.predicateToObject.put(
        ax.getProperty().asOWLDataProperty().getIRI().toString(),
        Collections.singleton(new Literal(ax.getObject()))));
This code assumes each property appears only once; if a property appears multiple times, you'll have to check whether a set already exists for that property and add to it instead of replacing the value in the map. I left that out to simplify the example.
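For completeness, a small sketch of that variant, using the same assumed Instance and renamed Literal classes (computeIfAbsent creates the set the first time a property is seen):
// accumulate every literal asserted for a property instead of keeping only one
localAcademic.dataPropertyAssertionAxioms(namedIndividual)
    .forEach(ax -> instance.predicateToObject
        .computeIfAbsent(ax.getProperty().asOWLDataProperty().getIRI().toString(),
                         key -> new HashSet<>())
        .add(new Literal(ax.getObject())));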
A visitor is not necessary for this scenario, because you already know which axiom type you're interested in and which methods to call on it. It could have been written as an OWLAxiomVisitor implementing only visit(OWLDataPropertyAssertionAxiom), but in this case there would be little advantage in doing so.
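If you did want the visitor form, it might look roughly like this (a sketch assuming OWLAPI 5.x, where the visitor interfaces provide default no-op implementations, so only the relevant method needs overriding):
// only data property assertion axioms are handled; everything else hits the defaults
OWLAxiomVisitor visitor = new OWLAxiomVisitor() {
    @Override
    public void visit(OWLDataPropertyAssertionAxiom ax) {
        instance.predicateToObject
            .computeIfAbsent(ax.getProperty().asOWLDataProperty().getIRI().toString(),
                             key -> new HashSet<>())
            .add(new Literal(ax.getObject()));
    }
};
localAcademic.axioms().forEach(ax -> ax.accept(visitor));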
I have a scheduler that gets our cluster metrics and writes the data to an HDFS file using an older version of the Cloudera API. But recently we updated our JARs, and the original code now fails with an exception.
java.lang.ClassCastException: org.apache.hadoop.io.ArrayWritable cannot be cast to org.apache.hadoop.hive.serde2.io.ParquetHiveRecord
    at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
    at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:116)
    at parquet.hadoop.ParquetWriter.write(ParquetWriter.java:324)
I need help using the ParquetHiveRecord class to write the data (which are POJOs) in Parquet format.
Code sample below:
Writable[] values = new Writable[20];
... // populate values with all values
ArrayWritable value = new ArrayWritable(Writable.class, values);
writer.write(value); // <-- Getting exception here
Details of "writer" (of type ParquetWriter):
MessageType schema = MessageTypeParser.parseMessageType(SCHEMA); // SCHEMA is a string with our schema definition
ParquetWriter<ArrayWritable> writer = new ParquetWriter<ArrayWritable>(fileName,
    new DataWritableWriteSupport() {
        @Override
        public WriteContext init(Configuration conf) {
            if (conf.get(DataWritableWriteSupport.PARQUET_HIVE_SCHEMA) == null)
                conf.set(DataWritableWriteSupport.PARQUET_HIVE_SCHEMA, schema.toString());
            return super.init(conf);
        }
    });
Also, we were using CDH and CM 5.5.1 before; now we are using 5.8.3.
Thanks!
I think you need to use DataWritableWriter rather than ParquetWriter. The class cast exception indicates that the write support class expects an instance of ParquetHiveRecord rather than ArrayWritable. DataWritableWriter likely breaks down the individual records in the ArrayWritable into individual messages in the form of ParquetHiveRecord and sends each one to the write support.
Parquet is sort of mind bending at times. :)
Looking at the code of the DataWritableWriteSupport class:
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/write/DataWritableWriteSupport.java
You can see that it uses DataWritableWriter internally, so you do not need to create an instance of DataWritableWriter yourself; the idea of a write support class is that it lets you write different formats to Parquet.
What you do need is to wrap your writables in a ParquetHiveRecord.
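A rough sketch of that wrapping, assuming your Hive version's DataWritableWriteSupport is a WriteSupport<ParquetHiveRecord>, and that you can build a StructObjectInspector describing your rows (the writeSupport and inspector variables below stand in for those two assumed pieces):
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.hive.serde2.io.ParquetHiveRecord;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

// the writer is now typed over ParquetHiveRecord to match the write support
ParquetWriter<ParquetHiveRecord> writer =
    new ParquetWriter<ParquetHiveRecord>(new Path(fileName), writeSupport);

Writable[] values = new Writable[20];
// ... populate values as before
ArrayWritable row = new ArrayWritable(Writable.class, values);

// wrap the row together with the object inspector that describes its layout
writer.write(new ParquetHiveRecord(row, inspector));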
This question describes how to reuse a pipeline in DKPro, but if I only create one JCas and then try to change the text, I get the exception
org.apache.uima.cas.CASRuntimeException: Data for Sofa feature setLocalSofaData() has already been set.
How do I get around this?
The sofa data in the CAS can only be set once. It cannot be modified after it has been set.
In order to re-use a CAS, call the reset() method on it. This clears all annotations and allows you to set the sofa/text again.
To build a CAS incrementally, a common strategy is to add annotations to the CAS while appending text to a string buffer, and to set the text only at the end of the process; a sketch of this appears after the example below.
An uimaFIT-based example could look something like this:
String[] texts = {
    "Hello world.",
    "This is a test." };

// Create an empty CAS/JCas initialized using uimaFIT typesystem auto-detection
JCas jcas = JCasFactory.createJCas();

// Instantiate some analysis engine
AnalysisEngine engine = AnalysisEngineFactory.createEngine(...);

// Process the texts, re-using the previously created CAS/JCas instance
for (String t : texts) {
    jcas.reset();
    jcas.setDocumentText(t);
    jcas.setDocumentLanguage("en");
    engine.process(jcas);
}
engine.collectionProcessComplete();
engine.destroy();
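The incremental strategy mentioned above could look roughly like this (a sketch using the generic UIMA Annotation type; a real pipeline would use concrete annotation types from its type system):
import org.apache.uima.fit.factory.JCasFactory;
import org.apache.uima.jcas.JCas;
import org.apache.uima.jcas.tcas.Annotation;

JCas jcas = JCasFactory.createJCas();
StringBuilder buffer = new StringBuilder();

// create annotations against offsets in the buffer while the text grows
int begin = buffer.length();
buffer.append("Hello world.");
Annotation ann = new Annotation(jcas, begin, buffer.length());
ann.addToIndexes();

// set the sofa text exactly once, at the end
jcas.setDocumentText(buffer.toString());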
Disclosure: I am working on the Apache UIMA project.
Background
My application connects to the Genesys Interaction Server in order to receive events for actions performed on the Interaction Workspace. I am using the Platform SDK 8.5 for Java.
I make the connection to the Interaction Server using the method described in the API reference.
InteractionServerProtocol interactionServerProtocol =
    new InteractionServerProtocol(
        new Endpoint(
            endpointName,
            interactionServerHost,
            interactionServerPort));
interactionServerProtocol.setClientType(InteractionClient.AgentApplication);
interactionServerProtocol.open();
Next, I need to register a listener for each Place I wish to receive events for.
RequestStartPlaceAgentStateReporting requestStartPlaceAgentStateReporting = RequestStartPlaceAgentStateReporting.create();
requestStartPlaceAgentStateReporting.setPlaceId("PlaceOfGold");
requestStartPlaceAgentStateReporting.setTenantId(101);
interactionServerProtocol.send(requestStartPlaceAgentStateReporting);
The way it is now, my application requires the user to manually specify each Place he wishes to observe. This requires him to know the names of all the Places, which he may not necessarily have [easy] access to.
Question
How do I programmatically obtain a list of Places available? Preferably from the Interaction Server to limit the number of connections needed.
There is a method you can use. If you look at the application-block methods, you will see config and query objects. You can use them to get a list of all DNs. When building the query, leave DBID, name, and number blank.
Here is .NET code similar to the Java code (actually exactly the same):
List<CfgDN> list = new List<CfgDN>();
List<DN> dnlist = new List<DN>();
CfgDNQuery query = new CfgDNQuery(m_ConfService);
list = m_ConfService.RetrieveMultipleObjects<CfgDN>(query).ToList();
foreach (CfgDN item in list)
{
    DN foo = new DN();
    foo.DBID = item.DBID;
    // ... copy whatever other properties you need
    dnlist.Add(foo);
}
Note: DN is my class which contains some properties from the Platform SDK.
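Since the question is about the Java SDK, a rough Java equivalent of the query might look like this (a sketch assuming an already-opened conf service object, and that the Java config application block mirrors the .NET API shown above):
// confService: an already-opened configuration service instance (assumption)
CfgDNQuery query = new CfgDNQuery(confService);
Collection<CfgDN> dns = confService.retrieveMultipleObjects(CfgDN.class, query);
for (CfgDN dn : dns) {
    // map whatever properties you need into your own DN class
    System.out.println(dn.getNumber());
}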
Alternatively, you can ask the Interaction Server to start agent-state reporting for all Places under a tenant with a single request:
KeyValueCollection tenantList = new KeyValueCollection();
tenantList.addString("tenant", "Resources");
RequestStartPlaceAgentStateReportingAll all = RequestStartPlaceAgentStateReportingAll.create(tenantList);
interactionServerProtocol.send(all);
Is there any Java API or plugin which can generate a database ER diagram when a Java Connection object is provided as input?
Ex: InputStream generateDatabaseERDiagram(Connection connection) // where the InputStream will point to the generated ER diagram image
The API should work with Oracle, MySQL, and PostgreSQL.
I was going through the SchemaCrawler (http://schemacrawler.sourceforge.net/) tool but didn't find any API which could do this.
If there is no such API, let me know how I can write my own. I want to generate an ER diagram for all the schemas in a database, or for a specific schema if the schema name is provided as input.
It would be helpful if you could shed some light on how to achieve this task.
If I understood your question correctly, you might take a look at JGraph.
This is an old question, but in case anyone else stumbles across it as I did when trying to do the same thing: I eventually figured out how to generate the ERD using SchemaCrawler's Java API.
// Get your java connection however
Connection conn = DriverManager.getConnection("DATABASE URL");

SchemaCrawlerOptions options = new SchemaCrawlerOptions();
// Set what details are required in the schema - this affects the
// time taken to crawl the schema
options.setSchemaInfoLevel(SchemaInfoLevelBuilder.standard());
// you can exclude/include objects using the options object e.g.
//options.setTableInclusionRule(new RegularExpressionExclusionRule(".*qrtz.*||.*databasechangelog.*"));

GraphExecutable ge = new GraphExecutable();
ge.setSchemaCrawlerOptions(options);

String outputFormatValue = GraphOutputFormat.png.getFormat();
OutputOptions outputOptions = new OutputOptions(outputFormatValue, new File("database.png").toPath());
ge.setOutputOptions(outputOptions);

ge.execute(conn);
This still requires Graphviz to be installed and on the path for it to work.