Integration of RapidMiner with Java: Obtaining the output ExampleSet (Process Result) - java

I want to execute a RapidMiner process from Java and use the output ExampleSet (the process result) for subsequent operations in Java.
I managed to execute the process with the code below, but I have no clue how to obtain the resulting ExampleSet.
Ideally, I want to get any ExampleSet regardless of its attributes, but if the metadata needs to be generated beforehand, so be it.
package com.companyname.rm;

import com.rapidminer.Process;
import com.rapidminer.RapidMiner;
import com.rapidminer.operator.OperatorException;
import com.rapidminer.tools.XMLException;

import java.io.File;
import java.io.IOException;

public class RunProcess {
    public static void main(String[] args) {
        try {
            RapidMiner.setExecutionMode(RapidMiner.ExecutionMode.COMMAND_LINE);
            RapidMiner.init();
            Process process = new Process(new File("//my_path/..../test_JAVA.rmp"));
            process.run();
        } catch (IOException | XMLException | OperatorException ex) {
            ex.printStackTrace();
        }
    }
}

To obtain the ExampleSet of the process, you need to add
IOContainer ioResult = process.run();
A shortened example taken from http://allinoneat.blogspot.de/2013/04/integrate-rapidminer-wtih-java.html
IOContainer ioResult = process.run();
if (ioResult.getElementAt(0) instanceof ExampleSet) {
    ExampleSet resultSet = (ExampleSet) ioResult.getElementAt(0);
    for (Example example : resultSet) {
        Iterator<Attribute> allAtts = example.getAttributes().allAttributes();
        while (allAtts.hasNext()) {
            Attribute a = allAtts.next();
            if (a.isNumerical()) {
                double value = example.getValue(a);
                System.out.print(value + " ");
            } else {
                String value = example.getValueAsString(a);
                System.out.print(value + " ");
            }
        }
        System.out.println("\n");
    }
}

Option 1:
Click on the process Context panel and save the process output to files.
Then read from the files.
Option 2:
Use the Write as Text operator to save what you want, then read from the file.
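As a rough sketch of the "read it back from the file" step, assuming the process writes a semicolon-separated text/CSV file to a known location (the path, separator and the use of a Write CSV-style operator are illustrative assumptions, not part of the original answer):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadProcessOutput {
    public static void main(String[] args) {
        // Read the file written by the RapidMiner process back into Java
        try (BufferedReader reader = new BufferedReader(new FileReader("/my_path/output.csv"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Column separator depends on the writer operator's settings
                String[] values = line.split(";");
                // ... use the values for subsequent operations
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}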
I just run RapidMiner as a script:
http://rapid-i.com/rapidforum/index.php?topic=3009.0

Related

java Gherkin parser stream does not release file locks

I am using the Gherkin parser to parse feature files and return a list of Gherkin documents; see the function below:
import io.cucumber.gherkin.Gherkin;
import io.cucumber.messages.IdGenerator;
import io.cucumber.messages.Messages;
import io.cucumber.messages.Messages.Envelope;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class GherkinUtils {
    private static final Logger LOG = LogManager.getLogger(GherkinUtils.class);

    public static ArrayList<Messages.GherkinDocument> getGherkinDocumentsFromFiles() {
        IdGenerator idGenerator = new IdGenerator.Incrementing();
        ArrayList<Messages.GherkinDocument> listOfGherkinDocuments = new ArrayList<>();
        String pathFolderFrameworkFeatures = SettingsUtils.getPathFolderFrameworkFeatures();
        List<String> listOfPathsForFeatureFiles = FileUtils.getAllFilePathsFromFolder(pathFolderFrameworkFeatures);

        try (Stream<Envelope> dataStream = Gherkin.fromPaths(listOfPathsForFeatureFiles, false, true, false, idGenerator)) {
            List<Envelope> envelopes = dataStream.collect(Collectors.toList());
            for (Envelope env : envelopes) {
                Messages.GherkinDocument gherkinDocument = env.getGherkinDocument();
                listOfGherkinDocuments.add(gherkinDocument);
            }
        } catch (Exception e) {
            LOG.error("Error occurred while trying to read the feature files", new Exception(e));
        }
        FileUtils.renameAllFeatureFiles("b");
        return listOfGherkinDocuments;
    }
}
Just before the return statement, you can see the call that renames all the feature files, just to check whether they are locked.
The problem is that only the first file is ever renamed; the rest of them are always locked.
If I place the rename call at the top instead, all the files are renamed successfully...
My understanding is that the try-with-resources statement will automatically close the stream. I also tried to close it manually inside the try block, but the result is the same.
What am I missing? How can I make it release the file locks?
Update 1:
This exact line is making the files (except the first one) locked:
List<Envelope> envelopes = dataStream.collect(Collectors.toList());
Here is the rename function definition in case you want to test it:
public static void renameAllFeatureFiles(String fileName) {
    String pathFeaturesFolder = SettingsUtils.getPathFolderFrameworkFeatures();
    List<String> pathList = FileUtils.getAllFilePathsFromFolder(pathFeaturesFolder);
    int counter = 0;
    for (String path : pathList) {
        counter++;
        File file = new File(path);
        File newFile = new File(pathFeaturesFolder + "\\" + fileName + counter + ".feature");
        System.out.println("File: " + path + " locked: " + !file.renameTo(newFile));
    }
}
And here is a sample feature file content:
Feature: Test
  Scenario: test 1
    Given User will do something
    And User will do something
Update 2:
Tried with a separate thread using a JavaFX Task, still the same issue :(
Except for one file (this is really strange), all files are locked...
public static void runInNewThread() {
    // define the execution task that will run in a new thread
    Task<Void> newTask = new Task<>() {
        @Override
        protected Void call() {
            ArrayList<Messages.GherkinDocument> listOfGherkinDocuments = GherkinUtils.getGherkinDocumentsFromFiles();
            return null;
        }
    };
    // run the task in a new thread
    Thread th = new Thread(newTask);
    th.setDaemon(true);
    th.start();
}
For now, I have used a workaround: I create copies of the specific files and run the parser on the copies, to prevent locking of the original versions...
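One way such a copy step could look (a minimal sketch using java.nio.file; the helper name and temp-folder location are just illustrative):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;

public static List<String> copyFeatureFilesToTempFolder(List<String> originalPaths) throws IOException {
    // Copy each feature file into a temporary folder and return the new paths,
    // so the Gherkin parser only ever opens the copies and never locks the originals.
    Path tempDir = Files.createTempDirectory("features-copy");
    List<String> copiedPaths = new ArrayList<>();
    for (String original : originalPaths) {
        Path source = Paths.get(original);
        Path target = tempDir.resolve(source.getFileName());
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
        copiedPaths.add(target.toString());
    }
    return copiedPaths;
}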

How can I update custom properties in alfresco workflow task using only Java?

First, I want to say thanks to everyone who took the time to help me figure this out, because I have been searching for more than a week for a solution to my problem. Here it is:
My goal is to start a custom workflow in Alfresco Community 5.2 and to set some custom properties in the first task through a web script using only the public Java API. My class extends AbstractWebScript. Currently I have success with starting the workflow and setting properties like bpm:workflowDescription, but I'm not able to set my custom properties in the tasks.
Here is the code:
public class StartWorkflow extends AbstractWebScript {

    /**
     * The Alfresco Service Registry that gives access to all public content services in Alfresco.
     */
    private ServiceRegistry serviceRegistry;

    public void setServiceRegistry(ServiceRegistry serviceRegistry) {
        this.serviceRegistry = serviceRegistry;
    }

    @Override
    public void execute(WebScriptRequest req, WebScriptResponse res) throws IOException {
        // Create JSON object for the response
        JSONObject obj = new JSONObject();
        try {
            // Check if parameter defName is present in the request
            String wfDefFromReq = req.getParameter("defName");
            if (wfDefFromReq == null) {
                obj.put("resultCode", "1 (Error)");
                obj.put("errorMessage", "Parameter defName not found.");
                return;
            }
            // Get the WFL Service
            WorkflowService workflowService = serviceRegistry.getWorkflowService();
            // Build WFL Definition name
            String wfDefName = "activiti$" + wfDefFromReq;
            // Get WorkflowDefinition object
            WorkflowDefinition wfDef = workflowService.getDefinitionByName(wfDefName);
            // Check if such WorkflowDefinition exists
            if (wfDef == null) {
                obj.put("resultCode", "1 (Error)");
                obj.put("errorMessage", "No workflow definition found for defName = " + wfDefName);
                return;
            }
            // Get parameters from the request
            Content reqContent = req.getContent();
            if (reqContent == null) {
                throw new WebScriptException(Status.STATUS_BAD_REQUEST, "Missing request body.");
            }
            String content;
            content = reqContent.getContent();
            if (content.isEmpty()) {
                throw new WebScriptException(Status.STATUS_BAD_REQUEST, "Content is empty");
            }
            JSONTokener jsonTokener = new JSONTokener(content);
            JSONObject json = new JSONObject(jsonTokener);
            // Set the workflow description
            Map<QName, Serializable> params = new HashMap<>();
            params.put(WorkflowModel.PROP_WORKFLOW_DESCRIPTION, "Workflow started from JAVA API");
            // Start the workflow
            WorkflowPath wfPath = workflowService.startWorkflow(wfDef.getId(), params);
            // Get params from the POST request
            Map<QName, Serializable> reqParams = new HashMap<>();
            Iterator<String> i = json.keys();
            while (i.hasNext()) {
                String paramName = i.next();
                QName qName = QName.createQName(paramName);
                String value = json.getString(qName.getLocalName());
                reqParams.put(qName, value);
            }
            // Try to update the task properties
            // Get the next active task which contains the properties to update
            WorkflowTask wfTask = workflowService.getTasksForWorkflowPath(wfPath.getId()).get(0);
            // Update properties
            WorkflowTask updatedTask = workflowService.updateTask(wfTask.getId(), reqParams, null, null);
            obj.put("resultCode", "0 (Success)");
            obj.put("workflowId", wfPath.getId());
        } catch (JSONException e) {
            throw new WebScriptException(Status.STATUS_BAD_REQUEST,
                    e.getLocalizedMessage());
        } catch (IOException ioe) {
            throw new WebScriptException(Status.STATUS_BAD_REQUEST,
                    "Error when parsing the request.",
                    ioe);
        } finally {
            // build a JSON string and send it back
            String jsonString = obj.toString();
            res.getWriter().write(jsonString);
        }
    }
}
Here is how I call the webscript:
curl -v -uadmin:admin -X POST -d @postParams.json localhost:8080/alfresco/s/workflow/startJava?defName=nameOfTheWFLDefinition -H "Content-Type:application/json"
In postParams.json file I have the required pairs for property/value which I need to update:
{
"cmprop:propOne" : "Value 1",
"cmprop:propTwo" : "Value 2",
"cmprop:propThree" : "Value 3"
}
The workflow is started and bpm:workflowDescription is set correctly, but my custom properties are not set on the task.
I made a JS script which I call when the workflow is started:
execution.setVariable('bpm_workflowDescription', 'Some String ' + execution.getVariable('cmprop:propOne'));
And the value of cmprop:propOne is actually used and the description is properly updated - which means that those properties are stored somewhere (at the execution level, maybe?), but I cannot figure out why they are not visible when I open the task.
I had success with starting the workflow and updating the properties using the JavaScript API with:
if (wfdef) {
    // Get the params
    wfparams = {};
    if (jsonRequest) {
        for (var prop in jsonRequest) {
            wfparams[prop] = jsonRequest[prop];
        }
    }
    wfpackage = workflow.createPackage();
    wfpath = wfdef.startWorkflow(wfpackage, wfparams);
}
The problem is that I only want to use the public Java API, please help.
Thanks!
Do you set your variables locally in your tasks? From what I see, it seems that you define your variables at the execution level, but not at the task level. If you take a look at the out-of-the-box adhoc.bpmn20.xml file (https://github.com/Activiti/Activiti-Designer/blob/master/org.activiti.designer.eclipse/src/main/resources/templates/adhoc.bpmn20.xml), you can notice an event listener that sets the variables locally on the task:
<extensionElements>
    <activiti:taskListener event="create" class="org.alfresco.repo.workflow.activiti.tasklistener.ScriptTaskListener">
        <activiti:field name="script">
            <activiti:string>
                if (typeof bpm_workflowDueDate != 'undefined') task.setVariableLocal('bpm_dueDate', bpm_workflowDueDate);
                if (typeof bpm_workflowPriority != 'undefined') task.priority = bpm_workflowPriority;
            </activiti:string>
        </activiti:field>
    </activiti:taskListener>
</extensionElements>
Usually, I just import all the variables with my custom model prefix into the task. So for you, it should look like this:
import java.util.Set;

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.DelegateTask;
import org.apache.log4j.Logger;

public class ImportVariables extends AbstractTaskListener {

    private Logger logger = Logger.getLogger(ImportVariables.class);

    @Override
    public void notify(DelegateTask task) {
        logger.debug("Inside ImportVariables.notify()");
        logger.debug("Task ID:" + task.getId());
        logger.debug("Task name:" + task.getName());
        logger.debug("Task proc ID:" + task.getProcessInstanceId());
        logger.debug("Task def key:" + task.getTaskDefinitionKey());

        DelegateExecution execution = task.getExecution();
        Set<String> executionVariables = execution.getVariableNamesLocal();
        for (String variableName : executionVariables) {
            // If the variable starts with "cmprop_"
            if (variableName.startsWith("cmprop_")) {
                // Publish it at the task level
                task.setVariableLocal(variableName, execution.getVariableLocal(variableName));
            }
        }
    }
}

Detect file type based on content

Tried the following:
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.spi.FileTypeDetector;

import org.apache.tika.Tika;
import org.apache.tika.mime.MimeTypes;

/**
 *
 * @author kiriti.k
 */
public class TikaFileTypeDetector {

    private final Tika tika = new Tika();

    public TikaFileTypeDetector() {
        super();
    }

    public String probeContentType(Path path) throws IOException {
        // Try to detect based on the file name only for efficiency
        String fileNameDetect = tika.detect(path.toString());
        if (!fileNameDetect.equals(MimeTypes.OCTET_STREAM)) {
            return fileNameDetect;
        }
        // Then check the file content if necessary
        String fileContentDetect = tika.detect(path.toFile());
        if (!fileContentDetect.equals(MimeTypes.OCTET_STREAM)) {
            return fileContentDetect;
        }
        // Specification says to return null if we could not
        // conclusively determine the file type
        return null;
    }

    public static void main(String[] args) throws IOException {
        Tika tika = new Tika();
        // expects file path as the program argument
        if (args.length != 1) {
            printUsage();
            return;
        }
        Path path = Paths.get(args[0]);
        TikaFileTypeDetector detector = new TikaFileTypeDetector();
        // Analyse the file - first based on file name for efficiency.
        // If it cannot be determined by name, then analyse the content.
        String contentType = detector.probeContentType(path);
        System.out.println("File is of type - " + contentType);
    }

    public static void printUsage() {
        System.out.print("Usage: java -classpath ... "
                + TikaFileTypeDetector.class.getName()
                + " ");
    }
}
The above program checks based on the file extension only. How do I make it check the content (MIME type) as well and then determine the type? I am using tika-app-1.8.jar in NetBeans 8.0.2. What am I missing?
The code checks the file extension first and returns the MIME type based on that, if it finds a result. If you want it to check the content first, just switch the two statements:
public String probeContentType(Path path) throws IOException {
    // Check contents first
    String fileContentDetect = tika.detect(path.toFile());
    if (!fileContentDetect.equals(MimeTypes.OCTET_STREAM)) {
        return fileContentDetect;
    }
    // Try file name only if content search was not successful
    String fileNameDetect = tika.detect(path.toString());
    if (!fileNameDetect.equals(MimeTypes.OCTET_STREAM)) {
        return fileNameDetect;
    }
    // Specification says to return null if we could not
    // conclusively determine the file type
    return null;
}
Be aware that this may have huge performance impact.
You can use Files.probeContentType(path)
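For reference, a minimal sketch of that approach (the result depends on the FileTypeDetector implementations installed on the platform and may be null):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ProbeContentTypeExample {
    public static void main(String[] args) throws IOException {
        Path path = Paths.get(args[0]);
        // Delegates to the installed FileTypeDetector implementations
        String contentType = Files.probeContentType(path);
        System.out.println("File is of type - " + contentType);
    }
}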

How to implement auto suggest using Lucene's new AnalyzingInfixSuggester API?

I am a greenhand on Lucene, and I want to implement auto-suggest just like Google: when I input a character like 'G', it would give me a suggestion list; you can try it yourself.
I have searched the whole net.
Nobody has done this, although Lucene gives us some new tools in the suggest package.
But I need an example to tell me how to do that.
Is there anyone who can help?
I'll give you a pretty complete example that shows you how to use AnalyzingInfixSuggester. In this example we'll pretend that we're Amazon, and we want to autocomplete a product search field. We'll take advantage of features of the Lucene suggestion system to implement the following:
Ranked results: We will suggest the most popular matching products first.
Region-restricted results: We will only suggest products that we sell in the customer's country.
Product photos: We will store product photo URLs in the suggestion index so we can display them in the search results, without having to do an additional database lookup.
First I'll define a simple class to hold information about a product in Product.java:
import java.util.Set;

class Product implements java.io.Serializable
{
    String name;
    String image;
    String[] regions;
    int numberSold;

    public Product(String name, String image, String[] regions,
                   int numberSold) {
        this.name = name;
        this.image = image;
        this.regions = regions;
        this.numberSold = numberSold;
    }
}
To index records with the AnalyzingInfixSuggester's build method you need to pass it an object that implements the org.apache.lucene.search.suggest.InputIterator interface. An InputIterator gives access to the key, contexts, payload and weight for each record.
The key is the text you actually want to search on and autocomplete against. In our example, it will be the name of the product.
The contexts are a set of additional, arbitrary data that you can use to filter records against. In our example, the contexts are the set of ISO codes for the countries we will ship a particular product to.
The payload is additional arbitrary data you want to store in the index for the record. In this example, we will actually serialize each Product instance and store the resulting bytes as the payload. Then when we later do lookups, we can deserialize the payload and access information in the product instance like the image URL.
The weight is used to order suggestion results; results with a higher weight are returned first. We'll use the number of sales for a given product as its weight.
Here's the contents of ProductIterator.java:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.UnsupportedEncodingException;
import java.util.Comparator;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

import org.apache.lucene.search.suggest.InputIterator;
import org.apache.lucene.util.BytesRef;

class ProductIterator implements InputIterator
{
    private Iterator<Product> productIterator;
    private Product currentProduct;

    ProductIterator(Iterator<Product> productIterator) {
        this.productIterator = productIterator;
    }

    public boolean hasContexts() {
        return true;
    }

    public boolean hasPayloads() {
        return true;
    }

    public Comparator<BytesRef> getComparator() {
        return null;
    }

    // This method needs to return the key for the record; this is the
    // text we'll be autocompleting against.
    public BytesRef next() {
        if (productIterator.hasNext()) {
            currentProduct = productIterator.next();
            try {
                return new BytesRef(currentProduct.name.getBytes("UTF8"));
            } catch (UnsupportedEncodingException e) {
                throw new Error("Couldn't convert to UTF-8");
            }
        } else {
            return null;
        }
    }

    // This method returns the payload for the record, which is
    // additional data that can be associated with a record and
    // returned when we do suggestion lookups. In this example the
    // payload is a serialized Java object representing our product.
    public BytesRef payload() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bos);
            out.writeObject(currentProduct);
            out.close();
            return new BytesRef(bos.toByteArray());
        } catch (IOException e) {
            throw new Error("Well that's unfortunate.");
        }
    }

    // This method returns the contexts for the record, which we can
    // use to restrict suggestions. In this example we use the
    // regions in which a product is sold.
    public Set<BytesRef> contexts() {
        try {
            Set<BytesRef> regions = new HashSet<BytesRef>();
            for (String region : currentProduct.regions) {
                regions.add(new BytesRef(region.getBytes("UTF8")));
            }
            return regions;
        } catch (UnsupportedEncodingException e) {
            throw new Error("Couldn't convert to UTF-8");
        }
    }

    // This method helps us order our suggestions. In this example we
    // use the number of products of this type that we've sold.
    public long weight() {
        return currentProduct.numberSold;
    }
}
In our driver program, we will do the following things:
Create an index directory in RAM.
Create a StandardTokenizer.
Create an AnalyzingInfixSuggester using the RAM directory and tokenizer.
Index a number of products using ProductIterator.
Print the results of some sample lookups.
Here's the driver program, SuggestProducts.java:
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.UnsupportedEncodingException;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester;
import org.apache.lucene.search.suggest.Lookup;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.Version;

public class SuggestProducts
{
    // Get suggestions given a prefix and a region.
    private static void lookup(AnalyzingInfixSuggester suggester, String name,
                               String region) {
        try {
            List<Lookup.LookupResult> results;
            HashSet<BytesRef> contexts = new HashSet<BytesRef>();
            contexts.add(new BytesRef(region.getBytes("UTF8")));
            // Do the actual lookup. We ask for the top 2 results.
            results = suggester.lookup(name, contexts, 2, true, false);
            System.out.println("-- \"" + name + "\" (" + region + "):");
            for (Lookup.LookupResult result : results) {
                System.out.println(result.key);
                Product p = getProduct(result);
                if (p != null) {
                    System.out.println(" image: " + p.image);
                    System.out.println(" # sold: " + p.numberSold);
                }
            }
        } catch (IOException e) {
            System.err.println("Error");
        }
    }

    // Deserialize a Product from a LookupResult payload.
    private static Product getProduct(Lookup.LookupResult result)
    {
        try {
            BytesRef payload = result.payload;
            if (payload != null) {
                ByteArrayInputStream bis = new ByteArrayInputStream(payload.bytes);
                ObjectInputStream in = new ObjectInputStream(bis);
                Product p = (Product) in.readObject();
                return p;
            } else {
                return null;
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new Error("Could not decode payload :(");
        }
    }

    public static void main(String[] args) {
        try {
            RAMDirectory index_dir = new RAMDirectory();
            StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_48);
            AnalyzingInfixSuggester suggester = new AnalyzingInfixSuggester(
                    Version.LUCENE_48, index_dir, analyzer);
            // Create our list of products.
            ArrayList<Product> products = new ArrayList<Product>();
            products.add(
                new Product(
                    "Electric Guitar",
                    "http://images.example/electric-guitar.jpg",
                    new String[]{"US", "CA"},
                    100));
            products.add(
                new Product(
                    "Electric Train",
                    "http://images.example/train.jpg",
                    new String[]{"US", "CA"},
                    100));
            products.add(
                new Product(
                    "Acoustic Guitar",
                    "http://images.example/acoustic-guitar.jpg",
                    new String[]{"US", "ZA"},
                    80));
            products.add(
                new Product(
                    "Guarana Soda",
                    "http://images.example/soda.jpg",
                    new String[]{"ZA", "IE"},
                    130));
            // Index the products with the suggester.
            suggester.build(new ProductIterator(products.iterator()));
            // Do some example lookups.
            lookup(suggester, "Gu", "US");
            lookup(suggester, "Gu", "ZA");
            lookup(suggester, "Gui", "CA");
            lookup(suggester, "Electric guit", "US");
        } catch (IOException e) {
            System.err.println("Error!");
        }
    }
}
And here is the output from the driver program:
-- "Gu" (US):
Electric Guitar
image: http://images.example/electric-guitar.jpg
# sold: 100
Acoustic Guitar
image: http://images.example/acoustic-guitar.jpg
# sold: 80
-- "Gu" (ZA):
Guarana Soda
image: http://images.example/soda.jpg
# sold: 130
Acoustic Guitar
image: http://images.example/acoustic-guitar.jpg
# sold: 80
-- "Gui" (CA):
Electric Guitar
image: http://images.example/electric-guitar.jpg
# sold: 100
-- "Electric guit" (US):
Electric Guitar
image: http://images.example/electric-guitar.jpg
# sold: 100
Appendix
There's a way to avoid writing a full InputIterator that you might find easier. You can write a stub InputIterator that returns null from its next, payload and contexts methods. Pass an instance of it to AnalyzingInfixSuggester's build method:
suggester.build(new ProductIterator(new ArrayList<Product>().iterator()));
Then for each item you want to index, call the AnalyzingInfixSuggester add method:
suggester.add(text, contexts, weight, payload)
After you've indexed everything, call refresh:
suggester.refresh();
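Putting those steps together, a rough single-threaded sketch (it reuses the products list and suggester from the driver program above, inside the same try block; serializeProduct is a hypothetical helper that does the same serialization as ProductIterator.payload()):
// Build the suggester from an empty stub iterator first.
suggester.build(new ProductIterator(new ArrayList<Product>().iterator()));
// Then add each record individually.
for (Product p : products) {
    Set<BytesRef> contexts = new HashSet<BytesRef>();
    for (String region : p.regions) {
        contexts.add(new BytesRef(region.getBytes("UTF8")));
    }
    suggester.add(new BytesRef(p.name.getBytes("UTF8")),  // key text
                  contexts,                               // region contexts
                  p.numberSold,                           // weight
                  serializeProduct(p));                   // payload (hypothetical helper)
}
// Make the newly added entries visible to lookup().
suggester.refresh();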
If you're indexing large amounts of data, it's possible to significantly speed up indexing using this method with multiple threads: call build, then use multiple threads to add items, then finally call refresh.
[Edited 2015-04-23 to demonstrate deserializing info from the LookupResult payload.]

Calling Existing PipeLine in GATE

I am new to Java and I want to call my saved pipeline using the GATE Java API through Eclipse.
I am not sure how I could do this, although I know how to create new documents, etc.
FeatureMap params = Factory.newFeatureMap();
params.put(Document.DOCUMENT_URL_PARAMETER_NAME, new URL("http://www.gate.ac.uk"));
params.put(Document.DOCUMENT_ENCODING_PARAMETER_NAME, "UTF-8");

// document features
FeatureMap feats = Factory.newFeatureMap();
feats.put("date", new Date());
Factory.createResource("gate.corpora.DocumentImpl", params, feats, "This is home");
//End Solution 2

// obtain a map of all named annotation sets
Document doc = Factory.newDocument("Document text");
Map<String, AnnotationSet> namedASes = doc.getNamedAnnotationSets();
System.out.println("No. of named Annotation Sets:" + namedASes.size());

// no of annotations each set contains
for (String setName : namedASes.keySet()) {
    // annotation set
    AnnotationSet aSet = namedASes.get(setName);
    // no of annotations
    System.out.println("No. of Annotations for " + setName + ":" + aSet.size());
}
There is a good example of GATE usage from Java that probably does exactly what you want: BatchProcessApp.java.
In particular:
Loading the saved pipeline is done with these lines:
// load the saved application
CorpusController application =
    (CorpusController) PersistenceManager.loadObjectFromFile(gappFile);
The pipeline is executed with:
// run the application
application.execute();
The code is informative, clear, and can easily be changed for your particular needs. The oxygen of open source projects :)
Something like this could be used (do not forget to initialise GATE first: set the GATE home directory, etc.):
private void getProcessedText(String textToProcess) {
    Document gateDocument = null;
    try {
        // you can use your method from above to build the document
        gateDocument = createGATEDocument(textToProcess);
        corpusController.getCorpus().add(gateDocument);
        corpusController.execute();
        // put your annotation processing here
    } catch (Throwable ex) {
        ex.printStackTrace();
    } finally {
        if (corpusController.getCorpus() != null) {
            corpusController.getCorpus().remove(gateDocument);
        }
        if (gateDocument != null) {
            Factory.deleteResource(gateDocument);
        }
    }
}

private CorpusController initPersistentGateResources() {
    try {
        Corpus corpus = Factory.newCorpus("New Corpus");
        corpusController = (CorpusController) PersistenceManager.loadObjectFromFile(new File("PATH-TO-YOUR-GAPP-FILE"));
        corpusController.setCorpus(corpus);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return corpusController;
}
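Do not forget the GATE initialisation mentioned above; a minimal sketch (the GATE home path is just a placeholder for your installation directory):
import java.io.File;
import gate.Gate;
import gate.util.GateException;

public class GateBootstrap {
    public static void main(String[] args) throws GateException {
        // Point GATE at its installation directory and initialise it
        // before any Factory or PersistenceManager calls
        Gate.setGateHome(new File("/path/to/gate"));   // placeholder path
        Gate.init();
    }
}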
