GATE ML Information Extraction process fails to produce proper class labels - java

I am trying to learn machine learning. In the case of Information Extraction, the save files are getting populated with data, but the number of classes is 0 in the NLPFeaturesData.save file, and the log looks like this:
93 #numTrainingDocs
0 #numClasses
53738 #numNullLabelInstances
9006940 #totalFeatures
C:\...\learnedModels.save #modelFile
SVMLibSvmJava #learnerName
null #learnerExecutable
-c 0.7 -t 0 -m 100 -tau 0.4 #learnerParams
I have run the following code, but in the generated "NLPFeaturesData.save" file every class label is 0. Could someone please help me figure out where I went wrong?
try {
    // Load GATE and its plugins (ANNIE and the Learning plugin)
    System.setProperty("gate.home", "C:\\Program Files\\GATE_Developer_7.1");
    Gate.init();
    Gate.getCreoleRegister().registerDirectories(new File(Gate.getPluginsHome(), ANNIEConstants.PLUGIN_DIR).toURI().toURL());
    Gate.getCreoleRegister().registerDirectories(new URL(FILE_WORK_PATH + "/plugins/Learning"));

    // Instantiate the corpus and load the training documents
    gate.Corpus corpus = (Corpus) Factory.createResource("gate.corpora.CorpusImpl");
    FileFilter fileFilter = new FileFilter() {
        public boolean accept(File pathname) {
            return true; // accept every file in the corpus directory
        }
    };
    corpus.populate(new URL(".../corpus"), fileFilter, "UTF-8", false);
    Gate.getCreoleRegister().registerDirectories();

    // Make a pipeline and add the corpus
    FeatureMap pfm = Factory.newFeatureMap();
    pfm.put("corpus", corpus);
    pipeline = (gate.creole.SerialAnalyserController) gate.Factory.createResource("gate.creole.SerialAnalyserController", pfm);
    initAnnie();

    // Configure with the learning config file and the learning API
    File configFile = new File("../learning-config.xml"); // wherever it is
    RunMode mode = RunMode.TRAINING; // or APPLICATION ...
    FeatureMap fm = Factory.newFeatureMap();
    fm.put("configFileURL", configFile.toURI().toURL());
    fm.put("learningMode", mode);
    gate.learning.LearningAPIMain learner = (gate.learning.LearningAPIMain) gate.Factory.createResource("gate.learning.LearningAPIMain", fm);
    pipeline.add(learner);

    pipeline.execute();
} catch (Exception e) {
    e.printStackTrace();
}
}

private static void initAnnie() throws GateException {
    for (int i = 0; i < ANNIEConstants.PR_NAMES.length; i++) {
        FeatureMap params = Factory.newFeatureMap(); // use default parameters
        ProcessingResource pr = (ProcessingResource) Factory.createResource(ANNIEConstants.PR_NAMES[i], params);
        pipeline.add(pr);
    }
}

Finally, I resolved this problem. I added the following annotation set names to the gate.learning.LearningAPIMain instance:
learner.setInputASName("Key");
learner.setOutputASName("Key");
Now my saved files are generated in the proper format.

How to manage Apache-Beam TextIO exceptions into failures?

How to convert TextIO exceptions into failures?
Sometimes when I use TextIO.read() I get:
org.apache.beam.sdk.Pipeline$PipelineExecutionException:
java.io.FileNotFoundException: No files matched spec:
src/test/resources/config/qqqqqqq
How can I separate the exceptions into an independent list of failures?
For example, with this code I have a file containing a list of other files, and I need to read all lines from all of those files as one list:
PipelineOptions options = PipelineOptionsFactory.create();
Pipeline pipeline = Pipeline.create(options);
PCollection<String> lines = pipeline
        .apply(TextIO.read().from("src/test/resources/config/W-PSFV-LOG-FILE-2022-05-16_23-59-59.txt"))
        .apply(MapElements.into(TypeDescriptors.strings()).via(line -> "src/test/resources/config/" + line))
        .apply(TextIO.readAll());
lines.apply(Log.ofElements());
pipeline.run();
But if one of the files is broken, it throws a FileNotFoundException and stops. I do not want it to stop; I want to get a list of all the existing files and a list of the broken files.
I think you can use a dead letter queue to solve your problem.
Beam natively proposes error handling with TupleTags, or with the exceptionsInto and exceptionsVia methods of MapElements.
These return a Result structure with a PCollection of good outputs and a PCollection of failures.
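As a minimal sketch of that native approach (not the asker's exact pipeline): the Create.of input below stands in for the lines read from the file of file names, and any exception thrown inside the via(...) step is routed to the failures collection instead of failing the pipeline. Note that this catches exceptions from your own transform; it will not catch the matching error thrown inside TextIO.readAll() itself.
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.WithFailures;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

public class NativeDlqExample {
    public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.create());

        // Stand-in for the lines read from the "file of file names"
        PCollection<String> fileNames = pipeline.apply(Create.of("good.txt", "missing.txt"));

        // Exceptions thrown inside via(...) end up in result.failures()
        WithFailures.Result<PCollection<String>, KV<String, String>> result = fileNames
                .apply(MapElements
                        .into(TypeDescriptors.strings())
                        .via((String name) -> "src/test/resources/config/" + name)
                        .exceptionsInto(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.strings()))
                        .exceptionsVia(ee -> KV.of(ee.element(), ee.exception().getMessage())));

        PCollection<String> goodPaths = result.output();              // elements that mapped fine
        PCollection<KV<String, String>> failures = result.failures(); // element -> error message

        pipeline.run().waitUntilFinish();
    }
}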
You can also use a library called Asgarde :
https://github.com/tosun-si/asgarde
PipelineOptions options = PipelineOptionsFactory.create();
Pipeline pipeline = Pipeline.create(options);

PCollection<String> lines = pipeline
        .apply(TextIO.read().from("src/test/resources/config/W-PSFV-LOG-FILE-2022-05-16_23-59-59.txt"));

WithFailures.Result<PCollection<String>, Failure> result = CollectionComposer.of(lines)
        .apply(MapElements.into(TypeDescriptors.strings()).via(line -> "src/test/resources/config/" + line));

// Gets the output and failure PCollections.
PCollection<String> output = result.output();
PCollection<Failure> failures = result.failures();

// Then you can sink your failures to a database, a GCS file or a topic if needed...
......
pipeline.run();
The Failure object is provided by the Asgarde library and exposes the current input element as a String together with the exception:
public class Failure implements Serializable {
    private final String pipelineStep;
    private final String inputElement;
    private final Throwable exception;
If you want to use this code, you have to import the Asgarde library, for example with Maven in your pom.xml file:
<dependency>
<groupId>fr.groupbees</groupId>
<artifactId>asgarde</artifactId>
<version>0.19.0</version>
</dependency>
or with Gradle :
implementation group: 'fr.groupbees', name: 'asgarde', version: '0.19.0'
PS: I am the creator of the Asgarde library. The README of the project shows many examples of applying a dead letter queue with native Beam and with the Asgarde library.
Don't hesitate to read the README file of the project: https://github.com/tosun-si/asgarde
You can use a FileIO first to split the files into readable-existing files and non-existing files.
PCollection<KV<String, String>> categoryAndFiles = p
    .apply(FileIO.match().filepattern("hdfs://path/to/*.gz"))
    // withCompression can be omitted - by default compression is detected from the filename.
    .apply(FileIO.readMatches().withCompression(GZIP))
    .apply(MapElements
        // uses imports from TypeDescriptors
        .into(kvs(strings(), strings()))
        .via((ReadableFile f) -> {
            try {
                f.open();
                return KV.of(
                    "readable-existing",
                    f.getMetadata().resourceId().toString());
            } catch (IOException ex) {
                return KV.of(
                    "non-existing",
                    f.getMetadata().resourceId().toString());
            }
        }));
Adapted from an example.
@Rule
public transient TestPipeline pipeline = TestPipeline.create();

@Test
public void testTransformWithIOException3() throws FileNotFoundException {
    PCollection<String> digits = pipeline.apply(Create.of("1", "2", "3"));
    WithFailures.Result<PCollection<String>, Failure> result = pipeline
            .apply("Read ", Create.of("1", "2", "3")) // PCollection<String>
            .apply("Map", new PTransform<PCollection<String>, WithFailures.Result<PCollection<String>, Failure>>() {
                @Override
                public WithFailures.Result<PCollection<String>, Failure> expand(PCollection<String> input) {
                    return CollectionComposer.of(input)
                            .apply("//", MapElements.into(TypeDescriptors.strings()).via(s -> {
                                try {
                                    if (s.equals("2")) throw new FileNotFoundException();
                                    else return s.toString();
                                } catch (Exception e) {
                                    throw new RuntimeException("error ");
                                }
                            }))
                            .getResult();
                }
            });
    result.output().apply("out",
            MapElements.into(TypeDescriptors.strings()).via(x -> {
                System.out.println(x);
                return x.toString();
            }));
    result.failures().apply("failures",
            MapElements.into(TypeDescriptors.strings()).via(x -> {
                System.out.println(x);
                return x.toString();
            }));
    pipeline.run().waitUntilFinish();
}

Finding if printer is online and ready to print

The following 4 questions didn't help, therefore this isn't a duplicate:
ONE, TWO, THREE, FOUR
I need to find a way to discover if the Printer that my system reports is available to print or not.
Printer page:
In the picture, the printer "THERMAL" is available to print, but "HPRT PPTII-A(USB)" isn't. The system shows this by greying out the unavailable printer.
Using the following code, I'm able to find all the printers in the system:
public static List<String> getAvailablePrinters() {
    DocFlavor flavor = DocFlavor.SERVICE_FORMATTED.PRINTABLE;
    PrintRequestAttributeSet aset = new HashPrintRequestAttributeSet();
    PrintService[] services = PrintServiceLookup.lookupPrintServices(flavor, aset);
    ArrayList<String> names = new ArrayList<String>();
    for (PrintService p : services) {
        Attribute at = p.getAttribute(PrinterIsAcceptingJobs.class);
        if (at == PrinterIsAcceptingJobs.ACCEPTING_JOBS) {
            names.add(p.getName());
        }
    }
    return names;
}
output:
[HPRT PPTII-A(USB), THERMAL]
The problem is: this code shows all the printers that the system has ever installed.
What I need: the list should contain only the printers that are really available to print. In this example, it should only show "THERMAL" and not "HPRT PPTII-A(USB)".
How can this be achieved?
If it is okay that the solution is Windows-specific, try WMI4Java. Here is my situation:
My default printer "Kyocera Mita FS-1010" is inactive (greyed out) because I simply switched it off.
Now add this to your Maven POM:
<dependency>
<groupId>com.profesorfalken</groupId>
<artifactId>WMI4Java</artifactId>
<version>1.4.2</version>
</dependency>
Then it is as easy as this to list all printers with their respective status:
package de.scrum_master.app;
import com.profesorfalken.wmi4java.WMI4Java;
import com.profesorfalken.wmi4java.WMIClass;
import java.util.Arrays;
public class Printer {
    public static void main(String[] args) {
        System.out.println(
                WMI4Java
                        .get()
                        .properties(Arrays.asList("Name", "WorkOffline"))
                        .getRawWMIObjectOutput(WMIClass.WIN32_PRINTER)
        );
    }
}
The console log looks as follows:
Name : WEB.DE Club SmartFax
WorkOffline : False
Name : Send To OneNote 2016
WorkOffline : False
Name : Microsoft XPS Document Writer
WorkOffline : False
Name : Microsoft Print to PDF
WorkOffline : False
Name : Kyocera Mita FS-1010 KX
WorkOffline : True
Name : FreePDF
WorkOffline : False
Name : FinePrint
WorkOffline : False
Name : Fax
WorkOffline : False
Please note that WorkOffline is True for the Kyocera printer. Probably this is what you wanted to find out.
And now a little modification in order to filter the printers list so as to only show active printers:
WMI4Java
.get()
.properties(Arrays.asList("Name", "WorkOffline"))
.filters(Arrays.asList("$_.WorkOffline -eq 0"))
.getRawWMIObjectOutput(WMIClass.WIN32_PRINTER)
Update: I was asked how to get a list of active printer names. Well, this is not so easy due to a shortcoming in WMI4Java for which I have just filed a pull request. It causes us to parse and filter the raw WMI output, but the code is still pretty straightforward:
package de.scrum_master.app;
import com.profesorfalken.wmi4java.WMI4Java;
import com.profesorfalken.wmi4java.WMIClass;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
public class Printer {
    public static void main(String[] args) {
        String rawOutput = WMI4Java
                .get()
                .properties(Arrays.asList("Name", "WorkOffline"))
                .filters(Arrays.asList("$_.WorkOffline -eq 0"))
                .getRawWMIObjectOutput(WMIClass.WIN32_PRINTER);
        List<String> printers = Arrays.stream(rawOutput.split("(\r?\n)"))
                .filter(line -> line.startsWith("Name"))
                .map(line -> line.replaceFirst(".* : ", ""))
                .sorted()
                .collect(Collectors.toList());
        System.out.println(printers);
    }
}
The console output looks like this:
[Fax, FinePrint, FreePDF, Microsoft Print to PDF, Microsoft XPS Document Writer, Send To OneNote 2016, WEB.DE Club SmartFax]
UPDATE:
Instead of querying the WMI "win32_printer" object, I would recommend using PowerShell directly like this; it is a much cleaner API:
Get-Printer | where PrinterStatus -like 'Normal' | fl
To see all the printers and statuses:
Get-Printer | fl Name, PrinterStatus
To see all the attributes:
Get-Printer | fl
You can still use ProcessBuilder in Java as described below.
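As a rough sketch (the class name is made up), the same ProcessBuilder technique can run the Get-Printer command shown above; only the PowerShell command string changes, and Select-Object -ExpandProperty Name reduces the output to one printer name per line:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ActivePrinters {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Ask PowerShell for the names of printers whose status is 'Normal'
        ProcessBuilder builder = new ProcessBuilder("powershell.exe",
                "Get-Printer | where PrinterStatus -like 'Normal' | Select-Object -ExpandProperty Name");
        builder.redirectErrorStream(true);
        Process process = builder.start();

        // Read stdout into a String (one printer name per line)
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int length;
        try (InputStream is = process.getInputStream()) {
            while ((length = is.read(buffer)) != -1) {
                out.write(buffer, 0, length);
            }
        }
        process.waitFor();

        System.out.println(out.toString("UTF-8"));
    }
}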
Before update:
Windows solution, query WMI "win32_printer" object:
public static void main(String[] args) {
    // select printers that have state = 0 and status = 3, which indicates that the printer can print
    ProcessBuilder builder = new ProcessBuilder("powershell.exe",
            "get-wmiobject -class win32_printer | Select-Object Name, PrinterState, PrinterStatus | where {$_.PrinterState -eq 0 -And $_.PrinterStatus -eq 3}");
    String fullStatus = null;
    Process reg;
    builder.redirectErrorStream(true);
    try {
        reg = builder.start();
        fullStatus = getStringFromInputStream(reg.getInputStream());
        reg.destroy();
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    System.out.print(fullStatus);
}
For converting InputStream to String look here: comprehensive StackOverflow answer, or you can simply use:
public static String getStringFromInputStream(InputStream is) {
    ByteArrayOutputStream result = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    int length;
    try {
        while ((length = is.read(buffer)) != -1) {
            result.write(buffer, 0, length);
        }
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    // StandardCharsets.UTF_8.name() requires JDK 7+
    String finalResult = "";
    try {
        finalResult = result.toString("UTF-8");
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
    }
    return finalResult;
}
Output:
Name PrinterState PrinterStatus
---- ------------ -------------
Foxit Reader PDF Printer 0 3
Send to OneNote 2010 0 3
Microsoft XPS Document Writer 0 3
Microsoft Print to PDF 0 3
Fax 0 3
\\192.168.50.192\POS_PRINTER 0 3
As you can see, you now have all the printers that are in working state in the string.
You can use your existing method (getAvailablePrinters()) and e.g. add something like this:
ArrayList<String> workingPrinters = new ArrayList<String>();
System.out.println("Working printers:");
for (String printer : getAvailablePrinters()) {
    // add a newline before the printer name and a space after it so that only the exact name matches
    if (fullStatus.contains("\n" + printer + " ")) {
        workingPrinters.add(printer);
        System.out.println(printer);
    }
}
And now you will have a nice list of working printers.
Console output:
Working printers:
Send to OneNote 2010
Foxit Reader PDF Printer
Microsoft XPS Document Writer
Microsoft Print to PDF
Fax
\\192.168.50.192\POS_PRINTER
Of course, you have to be careful with the names with this approach. For example, if "POS_PRINTER" is in the list of all printers but not among the working printers, it could still get added to the workingPrinters list if there is a working printer named "POS_PRINTER 1", since the output then contains the "\nPOS_PRINTER " string...

Change output filename prefix for DataFrame.write()

Output files generated via the Spark SQL DataFrame.write() method begin with the "part" basename prefix. e.g.
DataFrame sample_07 = hiveContext.table("sample_07");
sample_07.write().parquet("sample_07_parquet");
Results in:
hdfs dfs -ls sample_07_parquet/
Found 4 items
-rw-r--r-- 1 rob rob 0 2016-03-19 16:40 sample_07_parquet/_SUCCESS
-rw-r--r-- 1 rob rob 491 2016-03-19 16:40 sample_07_parquet/_common_metadata
-rw-r--r-- 1 rob rob 1025 2016-03-19 16:40 sample_07_parquet/_metadata
-rw-r--r-- 1 rob rob 17194 2016-03-19 16:40 sample_07_parquet/part-r-00000-cefb2ac6-9f44-4ce4-93d9-8e7de3f2cb92.gz.parquet
I would like to change the output filename prefix used when creating a file using Spark SQL DataFrame.write(). I tried setting the "mapreduce.output.basename" property on the hadoop configuration for the Spark context. e.g.
public class MyJavaSparkSQL {
    public static void main(String[] args) throws Exception {
        SparkConf sparkConf = new SparkConf().setAppName("MyJavaSparkSQL");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        ctx.hadoopConfiguration().set("mapreduce.output.basename", "myprefix");

        HiveContext hiveContext = new org.apache.spark.sql.hive.HiveContext(ctx.sc());
        DataFrame sample_07 = hiveContext.table("sample_07");
        sample_07.write().parquet("sample_07_parquet");

        ctx.stop();
    }
}
That did not change the output filename prefix for the generated files.
Is there a way to override the output filename prefix when using the DataFrame.write() method?
You cannot change the "part" prefix while using any of the standard output formats (like Parquet). See this snippet from ParquetRelation source code:
private val recordWriter: RecordWriter[Void, InternalRow] = {
  val outputFormat = {
    new ParquetOutputFormat[InternalRow]() {
      // ...
      override def getDefaultWorkFile(context: TaskAttemptContext, extension: String): Path = {
        // ..
        // prefix is hard-coded here:
        new Path(path, f"part-r-$split%05d-$uniqueWriteJobId$bucketString$extension")
      }
    }
  }
If you really must control the part file names, you'll probably have to implement a custom FileOutputFormat and use one of Spark's save methods that accept a FileOutputFormat class (e.g. saveAsHadoopFile).
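As a rough sketch of that direction (not applicable to DataFrame.write().parquet(), which has the hard-coded prefix shown above): a custom output format based on the old mapred API can rewrite the basename before delegating. The class name and prefix below are made up.
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.util.Progressable;

// Hypothetical output format that swaps the "part" basename for "myprefix"
public class PrefixedTextOutputFormat extends TextOutputFormat<NullWritable, Text> {
    @Override
    public RecordWriter<NullWritable, Text> getRecordWriter(FileSystem ignored, JobConf job,
            String name, Progressable progress) throws IOException {
        // "name" arrives as something like "part-00000"; rewrite it before delegating
        return super.getRecordWriter(ignored, job, name.replaceFirst("^part", "myprefix"), progress);
    }
}
A JavaPairRDD<NullWritable, Text> could then be saved with saveAsHadoopFile("out", NullWritable.class, Text.class, PrefixedTextOutputFormat.class).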
Assuming that the output folder has only one CSV file in it, we can rename it programmatically (or dynamically) using the code below. The last line gets all files of CSV type from the output directory and renames them to the desired file name.
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.conf.Configuration

val outputfolder_Path = "s3://<s3_AccessKey>:<s3_Securitykey>#<external_bucket>/<path>"
val fs = FileSystem.get(new java.net.URI(outputfolder_Path), new Configuration())

fs.globStatus(new Path(outputfolder_Path + "/*.*"))
  .filter(_.getPath.toString.split("/").last.split("\\.").last == "csv")
  .foreach { l =>
    fs.rename(new Path(l.getPath.toString), new Path(outputfolder_Path + "/DesiredFilename.csv"))
  }
I agree with @Tzach Zohar.
After saving your DataFrame to HDFS or S3, you can rename the files using the def below. The Scala example is ready to use, meaning you can drop it directly into your code or a utility.
In brief:
1) Get all the files under a folder using globStatus.
2) Loop through and rename each file with a prefix (or suffix, whatever your case is).
Note: Apache Commons is already available on Hadoop clusters, so no further dependencies are needed.
/**
 * prefixHdfsFiles
 * @param outputfolder_Path
 * @param prefix
 */
def prefixHdfsFiles(outputfolder_Path: String, prefix: String) = {
  import org.apache.hadoop.fs.{_}
  import org.apache.hadoop.conf.Configuration
  import org.apache.commons.io.FilenameUtils._
  import java.io.File
  import java.net.URI

  val fs = FileSystem.get(new URI(outputfolder_Path), new Configuration())

  fs.globStatus(new Path(outputfolder_Path + "/*.*")).foreach { l: FileStatus =>
    val newhdfsfileName = new Path(getFullPathNoEndSeparator(l.getPath.toString) + File.separatorChar + prefix + getName(l.getPath.toString))
    // fs.rename(new Path(l.getPath.toString), newhdfsfileName)
    val change = s"""
                    |original ${ new Path(l.getPath.toString) } --> new $newhdfsfileName
                    |""".stripMargin
    println(change)
  }
}
The caller would be, for example:
val outputfolder_Path = "/a/b/c/d/e/f/"
prefixHdfsFiles(outputfolder_Path, "myprefix_")

Migration from dcm4che2 to dcm4che3

I have used the below-mentioned APIs of dcm4che2, from the repository http://www.dcm4che.org/maven2/dcm4che/, in my Java project.
dcm4che-core-2.0.29.jar
org.dcm4che2.data.DicomObject
org.dcm4che2.io.StopTagInputHandler
org.dcm4che2.data.BasicDicomObject
org.dcm4che2.data.UIDDictionary
org.dcm4che2.data.DicomElement
org.dcm4che2.data.SimpleDcmElement
org.dcm4che2.net.service.StorageCommitmentService
org.dcm4che2.util.CloseUtils
dcm4che-net-2.0.29.jar
org.dcm4che2.net.CommandUtils
org.dcm4che2.net.ConfigurationException
org.dcm4che2.net.NetworkApplicationEntity
org.dcm4che2.net.NetworkConnection
org.dcm4che2.net.NewThreadExecutor
org.dcm4che3.net.service.StorageService
org.dcm4che3.net.service.VerificationService
Currently I want to migrate to dcm4che3, but the above-listed APIs are not found in dcm4che3, which I downloaded from this repository: http://sourceforge.net/projects/dcm4che/files/dcm4che3/
Could you please guide me to an alternate approach?
As you have already observed, the BasicDicomObject is history -- alongside quite a few others.
The new "Dicom object" is Attributes -- an object is a collection of attributes.
Therefore, you create Attributes, populate them with the tags you need for RQ-behaviour (C-FIND, etc) and what you get in return is another Attributes object from which you pull the tags you want.
In my opinion, dcm4che 2.x was vague on the subject of dealing with individual value representations. dcm4che 3.x is quite a bit clearer.
The migration demands a rewrite of your code regarding how you query and how you treat individual tags. On the other hand, dcm4che 3.x makes the new code less convoluted.
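To make that concrete, a tiny sketch of the new style (the tag and value below are only illustrative):
import org.dcm4che3.data.Attributes;
import org.dcm4che3.data.Tag;
import org.dcm4che3.data.VR;

// An Attributes instance plays the role of the old BasicDicomObject:
// you set the tags you need and read back whatever the SCP returns.
Attributes attrs = new Attributes();
attrs.setString(Tag.PatientName, VR.PN, "DOE^JOHN");
attrs.setNull(Tag.StudyInstanceUID, VR.UI);             // "return this tag" in a query

String patientName = attrs.getString(Tag.PatientName);  // "DOE^JOHN"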
On request, I have added the initial setup of a connection to some service class provider (SCP):
// Based on org.dcm4che:dcm4che-core:5.25.0 and org.dcm4che:dcm4che-net:5.25.0
import org.dcm4che3.data.*;
import org.dcm4che3.net.*;
import org.dcm4che3.net.pdu.AAssociateRQ;
import org.dcm4che3.net.pdu.PresentationContext;
import org.dcm4che3.net.pdu.RoleSelection;
import org.dcm4che3.net.pdu.UserIdentityRQ;
// Client side representation of the connection. As a client, I will
// not be listening for incoming traffic (but I could choose to do so
// if I need to transfer data via MOVE)
Connection local = new Connection();
local.setHostname("client.on.network.com");
local.setPort(Connection.NOT_LISTENING);
// Remote side representation of the connection
Connection remote = new Connection();
remote.setHostname("pacs.on.network.com");
remote.setPort(4100);
remote.setTlsProtocols(local.getTlsProtocols());
remote.setTlsCipherSuites(local.getTlsCipherSuites());
// Calling application entity
ApplicationEntity ae = new ApplicationEntity("MeAsAServiceClassUser".toUpperCase());
ae.setAETitle("MeAsAServiceClassUser");
ae.addConnection(local); // on which we may not be listening
ae.setAssociationInitiator(true);
ae.setAssociationAcceptor(false);
// Device
Device device = new Device("MeAsAServiceClassUser".toLowerCase());
device.addConnection(local);
device.addApplicationEntity(ae);
// Configure association
AAssociateRQ rq = new AAssociateRQ();
rq.setCallingAET("MeAsAServiceClassUser");
rq.setCalledAET("NameThatIdentifiesTheProvider"); // e.g. "GEPACS"
rq.setImplVersionName("MY-SCU-1.0"); // Max 16 chars
// Credentials (if appropriate)
String username = "username";
String passcode = "so secret";
if (null != username && username.length() > 0 && null != passcode && passcode.length() > 0) {
rq.setUserIdentityRQ(UserIdentityRQ.usernamePasscode(username, passcode.toCharArray(), true));
}
Example, pinging the PACS (using the setup above):
String[] TRANSFER_SYNTAX_CHAIN = {
UID.ExplicitVRLittleEndian,
UID.ImplicitVRLittleEndian
};
// Define transfer capabilities for verification SOP class
ae.addTransferCapability(
new TransferCapability(null,
/* SOP Class */ UID.Verification,
/* Role */ TransferCapability.Role.SCU,
/* Transfer syntax */ TRANSFER_SYNTAX_CHAIN)
);
// Setup presentation context
rq.addPresentationContext(
new PresentationContext(
rq.getNumberOfPresentationContexts() * 2 + 1,
/* abstract syntax */ UID.Verification,
/* transfer syntax */ TRANSFER_SYNTAX_CHAIN
)
);
rq.addRoleSelection(new RoleSelection(UID.Verification, /* is SCU? */ true, /* is SCP? */ false));
try {
// 1) Open a connection to the SCP
Association association = ae.connect(local, remote, rq);
// 2) PING!
DimseRSP rsp = association.cecho();
rsp.next(); // Consume reply, which may fail
// Still here? Success!
// 3) Close the connection to the SCP
if (association.isReadyForDataTransfer()) {
association.waitForOutstandingRSP();
association.release();
}
} catch (Throwable ignore) {
// Failure
}
Another example, retrieving studies from a PACS given accession numbers; setting up the query and handling the result:
String modality = null; // e.g. "OT"
String accessionNumber = "1234567890";
//--------------------------------------------------------
// HERE follows setup of a query, using an Attributes object
//--------------------------------------------------------
Attributes query = new Attributes();
// Indicate character set
{
int tag = Tag.SpecificCharacterSet;
VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
query.setString(tag, vr, "ISO_IR 100");
}
// Study level query
{
int tag = Tag.QueryRetrieveLevel;
VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
query.setString(tag, vr, "STUDY");
}
// Accession number
{
int tag = Tag.AccessionNumber;
VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
query.setString(tag, vr, accessionNumber);
}
// Optionally filter on modality in study if 'modality' is provided,
// otherwise retrieve modality
{
int tag = Tag.ModalitiesInStudy;
VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
if (null != modality && modality.length() > 0) {
query.setString(tag, vr, modality);
} else {
query.setNull(tag, vr);
}
}
// We are interested in study instance UID
{
int tag = Tag.StudyInstanceUID;
VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
query.setNull(tag, vr);
}
// Do the actual query, needing an ApplicationEntity (ae),
// a local (local) and remote (remote) Connection, and
// an AAssociateRQ (rq) set up earlier.
try {
// 1) Open a connection to the SCP
Association as = ae.connect(local, remote, rq);
// 2) Query
int priority = 0x0002; // low for the sake of demo :)
as.cfind(UID.StudyRootQueryRetrieveInformationModelFind, priority, query, null,
new DimseRSPHandler(as.nextMessageID()) {
@Override
public void onDimseRSP(Association assoc, Attributes cmd,
Attributes response) {
super.onDimseRSP(assoc, cmd, response);
int status = cmd.getInt(Tag.Status, -1);
if (Status.isPending(status)) {
//--------------------------------------------------------
// HERE follows handling of the response, which
// is just another Attributes object
//--------------------------------------------------------
String studyInstanceUID = response.getString(Tag.StudyInstanceUID);
// etc...
}
}
});
// 3) Close the connection to the SCP
if (as.isReadyForDataTransfer()) {
as.waitForOutstandingRSP();
as.release();
}
}
catch (Exception e) {
// Failure
}
More on this at https://github.com/FrodeRanders/dicom-tools

Detect SDCard in Blackberry [duplicate]

I want to openOrCreate a database on the SD card / media card. When I run the application on the device (BlackBerry Curve 8900), I find only one root, i.e. "system/", and when I run the application in the simulator (9500), I find three roots, as shown in the comment in the code. I am getting an error at:
_db = DatabaseFactory.openOrCreate(_uri);
(error: Method "toString" with signature "()Ljava/lang/String;" is not applicable on this object)
I am not able to understand what this error is about.
Here is the code:
public void getValues() throws Exception {
    boolean sdCardPresent = false;
    String root = null;
    Enumeration e = FileSystemRegistry.listRoots();
    while (e.hasMoreElements()) {
        root = (String) e.nextElement();
        // value of root = "system/" when run on the device, and
        // value of root = "store/", "SDCard/", "system/" when run in the simulator
        System.out.println("Value of root::" + root);
        if (root.equalsIgnoreCase("system/")) {
            sdCardPresent = true;
        }
    }
    System.out.println("--------------------getValues()----------------------------------");
    URI _uri = URI.create(Global.DB_PATH + Global.DB_Main);
    System.out.println("Value of uri::" + _uri);
    _db = DatabaseFactory.openOrCreate(_uri); // getting the error here
    System.out.println("Value of _db::" + _db);
    _db.close();
}
I tried these three paths. I get output with "/store" (when run in the simulator) but an error with the other two paths. Even using "/store" on the device gives the same error.
Global.DB_PATH = "/MediaCard/databases/";
Global.DB_PATH = "/SDCard/databases/";
Global.DB_PATH = "/store/databases/";
Is there any way to get the SDCard/Media Card as a root so that I can copy the database there?
My guess is that when you are running your app on a real device, you have the USB cable plugged in to the device. If this is the case, try unplugging the cable and rerunning the app. You may use Dialog.inform() to quickly check what roots you get this time.
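For a quick check, a throwaway sketch along these lines (called from the event thread, e.g. a menu item handler) lists the roots in a dialog:
import java.util.Enumeration;
import javax.microedition.io.file.FileSystemRegistry;
import net.rim.device.api.ui.component.Dialog;

// Collect all file system roots and show them, so you can see
// whether "SDCard/" appears once the USB cable is unplugged.
StringBuffer sb = new StringBuffer("Roots: ");
Enumeration roots = FileSystemRegistry.listRoots();
while (roots.hasMoreElements()) {
    sb.append((String) roots.nextElement()).append(' ');
}
Dialog.inform(sb.toString());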
private ObjectListField getFileList() {
    if (fileList == null) {
        fileList = new ObjectListField();
        String[] roots = new String[3];
        Enumeration rootsEnum = FileSystemRegistry.listRoots(); // 'enum' is a reserved word in Java, so use another name
        int x = 0;
        while (rootsEnum.hasMoreElements()) {
            if (x < 3) {
                roots[x] = rootsEnum.nextElement().toString();
            }
            x++;
        }
        rootsEnum = FileSystemRegistry.listRoots();
        fileList.set((roots[2] != null) ? roots : new String[] { "system/", "SDCard/", "store/" });
    }
    return fileList;
}
Try this code.
