Read and write a DICOM attribute with pixelmed - Java

I'm trying to read a data attribute by its tag in Java with the pixelmed library.
The code that I have is:
public static void main(String args[]) throws IOException, DicomException {
    DicomInputStream my_image = new DicomInputStream(new File("/Volumes/CDP/20130212101717421/20130212101636203"));
    AttributeList list = new AttributeList();
    SpecificCharacterSet sc = new SpecificCharacterSet(list);
    PersonNameAttribute pna = new PersonNameAttribute(TagFromName.PatientName, 1000, my_image, sc);
    System.out.println(pna.getDelimitedStringValuesOrEmptyString());
}
With this code I get the data of all attributes:
���UL������OB��������UI�1.2.840.10008.5.1.4.1.1.4���UI6�1.2.840.113619.2.244.6945.224850.21460.1360606914.740���UI�1.2.840.10008.1.2.1���UI�1.2.376.99999.1.1.20041017��SH�CDP_V3��AE�MRS��CS
�ISO_IR ... etc etc
But I just want to get the value of the tag (0x0010,0x0010), i.e. PatientName.

Have you considered:
AttributeList list = new AttributeList();
list.read(file);
String patientName=Attribute.getDelimitedStringValuesOrEmptyString(list,TagFromName.PatientName);
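
For reference, a minimal self-contained sketch of that approach (assuming the pixelmed classes above are on the classpath; the file path is a placeholder, not the asker's actual file):

import com.pixelmed.dicom.Attribute;
import com.pixelmed.dicom.AttributeList;
import com.pixelmed.dicom.DicomException;
import com.pixelmed.dicom.DicomInputStream;
import com.pixelmed.dicom.TagFromName;
import java.io.File;
import java.io.IOException;

public class ReadPatientName {
    public static void main(String[] args) throws IOException, DicomException {
        // Let AttributeList parse the whole data set instead of pulling a single
        // attribute straight out of the raw stream (which is what printed the garbage above)
        AttributeList list = new AttributeList();
        list.read(new DicomInputStream(new File("/path/to/dicom/file")));
        // Look up the value of (0x0010,0x0010) PatientName in the parsed list
        String patientName = Attribute.getDelimitedStringValuesOrEmptyString(list, TagFromName.PatientName);
        System.out.println(patientName);
    }
}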

Related

How to extract text from a web site?

I'm working on code that parses a weather site.
I found the CSS class on the site that holds the data I need. How do I extract "October 12" from it as a string (e.g. "Tue, Oct 12")?
public class Pars {
    private static Document getPage() throws IOException {
        String url = "https://www.gismeteo.by/weather-mogilev-4251/3-day/";
        Document page = Jsoup.parse(new URL(url), 3000);
        return page;
    }
    public static void main(String[] args) throws IOException {
        Document page = getPage();
        Element Nameday = page.select("div [class=date date-2]").first();
        String date = Nameday.select("div [class=date date-2").text();
        System.out.println(Nameday);
    }
}
The code is meant to parse the weather site. On the page I found the right class, which contains exactly the date and day of the week I need. But at the step where I convert the element's content to a string, the code throws an error.
The problem is with the class selector; it should look like this: div.date.date-2
Working code example:

import java.io.IOException;
import java.net.URL;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class Pars {
    private static Document getPage() throws IOException {
        String url = "https://www.gismeteo.by/weather-mogilev-4251/3-day/";
        return Jsoup.parse(new URL(url), 3000);
    }
    public static void main(String[] args) throws IOException {
        Document page = getPage();
        // Select an element carrying both classes "date" and "date-2"
        Element dateDiv = page.select("div.date.date-2").first();
        if (dateDiv != null) {
            String date = dateDiv.text();
            System.out.println(date);
        }
    }
}
Here is an answer to your problem: Jsoup select div having multiple classes
In the future, please make sure your question is more detailed and well structured. Here is the "asking questions" guideline: https://stackoverflow.com/help/how-to-ask

How to execute a single GET request multiple times with different sets of headers, and then validate the response, using Java API automation

This is my code where I am using headers. I want to execute the same login API GET request with multiple sets of headers and then validate the response as well.
// API Test for Login
@Test(priority = 1)
public void GetLoginAPI() throws ClientProtocolException, IOException, JSONException {
    HashMap<String, String> header = new HashMap<String, String>();
    header.put("auth-id", prop.getProperty("authid1"));
    header.putAll(header());
    httpResp = restClient.getResult(prop.getProperty("LoginURL"), header);
    // Status code, JSON string & headers
    JSONObject respJson = TestUtil.staCodeJsonStringHeaders(httpResp);
    String idValue = TestUtil.getValueByJPath(respJson, "/user/id");
    String uidValue = TestUtil.getValueByJPath(respJson, "/user/uid");
    String locationValue = TestUtil.getValueByJPath(respJson, "/user/location");
    System.out.println("Value of id : " + idValue);
    System.out.println("Value of uid : " + uidValue);
    System.out.println("Value of location : " + locationValue);
    Assert.assertEquals(Integer.parseInt(idValue), 319);
    Assert.assertEquals(Integer.parseInt(uidValue), 20190807);
    Assert.assertEquals(locationValue, "bangalore");
}
I believe you are using TestNG.
Option 1 - use a DataProvider, for example (a fuller sketch follows after this snippet):
@DataProvider(name = "data-provider")
public Object[][] dataProviderMethod() {
    return new Object[][] { { "data1" }, { "data2" } };
}

@Test(dataProvider = "data-provider")
public void GetLoginAPI(String header) throws ClientProtocolException, IOException, JSONException {
    // Now use a different header value for each GET request invocation
}
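
A fuller sketch of that idea. This is only an illustration: the header names and values are made up, and the commented-out lines stand in for the asker's own helpers (prop, restClient, TestUtil), which are not shown here.

import java.util.HashMap;
import java.util.Map;
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginApiTest {

    // Each row is one set of headers the same GET request should be executed with
    @DataProvider(name = "header-sets")
    public Object[][] headerSets() {
        Map<String, String> first = new HashMap<>();
        first.put("auth-id", "authid1-value");   // placeholder value
        Map<String, String> second = new HashMap<>();
        second.put("auth-id", "authid2-value");  // placeholder value
        return new Object[][] { { first }, { second } };
    }

    // TestNG calls this test once per row of the data provider
    @Test(dataProvider = "header-sets")
    public void getLoginApi(Map<String, String> headers) {
        // httpResp = restClient.getResult(prop.getProperty("LoginURL"), headers);
        // JSONObject respJson = TestUtil.staCodeJsonStringHeaders(httpResp);
        // Assert on whatever fields the response should contain, e.g.:
        // Assert.assertEquals(TestUtil.getValueByJPath(respJson, "/user/location"), "bangalore");
        Assert.assertNotNull(headers.get("auth-id"));
    }
}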
Option 2 -
Store the header values in a map (or list) and loop over it:
HashMap<Integer, String> headerValues = new HashMap<Integer, String>();
headerValues.put(1, "data 1");
headerValues.put(2, "data 2");
// etc.
Then, iterating over the map in a loop, you can fire the same request with each set of headers.
Option 3 -
Keep the header values in an Excel sheet, read them with Apache POI via a readExcel function, and pass those values to your test (a sketch of such a helper follows below).
Refer to https://www.javatpoint.com/how-to-read-excel-file-in-java
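
A minimal sketch of such a readExcel helper, assuming an .xlsx file whose first column holds text header values (the file layout is an assumption for illustration):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelReader {
    // Reads the first column of the first sheet into a list of header values
    public static List<String> readExcel(String path) throws IOException {
        List<String> values = new ArrayList<>();
        try (FileInputStream fis = new FileInputStream(path);
             XSSFWorkbook workbook = new XSSFWorkbook(fis)) {
            Sheet sheet = workbook.getSheetAt(0);
            for (Row row : sheet) {
                // Assumes the first cell of each row is a text cell
                if (row.getCell(0) != null) {
                    values.add(row.getCell(0).getStringCellValue());
                }
            }
        }
        return values;
    }
}

The resulting list can then feed the DataProvider from Option 1.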

How can I fill multiple docx files with data from one xml file using docx4j?

I'm trying to fill data from one XML file into a docx template in a way that would generate multiple Word files.
public static final String input_DOCX = "E:\\Temp5\\Word document template.docx";
public static final String input_XML = "E:\\Temp5\\Word document data.xml";
public static final String output_DOCX = "E:\\Temp5\\Word document output.docx";

public static void main(String[] args) throws Exception {
    WordprocessingMLPackage wordMLPackage = Docx4J.load(new File(input_DOCX));
    FileInputStream xmlStream = new FileInputStream(new File(input_XML));
    Docx4J.bind(wordMLPackage, xmlStream, Docx4J.FLAG_BIND_INSERT_XML | Docx4J.FLAG_BIND_BIND_XML | Docx4J.FLAG_BIND_REMOVE_SDT);
    Docx4J.save(wordMLPackage, new File(output_DOCX), Docx4J.FLAG_NONE);
    System.out.println("Saved: " + output_DOCX);
}
I want something similar to this code but for multiple docx files. I'd really appreciate it if someone could help me with this. Thanks in advance!
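
One possible direction, sketched only: assuming you simply want to repeat the load/bind/save cycle once per output file, with the template and XML paths taken from the question and an arbitrary loop count and output naming scheme.

import java.io.File;
import java.io.FileInputStream;
import org.docx4j.Docx4J;
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;

public class MultiBind {
    public static final String input_DOCX = "E:\\Temp5\\Word document template.docx";
    public static final String input_XML = "E:\\Temp5\\Word document data.xml";

    public static void main(String[] args) throws Exception {
        // Produce several output files from the same template and the same XML
        for (int i = 1; i <= 3; i++) {
            // Reload the template each time, since bind() modifies the package in place
            WordprocessingMLPackage wordMLPackage = Docx4J.load(new File(input_DOCX));
            // Reopen the XML stream each time, since it is consumed by bind()
            try (FileInputStream xmlStream = new FileInputStream(new File(input_XML))) {
                Docx4J.bind(wordMLPackage, xmlStream,
                        Docx4J.FLAG_BIND_INSERT_XML | Docx4J.FLAG_BIND_BIND_XML | Docx4J.FLAG_BIND_REMOVE_SDT);
            }
            File out = new File("E:\\Temp5\\Word document output " + i + ".docx");
            Docx4J.save(wordMLPackage, out, Docx4J.FLAG_NONE);
            System.out.println("Saved: " + out);
        }
    }
}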

Insert row in a Google Worksheet with Java

How can I insert a row in a spreadsheet?
I try with this code but get an error:
Exception in thread "main" com.google.gdata.util.ServiceException: Method Not Allowed
at line: row = service.insert(url, row);
Why Method Not Allowed ???
public class MyClass {
    public static void main(String[] args)
            throws AuthenticationException, MalformedURLException, IOException, ServiceException
    {
        SpreadsheetService service = new SpreadsheetService("MyApp");
        FeedURLFactory factory = FeedURLFactory.getDefault();
        String key = "***my_key***";
        URL spreadSheetUrl = factory.getWorksheetFeedUrl(key, "public", "full");
        WorksheetFeed feed = service.getFeed(spreadSheetUrl, WorksheetFeed.class);
        WorksheetEntry worksheet = feed.getEntries().get(13);
        URL url = worksheet.getListFeedUrl();
        ListEntry row = new ListEntry();
        row.getCustomElements().setValueLocal("header", "aaa");
        row = service.insert(url, row);
    }
}
Help me! Thanks.
A similar problem has been reported here. Perhaps you are not using the correct form of the URL, but I can understand why you wouldn't want to post that publicly. However, it's hard to tell whether that is the problem without seeing it.
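One thing worth ruling out (an assumption on my part, not confirmed by the question): the worksheet feed is requested with "public"/"full" visibility, and the anonymous public feed is read-only, which can produce exactly this kind of Method Not Allowed response on insert. A sketch of the authenticated, private-visibility variant, with placeholder credentials and key:

import java.net.URL;
import com.google.gdata.client.spreadsheet.FeedURLFactory;
import com.google.gdata.client.spreadsheet.SpreadsheetService;
import com.google.gdata.data.spreadsheet.ListEntry;
import com.google.gdata.data.spreadsheet.WorksheetEntry;
import com.google.gdata.data.spreadsheet.WorksheetFeed;

public class InsertRow {
    public static void main(String[] args) throws Exception {
        SpreadsheetService service = new SpreadsheetService("MyApp");
        // Authenticate first; writes are not allowed on the anonymous/public feed
        service.setUserCredentials("user@example.com", "password");   // placeholders
        String key = "***my_key***";
        URL worksheetFeedUrl = FeedURLFactory.getDefault()
                .getWorksheetFeedUrl(key, "private", "full");          // note: private, not public
        WorksheetFeed feed = service.getFeed(worksheetFeedUrl, WorksheetFeed.class);
        WorksheetEntry worksheet = feed.getEntries().get(13);
        ListEntry row = new ListEntry();
        row.getCustomElements().setValueLocal("header", "aaa");
        service.insert(worksheet.getListFeedUrl(), row);
    }
}

Note that the old gdata Spreadsheets API has since been shut down in favor of the Google Sheets API, so this only applies to the legacy setup shown in the question.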

How to use the Stanford parser

I downloaded Stanford parser 2.0.5 and used the Demo2.java source code that comes with the package, but after I compile and run the program I get many errors.
Part of my program is:
public class testStanfordParser {
    /** Usage: ParserDemo2 [[grammar] textFile] */
    public static void main(String[] args) throws IOException {
        String grammar = args.length > 0 ? args[0] : "edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz";
        String[] options = { "-maxLength", "80", "-retainTmpSubcategories" };
        LexicalizedParser lp = LexicalizedParser.loadModel(grammar, options);
        TreebankLanguagePack tlp = new PennTreebankLanguagePack();
        GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
        ...
The errors are:
Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz java.io.IOException: Unable to resolve edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz" as either class path, filename or URL
at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:408)
at edu.stanford.nlp.io.IOUtils.readStreamFromString(IOUtils.java:356)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromSerializedFile(LexicalizedParser.java:594)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromFile(LexicalizedParser.java:389)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(LexicalizedParser.java:157)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(LexicalizedParser.java:143)
at testStanfordParser.main(testStanfordParser.java:19)
Loading parser from text file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz
Exception in thread "main" java.lang.NoSuchMethodError: edu.stanford.nlp.io.IOUtils.readerFromString(Ljava/lang/String;)Ljava/io/BufferedReader;
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromTextFile(LexicalizedParser.java:528)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.getParserFromFile(LexicalizedParser.java:391)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(LexicalizedParser.java:157)
at edu.stanford.nlp.parser.lexparser.LexicalizedParser.loadModel(LexicalizedParser.java:143)
at testStanfordParser.main(testStanfordParser.java:19)
Please help me to solve it.
Thanks
All grammars are located in the included models jar.
Is "stanford-parser-2.0.5-models.jar" in the execution directory or on the classpath?
The NoSuchMethodError further down may also indicate that an older Stanford NLP jar is on the classpath alongside 2.0.5; mixed versions commonly cause that error.
I am using the Stanford classifier (Stanford NER) to extract entities like name, location, and organization.
Here is my code:
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import edu.stanford.nlp.ie.AbstractSequenceClassifier;
import edu.stanford.nlp.ie.crf.CRFClassifier;
import edu.stanford.nlp.ling.CoreLabel;

public class stanfrdIntro {
    public static void main(String[] args) throws IOException {
        String serializedClassifier = "classifiers/english.all.3class.distsim.crf.ser.gz";
        AbstractSequenceClassifier<CoreLabel> classifier = CRFClassifier
                .getClassifierNoExceptions(serializedClassifier);
        String s1 = "Good afternoon Rahul Kulhari, how are you today?";
        s1 = s1.replaceAll("\\s+", " ");
        // Tag the text; entities come back wrapped in inline XML tags such as <PERSON>...</PERSON>
        String t = classifier.classifyWithInlineXML(s1);
        System.out.println(Arrays.toString(getTagValues(t).toArray()));
    }

    private static final Pattern TAG_REGEX = Pattern.compile("<PERSON>(.+?)</PERSON>");

    // Collects everything wrapped in <PERSON> tags
    private static Set<String> getTagValues(final String str) {
        final Set<String> tagValues = new HashSet<String>();
        final Matcher matcher = TAG_REGEX.matcher(str);
        while (matcher.find()) {
            tagValues.add(matcher.group(1));
        }
        return tagValues;
    }
}
This might help you, but I am extracting only entities.
