I must generate an HL7 file, but I'm facing a problem with the PV1 segment.
I can't find how to set the Facility field to my value.
I use HAPI, but I can't find the Java method that allows that...
I succeeded in setting the PV1-9 Consulting doctor field:
msg.getPV1().insertConsultingDoctor(0).getGivenName().setValue(nomMedecin);
but there is no insertXxx method for setting the PV1-3.4 field, just one to get the value:
msg.getPV1().getPv13_AssignedPatientLocation().getFacility();
The API of HAPI is a bit unusual since most of the time there is no need to instantiate objects. Just calling the get method gives you an object:
HD facility = msg.getPV1().getPv13_AssignedPatientLocation().getPl4_Facility();
This gives you an instance of HD, which has further components:
ST universalID = facility.getHd2_UniversalID();
Once you are down to a String (ST) data type, you can set a value:
universalID.setValue("FooBar");
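For PV1-3.4 specifically, the same calls chain into a single statement. A minimal sketch reusing only the getters shown above ("FooBar" is a placeholder value):
msg.getPV1()
   .getPv13_AssignedPatientLocation()
   .getPl4_Facility()
   .getHd2_UniversalID()
   .setValue("FooBar");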
I am writing Java code to interact with a Firestore database. I need to access a field whose name contains a forward slash:
f.col1 = document.getString("username");
f.col2 = document.getString("subject");
f.col3 = document.getString("details/comments");
When I run the code, I get the following error for the line getting Col3:
java.lang.IllegalArgumentException: Use FieldPath.of() for field names containing '~*/[]'.
I am unable to work out how to use this method correctly, and I cannot find any documentation for it (Googling brings up the equivalent JS method). When I attempt to use the FieldPath.of() method, as follows:
f.col3 = document.getString(FieldPath.of("details/comments"));
I get the following compiler error:
java: incompatible types: com.google.cloud.firestore.FieldPath cannot be converted to java.lang.String
I have no control over the structure of the Firebase data, so I need to work with this field name.
I am using the following documentation to interact with the database:
https://firebase.google.com/docs/firestore/quickstart#java_9
I think you're looking for FieldPath.of for that last field:
f.col3 = document.getString(FieldPath.of("details/comments"));
If the / was meant to indicate that comments is a subfield inside the details map, the correct separator would be a ., by the way:
f.col3 = document.getString(FieldPath.of("details.comments"));
For your second error, it looks like DocumentSnapshot.getString() only exists with a String parameter, so you'll want to use get(), which accepts a FieldPath:
f.col3 = document.get(FieldPath.of("details/comments"));
or, for the nested-map case (each argument to FieldPath.of() is one path segment, so pass the segments separately):
f.col3 = document.get(FieldPath.of("details", "comments"));
If you're getting a type error here, note that get() returns Object, so convert the result, e.g. with toString().
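Putting it together, a hedged sketch for the literal slash-in-the-name case (assuming document is a com.google.cloud.firestore.DocumentSnapshot and f is the object from the question; get(FieldPath) returns Object, so the conversion to String is explicit):
Object raw = document.get(FieldPath.of("details/comments")); // one literal segment, slash included
f.col3 = (raw == null) ? null : raw.toString();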
I am trying to deploy a DocumentDB instance using the AWS CDK docdb package (Java), and I keep getting this error:
Invalid DB Instance class: db.d2.large (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue; Request
ID: 41494b4b-d14f-46ff-b077-9ee73aad515f; Proxy: null)
I know that, for example, an r5.large would work, but I cannot find a way to map from the InstanceType enum values (STANDARDx, STORAGE2, etc.) to the actual AWS instance types; it does not appear to be documented anywhere, and the examples (in TypeScript) happily use something like instanceType: 'r5.large' and move on.
This is my code, for completeness:
DatabaseCluster dbCluster = DatabaseCluster.Builder.create(scope, "ApiDocDb")
        .dbClusterName(dataProps.getTableName())
        .masterUser(Login.builder()
                .username(masterUsername)
                .password(SecretValue.plainText(masterPwd))
                .build())
        .instanceType(InstanceType.of(InstanceClass.STORAGE2, InstanceSize.LARGE))
        .vpc(Vpc.Builder.create(scope, "DocDB-VPC")
                .cidr("10.2.0.0/16")
                .build())
        .vpcSubnets(SubnetSelection.builder().subnetType(SubnetType.PUBLIC).build())
        .build();
By trial and error (largely) I finally arrived at this:
.instanceType(InstanceType.of(InstanceClass.MEMORY5, InstanceSize.LARGE))
which seems to work.
Some help came from this page, cross-correlated with the fact that, in the console, if one tries to add an instance, only r5 instances are allowed in the drop-down.
I just wish AWS had documented that enum a bit better.
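If you want to sanity-check the mapping without deploying, a hedged sketch (my assumption is that InstanceType's toString() yields the identifier CDK hands to CloudFormation; verify against your CDK version):
InstanceType it = InstanceType.of(InstanceClass.MEMORY5, InstanceSize.LARGE);
System.out.println(it); // expected output: r5.large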
For everyone else tackling the same problem, I created a gist of all currently available enum keys (as of CDK v2.12.0) mapped to their corresponding instance types.
Keep in mind that each instance type has a set of corresponding instance sizes.
Following is my code in JCo 3.0 to connect to the RFC and get the data from the function module:
try {
    JCoDestination destination = JCoDestinationManager.getDestination(DESTINATION_NAME);
    JCoFunction function = destination.getRepository().getFunction("funtion_abap");
    // the problem line: setting a String on the IM_ID_NAME parameter
    function.getImportParameterList().setValue("IM_ID_NAME", "MTC_ZPR008_TEMPB");
    function.execute(destination);
    JCoTable table = function.getTableParameterList().getTable("export_table");
}
catch (Exception e) {
}
Following is my ABAP function:
CALL FUNCTION 'funtion_abap' DESTINATION m_vsyid
EXPORTING
IM_ID_NAME = table_vname
IMPORTING
export_table = table_tvarvc
EXCEPTIONS
system_failure = 1
communication_failure = 2
resource_failure = 3
OTHERS = 4.
Following is the error I'm getting: I'm passing a String as the import parameter while it wants a TABLE field:
Exception in thread "main" com.sap.conn.jco.ConversionException: (122) JCO_ERROR_CONVERSION: Cannot convert a value of 'MTC_ZPR008_TEMPB' from type java.lang.String to TABLE at field IM_ID_NAME
at com.sap.conn.jco.rt.AbstractRecord.createConversionException(AbstractRecord.java:468)
at com.sap.conn.jco.rt.AbstractRecord.createConversionException(AbstractRecord.java:462)
at com.sap.conn.jco.rt.AbstractRecord.setValue(AbstractRecord.java:2958)
at com.sap.conn.jco.rt.AbstractRecord.setValue(AbstractRecord.java:4074)
at com.amgen.rfc.RFC_Connection.main(RFC_Connection.java:47)
Please tell me how to solve this problem.
The RFC definition and your code are in direct opposition. According to the ABAP function (as far as I read it), the result of the call is the value in field IM_ID_NAME and the table is the input parameter.
I'm not 100% familiar with the declaration of RFCs in ABAP (I only know the Java side of it), but if I interpret the error message correctly, the table seems to be in the import parameter list rather than the table parameter list (not usual, but not unheard of, either). So instead of getTableParameterList you will possibly have to call getImportParameterList. Also, you should omit setting the field IM_ID_NAME, because that's the response value and resides in the output parameter list.
I know the question is quite old, but someone may find my response useful one day, since I had the same problem:
JCoTable tab = function.getImportParameterList().getTable("IM_ID_NAME");
tab.appendRow();
tab.firstRow(); // I'm not sure if this is actually required
tab.setValue("PARAM_NAME", paramValue);
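For completeness, a hedged sketch of the corrected end-to-end flow (names are taken from the question; PARAM_NAME stands in for whatever the table's field is actually called, and I haven't run this against the real function module):
JCoDestination destination = JCoDestinationManager.getDestination(DESTINATION_NAME);
JCoFunction function = destination.getRepository().getFunction("funtion_abap");
// IM_ID_NAME is TABLE-typed, so fill it row by row instead of calling setValue with a String
JCoTable tab = function.getImportParameterList().getTable("IM_ID_NAME");
tab.appendRow();
tab.setValue("PARAM_NAME", "MTC_ZPR008_TEMPB");
function.execute(destination);
JCoTable result = function.getTableParameterList().getTable("export_table");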
I'm new to Sphinx, and I've encountered a few problems:
1. After setting max_matches = 200 in the searchd section of csft.conf, I called
org.sphx.api.test.main(new String[]{"-h", "127.0.0.1","-i", "magnet","-p", "9312", "-l", "100", "keyword"});
in a Java main method. The error returned is:
Error: searchd error: per-query max_matches=1000 out of bounds (per-server max_matches=200)
As you can see, I've passed the param -l 100; what else should I set to prevent this error in Java?
2. I want to use sortMode = SphinxClient.SPH_SORT_TIME_SEGMENTS to have the search results ordered by time, descending. My attribute is written like this in csft.conf:
sql_attr_timestamp=UNIX_TIMESTAMP(upload_time) as dt
Could anyone tell me how I can set the attribute in Java code? I've tried to set the sortClause String in Java, but it always says that attribute XXX has not been found.
3. I want to know whether SphinxClient in Java is thread-safe, because I don't want to create a SphinxClient instance every time a person does a query.
Thanks in advance!
If the class you are using is https://code.google.com/p/sphinxtools/source/browse/trunk/src/org/sphx/test/test.java?r=2
then:
1. The function never even inspects argv; it hardcodes all the variables. Nothing is passed as the third param to setLimits (see the sketch after this list).
2. sql_attr_timestamp simply accepts a column name, no functions or anything. The function call HAS to be in the main sql_query (see the config sketch below).
3. My Java is very rusty, but I would have to say no. It stores all sorts of state in private variables; multiple threads using the client at once will clobber them.
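Hedged sketches for the points above (method names follow the org.sphx.api client, which capitalizes them like the other Sphinx API ports, and SetLimits declares SphinxException; host, port, and values come from the question). For point 1, drive SphinxClient directly instead of test.main() and keep the third SetLimits argument at or below the server-side limit:
SphinxClient cl = new SphinxClient("127.0.0.1", 9312);
cl.SetLimits(0, 20, 100); // offset, limit, max_matches <= per-server max_matches
For point 2, the idea in csft.conf terms (table and column names are placeholders):
sql_query = SELECT id, title, UNIX_TIMESTAMP(upload_time) AS dt FROM magnet_docs
sql_attr_timestamp = dt
For point 3, since the client keeps query state in private fields, give each thread its own instance rather than sharing one:
ThreadLocal<SphinxClient> perThread =
        ThreadLocal.withInitial(() -> new SphinxClient("127.0.0.1", 9312));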
I have tried using AttachVolumeRequest, but in response I get the following error:
Caught Exception: The request must contain the parameter volume
Response Status Code: 400
Error Code: MissingParameter
Here is my code; in this code, ec2 is my AmazonEC2 client object and it works fine so far:
AttachVolumeRequest attachRequest = new AttachVolumeRequest()
        .withInstanceId("my instance id");
attachRequest.setRequestCredentials(credentials);
EbsBlockDevice ebs = new EbsBlockDevice();
ebs.setVolumeSize(2);
//attachRequest.withVolumeId(ebs.getSnapshotId());
AttachVolumeResult result = ec2.attachVolume(attachRequest);
Any help is highly appreciated. Thanks in advance.
Cause
Class EbsBlockDevice from the AWS SDK for Java serves a different purpose; accordingly, method getSnapshotId() only returns "The ID of the snapshot from which the volume will be created", i.e. not the volume ID, hence the respective exception.
Solution
You most likely want to use class CreateVolumeRequest instead, e.g. (off the top of my head):
CreateVolumeRequest createVolumeRequest = new CreateVolumeRequest()
        .withAvailabilityZone("my instance's AZ") // The AZ in which to create the volume.
        .withSize(2); // The size of the volume, in gigabytes.
CreateVolumeResult createVolumeResult = ec2.createVolume(createVolumeRequest);
AttachVolumeRequest attachRequest = new AttachVolumeRequest()
        .withInstanceId("my instance id")
        .withVolumeId(createVolumeResult.getVolume().getVolumeId())
        .withDevice("/dev/sdf"); // AttachVolume also requires a device name.
AttachVolumeResult attachResult = ec2.attachVolume(attachRequest);
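One caveat worth adding: a freshly created volume can still be in the creating state, and AttachVolume fails until it is available. A hedged sketch of a simple poll with the v1 SDK (the interval is arbitrary; handle InterruptedException properly in real code):
String volumeId = createVolumeResult.getVolume().getVolumeId();
DescribeVolumesRequest describeRequest = new DescribeVolumesRequest().withVolumeIds(volumeId);
while (!"available".equals(ec2.describeVolumes(describeRequest).getVolumes().get(0).getState())) {
    Thread.sleep(1000); // crude poll until the volume leaves "creating"
}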