j8583 cannot handle field 128 - java

I've been using j8583 to parse and construct ISO 8583 messages in Java. All seemed well until one of the messages had field 128 in it. That field is always missing when I construct or parse a message that has bit 128 set, but the other bits (2...127) are fine.
I've double-checked the XML configuration, and nothing is wrong there.
Is it just me, or is there actually a bug in j8583? Does anybody know how to solve this? I'm on a really tight schedule, so changing libraries for ISO 8583 is very unlikely.

I'm the author of j8583. I just reviewed the code and there is indeed a problem with MessageFactory.newMessage() where it won't assign field 128 to new messages. I just committed the change, so you can get the latest source from the repository and your new messages will include field 128.
I also reviewed the parsing code and I couldn't find anything wrong there. If you parse a message with field 128 and it's in your parsing guide, the message should contain it.
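If you want a quick sanity check once you're on the latest source, something along these lines should show field 128 surviving both creation and a parse round trip. The config path, message type, and the type/length used for field 128 below are placeholders; use whatever your XML actually defines for that field:

import com.solab.iso8583.IsoMessage;
import com.solab.iso8583.IsoType;
import com.solab.iso8583.MessageFactory;
import com.solab.iso8583.parse.ConfigParser;

public class Field128Check {
    public static void main(String[] args) throws Exception {
        // "config.xml" stands in for your own template/parsing configuration
        MessageFactory mf = ConfigParser.createFromClasspathConfig("config.xml");

        // New message: with the fix, field 128 should be kept on the message
        IsoMessage m = mf.newMessage(0x200);
        m.setValue(128, "12345678", IsoType.ALPHA, 8); // type and length are illustrative
        System.out.println("has 128 after setValue: " + m.hasField(128));

        // Round trip: write the message out and parse it back with the same factory
        byte[] data = m.writeData();
        IsoMessage parsed = mf.parseMessage(data, 0); // 0 = no ISO header configured
        System.out.println("has 128 after parse:    " + parsed.hasField(128));
    }
}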
However, I've encountered certain ISO8583 implementations in which a message has the 128 field set in the bitmap but it's really not in the message. In these cases j8583 can't parse the message because there's missing data. I'm still trying to figure out how to handle this.
When you find any bugs in j8583, please post them on the project page, so I get notified and can fix them. I don't usually look for j8583-tagged questions on this site (but I should probably start doing so).

Related

JAVA EWS: setExtendedProperty with MapiPropertyType.Binary

I have to enhance an existing Java app to store deadlines in several calendars, all owned (created) by the same shared mailbox.
Synchronisation of the deadlines between the app and the Outlook calendars is no problem.
Apart from syncing the dates, the Java app should also be able to send invitations to the specific calendar. I followed the description at https://willcode4foodblog.wordpress.com/2012/04/13/understanding-sharing-invitation-requests-ews-managed-api-1-2-part-1/ and had to port that code from C# to Java. On sendAndSaveCopy I always receive this exception:
microsoft.exchange.webservices.data.core.exception.service.remote.ServiceRequestException: The request failed. An internal server error occurred. The operation failed.
at microsoft.exchange.webservices.data.core.request.SimpleServiceRequestBase.internalExecute(SimpleServiceRequestBase.java:74)
at microsoft.exchange.webservices.data.core.request.MultiResponseServiceRequest.execute(MultiResponseServiceRequest.java:158)
at microsoft.exchange.webservices.data.core.ExchangeService.sendItem(ExchangeService.java:789)
at microsoft.exchange.webservices.data.core.service.item.EmailMessage.internalSend(EmailMessage.java:156)
at microsoft.exchange.webservices.data.core.service.item.EmailMessage.sendAndSaveCopy(EmailMessage.java:300)
Using the debugger I found out that all extended properties of MapiPropertyType.Binary have a null value. For example:
byte[] binInitiatorEntryId = hexStringToByteArray(initiatorEntryID);
ExtendedPropertyDefinition pidLidSharingInitiatorEntryId =
new ExtendedPropertyDefinition(propertySetSharing, 0x8A09, MapiPropertyType.Binary);
invitationRequest.setExtendedProperty(pidLidSharingInitiatorEntryId, binInitiatorEntryId);
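For completeness, hexStringToByteArray is nothing exotic; it is roughly the following (assuming an even-length string of hex digits with no separators):

private static byte[] hexStringToByteArray(String s) {
    // Converts e.g. "0A1B" to {0x0A, 0x1B}
    int len = s.length();
    byte[] data = new byte[len / 2];
    for (int i = 0; i < len; i += 2) {
        data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4)
                + Character.digit(s.charAt(i + 1), 16));
    }
    return data;
}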
So in the debugger the extended property exists, but the value is "null".
All other fields that are noted in the example look OK in the debugger. It's just that the binaries are all "null". I also compared the sharing_metadata.xml attachment with one created by Outlook, and they are identical. OK, there are a few fields I could still play with (for example: do I have to use a "special folder" for pidLidSharingFlavor? I didn't find any explanation in the specs of what a special folder is). But since it is obvious that a "null" value for MapiPropertyType.Binary is not correct, it does not make sense to check other possibilities yet.
So mainly there are two questions regarding this issue which I hope someone with a bit more experience could explain:
Question 1: Is there any special way in Java to store extended properties of MapiPropertyType.Binary?
Question 2: Is there any way to get more information about the "internal server error occurred" from EWS? Even enhancing the trace does not give any more information besides the XML representation of the message.
Thanks in advance.
ChriS

"Strict" Avro Parsing Mode (No dropping additional fields)

This is tangentially related to avro json additional field
The issue I have is that JSON Avro decoding allows additional fields on the root-level record while disallowing them on inner records, where they cause a parsing failure. In the current project I work on, we have a requirement that we cannot drop any data, which means I need to find a solution somehow.
See this code example https://gist.github.com/GrafBlutwurst/4d5c108b026b34ce83d2569bc8991b3d
(Avro 1.8.2)
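In case the gist isn't reachable, here's a minimal sketch of the kind of decoding I mean; the schema and payloads are made up, and the comments just restate the behaviour described above:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.JsonDecoder;

public class JsonExtraFields {
    public static void main(String[] args) throws Exception {
        // Hypothetical schema: an outer record containing one nested record
        String schemaJson = "{\"type\":\"record\",\"name\":\"Outer\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"string\"},"
                + "{\"name\":\"inner\",\"type\":{\"type\":\"record\",\"name\":\"Inner\","
                + "\"fields\":[{\"name\":\"value\",\"type\":\"int\"}]}}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);

        // Extra field at the root level: decodes without complaint, "surprise" is silently dropped
        String rootExtra = "{\"id\":\"a\",\"surprise\":true,\"inner\":{\"value\":1}}";
        JsonDecoder d1 = DecoderFactory.get().jsonDecoder(schema, rootExtra);
        System.out.println(reader.read(null, d1));

        // Same extra field inside the nested record: fails with an AvroTypeException
        String nestedExtra = "{\"id\":\"a\",\"inner\":{\"value\":1,\"surprise\":true}}";
        JsonDecoder d2 = DecoderFactory.get().jsonDecoder(schema, nestedExtra);
        System.out.println(reader.read(null, d2));
    }
}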
Does anyone know if there's a "strict" mode for the Avro parser or something similar? This ticket also seems somewhat related: https://issues.apache.org/jira/browse/AVRO-2034
Thanks!
EDIT: After more research, it seems there's a PR open to fix this, https://github.com/apache/avro/pull/321, but only for Ruby.
EDIT II: It most likely is a parser bug. It's not only an issue in nested objects but also when the string contains several JSON objects and the first one contains additional fields. There's a drain method that is supposed to pop leftover tokens from the stack, but it doesn't seem to work, as the current parsing position is always 1 (top of the stack) when it's entered. As of yet I haven't figured out why.

JTidy reports "3 errors were found!"... but does not say what they are

I have a large block of programmatically generated HTML. I ran it through Tidy (version r938) with the following Java code:
StringReader inStr = new StringReader(htmlInput);
StringWriter outStr = new StringWriter();
Tidy tidy = new Tidy();
tidy.setXHTML(true);
tidy.parseDOM(inStr, outStr);
I get the following output:
InputStream: Document content looks like HTML 4.01 Transitional
247 warnings, 3 errors were found!
This document has errors that must be fixed before
using HTML Tidy to generate a tidied up version.
Trouble is, Tidy doesn't tell me what 3 errors it found.
I'm fibbing here a little. The output above actually follows a long list of all 247 warnings (mostly about trimming out empty div elements). I can suppress those with tidy.setShowWarnings(false); either way, I see no error report, so I can't figure out what I need to fix. 300 KB of HTML is too much for me to eyeball.
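One thing that at least makes the report searchable is redirecting it to a writer via setErrout; a minimal sketch (it's still the same report text, with no detail on the three errors):

import java.io.PrintWriter;
import java.io.StringReader;
import java.io.StringWriter;
import org.w3c.tidy.Tidy;

StringWriter report = new StringWriter();
Tidy tidy = new Tidy();
tidy.setXHTML(true);
tidy.setShowWarnings(false);                   // drop the 247 trim warnings
tidy.setErrout(new PrintWriter(report, true)); // send Tidy's report here instead of the console
tidy.parseDOM(new StringReader(htmlInput), new StringWriter());
System.out.println(report);                    // still just "3 errors were found!" with no detail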
I've tried numerous approaches to finding the error. I can't run it through validate.w3.org, sadly, as the HTML file is on a proprietary network. The most informative approach was to open it in IntelliJ IDEA; this revealed a dozen or so duplicate div IDs, which I fixed. Errors still occurred.
I've looked around for other mentions of this problem. While I find plenty of hits on things like "How can I get the error/warning messages out of the parsed HTML using JTidy?", they all appear to be asking for dissimilar things, or assume conditions that simply aren't holding for me. I'm getting warnings just fine, for example; it's the errors I need, and they're not being reported, even if I call setShowErrors(100) or something.
Am I going to have to dive into Tidy's source code and debug it, starting where it reports errors? Or is there something much simpler I could do?
Here's what I ended up doing to track down the errors:
Download JTidy's source. Most people should be able to go straight to the source.
Unzip the source into my dev area, right on top of my existing source code. This also meant removing the Maven entry for JTidy from my pom.xml. (It also meant beating IntelliJ into submission (editing the relevant .iml files and restarting it a lot) when it got extremely confused by this.)
Set a breakpoint in Report.error(). The first line of org.w3c.tidy.Report.error() increments lexer.errors; error() is called from many places in the lexer.
Run my program in debug mode. Expect this to take a little while if the input HTML is large; a 300k file took around 10-15 seconds on my machine to stop on an error that turned out to be at the very end of the file.
Look at the contents of lexbuf. lexbuf is a byte array, so your IDE might not show it as text. It might also be large. You probably want to look at the index the lexer was examining within lexbuf. If you have to, take that section of the byte array and cross-reference it with an ASCII table to get the text (or see the snippet after these steps).
Search for that text in your HTML. Assuming it appears only once, there's your error. (In my case, it appeared exactly three times, and sure enough, I had three errors reported.)
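For the byte-to-text step, evaluating something like the following in the debugger (with start and end set to the positions you observed the lexer using) avoids the ASCII-table exercise:

// start/end are the positions you observed the lexer using within lexbuf
String context = new String(java.util.Arrays.copyOfRange(lexbuf, start, end),
        java.nio.charset.StandardCharsets.US_ASCII);
System.out.println(context);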
This was much more involved than it probably should have been. I suspect Report.error() was being called inappropriately.
In my case, error() was called with the constant BAD_CDATA_CONTENT. This constant is used only by Report.warning(). error() doesn't know what to do with it, and just exits silently with no message at all. If I change the call in Lexer.getCDATA() from error() to warning(), I get the exact line and column of my error. (I also get what appears to be reasonably well-formed XHTML, instead of an empty document.)
I'd submit a ticket to the JTidy project with some suggestions, but SourceForge isn't letting me log in for some reason. So, here:
Given that this "error" appears not to doom the document to unparseability, I'll tentatively suggest that that call be made a warning instead. (In my specific case, it was an HTML tag inside a string constant or comment inside a script element; shouldn't have hurt anything. I asked another question about it, just in case.)
Report.error() should have a default case that reports an unhandled error code if it gets one.
Hope this helps anyone else having what I'm guessing is a rather esoteric problem.

How to parse multiple OBR segments in HL7 using HAPI

The following text is the HL7 message. I am able to parse many segments except the NTE segments. I'm using HAPI to parse the HL7 messages. I'm a newbie to HL7, so can anyone suggest the relevant classes in HAPI and how to parse NTE segments with them? It would be better if the explanation came with a few examples.
MSH|^~\&|LCS|LCA|LIS|TEST9999|199807311532||ORU^R01|3629|P|2.2
PID|2|2161348462|20809880170|1614614|20809880170^TESTPAT||19760924|M|||^^^^
00000-0000|||||||86427531^^^03|SSN# HERE
ORC|NW|8642753100012^LIS|20809880170^LCS||||||19980727000000|||HAVILAND
OBR|1|8642753100012^LIS|20809880170^LCS|008342^UPPER RESPIRATORY
CULTURE^L|||19980727175800||||||SS#634748641 CH14885 SRC:THROA
SRC:PENI|19980727000000||||||20809880170||19980730041800||BN|F
OBX|1|ST|008342^UPPER RESPIRATORY CULTURE^L||FINALREPORT|||||N|F||| 19980729160500|BN
ORC|NW|8642753100012^LIS|20809880170^LCS||||||19980727000000|||HAVILAND
OBR|2|8642753100012^LIS|20809880170^LCS|997602^.^L|||19980727175800||||G|||
19980727000000||||||20809880170||19980730041800|||F|997602|||008342
OBX|2|CE|997231^RESULT 1^L||M415|||||N|F|||19980729160500|BN
NTE|1|L|MORAXELLA (BRANHAMELLA) CATARRHALIS
NTE|2|L| HEAVY GROWTH
NTE|3|L| BETA LACTAMASE POSITIVE
OBX|3|CE|997232^RESULT 2^L||MR105|||||N|F|||19980729160500|BN
NTE|1|L|ROUTINE RESPIRATORY FLORA
EDITED
Here I am supposed to parse multiple OBR segments, can anybody please guide me?
It looks like the message you have is valid, but the issue that you may be having is with the formatting of the sample. It looks like a couple of the lines were wrapped. If you properly format them, then the message can be parsed properly.
In HL7 2.x, all new lines must start with a segment identifier (e.g. MSH, PID, OBX, ...). If the line does not start with one of these identifiers, then the parser will not know how to interpret that line or the remainder of the message.
If you are using HAPI and looking to test messages, I would recommend using the HAPI TestPanel. It is an extremely easy-to-use tool that can help you verify a message and test message transmission.
Below is a screenshot of what the message looked like in the test panel, once the formatting was cleaned up.
I have solved the issue by adding a loop for each of the other segments together with its NTE segments; every segment has optional NTE segments, so I iterated over each segment. Now it's working fine.
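For reference, here is roughly what such a loop can look like using only HAPI's generic Group API, so it works no matter which ORU_R01 group the NTE segments hang off; Terser.get, Group.getAll and Group.getNames are the only HAPI calls used:

import java.util.ArrayList;
import java.util.List;

import ca.uhn.hl7v2.HL7Exception;
import ca.uhn.hl7v2.model.Group;
import ca.uhn.hl7v2.model.Message;
import ca.uhn.hl7v2.model.Segment;
import ca.uhn.hl7v2.model.Structure;
import ca.uhn.hl7v2.parser.PipeParser;
import ca.uhn.hl7v2.util.Terser;

public class NteWalker {

    // Recursively walk the parsed message and collect every NTE segment,
    // no matter which OBR/OBX group it belongs to
    static void collectNte(Group group, List<Segment> out) throws HL7Exception {
        for (String name : group.getNames()) {
            for (Structure s : group.getAll(name)) {
                if (s instanceof Group) {
                    collectNte((Group) s, out);
                } else if (s instanceof Segment && "NTE".equals(s.getName())) {
                    out.add((Segment) s);
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String hl7 = "...";  // the ORU^R01 text above, with the wrapped lines rejoined
        Message msg = new PipeParser().parse(hl7);

        List<Segment> ntes = new ArrayList<Segment>();
        collectNte(msg, ntes);
        for (Segment nte : ntes) {
            // NTE-3 holds the comment text (field 3, repetition 0, component 1)
            System.out.println(Terser.get(nte, 3, 0, 1, 1));
        }
    }
}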

What does the org.apache.xmlbeans.XmlException with a message of "Unexpected element: CDATA" mean?

I'm trying to parse and load an XML document; however, I'm getting this exception when I call the parse method on the class that extends XmlObject. Unfortunately, it gives me no idea of which element is unexpected, which is my problem.
I am not able to share the code for this, but I can try to provide more information if necessary.
Not being able to share code or input data, you may consider the following approach. It's a very common dichotomic approach to diagnosis, I'm afraid, and you may well have started it already...
Try to reduce the size of the input XML by removing parts of it, ensuring that the underlying XML document remains well-formed and possibly valid (if validity is required by your parser's setup). If you maintain validity, this may require altering [a copy of] the schema (DTD or other), as mandatory elements might be removed during the cut-and-try approach... BTW, the error message seems to hint more at a validation issue than at a basic well-formedness issue.
Unless one has a particular hunch as to the area that triggers the parser's complaint, we typically remove (or re-add, when things start working) about half of what was previously cut or re-added.
You may also start by trying a mostly empty file, to assert that the parser works at all... There again the idea is to "divide and conquer": is the issue in the XML input or in the parser? (Remembering that there could be two issues, one in the input and one in the parser, and that such issues could even be unrelated...)
Sorry to belabor basic diagnostic techniques which you may well be fluent with...
You should check the arguments you are passing to the parse() method, i.e. whether you are passing a String, a File, or an InputStream directly, and that you are using the matching overload (File/InputStream/String) accordingly.
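Beyond checking the overload, you can also ask XMLBeans to collect detailed errors rather than just throwing; a rough sketch using the generic XmlObject factory (a generated type's Factory.parse should accept the same XmlOptions):

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.apache.xmlbeans.XmlError;
import org.apache.xmlbeans.XmlException;
import org.apache.xmlbeans.XmlObject;
import org.apache.xmlbeans.XmlOptions;

public class XmlBeansDiagnostics {
    public static void main(String[] args) throws Exception {
        List<XmlError> errors = new ArrayList<XmlError>();
        XmlOptions opts = new XmlOptions()
                .setErrorListener(errors)   // collect errors instead of losing them
                .setLoadLineNumbers();      // keep line info for better messages

        try {
            XmlObject doc = XmlObject.Factory.parse(new File("input.xml"), opts);
            // Validation errors (e.g. unexpected elements) end up in the listener
            if (!doc.validate(opts)) {
                for (XmlError e : errors) {
                    System.out.println(e.getLine() + ":" + e.getColumn() + " " + e.getMessage());
                }
            }
        } catch (XmlException xe) {
            // Parse-time failures also carry XmlError objects with locations
            System.out.println(xe.getErrors());
        }
    }
}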
The exception is caused by the length of the XML file. If you add or remove one character from the file, the parser will succeed.
The problem occurs within the third-party PiccoloLexer library that XMLBeans relies on. It has been fixed in revision 959082 but has not been applied to the xbean 2.5 jar.
XMLBeans - Problem with XML files if length is exactly 8193 bytes
Issue reported on the XMLBeans Jira
