While studying the SIP protocol, I got to the topic of the H264 codec. I began to receive data in the form of RTP packets and managed to successfully extract the following data from each packet: payload type (in my case 97), timestamp, sequence number and payload data (byte array). Next, I want to render the images encoded in this data. On the Android platform I use the android.media.MediaCodec class, following examples like "MediaCodec failing on S7".
I create an instance of MediaCodec, configure it with a MediaFormat, then transfer the received bytes into an input buffer and wait for output via dequeueOutputBuffer. In my case, dequeueOutputBuffer always returns MediaCodec.INFO_TRY_AGAIN_LATER.
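To illustrate the flow described above, here is a minimal sketch of the synchronous MediaCodec API. The Surface, the 1920x1080 size and the feedNal helper name are assumptions of the sketch, not values taken from the stream:

import android.media.MediaCodec
import android.media.MediaFormat
import android.view.Surface

// Sketch only: create and configure an H264 decoder that renders to a Surface.
fun createDecoder(surface: Surface): MediaCodec {
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080)
    val codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
    codec.configure(format, surface, null, 0)
    codec.start()
    return codec
}

// Feed one complete NAL unit and drain any output frame that is ready.
fun feedNal(codec: MediaCodec, nal: ByteArray, presentationTimeUs: Long) {
    val inIndex = codec.dequeueInputBuffer(10_000)
    if (inIndex >= 0) {
        val buffer = codec.getInputBuffer(inIndex)!!
        buffer.clear()
        buffer.put(byteArrayOf(0, 0, 0, 1))   // Annex B start code before every NAL unit
        buffer.put(nal)
        codec.queueInputBuffer(inIndex, 0, nal.size + 4, presentationTimeUs, 0)
    }
    val info = MediaCodec.BufferInfo()
    val outIndex = codec.dequeueOutputBuffer(info, 10_000)
    if (outIndex >= 0) {
        codec.releaseOutputBuffer(outIndex, true)   // render the frame to the Surface
    }
}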
I also tried processing the bytes before passing them to MediaCodec. I determined nal_unit_type and got values 7, 8 and 28, and I also determined the start and end bits in each packet. I tried concatenating all packets from the one carrying the start bit to the one carrying the end bit and feeding the reassembled unit to MediaCodec. The result is the same: dequeueOutputBuffer always returns MediaCodec.INFO_TRY_AGAIN_LATER.
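Roughly, the reassembly looks like this (a sketch per RFC 3984: single NAL unit packets for types 1-23, FU-A fragments for type 28; the onNal callback and in-order packet delivery are assumptions of the sketch):

// Handle one RTP payload; completed NAL units are handed to onNal.
fun handlePayload(payload: ByteArray, fragments: MutableList<ByteArray>, onNal: (ByteArray) -> Unit) {
    val nalType = payload[0].toInt() and 0x1F
    when {
        nalType in 1..23 -> onNal(payload)        // single NAL unit packet: payload is the whole NAL
        nalType == 28 -> {                        // FU-A fragment
            val fuHeader = payload[1].toInt()
            val start = fuHeader and 0x80 != 0
            val end = fuHeader and 0x40 != 0
            if (start) {
                fragments.clear()
                // Rebuild the original NAL header: F and NRI from the FU indicator, type from the FU header.
                val nalHeader = (payload[0].toInt() and 0xE0) or (fuHeader and 0x1F)
                fragments.add(byteArrayOf(nalHeader.toByte()))
            }
            fragments.add(payload.copyOfRange(2, payload.size))
            if (end) {
                onNal(fragments.reduce { acc, part -> acc + part })
                fragments.clear()
            }
        }
    }
}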
Tell me what I missed.
The server describes the video in the SDP as follows:
m=video 23542 RTP/AVP 97
b=TIAS:4096000
a=content:main
a=rtpmap:97 H264/90000
a=fmtp:97 profile-level-id=428028; max-fs=8192; packetization-mode=0; max-br=4096; max-fps=3000
a=sendrecv
Edit #1
For example, here is the first received packet payload (as unsigned bytes):
27 42 00 28 95 a0 1e 00 89 f9 70 11 00 00 03 00
42 00 28 95 a0 1e 00 89 f9 70 11 00 00 03 00 01
00 28 95 a0 1e 00 89 f9 70 11 00 00 03 00 01 00
I would venture to suggest that this is a Single NAL Unit Packet. This packet has no padding.
Following RFC 3984 §1.3, I read the first byte:
// +---------------+
// |0|1|2|3|4|5|6|7|
// +-+-+-+-+-+-+-+-+
// |F|NRI|  Type   |
// +-+-+-+-+-+-+-+-+
val nal_unit_type = payload[0].toInt() and 0b0_0_0_1_1_1_1_1
nal_unit_type == 7, so I decided that this packet contains Sequence Parameter Set data. Next, I want to decode the SPS and get useful information from it (width and height, frame rate, ...).
I read the second byte (treating it as an FU header):
// +---------------+
// |0|1|2|3|4|5|6|7|
// +-+-+-+-+-+-+-+-+
// |S|E|R|  Type   |
// +-+-+-+-+-+-+-+-+
val start_bit = payload[1].toInt() and 0b1_0_0_0_0_0_0_0 != 0
val end_bit = payload[1].toInt() and 0b0_1_0_0_0_0_0_0 != 0
start_bit == false and end_bit == true
Starting from the third byte (payload[2]), I parse the SPS.
Edit #2
I was wrong to assume that for nal_unit_type 7 or 8 the second byte is an FU header (with start and end bits). The second byte of the payload is already the first byte of the SPS. With that, I managed to successfully decode the SPS and, for example, find out that it encodes an image size of 1920x1080 (as expected). But this has not yet helped me render the resulting video stream to the Android SurfaceView.
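For reference, a common way to hand the SPS and PPS to MediaCodec is as codec-specific data when building the MediaFormat. A minimal sketch, assuming sps and pps hold the raw NAL units (including the one-byte NAL header) taken from the type 7 and type 8 packets, and using the 1920x1080 size decoded above:

import android.media.MediaFormat
import java.nio.ByteBuffer

fun formatWithCsd(sps: ByteArray, pps: ByteArray): MediaFormat {
    val startCode = byteArrayOf(0, 0, 0, 1)
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080)
    // csd-0 = start code + SPS, csd-1 = start code + PPS
    format.setByteBuffer("csd-0", ByteBuffer.wrap(startCode + sps))
    format.setByteBuffer("csd-1", ByteBuffer.wrap(startCode + pps))
    return format
}

Alternatively, the SPS and PPS NAL units can be queued as ordinary input buffers (with start codes) before the first slice.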
So I am trying to read my transportation card using what I have learned so far about smartcards.
My ATR is: 3B 6F 00 00 80 5A 0A 07 06 20 04 01 03 01 F4 1F 82 90 00
When I looked it up in an ATR parser, it didn't give me much information.
When I selected the MF like this: "00 A4 04 00"
I got the response "90 00" - success, but no data.
How can I go on from here to read files on my card?
Note: it would be nice if someone could give me a link to a book or guide about smart cards; I found a nice one about EMV cards, but it does not apply to all smartcards.
https://www.eftlab.com/knowledge-base/171-atr-list-full/ lists cards with ATR data similar to yours.
You can try selecting the dedicated file using each of the file identifiers below and see what happens (a sketch follows the list):
0x0002
0x0003
0x2000
0x2001
0x2004
0x2010
0x2020
0x202a
0x202b
0x202c
0x202d
0x2030
0x2040
0x2050
0x2069
0x206a
0x20f0
0x2100
0x2101
0x2104
0x2110
0x2120
0x2140
0x2150
0x2169
0x21f0
0x2f10
0x3f04
0xfeff
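A minimal sketch of trying those identifiers from the JVM via javax.smartcardio (it assumes a PC/SC reader is attached and uses SELECT with P1 = 00, i.e. select by file identifier; your card may expect a different P1):

import javax.smartcardio.CommandAPDU
import javax.smartcardio.TerminalFactory

fun main() {
    val fids = listOf(0x0002, 0x0003, 0x2000, 0x2001, 0x2004, 0x2010) // ...and the rest of the list above
    val terminal = TerminalFactory.getDefault().terminals().list().first()
    val card = terminal.connect("*")          // T=0 or T=1, whichever the card offers
    val channel = card.basicChannel

    for (fid in fids) {
        val data = byteArrayOf((fid shr 8).toByte(), fid.toByte())
        // SELECT by file identifier: CLA=00 INS=A4 P1=00 P2=00 Lc=02 DATA=FID
        val response = channel.transmit(CommandAPDU(0x00, 0xA4, 0x00, 0x00, data))
        println("FID %04X -> SW=%04X".format(fid, response.sw))
    }
    card.disconnect(false)
}

A status word of 90 00 (or 61 xx) tells you the file exists and was selected; 6A 82 means file not found.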
Hope you can continue from there.
Select the MF first by sending an APDU of the form CLA INS P1 P2 Lc DATA, for example:
CLA  00
INS  A4
P1   04 - select by name
P2   00 - select first or only occurrence
Lc   length of the FID
Data the FID
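With javax.smartcardio, that SELECT command can be built and sent like this (a small sketch; fileName stands for the Data field above, i.e. whatever name/FID your card expects):

import javax.smartcardio.CardChannel
import javax.smartcardio.CommandAPDU
import javax.smartcardio.ResponseAPDU

// CLA=00, INS=A4, P1=04 (select by name), P2=00 (first or only occurrence);
// Lc and Data are derived from the fileName byte array.
fun selectByName(channel: CardChannel, fileName: ByteArray): ResponseAPDU =
    channel.transmit(CommandAPDU(0x00, 0xA4, 0x04, 0x00, fileName))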
I have the following base 64 encoded string:
e4EdYQYDTpC7sN0K87elHA==
In JavaScript, window.atob gives me {aN»°Ý ó·¥ however when running the below code in Java it gives me
{�aN����
�
// Requires: import java.util.Base64; import java.util.Base64.Decoder;
String encodedString = "e4EdYQYDTpC7sN0K87elHA==";
Decoder decoder = Base64.getDecoder();
byte[] decodedByte = decoder.decode(encodedString);
// new String(byte[]) decodes the bytes with the platform default charset.
String decodedString = new String(decodedByte);
System.out.println(decodedString);
As you can see, the output is extended ASCII, but I cannot seem to replicate the result of window.atob in Java.
The byte output from Java is:
123 -127 29 97 6 3 78 -112 -69 -80 -35 10 -13 -73 -91 28
While the output should be:
123 194 129 029 097 006 003 078 194 144 194 187 194 176 195 157 032 195 179 194 183 194 165 028
Any ideas on what needs to be done in order to replicate the result?
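For what it's worth, window.atob returns a "binary string" in which each decoded byte becomes one character with code 0-255; re-encoding that string as UTF-8 is what produces the two-byte 194/195-prefixed values in the listing above. A minimal Kotlin sketch of reproducing both views on the JVM (the string literal is the one from the question):

import java.util.Base64

fun main() {
    val decoded = Base64.getDecoder().decode("e4EdYQYDTpC7sN0K87elHA==")

    // Equivalent of the atob "binary string": one char per byte, codes 0..255.
    val binaryString = String(decoded, Charsets.ISO_8859_1)
    println(binaryString)

    // Re-encoding that string as UTF-8 produces the 194/195-prefixed byte sequence.
    println(binaryString.toByteArray(Charsets.UTF_8).joinToString(" ") { (it.toInt() and 0xFF).toString() })
}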
I downloaded stanford-corenlp-full-2015-12-09.
And I created a training model with the following command:
java -mx8g edu.stanford.nlp.sentiment.SentimentTraining -numHid 25 -trainPath train.txt -devPath dev.txt -train -model model.ser.gz
When I finished training, I found many files in my directory.
[screenshot: the list of generated model files]
Then I used the evaluation tool from the package and I ran the code like this:
java -cp * edu.stanford.nlp.sentiment.Evaluate -model model-0024-79.82.ser.gz -treebank test.txt
The test.txt was from trainDevTestTrees_PTB.zip. This is the output:
F:\trainDevTestTrees_PTB\trees>java -cp * edu.stanford.nlp.sentiment.Evaluate -model model-0024-79.82.ser.gz -treebank test.txt
EVALUATION SUMMARY
Tested 82600 labels
65331 correct
17269 incorrect
0.790932 accuracy
Tested 2210 roots
890 correct
1320 incorrect
0.402715 accuracy
Label confusion matrix
Guess/Gold 0 1 2 3 4 Marg. (Guess)
0 551 340 87 32 6 1016
1 956 5348 2476 686 191 9657
2 354 2812 51386 3097 467 58116
3 146 744 2525 6804 1885 12104
4 1 11 74 379 1242 1707
Marg. (Gold) 2008 9255 56548 10998 3791
0 prec=0.54232, recall=0.2744, spec=0.99423, f1=0.36442
1 prec=0.5538, recall=0.57785, spec=0.94125, f1=0.56557
2 prec=0.8842, recall=0.90871, spec=0.74167, f1=0.89629
3 prec=0.56213, recall=0.61866, spec=0.92598, f1=0.58904
4 prec=0.72759, recall=0.32762, spec=0.9941, f1=0.4518
Root label confusion matrix
Guess/Gold 0 1 2 3 4 Marg. (Guess)
0 50 60 12 9 3 134
1 161 370 147 94 36 808
2 31 103 102 60 32 328
3 36 97 123 305 265 826
4 1 3 5 42 63 114
Marg. (Gold) 279 633 389 510 399
0 prec=0.37313, recall=0.17921, spec=0.9565, f1=0.24213
1 prec=0.45792, recall=0.58452, spec=0.72226, f1=0.51353
2 prec=0.31098, recall=0.26221, spec=0.87589, f1=0.28452
3 prec=0.36925, recall=0.59804, spec=0.69353, f1=0.45659
4 prec=0.55263, recall=0.15789, spec=0.97184, f1=0.24561
Approximate Negative label accuracy: 0.638817
Approximate Positive label accuracy: 0.697140
Combined approximate label accuracy: 0.671925
Approximate Negative root label accuracy: 0.702851
Approximate Positive root label accuracy: 0.742574
Combined approximate root label accuracy: 0.722680
The fine-grained and positive/negative accuracy was quite different from the paper "Socher, R., Perelygin, A., Wu, J.Y., Chuang, J., Manning, C.D., Ng, A.Y. and Potts, C., 2013, October. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP) (Vol. 1631, p. 1642)."
The paper reports higher fine-grained and positive/negative accuracy than I obtained.
[screenshot: the accuracy table from the paper]
Did I do something wrong in my procedure? Why is my result different from the paper's?
The short answer is that the paper used a different system written in Matlab, and the Java system does not match the paper. However, we do distribute the binary model we trained in Matlab with the English models jar. So you can RUN that binary model with Stanford CoreNLP, but you cannot TRAIN a model with similar performance with Stanford CoreNLP at this time.
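For what it's worth, running the distributed model through the pipeline looks roughly like this (a sketch; it assumes the stanford-corenlp jar and the English models jar are on the classpath, and the example sentence is my own):

import edu.stanford.nlp.ling.CoreAnnotations
import edu.stanford.nlp.pipeline.Annotation
import edu.stanford.nlp.pipeline.StanfordCoreNLP
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations
import java.util.Properties

fun main() {
    val props = Properties()
    props.setProperty("annotators", "tokenize, ssplit, parse, sentiment")
    val pipeline = StanfordCoreNLP(props)          // loads the bundled sentiment model

    val annotation = Annotation("This movie was surprisingly good.")
    pipeline.annotate(annotation)
    for (sentence in annotation.get(CoreAnnotations.SentencesAnnotation::class.java)) {
        // Prints the predicted sentiment class for the sentence, e.g. "Positive".
        println(sentence.get(SentimentCoreAnnotations.SentimentClass::class.java))
    }
}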
I would like to get the count of unique values (numbers, text, etc.) as a result.
A B
2 BADER 111
3 FAISA 112
4 NASSE 113
5 NASSE 113
6 MOHS 121
7 ASI 122
8 AHME 100
9 AHME 100
10 AHME 100
11 ASI 122
The result should be as below:
A B
2 BADER 111
3 FAISA 112
4 NASSE 113
5 NASSE 113
6 MOHS 121
7 ASI 122
8 AHME 100
9 AHME 100
10 AHME 100
11 ASI 122
6 6
For the number of different values in A2:A11 try this formula
=SUMPRODUCT((A2:A11<>"")/COUNTIF(A2:A11,A2:A11&""))
That will work for numeric or text values
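To see why it returns 6 for this data: COUNTIF(A2:A11,A2:A11&"") counts how often each name occurs (BADER 1, FAISA 1, NASSE 2, NASSE 2, MOHS 1, ASI 2, AHME 3, AHME 3, AHME 3, ASI 2), so the formula sums 1 + 1 + 1/2 + 1/2 + 1 + 1/2 + 1/3 + 1/3 + 1/3 + 1/2 = 6, i.e. each distinct value contributes exactly 1. The same formula applied to B2:B11 also gives 6.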