When I use JPCap to forge an ARP request, I notice that JPCap adds a trailer of 18 zero bytes to the tail of the ARP packet, and I'm not interested in sending this data. Is there a way to prevent this padding?
The zeros you're seeing are padding for the Ethernet frame. Ethernet frames have a minimum payload size of 46 bytes (the reason is historical: the sender needed to keep transmitting long enough to detect collisions on shared media). An ARP packet is only 28 bytes, so 18 bytes of zeros are appended to reach the minimum. As far as I know there's no way to prevent this, and doing so would violate the Ethernet specification.
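If the goal is simply to ignore the trailer on the receiving side, you can strip the padding by keeping only the fixed-length ARP portion of the frame. A minimal sketch in plain Java, assuming you already have the raw Ethernet frame bytes (the 28-byte length applies to IPv4-over-Ethernet ARP):

```java
import java.util.Arrays;

public class ArpTrim {
    // Ethernet header is 14 bytes; an IPv4-over-Ethernet ARP packet is a fixed 28 bytes.
    private static final int ETH_HEADER_LEN = 14;
    private static final int ARP_IPV4_LEN = 28;

    /** Returns just the ARP payload, dropping any Ethernet padding after it. */
    public static byte[] extractArp(byte[] frame) {
        if (frame.length < ETH_HEADER_LEN + ARP_IPV4_LEN) {
            throw new IllegalArgumentException("frame too short to contain an ARP packet");
        }
        return Arrays.copyOfRange(frame, ETH_HEADER_LEN, ETH_HEADER_LEN + ARP_IPV4_LEN);
    }
}
```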
Also see question at https://serverfault.com/questions/496324/arp-packet-received-larger-than-packet-sent-why
I am trying to send data from an Android phone in Host Card Emulation mode to a reader application. I understand the maximum size of an APDU is about 260 bytes. However, I need to send well beyond that (a few thousand bytes). I know I can divide the data and send it in "chunks", but I am worried about the cost of that on overall performance. Is there any way I can send an APDU bigger than 260 bytes? I don't mind a little hack if I have to. Cheers
To answer my own question: there are two types of APDUs according to the maximum amount of data they accommodate. Normal-length APDUs carry up to 256 bytes, while extended-length APDUs carry a payload of up to 65536 bytes. However, not all smartcards and readers support extended-length APDUs.
Now on the Android side of things, extended-length APDUs are not supported by the Android OS, even though most NFC controllers support them. This is therefore a software limitation, not a hardware one. See the getMaxTransceiveLength method in
https://android.googlesource.com/platform/packages/apps/Nfc/+/master/nci/src/com/android/nfc/dhimpl/NativeNfcManager
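From the reader side you can at least query at runtime whether extended length is available and how large a single transceive may be, and fall back to chunking otherwise. A minimal sketch using the standard IsoDep API; the chunk size, header headroom, and the wrapInApdu helper are illustrative assumptions, not a definitive implementation:

```java
import android.nfc.Tag;
import android.nfc.tech.IsoDep;
import java.io.IOException;
import java.util.Arrays;

public class ApduHelper {
    /** Sends a payload to the tag, splitting it into chunks the link can carry. */
    public static void sendLargePayload(Tag tag, byte[] payload) throws IOException {
        IsoDep isoDep = IsoDep.get(tag);
        isoDep.connect();
        try {
            boolean extended = isoDep.isExtendedLengthApduSupported();
            int maxLen = isoDep.getMaxTransceiveLength();
            // Leave some headroom for the APDU header (rough assumption);
            // if extended length is unavailable, fall back to small chunks.
            int chunkSize = extended ? maxLen - 10 : 250;
            for (int offset = 0; offset < payload.length; offset += chunkSize) {
                byte[] chunk = Arrays.copyOfRange(payload, offset,
                        Math.min(offset + chunkSize, payload.length));
                byte[] response = isoDep.transceive(wrapInApdu(chunk));
                // ... check the status word in 'response' here
            }
        } finally {
            isoDep.close();
        }
    }

    // Hypothetical helper: wrap raw data in whatever command APDU your applet expects.
    private static byte[] wrapInApdu(byte[] data) {
        return data;
    }
}
```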
I'm trying to read serial data from my Arduino using Java. I have followed the tutorial Arduino and Java.
So far I have it working, except I'm not reading the whole of the serial data at once. For example, the Arduino should be sending 0.71, but I read a 0 and then a .71, or some other combination. Sometimes it comes in fine, but more often than not the data is broken up.
I have a hunch that if I changed the data type of what I am sending to a byte, my program would be okay; however, I need float precision in the data I am transferring. How do I fix this problem?
Serial protocols such as the one used for serial over USB are byte oriented, not packet oriented. As such, there's no guarantee that a read will return the entire 'message' sent by the other end, or just part of it, as you're observing, because there's no concept of message or packet.
Instead, you need to delimit your messages in some way, such as by appending a newline, or prepend each message with a length field so you know how many bytes to read.
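A minimal sketch of the newline-delimited approach, assuming your serial library (RXTX, jSerialComm, or similar) already gives you an InputStream for the port:

```java
import java.io.IOException;
import java.io.InputStream;

public class LineReader {
    private final StringBuilder buffer = new StringBuilder();

    /**
     * Accumulates bytes from the serial InputStream and returns a complete
     * line once a newline arrives, or null if the line is still incomplete.
     */
    public String poll(InputStream serialIn) throws IOException {
        while (serialIn.available() > 0) {
            int b = serialIn.read();
            if (b == -1) {
                break;
            }
            if (b == '\n') {
                String line = buffer.toString().trim();
                buffer.setLength(0);
                return line;            // e.g. "0.71", ready for Float.parseFloat
            }
            buffer.append((char) b);
        }
        return null; // message not complete yet, try again on the next read
    }
}
```

On the Arduino side this just means ending each value with Serial.println instead of Serial.print, so every reading arrives with its own newline terminator.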
I do this by making my Arduino send 'frames' separated by a 'gap'. It is easy to configure a read timeout for the serial port (at least it is in Perl). So what I do is:
Allow data to be read for the duration of the data frame plus a few extra milliseconds:
[ (number of bytes) × 10 bits × 1000 ms / (baud rate) ] + 100 milliseconds
The gap between two values or frames being sent should then be longer than this value.
The program easily synchronizes to the data stream because of this strategic timeout.
I also add a simple preamble to my data so I can check data integrity.
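As a rough Java sketch of the timing rule above (the frame size and baud rate are placeholder values):

```java
public class SerialTiming {
    /**
     * Read timeout in milliseconds for one frame:
     * each byte costs roughly 10 bit times (start + 8 data + stop),
     * plus a 100 ms safety margin.
     */
    public static long readTimeoutMs(int frameBytes, int baudRate) {
        return (frameBytes * 10L * 1000L) / baudRate + 100L;
    }

    public static void main(String[] args) {
        // e.g. a 6-byte frame at 9600 baud -> ~106 ms
        System.out.println(readTimeoutMs(6, 9600));
    }
}
```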
I want to write an app in Java that lets two clients talk via webcam. The way it works is that each client connects to a webcam that takes pictures at a specified frame rate (maybe 20 per second), reduces the size and resolution, then sends them to the other client via UDP. My question is: should I send every picture in its own DatagramPacket? I've read that they can only hold about half a kilobyte at most, so should every picture be cut down that much, or should each one be split across several packets?
Are you sure you want to transmit whole images, instead of using an algorithm / codec that transfers only what needs to be updated?
If you choose the second option, you can take some ideas from this previous question and use an already tried-and-tested library for the purpose. I believe I'd go with the VLC Java bindings if I had to do it. You should evaluate which codec best fits your specific purpose (bitrate, quality, etc.).
If you nevertheless want to transmit whole images, I'd suggest you break them into UDP datagrams. Remember that they should be numbered or tagged in some way so that the client can reconstruct the image as packets arrive (they won't necessarily arrive in the order you sent them). You also need to decide what the client should do when some packets fail to arrive (discard the image, request the previous packet, etc.).
One last thought: the maximum UDP datagram size might not be the best choice either; your server and client should perhaps negotiate the datagram size depending on the speed of the transmission.
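A minimal sketch of the numbered-chunk idea with a plain DatagramSocket; the 1 KB chunk size, port, and the 8-byte index/count header layout are just assumptions for illustration:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class ImageSender {
    private static final int CHUNK_SIZE = 1024; // payload bytes per datagram (assumption)

    public static void sendImage(byte[] jpegBytes, InetAddress dest, int port) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            int totalChunks = (jpegBytes.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
            for (int i = 0; i < totalChunks; i++) {
                int start = i * CHUNK_SIZE;
                int len = Math.min(CHUNK_SIZE, jpegBytes.length - start);
                // 8-byte header: chunk index + total chunk count, so the
                // receiver can reassemble packets that arrive out of order.
                ByteBuffer buf = ByteBuffer.allocate(8 + len);
                buf.putInt(i).putInt(totalChunks).put(jpegBytes, start, len);
                socket.send(new DatagramPacket(buf.array(), buf.position(), dest, port));
            }
        }
    }
}
```

The receiver would collect chunks keyed by index until it has totalChunks of them, then reassemble the image or discard it if a timeout expires first.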
What you should be doing is encoding a video stream. Leave the network layer alone and let it handle fragmentation for you.
Also, if you are sending video over UDP, you will likely want to throw in a keyframe every 2 seconds or so.
Do not send each frame as its own image. Use a video compressor.
I am using a simple UDP connection.
I would like to know whether, by default, the connection has "carriage return" enabled or disabled, and how I could set that property.
Thanks,
Ray.
Eh, that's not entirely accurate. UDP isn't differentiated by virtue of sending text vs. binary. All network protocols ultimately send data as bit streams (binary). What typically differentiates it is that unlike TCP, there is no back and forth to establish sequence numbers for tracking packets, and no ACK flag to signal that a packet was received. UDP will send packets with no regard to whether or not they get to the destination.
Edit: Ray, maybe you should provide a little more detail about what you're trying to do. Carriage return is an ASCII character just like any other: it has a numerical representation and occupies a byte of space just like the other ASCII characters. So asking whether it's "enabled" for UDP transmission isn't really a valid question. Any series of bits can be sent via UDP, TCP, or any other protocol - which means UDP doesn't even understand what ASCII is, or the letter "b", or a carriage return. It's all just 1s and 0s, and UDP is aware only of IP addresses and port numbers - just enough to send your bits of data somewhere. What your application does with those bits is the real question.
UDP traffic is session/connection less. So you can't have a "connection" on UDP.
UDP is used to pass binary data rather than text and there is no way to disable carriage return or any other character.
UDP transmits binary data - if you encode \r and/or \n to bytes and add them to the message, they will be sent. No filtering, no conversion happens at this protocol layer.
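To illustrate that a carriage return is just another byte in the payload, here is a minimal sketch with DatagramSocket (the host and port are placeholders):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpSend {
    public static void main(String[] args) throws Exception {
        // If you want a carriage return / line feed, put it in the payload yourself.
        byte[] payload = "hello\r\n".getBytes(StandardCharsets.US_ASCII);
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress dest = InetAddress.getByName("192.168.1.10"); // placeholder host
            socket.send(new DatagramPacket(payload, payload.length, dest, 9000));
        }
        // The receiver gets exactly these bytes; UDP neither adds nor strips \r\n.
    }
}
```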
I am using a Java application to send UDP packets to an Android device. There I have another Java application that receives these UDP packets and displays its data - very simple.
Now I am working on some routing algorithms, so it would be nice to know how many hops a UDP packet made since it was sent. My idea is to just read the TTL (time-to-live) value of the packet and display it. Do you know if this is possible with pure Java? The DatagramPacket class doesn't give any hints at all.
I guess that this is not possible because this information might already have been removed at a lower layer, but I just want to be sure. :-)
The TTL field is, as you know, a feature of the underlying IP protocol (when used), not of UDP. So it makes sense for it not to be visible in the DatagramPacket API. However, I think you're right; it's not normally possible to get access to the IP packets through datagram-level APIs. You probably need to look into packet capture.
For your purpose, if I understand it correctly, it would be sufficient to manipulate the TTL on the sender: e.g. set the TTL of the sending socket to 1, 2, 3, 4, 5, and for each value send one message whose content is "1", "2", "3", "4" or "5", etc. Some will likely go missing at the receiver...
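Note that Java's standard DatagramSocket doesn't expose a per-packet unicast TTL setter (MulticastSocket.setTimeToLive only controls multicast scope), so a pure-Java version of this trick only covers the payload-encoding half; the TTL itself would have to be controlled at the OS level or with a raw-socket/native library. A rough sketch of the sender loop under that assumption (host and port are placeholders):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class TtlProbeSender {
    public static void main(String[] args) throws Exception {
        InetAddress dest = InetAddress.getByName("192.168.1.20"); // placeholder
        int port = 5005;                                          // placeholder
        try (DatagramSocket socket = new DatagramSocket()) {
            for (int ttl = 1; ttl <= 5; ttl++) {
                // NOTE: the actual IP TTL must be set outside of this code
                // (e.g. via OS configuration or a native raw-socket library);
                // here we only encode the intended value in the payload so the
                // receiver can tell which probes survived.
                byte[] payload = String.valueOf(ttl).getBytes(StandardCharsets.US_ASCII);
                socket.send(new DatagramPacket(payload, payload.length, dest, port));
            }
        }
        // On the receiver, the smallest value that arrives approximates the hop count.
    }
}
```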