I am writing an application in Java that sends commands to a smartcard and parses responses over an NFC interface. This application can be used both on Android and on PC.
Using a USB contactless card reader through the PC I have no trouble connecting and communicating with any card I throw at it.
Android is another matter though. Using the application through a Nexus S produces less desirable results, depending on the card.
Some cards will connect and communicate with a 100% success rate. Most cards I have attempted to use, however, have been very difficult even to connect to, let alone communicate with.
The NFC service on the Nexus S attempts to connect with the cards. It makes a continuous low pulsing sound, which (as far as I can tell) indicates that it cannot make a solid connection.
My current thought process is that the Nexus S has a lower powered NFC chip than the USB PC reader I'm using. From other articles I've read it seems as if different cards have different power requirements in order to use them.
How can I determine what power level is needed to power a card? Is it hidden somewhere in the ATR?
How can I determine what power level a particular NFC chip has? Is this documented somewhere?
This kind of problem is typically caused by (a combination of) any of the following:
Badly tuned antenna in the card
Microcontroller card requiring a lot of power
Weak RF field generated by the NFC phone
This results in bad antenna coupling between phone and card, which results in bad or no communication. A desktop reader typically does not have this kind of problem, as it generates a much more powerful field. NFC in a phone is quite low-power, and the RF field it generates is often on the edge of what is still permissible by ISO 14443. The NFC chip in the Nexus S, the NXP PN544, generates a weak RF field. However, this is a function of both the NFC chip and the NFC antenna in the phone. In my experience, Type B cards often cause problems (rumor has it that they often require more power). Another example is electronic passports: they frequently have less than optimal antennas.
Minimum power level required for a card: it is not in the ATR. ISO 14443 cards do not have an ATR (they may have an EF.ATR file, but I have never seen one). The ATS (Answer To Select) response does not indicate required power levels. Cards can indicate whether the power level is sufficient in the CID field of ISO 14443-4 S-blocks (when present and supported by the card), but I have never seen a card that does this.
To determine the power level of a particular NFC chip combined with a particular antenna (and tuning circuit), you could use a spectrum analyzer to do the measurements. I measured several Android NFC phones (Galaxy Nexus, Nexus S, Galaxy S3, One X) that all contain a PN544. The results differ between phones, enough to make a difference in some cases (the S3 generating the most power).
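To back up the point that the ATS carries no power information, here is a small sketch that decodes the interface bytes of an ATS: everything in it concerns frame size, bit rates, timings and CID/NAD support. The example ATS bytes are hypothetical, purely for illustration.

```java
// Decode the interface bytes of an ISO 14443-4 ATS. The format byte T0
// tells which of TA(1)/TB(1)/TC(1) follow; none of them describes power.
public class AtsDecoder {
    public static String decode(byte[] ats) {
        int t0 = ats[1] & 0xFF;                                   // format byte
        StringBuilder sb = new StringBuilder("FSCI=" + (t0 & 0x0F)); // max frame size index
        int i = 2;
        if ((t0 & 0x10) != 0) // TA(1): bit-rate capability
            sb.append(" TA1=0x").append(Integer.toHexString(ats[i++] & 0xFF));
        if ((t0 & 0x20) != 0) // TB(1): FWI/SFGI timing parameters
            sb.append(" TB1=0x").append(Integer.toHexString(ats[i++] & 0xFF));
        if ((t0 & 0x40) != 0) // TC(1): CID/NAD support
            sb.append(" TC1=0x").append(Integer.toHexString(ats[i++] & 0xFF));
        return sb.toString();
    }
    public static void main(String[] args) {
        byte[] ats = {0x05, 0x78, (byte) 0x80, 0x71, 0x02}; // hypothetical example ATS
        System.out.println(decode(ats));
    }
}
```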
Description :
I'm trying to find a way to calculate the distance between the application, and nearby Bluetooth devices.
That, or only detect devices that are x meters away from the device with the application.
Tried so far :
I tried using the Bluetooth signal strength, but it is not reliable, as it depends on so many variables other than distance (rotation of the device, objects between the two devices, etc.). For example, I kept an eye on a device that was sitting still on a table, and the reading went up by 10 dBm without either of the devices moving.
I also thought of using GPS for the distance calculation, but GPS's error is far too large compared to the accuracy I'm looking for (±1 m).
I also looked into lowering the strength of the Bluetooth signal before searching (on newer Bluetooth versions), to find fewer devices within a smaller range. But people who have tried it say it is unreliable, because even at the lowest Bluetooth power the scan could still find devices about 10 m away.
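The RSSI behaviour described above can be quantified with the log-distance path-loss model; a minimal sketch, where the reference power at 1 m and the path-loss exponent n are assumed example values, shows how a few dB of fluctuation shifts the distance estimate by metres:

```java
// Estimate distance from RSSI using the log-distance path-loss model:
//   d = 10 ^ ((txPowerAt1m - rssi) / (10 * n))
// txPowerAt1m (the RSSI measured at 1 m) and n (environment exponent)
// are assumptions; the point is how noise-sensitive the estimate is.
public class RssiDistance {
    public static double metres(double rssi, double txPowerAt1m, double n) {
        return Math.pow(10.0, (txPowerAt1m - rssi) / (10.0 * n));
    }
    public static void main(String[] args) {
        double tx = -59, n = 2.0; // common free-space-like assumptions
        System.out.printf("-59 dBm -> %.1f m%n", metres(-59, tx, n)); // ~1 m
        System.out.printf("-69 dBm -> %.1f m%n", metres(-69, tx, n)); // ~3.2 m
        // a 10 dB swing like the one observed moves the estimate ~3x
    }
}
```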
Examples around us :
If anyone has an Apple Watch and a Mac, they'd know that it is possible to unlock your Mac simply by being close to it while wearing your Watch.
Also, car keys. When you get close enough to the car while carrying the key on you, the car is unlocked.
Notes :
Assume all the devices are Android devices with high-end hardware. It's a special implementation, not for everyone.
A good discussion of techniques for calculating distance using Bluetooth devices is here: https://vimeo.com/171186055#t=40m15s.
With respect to the Apple Watch and Mac, Apple is using Time-of-Flight via peer-to-peer WiFi to determine proximity at that level of accuracy.
Typical automatic remote keyless entry systems use radio pulses, not Bluetooth. More advanced systems, like Tesla's phone key, use Bluetooth on the phone, but rely on the driver physically touching the door handle to complete the process.
This might be possible, but not very accurately.
You could approach it like this:
Measure the signal strength, and then measure the distance using the propagation time of the signal (radio waves travel at the speed of light, roughly 1 cm every 33 ps). Timing it would be difficult, though.
Then, using that data, you can estimate the distance (it is usually less than 10 m but can be farther).
You would get an answer, but it would be a very approximate one.
In my opinion, exact measurement is not possible.
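To put a number on why timing is difficult: at the speed of light, every metre of range accuracy demands roughly 3 ns of timing accuracy, which is far beyond ordinary Bluetooth hardware. A quick calculation:

```java
// Time-of-flight ranging needs sub-nanosecond-class clocks: this computes
// how much timing error corresponds to 1 m of distance error.
public class TofResolution {
    static final double C = 299_792_458.0; // speed of light, m/s
    public static double secondsPerMetre() { return 1.0 / C; }
    public static void main(String[] args) {
        double nsPerMetre = secondsPerMetre() * 1e9;
        System.out.printf("1 m of range error = %.2f ns of timing error%n", nsPerMetre);
        // ~3.34 ns per metre of accuracy
    }
}
```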
The protocol IEC 62056-21 tells us how to deal with energy meters; it's quite easy!
The part where I am stuck is the implementation over a GSM data channel. Normally I would set things like:
300 baudrate
1 parity bit (even)
But the meter is not connected via a serial connection; instead, it has a SIM. Using a modem I can call the meter using:
AT&C1
ATDNumber
Problem 1: Settings
The modem calls the meter with different settings (baud rates, stop bits, parity) compared to the protocol ones, e.g.
9600 baudrate for call
300 baudrate for first messages
xxxxx new baudrate shared between master and slave
Can I change these parameters during the call?
Problem 2: Send data
After I establish a call, I want to send the meter things like:
/ ? Device address ! CR LF
Here's the missing piece: I don't know how to send this data over the call.
I have been reading about and trying several libraries (like J62056 and pyserial), but I've found nothing about sending data via a GSM call.
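For what it's worth, building the request itself is straightforward: the sign-on message is just the ASCII bytes "/?<address>!\r\n", which, once the data call is up, are written to the modem's serial port like any other bytes. A minimal sketch (the device address is hypothetical):

```java
import java.nio.charset.StandardCharsets;

// Build the IEC 62056-21 sign-on request "/?<address>!\r\n" as raw bytes.
// Sending it is then an ordinary write to the modem's serial port.
public class Iec6205621Request {
    public static byte[] signOn(String address) {
        return ("/?" + address + "!\r\n").getBytes(StandardCharsets.US_ASCII);
    }
    public static void main(String[] args) {
        byte[] req = signOn("12345678"); // hypothetical meter address
        System.out.println(new String(req, StandardCharsets.US_ASCII).trim());
    }
}
```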
EDIT
I read a trace of a proprietary software, and I got this:
TX: 140ms AT&C1E0V0
RX: 32ms 0
TX: 1203ms
ATDT ##########
RX: 34656ms 1
RX: 0ms 5
RX: 0ms
TX: 3234ms <NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL> *what is this?*
TX: 594ms /?########! (this is the Request message) **start sending data**
The <NUL> part is not clear, and this is where the modem starts to send data.
Edit:
I read about the 8 null chars; they're just a check-in sequence.
At the moment, after the modem establishes the call, I translate my 8-bit no-parity sequence into a 7-bit + parity one. Now I am able to send and receive data from the meter. I must test other features before writing my solution into this answer.
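The 8-bit-to-7-bit-plus-parity translation mentioned above can be sketched like this: keep the 7 data bits and pack an even-parity bit into bit 7 of each byte, which matches the 7E1 framing IEC 62056-21 expects.

```java
// Convert 8-bit-no-parity bytes to 7 data bits + even parity, with the
// parity bit packed into bit 7 of each byte.
public class EvenParity7E1 {
    public static byte encode(byte b) {
        int v = b & 0x7F;                     // keep 7 data bits
        int parity = Integer.bitCount(v) % 2; // 1 if the count of 1-bits is odd
        return (byte) (v | (parity << 7));    // even parity overall
    }
    public static byte[] encode(byte[] data) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) out[i] = encode(data[i]);
        return out;
    }
    public static void main(String[] args) {
        // '/' is 0x2F (five 1-bits), so its even-parity bit is 1 -> 0xAF
        System.out.printf("0x%02X%n", encode((byte) '/') & 0xFF);
    }
}
```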
Without knowing anything about IEC 62056-21: if your energy meter supports this over GSM circuit-switched data (CSD), nothing it says about speed and parity in the normal non-GSM case is relevant at all.
That is because the data call you set up will be non-transparent CSD (NTCSD). A transparent call would have treated the GSM connection as close to a plain electrical wire as possible (which in the best case is difficult1), forwarding each byte received immediately, with no buffering and no retransmission support. A non-transparent connection, on the other hand, receives/sends its data to an intermediate entity which in turn communicates with the other end point, and supports buffering and retransmission.
For GSM NTCSD, the part of the phone responsible for data handling is called the TAE (Terminal Adapter Equipment) or TAF (Terminal Adaptation Function), and the relevant protocol2 is called RLP (Radio Link Protocol), which is specified by 3GPP in specification 24.022. It is a link-layer protocol similar to HDLC, and it communicates with a unit in the GSM network called the MSC (Mobile Switching Centre). It is then the MSC which communicates with the other end, on a different and completely separate communication line (which can be PSTN, ISDN or a mobile network, depending on what kind of device the remote end is).
mobile modem <----link1----> MSC <----link2----> remote endpoint
The important thing here is that the two links are 100% independent, and they do not have to be the same speed.
Thus whatever speed your energy meter is using over some serial interface between itself and the embedded modem with SIM card is independent of the radio link1 speed which itself is independent of the network link2 speed which your modem will have.
So the above should be an answer to your question, but let me fill in some more information with regards to the speeds of link1 and link2, because this can be controlled with two AT commands AT+CBST and AT+CHSN (specified in 27.007).
The basic GSM data traffic channel is called TCH/9.6 (Traffic Channel), which is a channel with a net speed of 9600 bit/s (seen by the user) and a gross speed of 12000 bit/s (seen by the network). In order to enhance throughput, HSCSD (High Speed CSD) was developed, which introduced a new channel coding and a new channel, TCH/14.4, with a gross speed of 14500 bit/s and a net speed of 13200 bit/s (which of course all the marketing people presented as 14.4 speed even though that was not really true).
In addition, HSCSD allowed for bundling multiple timeslots together (multislot). A GSM carrier frequency is divided into 8 timeslots, where an active call occupies one timeslot in the downlink direction and one timeslot in the uplink direction. Thus a cell tower configured to support only one carrier supports a maximum of 8 simultaneous calls.
What HSCSD introduced was the possibility to set up a call that could use multiple (adjacent) timeslots, for instance two downlink timeslots and one uplink (denoted 2+1). The different multislot configurations a phone or network supported were categorised in multislot classes3 (with 2+1 being multislot class 10 as far as I remember).
Since HSCSD both added value and occupied more network resources, it was billed higher than a normal 1+1 9600 call, and thus users had to have some control over whether they used HSCSD or not. This was done by introducing the AT+CHSN command, which controls the link1 speed.
Support for hinting to the MSC what speed it should use for link2 was implemented by the AT+CBST command (which already existed before HSCSD).
Since the AT command is terminated in the mobile phone, its value had to be forwarded to the MSC in some way, and this was done out-of-band (with regards to the RLP data link) in a Bearer Capability Information Element in some of the call setup messages.
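As a hedged illustration of hinting the link2 speed, a minimal call setup could look like the following. AT+CBST=7,0,1 requests 9600 bit/s (V.32) with a non-transparent connection element, per 3GPP 27.007; which values are actually supported depends on the modem and the network:

```
AT+CBST=7,0,1    request 9600 bit/s (V.32) for link2, non-transparent (ce=1)
ATDNumber        dial the meter; CONNECT indicates the NTCSD link is up
```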
So, that's probably more than you need to know about speeds in a GSM network for a NTCSD call, but having developed and maintained the NTCSD call stack in Ericsson's mobile phones for over a decade, this is something I know quite well...
1
While some support for transparent data is standardized, it was more of a legacy thing done in the 90s in order to support equipment made in the 80s. Operators do not want to support this today because it is a pain in the butt to get working and to support.
Fax over GSM, for instance, was such a transparent bearer, and it was a massive test-everywhere effort with no guarantee that it would work even if you followed the specification fully. You had to do your best-effort implementation and then travel all over the world testing it, trying to fix all the issues that popped up (which they absolutely did; and even if the problem was something wrong with the network, the operator might not want to fix it, so you had to add some custom workaround).
One of the guys working with fax support told me that in some cases they had to start sending the response before they had gotten the request in order for the timing to work out (e.g. they had to guess and anticipate what the remote fax would do).
It is not without reason that phones manufactured today and operators do not support fax like they did in the 90s and early 2000s.
2
In addition there is L2RCOP, which is just a framing adaptation between the packet-based RLP and the physical serial interface.
3
CSD and GPRS both supported multislot, but not necessarily the same class; i.e. the CSD multislot class is independent of the GPRS multislot class.
My girlfriend recently bought a product for her skin. It's basically a mask with lights on the inside, connected by a power cord, similar to an auxiliary cord, to an auxiliary port on a mini controller. It is only good for 30 uses: every time you turn it on, an LCD screen counts down until it hits 0, and then you have to buy a new controller.
I find it extremely wasteful to buy a new plastic controller after 30 uses. My question: is it possible to somehow connect this device to my laptop through the aux port (or an aux port extension) and modify the code written on it?
I work as a web developer by trade, so I am no stranger to code. I just need to know how to connect to it, read the code, compile it, etc., so I can modify the counter or remove it.
It is a ЯU 94v-0 mini controller (yes, the R is backwards).
Interestingly enough if I plug the mask into my iPhone or Mac, it will power one set of lights (there are two types/sets), but not the other.
Thanks in advance for the hackery advice.
I have a better idea! Just buy a new device (this device has an ATMLH436 memory chip, which is basically the same as an AT24C02 EEPROM), disconnect the WP pin and connect it to the VCC pin; then the device can't write the counter down (it will be write-protected). Every time you pull the batteries out and back in, it will be as fresh as new. Should work like a charm :D
Here is the historical answer, maybe useful to someone in the future:
How to hack it:
This device has an ATMLH436 memory chip, which is basically the same as an AT24C02 EEPROM: an EEPROM with an I2C interface and 2 Kbit (256 bytes) of memory. The counter is most likely stored on this chip.
You need to buy a new device, connect the unused EEPROM to an I2C programmer (you need to buy one, or ask a friend; I remember these as simple devices connected to an RS-232 port, but you can find USB ones), read the memory content and store it in a file. Then you can use this file to reprogram the EEPROM back to its original "new" state every time you want.
How to connect the I2C EEPROM to a Mac:
Hmmm, you need to have an I2C programmer; that's the first step.
Check whether the one you're about to buy has Mac-compatible software for reading/writing.
If not, maybe use another computer.
Remember that in order to be able to program the device, you need to connect pin 7 (the write-protect pin) to ground. Here's the chip spec: http://www.atmel.com/Images/doc0180.pdf
Basically, in order to communicate with the device you need to:
Know the address of the device. It is set by the A0, A1 and A2 pins, connected either to ground or VCC; the programmer software will require that address.
Connect the SDA, SCL and GND pins to the programmer.
The chip needs a 5 V power supply connected between GND (-) and VCC (+) to operate.
In order to program it, the WP pin needs to be connected to GND.
Chances are good that A0, A1, A2 and WP are all grounded, but I can't be sure.
In that case, the address of the device is 1010000 and nothing else needs to be done in order to program it. If the WP pin is not grounded, I assume you can disconnect it from whatever it is connected to and hardwire it to ground; this should not affect normal operation of the device. You probably don't need to desolder the chip in order to read/write it; you just need to connect GND, SDA, SCL and VCC. I would make a connector for these 4 pins to have them accessible from outside the device.
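The addressing scheme above can be sketched in code: the AT24C02's 7-bit I2C address is 1010 followed by the A2/A1/A0 strapping, and the byte actually sent on the wire is that address shifted left with the read/write flag in bit 0.

```java
// Compute the AT24C02's I2C address and on-the-wire control bytes from
// the A2/A1/A0 pin strapping (0 = grounded, 1 = tied to VCC).
public class At24c02Address {
    public static int sevenBitAddress(int a2, int a1, int a0) {
        return 0b1010_000 | (a2 << 2) | (a1 << 1) | a0;
    }
    // control byte as transmitted: address shifted left, R/W flag in bit 0
    public static int controlByte(int addr, boolean read) {
        return (addr << 1) | (read ? 1 : 0);
    }
    public static void main(String[] args) {
        int addr = sevenBitAddress(0, 0, 0); // assumes A2..A0 all grounded
        System.out.printf("addr=0x%02X write=0x%02X read=0x%02X%n",
                addr, controlByte(addr, false), controlByte(addr, true));
    }
}
```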
To make my life easier over many reprogram cycles, I would solder on a connector so I don't have to disassemble the device each time I need to reprogram it.
There is a small chance that the counter is in the fat black round blob on the PCB, in which case there's nothing you can do to reset it, since it is some custom chip without a spec. If you had a great lab with an X-ray machine (like https://www.hawkerrichardson.com.au/electronic-production-systems/inspection-test-a-repair/unicomp-ax-8200) and other such equipment, plus a lot of experience, you could :) but not many people have such toys, since they are very expensive :)
There are some pins connected to the round chip, but I don't have any idea how to use them, what the protocol is, or anything...
But if they could produce EEPROM inside it, they probably wouldn't use an additional external EEPROM, because of the cost. Since EEPROM production is not as easy as regular chip production, they would rather use external memory from another supplier than produce it themselves. That's one more argument that the counter is in the AT24C02.
The correct way to hack this thing would be to listen to the I2C communication line with a scope. Note the exact binary sequence.
Then remove the external EEPROM entirely and replace it with another MCU, which has only one task: to reply as the main MCU expects. Of course, it never saves the decremented counter.
Essentially you'd get this sequence each time you power up:
Main MCU: "Hello my eeprom, can I get the counter?"
Hack MCU: "Err yes I am totally an eeprom, the counter is 5."
Main MCU: "Store the counter value 6".
Hack MCU: "Roger that" (does nothing).
You'll get the same sequence over and over.
To succeed you need to know: microcontroller programming, I2C, basic electronics, soldering.
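The dialogue above boils down to a very small state machine. This is only a toy model of the spoofing logic (the real thing would be an I2C slave implemented on a small MCU), but it shows the behaviour: reads always return the fixed counter, writes are acknowledged and discarded.

```java
// Toy model of the "fake EEPROM": reads always return a fixed counter
// value, writes are silently dropped, so the count never goes down.
public class FakeEeprom {
    private final int fixedCounter;
    public FakeEeprom(int counter) { this.fixedCounter = counter; }
    public int read(int register) { return fixedCounter; }        // always "fresh"
    public void write(int register, int value) { /* roger that; do nothing */ }
    public static void main(String[] args) {
        FakeEeprom e = new FakeEeprom(30);
        e.write(0, 29); // main MCU tries to count down
        e.write(0, 28);
        System.out.println(e.read(0)); // still 30
    }
}
```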
I want to create an Android app that sends IR signals, but I don't know how to find the signal patterns or the carrier frequency used by the original transmitter.
I purchased a small USB IR receiver (not Microsoft, but it uses the Microsoft eHome drivers) and I can see the patterns in WinLIRC, but they are barely consistent, and trying to replay any of them doesn't work on the target device. I'm also not sure what frequency to transmit on. I have my app sending back the signals, and I can see them in LIRC, and they're accurate, but the device that the original remote controls doesn't respond.
How can I get the info from the original transmitter, accurately, without spending tons of money on something like an oscilloscope?
I have a question about how I can learn (perhaps with Java) to handle high data rates. My task:
I will have a fluorescence microscopy camera producing around 1 gigabyte/s, at frame rates between 100 and 1000 images/s.
The image data should be written uncompressed as raw data to disk. The storage system is not yet decided and should be dimensioned based on the needed performance. During data acquisition a more or less live image should be shown.
Has somebody some suggestions for books or lecture notes for me?
Your question is pretty open ended, but I can give you advice based upon my past experience building multi-camera, real time data acquisition systems.
Typically these data acquisition systems require a video capture card (though you may have to purchase it separately). The cards typically buffer some number of frames, and the frame rate you can support depends on how long you need to run the acquisition system and on the slowest data transfer rate in the "camera -> capture card -> hard drive" chain. These cards typically come with a documented API (usually in a C variant; I've never seen a Java variant, but that doesn't mean it doesn't exist) and libraries that you can compile against, which let your code use the documented API functions to record data to storage.
When I have worked on systems that just needed a full motion video frame rate (~ 30 Hz) a windows box with a capture card has sufficed just fine. I am pretty sure you can get cards that will sample in the 1kHz range or higher (depending on your camera resolution), but you may be limited on the duration you can sample (given the limited available storage) during the acquisition process if you are sampling data faster than the buffer can clear it to final storage.
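To put rough numbers on the buffering trade-off: at the rates in the question, frame size follows directly from the frame rate, and the buffer size determines how long a stall in the "buffer -> final storage" step can be absorbed. The data rate and frame rates are from the question; the 4 GB buffer is a hypothetical example.

```java
// Back-of-the-envelope sizing for the acquisition chain.
public class AcquisitionSizing {
    public static double frameBytes(double bytesPerSecond, double fps) {
        return bytesPerSecond / fps; // average size of one frame
    }
    public static double bufferSeconds(double bufferBytes, double bytesPerSecond) {
        return bufferBytes / bytesPerSecond; // how long a full-rate stall fits
    }
    public static void main(String[] args) {
        double rate = 1e9; // ~1 GB/s, from the question
        System.out.printf("at 100 fps:  %.0f MB/frame%n", frameBytes(rate, 100) / 1e6);
        System.out.printf("at 1000 fps: %.0f MB/frame%n", frameBytes(rate, 1000) / 1e6);
        double buf = 4e9; // hypothetical 4 GB on-card buffer
        System.out.printf("a 4 GB buffer absorbs %.1f s of stalled disk%n",
                bufferSeconds(buf, rate));
    }
}
```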
Also, there is no reason for you to display more than ~30 Hz at a time: no display system is going to support a 1 kHz refresh rate, and the human eye can't process much more than 30 Hz anyway.
Unfortunately, in my experience these systems are put together piecemeal, because they are highly specialized, which limits the market and disincentivizes a standardized approach. The bottom line is that you are probably looking at either using a capture-card manufacturer's API (I'd advocate against wrapping it in Java, because you'll just be adding extra latency that you can't afford at the acquisition rates you are talking about) or having an electrical engineer custom-fit your solution. If I were in your shoes, I'd be searching for a capture card that meets my requirements, perhaps from the microscopy camera manufacturer.