RFID/NFC based bus pass system - Java

I'm working on a Bus Pass System for my FYP. I'm familiar with C# and Java, and the concept is that travelers enter the bus and scan their RFID/NFC tag; if they are approved to board, the system should show a green light, otherwise a red light with a corresponding sound effect. My question is: what kind of device should I use? Thank you!

Basically you have to decide whether you want an Android phone/tablet to act as the reader in the bus, in which case you will program it in Java.
Or you can go with a Raspberry Pi plus an NFC reader and use the GPIOs to drive some LEDs.
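If you go the Android route, a minimal sketch of reading a tag's UID with the reader-mode API might look like the following. The activity name, isApprovedPassenger() and showResult() are placeholders I made up, not part of any existing system:

    import android.app.Activity;
    import android.nfc.NfcAdapter;
    import android.nfc.Tag;
    import android.os.Bundle;

    // Hypothetical boarding activity: reads the UID of a tag held against the
    // phone and decides whether the passenger may board.
    public class BoardingActivity extends Activity implements NfcAdapter.ReaderCallback {

        private NfcAdapter nfcAdapter;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            nfcAdapter = NfcAdapter.getDefaultAdapter(this);
        }

        @Override
        protected void onResume() {
            super.onResume();
            if (nfcAdapter != null) {
                // Listen for ISO 14443-A tags (typical for MIFARE/NFC cards).
                nfcAdapter.enableReaderMode(this, this,
                        NfcAdapter.FLAG_READER_NFC_A | NfcAdapter.FLAG_READER_SKIP_NDEF_CHECK,
                        null);
            }
        }

        @Override
        protected void onPause() {
            super.onPause();
            if (nfcAdapter != null) {
                nfcAdapter.disableReaderMode(this);
            }
        }

        @Override
        public void onTagDiscovered(Tag tag) {
            byte[] uid = tag.getId();
            // isApprovedPassenger() stands in for your back-end lookup.
            final boolean approved = isApprovedPassenger(uid);
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    showResult(approved); // green/red screen plus sound effect
                }
            });
        }

        private boolean isApprovedPassenger(byte[] uid) { return false; } // stub
        private void showResult(boolean approved) { /* update UI, play sound */ }
    }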

Related

How to open a specific camera with camera2 API

I am using a phone with 3 back cameras (Realme GT Pro 2) and I want to access the one with the widest FOV. I am currently trying to implement this with the multi-camera API, but it's fairly confusing and I haven't been able to find a solution. Can anyone give me some tips on how to access a specific back camera and display its stream?
I can get the physical camera IDs with CameraCharacteristics.getPhysicalCameraIds(), but how can I use one of them to open the correct (physical) camera? I also know that some manufacturers haven't yet implemented/allowed this access.
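One common approach on devices that support it (API 28+) is to open the logical camera and route a stream to a specific physical sensor with OutputConfiguration.setPhysicalCameraId; picking the physical ID whose LENS_INFO_AVAILABLE_FOCAL_LENGTHS holds the shortest focal length is usually, though not always, the ultra-wide. The helper below is only a sketch; the IDs, surface and threading objects are assumed to be supplied by the caller:

    import android.content.Context;
    import android.hardware.camera2.*;
    import android.hardware.camera2.params.OutputConfiguration;
    import android.hardware.camera2.params.SessionConfiguration;
    import android.os.Handler;
    import android.view.Surface;
    import java.util.Collections;
    import java.util.concurrent.Executor;

    // Sketch: stream from one physical camera behind a logical multi-camera (API 28+).
    // Caller must hold android.permission.CAMERA; error handling is trimmed for brevity.
    public class PhysicalCameraHelper {

        public static void openPhysical(Context ctx, String logicalId, String physicalId,
                                        Surface previewSurface, Handler handler, Executor executor)
                throws CameraAccessException {
            CameraManager manager = (CameraManager) ctx.getSystemService(Context.CAMERA_SERVICE);

            manager.openCamera(logicalId, new CameraDevice.StateCallback() {
                @Override
                public void onOpened(CameraDevice device) {
                    try {
                        // Route the preview surface to the chosen physical sensor.
                        OutputConfiguration out = new OutputConfiguration(previewSurface);
                        out.setPhysicalCameraId(physicalId);

                        SessionConfiguration session = new SessionConfiguration(
                                SessionConfiguration.SESSION_REGULAR,
                                Collections.singletonList(out),
                                executor,
                                new CameraCaptureSession.StateCallback() {
                                    @Override
                                    public void onConfigured(CameraCaptureSession s) {
                                        try {
                                            CaptureRequest.Builder b =
                                                    device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                                            b.addTarget(previewSurface);
                                            s.setRepeatingRequest(b.build(), null, handler);
                                        } catch (CameraAccessException e) { /* handle */ }
                                    }
                                    @Override
                                    public void onConfigureFailed(CameraCaptureSession s) { /* handle */ }
                                });
                        device.createCaptureSession(session);
                    } catch (CameraAccessException e) { /* handle */ }
                }
                @Override public void onDisconnected(CameraDevice device) { device.close(); }
                @Override public void onError(CameraDevice device, int error) { device.close(); }
            }, handler);
        }
    }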

Android emulate NFC with Smartphone as reader

I'm looking for a way to use my phone instead of an NFC card. To do that I would create an Android app.
I live in Switzerland, and the national rail company uses an NFC card called "Swisspass" onto which annual tickets etc. are loaded, so that conductors can read it and check whether the ticket is valid. A year ago they introduced a feature called "Swisspass Mobile", where the app can display a QR code for the conductors to scan. You can also add your pass to Google Pay as a loyalty program card. If you do so, it shows you a URL that looks like this:
HTTP://1SP.CH?S=SXXXXXXXXXXX
where SXXXXXXXXXXX is your member ID. The problem with the QR code is that it takes the conductor much longer to check, and it is generally difficult to do so while the train is constantly moving. If you take this URL and generate a QR code from it in any other app, it works as well.
So my thought was to use my phone to send this URL to the smartphone the conductor uses, to make their life and mine easier. I had a look at this:
https://developer.android.com/guide/topics/connectivity/nfc/hce
but I can't really figure out which AID I would have to use, or whether I even need HCE in this case, since the other side is not an NFC tag reader but a smartphone. Do you know any more about this? Could I use peer-to-peer, or just send the URL over NFC to the conductor's smartphone? How would I do that?
Thanks a lot in advance
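For the HCE route, a minimal HostApduService sketch could look like the one below. The AID, the APDU layout and the assumption that the conductor's app would explicitly SELECT this AID are all guesses on my part; the reader side defines what it actually sends, so this only illustrates the mechanism. The same AID would also have to be declared in res/xml/apduservice.xml and the manifest:

    import android.nfc.cardemulation.HostApduService;
    import android.os.Bundle;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    // Sketch of a card-emulation service that hands a URL string to a reader
    // which SELECTs a custom, hypothetical AID.
    public class PassHceService extends HostApduService {

        // Hypothetical proprietary AID; the reader app would have to use the same one.
        // Real readers may append a trailing Le byte, which is ignored here for brevity.
        private static final byte[] SELECT_AID_COMMAND = hexToBytes("00A4040007F0010203040506");
        private static final byte[] STATUS_OK = {(byte) 0x90, 0x00};
        private static final byte[] STATUS_FAILED = {0x6F, 0x00};

        private static final String MEMBER_URL = "HTTP://1SP.CH?S=SXXXXXXXXXXX"; // placeholder

        @Override
        public byte[] processCommandApdu(byte[] commandApdu, Bundle extras) {
            if (Arrays.equals(SELECT_AID_COMMAND, commandApdu)) {
                // Reply to the SELECT with the payload followed by status word 0x9000.
                byte[] payload = MEMBER_URL.getBytes(StandardCharsets.UTF_8);
                byte[] response = new byte[payload.length + 2];
                System.arraycopy(payload, 0, response, 0, payload.length);
                System.arraycopy(STATUS_OK, 0, response, payload.length, 2);
                return response;
            }
            return STATUS_FAILED;
        }

        @Override
        public void onDeactivated(int reason) {
            // Reader moved away or another AID was selected; nothing to clean up here.
        }

        private static byte[] hexToBytes(String hex) {
            byte[] out = new byte[hex.length() / 2];
            for (int i = 0; i < out.length; i++) {
                out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return out;
        }
    }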

How do I get the Arduino Nano to communicate with a phone through Java?

My goal: make my phone control an Arduino, which controls servos that will do cool animations.
How I get to my goal: I use Java to communicate with the Arduino Nano through the yellow pins (visible in the picture on the right) over serial (RxD/TxD, RS-232 protocol), which will tell the Arduino how to control the servos or motors.
Problems:
I don't know which of the pins does what, e.g. which one is GND and which ones are TxD and RxD
I don't know how to tell the phone to do things with its yellow pins (which the Arduino will read and interpret to control its servos/motors)
[_] I don't know how to listen to the phone's accelerometer through Java - the sensor that tells it which way is down
[_] I don't know how to listen to the phone's light sensor
[_] I don't know how to listen to the phone's microphone
[_] I don't know how to listen to the phone's camera
[_] = things I'm just very curious about and will use for future projects.
Extra information: I have NetBeans IDE 7.1.1, an Arduino Nano, and the phone shown above, which is a "J10i2 Elm Sony Ericsson".
So I guess what I'm really looking for is someone who knows how to control a phone 100% through Java. Being pointed in the right direction would also be nice!
If you think I can do this in a better/simpler/smarter way, feel free to leave a comment stating why and how it is better ;)
I would suggest one easy solution: use a REST web service (SOA architecture). I did this in my own project and I'm now able to control and monitor my house :)
You could simply create a database table that represents your Arduino's pins, then create a service that receives some parameters (e.g. pin number, status -> 1/0, Arduino ID, etc.) and saves them in the database. Your mobile app would just call the same web service to change the values.
Finally, write an Arduino sketch that keeps polling statuses from that service and database (using another endpoint/method, where you just pass your Arduino ID and it returns the status of all the pins so you can read and process them on the Arduino). For example, since a JSON library is heavy and slow on the Arduino, I pass the return values from the server back to the Arduino as simple CSV, e.g. "pin:status,...". I have some code for making HTTP GET/POST requests on the Arduino (you just need the Ethernet shield) and I could share it if you are interested.
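Just as an illustration of that idea (not the author's actual code), the client side could hit such a service with plain HTTP. The endpoint URL and parameter names below are invented, and an old Java ME phone like the J10i2 would need javax.microedition.io.HttpConnection instead of HttpURLConnection:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Sketch of the client side of the answer above: POST a pin state to a
    // hypothetical REST endpoint; the Arduino later polls the same service for changes.
    public class PinClient {

        private static final String ENDPOINT = "http://example.com/api/pins"; // placeholder

        public static void setPin(String arduinoId, int pin, int status) throws IOException {
            String body = "arduinoId=" + arduinoId + "&pin=" + pin + "&status=" + status;

            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }

            int code = conn.getResponseCode(); // 200 means the database row was updated
            conn.disconnect();
            if (code != HttpURLConnection.HTTP_OK) {
                throw new IOException("Unexpected response: " + code);
            }
        }

        public static void main(String[] args) throws IOException {
            setPin("nano-01", 9, 1); // turn pin 9 "on" for Arduino "nano-01" (made-up IDs)
        }
    }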

Android Camera autofocus when user holds camera still

I'm sure most of you have used an Android phone before and taken a picture. Whenever the user changes the phone's position and then holds it steady, the camera focuses automatically. I'm having a hard time replicating this in my app. The autoFocus() method is only called once, when the application is launched. I have been searching for a solution for the past 3 days, and while reading the Google documentation I stumbled upon the sensor APIs (for example, detecting when the user tilts the phone forwards or backwards). I could use that to achieve what I need, but it sounds too dirty and too complicated. I'm sure there's another way around it.
All the examples I have found on the internet only focus when the user presses the screen or a button. I have also gone through several questions on SO hoping to find what I am looking for, but without success. I have seen this question, and that focus-mode String is not compatible with my phone. For some reason the only focus modes I can use are fixed and auto.
I was hoping someone here could shed some light on the subject, because I am at a loss.
Thank you very much for your time.
Since API 14 you can set this parameter
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#FOCUS_MODE_CONTINUOUS_PICTURE
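A sketch of that, using the old android.hardware.Camera API from the question. Note that the asker reports only fixed and auto being available on their device, in which case this check simply falls through and the looping approach from the next answer is the fallback:

    import android.hardware.Camera;
    import java.util.List;

    // Switch to continuous autofocus when the device supports it (API 14+).
    public class FocusHelper {

        public static void enableContinuousFocus(Camera camera) {
            Camera.Parameters params = camera.getParameters();
            List<String> modes = params.getSupportedFocusModes();

            if (modes != null && modes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
                // The camera refocuses on its own whenever the scene changes.
                params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
                camera.setParameters(params);
            }
            // Otherwise fall back to triggering camera.autoFocus(...) manually.
        }
    }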
Yes, camera.autoFocus(callback) is a one-shot call, so you will need to trigger it repeatedly to keep the camera focusing. Preferably you would use motion detection via the accelerometer or compass to detect when the camera has been moved and only refocus then.
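A rough sketch of that loop, re-triggering the one-shot autoFocus() from its own callback after a short delay. The delay value is arbitrary; a better version would only re-trigger when an accelerometer listener reports movement:

    import android.hardware.Camera;
    import android.os.Handler;

    // Each time one autofocus pass finishes, schedule the next one after a pause.
    public class LoopingAutoFocus implements Camera.AutoFocusCallback {

        private static final long REFOCUS_DELAY_MS = 1500; // arbitrary pause between passes

        private final Camera camera;
        private final Handler handler = new Handler();

        public LoopingAutoFocus(Camera camera) {
            this.camera = camera;
        }

        public void start() {
            camera.autoFocus(this);
        }

        @Override
        public void onAutoFocus(boolean success, Camera cam) {
            handler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    camera.autoFocus(LoopingAutoFocus.this);
                }
            }, REFOCUS_DELAY_MS);
        }
    }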

Audio programming, Sound Processing and DSP

I was playing with a karaoke application on iPhone and came up with the following questions:
The application allowed its users to control the volume of the artist, and even mute it. How is this possible?
Does adjusting the artist's sound, setting an equalizer, etc. mean performing some transformation on the required frequencies? What sort of mathematics is needed here (frequency-domain transforms)?
The application recorded the user's voice input via a mic. Assuming the sound is recorded in some format, the application was able to mix the recording with the karaoke track (with the artist's voice muted). How can this be done?
Did they play both the track and the voice recording simultaneously? Or did they insert an additional frequency (channel?) into the original track, or perhaps replace one?
What sort of DSP is involved here? Is this possible in Java or Objective-C?
I am curious and if you have links to documents or books that can help me understand the mechanism here, please share.
Thanks.
I don't know that particular application; it probably has the artist's voice recorded as a separate track.
For a generic 2-channel stereo recording, the simplest voice suppression can be done by assuming that the artist's voice is equally balanced between the two channels (acoustically it appears in the center). So the simplest 'DSP' is to subtract one channel from the other. However, it does not work that well with modern recordings, since all the instruments and the vocals are recorded separately and then mixed together (meaning the voice will not necessarily be in phase between the two channels).
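A toy version of that subtraction in Java, operating on interleaved 16-bit stereo PCM samples; this only illustrates the idea, and real voice removal needs considerably more than this:

    // Center-channel cancellation on interleaved 16-bit stereo PCM (L, R, L, R, ...).
    public class CenterCancel {

        /** Returns a mono buffer of (left - right) samples, clipped to the 16-bit range. */
        public static short[] cancelCenter(short[] interleavedStereo) {
            short[] out = new short[interleavedStereo.length / 2];
            for (int i = 0; i < out.length; i++) {
                int left = interleavedStereo[2 * i];
                int right = interleavedStereo[2 * i + 1];
                int diff = left - right; // anything panned dead-center cancels out
                if (diff > Short.MAX_VALUE) diff = Short.MAX_VALUE;
                if (diff < Short.MIN_VALUE) diff = Short.MIN_VALUE;
                out[i] = (short) diff;
            }
            return out;
        }
    }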
I have written two detailed blog posts on how to get a custom EQ in iOS, but I have no details about how to do the DSP yourself. If you simply want to choose from a wide range of effects, try this.
The first post explains how to build libsox:
http://uberblo.gs/2011/04/iosiphoneos-equalizer-with-libsox-making-it-a-framework
The second explains how to use it:
http://uberblo.gs/2011/04/iosiphoneos-equalizer-with-libsox-doing-effects
Please upvote the answer if it helped you! Thanks!
