SensorExtension Raw HRM data to BPM - java

I've been looking to build a very simple heart rate monitor as a project to experiment with the sensors on the Samsung Note 4, in particular the heart rate sensor under the camera. I've been granted access to the SensorExtension SDK by Samsung and have run their sample activity, which displays the raw data from the sensor.
I was wondering if someone could give me a nudge in the right direction as to how to convert the raw data into meaningful beats-per-minute data. I know it involves a lot of signal processing, but any help would be appreciated, as I'd rather not rely on the Samsung Digital Health SDK.
Thanks in advance.

If you add a time stamp to the data, you can count the peaks and troughs per second. I show this using an Excel spreadsheet below; the same method could easily be translated into your Android code.
I also found this information on what's hidden within that data:
PPG Signal
The linked article describes exactly what is contained in the raw data.
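As a rough illustration of the peak-counting idea, here is a minimal Java sketch (the class and method names are made up, and the threshold is something you would have to tune against your actual raw PPG values):

```java
import java.util.List;

public class BpmEstimator {

    /**
     * Naive peak counting: a sample counts as a peak if it is above a threshold
     * and larger than its neighbours. BPM = peaks per elapsed minute.
     * Assumes samples.get(i) was taken at timestampsMs.get(i) (milliseconds)
     * and that the lists cover at least a few seconds of data.
     */
    public static double estimateBpm(List<Float> samples, List<Long> timestampsMs,
                                     float threshold) {
        int peaks = 0;
        for (int i = 1; i < samples.size() - 1; i++) {
            float prev = samples.get(i - 1);
            float cur  = samples.get(i);
            float next = samples.get(i + 1);
            if (cur > threshold && cur > prev && cur >= next) {
                peaks++;
            }
        }
        long elapsedMs = timestampsMs.get(timestampsMs.size() - 1) - timestampsMs.get(0);
        return peaks * 60000.0 / elapsedMs;   // peaks per minute
    }
}
```

In practice the raw PPG signal is noisy, so smoothing it first (a moving average, or a band-pass filter around the 0.5-4 Hz range where heart rates live) makes the peak count far more reliable.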
I hope this helps.

Related

How to synchronize audio in Android development

I am new to the Android development field and short on experience, since I only have four months of development experience so far. This is a fairly tough requirement for me, and I sincerely hope someone can suggest a solution, because I have no idea how to approach it.
Here is the background: there is a page named ReciteActivity in which you can recite text; the device first records your reading and then writes out and stores the text. Then you can go to the Reply page, where you can see the text you recited displayed on the screen and listen to the corresponding audio recording.
Now we have a new requirement. To take an example: if you press the second paragraph on the screen, the audio recording should start playing from the second paragraph, which means the playback needs to be synchronized with the specified paragraph.
I hope this question is clearly explained, and I look forward to an effective solution from you smart folks on Stack Overflow. Thank you in advance.
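One common way to express this kind of synchronization is to store the start offset of each paragraph while recording and then jump to it with MediaPlayer.seekTo(). A minimal sketch, assuming the per-paragraph offsets are already known (the class name, offsets and file path here are hypothetical):

```java
import android.media.MediaPlayer;
import java.io.IOException;

public class ParagraphPlayback {

    private final MediaPlayer player = new MediaPlayer();

    // Hypothetical: millisecond positions where each paragraph starts,
    // captured while recording (e.g. by noting the recorder's elapsed time
    // whenever the user moves on to the next paragraph).
    private final int[] paragraphStartMs = {0, 14500, 31200};

    public void prepare(String recordingPath) throws IOException {
        player.setDataSource(recordingPath);
        player.prepare();
    }

    // Called when the user taps paragraph `index` on the Reply page.
    public void playFromParagraph(int index) {
        player.seekTo(paragraphStartMs[index]);
        player.start();
    }

    public void release() {
        player.release();
    }
}
```

The real work is therefore in capturing paragraphStartMs during recording, for example by noting the recorder's elapsed time each time the user advances to the next paragraph.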

Having problems getting Weight Data with Android from a BLE Scale

I'm currently developing an Android app for a weight scale I received that transmits data over Bluetooth Low Energy.
I was looking at the documentation and, if I understood it correctly, there are specific UUIDs for the data. I received a BLE scale with a Chinese protocol document, found here: http://www.anj.fyi/protocol.pdf
I found and was able to get a functioning scanner working that lists the device name and the UUIDs it broadcasts.
Let's say I want just the weight data to show up in the UI, nothing else and nothing more.
I don't know what UUID they used for the weight data, and there are a lot of UUIDs. Probably 20+. I checked a UUID compilation and the usual weight data UUID does not show up.
How do I get the data from those UUIDs?
I'm thinking it might be the ones that have notify, indicate or read properties.
Looking at the UUID for example, f000ffc2.
How would I get data from that characteristic? Does anyone have example code for grabbing the data from those UUIDs, or tutorials? I'm terribly lost right now.
I really appreciate it.
There is no weight information in the document you linked, http://www.anj.fyi/protocol.pdf; it only shows the BLE module's hardware interface spec, i.e. it does not specify the detailed services and characteristics. (I am a native Chinese speaker.)
Regarding the UUID you want to identify, the one that represents the weight: yes, you are right, it should be the characteristic with read/notify properties and without write permission. Can you use an app such as LightBlue on iOS to receive the notifications (while changing the value on your device) to test it? That will help you understand which characteristic is the one you want.
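Once you know which characteristic it is, subscribing to it from Android looks roughly like the sketch below. The service and characteristic UUIDs here are made-up placeholders (substitute the 128-bit UUIDs your scanner actually reports, e.g. the one starting with f000ffc2); only the CCCD descriptor UUID is the standard one.

```java
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothGattDescriptor;
import java.util.UUID;

public class ScaleGattCallback extends BluetoothGattCallback {

    // Hypothetical UUIDs -- replace with the ones your scanner lists.
    private static final UUID SCALE_SERVICE =
            UUID.fromString("0000fff0-0000-1000-8000-00805f9b34fb");
    private static final UUID WEIGHT_CHARACTERISTIC =
            UUID.fromString("0000fff1-0000-1000-8000-00805f9b34fb");
    // Standard Client Characteristic Configuration Descriptor (always this UUID).
    private static final UUID CCCD =
            UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");

    @Override
    public void onServicesDiscovered(BluetoothGatt gatt, int status) {
        BluetoothGattCharacteristic weight = gatt.getService(SCALE_SERVICE)
                .getCharacteristic(WEIGHT_CHARACTERISTIC);
        // Tell Android to deliver change callbacks for this characteristic...
        gatt.setCharacteristicNotification(weight, true);
        // ...and write the CCCD so the scale actually starts sending notifications.
        BluetoothGattDescriptor cccd = weight.getDescriptor(CCCD);
        cccd.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
        gatt.writeDescriptor(cccd);
    }

    @Override
    public void onCharacteristicChanged(BluetoothGatt gatt,
                                        BluetoothGattCharacteristic characteristic) {
        byte[] raw = characteristic.getValue();
        // How these bytes map to a weight depends on the vendor protocol; many cheap
        // scales pack it as a little-endian 16-bit integer in 0.1 kg units, but you
        // have to confirm that against the device (e.g. with LightBlue or nRF Connect).
    }
}
```

You would pass an instance of this callback to device.connectGatt(context, false, callback) and call gatt.discoverServices() once the connection state changes to connected.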

Android - Adaptive Bit rate streaming. (HLS) Quality of stream

I'm new to adaptive bit rate streaming. Basically I'm trying to write an app that shows information about the quality of the connection on an Android device.
Since Honeycomb (3.0), Android has supported adaptive bit rate streaming through HTTP Live Streaming (HLS). It seems like support for helping developers verify the quality of this connection on the device side is very limited.
What I would like to know is some low-level information about the stream, such as: the number of segments, the segment duration, the number of requests to change bit rate, the bit rate the media player sees (to facilitate the change), etc.
I've been able to get some information about stream quality from the MediaPlayer, MediaController, MediaMetaDataRetriever, CamcorderProfile, MediaFormat, MediaExtractor classes. However, the stuff I'm looking for is even lower level. If possible I'd like to be able to actually see how the player is communicating with the server.
I just started looking at the MediaCodec class; however, I can't figure out how to get the MediaCodec from a MediaPlayer. Or maybe I just don't know how to use it properly, as I cannot find any good documentation or examples.
Does anyone know if it is possible to access the low-level information I'm looking for on Android? Is MediaCodec the way to go? If so, does anyone have any working examples of how I could get the currently used MediaCodec and extract the information I'm looking for from it? (Or at least point me in the right direction.)
Really appreciate any help on this one.
Cheers
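One way to get at some of these numbers without any help from MediaPlayer is to fetch and parse the HLS playlists yourself, since the segment list, segment durations and available bit rates are all plain text in the .m3u8 files. A minimal sketch (the URL is hypothetical):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class HlsPlaylistProbe {

    public static void main(String[] args) throws Exception {
        // Hypothetical URL -- point this at the master or media playlist you are playing.
        List<String> lines = fetch("https://example.com/stream/index.m3u8");

        int segmentCount = 0;
        double totalDurationSec = 0;
        List<Long> variantBitrates = new ArrayList<>();

        for (String line : lines) {
            if (line.startsWith("#EXTINF:")) {
                // Media playlist: one #EXTINF per segment, value is its duration in seconds.
                segmentCount++;
                int comma = line.indexOf(',');
                String dur = comma < 0 ? line.substring(8) : line.substring(8, comma);
                totalDurationSec += Double.parseDouble(dur);
            } else if (line.startsWith("#EXT-X-STREAM-INF:") && line.contains("BANDWIDTH=")) {
                // Master playlist: one #EXT-X-STREAM-INF per available bit rate.
                String attrs = line.substring(line.indexOf("BANDWIDTH=") + "BANDWIDTH=".length());
                int end = attrs.indexOf(',');
                variantBitrates.add(Long.parseLong(end < 0 ? attrs : attrs.substring(0, end)));
            }
        }

        System.out.println("segments=" + segmentCount
                + " totalDuration=" + totalDurationSec + "s"
                + " variantBitrates=" + variantBitrates);
    }

    private static List<String> fetch(String url) throws Exception {
        List<String> result = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                result.add(line.trim());
            }
        }
        return result;
    }
}
```

Note this only tells you which segments and variants exist; which variant the player actually switches to at runtime is not exposed by MediaPlayer, so people typically either inspect the traffic through a proxy or use a player such as ExoPlayer that reports its track selections.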

Android Camera autofocus when user holds camera still

I'm sure most of you have used an Android phone before and taken a picture. Whenever the user changes the phone's position and then holds it steady, the camera focuses automatically. I'm having a hard time replicating this in my app. The autoFocus() method is called only once, when the application is launched. I have been searching for a solution for the past three days, and while reading the Google documentation I stumbled upon the sensor method calls (such as detecting when the user tilts the phone forwards or backwards). I could use that API to achieve what I need, but it sounds too dirty and too complicated. I'm sure there's another way around it.
All the examples I have found on the internet only focus when the user presses the screen or a button. I have also gone through several questions on SO hoping to find what I am looking for, but I was unsuccessful. I have seen this question, and that String is not compatible with my phone. For some reason the only focus modes I can use are fixed and auto.
I was hoping someone here would shed some light on the subject because I am at a loss.
Thank you very much for your time.
Since API 14 you can set this parameter:
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#FOCUS_MODE_CONTINUOUS_PICTURE
Yes, camera.autoFocus(callback) is a one-shot call. You will need to call it in a loop to have it autofocus continuously. Preferably you would add motion detection via the accelerometer or compass to detect when the camera is moved.
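Putting both suggestions together, here is a minimal sketch using the old android.hardware.Camera API: it prefers FOCUS_MODE_CONTINUOUS_PICTURE when the device supports it and otherwise re-triggers autoFocus() from the callback (the helper class and its names are made up):

```java
import android.hardware.Camera;
import android.os.Handler;

public class ContinuousFocusHelper {

    private final Handler handler = new Handler();

    // Re-trigger autofocus a moment after each focus attempt completes --
    // a crude "continuous" loop for devices stuck with the plain "auto" mode.
    private final Camera.AutoFocusCallback refocus = new Camera.AutoFocusCallback() {
        @Override
        public void onAutoFocus(boolean success, final Camera camera) {
            handler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    camera.autoFocus(refocus);
                }
            }, 1500);
        }
    };

    public void enableContinuousFocus(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        if (params.getSupportedFocusModes()
                  .contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
            // API 14+: the driver refocuses on its own whenever the scene changes.
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
            camera.setParameters(params);
        } else {
            // Fallback: start the manual refocus loop.
            camera.autoFocus(refocus);
        }
    }
}
```

A refinement, as the answer above suggests, would be to pause the refocus loop while the accelerometer reports that the phone is moving and kick it off again once the readings settle.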

Audio programming, Sound Processing and DSP

I was playing with a karaoke application on iPhone and came up with the following questions:
The application allowed its users to control the volume of the artist; even mute it. How is this possible?
Does adjusting the artist's sound, setting the equalizer, etc. mean performing some transformation of the required frequencies? What sort of mathematics is required here (frequency-domain transformations)?
The application recorded the user's voice input via a mic. Assuming the sound is recorded in some format, the application was able to mix the recording with the karaoke track (with the artist's voice muted). How can this be done?
Did they play both the track and the voice recording simultaneously? Or maybe they inserted an additional frequency (channel?) into the original track, or perhaps replaced it?
What sort of DSP is involved here? Is this possible in Java or Objective-C?
I am curious and if you have links to documents or books that can help me understand the mechanism here, please share.
Thanks.
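On the mixing part of the question: combining two PCM streams is essentially sample-wise addition with clipping, which is cheap enough to do in Java. A toy sketch, assuming both buffers are 16-bit mono PCM at the same sample rate (the method name is made up):

```java
// Naive mix of the recorded voice with the backing track: sample-wise addition
// with clipping, so loud passages do not wrap around and distort.
public static short[] mix(short[] backingTrack, short[] voice) {
    int length = Math.min(backingTrack.length, voice.length);
    short[] out = new short[length];
    for (int i = 0; i < length; i++) {
        int sum = backingTrack[i] + voice[i];
        // Clamp to the 16-bit sample range.
        out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
    }
    return out;
}
```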
I don't know that particular application; it probably has a separate voice track.
For generic two-channel stereo sound, the easiest voice suppression can be performed by assuming that the artist's voice is roughly equally balanced between the two channels (acoustically, it appears in the centre). So the simplest 'DSP' would be to subtract one channel from the other. However, it does not work that well with modern recordings, since all instruments and the voice are recorded separately and then mixed together (meaning the voice will not necessarily be in phase between the two channels).
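As a toy illustration of that subtraction (a sketch, assuming 16-bit interleaved stereo PCM; the method name is made up):

```java
// Naive centre-channel ("vocal") removal for 16-bit interleaved stereo PCM.
// Only works to the extent the voice really is mixed equally into both channels.
public static short[] suppressCenter(short[] interleavedStereo) {
    short[] mono = new short[interleavedStereo.length / 2];
    for (int i = 0; i < mono.length; i++) {
        int left  = interleavedStereo[2 * i];
        int right = interleavedStereo[2 * i + 1];
        int diff = (left - right) / 2;   // subtract one channel from the other
        mono[i] = (short) diff;          // already within 16-bit range after the /2
    }
    return mono;                         // mono result with the centre attenuated
}
```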
I have written two detailed blog posts on how to get a custom EQ in iOS, but I have no details about how to do the DSP yourself. If you simply want to choose from a wide range of effects and the like, try this.
First post explains how you build libsox:
http://uberblo.gs/2011/04/iosiphoneos-equalizer-with-libsox-making-it-a-framework
The second explains how to use it:
http://uberblo.gs/2011/04/iosiphoneos-equalizer-with-libsox-doing-effects
Please upvote the answer if it helped you! Thanks!
