Are there libraries out there that can convert data (text files, etc) to sound and back to the original data?
The sound could be transmitted over any medium I wish, such as radio. I just need to store data in sound files.
Scenario:
step1: Convert a .docx file with embedded images to .wav.
step2: Send over a radio wave.
step3: Convert this .wav back to the .docx file with the embedded images.
This concept can be applied to any data.
Technology:
.net or java
I think the medium is important, as are other factors such as the size of the files and the transmission time available. A simple algorithm would be to convert your files to text (UUENCODE should do the trick) and then convert that to Morse code: http://www.codeproject.com/KB/vb/morsecode.aspx
Morse gives you a simple alphabet able to survive transmission over a fairly noisy radio channel.
If your carrier is cleaner, converting your UUEncoded file into a series of frequencies, one per character, would probably also work and be easy enough to decode at the other end: Frequency Analyzer in C#
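To make the "one frequency per character" idea concrete, here is a minimal sketch using javax.sound.sampled. The class name, frequency mapping, tone length and sample rate are arbitrary choices for illustration, not part of any standard, and there is no error correction:

    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import java.io.ByteArrayInputStream;
    import java.io.File;

    // Toy "one frequency per character" modulator: each printable ASCII character
    // becomes a short sine burst at its own frequency.
    public class ToneEncoder {
        static final float SAMPLE_RATE = 8000f;
        static final int SAMPLES_PER_TONE = 800;           // 100 ms per character

        public static void encode(String text, File wavOut) throws Exception {
            byte[] pcm = new byte[text.length() * SAMPLES_PER_TONE * 2]; // 16-bit mono
            int pos = 0;
            for (char c : text.toCharArray()) {
                double freq = 400 + (c - 32) * 20;         // ' ' -> 400 Hz, '~' -> 2280 Hz
                for (int i = 0; i < SAMPLES_PER_TONE; i++) {
                    short s = (short) (Math.sin(2 * Math.PI * freq * i / SAMPLE_RATE) * 12000);
                    pcm[pos++] = (byte) (s & 0xFF);        // little-endian low byte
                    pcm[pos++] = (byte) ((s >> 8) & 0xFF); // high byte
                }
            }
            AudioFormat fmt = new AudioFormat(SAMPLE_RATE, 16, 1, true, false);
            AudioInputStream ais = new AudioInputStream(
                    new ByteArrayInputStream(pcm), fmt, pcm.length / 2);
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, wavOut);
        }
    }

The decoder would run an FFT (or the Frequency Analyzer linked above) over each 100 ms window and map the dominant frequency back to a character.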
You could try to use magnetic card technology for your files; I'm also trying to do this on Android.
Any data can be converted to bytes and then to a string of characters; this is quite possible with Java and Android.
Then use the encoding mechanism of a magnetic card API to encode the string as sound. To go back the other way, convert the sound into a string, convert the string into bytes, and save the data. It takes time to convert in both directions, but it is feasible. I'm trying to do this so that anyone with an unlimited voice plan can transfer files, or in the future browse the internet, just by calling another number. I hope this gives you some ideas.
The problem is that the data in a Word document doesn't necessarily make decent sound. If you pick a 1.8 kHz carrier and use the binary contents of the Word document to modulate the volume or the frequency (AM or FM), the result will be messy and hard to decode.
But if you save the document as a bitmap, you can use the pixel values to modulate the volume of the carrier wave.
We've been sending pictures (not just black/white but greyscale and color, as three separate separations of the image: R, G and B) over phone lines using this method for many years before modems and the internet took off.
The fun part is that you can broadcast data this way. The sound can be received by more than one receiver at the same time. There's no error correction, but as you deal with visual data, you don't have to worry about a few pixels getting lost. It's similar to old fax protocols.
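As a rough illustration of that idea, here is a sketch that amplitude-modulates a fixed carrier with greyscale pixel values, one short burst per pixel. The carrier frequency, burst length and sample rate are assumptions picked for the example; the resulting samples can be written out with AudioSystem as in the tone sketch above:

    import javax.imageio.ImageIO;
    import java.awt.image.BufferedImage;
    import java.io.File;

    // Amplitude-modulates a fixed carrier with greyscale pixel values,
    // one short burst per pixel: louder = brighter.
    public class PixelAm {
        public static short[] modulate(File image) throws Exception {
            BufferedImage img = ImageIO.read(image);
            final float sampleRate = 8000f;
            final double carrier = 1800.0;                 // the 1.8 kHz carrier from above
            final int samplesPerPixel = 8;                 // ~1 ms per pixel
            short[] out = new short[img.getWidth() * img.getHeight() * samplesPerPixel];
            int pos = 0;
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int rgb = img.getRGB(x, y);
                    int gray = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                    for (int i = 0; i < samplesPerPixel; i++) {
                        double s = Math.sin(2 * Math.PI * carrier * pos / sampleRate);
                        out[pos++] = (short) (s * gray * 120);
                    }
                }
            }
            return out;   // write out with AudioSystem as in the tone sketch above
        }
    }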
Does the audio file need to be convertible using lossy compressors (MP3 etc.)? If not, you can just add a WAV container around any binary data and you'll be fine. Otherwise it gets more difficult, and you need to ensure that the audio is audible (in a reasonable frequency range when played) and be tolerant enough on the frequency detection to match the output of lossy codecs.
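For the lossless case, wrapping raw bytes in a WAV container takes only a few lines with javax.sound.sampled (the "audio" will just sound like noise, and as noted it must never pass through a lossy codec). A minimal sketch:

    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import java.io.ByteArrayInputStream;
    import java.io.File;
    import java.nio.file.Files;

    // Wraps arbitrary bytes in a WAV container as if they were 8-bit mono PCM.
    // The result survives lossless copying, but NOT lossy codecs like MP3.
    public class WavWrapper {
        public static void wrap(File input, File wavOut) throws Exception {
            byte[] data = Files.readAllBytes(input.toPath());
            AudioFormat fmt = new AudioFormat(8000f, 8, 1, false, false); // unsigned 8-bit mono
            AudioInputStream ais = new AudioInputStream(
                    new ByteArrayInputStream(data), fmt, data.length);
            AudioSystem.write(ais, AudioFileFormat.Type.WAVE, wavOut);
        }

        public static byte[] unwrap(File wavIn) throws Exception {
            try (AudioInputStream ais = AudioSystem.getAudioInputStream(wavIn)) {
                return ais.readAllBytes();   // InputStream.readAllBytes(), Java 9+
            }
        }
    }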
The best way is to convert the audio file into binary and store it in a file type you specify.
Try the AudioInputStream class in Java.
To give what I think is a better response to all of the above, have a look at packet radio and the various protocols around it. AX.25 is a good example, and there are a number of implementations of it. POCSAG is another good one. Both have libraries available for many different languages and have been around for quite a long time.
Other examples include things like WEFAX (weather fax), HFFax, SSTV (slow-scan TV), etc.
You can think of them all as being similar to the old-school phone-line modem encoders and decoders that run at around 300-2400 baud.
Related
How To Calculate JPG Data As It Loads From The Input Stream
I need to calculate RGB pixel data from a JPG file on demand. In other words, I cannot load the whole image. I need to open the stream, skip to the information I need, and ultimately return an array of RGB information I need.
I want to extract all the compression information I need, and use it to go after a specific targeted pixel.
The programming language I need to implement this in is Java. Are there any classes/APIs that will help me achieve this? Or do I need to create my own JPGInputStream?
If your JPEG stream contains a sequential frame, you could decode each scan (there are usually 1, 3, or 4) as it arrives and display it. It would look pretty funky color-wise.
If your JPEG stream contains a progressive frame, you could also decode after each scan. In that case the progression would be pretty normal.
This kind of approach was great in the days of dialup internet where it could take minutes to download a single image. These days, there tends to be little value in it.
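If you mainly need pixel values on demand rather than true scan-by-scan decoding, the standard javax.imageio API can at least restrict the decode to a source region around the pixel you care about. A sketch (the helper name is made up; how much the plugin really decodes internally is up to the plugin):

    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;
    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.Iterator;

    public class JpegRegionReader {
        // Returns the packed ARGB value of the pixel at (x, y) without holding
        // the whole decoded image in memory at once.
        public static int readRgbAt(File jpegFile, int x, int y) throws Exception {
            try (ImageInputStream in = ImageIO.createImageInputStream(jpegFile)) {
                Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
                if (!readers.hasNext()) throw new IllegalStateException("no JPEG reader found");
                ImageReader reader = readers.next();
                reader.setInput(in);
                ImageReadParam param = reader.getDefaultReadParam();
                // Ask the reader to decode only a 1x1 region; how much it really
                // decodes internally depends on the plugin (JPEG MCUs are 8x8 or 16x16).
                param.setSourceRegion(new Rectangle(x, y, 1, 1));
                BufferedImage region = reader.read(0, param);
                reader.dispose();
                return region.getRGB(0, 0);
            }
        }
    }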
I have written a program to encrypt video files; however, I cannot open them after encryption. I want the output of the encryption to still be playable with the encrypted bytes (i.e. the file should be playable in its encrypted form), like we do for .png files by keeping their header intact. Desired output reference
what you intend to do might be a little bit more complicated than what you imagine ...
in order to appear to video tools as a valid file that can actually be played, you have to understand the corresponding file formats.
with image file-formats like bitmap or png, the header is an actual header, in other words a specific structure, usually at the start of the file, that describes what follows ...
with video formats it's the same but ... isn't...
there are different container formats, and what you need to preserve and what you can encrypt might differ from one to another...
for example mpg (the format you will find on DVDs) can contain numerous streams, which can (afaik) be distributed over multiple files, with each file containing various headers at different locations (a table of contents, headers for each video and audio stream, etc ...)
for those formats you will actually have to decode the headers and calculate the positions (and lengths) of other headers...
so ... even just finding the headers is a piece of work ... which needs to be done once per supported file format, and there are a few... https://en.wikipedia.org/wiki/Video_file_format
ok, and then we leave the headers, scramble the rest and we have playable encrypted videos, right?
... sadly ... nope...
next up: video and audio encoding/compression
you will actually have to understand how frames and audio samples are compressed and encoded ... because the software that will decompress and render the images and audio actually needs valid streams; depending on the encoding, this includes checksums and error correction codes ...
but wait ... can't we just, let's say ... re-encode everything into some easy format without most of this crap, and then do something simple like skip the first X bytes and encrypt everything after that (roughly what the sketch at the end of this answer does)?
sure thing, but please remember that the original encoding was there for a reason ... maybe the video was intended to be played on certain devices that expect certain encodings -> the video would not be playable there
ok, but can't we re-encode again, just like we did before?
sure thing... but there will most likely be that slight problem with the file size...
video encodings usually employ some sort of compression ... like using the property of a video that from one frame to another usually not all pixels change ... if we just encode every few frames and the deltas in between, we can store the same video in way less storage space ... or we could employ standard compression like zip ... yeah... right... not with encrypted data ... you will have a very hard time compressing encrypted data, or saving space with the delta approach ... read up on entropy and how compression works for this one ...
oh and one more thing about re-encoding after encryption: if you ever want to decrypt, you better make sure that the new encoding can be reversed without any loss of information... not all codecs are lossless
so... why does it have to be playable? is it worth the effort?
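For completeness, here is a naive sketch of the "skip the first X bytes, encrypt the rest" idea mentioned above, using AES in CTR mode so the file size stays the same. HEADER_BYTES is a made-up constant: as explained above, real container formats don't have a fixed-size header sitting at the front, so this alone will usually not give you a playable file; it just shows the mechanics.

    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class NaiveVideoCrypt {
        static final int HEADER_BYTES = 4096;   // assumption, not a real format rule

        // mode is Cipher.ENCRYPT_MODE or Cipher.DECRYPT_MODE; key and iv are 16 bytes each.
        public static void process(Path in, Path out, byte[] key, byte[] iv, int mode)
                throws Exception {
            byte[] data = Files.readAllBytes(in);
            Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");  // keeps the size unchanged
            cipher.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
            byte[] body = cipher.doFinal(data, HEADER_BYTES, data.length - HEADER_BYTES);
            byte[] result = new byte[data.length];
            System.arraycopy(data, 0, result, 0, HEADER_BYTES);       // keep the "header" as-is
            System.arraycopy(body, 0, result, HEADER_BYTES, body.length);
            Files.write(out, result);
        }
    }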
I have tons of ripped .wav files (I'm ready to convert them into FLACs if that's easier) whose details I want to insert into a MySQL database. When I right-click the .wav files in Windows Explorer (not the browser) and select Properties -> Details, I can see some details about the song, for example the artist, genre and duration. How can I read and edit these details in Java?
To get duration information, see this link: Java - reading, manipulating and writing WAV files
Essentially, a WAV file is broken up into chunks, which either contain audio data or describe the audio data in some way. If the reader doesn't understand one of those chunks it is able to skip it, which allows placing a lot of different kinds of information in the file. One of those chunks contains information like the sample rate, number of channels and total number of sample frames, from which you can calculate the length.
For artist, genre and so on... well, there's no standard chunk for that, so if that's really in the file, and not in the Windows db somewhere, it's probably stored in ID3 tags embedded in the WAV. I don't know for sure what the chunkID is for ID3, but it's probably "id3 " or "ID3 " (including the space). You could probably figure this out by searching for strings of length 4 or more in the file -- usually data chunks are at the beginning and audio is at the end. (On unix/macos I would use the "strings" command, maybe with "head".) ID3 tags are standard for MP3, and you can figure out how to parse them by googling. To get to them, you'll need to understand WAV files first, at least enough to know what chunks are, what chunkIDs are, how to skip chunks you don't care about, and so on.
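To make the chunk structure concrete, here is a minimal RIFF/WAV chunk walker that prints every chunk ID it finds (so an "id3 " chunk would show up) and computes the duration from the "fmt " and "data" chunks. Error handling is deliberately minimal:

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Walks the RIFF chunks of a WAV file, prints every chunk ID it finds and
    // computes the duration from the "fmt " and "data" chunks.
    public class WavChunks {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                in.skipBytes(12);                        // "RIFF", file size, "WAVE"
                int byteRate = 0;
                long dataSize = -1;
                byte[] id = new byte[4];
                while (in.read(id) == 4) {
                    String chunkId = new String(id, "US-ASCII");
                    long size = readLeInt(in) & 0xFFFFFFFFL;
                    System.out.println("chunk '" + chunkId + "' size=" + size);
                    if (chunkId.equals("fmt ")) {
                        in.skipBytes(4);                 // audio format + channel count
                        int sampleRate = readLeInt(in);
                        byteRate = readLeInt(in);
                        System.out.println("sample rate: " + sampleRate + " Hz");
                        in.skipBytes((int) size - 12);   // rest of the fmt chunk
                    } else if (chunkId.equals("data")) {
                        dataSize = size;
                        in.skipBytes((int) size);        // skip the audio samples
                    } else {
                        in.skipBytes((int) size);        // unknown chunk (could be "id3 ")
                    }
                    if (size % 2 == 1) in.skipBytes(1);  // chunks are word-aligned
                }
                if (byteRate > 0 && dataSize >= 0)
                    System.out.println("duration: " + (dataSize / (double) byteRate) + " s");
            }
        }

        private static int readLeInt(DataInputStream in) throws IOException {
            int b0 = in.readUnsignedByte(), b1 = in.readUnsignedByte();
            int b2 = in.readUnsignedByte(), b3 = in.readUnsignedByte();
            return (b3 << 24) | (b2 << 16) | (b1 << 8) | b0;
        }
    }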
I don't know of a library that will read ID3 tags in WAV files in Java, so you'll either have to write one, or wrap one written in another language. I suspect libsndfile will work, but it doesn't have an MP3 reader, so maybe not. You could also try SOX. You can also check out http://javamusictag.sourceforge.net/ which I've never used, but it came up in a search.
good luck!
I ended up converting them into FLAC files and using JAudiotagger. Thanks for the responses; this is the route I ended up taking.
http://www.jthink.net/jaudiotagger/
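Reading and editing the tags with JAudiotagger then looks roughly like this (method names recalled from the JAudiotagger API, so double-check them against the current docs):

    import org.jaudiotagger.audio.AudioFile;
    import org.jaudiotagger.audio.AudioFileIO;
    import org.jaudiotagger.tag.FieldKey;
    import org.jaudiotagger.tag.Tag;
    import java.io.File;

    public class TagReader {
        public static void main(String[] args) throws Exception {
            AudioFile f = AudioFileIO.read(new File(args[0]));
            Tag tag = f.getTag();
            System.out.println("artist:   " + tag.getFirst(FieldKey.ARTIST));
            System.out.println("genre:    " + tag.getFirst(FieldKey.GENRE));
            System.out.println("duration: " + f.getAudioHeader().getTrackLength() + " s");

            tag.setField(FieldKey.GENRE, "Jazz");   // editing works the same way
            f.commit();                             // writes the changes back to the file
        }
    }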
I want to write two simple utilities:
Receives a binary file and converts it to a text file (ASCII format).
Receives a text file in the format of the above file and restores the original binary file.
The reason I need this is very stupid, but it's still a reason. I have two computers - one with internet access and one without. I write software on the one without internet. I get emails on the second one. I need to transfer binary files from one to the other (e.g. jars), but the only communication channel between them is a clipboard (text only).
Might be a very localized problem - but I assume it has some solution in the worlds of data encryption/compression/network transfer.
The only thing I could come up with is to go over the binary file and convert each byte into its hex representation - so for every byte I'll get two ASCII characters (i.e. two bytes). Is there anything better? (This solution doubles the amount of data and might not be possible to transfer via clipboard.)
One limitation - I need it as a java based solution (I want to write it myself)
Google for Base64, and use Apache Commons Codec for a ready-to-use implementation.
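If writing it yourself is part of the point, note that the JDK already ships Base64 (java.util.Base64, Java 8+), so no external library is needed on the offline machine; the overhead is roughly 33% instead of the 100% you get with hex. A minimal sketch:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;

    public class ClipboardCodec {
        // binary -> clipboard-friendly text (wrapped in 76-character lines)
        public static String toText(String binaryPath) throws Exception {
            byte[] data = Files.readAllBytes(Paths.get(binaryPath));
            return Base64.getMimeEncoder().encodeToString(data);
        }

        // text -> original binary file
        public static void toBinary(String text, String outPath) throws Exception {
            Files.write(Paths.get(outPath), Base64.getMimeDecoder().decode(text));
        }
    }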
I need to split an MPEG-4 video stream (actually from the Android video camera) to send it through RTP.
The specification is a little too large for quick reference.
I wonder if there is any example/open-source code for MPEG-4 packetization?
Thanks for any help !
The MPEG-4 file format is also called ISO/IEC 14496-14. Google it and you will find the specification.
However, what you are trying to do (RTP publisher) will be hard for the following reasons:
MPEG-4 has its header at the end of the file, which means the header will only be written out when the video stream is finished. Since you want to do real-time video streaming, you will need to guess where audio and video packets start/end. This will not be the same on all Android devices, as they might use different video sizes and codec parameters, so your code will be device-dependent and you'll need to support and test many different devices.
Some devices do not flush video data to file in regular intervals. Some only flush once a minute or so. This will break your real-time stream.
There is no example code. I know because I looked. There are a few companies that do something similar, but mainly they skip RTP. Instead they progressively upload the file to their own server, implement video/audio stream "chopping" there, and then feed it into their video/transcoder backend. I used to work for one of those companies and that's how we did it. AFAIK the competition took similar approaches. The upside is that all the complexity is on the server and you do not need to update clients when something breaks or new devices arrive on the market.
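For the RTP side in isolation, the packet framing itself is straightforward; here is a bare-bones sketch that prepends an RFC 3550 header to a payload chunk. Payload type 96 and the SSRC are arbitrary choices, and real MPEG-4 packetization (RFC 3016/6416) additionally requires splitting the elementary stream on access-unit boundaries, which this sketch does not attempt:

    import java.nio.ByteBuffer;

    // Prepends a bare RTP header (RFC 3550) to a payload chunk.
    public class RtpPacketizer {
        private int sequence = 0;
        private final int ssrc = 0x12345678;                    // arbitrary stream identifier

        public byte[] packetize(byte[] payload, long timestamp, boolean lastOfFrame) {
            ByteBuffer buf = ByteBuffer.allocate(12 + payload.length);
            buf.put((byte) 0x80);                               // V=2, P=0, X=0, CC=0
            buf.put((byte) ((lastOfFrame ? 0x80 : 0x00) | 96)); // marker bit + payload type 96
            buf.putShort((short) (sequence++ & 0xFFFF));        // sequence number
            buf.putInt((int) timestamp);                        // 90 kHz clock for video
            buf.putInt(ssrc);
            buf.put(payload);
            return buf.array();
        }
    }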