I want to be able to take in a music file, analyze it, and then make lights light up to the music. The only problem is, with the board that I built, running the code through Processing introduces a big delay, maybe an obvious 10 ms gap. I need a way to either bring the program back into Arduino or somehow lower the response time. Any ideas?
It's unclear how you're dealing with the serial communication and where the bottlenecks would start to show up (audio processing/serial comms/both/something else/etc).
Regardless, if you want to do sound analysis on Arduino alone, that will be a challenge, as you'll have far fewer resources for the FFT number crunching on an 8-bit microcontroller.
I would go one of two ways:
Do the sound analysis as efficiently as possible on the computer and map it to lights so the software (Processing) only sends minimal data to the firmware (Arduino): just light data, on a need-to-know basis. If you have a ridiculous number of lights you might want to use a serial converter that can handle higher baud rates, but in most cases you shouldn't need that.
Do minimal sound analysis on the Arduino. If you get your light animations right, you can make something sound-reactive using just the amplitude and a bit of easing, without getting into FFT/MFCC or anything fancier. If you really, really want the lights to respond to frequencies, consider using a 7-band frequency analyser chip like the MSGEQ7. There are Arduino breakouts that make that easier.
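Either way, the computer-side code can stay tiny. Here's a minimal Processing sketch of that idea, assuming the Minim library that ships with Processing, a file called song.mp3 and an Arduino listening on the first serial port; it maps the track's amplitude to a single brightness byte per frame:

```java
// Minimal sketch: map the track's amplitude to one brightness byte per frame.
// Assumes Minim (bundled with Processing), a song.mp3 in the data folder and
// an Arduino on the first serial port that reads one byte and dims an LED.
import ddf.minim.*;
import processing.serial.*;

Minim minim;
AudioPlayer song;
Serial port;
float eased = 0;   // smoothed brightness, 0..1

void setup() {
  minim = new Minim(this);
  song = minim.loadFile("song.mp3");             // hypothetical file name
  port = new Serial(this, Serial.list()[0], 115200);
  song.play();
}

void draw() {
  float level = song.mix.level();                // RMS amplitude of the current buffer, 0..1
  eased += (level - eased) * 0.2;                // a bit of easing so the lights don't flicker
  port.write((int)(eased * 255));                // one byte per frame, on a need-to-know basis
}
```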
I would like to make a program that can transfer data as pulses of a certain frequency, but I am unsure how to detect whether a frequency is present.
I would assume I need to filter out all unneeded frequencies, but I can't seem to find anything on how to do this.
Are there any libraries that already do this or would I have to build my own? Are there any examples of this or something similar being done?
It looks as though you're trying to implement a modem, and you would be well advised to look at the proven modulation techniques used for this purpose, usually QPSK and QAM. The technique you imply in your question is a crude form of amplitude modulation: essentially modulating a carrier of a given frequency with a bit-stream. Heterodyning might be a good place to start when demodulating this. Using an FFT will yield poor results because of the sampling effect of windowing, which will result in poor bandwidth.
Another practical problem you will face once you've demodulated the signal is clock recovery. It is highly probable that the original bitstream clock will be asynchronous with the sample clock at the receiver. In order to decode the data stream, you will need to recover the sender's clock (that is to say, the relationship between it and a local clock). A phase-locked loop (PLL) is the usual way of achieving this.
You will also need to work out how to detect the start of the bit-stream, e.g. some kind of framing.
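To give a flavour of the heterodyne idea (leaving aside the QPSK/QAM, clock-recovery and framing machinery above), here is a rough Java sketch that checks whether a carrier of a known frequency is present in a block of samples; the class name and the 0.1 threshold are made up for illustration:

```java
// Sketch: detect whether a carrier of frequency f is present in a block of
// samples by mixing with a local oscillator and averaging (a one-bin DFT).
public class ToneDetector {
    /** samples normalised to -1..1, sampleRate in Hz, f = carrier frequency in Hz. */
    public static boolean tonePresent(double[] samples, double sampleRate, double f) {
        double re = 0, im = 0;
        for (int n = 0; n < samples.length; n++) {
            double phase = 2 * Math.PI * f * n / sampleRate;
            re += samples[n] * Math.cos(phase);   // mix with in-phase oscillator
            im += samples[n] * Math.sin(phase);   // mix with quadrature oscillator
        }
        // Magnitude of the averaged product is proportional to the carrier amplitude.
        double magnitude = Math.sqrt(re * re + im * im) / samples.length;
        return magnitude > 0.1;                   // threshold chosen arbitrarily
    }
}
```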
You'll want to apply a Fourier transform to the signal data to look at it in the frequency domain. JTransforms is an open source library you can use to do that.
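For example, a minimal sketch of getting bin magnitudes out of JTransforms could look like this (the package is org.jtransforms.fft in JTransforms 3.x; older releases used edu.emory.mathcs.jtransforms.fft):

```java
// Minimal sketch: compute the magnitude spectrum of a real signal with JTransforms.
import org.jtransforms.fft.DoubleFFT_1D;

public class SpectrumExample {
    public static double[] magnitudes(double[] signal) {
        int n = signal.length;
        double[] data = signal.clone();        // realForward works in place
        new DoubleFFT_1D(n).realForward(data);
        double[] mags = new double[n / 2];
        mags[0] = Math.abs(data[0]);           // DC component
        for (int k = 1; k < n / 2; k++) {      // bin k corresponds to k * sampleRate / n Hz
            double re = data[2 * k];
            double im = data[2 * k + 1];
            mags[k] = Math.sqrt(re * re + im * im);
        }
        return mags;
    }
}
```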
I'm working on a project where I have to identify similar patterns between wave files whose frequencies differ.
For example, the human voice frequency differs from person to person. If I have to identify whether a voice is crying, shouting or laughing, there should be a pattern common to crying voices regardless of the frequency.
So I'm looking for an algorithm that can identify these elements.
You could start by taking a look at Neural Networks. These types of programs usually deal well with certain inconsistencies in your data. The Neuroph Studio provides you with a quick and relatively easy way to construct your Neural Network.
All you need is a set of data containing whatever you want to match. You can use about 70% of this data to let your Neural Network learn to cluster your data and then, use the remaining 30% to test your Neural Network.
The main issue with Neural Networks is that you need to find a way to encode your data into input vectors. Once you do that, the Neural Network should try and learn to find the differences on its own.
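As an illustration of that encoding step (this is not the Neuroph API, just a generic example): one simple way to turn a recording into a fixed-size input vector is to split it into frames and use the per-frame energy:

```java
// Illustrative only: encode a mono recording (samples in -1..1) into a
// fixed-length input vector of per-frame RMS energies for a neural network.
public class FeatureEncoder {
    public static double[] frameEnergies(double[] samples, int numFrames) {
        double[] features = new double[numFrames];
        int frameLength = samples.length / numFrames;
        for (int f = 0; f < numFrames; f++) {
            double sum = 0;
            for (int i = f * frameLength; i < (f + 1) * frameLength; i++) {
                sum += samples[i] * samples[i];
            }
            features[f] = Math.sqrt(sum / frameLength);   // RMS of this frame
        }
        return features;
    }
}
```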
For image-based recognition, Principal Component Analysis and its siblings, like Kernel PCA or Linear Discriminant Analysis, are the right thing. PCA is an algorithm which works on any kind of data, so I think it works on sound as well.
I would convert the wav into int vectors and run a PCA on it to extract the features.
JMathTools is very good for that.
Hope I could help you.
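As a rough sketch of that idea (using Apache Commons Math here purely for illustration, not JMathTools), PCA boils down to building the covariance matrix of your sample vectors and taking the eigenvectors with the largest eigenvalues as features:

```java
// Rough PCA sketch with Apache Commons Math (used here instead of JMathTools
// purely as an illustration). Each row of 'data' is one recording converted
// to a fixed-length vector of samples or features.
import org.apache.commons.math3.linear.EigenDecomposition;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.RealVector;
import org.apache.commons.math3.stat.correlation.Covariance;

public class PcaSketch {
    /** Returns the first principal component (eigenvector of the largest eigenvalue). */
    public static RealVector firstComponent(double[][] data) {
        RealMatrix cov = new Covariance(data).getCovarianceMatrix();
        EigenDecomposition eig = new EigenDecomposition(cov);
        double[] values = eig.getRealEigenvalues();
        int best = 0;
        for (int i = 1; i < values.length; i++) {
            if (values[i] > values[best]) best = i;   // pick the largest eigenvalue explicitly
        }
        return eig.getEigenvector(best);
    }
}
```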
I am making a game in Java and it is going well. I want to implement multiplayer early on so I can build on it, instead of porting the entire game to multiplayer once it has a ton of different features.
I would like to make it a Client / Server application.
Now I'm not sure how or what to implement for the multiplayer. I have read the Java tutorials about sockets, tested them and made a successful connection (in a test project), but I am not sure where to go from here. I don't know how I would transfer, for example, where the different players are on the map, or even whether there ARE any players at all. I don't know whether to use a library or do it myself. If anyone could give me some kind of guideline on how I would transfer player data or anything like that through a TCP connection, or point me to a library that makes it simpler, that would be great.
This is a pretty wide question and there are multiple ways to do things, but here's my take on it. Disclaimer: I am the server system architect of a mobile multiplayer gaming company. I don't consider myself an expert on these things, but I do have some experience both from my work and my hobbies (I wrote my first "MMORPG" that supported a whopping 255 players in 2004), and feel that I can poke you in the right direction. For most concepts here, I still suggest you do further research using Google, Stack Overflow etc.; this is just my "10,000-foot view" of what is needed for game networking.
Depending on the type of game you are making (think real-time games like first-person shooters vs. turn-based games like chess), the underlying transport layer protocol choice is important. Like Matzi suggested, UDP gives you lower latency (and lower packet overhead, as the header is smaller than TCP's), but on the downside the delivery of the packet to the destination is never guaranteed, i.e. you can never be sure if the data you sent actually reached the client, or, if you sent multiple packets in a row, whether the data arrived in the correct order. You can implement a "reliable UDP" protocol by acknowledging the arrived data with separate messages (although again, if the acknowledgements use UDP, they can also get lost) and handling the order with some extra data, but then you're (at least partially) losing the lower latency and lower overhead. TCP on the other hand guarantees delivery of the data and that the order stays correct, but has higher latency due to packet acknowledgements and overhead (TCP packets have larger headers). You could say that UDP packets are sort of like "separate entities", while TCP is a continuous, unbreaking stream (you need some way to distinguish where one message ends and another begins).
There are games that use both; separate TCP-connection for important data that absolutely must make it to the client, like player death or such, and another UDP-connection for "fire and forget" -type of data, like the current position of the player (if the position does not arrive to another client, and the player is moving, there's not much point of sending the data again, because it's probably already outdated, and there's going to be another update in a short while).
After you've selected UDP and/or TCP for the transport, you still probably need a custom protocol that encodes and decodes the data ("payload") the TCP/UDP packets move around. For games, the obvious choice is some binary protocol (vs. text-based protocols like HTTP). A simple binary protocol could, for example, mark the total number of bytes contained in the message, then the type of data, the data-field length and the actual data of the field (repeated for the number of fields per message). This can be a bit tricky, so at least for starters you could just serialize and deserialize your message objects, then look at already existing protocols or cook your own (it's really not that hard). When you get the encoding and decoding of basic data types (like Strings, ints, floats...) working and some data moving, you need to design your own high-level protocol, that is, the actual messages your game and server will use to talk to each other. These messages are the likes of "player joined game", "player left game", "player is at this location, facing there and moving this way at this speed", "player died", "player sent a chat message" etc.
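As a sketch of what such a hand-rolled, length-prefixed binary message could look like on top of a TCP stream (the message layout, type constant and names are made up for this example):

```java
// Illustration of a simple length-prefixed binary message over a TCP stream.
// The layout (length, type, then fields) is invented for this example.
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MessageCodec {
    static final byte TYPE_PLAYER_POSITION = 1;   // hypothetical message type

    /** Encode a "player is at x,y" message: [length][type][playerId][x][y]. */
    public static void writePosition(DataOutputStream out, int playerId,
                                     float x, float y) throws IOException {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        DataOutputStream payload = new DataOutputStream(body);
        payload.writeByte(TYPE_PLAYER_POSITION);
        payload.writeInt(playerId);
        payload.writeFloat(x);
        payload.writeFloat(y);
        byte[] bytes = body.toByteArray();
        out.writeInt(bytes.length);   // length prefix marks where the message ends
        out.write(bytes);
        out.flush();
    }

    /** Read one whole message; the length prefix tells us how many bytes belong to it. */
    public static byte[] readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] bytes = new byte[length];
        in.readFully(bytes);
        return bytes;
    }
}
```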
In real-time games you have some other challenges too, like predicting the position of the player (remember that the data the client sent could easily be hundreds of milliseconds old when it arrives at another player's client, so you need to "guess" where the player is at the time of arrival). Try googling for things like "game dead reckoning" and "game network prediction"; Gamasutra also has a pretty good article: Dead Reckoning: Latency Hiding for Networked Games, and there are probably loads of others to be found.
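A very small sketch of the prediction idea (the articles above go much further):

```java
// Minimal dead-reckoning sketch: extrapolate a remote player's position from
// the last received position, velocity and the age of that update.
public class RemotePlayer {
    double x, y;        // last reported position
    double vx, vy;      // last reported velocity (units per second)
    long reportedAt;    // local time (ms) when the update arrived

    void onUpdate(double newX, double newY, double newVx, double newVy) {
        x = newX; y = newY;
        vx = newVx; vy = newVy;
        reportedAt = System.currentTimeMillis();
    }

    /** Best guess of where the player is right now. */
    double[] predictedPosition() {
        double dt = (System.currentTimeMillis() - reportedAt) / 1000.0;
        return new double[] { x + vx * dt, y + vy * dt };
    }
}
```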
Another thing that you need to think about is the concurrency of the server-side code. Many people will tell you that you need to use Java NIO to achieve good performance and that using a thread per connection is bad, but actually, at least on Linux using the Native POSIX Thread Library (NPTL, which pretty much any modern Linux distribution has out of the box), the situation is reversed; for reference, see here: Writing Java Multithreaded Servers - whats old is new. We have servers running 10k+ threads with thousands of users and not choking (of course at any given time, the vast majority of those threads will be sleeping, waiting for client messages or messages to send to the client).
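For reference, a bare-bones thread-per-connection server in that style could look like this (error handling and the actual game logic are omitted; the port and framing are arbitrary):

```java
// Bare-bones thread-per-connection server: one thread blocks on each client's socket.
import java.io.DataInputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class GameServer {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(7777);   // port chosen arbitrarily
        while (true) {
            Socket client = server.accept();            // one thread per client
            new Thread(() -> handle(client)).start();
        }
    }

    static void handle(Socket client) {
        try (DataInputStream in = new DataInputStream(client.getInputStream())) {
            while (true) {
                int length = in.readInt();              // same length-prefixed framing as above
                byte[] message = new byte[length];
                in.readFully(message);
                // ...decode the message and broadcast updated state to other clients...
            }
        } catch (Exception e) {
            // client disconnected; drop the connection
        }
    }
}
```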
Lastly, you need to measure how much computing power and bandwidth your game is going to need. For this, you need to measure how much load a given (server?) hardware can take with your software and how much traffic your game causes. This is important for determining how many clients you can support per server and how fast a network connection you need (and how much traffic quota per month).
Hope this helped answer some of your questions.
First of all, fast-paced multiplayer games generally use UDP for data transfer. There are a lot of reasons for this, for example lower lag and such. If your game contains intensive action and needs fast reactions, then you should choose something based on UDP.
There are probably solutions for gaming on the web, but it is not that hard to write your own implementation either; if you have problems with that, you will probably have problems with the rest of the game. There are non-game-oriented libraries and solutions on the net, even in Java, but they are mostly not designed for something as fast as a game can be. A remote procedure call, for example, can involve costly serialization and generate a much larger packet than you really need. They can be convenient solutions, but have poor performance for games as opposed to regular business applications.
For example, if you have 20 players, each has coordinates, state and of course moving objects. You need at least 20 updates per second to avoid noticeable lag, and this means a lot of traffic: 20×20 incoming messages with user input, and 20×20 outgoing messages containing a lot of information. Do the math. You must pack all the players and as much object data into one packet as you can to get optimal performance. This means you probably have to write small data packages which can be serialized into a byte stream quite easily, and they must contain only essential information. If you lose some data it is not a problem, but you need to take care that important information reaches its destination; e.g. you don't want players to miss a message about their death.
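To make "do the math" concrete with some assumed numbers: if each player's state update is around 30 bytes and the server sends each of the 20 clients the state of all 20 players 20 times per second, that is roughly 20 × 20 × 20 × 30 B ≈ 240 kB/s (about 2 Mbit/s) of outgoing payload before packet headers, while the incoming input messages are comparatively tiny. The real figures depend entirely on how compactly you pack your data, but this kind of back-of-the-envelope estimate tells you what the server's connection has to handle.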
I wrote a reliable and usable network "library" in C#, and it is not a huge amount of work, but it's recommended to look around and build it well. This is a good article about the topic; read it. Even if you use an external library, it is good to have a grasp of what it is doing and how you should use it.
For communication between VMs, it doesn't get much simpler than RMI. With RMI you can call methods on an object on an entirely different computer. You can use entire objects as arguments and return values. Thus notifying the server of your move can be as simple as server.sendMove(someMoveObject, somePlayerObject, someOtherObject).
If you're looking for a starting point, this could be a good one.
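A minimal sketch of what that can look like (the GameService interface, the sendMove signature and the port are invented for this example):

```java
// Minimal RMI sketch: an interface of remote methods, an implementation
// exported on the server, and a client that calls it like a local object.
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

interface GameService extends Remote {
    void sendMove(String playerId, int x, int y) throws RemoteException;
}

class GameServiceImpl extends UnicastRemoteObject implements GameService {
    GameServiceImpl() throws RemoteException { super(); }

    public void sendMove(String playerId, int x, int y) throws RemoteException {
        System.out.println(playerId + " moved to " + x + "," + y);
    }
}

public class RmiServer {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);  // default RMI registry port
        registry.rebind("game", new GameServiceImpl());
        System.out.println("Server ready");
    }
}

// On the client side:
// GameService server = (GameService) LocateRegistry.getRegistry("localhost", 1099).lookup("game");
// server.sendMove("player1", 10, 20);
```

Note that any objects you pass as arguments or return values must be Serializable (or themselves Remote).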
I was playing with a karaoke application on iPhone and came up with the following questions:
The application allowed its users to control the volume of the artist, and even mute it. How is this possible?
Does adjusting the artist's sound, setting the equalizer, etc. mean performing some transformation on the required frequencies? What sort of mathematics is required here (frequency-domain transformations)?
The application recorded the user's voice input via a mic. Assuming that the sound is recorded in some format, the application was able to mix the recording with the karaoke track (with the artist's voice muted). How can this be done?
Did they play both the track and the voice recording simultaneously? Or maybe they inserted an additional frequency (channel?) into the original track, or perhaps replaced it?
What sort of DSP is involved here? Is this possible in Java or Objective-C?
I am curious, and if you have links to documents or books that can help me understand the mechanism here, please share.
Thanks.
I don't know that particular application; it probably has the voice recorded on a separate track.
For generic 2-channel stereo sound, the easiest voice suppression can be performed by assuming that the artist's voice is roughly equally balanced between the two channels (acoustically it appears in the center). So the simplest 'DSP' would be to subtract one channel from the other. It does not work that well with modern recordings, however, since all instruments and vocals are recorded separately and then mixed together (meaning the voice will not necessarily be in phase between the two channels).
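A toy version of that subtraction on interleaved 16-bit stereo samples might look like this (purely a sketch; as noted above, modern mixes rarely cancel this cleanly):

```java
// Toy center-channel ("vocal") suppression: subtract the right channel from
// the left on interleaved stereo samples. Works only to the extent the voice
// really is identical in both channels.
public class VocalSuppressor {
    /** stereo holds interleaved samples: L0, R0, L1, R1, ... Returns a mono signal. */
    public static short[] suppressCenter(short[] stereo) {
        short[] mono = new short[stereo.length / 2];
        for (int i = 0; i < mono.length; i++) {
            int left = stereo[2 * i];
            int right = stereo[2 * i + 1];
            int diff = (left - right) / 2;   // divide by 2 to stay within 16-bit range
            mono[i] = (short) diff;
        }
        return mono;
    }
}
```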
I have written two detailed blog posts on how to get a custom EQ in iOS, but I have no details about how to do the DSP yourself. If you simply want to choose between a wide range of effects and such, try this.
The first post explains how to build libsox:
http://uberblo.gs/2011/04/iosiphoneos-equalizer-with-libsox-making-it-a-framework
The second explains how to use it:
http://uberblo.gs/2011/04/iosiphoneos-equalizer-with-libsox-doing-effects
Please upvote the answer if it helped you! Thanks!