I would like to make use of the data coming from a MIDI IN device. I need to be notified when certain events occur so that I can do things like transpose certain notes on the fly, call a method, or whatever else you can think of.
Although I'm quite new to programming in general and to Java in particular, I have already been able to play a sequence with the sequencer using javax.sound.midi. I can even add a listener that tells me when certain events are played by the sequencer. Now I was hoping to do something similar with the MIDI IN stream, but I don't know how.
Any ideas or workarounds would be welcome because I'm quite stuck at the moment.
To record MIDI data, you have to
connect the input port's Transmitter to your Sequencer's Receiver by calling the Transmitter's setReceiver method,
create new Sequence/Track objects, and connect them to the Sequencer,
enable recording on your track(s), and
start recording on your Sequencer.
(see the documentation)
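A minimal sketch of those four steps, assuming the system's default MIDI IN port and a fixed ten-second recording window:

import javax.sound.midi.*;

public class RecordMidiIn {
    public static void main(String[] args) throws Exception {
        // Default transmitter -- typically the system's default MIDI IN port
        Transmitter midiIn = MidiSystem.getTransmitter();

        Sequencer sequencer = MidiSystem.getSequencer();
        sequencer.open();

        // 1. Connect the input port's Transmitter to the Sequencer's Receiver
        midiIn.setReceiver(sequencer.getReceiver());

        // 2. Create a new Sequence/Track and connect them to the Sequencer
        Sequence sequence = new Sequence(Sequence.PPQ, 24);
        Track track = sequence.createTrack();
        sequencer.setSequence(sequence);

        // 3. Enable recording on the track (-1 records all channels)
        sequencer.recordEnable(track, -1);

        // 4. Start recording
        sequencer.startRecording();
        Thread.sleep(10000);   // record for ten seconds
        sequencer.stopRecording();

        sequencer.close();
    }
}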
I finally found the solution and I'd like to clarify the question and add my solution in case it is helpful for anybody else.
The project I'm working on (which is way over my head by the way) is a midi keyboard arranger. In case you don't know what it is, it's a keyboard that plays patterns (styles) and changes the tone and the arrangement depending on the chord played. What I needed was what I think is called a dump from the midi in port so that my program can figure out what chord has been played so that it can have the sequencer respond in different ways.
So, to answer my own original question: to do this you need to create a new transmitter for your MIDI IN port, then create a new receiver to receive the data stream from that transmitter. Finally, have the receiver send the data to a PrintStream, and then you can do whatever you want with the data stream!
Implement the send(javax.sound.midi.MidiMessage, long) method of the javax.sound.midi.Receiver interface in a class of your own and use instances of that class as you would any other Receiver object.
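For example, a minimal sketch of such a Receiver that dumps note-on events from the default MIDI IN port (chord detection would then work from the printed key numbers):

import javax.sound.midi.*;

public class DumpReceiver implements Receiver {
    @Override
    public void send(MidiMessage message, long timeStamp) {
        if (message instanceof ShortMessage) {
            ShortMessage sm = (ShortMessage) message;
            // A NOTE_ON with velocity 0 is really a note-off, so check both
            if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                System.out.println("Note on: key=" + sm.getData1()
                        + " velocity=" + sm.getData2());
            }
        }
    }

    @Override
    public void close() {}

    public static void main(String[] args) throws Exception {
        // Default transmitter -- typically the MIDI IN port
        Transmitter midiIn = MidiSystem.getTransmitter();
        midiIn.setReceiver(new DumpReceiver());
        Thread.sleep(30000); // listen for 30 seconds
    }
}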
I want to generate chords from a MIDI file, but I can't find any source code written in Java yet, so I want to write it myself. What I want to do first is gather the notes at the same position, but this is the problem: I don't know if there is a way to get a MIDI note's position using jMusic. If not, is there any way to get this information? Thank you all~
Like slim mentioned, MIDI files are basically a collection of MIDI events, which are basically hex bytes of code that correspond to MIDI actions. EVERYTHING in MIDI, from very in-depth things like tempo and instrument bank selection to typical things such as note on/off events and note volume (called velocity in MIDI), is controlled by those MIDI events. Unfortunately, you're going to need ALL of that information in order to do what you want.
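jMusic aside, the standard javax.sound.midi API already exposes each event's position as a tick count, which is exactly what you need for grouping simultaneous notes. A minimal sketch (song.mid is a placeholder path):

import javax.sound.midi.*;
import java.io.File;

public class NoteDump {
    public static void main(String[] args) throws Exception {
        Sequence sequence = MidiSystem.getSequence(new File("song.mid"));
        for (Track track : sequence.getTracks()) {
            for (int i = 0; i < track.size(); i++) {
                MidiEvent event = track.get(i);
                MidiMessage message = event.getMessage();
                if (message instanceof ShortMessage) {
                    ShortMessage sm = (ShortMessage) message;
                    if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                        // event.getTick() is the note's position in MIDI ticks;
                        // notes sharing a tick sound together (a chord candidate)
                        System.out.println("tick=" + event.getTick()
                                + " key=" + sm.getData1());
                    }
                }
            }
        }
    }
}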
My advice? There is this odd notion (one that I held before working with MIDI as well) that MIDI should be simple to work with because it's been around for so long and it's so easy to use in DAWs like FL Studio. Well, let me be the person to crush this notion for you: MIDI is complex, hard, and idiosyncratic. As a beginner, it will force you to take caffeine (to keep the gears rolling and for the headaches), Tylenol (also for the headaches), and alcohol when you realize you just worked six hours on one thing and the fatigue is setting in. Turn back now, pay Dave Smith to help you, or hit the books, because it's going to get nasty.
HOWEVER: You will never feel greater success than when your baby starts playing music.
I'm learning about Android development. Let's say I want to be able to listen to Spotify music in the background while simultaneously listening to a spoken-word podcast through some other podcast app. I've tried creating a SoundPool.Builder object and changing maxStreams to 2 when I hit a ToggleButton. However, when I run the app it makes no difference: either Spotify has the focus or the podcast app has focus.
Should I be using the AudioManager class instead, to eventually be able to control the volume of each stream independently? Also, would the phone have to be rooted to change maxStreams to 2?
I think you should check this example: MixingAudioInputStream.java
It's an example taken from here.
Check these out and try mixing both streams into a single stream yourself, as trying to code new things is the best way to learn.
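The heart of any such mixer is just summing samples. A minimal sketch for two interleaved 16-bit little-endian PCM buffers of the same format (class and method names are mine):

public class SimpleMixer {
    // Mix two 16-bit little-endian PCM buffers by averaging samples;
    // averaging (rather than plain summing) avoids clipping.
    public static byte[] mix(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        byte[] out = new byte[len];
        for (int i = 0; i + 1 < len; i += 2) {
            int s1 = (short) ((a[i + 1] << 8) | (a[i] & 0xFF));
            int s2 = (short) ((b[i + 1] << 8) | (b[i] & 0xFF));
            int mixed = (s1 + s2) / 2;
            out[i] = (byte) (mixed & 0xFF);
            out[i + 1] = (byte) ((mixed >> 8) & 0xFF);
        }
        return out;
    }
}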
So right now I'm using the TI SensorTag, which I've edited so that it sends a GATT notification with some data every time I press one of the switches on the device, following this code, where moisture is the data I'm trying to send.
static void sendData( void )
{
  // Length of the NUL-terminated moisture string
  uint8 length = 0;
  while ( moisture[length] != '\0' )
  {
    length++;
  }

  attHandleValueNoti_t nData;
  nData.len = length;
  // The handle must be the attribute handle of the notifying characteristic
  // (moistureAttrHandle is a placeholder name), not the data length
  nData.handle = moistureAttrHandle;
  osal_memcpy( nData.value, moisture, length );

  // Send the notification on connection handle 0
  GATT_Notification( 0, &nData, FALSE );
}
Now on the Java side, TI provides the SensorTag app source code, so I'm editing that to receive the data and save it into a .txt file for later retrieval. I was able to get the app to create a new directory on startup if it does not exist, and to create the .txt file and populate it with random strings using the same button press as the one used to send the data. A quick question I had about this: should it be done this way, or should I use separate buttons?
What I'm having a huge issue even understanding is how to read the incoming notification or data. From what I understand so far, you need to know the characteristic or something of the incoming notification to read it? I do have notifications enabled on my central device so I know that I have at least that covered. For this kind of data transfer, I don't need to use any UUID things, correct? And if I do, would I be able to piggyback on one of the existing sensor services to do so? Or perhaps use the test service?
I've read a decent amount on BLE communications but I just can't seem to get it. How do I read the incoming notification or data I sent from the SensorTag through BLE?
A quick question I had about this is should this be done or should I use separate buttons?
It's totally your call. If I were you, I would stick to one button, since BLE devices are better when designed in the simplest possible way. KISS.
From what I understand so far, you need to know the characteristic or something of the incoming notification to read it?
Yes, you need the same profile running on both the peripheral and the central to enable notifications. In BlueZ, for example, you run the bluetoothd daemon with all experimental profiles enabled to communicate with a TI SensorTag, like this: bluetoothd -E. The same logic applies to a central running on Java. Reference: http://www.amazon.com/Inside-Bluetooth-Communications-Sensing-Library/dp/1608075796
For this kind of data transfer, I don't need to use any UUID things, correct?
No, you don't have to, since you aren't creating a new service but rather using the moisture sensor service already available on the device.
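On the Android side, once notifications are enabled, every incoming notification is delivered to your BluetoothGattCallback. A minimal sketch (the class name is mine; filter on characteristic.getUuid() if several characteristics notify):

import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;

// Registered via device.connectGatt(context, false, new MoistureCallback())
public class MoistureCallback extends BluetoothGattCallback {
    @Override
    public void onCharacteristicChanged(BluetoothGatt gatt,
                                        BluetoothGattCharacteristic characteristic) {
        // Every notification from the peripheral arrives here
        byte[] data = characteristic.getValue();
        String moisture = new String(data); // the string copied into the notification payload
        // ... append 'moisture' to the .txt file here
    }
}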
I've read a decent amount on BLE communications but I just can't seem to get it.
To learn more about Bluetooth terminology such as profiles, services, characteristics, and the asymmetric architecture, please read the following references to understand the theory behind what's taking place:
http://www.amazon.com/Inside-Bluetooth-Communications-Sensing-Library/dp/1608075796 (use this if you are already into the technical details of the project)
http://www.amazon.com/Bluetooth-Low-Energy-Developers-Handbook/dp/013288836X/ref=pd_sim_14_1?ie=UTF8&refRID=13KZ3RZ0VW93CK91RCM3 (this gives a more general picture of the BLE)
In Turbo C++ we have a header file called dos.h which exposes three functions: sound, nosound, and delay. Using these three functions it was possible to write a rudimentary piano program in C++.
I wanted to achieve the same result using Java. My options were either to use the JFugue library or javax.sound.sampled. The problem is that I don't know beforehand the duration for which each note is played.
I want to start playing a certain frequency when the user presses a certain key and stop only when the user releases it. How may I tackle this problem?
The Java tutorials have an example where a boolean is consulted in the innermost while loop, where one packages the bytes and hands them off to the SourceDataLine for playback.
Thus, your event, perhaps a key-release event, can be written to change this boolean. Since the sound playback runs in its own thread, it is good to make the boolean volatile, and to use this method of messaging rather than trying to control the playback directly.
Let's see if I can find the tutorial example...
http://docs.oracle.com/javase/tutorial/sound/playing.html
Notice that in the example in the section "Using a Source Data Line" there is a while loop with the expression !stopped as one of its conditions. The class doing the playback in this example almost certainly has a boolean stopped, and probably has it marked volatile.
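A minimal sketch of that pattern, assuming 44.1 kHz mono 16-bit output (class and method names are mine):

import javax.sound.sampled.*;

public class ToneGenerator implements Runnable {
    private volatile boolean stopped = false; // flipped from the key-release event
    private final double frequency;

    public ToneGenerator(double frequency) { this.frequency = frequency; }

    public void stop() { stopped = true; }

    @Override
    public void run() {
        AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);
        try (SourceDataLine line = AudioSystem.getSourceDataLine(format)) {
            line.open(format);
            line.start();
            byte[] buffer = new byte[2048];
            double phase = 0, step = 2 * Math.PI * frequency / 44100;
            while (!stopped) {   // the volatile flag, checked once per buffer
                for (int i = 0; i < buffer.length; i += 2) {
                    short sample = (short) (Math.sin(phase) * Short.MAX_VALUE * 0.5);
                    buffer[i] = (byte) (sample & 0xFF);
                    buffer[i + 1] = (byte) ((sample >> 8) & 0xFF);
                    phase += step;
                }
                line.write(buffer, 0, buffer.length);
            }
            line.drain();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }
}

On key press you would run new Thread(new ToneGenerator(440.0)).start(); on key release, call stop() on that instance and the loop exits after the current buffer.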
I was playing with a karaoke application on iPhone and came up with the following questions:
The application allowed its users to control the volume of the artist's voice, and even mute it. How is this possible?
Does adjusting the artist's sound, setting an equalizer, etc. mean performing some transformation on the required frequencies? What sort of mathematics is required here (frequency-domain transformations)?
The application recorded the user's voice input via a mic. Assuming that the sound is recorded in some format, the application was able to mix the recording with the karaoke track (with the artist's voice muted). How can this be done?
Did they play both the track and the voice recording simultaneously? Or maybe they inserted an additional frequency (channel?) into the original track, or perhaps replaced it?
What sort of DSP is involved here? Is this possible in Java, Objective C?
I am curious and if you have links to documents or books that can help me understand the mechanism here, please share.
Thanks.
I don't know that particular application; it probably has a separately recorded voice track.
For generic two-channel stereo sound, the simplest voice suppression can be performed by assuming that the artist's voice is more or less equally balanced between the two channels (acoustically, it appears in the center). So the simplest 'DSP' would be to subtract one channel from the other. However, it does not work that well with modern recordings, since all instruments and the voice are recorded separately and then mixed together (meaning that the voice will not necessarily be in phase between the two channels).
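A minimal sketch of that subtraction for interleaved 16-bit little-endian stereo PCM (class and method names are mine):

public class VoiceSuppressor {
    // Naive center-channel suppression: anything panned dead center
    // (often the lead voice) cancels out when one channel is subtracted
    // from the other.
    public static byte[] removeCenter(byte[] stereo) {
        byte[] out = new byte[stereo.length];
        for (int i = 0; i + 3 < stereo.length; i += 4) {
            short left  = (short) ((stereo[i + 1] << 8) | (stereo[i] & 0xFF));
            short right = (short) ((stereo[i + 3] << 8) | (stereo[i + 2] & 0xFF));
            short diff  = (short) ((left - right) / 2); // common (centered) signal cancels
            // write the difference to both output channels
            out[i]     = (byte) (diff & 0xFF);
            out[i + 1] = (byte) ((diff >> 8) & 0xFF);
            out[i + 2] = out[i];
            out[i + 3] = out[i + 1];
        }
        return out;
    }
}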
I have written two detailed blog posts on how to get a custom EQ in iOS, but I have no details about how to do the DSP yourself. If you simply want to choose from a wide range of effects and such, try this.
First post explains how you build libsox:
http://uberblo.gs/2011/04/iosiphoneos-equalizer-with-libsox-making-it-a-framework
The second explains how to use it:
http://uberblo.gs/2011/04/iosiphoneos-equalizer-with-libsox-doing-effects
Please upvote the answer if it helped you! Thanks!