I'm making a frequency visualizer in Java (& JNI), and have it working so far with linear scales on the X axis (frequency) and Y axis (amplitude). Converting the Y axis into a logarithmic scale was as simple as plotting the log of the amplitude, but I'm having trouble at a conceptual level understanding how I would go about making the X axis scale in a logarithmic fashion (such as on this tone generator).
Here is my current initialization of each band, where each band represents an equal portion of the total frequencies:
bands = new VisualizerBand[numOfBands];
float physicalBandWidth = ((float) this.getWidth()) / numOfBands; // So as not to exceed 1 band per pixel
float frequencyBandWidth = maxFrequency / numOfBands; // Each band represents an equal percent of the total frequencies represented
for (int i = 0; i < bands.length; i++){
    bands[i] = new VisualizerBand();
    ...
    bands[i].setStartFrequency(i * frequencyBandWidth);
    bands[i].setEndFrequency(bands[i].getStartFrequency() + frequencyBandWidth);
    ...
}
From my research and attempts, it seems like the solution will involve deriving each band's frequency bandwidth from its position in the window, but I'm struggling to get past that. Any advice or guidance would be appreciated.
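For what it's worth, here is a sketch of where I think this is heading, though I'm not sure it's right: the bands would cover equal frequency ratios instead of equal differences, and minFrequency is a value I'd have to pick (say 20 Hz), since 0 Hz can't sit on a log axis.

float minFrequency = 20.0f; // assumed lower bound; a log axis cannot start at 0 Hz
// Each band spans the same ratio, so endFrequency / startFrequency is constant
float ratio = (float) Math.pow(maxFrequency / minFrequency, 1.0 / numOfBands);
for (int i = 0; i < bands.length; i++){
    bands[i] = new VisualizerBand();
    bands[i].setStartFrequency(minFrequency * (float) Math.pow(ratio, i));
    bands[i].setEndFrequency(minFrequency * (float) Math.pow(ratio, i + 1));
}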
I'm currently working on a terrain engine and I'm experimenting a little bit with noise. It's fascinating to see what different structures, functions, and pure imagination can create with just a few lines of code. Recently I saw this post: http://squall-digital.com/ProceduralGeneration.html. I was definitely intrigued by all of these techniques, but the first one especially caught my attention. The programmer made the gain (or persistence) of the noise proportional to the slope of the noise at that point. I'm currently trying to achieve this, but I don't think I'm on the right track.
I'm currently using simplex noise. I know the author of the article uses Perlin noise, and yes, I have seen how to calculate the derivative of Perlin noise, but that implementation obviously wouldn't carry over because of the fundamental differences in how Perlin and simplex noise are generated. I thus set out on my own to try to approximate the slope of the noise at a given position.
I came up with the following "algorithm":
Calculate the neighboring points of the noise [(x + 1, z), (x - 1, z), (x, z + 1), (x, z - 1)].
Calculate their respective noise values.
Calculate differenceX and differenceZ in the noise values along the x-axis and the z-axis respectively.
Create vectors from the origin: (2, differenceX, 0) and (0, differenceZ, 2).
Scale them to vectors of length 1.
Add the y-components of the resulting unit vectors.
Use this y-component as the "slope" approximated at the given point.
Now I have implemented this in code (I added "3D" vectors for ease of understanding):
private static float slope(OpenSimplex2F simplex, float x, float z, float noise) {
    float[] neighbours = getStraightNeighbours(simplex, x, z);
    float xSlope = (neighbours[1] - neighbours[0]) / (2.0f * x);
    float zSlope = (neighbours[3] - neighbours[2]) / (2.0f * z);
    float[] vecX = new float[] { 1, xSlope, 0 };
    float[] vecZ = new float[] { 0, zSlope, 1 };
    float scaleX = Maths.sqrt(1.0f + xSlope * xSlope);
    float scaleZ = Maths.sqrt(1.0f + zSlope * zSlope);
    for (int i = 0; i < 3; i++) {
        vecX[i] /= scaleX;
        vecZ[i] /= scaleZ;
    }
    float[] grad = new float[] {
        vecX[0] + vecZ[0],
        vecX[1] + vecZ[1],
        vecX[2] + vecZ[2]
    };
    return grad[1];
}
Now this gives me extremely underwhelming and, rest assured, wrong results (image: Result).
Can anyone explain to me whether this is a good technique for approximating the slope, or if it is completely wrong? I'm not the biggest math genius, so I was already happy I could figure this out and that it produced a result in the first place. If anyone has a resource on the derivative of simplex noise (which would be a life saver, obviously), it'd be really appreciated!
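For comparison, a plain central-difference sketch, where the divisor is the sampling step (2h) rather than the coordinates; noise(...) is an assumed helper around the simplex sampler, and the actual method name may differ:

// Comparison sketch: standard central differences. The divisor is the sampling
// step (2 * h), not the coordinate itself.
private static float slopeCentralDifference(OpenSimplex2F simplex, float x, float z, float h) {
    float dX = (noise(simplex, x + h, z) - noise(simplex, x - h, z)) / (2.0f * h);
    float dZ = (noise(simplex, x, z + h) - noise(simplex, x, z - h)) / (2.0f * h);
    // Steepness of the height field = length of the gradient (dX, dZ)
    return (float) Math.sqrt(dX * dX + dZ * dZ);
}

// Assumed wrapper around the sampler; substitute your own sampling call
private static float noise(OpenSimplex2F simplex, float x, float z) {
    return (float) simplex.noise2(x, z);
}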
I need to calculate speed every 10 seconds or less (currently I am using the fused location API to get the location every 10 seconds). The problem is that the equipment is slow, and sometimes it reports the distance covered as zero.
I have tried using Location.distanceBetween(), but it also produces zeros even when the equipment is moving. I have tried calculating the distance with a formula, but sometimes the distance is so small that it comes out as zero.
Now I want to calculate an average speed. I want to save the points obtained over 1 minute (6 lat/long values), and then, every 10 seconds, calculate the average speed between them. Every 10 seconds I will add one point at the end and remove one point from the start. That will remove the possibility of zero.
Is there any formula that can calculate speed or distance from a set of lat/long values? Any better approach would be highly appreciated.
You can calculate the distance between two points that are close enough using simple geometry:
deltaLngMeters = R * cos(latitude) * deltaLongitudeRadians;
deltaLatMeters = R * deltaLatitudeRadians;
where the deltas are in radians: deltaLatitudeRadians = deltaLatitudeDegrees * pi / 180
Hence distance = sqrt(deltaLngMeters^2 + deltaLatMeters^2).
To sum up:
function distance(point1, point2) {
    var degToRad = Math.PI / 180;
    return R * degToRad * Math.sqrt(Math.pow(Math.cos(point1.lat * degToRad) * (point1.lng - point2.lng), 2) + Math.pow(point1.lat - point2.lat, 2));
}
If you have an array of six points, you can calculate the average speed:
points = [{lat: .., lng: ..}, ... ]; // 6 points
distancesSum = 0;
for (i = 0; i < points.length - 1; i++) {
    distancesSum += distance(points[i], points[i + 1]);
}
// average distance per 10-second interval; divide by the interval to get speed
return (distancesSum / (points.length - 1));
Yes, R is the Earth radius: R = 6371000; // meters
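In Java, the same approximation might look like this; a sketch, assuming points holds [lat, lng] pairs in degrees and consecutive fixes are a fixed interval apart (10 s in the question):

// Sketch: equirectangular distance and windowed average speed in Java.
static final double R = 6371000; // Earth radius, meters

static double distanceMeters(double[] p1, double[] p2) {
    double degToRad = Math.PI / 180;
    double dLng = Math.cos(p1[0] * degToRad) * (p1[1] - p2[1]);
    double dLat = p1[0] - p2[0];
    return R * degToRad * Math.sqrt(dLng * dLng + dLat * dLat);
}

static double averageSpeed(double[][] points, double intervalSeconds) {
    double sum = 0;
    for (int i = 0; i < points.length - 1; i++) {
        sum += distanceMeters(points[i], points[i + 1]);
    }
    // total distance over total elapsed time, in m/s
    return sum / ((points.length - 1) * intervalSeconds);
}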
You can use multithreading (Thread.sleep()) to run the calculation repeatedly every 10 seconds. You can see how here: https://beginnersbook.com/2013/03/multithreading-in-java/.
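A rough sketch of that loop (computeSpeed() stands in for your own calculation):

// Sketch: rerun the calculation every 10 seconds on a background thread
Thread worker = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        computeSpeed(); // placeholder for your own method
        try {
            Thread.sleep(10_000); // 10 seconds
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag so the loop exits
        }
    }
});
worker.start();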
For small distances (hopefully the device won't move at speeds above 1 km/s), Earth's surface can be treated as a plane. Then the latitude and longitude are the coordinates of the device on the Cartesian plane attached to Earth. Hence you can calculate the distance with this formula:
√(delta(longitude)^2 + delta(latitude)^2)
delta: difference
I am just messing around a bit in Processing, since I know it better than any other language, and stumbled upon this website: Custom 2d physics engine. So far so good. I am at the point where I have 2 rectangles colliding and I need to resolve the collision. According to the paper, I should use this code:
void ResolveCollision( Object A, Object B )
{
    // Calculate relative velocity
    Vec2 rv = B.velocity - A.velocity

    // Calculate relative velocity in terms of the normal direction
    float velAlongNormal = DotProduct( rv, normal )

    // Do not resolve if velocities are separating
    if(velAlongNormal > 0)
        return;

    // Calculate restitution
    float e = min( A.restitution, B.restitution)

    // Calculate impulse scalar
    float j = -(1 + e) * velAlongNormal
    j /= 1 / A.mass + 1 / B.mass

    // Apply impulse
    Vec2 impulse = j * normal
    A.velocity -= 1 / A.mass * impulse
    B.velocity += 1 / B.mass * impulse
}
This is written in C++, so I would need to port it to Java. And here I get stuck on two things. Thing 1: what does the author mean by "normal", and how do I get the "normal"? Thing 2 is these 3 lines of code:
Vec2 impulse = j * normal
A.velocity -= 1 / A.mass * impulse
B.velocity += 1 / B.mass * impulse
He creates a vector which has only 1 number? j * normal?
I don't really have a clear picture of what exactly happens, which does not really benefit me.
He is probably referring to this as "normal". So normal is a vector with 2 elements, since you are referring to a tutorial for 2D physics. And j * normal will multiply each element of normal by the scalar j.
normal, velocity and impulse are vectors with 2 elements for coordinates x, y. From the series of tutorials you are referring to, you can see normal defined here towards the end.
The "normal" vector at a point on the boundary of a 2D or 3D shape is the vector that is:
perpendicular to the boundary at that point;
has length 1; and
points outward from the shape instead of inward
The normal vector is the same all along a straight line (2D) or flat surface (3D), so you will also hear people talk about the "normal" of the line or surface in these cases.
The normal vector is used for all kinds of important calculations in graphics and physics code.
How exactly to calculate the normal vector for a point, line, or surface depends on what data structures you have representing the geometry of your objects.
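To make that concrete, here is a rough Java port of the tutorial's routine; a sketch, where Body and its fields are assumed names and normal is the unit collision normal pointing from A to B, as the tutorial defines it:

// Sketch of ResolveCollision ported to Java. Body and its fields (vx, vy, mass,
// restitution) are assumed names; Java has no operator overloading, so the
// vector math is written out per component.
class Body {
    float vx, vy;      // velocity components
    float mass;        // assumed non-zero here; infinite mass needs inverse-mass handling
    float restitution; // "bounciness", 0..1
}

static void resolveCollision(Body a, Body b, float normalX, float normalY) {
    // Relative velocity of B with respect to A
    float rvx = b.vx - a.vx;
    float rvy = b.vy - a.vy;

    // Its component along the collision normal (the dot product)
    float velAlongNormal = rvx * normalX + rvy * normalY;

    // Already separating: nothing to resolve
    if (velAlongNormal > 0) return;

    // Restitution of the collision
    float e = Math.min(a.restitution, b.restitution);

    // Impulse scalar j; "j * normal" scales the normal by j, giving a 2-component vector
    float j = -(1 + e) * velAlongNormal;
    j /= 1 / a.mass + 1 / b.mass;

    // Apply the impulse, weighted by inverse mass
    a.vx -= (1 / a.mass) * j * normalX;
    a.vy -= (1 / a.mass) * j * normalY;
    b.vx += (1 / b.mass) * j * normalX;
    b.vy += (1 / b.mass) * j * normalY;
}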
I'm attempting to write a method that determines the area of a polygon (complex or simple) on a sphere. I have a paper, written by a few guys at JPL, that more or less gives you the equations for these calculations.
The pdf file can be found here:
http://trs-new.jpl.nasa.gov/dspace/handle/2014/40409
The equation can be found on page 7, under "The Spherical Case - Approximation":
I also typed the equation in Word (image: Spherical_Case_Equation).
I need assistance with converting this equation into standard form (I think that's the right terminology). I've already done something similar for the planar case:
private double calcArea(Point2D[] shape) {
    int n = shape.length;
    double sum = 0.0;
    if (n < 3) return 0.0;
    // Shoelace formula; assumes the polygon is closed, i.e. shape[n-1] repeats
    // shape[0] (otherwise the wrap-around term is missing)
    for (int i = 0; i < n - 1; i++) {
        sum += (shape[i].getX() * shape[i+1].getY()) - (shape[i+1].getX() * shape[i].getY());
    }
    System.out.println(0.5 * Math.abs(sum));
    return 0.5 * Math.abs(sum);
}
I just need help with doing something similar for the spherical case. Any assistance will be greatly appreciated.
I haven't read the paper you referenced. The area of a spherical polygon is proportional to the angle excess.
Area = r²(Σ Aᵢ - (n - 2)π), where the Aᵢ are the corner angles and n is the number of vertices.
To compute the corner angles, you may start with the 3D coordinates of your points. So at corner i you have vertex p[i] = (x[i],y[i],z[i]) and adjacent vertices p[i-1] and p[i+1] (resp p[(i+n-1)%n] and p[(i+1)%n] to get this cyclically correct). Then the cross products
v₁ = p[i] × p[i-1]
v₂ = p[i] × p[i+1]
will be orthogonal to the planes spanned by the incident edges and the origin, which is the center of the sphere. Now, the angle between two vectors in space is given by
Aᵢ = arccos(⟨v₁,v₂⟩ / (‖v₁‖ * ‖v₂‖))
where ⟨v₁,v₂⟩ denotes the dot product between these two vectors which is proportional to the cosine of the angle, and ‖v₁‖ denotes the length of the first vector, likewise ‖v₂‖ for the second.
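In Java, a sketch of this (helper names are mine; vertices are given as 3D points on the sphere, and the polygon is assumed convex so every corner angle is below π):

// Sketch: area of a spherical polygon from the angle excess.
// p[i] = {x, y, z} are the polygon's vertices on a sphere centered at the origin.
static double sphericalArea(double[][] p, double r) {
    int n = p.length;
    double angleSum = 0.0;
    for (int i = 0; i < n; i++) {
        double[] v1 = cross(p[i], p[(i + n - 1) % n]); // normal of plane through p[i], p[i-1], origin
        double[] v2 = cross(p[i], p[(i + 1) % n]);     // normal of plane through p[i], p[i+1], origin
        double cosA = dot(v1, v2) / (norm(v1) * norm(v2));
        angleSum += Math.acos(Math.max(-1.0, Math.min(1.0, cosA))); // clamp against rounding
    }
    return r * r * (angleSum - (n - 2) * Math.PI);
}

static double[] cross(double[] a, double[] b) {
    return new double[] {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    };
}

static double dot(double[] a, double[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

static double norm(double[] a) {
    return Math.sqrt(dot(a, a));
}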
I want to find the fundamental frequency of the human voice in an Android application. I'm calculating it with this FFT class and this Complex class.
My code to calculate FFT is this:
public double calculateFFT(byte[] signal)
{
    final int mNumberOfFFTPoints = 1024;
    double mMaxFFTSample;
    double temp;
    Complex[] y;
    Complex[] complexSignal = new Complex[mNumberOfFFTPoints];
    double[] absSignal = new double[mNumberOfFFTPoints/2];

    // Convert 16-bit little-endian PCM bytes to doubles in [-1, 1)
    for(int i = 0; i < mNumberOfFFTPoints; i++){
        temp = (double)((signal[2*i] & 0xFF) | (signal[2*i+1] << 8)) / 32768.0F;
        complexSignal[i] = new Complex(temp, 0.0);
    }

    y = FFT.fft(complexSignal);

    // Find the bin with the largest magnitude in the first half of the spectrum
    mMaxFFTSample = 0.0;
    int mPeakPos = 0;
    for(int i = 0; i < (mNumberOfFFTPoints/2); i++)
    {
        absSignal[i] = Math.sqrt(Math.pow(y[i].re(), 2) + Math.pow(y[i].im(), 2));
        if(absSignal[i] > mMaxFFTSample)
        {
            mMaxFFTSample = absSignal[i];
            mPeakPos = i;
        }
    }

    // Convert the peak bin index to Hz (sampleRate is a field defined elsewhere in the class)
    return ((1.0 * sampleRate) / (1.0 * mNumberOfFFTPoints)) * mPeakPos;
}
and I get the same values as in How do I obtain the frequencies of each value in an FFT?
Is it possible to find the fundamental frequency from these values? Can someone help me?
Thanks in advance.
Fundamental frequency detection for human voice is an active area of research, as the references below suggest. Your approach must be carefully designed and must depend on the nature of the data.
For example if your source is a person singing a single note, with no music or other background sounds in the recording, a modified peak detector might give reasonable results.
If your source is generalized human speech, you will not get a unique fundamental frequency for anything other than the individual formants within the speech.
The graph below illustrates an easy detection problem. It shows the frequency spectrum of a female soprano holding a B-flat-3 (Bb3) note. The fundamental frequency of Bb3 is 233 Hz but the soprano is actually singing a 236 Hz fundamental (the left-most and highest peak.) A simple peak detector yields the correct fundamental frequency in this case.
The graph below illustrates one of the challenges of fundamental frequency detection, even for individually sung notes, let alone for generalized human speech. It shows the frequency spectrum of a female soprano holding an F4 note. The fundamental frequency of F4 is 349 Hz but the soprano is actually singing a 360 Hz fundamental (the left-most peak.)
However, in this case, the highest peak is not the fundamental, but rather the first harmonic at 714 Hz. Your modified peak detector would have to contend with these cases.
In generalized human speech, the concept of fundamental frequency is not really applicable to any subset of longer duration than each individual formant within the speech. This is because the frequency spectrum of generalized human speech is highly time-variant.
See these references:
Speech Signal Analysis
Human Speech Formants
Fundamental frequency detection
FFT, graphs, and audio data from Sooeet.com FFT calculator
Sounds like you've already chosen a solution (FFTs) to your problem. I'm no DSP expert, but I'd venture that you're not going to get very good results this way. See a much more detailed discussion here: How do you analyse the fundamental frequency of a PCM or WAV sample?
If you do choose to stick with this method:
Consider using more than 1024 points if you need accuracy at lower frequencies - remember a (spoken) human voice is surprisingly low.
Choose your sampling frequency wisely - apply a low-pass filter if you can. There's a reason that telephones have a bandwidth of only ~3 kHz; the rest is not truly necessary for hearing human voices.
Then, examine the first half of your output values, and pick the lowest big one: this is where the hard part is - there may be several. (Further peaks should appear at the harmonics - fixed multiples - of it too, but this is hard to check, as your buckets are not of a useful size here.) This is the range of frequencies within which the true fundamental hopefully lies; a rough sketch of this step follows below.
Again though, it may be worth thinking about the other ways of solving this, as an FFT might give you disappointing results in the real world.
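A rough sketch of that peak-picking step (the threshold and the names are mine; tune to taste):

// Sketch: return the lowest-frequency bin that is a local maximum and within
// relativeThreshold (e.g. 0.5) of the global maximum; -1 if none qualifies.
static int lowestStrongPeak(double[] magnitude, double relativeThreshold) {
    double max = 0;
    for (double m : magnitude) max = Math.max(max, m);
    for (int i = 1; i < magnitude.length; i++) {
        boolean localMax = magnitude[i] > magnitude[i - 1]
                && (i + 1 == magnitude.length || magnitude[i] >= magnitude[i + 1]);
        if (localMax && magnitude[i] >= relativeThreshold * max) {
            return i; // lowest strong peak; hopefully near the fundamental
        }
    }
    return -1;
}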
My code for autocorrelation is this:
public double calculateFFT(double[] signal)
{
    final int mNumberOfFFTPoints = 1024;
    final float sampleRate = 44100;
    double[] magnitude = new double[mNumberOfFFTPoints/2];
    DoubleFFT_1D fft = new DoubleFFT_1D(mNumberOfFFTPoints);
    double[] fftData = new double[mNumberOfFFTPoints*2];
    int max_index = -1;
    double max_magnitude = -1;

    // Pack the real samples into interleaved (re, im) pairs
    for (int i = 0; i < mNumberOfFFTPoints; i++){
        //fftData[2 * i] = buffer[i+firstSample];
        fftData[2 * i] = signal[i]; // to check
        fftData[2 * i + 1] = 0;
    }

    // Transform once, after the whole buffer is filled (not inside the loop)
    fft.complexForward(fftData);

    // Find the strongest bin in the first half of the spectrum
    for (int i = 0; i < mNumberOfFFTPoints/2; i++){
        magnitude[i] = Math.sqrt((fftData[2*i] * fftData[2*i]) + (fftData[2*i + 1] * fftData[2*i + 1]));
        if (max_magnitude < magnitude[i]){
            max_magnitude = magnitude[i];
            max_index = i;
        }
    }

    // Peak bin index converted to Hz
    return sampleRate * (double) max_index / (double) mNumberOfFFTPoints;
}
Is the return value my fundamental frequency?
An FFT maximum returns the peak bin frequency, which may not be the fundamental frequency, but rather the FFT result bin nearest an overtone or harmonic of the fundamental. A longer FFT using more data will give you more closely spaced FFT result bins, and thus a bin probably nearer the peak frequency. You might also be able to interpolate the peak if it is between bins. But if you are dealing with a signal that has strong harmonic content, such as voice or music, then you may need to use a pitch detection/estimation algorithm instead of an FFT peak algorithm.
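For instance, the between-bins interpolation mentioned above is commonly done with a quadratic (parabolic) fit through the peak bin and its neighbours; a sketch, with assumed names (mag is the magnitude spectrum, peak the index of the maximum bin, not at either edge of the array):

// Sketch: quadratic interpolation of an FFT magnitude peak.
// alpha, beta, gamma are the magnitudes just below, at, and just above the peak bin;
// assumes a genuine peak, so the denominator below is non-zero.
static double interpolatedPeakHz(double[] mag, int peak, double sampleRate, int fftSize) {
    double alpha = mag[peak - 1];
    double beta  = mag[peak];
    double gamma = mag[peak + 1];
    // Vertex of the parabola through the three points, as an offset in [-0.5, 0.5] bins
    double delta = 0.5 * (alpha - gamma) / (alpha - 2 * beta + gamma);
    return (peak + delta) * sampleRate / fftSize;
}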