Algorithm to make balloon fly to specified altitude/height - java

I'm looking for a way/algorithm to make a robot balloon fly to a certain altitude. The robot is controlled by a Raspberry Pi and has a propeller. Propeller speed can be set to several values (it uses PWM so technically 1024 different power outputs).
The balloon has a distance sensor pointing down, so it's possible to get the current height several times per second.
The only idea I've had so far is to measure the height constantly and set the speed based on the height left to travel. This doesn't seem like the best option though, and I can't figure out how to fit all the power outputs in.
Any ideas would be welcome. I'm using Java to code the project but any high-level algorithm/description would be great!
Thx,
Magic

There is a great "game" available that lets you try and play around with exactly that problem: Colobot (seems to be open source now). Create a Winged-Grabber (or shooter if you are more the FPS type of person) and try to get it to fly to a specific destination using only the altitude and motor controls.
In general the pseudo-code by MadConan is the way to go; however, the main task lies in writing a smart setPower function. In the end you need some smoothing function that reduces the power in relation to how close you are to your desired altitude, but the exact values of that function completely depend on your hardware and the weight of your final system.
Depending on how valuable and/or fragile your setup will be in the end, you might want to develop a learning system that takes the under-/overshoot as a basis to adjust the smoothing function while it runs. Make sure to take factors like up-/down-wind into your calculation.
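One standard way to implement such a smart setPower function is a PID controller: a term proportional to the offset, an integral term that compensates constant forces such as the balloon's weight, and a derivative term that damps overshoot. A minimal sketch, assuming the class and method names (the gains kp/ki/kd must be tuned to your hardware):

```java
// Minimal PID controller sketch for the altitude loop. All names and
// gain values are illustrative, not from the original question.
public class AltitudeController {
    private final double kp, ki, kd; // proportional, integral, derivative gains
    private double integral = 0.0;
    private double lastError = 0.0;

    public AltitudeController(double kp, double ki, double kd) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    /** Returns a PWM duty value clamped to the 0..1023 range. */
    public int update(double targetHeight, double currentHeight, double dtSeconds) {
        double error = targetHeight - currentHeight; // positive = too low
        integral += error * dtSeconds;               // accumulates steady offset
        double derivative = (error - lastError) / dtSeconds; // damps overshoot
        lastError = error;
        double power = kp * error + ki * integral + kd * derivative;
        return (int) Math.max(0, Math.min(1023, power));
    }
}
```

Start with ki = kd = 0 (pure proportional, exactly the pseudo-code below), then raise ki until the balloon stops hovering slightly below the target, and kd until oscillation dies out.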

Pseudo code (written here as a Java-style loop):
while (true) {
    double height = getHeight(); // from sensor

    // Get the difference between the current height and the target
    // height. Positive values mean too low, negative values mean
    // too high.
    double offset = TARGET_VALUE - height;

    // Set the power to some direct ratio of the offset. When the
    // balloon is at height 0, the offset is relatively high, so the
    // power will be set high. If the offset is negative, the power
    // will be reduced from the current power.
    setPower(offset); // I'll leave it up to you to figure out the ratio
}

Related

How do I get the distance from an object to another object with a camera?

My FRC (robotics) team is having issues with image processing, and tomorrow is our last testing day before competition.
The camera is facing downward and tilted in the x direction. We are trying to calculate the distance that an object is to a fixed point on the same surface. We only need to calculate the x distance (in inches).
Here's a diagram.
The object could be anywhere on the line with the fixed point.
Here is the view from the camera
The tape measure represents the line in the diagram.
I know it's low res and not the best picture; I took it just before I left today. The tape measure is where the object could be, and we only care about its x position.
Other info if needed:
Camera: Pixy
Focal length: 28mm (1.1024")
Sensor size: 0.25"
Height of camera from surface (the ground in our case): 8"
We always know the x position (in pixels) of the object, we just need to calculate the distance (in inches) that the object is from the fixed point.
If you have any other questions please ask. Thanks.
You are on the right track with your image of the tape measure. All you need to do is manually (from that image), determine the inches (from zero) for each x-position (pixel). Create a lookup table that you can use in the code.
When you determine the x-position of the object and the x-position of the fixed point, look up the inches for each of these x-positions and subtract to get the distance between the object and the fixed point.
This approach is super simple, but also depends on proper calibration of the system. In particular, the operational setup (height, angle, camera optics, etc.) has to exactly match the setup when the test image was taken that was used to create the lookup table.
A standard technique is to calibrate the system by taking and processing a calibration image whenever the operational setup might have changed. For example, you could place a grid pattern (e.g., with one-inch squares) in the field of view. The idea is that you code a calibration analysis that will determine the proper lookup table values based on that standard image.
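A minimal sketch of the lookup-table idea in Java. The calibration arrays here are hypothetical sample points you would read off your test image; linear interpolation fills in between them:

```java
import java.util.Arrays;

// Pixel-to-inches lookup with linear interpolation between
// calibration samples. Sample values are illustrative.
public class DistanceLookup {
    // x-pixel positions and the corresponding inches, both ascending.
    private final int[] pixels;
    private final double[] inches;

    public DistanceLookup(int[] pixels, double[] inches) {
        this.pixels = pixels;
        this.inches = inches;
    }

    /** Inches for a given x-pixel, linearly interpolated. */
    public double toInches(int x) {
        int i = Arrays.binarySearch(pixels, x);
        if (i >= 0) return inches[i];          // exact calibration point
        int hi = -i - 1;                       // first sample pixel > x
        if (hi == 0) return inches[0];
        if (hi == pixels.length) return inches[pixels.length - 1];
        double t = (double) (x - pixels[hi - 1]) / (pixels[hi] - pixels[hi - 1]);
        return inches[hi - 1] + t * (inches[hi] - inches[hi - 1]);
    }

    /** Distance in inches between the object and the fixed point. */
    public double distance(int objectX, int fixedX) {
        return Math.abs(toInches(objectX) - toInches(fixedX));
    }
}
```

Because the camera is tilted, inches per pixel will not be constant across the image, which is exactly why the interpolated table beats a single scale factor.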

Removing lakes from diamond square map

I implemented the diamond square algorithm in Java, but I'm not entirely satisfied with the results as a height map. It forms a lot of "lakes" - small areas of low height. The heights are generated using the diamond square algorithm, then normalized. In the example below, white = high, black = low and blue is anything below height 15: a placeholder for oceans.
This image shows the uncolored height map
How can I smooth the terrain to reduce the number of lakes?
I've investigated a simple box blurring function (setting each pixel to the average of its neighbors), but this causes strange artifacts, possibly because of the square step of the diamond square.
Would a different (perhaps gaussian) blur be appropriate, or is this a problem with my implementation? This link says the diamond square has some inherent issues, but these don't seem to be regularly spaced artifacts, and my heightmap is seeded with 16 (not 4) values.
Your threshold algorithm needs to be more logical. You need to actually specify what is to be removed in terms of size, not just height. Basically the simple threshold sets "sea level" and anything below this level will be water. The problem is that because the algorithm used to generate the terrain does so in a haphazard way, small areas can be filled by water.
To fix this you need to essentially determine the size of regions of water and only allow larger areas.
One simple way to do this is to not allow single "pixels" to represent water. Essentially either do not set them as water (you could use a bitmap where each bit represents whether there is water or not) or simply raise the level up. This should get most of the single pixels out of your image and clear it up quite a bit.
You can extend this for N pixels (essentially representing area). Basically you have to identify the size of the regions of water by counting connected pixels. The problem with this is that it allows long, thin regions (which could represent rivers).
So it is better to take it one step further and count the width and length separately.
e.g., to detect a simple single pixel
if map[i,j] < threshold && (map[i-1,j-1] > threshold && ... && map[i+1,j+1] > threshold) then Area = 1
will detect isolated pixels.
You can modify this to detect larger groups and write a generic algorithm to measure any size of potential "oceans"... then it should be simple to generate any height map with any minimum (and maximum) size oceans you want. The next step is to "fix up" (or use a bitmap for) the parts of the map that may be below sea level but did not convert to actual water, since we generally expect things below sea level to contain water. By using a bitmap you can allow for water in water or water in land, etc.
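A sketch of the region-size idea using a flood fill (the names and the 4-connectivity choice are illustrative): every connected water region is measured, and regions smaller than a minimum size are raised back up to sea level.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Removes small lakes by measuring connected water regions.
// Cells below `threshold` count as water (4-connectivity); any
// region smaller than `minSize` cells is raised to sea level.
public class LakeRemover {
    public static void removeSmallLakes(double[][] map, double threshold, int minSize) {
        int h = map.length, w = map[0].length;
        boolean[][] seen = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (seen[y][x] || map[y][x] >= threshold) continue;
                // Flood-fill one connected water region.
                List<int[]> region = new ArrayList<>();
                ArrayDeque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[]{y, x});
                seen[y][x] = true;
                while (!stack.isEmpty()) {
                    int[] c = stack.pop();
                    region.add(c);
                    int[][] nbrs = {{c[0] - 1, c[1]}, {c[0] + 1, c[1]},
                                    {c[0], c[1] - 1}, {c[0], c[1] + 1}};
                    for (int[] n : nbrs) {
                        if (n[0] >= 0 && n[0] < h && n[1] >= 0 && n[1] < w
                                && !seen[n[0]][n[1]] && map[n[0]][n[1]] < threshold) {
                            seen[n[0]][n[1]] = true;
                            stack.push(n);
                        }
                    }
                }
                // Too small to keep as water: raise to sea level.
                if (region.size() < minSize) {
                    for (int[] c : region) map[c[0]][c[1]] = threshold;
                }
            }
        }
    }
}
```

Checking a region's bounding box as well as its cell count would let you keep long, thin regions as rivers while still dropping isolated puddles.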
If you use smoothing, it might work just as well, but you will still always run into such problems. Smoothing reduces the size of the "oceans", but a large ocean might turn into a small one, and a small one eventually into a single pixel. Depending on the overall average of the map, you might end up with all water or all land after enough iterations. Blurring also reduces the detail of the map.
The good news is that if you design your algorithm with controllable parameters, you can control things like how many oceans are in the map, how large they are, how square they are (or how circular, if you want), how much total water can be used, etc.
The more effort you put into this, the more accurately you can simulate reality. Ultimately, if you want to be infinitely complex, you can take into account how terrains are actually formed, etc. But, of course, the whole point of these simple algorithms is to allow them to be computable in reasonable amounts of time.

Trilateration of 3 Calculated Distances from WiFI Strength Signals

I am using Android to scan Wi-Fi APs at regular intervals. From each AP I get the signal strength (RSSI in dBm), and I calculate the distance with this formula:
public double calculateDistance(double levelInDb, double freqInMHz) {
    double exp = (32.44 - (20 * Math.log10(freqInMHz)) + Math.abs(levelInDb)) / 20.0;
    return Math.pow(10.0, exp);
}
That is working fine, so I have three or more distances. Now I need to draw on a map all the APs with their fixed locations. I did some reading on the internet and found trilateration (the process of determining absolute or relative locations of points by measurement of distances), but it looks like I need at least one point (x,y). At this moment I just have the distances calculated from the signal strength, which can be taken as the radii of the different circumferences.
I am confused because I don't have any concrete point (x,y) from which to start calculating the location of the mobile phone.
I just need to know if there is a way to calculate that point, whether I can assume an initial point, or whether I am missing something.
Thank you, I really appreciate any help.
As Paulw11 mentioned in his comment, you have to know the exact position of all of the APs, or at least of one of them and the relative positions of the other two to that one. Then the trilateration procedure will produce one circle for each AP, and the intersection of these circles will be the device.
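Once the AP coordinates are known, 2-D trilateration reduces to a small linear system: subtracting the circle equations pairwise eliminates the quadratic terms. A minimal sketch under that assumption (names are illustrative; with noisy RSSI distances a least-squares fit over more than three APs would be more robust):

```java
// Solves the linearized 2-D trilateration system for three APs.
// x[i], y[i] are the known AP coordinates, r[i] the estimated
// distances. Returns {x, y} of the device, or null if the APs
// are (nearly) collinear.
public class Trilateration {
    public static double[] locate(double[] x, double[] y, double[] r) {
        // Subtract circle 0 from circle 1, and circle 1 from circle 2:
        // a*px + b*py = c  and  d*px + e*py = f
        double a = 2 * (x[1] - x[0]), b = 2 * (y[1] - y[0]);
        double c = r[0] * r[0] - r[1] * r[1]
                 - x[0] * x[0] + x[1] * x[1] - y[0] * y[0] + y[1] * y[1];
        double d = 2 * (x[2] - x[1]), e = 2 * (y[2] - y[1]);
        double f = r[1] * r[1] - r[2] * r[2]
                 - x[1] * x[1] + x[2] * x[2] - y[1] * y[1] + y[2] * y[2];
        double det = a * e - b * d;
        if (Math.abs(det) < 1e-9) return null; // collinear APs: no unique fix
        // Cramer's rule for the 2x2 system.
        return new double[]{(c * e - b * f) / det, (a * f - c * d) / det};
    }
}
```

Note that the coordinate origin is arbitrary: you can declare one AP to be (0,0) and express the other APs relative to it, which is exactly the "at least one of them and the relative position of the other two" requirement above.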
Keep in mind that trilateration with Wi-Fi will produce an area instead of a point, which results in an area of uncertainty with an accuracy of 2-3 m at best. And from what I can see, you are calculating the distance based on the free-space loss model, which is a generic model and does not hold for every environment, so this assumption will make your estimation even worse.
A good approach is to make a radio map of your area first with Wi-Fi measurements from the device, and then rely on these Wi-Fi fingerprints. In other words, prepare a training period first. There are many tutorials on that.

Approximating a fitting image size

The solution I am aiming for should select the best-fitting image size from a given number of sizes.
Given a number of rather random resolutions, I would like to find an image sized as close as possible to my preferred size.
Suppose I would like to use an image sized width x height (preferredImageSize).
Example: 320x200
Suppose I have the following image sizes at my disposal (availableImageSize) width1 x height1, width2 x height2, ... (maybe up to 10 different sizes).
Examples: 474x272, 474x310, 264x150, 226x128, 640x365, 474x410, 480x276, 256x144, 160x90, 320x182, 640x365, 192x108, 240x137, 480x276
For developing some generic approach to make the preferredImageSize variable, I am trying to find a solution that computes rather quickly but also results in something that looks good on the screen.
I define looks good on the screen as an image that is:
hardly upscaled
as close to the given aspect-ratio (preferredImageSize.width / preferredImageSize.height) as possible
may be heavily downscaled
may be cropped/stretched in very small amounts
My initial (rather trivial) approach:
Run through the available image sizes once and find the smallest width delta (abs(preferredImageSize.width - availableImageSize.width)). The image with that smallest delta is then chosen (bestFitWidth).
That is certainly a way to solve the issue, but it definitely does not comply with my looks good on the screen hopes.
Any hints, no matter if text, source or links, would be awesome. Oh, and if you think that my requirements (aka hopes) are already leading in the wrong direction, go ahead, let me know...
Edit: added cropping and stretching as options - which, I am afraid will make the issue even harder to solve. So if needed leave it out of the equation.
Simple "if/then" approach:
I would do two things:
Since you would rather not upscale, but are OK with downscaling (which I find a good choice), NEVER use a source image that is smaller than your target, unless none is available.
Since "heavy" downscaling is OK, I would try to find an image that matches the aspect ratio as closely as possible, starting with the smallest acceptable image and going to progressively larger images.
To put it together, first throw out all images from the list that are smaller than your target. Then, start with the smallest image left and check its aspect ratio against your target. If the mismatch is acceptable (which you need to quantify), use the image, otherwise go to the next bigger one. If you don't find any acceptable ones, use the one with the best match.
If you've already thrown out all images as smaller than your target, you will likely end up with a bad-looking image either way, but you should then try out whether it is worse to use an image that requires more upscaling, or one that is a worse aspect ratio match.
One other thing you need to think about is whether you want to stretch or crop the images to match your target aspect ratio.
More complex quantitative approach:
The most flexible approach, though, would be to define yourself a "penalty" function that depends on the size mismatch and the aspect ratio mismatch and then find the source image that gives you the lowest "penalty". This is what you have currently done and you've defined your penalty function as abs(preferredImageSize.width - availableImageSize.width). You could go with something a little more complex, like for example:
width_diff = preferredImageSize.width - availableImageSize.width;
height_diff = preferredImageSize.height - availableImageSize.height;
// Positive diff = source smaller than target (upscaling needed).
// Negate the diff on the downscale branch so penalties stay positive.
if (width_diff > 0) width_penalty = upscale_penalty * width_diff;
else width_penalty = downscale_penalty * -width_diff;
if (height_diff > 0) height_penalty = upscale_penalty * height_diff;
else height_penalty = downscale_penalty * -height_diff;
aspect_penalty = Math.abs((double) preferredImageSize.width / preferredImageSize.height -
        (double) availableImageSize.width / availableImageSize.height) * stretch_penalty;
total_penalty = width_penalty + height_penalty + aspect_penalty;
Now you can play with the 3 numbers upscale_penalty, downscale_penalty, and stretch_penalty to give these three quality reducing operations different importance. Just try a couple of combinations and see which works best.
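Putting the pieces together, a self-contained sketch of the penalty-based selection (the three weight values are made-up starting points to tune, and the Size class is just a stand-in for whatever size type you already use):

```java
import java.util.List;

// Penalty-based best-fit selection. Weight values are illustrative.
public class ImagePicker {
    static final double UPSCALE_PENALTY = 4.0;    // upscaling hurts most
    static final double DOWNSCALE_PENALTY = 1.0;  // heavy downscaling is OK
    static final double STRETCH_PENALTY = 500.0;  // aspect mismatch, scaled up

    static class Size {
        final int width, height;
        Size(int width, int height) { this.width = width; this.height = height; }
    }

    static double penalty(Size preferred, Size available) {
        double widthDiff = preferred.width - available.width;
        double heightDiff = preferred.height - available.height;
        // Positive diff = source smaller than target (upscaling needed).
        double widthPenalty = widthDiff > 0 ? UPSCALE_PENALTY * widthDiff
                                            : DOWNSCALE_PENALTY * -widthDiff;
        double heightPenalty = heightDiff > 0 ? UPSCALE_PENALTY * heightDiff
                                              : DOWNSCALE_PENALTY * -heightDiff;
        double aspectPenalty = Math.abs((double) preferred.width / preferred.height
                - (double) available.width / available.height) * STRETCH_PENALTY;
        return widthPenalty + heightPenalty + aspectPenalty;
    }

    /** Returns the candidate with the lowest total penalty. */
    static Size bestFit(Size preferred, List<Size> candidates) {
        Size best = null;
        double bestPenalty = Double.MAX_VALUE;
        for (Size s : candidates) {
            double p = penalty(preferred, s);
            if (p < bestPenalty) { bestPenalty = p; best = s; }
        }
        return best;
    }
}
```

With these example weights, a 320x200 target picks 320x182 out of the example list, since it needs only mild upscaling and is close in aspect ratio.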

Troubleshooting Image Recognition Neural Network Issues

Thanks in advance for reading this.
So I'm attempting to write a neural network for recognizing a specific logo within an image. I basically have a sliding window of a specific aspect ratio that will scale the current window to the expected size of the input. The window slides around pumping input into the network, and looking at the output to determine if what's in the window is the logo that I'm looking for. In that case, it will draw a box around the edge of the window, outlining the logo.
My problem resides in the fact that the neural network reports far too high a confidence for other parts of the image, and ends up drawing so many boxes all over the place that it's impossible to see much of the original image. So there is obviously something wrong with the neural network.
For inputting the image, I have tried unrolling it as grayscale and as color. It doesn't work either way. I've tried variations on the input size as well. When it gets too small the results get worse, but even with a 57x22x3 colored unrolled input, it still fails.
So I don't think that's the issue either. My neural network has X input neurons (where X is width * height * num_colors). I have one hidden layer, also of size X, and finally, I have 1 output neuron in the output layer, outputting a value between 0.0 and 1.0, representing the total confidence.
I have 17 positive training examples (ideal output is a 1.0), and 19 negative training examples (ideal output is a 0.0). After training, the network reports nearly equal confidence of ~0.95 for all positive, and nearly equal confidence of ~0.013 for all negative examples.
My theory is the number of training examples I have is far too small, and I should collect/generate more. I had only 5 of each initially, but I didn't see any gains from going up to 17+ either.
I should note I've tried using Encog and Neuroph, and both have extremely similar results. I'm using backpropagation for learning, and have tried using learning rates between 0.3 and 0.7, as well as momentum values between 0.0 and 0.8. Regardless, the result is almost always the same.
Thank you for your help.
Usually neural networks do require a lot of samples for learning, but I can't say for sure that this is your problem.
Maybe a better idea for doing the matching is to find a percent match for each pixel in your pattern vs. the pixel where the pattern could be in the given image (for example, using a sliding-window style).
If you have an array of pixel colours to match your pattern against:
0xFF0000, 0x00FF00, 0x0000FF
and a pattern with these pixel colours:
0xEE0000, 0x00FF00, 0x0101DE
You can get a delta in % for each pixel, then average them. There are multiple ways you could average (weighted averages, exponentially weighted averages, etc.). At the end you get a percent match for the entire pattern: how well the pattern matches the current pixels in the sliding window. You can keep track of the maximum score so that at the end you display only one box (the one with the highest probability of matching the pattern).
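A minimal sketch of that per-pixel percent match for packed 0xRRGGBB integers, using a plain unweighted average per channel and per pixel (names are illustrative):

```java
// Per-pixel percent match between a pattern and a sliding-window
// crop, both given as packed 0xRRGGBB pixel arrays of equal length.
public class PatternMatcher {
    /** Match score in [0, 1]; 1.0 means identical. */
    public static double matchScore(int[] pattern, int[] window) {
        double total = 0.0;
        for (int i = 0; i < pattern.length; i++) {
            total += pixelMatch(pattern[i], window[i]);
        }
        return total / pattern.length; // unweighted average over pixels
    }

    static double pixelMatch(int a, int b) {
        double sum = 0.0;
        for (int shift = 0; shift <= 16; shift += 8) {
            int ca = (a >> shift) & 0xFF; // extract one channel
            int cb = (b >> shift) & 0xFF;
            sum += 1.0 - Math.abs(ca - cb) / 255.0; // 1 = identical channel
        }
        return sum / 3.0; // average over B, G, R
    }
}
```

For the example above, 0xFF0000 vs. 0xEE0000 differs only slightly in the red channel, so the score stays close to 1.0; scan the window over the image and keep the position with the maximum score.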
You can create a neuron for each pixel, and the dendrites can be different parts of the colour hex number - maybe a dendrite for each of R, G, B. In the example I gave above I took one dendrite for the whole colour integer.
Try using a SOM/LVQ neural network for classifying the sliding-window input. This MATLAB post should give you some ideas: http://scriptbucket.wordpress.com/2012/09/21/image-classification-using-matlab-somlvq/