I'm working with the Camera2 API and I need to get the focal length property.
I found one way to get it from the camera characteristics:
float[] f = characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
for (float d : f) {
    Logger.logGeneral("LENS_INFO_AVAILABLE_FOCAL_LENGTHS : " + d);
}
but this approach returns a value like 3.8 or something close to it, depending on the device, while the value should be approximately 30-35...
Then I found another approach. When I took a photo, I opened the photo's properties and saw exactly what I was getting and what I actually need.
I tried to get this property directly from the image:
private JSONObject getJsonProperties() {
    JSONObject properties = new JSONObject();
    final ExifInterface exif = getExifInterface();
    final String[] tagProperties = {TAG_DATETIME, TAG_DATETIME_DIGITIZED, TAG_EXPOSURE_TIME,
            TAG_FLASH, TAG_FOCAL_LENGTH, TAG_GPS_ALTITUDE, TAG_GPS_ALTITUDE_REF, TAG_GPS_DATESTAMP,
            TAG_GPS_LATITUDE, TAG_GPS_LATITUDE_REF, TAG_GPS_LONGITUDE, TAG_GPS_LONGITUDE_REF,
            TAG_GPS_PROCESSING_METHOD, TAG_GPS_TIMESTAMP, TAG_IMAGE_LENGTH, TAG_IMAGE_WIDTH, TAG_ISO, TAG_MAKE,
            TAG_MODEL, TAG_ORIENTATION, TAG_SUBSEC_TIME, TAG_SUBSEC_TIME_DIG, TAG_SUBSEC_TIME_ORIG, TAG_WHITE_BALANCE};
    for (String tag : tagProperties) {
        try {
            properties.put(tag, exif.getAttribute(tag));
        } catch (JSONException e) {
            e.printStackTrace();
        }
    }
    return properties;
}
private ExifInterface getExifInterface() {
    ExifInterface exif = null;
    try {
        exif = new ExifInterface(ImageSaver.getImageFilePath());
    } catch (IOException e) {
        e.printStackTrace();
    }
    return exif;
}
I combined the full set of properties and of course included the one I really need, TAG_FOCAL_LENGTH. But again I get a puzzling value like 473/100... I don't know what it means... The value should be approximately 30-35...
What am I doing wrong?
Eventually I found the answer.
According to @Aleks G:
The 35mm equivalent would be your obtained focal length multiplied by a certain number, known as the crop factor. This factor differs significantly between manufacturers and even models. For example, for many Nikon DSLR cameras it's 1.5; for many Canon DSLRs it's 1.6; for the Apple iPhone 5 it's 6.99; for the Samsung Galaxy S3 it's 7.6; and so on.
To the best of my knowledge, Android does not provide an API to determine the crop factor of the device's camera.
There's one trick you can try using though. Once you take a photo with the device's camera, some devices will populate FocalLengthIn35mmFilm exif tag - so you can try using this technique. Not all phones do this though, for example, my Huawei doesn't.
Some other devices will populate two fields for Focal Length - the first being the actual focal length used, the second being the 35mm equivalent. Again, not all do. The same Huawei phone doesn't, but my iPhone 5 does.
To make the long story short, there's no guaranteed way of figuring out the 35mm equivalent from the focal length you obtain on the device.
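To illustrate with the numbers from this question: 473/100 is an EXIF rational, i.e. 4.73 mm of actual focal length, and multiplying it by a crop factor like the ones quoted above yields the 35mm equivalent. A rough sketch (the crop factor is just the Galaxy S3 example from the answer; it cannot be queried from the API):
double focalMm = 473.0 / 100.0;           // EXIF rational "473/100" = 4.73 mm
double cropFactor = 7.6;                  // example value (Samsung Galaxy S3); device-specific
double equiv35mm = focalMm * cropFactor;  // ~35.9 mm - right in the expected 30-35 range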
And eventually I just added this tag to my list of properties:
TAG_FOCAL_LENGTH_IN_35MM_FILM
And now, if it is available, I can read it.
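A minimal sketch of reading both tags (the path is a placeholder, and this assumes an ExifInterface version that includes TAG_FOCAL_LENGTH_IN_35MM_FILM, e.g. API 24+ or the support library):
try {
    ExifInterface exif = new ExifInterface("/path/to/photo.jpg"); // placeholder path
    // The raw focal length is a rational, e.g. "473/100" = 4.73 mm; getAttributeDouble parses it.
    double focalMm = exif.getAttributeDouble(ExifInterface.TAG_FOCAL_LENGTH, -1);
    // The 35mm equivalent, where the device writes it; 0 means the tag was absent.
    int focal35mm = exif.getAttributeInt(ExifInterface.TAG_FOCAL_LENGTH_IN_35MM_FILM, 0);
    Logger.logGeneral("FOCAL_LENGTH: " + focalMm + " mm, 35mm equivalent: " + focal35mm + " mm");
} catch (IOException e) {
    e.printStackTrace();
}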
I am trying to do something along the lines of signal-strength triangulation using cell towers on Android. I'm using Android Studio and Java. I've found a useful CSV with cell tower latitudes and longitudes and written some fairly trivial code to parse it; all I need now is each cell tower's MCC, MNC, LAC, and cell ID, so I can search the CSV and find the latitude and longitude. I'm using the TelephonyManager class and .getAllCellInfo(), like so:
tel = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
InputStream inputStream = getResources().openRawResource(R.raw.cells);
CsvFile csvFile = new CsvFile(inputStream);
List<String[]> res = csvFile.read();
List<CellInfo> cellsTemp = tel.getAllCellInfo();
for (CellInfo ci : cellsTemp) {
    if (ci instanceof CellInfoLte) {
        Log.d("Cells: ", ((CellInfoLte) ci).getCellIdentity().toString());
        this.cellsLte.add((CellInfoLte) ci);
        this.towers.add(new cellTower((CellInfoLte) ci, res));
    }
}
However, when I log the cellIdentity of those cells, I get this:
CellIdentityLte:{ mCi=2147483647 mPci=313 mTac=2147483647 mEarfcn=1300 mBandwidth=2147483647 mMcc=null mMnc=null mAlphaLong= mAlphaShort=}
As you can see, the MCC and MNC are null, and the cell ID and location code are 2147483647, or Integer.MAX_VALUE, which, as I understand it, is used when they are for whatever reason unavailable.
I have the ACCESS_FINE_LOCATION and READ_PHONE_STATE runtime permissions, and I've added them to the manifest file as well. I've also tried logging the objects directly from tel.getAllCellInfo(), with the same exact result.
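For reference, a minimal sketch of the runtime-permission check described above (the request code is arbitrary):
// Sketch of the runtime check described above; the request code (1) is arbitrary.
if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
        != PackageManager.PERMISSION_GRANTED
        || ContextCompat.checkSelfPermission(this, Manifest.permission.READ_PHONE_STATE)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.ACCESS_FINE_LOCATION,
                         Manifest.permission.READ_PHONE_STATE}, 1);
} else {
    List<CellInfo> cells = tel.getAllCellInfo(); // only called once both are granted
}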
This is the declaration for tel and towers:
public class MainActivity extends AppCompatActivity {
    protected RequestQueue requestQueue;
    protected TelephonyManager tel;
    protected List<CellInfoLte> cellsLte = new ArrayList<CellInfoLte>();
    protected List<cellTower> towers = new ArrayList<cellTower>();
And cellTower is just a sort of wrapper class that contains some information I'm going to need to calculate distance later. It also contains the cellInfo of the tower.
I am also aware that there are APIs that can do this whole thing automatically, but I need to do this myself, as a sort of proof of concept.
I'm running this on a LGE LG-H870, but I've also tried it on a Xiaomi Redmi 8, and had the same issue.
I am an (almost) complete beginner with Android Studio, but I think I have a perfectly decent understanding of Java. Is there any way at all to get around this? Any help would be greatly appreciated.
I've made an app that implements augmented reality based on POIs, and I have all the functionality working for one POI, but I would now like to be able to put in multiple points. Can anyone give me advice on how to do this? Can I create an array of POIs? I've posted my relevant code below, but I don't really know where to go from here.
private void setAugmentedRealityPoint() {
    homePoi = new AugmentedPOI(
            "Home",
            "Latitude, longitude",
            28.306802, -81.601358
    );
}
This is how it's currently set, and I then go on to use it in other areas, as shown below:
public double calculateAngle() {
    double dX = homePoi.getPoiLatitude() - myLatitude;
    double dY = homePoi.getPoiLongitude() - myLongitude;
    // ... rest of the method omitted
}
and here:
private boolean isWithinDistance(double myLatitude, double myLongitude) {
    Location my1 = new Location("One");
    my1.setLatitude(myLatitude);
    my1.setLongitude(myLongitude);
    Location target = new Location("Two");
    target.setLatitude(homePoi.getPoiLatitude());
    target.setLongitude(homePoi.getPoiLongitude());
    double range = my1.distanceTo(target);
    double zone = 20;
    return range < zone;
}
Any help would be appreciated.
Using a List would be a smart idea. You could add all the entries to it in code, or you could pull them in from a JSON file. When you're rendering them, you can check whether each one is in range, as in the sketch below.
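A minimal sketch, assuming the AugmentedPOI class from the question, the usual java.util imports, and an isWithinDistance variant that takes the POI to test as a parameter:
// Sketch only: AugmentedPOI is the class from the question; isWithinDistance
// is the question's method, generalized to accept the POI being tested.
private final List<AugmentedPOI> pois = new ArrayList<>();

private void setAugmentedRealityPoints() {
    pois.add(new AugmentedPOI("Home", "Latitude, longitude", 28.306802, -81.601358));
    pois.add(new AugmentedPOI("Work", "Latitude, longitude", 28.538336, -81.379234)); // hypothetical second POI
}

private List<AugmentedPOI> poisInRange(double myLatitude, double myLongitude) {
    List<AugmentedPOI> inRange = new ArrayList<>();
    for (AugmentedPOI poi : pois) {
        if (isWithinDistance(myLatitude, myLongitude, poi)) {
            inRange.add(poi);
        }
    }
    return inRange;
}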
If you have a lot of these POIs, you should divide them into smaller and smaller regions, and only load what you need. For example, structure them like this:
- CountryA
+ County 1
* POI
* POI
- CountryB
+ County 1
* POI
* POI
+ County 2
* POI
Get the country and county of the user, and only load what you really need. I assume this is a multiplayer game, so I'll share some of my code.
On the server side, I have 3 objects: Country, County and POI.
First I discover all countries on the disk and make an object for each. Inside my Country object I have a list of all its counties, and inside my County object I have a list of POIs (a sketch of this structure follows below). When a player joins, they send a packet with their Country and County, and I can select the appropriate POIs for them. Storing them in smaller regions is essential, or your server will have a hard time going through all of the POIs for every player.
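A hypothetical sketch of that hierarchy (class and field names are illustrative, not the actual code from the links below):
// Illustrative only - the real implementation is in the linked Server.java.
class POI {
    String name;
    double latitude, longitude;
}

class County {
    String name;
    List<POI> pois = new ArrayList<>();
}

class Country {
    String name;
    List<County> counties = new ArrayList<>();
}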
Here is my method for discovering data: Server.java#L311-L385
Code for selecting POIs for a player: Server.java#L139-L181
And how you can render it: PlayScreen.java#L209-L268
You need to port it to your own app, and I'm probably horrible at explaining, but I hope you got something out of it.
I want to get the frame rate of a video, but I don't want to use the FFmpeg or JavaCV libraries.
Is it possible to get the frame rate of a video in Android?
I read the KEY_FRAME_RATE documentation; it says: "Specifically, MediaExtractor provides an integer value corresponding to the frame rate information of the track if specified and non-zero."
But I don't know how to use it.
If you know how to get the frame rate from a video, please answer here.
MediaExtractor extractor = new MediaExtractor();
int frameRate = 24; // fallback default
try {
    // Adjust the data source as required (file, URI, etc.)
    extractor.setDataSource(...);
    int numTracks = extractor.getTrackCount();
    for (int i = 0; i < numTracks; ++i) {
        MediaFormat format = extractor.getTrackFormat(i);
        String mime = format.getString(MediaFormat.KEY_MIME);
        if (mime.startsWith("video/")) {
            if (format.containsKey(MediaFormat.KEY_FRAME_RATE)) {
                frameRate = format.getInteger(MediaFormat.KEY_FRAME_RATE);
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    // Release resources
    extractor.release();
}
Note: run the above code on a worker thread.
Update 1: What KEY_FRAME_RATE is, and why it may be absent
KEY_FRAME_RATE
Added in API level 16
String KEY_FRAME_RATE
A key describing the frame rate of a video format in frames/sec. The associated value is normally an integer when the value is used by the platform, but video codecs also accept float configuration values. Specifically, MediaExtractor provides an integer value corresponding to the frame rate information of the track if specified and non-zero. Otherwise, this key is not present. MediaCodec accepts both float and integer values. This represents the desired operating frame rate if the KEY_OPERATING_RATE is not present and KEY_PRIORITY is 0 (realtime). For video encoders this value corresponds to the intended frame rate, although encoders are expected to support variable frame rate based on buffer timestamp. This key is not used in the MediaCodec input/output formats, nor by MediaMuxer.
Constant Value: "frame-rate"
Update 2: The code now checks whether KEY_FRAME_RATE is present before reading it, to guard against it being missing. See above.
I was playing around with OpenCV and I decided to test out TutorialsPoint's example of a Robinson mask. I copied the code and used a grayscale JPG.
- Unfortunately, the output image was completely black.
- I tried commenting out what appear to be two additional directional filters; the image still came out black.
- I'm using Java 1.8 with OpenCV 3.
try {
    int kernelSize = 9;
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Mat source = Imgcodecs.imread("grayScale2.jpg", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
    Mat destination = new Mat(source.rows(), source.cols(), source.type());
    Mat kernel = new Mat(kernelSize, kernelSize, CvType.CV_32F) {
        {
            put(0, 0, -1);
            put(0, 1, 0);
            put(0, 2, 1);
            put(1, 0, -2);
            put(1, 1, 0);
            put(1, 2, 2);
            put(2, 0, -1);
            put(2, 1, 0);
            put(2, 2, 1);
        }
    };
    Imgproc.filter2D(source, destination, -1, kernel);
    Imgcodecs.imwrite("robinsonMaskExample.jpg", destination);
} catch (Exception e) {
    System.out.println("Error: " + e.getMessage());
}
The code that you linked to is a bit flawed. It declares the kernel size to be 9 x 9, but the kernel itself is clearly 3 x 3. As such, it's putting the kernel coefficients in the top-left corner of the kernel while the rest of the kernel is 0. This is probably the reason why you're not seeing the right results. The put method puts a number at the given row and column of the matrix. As you can see in the code that defines the kernel, it's putting things in rows 0, 1, 2 and columns 0, 1, 2 - which is implicitly a 3 x 3 kernel - but the declared size of the kernel is actually 9 x 9.
As such, please uncomment those lines you commented out, as it's important to define the entire edge detection mask properly. Also, the post is wrong about which edge detection mask it's using: that's actually the Sobel operator. I've never heard of a mask called "Robinson" before, but I have heard of a Roberts-Cross mask, which is a 2 x 2 kernel that looks like this:
Gx = [ +1   0 ]        Gy = [  0  +1 ]
     [  0  -1 ]             [ -1   0 ]
(Source: Wikipedia)
Therefore, the simplest fix is to change the kernel size so that it's 3. So simply change this:
int kernelSize = 9;
To this:
int kernelSize = 3;
For a broader picture:
try {
    int kernelSize = 3; // Change
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Mat source = Imgcodecs.imread("grayScale2.jpg", Imgcodecs.CV_LOAD_IMAGE_GRAYSCALE);
    Mat destination = new Mat(source.rows(), source.cols(), source.type());
    Mat kernel = new Mat(kernelSize, kernelSize, CvType.CV_32F) {
        {
            put(0, 0, -1);
            put(0, 1, 0);
            put(0, 2, 1);
            put(1, 0, -2);
            put(1, 1, 0);
            put(1, 2, 2);
            put(2, 0, -1);
            put(2, 1, 0);
            put(2, 2, 1); // Leave it this way - don't uncomment
        }
    };
    Imgproc.filter2D(source, destination, -1, kernel);
    Imgcodecs.imwrite("robinsonMaskExample.jpg", destination);
} catch (Exception e) {
    System.out.println("Error: " + e.getMessage());
}
The moral of this story: let this be a lesson in finding tutorials online. Don't trust all of them; they sometimes give you wrong information, such as what you experienced just now with the wrong kernel size and the misnamed edge detector. I'd certainly use them as a good starting point, but when it comes to the nitty-gritty details, always debug the posted code to make sure that what the authors intended to write is actually what is produced.
I've got a live stream of JPG images coming in on a different thread in my Java app, and I want to continually scan for faces so that I can later output a list of all the different faces that passed the camera while it was running, along with how many times each face was seen. Here's my current code:
void doImageProcessing() throws IOException {
    // Create face detector and recogniser
    FKEFaceDetector faceDetector = new FKEFaceDetector(new HaarCascadeDetector());
    EigenFaceRecogniser<KEDetectedFace, Person> faceRecognizer = EigenFaceRecogniser.create(512, new RotateScaleAligner(), 512, DoubleFVComparison.CORRELATION, Float.MAX_VALUE);
    FaceRecognitionEngine<KEDetectedFace, Extractor<KEDetectedFace>, Person> faceEngine = FaceRecognitionEngine.create(faceDetector, faceRecognizer);
    // Start loop
    while (true) {
        // Get next frame
        byte[] imgData = nextProcessingData;
        nextProcessingData = null;
        // Decode image
        BufferedImage img = ImageIO.read(new ByteArrayInputStream(imgData));
        // Detect faces
        FImage fimg = ImageUtilities.createFImage(img);
        List<KEDetectedFace> faces = faceEngine.getDetector().detectFaces(fimg);
        // Go through detected faces
        for (KEDetectedFace face : faces) {
            // Find an existing person for this face
            Person person = null;
            try {
                List<IndependentPair<KEDetectedFace, ScoredAnnotation<Person>>> rfaces = faceEngine.recogniseBest(face.getFacePatch());
                ScoredAnnotation<Person> score = rfaces.get(0).getSecondObject();
                if (score != null)
                    person = score.annotation;
            } catch (Exception e) {
            }
            // If not found, create a new person
            if (person == null) {
                person = new Person();
                System.out.println("Identified new person: " + person.getIdentifier());
                // Train the engine to recognize this new person
                faceEngine.train(person, face.getFacePatch());
            } else {
                // This person has been detected before
                System.out.println("Identified existing person: " + person.getIdentifier());
            }
        }
    }
}
The problem is that it always detects a face as new, even if it's the same face that was detected in the previous frame. rfaces is always empty, so it can never identify an existing face. What am I doing wrong?
Also, I have no idea what the parameters to the EigenFaceRecogniser.create() function should be; maybe that's why it's not recognizing anything...
The parameters you've given to the EigenFaceRecogniser.create() function are way off, so that's the likely cause of your problems. The following is more likely to work:
EigenFaceRecogniser<KEDetectedFace, Person> faceRecognizer = EigenFaceRecogniser.create(20, new RotateScaleAligner(), 1, DoubleFVComparison.CORRELATION, 0.9f);
Explanation:
The first parameter is the number of principal components in the EigenFace algorithm; the exact value is normally determined experimentally, but something around ~20 is probably fine.
The third parameter is the number of nearest neighbours to use for the KNN classifier. 1 nearest-neighbour should be fine.
The final parameter is a distance threshold for the classifier. The correlation comparison returns a similarity measure (higher values mean more similar), so the threshold given is a lower limit that must be exceeded. As we've set 1 nearest neighbour, the similarity between the most similar face and the query face must be greater than 0.9. Note that a value of 0.9 is just a guess; in order to optimise the performance of your recogniser you'll need to experiment with this.
Another minor point - instead of:
BufferedImage img = ImageIO.read(new ByteArrayInputStream(imgData));
FImage fimg = ImageUtilities.createFImage(img);
It's generally better to let OpenIMAJ read your image as it works around a number of known problems with ImageIO's handling of certain types of JPEG:
FImage fimg = ImageUtilities.readF(new ByteArrayInputStream(imgData));