I'm trying to add some images to my Android app. First of all, I want to add a background image, but when I do, the app starts running much slower: animations are not smooth and the app just lags. First I added it this way:
android:background="@mipmap/background"
What I tried next:
1) Making the background image in several resolutions for different screen sizes and putting each one into the corresponding folder in resources. My background image resolutions:
MDPI: 320x467 px
HDPI: 480x800 px
XHDPI: 640x960 px
XXHDPI: 960x1400 px
XXXHDPI: 1280x1920 px
That didn't work.
2) Moving all images out of Android Studio to my backend and fetching them by request with an AsyncTask:
First, I pick the URL for the right resolution by DPI:
URL url = null;
try {
float density = getResources().getDisplayMetrics().density;
url = new URL(PicturesApi.getUrlByDPI(density));
} catch (MalformedURLException e) {
e.printStackTrace();
}
new SetImageBackground().execute(url);
getUrlByDPI method:
public static String getUrlByDPI(float density){
if (density == 0.75f)
{
return "http://back_url/static/ldpi/background.png";
}
else if (density >= 1.0f && density < 1.5f)
{
return "http://back_url/static/mdpi/background.png";
}
else if (density == 1.5f)
{
return "http://back_url/static/hdpi/background.png";
}
else if (density > 1.5f && density <= 2.0f)
{
return "http://back_url/static/xhdpi/background.png";
}
else if (density > 2.0f && density <= 3.0f)
{
return "http://back_url/static/xxhdpi/background.png";
}
else
{
return "http://back_url/static/xxxhdpi/background.png";
}
}
SetImageBackground class:
public class SetImageBackground extends AsyncTask<URL, Void, BitmapDrawable> {
@Override
protected BitmapDrawable doInBackground(URL... urls) {
Bitmap bmp = null;
try {
bmp = BitmapFactory.decodeStream(urls[0].openConnection().getInputStream());
} catch (IOException e) {
e.printStackTrace();
}
BitmapDrawable bitdraw = new BitmapDrawable(getResources(), bmp);
return bitdraw;
}
@Override
protected void onPostExecute(BitmapDrawable bitdraw){
background = (CoordinatorLayout) findViewById(R.id.app_bar);
background.setBackground(bitdraw);
}
}
It works, but the lag remains.
Why could this happen, and what should I pay attention to in an image (resolution, file extension) when adding it to an app? What is the correct way to do this? Maybe I'm doing something wrong?
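For reference, here is a minimal sketch of the kind of downsampled decode I have been reading about, in case the lag simply comes from decoding the PNG at full size and letting the view scale it on every frame. This is only a guess, and decodeScaledBackground, reqWidth and reqHeight are illustrative names, not something already in my app:
// Sketch: decode a downloaded image no larger than the target view,
// instead of decoding it at full resolution and scaling it while drawing.
private Bitmap decodeScaledBackground(byte[] imageBytes, int reqWidth, int reqHeight) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true; // first pass: read dimensions only
    BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length, options);
    int inSampleSize = 1;
    while (options.outWidth / (inSampleSize * 2) >= reqWidth
            && options.outHeight / (inSampleSize * 2) >= reqHeight) {
        inSampleSize *= 2; // halve until the decoded size is close to the view size
    }
    options.inJustDecodeBounds = false;
    options.inSampleSize = inSampleSize;
    options.inPreferredConfig = Bitmap.Config.RGB_565; // an opaque background does not need alpha
    return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length, options);
}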
I'm having trouble using Microsoft's Emotion API for Android. I have no issues running the Face API; I'm able to get the face rectangles, but I can't get the Emotion API working. I am taking images using the built-in Android camera itself. Here is the code I am using:
private void detectAndFrame(final Bitmap imageBitmap)
{
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
imageBitmap.compress(Bitmap.CompressFormat.PNG, 100, outputStream);
ByteArrayInputStream inputStream =
new ByteArrayInputStream(outputStream.toByteArray());
AsyncTask<InputStream, String, List<RecognizeResult>> detectTask =
new AsyncTask<InputStream, String, List<RecognizeResult>>() {
@Override
protected List<RecognizeResult> doInBackground(InputStream... params) {
try {
Log.e("i","Detecting...");
faces = faceServiceClient.detect(
params[0],
true, // returnFaceId
false, // returnFaceLandmarks
null // returnFaceAttributes: a string like "age, gender"
);
if (faces == null)
{
Log.e("i","Detection Finished. Nothing detected");
return null;
}
Log.e("i",
String.format("Detection Finished. %d face(s) detected",
faces.length));
ImageView imageView = (ImageView)findViewById(R.id.imageView);
InputStream stream = params[0];
com.microsoft.projectoxford.emotion.contract.FaceRectangle[] rects = new com.microsoft.projectoxford.emotion.contract.FaceRectangle[faces.length];
for (int i = 0; i < faces.length; i++) {
com.microsoft.projectoxford.face.contract.FaceRectangle rect = faces[i].faceRectangle;
rects[i] = new com.microsoft.projectoxford.emotion.contract.FaceRectangle(rect.left, rect.top, rect.width, rect.height);
}
List<RecognizeResult> result;
result = client.recognizeImage(stream, rects);
return result;
} catch (Exception e) {
Log.e("e", e.getMessage());
Log.e("e", "Detection failed");
return null;
}
}
@Override
protected void onPreExecute() {
//TODO: show progress dialog
}
@Override
protected void onProgressUpdate(String... progress) {
//TODO: update progress
}
@Override
protected void onPostExecute(List<RecognizeResult> result) {
ImageView imageView = (ImageView)findViewById(R.id.imageView);
imageView.setImageBitmap(drawFaceRectanglesOnBitmap(imageBitmap, faces));
MediaStore.Images.Media.insertImage(getContentResolver(), imageBitmap, "AnImage" ,"Another image");
if (result == null) return;
for (RecognizeResult res: result) {
Scores scores = res.scores;
Log.e("Anger: ", ((Double)scores.anger).toString());
Log.e("Neutral: ", ((Double)scores.neutral).toString());
Log.e("Happy: ", ((Double)scores.happiness).toString());
}
}
};
detectTask.execute(inputStream);
}
I keep getting the error Post Request 400, indicating some sort of issue with the JSON or the face rectangles. But I'm not sure where to start debugging this issue.
You're using the stream twice, so the second time around you're already at the end of the stream. Either reset the stream, or simply call the Emotion API without rectangles (i.e., skip the call to the Face API); the Emotion API will determine the face rectangles for you.
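For illustration, here is a minimal sketch of both options as a replacement for the middle of your doInBackground, using the same faceServiceClient and client fields as in your code; the single-argument recognizeImage overload is an assumption about the Emotion SDK, so check your client version before relying on it:
private List<RecognizeResult> detectEmotions(ByteArrayInputStream stream) throws Exception {
    // Option 1: keep the Face API, but rewind the stream before the second read.
    com.microsoft.projectoxford.face.contract.Face[] faces =
            faceServiceClient.detect(stream, true, false, null);
    stream.reset(); // a ByteArrayInputStream rewinds to its start; no mark() needed
    com.microsoft.projectoxford.emotion.contract.FaceRectangle[] rects =
            new com.microsoft.projectoxford.emotion.contract.FaceRectangle[faces.length];
    for (int i = 0; i < faces.length; i++) {
        com.microsoft.projectoxford.face.contract.FaceRectangle r = faces[i].faceRectangle;
        rects[i] = new com.microsoft.projectoxford.emotion.contract.FaceRectangle(
                r.left, r.top, r.width, r.height);
    }
    return client.recognizeImage(stream, rects);
    // Option 2: skip the Face API entirely and let the Emotion API find the faces
    // (assuming the one-argument overload exists in your SDK version):
    // return client.recognizeImage(stream);
}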
I took the Google example for using ImageReader from here.
The code uses the Camera2 API and ImageReader so that capturing the image runs on a different thread than previewing it.
As I want to target Android KitKat (API 20), I need to modify the code to use the older Camera API while keeping the ImageReader part as is.
Here is the part of original code that sets onImageAvailableListener:
/**
* THIS IS CALLED WHEN OPENING CAMERA
* Sets up member variables related to camera.
*
* @param width The width of available size for camera preview
* @param height The height of available size for camera preview
*/
private void setUpCameraOutputs(int width, int height) {
Activity activity = getActivity();
CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
try {
for (String cameraId : manager.getCameraIdList()) {
CameraCharacteristics characteristics
= manager.getCameraCharacteristics(cameraId);
// We don't use a front facing camera in this sample.
Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
continue;
}
StreamConfigurationMap map = characteristics.get(
CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
if (map == null) {
continue;
}
// For still image captures, we use the largest available size.
Size largest = Collections.max(
Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
new CompareSizesByArea());
mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
ImageFormat.JPEG, /*maxImages*/2);
mImageReader.setOnImageAvailableListener(
mOnImageAvailableListener, mBackgroundHandler);
...
}
Now I was able to use the older Camera API, but I am lost on how to connect it with ImageReader, so I don't know how to set the OnImageAvailableListener so that I can access the image once it is delivered.
Here is my modification:
@Override
public void onActivityCreated(Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
mTextureView = (AutoFitTextureView) v.findViewById(R.id.texture);
mTextureView.setSurfaceTextureListener(new SurfaceTextureListener() {
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface,
int width, int height) {
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
return true;
}
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface,
int width, int height) {
mCamera = Camera.open();
try {
Camera.Parameters parameters = mCamera.getParameters();
if (getActivity().getResources().getConfiguration().orientation != Configuration.ORIENTATION_LANDSCAPE) {
// parameters.set("orientation", "portrait"); // For
// Android Version 2.2 and above
mCamera.setDisplayOrientation(90);
// For Android Version 2.0 and above
parameters.setRotation(90);
}
mCamera.setParameters(parameters);
mCamera.setPreviewTexture(surface);
} catch (IOException exception) {
mCamera.release();
}
mCamera.startPreview();
setUpCameraOutputs(width, height);
tfPreviewListener.initialize(getActivity().getAssets(), scoreView);
}
});
}
My question is: how should I add ImageReader to the code above to make it work properly?
Thanks in advance.
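One workaround worth noting (not the sample's original approach): the old android.hardware.Camera API only exposes SurfaceHolder and SurfaceTexture as preview targets, so it cannot feed an ImageReader directly; instead you can register a preview callback and hand the NV21 frames to the background handler yourself. A rough sketch, placed right after mCamera.setParameters(parameters); processFrame is a hypothetical stand-in for whatever work mOnImageAvailableListener did:
Camera.Parameters params = mCamera.getParameters();
final Camera.Size previewSize = params.getPreviewSize();
int bufferSize = previewSize.width * previewSize.height
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
mCamera.addCallbackBuffer(new byte[bufferSize]);
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Copy the frame before handing it to another thread,
        // because the buffer is recycled on the last line below.
        final byte[] frame = java.util.Arrays.copyOf(data, data.length);
        mBackgroundHandler.post(new Runnable() {
            @Override
            public void run() {
                // processFrame(frame, previewSize.width, previewSize.height); // hypothetical hook
            }
        });
        camera.addCallbackBuffer(data); // return the buffer for the next frame
    }
});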
In my camera layout there are 2 buttons for adjusting the zoom parameter, one for increasing and one for decreasing. Each button has an OnClickListener. Before increasing/decreasing the zoom value I use the Camera.Parameters.isZoomSupported() function to check whether the device supports zooming. On my Sony Z1 it works perfectly, but on my other device (a Samsung Galaxy S) the function returns true, yet the device can't zoom.
My code piece:
public void onClick(View v) {
if (isZoomSupported()) {
Camera cam = application.getCamera();
if (cam != null) {
cam.stopPreview();
Parameters par = cam.getParameters();
int maxZoom = par.getMaxZoom();
int zoomValue = par.getZoom();
zoomValue += 1;
if (zoomValue > maxZoom) {
zoomValue = maxZoom;
}
par.setZoom(zoomValue);
cam.setParameters(par);
cam.startPreview();
}
} else {
toastShortWithCancel(getString(R.string.zoom_not_supported));
}
}
And my little isZoomSupported() function:
private boolean isZoomSupported() {
Camera cam = application.getCamera();
if (cam != null) {
Parameters par = cam.getParameters();
return par.isZoomSupported();
}
return false;
}
So, what is the problem with my zoom control? Are there any mistakes? My Samsung device runs Android 2.3.5, so I am programming against API 8.
I'm using libgdx to make an application and I need to use the camera, so I followed this tutorial. All my camera feed is rotated 90 degrees, but it is drawn as if it weren't. Unfortunately that means the preview is totally distorted and very hard to use.
I won't post my code here unless snippets are asked for, because I copy-pasted the code from the tutorial into my game. The only change I recall making was as follows.
I changed the original surfaceCreated() method in CameraSurface.java
public void surfaceCreated( SurfaceHolder holder ) {
// Once the surface is created, simply open a handle to the camera hardware.
camera = Camera.open();
}
to open the front-facing camera (I'm using a Nexus 7 that only has a front camera...):
public void surfaceCreated( SurfaceHolder holder ) {
// Once the surface is created, simply open a handle to the camera hardware.
camera = openFrontFacingCamera();
}
@SuppressLint("NewApi")
private Camera openFrontFacingCamera()
{
int cameraCount = 0;
Camera cam = null;
Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
cameraCount = Camera.getNumberOfCameras();
for ( int camIdx = 0; camIdx < cameraCount; camIdx++ ) {
Camera.getCameraInfo( camIdx, cameraInfo );
if ( cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT ) {
try {
cam = Camera.open( camIdx );
} catch (RuntimeException e) {
System.out.println("Falied to open.");
}
}
}
return cam;
}
Other than that change, the rest of the code is almost exactly the same (excluding minor variable changes and such).
You can use the ExifInterface class to read the TAG_ORIENTATION attribute of your image and rotate the image accordingly.
The code would look like this:
ExifInterface ei = new ExifInterface(imagePath);
int orientation = ei.getAttributeInt(ExifInterface.TAG_ORIENTATION,
ExifInterface.ORIENTATION_NORMAL);
switch (orientation) {
case ExifInterface.ORIENTATION_ROTATE_90:
imageView.setRotation(90);
break;
...
default:
break;
}
Upon diving into the camera API, I found that all I have to do is use a nice little method called setDisplayOrientation(90), and it works perfectly now.
Revised code:
@SuppressLint("NewApi")
public void surfaceCreated( SurfaceHolder holder ) {
// Once the surface is created, simply open a handle to the camera hardware.
camera = openFrontFacingCamera();
camera.setDisplayOrientation(90);
}
@SuppressLint("NewApi")
private Camera openFrontFacingCamera()
{
int cameraCount = 0;
Camera cam = null;
Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
cameraCount = Camera.getNumberOfCameras();
for ( int camIdx = 0; camIdx < cameraCount; camIdx++ ) {
Camera.getCameraInfo( camIdx, cameraInfo );
if ( cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT ) {
try {
cam = Camera.open( camIdx );
} catch (RuntimeException e) {
System.out.println("Falied to open.");
}
}
}
return cam;
}
P.S. The only reason I'm ignoring the NewApi lint warning is that I know the exact device this app will be running on, and it is specific to that device... I would not recommend this unless you know the device's API level is high enough (it only requires API 8).
I'm really stuck on the screen orientation logic.
Here is my code:
@Override
public EngineOptions onCreateEngineOptions() {
this.cameraWidth = getResources().getDisplayMetrics().widthPixels;
this.cameraHeight = getResources().getDisplayMetrics().heightPixels;
this.camera = CameraFactory.createPixelPerfectCamera(this, this.cameraWidth / 2.0F, this.cameraHeight / 2.0F);
this.camera.setResizeOnSurfaceSizeChanged(true);
this.dpi = getResources().getDisplayMetrics().densityDpi;
Display display = ((WindowManager) getSystemService(WINDOW_SERVICE)).getDefaultDisplay();
int rotation = display.getRotation();
if (rotation == Surface.ROTATION_90 || rotation == Surface.ROTATION_270) {
screenOrientation = ScreenOrientation.LANDSCAPE_SENSOR;
} else {
screenOrientation = ScreenOrientation.PORTRAIT_SENSOR;
}
EngineOptions engineOptions = new EngineOptions(true,screenOrientation, new FillResolutionPolicy(), this.camera);
engineOptions.getAudioOptions().setNeedsSound(true);
return engineOptions;
}
@Override
public void onSurfaceChanged(final GLState pGLState, final int pWidth, final int pHeight) {
super.onSurfaceChanged(pGLState, pWidth, pHeight);
Log.i(TAG, "onSurfaceChanged " + "w: " + this.camera.getSurfaceWidth() + " h: " + this.camera.getSurfaceHeight());
this.cameraWidth = this.camera.getSurfaceWidth();
this.cameraHeight = this.camera.getSurfaceHeight();
this.camera.setCenter(this.cameraWidth / 2.0F, this.cameraHeight / 2.0F);
}
When I try my LWP on an AVD (3.7" FWVGA slider, 480x854) everything works fine, but only in the LWP preview mode. When, for example, I press the "Set wallpaper" button from the landscape LWP preview, I get a half-black screen with my LWP shifted onto the other half of the desktop.
Also, I have noticed that the method onCreateEngineOptions is not called when returning from the preview mode to the desktop.
Also, I correctly receive the onSurfaceChanged event in my LWP every time. I have also configured, and can handle, the screen orientation change event... but how do I apply it to my logic?
public BroadcastReceiver mBroadcastReceiver = new BroadcastReceiver() {
@Override
public void onReceive(Context context, Intent myIntent) {
if (myIntent.getAction().equals(BROADCAST_CONFIGURATION_CHANGED)) {
Log.d(TAG, "received->" + BROADCAST_CONFIGURATION_CHANGED);
if (getResources().getConfiguration().orientation == Configuration.ORIENTATION_LANDSCAPE) {
Log.i(TAG, "LANDSCAPE_SENSOR");
} else {
Log.i(TAG, "PORTRAIT_SENSOR");
}
}
}
};
How do I correctly set up the LWP to handle both modes, portrait and landscape?
Thanks in advance!
I had a similar problem with a game; I fixed it with this line in each activity in the manifest file (note that on API 13 and above you also need to add screenSize, i.e. android:configChanges="orientation|screenSize", or the activity is still recreated on rotation):
<activity
....
android:configChanges="orientation"
... />
and use the methods:
#Override
public void onResumeGame() {
super.onResumeGame();
}
#Override
public void onPauseGame() {
super.onPauseGame();
}
Hopefully this solves your problem. Best regards.