Presumably it is quite common to want a screen that mixes graphical elements (for which you might use a Canvas) with ordinary widgets and buttons. But everything I've looked at so far shows examples of either screens full of widgets OR whole-screen canvases. Can someone point me to some example code that uses both at the same time?
...or is this not the done thing?
EDIT: Following on from Steve's suggestion my code now looks like this:
public class CanLay extends Activity
{
Bitmap bm;
@Override
protected void onCreate(Bundle savedInstanceState)
{
// TODO Auto-generated method stub
super.onCreate(savedInstanceState);
setContentView(R.layout.canlay);
InputStream is = getResources().openRawResource(R.drawable.ella);
bm = BitmapFactory.decodeStream(is);
SurfaceView sv;
SurfaceHolder sh;
Canvas can = null;
sv = (SurfaceView)findViewById(R.id.surview);
sh = sv.getHolder();
try
{
can = sh.lockCanvas(null);
synchronized(sh)
{
can.drawBitmap(bm, 0, 0, null);
}
}
finally
{
if (can != null)
{
sh.unlockCanvasAndPost(can);
}
}
}
}
The only problem now is that sh.lockCanvas(null) always returns null.
I don't know of any sample code, but you usually get a Canvas from a SurfaceView, which is a View like other widgets (e.g., TextView and Button). I would try just laying out the SurfaceView along with the other elements of the layout. Then your basic XML structure might look something like:
<LinearLayout >
<TextView />
<Button />
<SurfaceView />
</LinearLayout>
EDIT:
To get the Canvas from SurfaceView, first get the SurfaceHolder, then lock the canvas, draw your stuff, and unlock the canvas to have it displayed. In code that normally looks like:
SurfaceHolder holder = surfaceView.getHolder();
Canvas c = null;
try {
c = holder.lockCanvas(null);
synchronized(holder) {
// draw here
// c.drawBitmap() or whatever
}
} finally {
if(c != null)
holder.unlockCanvasAndPost(c);
}
DOUBLE EDIT:
According to the docs, lockCanvas returns null when the surface isn't ready. When you're still in onCreate(), the surface is definitely not ready. The way you know a surface is ready is by handling a callback to SurfaceHolder.Callback.surfaceCreated(). (Games often use surfaceCreated() to know when to start running their non-event thread.)
I know this may sound like more and more stuff you have to do, but it's really not that bad. You can even do it inline with something like this:
@Override
protected void onCreate(Bundle savedInstanceState) {
    // super.onCreate(), inflate the XML with setContentView(), create your Bitmap, etc

    sv = (SurfaceView) findViewById(R.id.surview);
    sv.getHolder().addCallback(new SurfaceHolder.Callback() {
        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            Canvas can = null;
            try {
                can = holder.lockCanvas(null);
                synchronized (holder) {
                    can.drawBitmap(bm, 0, 0, null);
                }
            } finally {
                if (can != null) {
                    holder.unlockCanvasAndPost(can);
                }
            }
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
            // react to size/format changes if needed
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            // stop drawing; the surface is going away
        }
    });

    // the rest of onCreate()
}
I may have gotten some details slightly off, but you get the idea. Overall, it might be easier to put your SurfaceHolder.Callback implementation in its own non-anonymous class, since there are restrictions on anonymous classes, but that's the way you know your SurfaceView is ready for business. And of course, it's good to implement SurfaceHolder.Callback.surfaceDestroyed() so that you know when the SurfaceView is going out of business. (Games often use surfaceDestroyed() to know when to stop running their non-event thread!)
Related
I am creating a basic camera app as a small project I'm doing to get started with Android development.
When I click on the button to take a picture, there is about a 1-second delay in which the preview freezes before unfreezing again. There is no issue with crashing - just the freezing issue. Why is this happening and how can I fix it?
Below is the method where the camera is instantiated, as well as my SurfaceView class.
private void startCamera() {
this.setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
cameraPreviewLayout = (FrameLayout) findViewById(R.id.camera_preview);
camera = checkDeviceCamera();
camera.setDisplayOrientation(90);
mImageSurfaceView = new ImageSurfaceView(MainActivity.this, camera);
cameraPreviewLayout.addView(mImageSurfaceView);
ImageButton captureButton = (ImageButton)findViewById(R.id.imageButton);
captureButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
camera.takePicture(null, null, pictureCallback);
camera.stopPreview();
camera.startPreview();
}
});
}
public class ImageSurfaceView extends SurfaceView implements
SurfaceHolder.Callback {
private Camera camera;
private SurfaceHolder surfaceHolder;
public ImageSurfaceView(Context context, Camera camera) {
super(context);
this.camera = camera;
this.surfaceHolder = getHolder();
this.surfaceHolder.addCallback(this);
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
try {
this.camera.setPreviewDisplay(holder);
this.camera.startPreview();
this.camera.setDisplayOrientation(90);
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
}
}
EDIT: There is currently nothing in the pictureCallback.
Camera.PictureCallback pictureCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) { /* currently empty */ }
};
You don't need to call stopPreview() after takePicture(), and you don't need startPreview() on the next line. You do need startPreview() inside your onPictureTaken() callback (not in onClick() as in the posted code!) if you want the live preview to restart after the picture is captured into a JPEG stream.
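A minimal sketch of what that callback might look like (this is not your posted code; what you do with the JPEG bytes is up to you):
Camera.PictureCallback pictureCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera cam) {
        // save or process the JPEG bytes in `data` here, preferably off the UI thread
        cam.startPreview();   // restart the live preview after the capture
    }
};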
To keep your UI responsive while using the camera, you should do all work with the camera on a background thread. But it is not enough to call Camera.open() or Camera.close() on just any background thread: you must create a HandlerThread and use it for Camera.open(). The same Looper will then be used for all camera callbacks, including PictureCallback.onPictureTaken(). See my detailed walkthrough about the use of HandlerThread.
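A rough sketch of that setup (only an outline, not the full walkthrough; the field names are illustrative):
// Open the camera on a dedicated HandlerThread so that its callbacks,
// including PictureCallback.onPictureTaken(), are delivered off the UI thread.
HandlerThread cameraThread = new HandlerThread("CameraBackground");
cameraThread.start();
Handler cameraHandler = new Handler(cameraThread.getLooper());

cameraHandler.post(new Runnable() {
    @Override
    public void run() {
        camera = Camera.open();   // callbacks use the Looper of the thread that called open()
        // configure parameters, setPreviewDisplay(), startPreview() here
    }
});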
As I explained elsewhere, you can achieve even better performance if you use the new camera2 API on devices that fully support it (but it is better to use the old API on devices that provide only LEGACY-level camera2 support).
But if you want to get the maximum from the camera ISP, this kind of freeze may be inevitable (it depends on many hardware and firmware design choices made by the manufacturer). Some clever UI tweaks may help to conceal the effect.
You'll need to enable access to the hidden "Developer options" menu on your Android phone. To do that, tap the "About phone" option in Settings, then tap "Build number" seven times and you're done. Now back out to the main Settings menu and you'll find Developer options somewhere near the bottom of the list.
Now that you're done with that part, let the real fun begin. Tap the new Developer options menu you just enabled and scroll until you see the following three settings (note that they may be located within an "Advanced" subsection):
Window animation scale
Transition animation scale
Animator duration scale
By default, each of those three options is set to "1x", but tapping them and changing them to ".5x" will dramatically speed up your phone. This harmless tweak forces the device to speed up all transition animations, and the entire user experience feels faster and smoother as a result.
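For what it's worth, those three scales can also be read programmatically. This is just an illustrative sketch (writing them requires the WRITE_SECURE_SETTINGS permission, so it only reads the current values):
// Read the current animation scales; 1.0f is the stock default.
float windowScale = Settings.Global.getFloat(
        getContentResolver(), Settings.Global.WINDOW_ANIMATION_SCALE, 1f);
float transitionScale = Settings.Global.getFloat(
        getContentResolver(), Settings.Global.TRANSITION_ANIMATION_SCALE, 1f);
float animatorScale = Settings.Global.getFloat(
        getContentResolver(), Settings.Global.ANIMATOR_DURATION_SCALE, 1f);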
Scenario:
I ran into a strange issue while testing out threads in my fragment.
I have a fragment written in Kotlin with the following snippet in onResume():
override fun onResume() {
super.onResume()
val handlerThread = HandlerThread("Stuff")
handlerThread.start()
val handler = Handler(handlerThread.looper)
handler.post {
Thread.sleep(2000)
tv_name.setText("Something something : " + isMainThread())
}
}
isMainThread() is a function that checks if the current thread is the main thread, like so:
private fun isMainThread(): Boolean = Looper.myLooper() == Looper.getMainLooper()
I am seeing my TextView get updated after 2 seconds with the text "Something something : false"
Seeing false tells me that this thread is currently not the UI/Main thread.
I thought this was strange so I created the same fragment but written in Java instead with the following snippet from onResume():
@Override
public void onResume() {
super.onResume();
HandlerThread handlerThread = new HandlerThread("stuff");
handlerThread.start();
new Handler(handlerThread.getLooper()).post(new Runnable() {
@Override
public void run() {
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
textView.setText("Something something...");
}
});
}
The app crashes with the following exception as expected:
android.view.ViewRootImpl$CalledFromWrongThreadException: Only the original thread that created a view hierarchy can touch its views.
at android.view.ViewRootImpl.checkThread(ViewRootImpl.java:7313)
at android.view.ViewRootImpl.requestLayout(ViewRootImpl.java:1161)
I did some research but I couldn't really find something that explains this. Also, please assume that my views are all inflated correctly.
Question:
Why does my app not crash when I modify my TextView in the runnable that's running off my UI thread in the Fragment written in Kotlin?
If there's something in some documentation somewhere that explains this, can someone please refer me to this?
I am not actually trying to modify my UI off the UI thread, I am just curious why this is happening.
Please let me know if you guys need any more information. Thanks a lot!
Update:
As per what @Hong Duan mentioned, requestLayout() was not getting called. This has nothing to do with Kotlin/Java but with the TextView itself.
I goofed and didn't realize that the TextView in my Kotlin fragment has a layout_width of "match_parent", whereas the TextView in my Java fragment has a layout_width of "wrap_content".
TL;DR: User error, plus the fact that requestLayout() (where the thread check happens) is not always called.
The CalledFromWrongThreadException is only thrown when ViewRootImpl.checkThread() is actually reached, which does not happen on every view update. In your cases, it is called during ViewRootImpl.requestLayout(); here is the code from ViewRootImpl.java:
@Override
public void requestLayout() {
if (!mHandlingLayoutInLayoutRequest) {
checkThread();
mLayoutRequested = true;
scheduleTraversals();
}
}
void checkThread() {
if (mThread != Thread.currentThread()) {
throw new CalledFromWrongThreadException(
"Only the original thread that created a view hierarchy can touch its views.");
}
}
And for TextView, it's not always necessary to do a relayout when we update its text; we can see the logic in the source code:
/**
* Check whether entirely new text requires a new view layout
* or merely a new text layout.
*/
private void checkForRelayout() {
// If we have a fixed width, we can just swap in a new text layout
// if the text height stays the same or if the view height is fixed.
if ((mLayoutParams.width != LayoutParams.WRAP_CONTENT
|| (mMaxWidthMode == mMinWidthMode && mMaxWidth == mMinWidth))
&& (mHint == null || mHintLayout != null)
&& (mRight - mLeft - getCompoundPaddingLeft() - getCompoundPaddingRight() > 0)) {
// Static width, so try making a new text layout.
int oldht = mLayout.getHeight();
int want = mLayout.getWidth();
int hintWant = mHintLayout == null ? 0 : mHintLayout.getWidth();
/*
* No need to bring the text into view, since the size is not
* changing (unless we do the requestLayout(), in which case it
* will happen at measure).
*/
makeNewLayout(want, hintWant, UNKNOWN_BORING, UNKNOWN_BORING,
mRight - mLeft - getCompoundPaddingLeft() - getCompoundPaddingRight(),
false);
if (mEllipsize != TextUtils.TruncateAt.MARQUEE) {
// In a fixed-height view, so use our new text layout.
if (mLayoutParams.height != LayoutParams.WRAP_CONTENT
&& mLayoutParams.height != LayoutParams.MATCH_PARENT) {
autoSizeText();
invalidate();
return; // return with out relayout
}
// Dynamic height, but height has stayed the same,
// so use our new text layout.
if (mLayout.getHeight() == oldht
&& (mHintLayout == null || mHintLayout.getHeight() == oldht)) {
autoSizeText();
invalidate();
return; // return with out relayout
}
}
// We lose: the height has changed and we have a dynamic height.
// Request a new view layout using our new text layout.
requestLayout();
invalidate();
} else {
// Dynamic width, so we have no choice but to request a new
// view layout with a new text layout.
nullLayouts();
requestLayout();
invalidate();
}
}
As you can see, in some cases requestLayout() is not called, so the main-thread check never happens.
So I think the key point is not Kotlin vs. Java; it's the TextViews' layout params, which determine whether requestLayout() is called or not.
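As an aside, if you ever do need to update the view from that background Handler, the usual pattern is to post the UI work back to the main Looper. A sketch with a hypothetical doSlowWork() helper:
// Do the slow work on the background HandlerThread, then hop back to the
// main thread before touching any views.
new Handler(handlerThread.getLooper()).post(new Runnable() {
    @Override
    public void run() {
        final String result = doSlowWork();              // hypothetical helper
        new Handler(Looper.getMainLooper()).post(new Runnable() {
            @Override
            public void run() {
                textView.setText(result);                // safe: runs on the UI thread
            }
        });
    }
});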
Most likely, in the Kotlin case, there is some overhead in setText() which ensures that it runs on the UI thread.
In my app I have an ImageView whose image I turned into a Bitmap for editing. I need to detect which pixels of the ImageView were touched by the user. In addition, if the user draws a line with his finger, I need to know all the pixels that were touched, in order to change them. How do I detect which pixels were touched?
Ok Jonah, here are some directions for you.
I guess you want that blending effect to react quickly to user input, so the first thing is that you'd better go for a custom SurfaceView instead of an ImageView, because it is more suitable for drawing the high-frame-rate animations required in 2D action games and animations. I strongly recommend you read this guide, paying special attention to the part about the use of SurfaceView, before going any further. You will basically need to create a class that extends SurfaceView and implements SurfaceHolder.Callback. This view will then be responsible for listening for user touch events and for rendering the frames to animate the blending effect.
Take a look at following code as a reference:
public class MainView extends SurfaceView implements SurfaceHolder.Callback {
public MainView(Context context, AttributeSet attrs) {
super(context, attrs);
SurfaceHolder holder = getHolder();
holder.addCallback(this); // Register this view as a surface change listener
setFocusable(true); // make sure we get key events
}
@Override
public boolean onTouchEvent(MotionEvent event) {
    super.onTouchEvent(event);
    // Check if the touch pointer is the one you want
    if (event.getPointerId(event.getActionIndex()) == 0) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                // User touched the screen... (falls through: there are no breaks)
            case MotionEvent.ACTION_CANCEL:
                // User dragged his finger out of the view bounds...
            case MotionEvent.ACTION_UP:
                // User raised his finger...
            case MotionEvent.ACTION_MOVE:
                // User dragged his finger...
                // Update the blending effect bitmap here and trigger a frame redraw,
                // if you don't already have an animation thread to do it for you.
                return true;
        }
    }
    return false;
}
/*
* Callback invoked when the Surface has been created and is ready to be
* used.
*/
public void surfaceCreated(SurfaceHolder holder) {
// You need to wait for this call back before attempting to draw
}
/*
* Callback invoked when the Surface has been destroyed and must no longer
* be touched. WARNING: after this method returns, the Surface/Canvas must
* never be touched again!
*/
public void surfaceDestroyed(SurfaceHolder holder) {
// You shouldn't draw to this surface after this method has been called
}
}
Then use it on the layout of your "drawing" activity like this:
<com.xxx.yyy.MainView
android:id="@+id/main_view"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
To draw to this surface you need the following code:
Canvas c = null;
try {
c = mSurfaceHolder.lockCanvas(null);
synchronized (mSurfaceHolder) {
if (c != null)
c.drawBitmap(blendingImage, 0, 0, null); // Render blending effect
}
} catch (Exception e) {
Log.e("SurfaceView", "Error drawing frame", e);
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (c != null) {
mSurfaceHolder.unlockCanvasAndPost(c);
}
}
A fully functional example would be impractical to put in an answer so I recommend you to download the Lunar Lander sample game from Google for a full working example. Note however, that you won't need a game animation thread (although it won't hurt having one), like the one coded in the Lunar Lander sample, if all you need is the blending effect. The purpose of that thread is to create a game loop in which game frames are constantly generated to animate objects that may or may not depend on user input. In your case, all you need is to trigger a frame redraw after processing each touch event.
EDIT: The following changes fix the code you've provided in the comments.
Here are the changes to MainActivity:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// put pics from drawables to Bitmaps
Resources res = getResources();
BitmapDrawable bd1 = (BitmapDrawable) res.getDrawable(R.drawable.pic1);
// FIX: This block makes `operation` a mutable bitmap version of the loaded resource
// This is required because immutable bitmaps can't be changed
Bitmap tmp = bd1.getBitmap();
operation = Bitmap.createBitmap(tmp.getWidth(), tmp.getHeight(), Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(operation);
Paint paint = new Paint();
c.drawBitmap(tmp, 0f, 0f, paint);
BitmapDrawable bd2 = (BitmapDrawable) res.getDrawable(R.drawable.pic2);
bmp = bd2.getBitmap();
myView = new MainView(this, operation, bmp);
FrameLayout preview = (FrameLayout) findViewById(R.id.preview);
preview.addView(myView);
}
...
Here are the changes to the MainView class:
public class MainView extends SurfaceView implements Callback {
private SurfaceHolder holder;
private Bitmap operation;
private Bitmap bmp2;
private boolean surfaceReady;
// took out AttributeSet attrs
public MainView(Context context, Bitmap operation, Bitmap bmp2) {
super(context);
this.operation = operation;
this.bmp2 = bmp2;
holder = getHolder(); // Fix: properly reference the instance variable
holder.addCallback(this); // Register this view as a surface change
// listener
setFocusable(true); // make sure we get key events
}
// Added so the blending operation is made in one place so it can be more easily upgraded
private void blend(int x, int y) {
if (x >= 0 && y >= 0 && x < bmp2.getWidth() && x < operation.getWidth() && y < bmp2.getHeight() && y < operation.getHeight())
operation.setPixel(x, y, bmp2.getPixel(x, y));
}
// Added so the drawing is now made in one place
private void drawOverlays() {
Canvas c = null;
try {
c = holder.lockCanvas(null);
synchronized (holder) {
if (c != null)
c.drawBitmap(operation, 0, 0, null); // Render blending
// effect
}
} catch (Exception e) {
Log.e("SurfaceView", "Error drawing frame", e);
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (c != null) {
holder.unlockCanvasAndPost(c);
}
}
}
@Override
public boolean onTouchEvent(MotionEvent event) {
super.onTouchEvent(event);
if (!surfaceReady) // No attempt to blend or draw while surface isn't ready
return false;
// Check if the touch pointer is the one you want
if (event.getPointerId(event.getActionIndex()) == 0) {
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
// User touched the screen. Falls through to ACTION_MOVE since there is no break
case MotionEvent.ACTION_MOVE:
// User dragged his finger...
blend((int) event.getX(), (int) event.getY());
}
// Update the blending effect bitmap here and trigger a frame
// redraw,
// if you don't already have an animation thread to do it for you.
drawOverlays();
return true;
}
return false;
}
/*
* Callback invoked when the Surface has been created and is ready to be
* used.
*/
public void surfaceCreated(SurfaceHolder holder) {
surfaceReady = true;
drawOverlays();
}
/*
* Callback invoked when the Surface has been destroyed and must no longer
* be touched. WARNING: after this method returns, the Surface/Canvas must
* never be touched again!
*/
public void surfaceDestroyed(SurfaceHolder holder) {
// You shouldn't draw to this surface after this method has been called
surfaceReady = false;
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
// TODO Auto-generated method stub
}
}
This code works for me. I just hope I didn't forget anything =:)
Let me know if you still have trouble, ok?
So the answer to this is that you're going to have to be a little clever, but it really shouldn't be so bad. Instead of posting all the code to do what you want to do, I'll give you a link here and an explanation.
So by managing the touch events of an application, you can figure out the average coordinates of a touch event. Using that information you can determine the center of all the pixels touched and continue to track that as a line is drawn with the finger. To track a straight line use the
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_UP:
clauses to determine the start and end of the line. If you want to track a freehand line that is not straight, you're going to need a little more tracking using
case MotionEvent.ACTION_MOVE:
and that should get you fairly started. You may need a little bit of logic to deal with the fact that you will be drawing a very thin line and I suspect that's not quite what you're going for. Maybe it is though. Either way this should be a good place to get started.
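As a rough, hedged sketch of that idea (touchedPoints and the mapping to bitmap pixels are just illustrative), the onTouchEvent of your view could collect the touched coordinates like this:
private final List<Point> touchedPoints = new ArrayList<Point>();

@Override
public boolean onTouchEvent(MotionEvent event) {
    int x = (int) event.getX();
    int y = (int) event.getY();
    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            touchedPoints.clear();               // start of a new stroke
            touchedPoints.add(new Point(x, y));
            return true;
        case MotionEvent.ACTION_MOVE:
            touchedPoints.add(new Point(x, y));  // every reported point along the drag
            return true;
        case MotionEvent.ACTION_UP:
            touchedPoints.add(new Point(x, y));  // end of the stroke
            // map these view coordinates to bitmap pixels and update them here
            return true;
    }
    return false;
}
Keep in mind that ACTION_MOVE events are sampled, so for a continuous line you may still need to interpolate between consecutive points.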
EDIT
In regards to your first comment, here is a link you can use for an example. I have to make a small disclaimer though. To get all of the pieces to work together correctly, it may not seem that simple at first. I assure you that this is one of the simplest examples and breaks the tutorial into sections.
For what you would like to do, I think you'll want to pay particular attention to section 2 (not to be confused with step 2):
2. Facilitate Drawing
I suggest this because it shows different ways to use information from the TouchEvent. Section 1 explains a little about setting up the environment to display a TouchEvent's captured data, whereas section 3 is mostly about aesthetics. This may not directly answer your question, but I suspect it will get you where you need to be.
Happy coding! Leave a comment if you have any questions.
I know this question has been asked many times, but I don't think there was ever a solution. I'm developing an application that should target all devices running Android with a back-facing camera. The problem is that I can test the application on only two devices, yet I have to be sure it works on all of them. I think the only reasonable solution would be to find code samples for the camera API that are guaranteed to work on nearly all devices. Can anybody provide such sources ... but sources ... that really are tested on many (ALL) devices? I've lost all the hair from my head ... and I'm losing my mind, I think... It is all because I've released an app (only for tests in my company) that was tested on only two devices, and which uses the camera API in a way that should work, but it appears there are some phones, for example the HTC Desire HD or HTC Evo 3D (with a 3D camera), where the app simply crashes (because of the fu..ing camera) or freezes (also because of the fu..ing camera). If there is someone who has sources for the camera API (taking a picture periodically, without user GUI interaction) that are really tested, please be so kind and, if you can, post the source or point me to the proper place.
Hmm, maybe the question should be: "Is it technically possible to use the camera API on all devices?"
Maybe I should describe how I'm currently using the API.
1) Initialize cam:
public void initCam()
{
LoggingFacility.debug("Attempting to initialize camera",this);
LoggingFacility.debug("Preview is enabled:"+isPreview,this);
try {
if (camera==null)
{
camera = Camera.open();
camera.setPreviewDisplay(mHolder);
if (camera!=null)
{
Camera.Parameters parameters = camera.getParameters();
List<Size> sizes = parameters.getSupportedPictureSizes();
if (sizes != null)
{
    Size min = sizes.get(0);
    for (Size size : sizes)
        if (size.width < min.width) min = size;
    parameters.setPictureSize(min.width, min.height);
}
camera.setParameters(parameters);
setDisplayOrientation(90);
}
}
startPreview(aps);
} catch (Throwable e){
if (exceptionsCallback!=null)
exceptionsCallback.onException(e);
}
}
2) Start preview:
private void startPreview(AfterPreviewStarted after)
{
try {
if (!isPreview)
{
LoggingFacility.debug("Starting preview",this);
//camera.stopPreview();
camera.startPreview();
isPreview = true;
LoggingFacility.debug("Preview is enabled:"+isPreview,this);
}
if (after!=null) after.doAfter();
}catch(Throwable e)
{
if (exceptionsCallback!=null)
exceptionsCallback.onException(e);
}
}
3) Take picture:
public void takePicture(final PictureCallback callback)
{
LoggingFacility.debug("Attempting to take a picture",this);
if (camera!=null)
{
if (isPreview)
{
try
{
LoggingFacility.debug("preview is enabled jut before taking picture",this);
//AudioManager mgr = (AudioManager)ctx.getSystemService(Context.AUDIO_SERVICE);
//mgr.setStreamMute(AudioManager.STREAM_SYSTEM, true);
LoggingFacility.debug("Taking picture... preview will be stopped...",this);
isPreview = false;
camera.takePicture(null, new PictureCallback(){
public void onPictureTaken(byte[] arg0, Camera arg1)
{
//LoggingFacility.debug("Picture has been taken - 1t callback",CameraPreview.this);
}
}, callback);
//mgr.setStreamMute(AudioManager.STREAM_SYSTEM, false);
} catch (Throwable e){
if (exceptionsCallback!=null)
exceptionsCallback.onException(e);
}
}
}
4) Release camera after done, or after surface is disposed.
public void releaseCam()
{
LoggingFacility.debug("Attempting to release camera",this);
if (camera!=null)
{
isTakingPictures = false;
camera.stopPreview();
isPreview = false;
camera.release();
camera = null;
LoggingFacility.debug("A camera connection has been released...",this);
}
}
In the 3rd code snippet, in the callback method, I'm invoking startPreview again, since after taking a picture the preview is disabled, and some smartphones require the preview to be running to take a picture. All of the above methods are part of a class extending SurfaceView and implementing SurfaceHolder.Callback, which is part of an activity.
SurfaceHolder.Callback is implemented as follows:
public void surfaceCreated(SurfaceHolder holder) {
initCam();
}
public void surfaceDestroyed(SurfaceHolder holder) {
releaseCam();
}
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
}
Constructor of class
CameraPreview(Context context) {
super(context);
this.ctx = context;
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
I was also considering another approach: to avoid takePicture() altogether, and instead register an onPreviewFrame callback which checks a flag indicating whether a picture has been requested; if so, convert the frame to a bitmap and use it for further processing. I tried this approach, but then got stuck with another problem: even if I register an empty callback, the GUI responds much slower.
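Roughly, the approach I tried looks like this (a simplified sketch, not my exact code; pictureRequested is a flag I set when a picture is requested, and I assume the default NV21 preview format):
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        if (!pictureRequested) return;       // keep the common path cheap
        pictureRequested = false;
        Camera.Size size = cam.getParameters().getPreviewSize();
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, out);
        byte[] jpeg = out.toByteArray();
        Bitmap bmp = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
        // hand bmp to a worker thread for further processing
    }
});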
For everyone who, like me, has problems using the Android camera API, please refer to this link. It seems the code from this sample works on the majority of smartphones.
final int PICTURE_TAKEN = 1;
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(new File(filename)));
startActivityForResult(intent, PICTURE_TAKEN);
This works for me; haven't had complaints so far.
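For completeness, here is a hedged sketch of picking up the result (assuming PICTURE_TAKEN and filename are fields rather than locals):
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == PICTURE_TAKEN && resultCode == RESULT_OK) {
        // The full-size image was written to the file passed via EXTRA_OUTPUT.
        Bitmap bmp = BitmapFactory.decodeFile(filename);
        // use bmp ...
    }
}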
Right now I have my ApplicationActivity, which is responsible for managing multiple views (GLSurfaceViews). Can/should I have all the views use a single "global" renderer?
Code:
public class ApplicationActivity extends Activity
{
private static final String TAG = ApplicationActivity.class.getSimpleName();
private final Stack<Screen> screens = new Stack<Screen>();
private GlRenderer glRenderer;
@Override
public void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
Log.d(TAG, "Main Activity Created");
setupGraphics();
ChangeScreen(new MainMenu(this, glRenderer)); //Creating a new Screen sets the renderer
}
private void setupGraphics()
{
requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN);
glRenderer = new GlRenderer(this);
}
public void Draw() //Is called by the glRenderer onDrawFrame() { mainActivity.Draw() }
{
}
}
It's the same activity switching between GLSurfaceViews. To my knowledge, setRenderer() sets the view's renderer and then starts the rendering thread (creating a new thread), but I don't want to recreate the thread every time I switch between views, as that may create potential problems.
So in the end I want a Renderer class just to keep graphics separate from business logic and such, but I don't know if using one Renderer is even possible without starting the thread again.
You can only use multiple views with the same Renderer if you properly switch between them with GLSurfaceView.onPause() / onResume().
My specific case:
@Override
protected void onPause() //Overrides onPause from Activity
{
surfaceViews.peek().onPause();
super.onPause();
}
So every time the activity pauses I have to pause the current view, and when the activity resumes, resume the view as well.
I also have a method called SetView which will either pause and remove the current view before changing to another, or just pause it and change to another; this is accomplished using a Stack:
public void SetView(View screen)
{
if (!screens.empty())
{
screens.peek().onPause();
screens.pop();
}
screens.push(screen);
setContentView(screens.peek());
}
Of course, because we are now using views instead of activities, we must override onBackPressed() to go back to previous views.
@Override
public void onBackPressed()
{
if (screens.size() == 1)
super.onBackPressed();
else
{
screens.pop();
setContentView(screens.peek());
screens.peek().onResume();
}
}
By doing new GLRenderer() you create a new instance of your class, so there is no problem with having the same renderer class used in different activities.
EDIT: I seem to have misunderstood your question. If you want many GL surfaces visible at once, then no, that is not possible. But that has nothing to do with reusing renderer code.