I'm developing an Android application, but I have some doubts about the feasibility of my project.
I have to implement a custom layout composed of an ImageView that shows an image, specifically a VectorDrawable.
Overlapping the ImageView there is a SurfaceView that:
captures the coordinates of every touch;
draws a Bitmap (at a certain position) every time I touch the screen.
The purpose is to show a background image and use it as a reference: each time the user touches the screen, a marker must be inserted on the SurfaceView. This technique simulates placing a marker on an image.
The question is:
Is there a better method to do this?
I have already implemented it and the idea works, but I have some limitations with the SurfaceView (for example, I can't insert it in a ScrollView).
And lastly:
Taking into consideration the reasoning made so far, and assuming I have an ImageView that shows a VectorDrawable, is it possible to create a function that magnifies and shrinks the image (the VectorDrawable)?
P.S. I apologize for asking two questions, but they are closely related; I thought it would be foolish to open two threads.
Task 1: Your task is doable with a SurfaceView, but I'm assuming that after you draw your image you want it to stay; if that is the case, you may have to keep track of each image separately. If your task is just drawing images, you can also override a plain View, which can do the same job while also providing transparency and the basic layout behaviour. The difference between View and SurfaceView is performance: SurfaceView is fairly low level, with a double buffer, and gives you more control. Also, if the number of images is small, you can override a FrameLayout and spawn ImageViews. It's not very efficient, but it is easier to implement.
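As a minimal sketch of the plain-View alternative: record each touch point and redraw a marker bitmap at every stored point. The class name MarkerView and the constructor taking a Bitmap are my own illustration, not from the question.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.PointF;
import android.view.MotionEvent;
import android.view.View;
import java.util.ArrayList;
import java.util.List;

public class MarkerView extends View {
    private final List<PointF> markers = new ArrayList<>();
    private final Bitmap markerBitmap;

    public MarkerView(Context context, Bitmap markerBitmap) {
        super(context);
        this.markerBitmap = markerBitmap;
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            markers.add(new PointF(event.getX(), event.getY()));
            invalidate(); // request a redraw that includes the new marker
            return true;
        }
        return super.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Center each marker bitmap on its touch point.
        for (PointF p : markers) {
            canvas.drawBitmap(markerBitmap,
                    p.x - markerBitmap.getWidth() / 2f,
                    p.y - markerBitmap.getHeight() / 2f,
                    null);
        }
    }
}
```

Because this is a regular View with a transparent background by default, it can be stacked over the ImageView in a FrameLayout and, unlike a SurfaceView, placed inside a ScrollView.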
Task 2: You can scale an ImageView easily using setScaleX and setScaleY, but this may pixelate the image. Loading a large Bitmap and then drawing it on a custom view would be a better approach.
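For illustration, a tiny helper (my own, not from the answer) that applies this kind of scaling about the view's center:

```java
import android.widget.ImageView;

public final class ZoomHelper {
    // Scale an ImageView about its center; zoom > 1 magnifies, zoom < 1 shrinks.
    // Note: this scales the already-rasterized drawable, which is why a
    // VectorDrawable can look blurry when magnified this way.
    public static void setZoom(ImageView imageView, float zoom) {
        imageView.setPivotX(imageView.getWidth() / 2f);
        imageView.setPivotY(imageView.getHeight() / 2f);
        imageView.setScaleX(zoom);
        imageView.setScaleY(zoom);
    }
}
```

To keep a VectorDrawable sharp when zooming, an alternative is to enlarge the ImageView's layout size instead, so the drawable is re-rasterized at the larger bounds.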
Related
I have a general question about how to design dynamic UI elements in Android (with Java).
I would like to have a thermometer in my app that, depending on some external information, increases or decreases a red bar (shown below). How can I design something like this in Android?
I could, for example, design the thermometer without the red bar as a JPEG in some drawing program, and then implement the red bar in Android as an object that can be changed programmatically, while the 'rim' of the thermometer stays fixed.
The problem I see with this approach is that I believe it is extremely difficult to match the rim of the thermometer with the red bar across different screen sizes and resolutions. Do you have another suggestion for how I could do something like this? Is there maybe a library for this?
I'd appreciate every comment, as I have no experience whatsoever with designing such dynamic UI objects.
Updated Answer:
I have created a demo project on GitHub here. In this project, the Thermometer custom view draws the outer thermometer from an image file and the inner mercury view from code. For other customizations, you can look at the project by user kofigyan that I shared in my previous answer.
Previous Answer:
I think what you are looking for is creating Custom Views in Android.
You can extend the Android View class to create your own custom view. The View gives you a Canvas object on which you can draw the shape of a thermometer, and you can build the functionality to change the shape's state into your subclass itself. With a Paint object, you control how the drawn shape is painted.
I found this project on GitHub by the user kofigyan.
For a dynamic UI you can take two approaches:
Jetpack Compose (not discussed here, since it's still in beta)
LiveData and the observer pattern
With LiveData and the observer pattern, expose the temperature as a LiveData variable (in your ViewModel) and implement an Observer in your MainActivity that is triggered automatically whenever the temperature value changes.
For the output graphic, you can use a Canvas, draw on it, and tie the drawing to the temperature variable.
You could, for example, use the Canvas.drawRect function to create a short or long rectangle depending on the temperature (the part from the circle to the top of the thermometer can be the rectangle).
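The drawRect idea can be sketched as a custom View; the class name, the 0–100 range, and the setter are my assumptions for illustration:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class ThermometerBarView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float temperature; // assumed range: 0..100

    public ThermometerBarView(Context context) {
        super(context);
        paint.setColor(Color.RED);
    }

    // Call this from the LiveData observer when the value changes.
    public void setTemperature(float temperature) {
        this.temperature = Math.max(0f, Math.min(100f, temperature));
        invalidate(); // redraw with the new bar height
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // The red bar grows upward from the bottom of the view.
        float barHeight = getHeight() * (temperature / 100f);
        canvas.drawRect(0f, getHeight() - barHeight, getWidth(), getHeight(), paint);
    }
}
```

Layering this view over the thermometer rim image in a FrameLayout avoids the pixel-matching problem, since the bar's geometry is computed from the view's own measured size.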
I've been following the Camera2 example (android/camera-samples/Camera2Video) to create an abstraction over the Camera2 library. The idea of my abstraction is to give the user the ability to use a Camera view from React Native, so it's just a <Camera> view which can be whatever size/aspect ratio.
While this works out of the box on iOS, I can't seem to get the preview on Android to display "what the camera sees" in the correct aspect ratio.
The official example from Android works like this:
They create a custom SurfaceView extension that should automatically fit to the correct aspect ratio (see: AutoFitSurfaceView)
They use that AutoFitSurfaceView in their layout
They add a listener to the AutoFitSurfaceView's Surface to find out when it has been created (source)
Once the Surface has been created, they call getPreviewOutputSize(...) to get the best matching camera preview size (e.g. so you don't stream 4k for a 1080p screen, that's wasted pixels)
Then they pass the best matching camera preview size to the AutoFitSurfaceView::setAspectRatio(...) function
By knowing the desired aspect ratio, the AutoFitSurfaceView should then automatically perform a center-crop transform in its onMeasure override
If you read the source code of their getPreviewOutputSize(...) function, you might notice that it uses a Display to find the best matching preview size. If I understood the code correctly, this only works if the camera preview (AutoFitSurfaceView) is exactly the same size as the device's screen. This is poorly designed, as there are lots of cases where that simply isn't true. In my case, the Camera has a bit of bottom spacing/margin, so it doesn't fill the screen and therefore ends up with an unusual resolution (1080x1585 on a 1080x1920 screen)
With that long introduction, here comes my question: How do I actually perform a correct center crop transform (aka scaleType = CENTER_CROP) on my SurfaceView? I've tried the following:
Set the size of my SurfaceView using SurfaceHolder::setFixedSize(...), but that didn't change anything at all
Remove their getPreviewOutputSize(...) stuff and simply use the highest resolution available
Use the Android View properties scaleX and scaleY to scale the view by the aspect-ratio difference of the view <-> camera input scaler size (this somewhat worked, but is giving me errors if I try to use high-speed capture: Surface size 1080x1585 is not part of the high speed supported size list [1280x720, 1920x1080])
Any help appreciated!
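For reference, the center-crop math itself is pure geometry; this sketch (my own, not from the Camera2 sample) computes the scale factors that would make a naively stretched preview fill the view while preserving the camera buffer's aspect ratio:

```java
public class CenterCrop {
    // Returns {scaleX, scaleY} to apply to a view that naively stretches a
    // bufferWidth x bufferHeight image into viewWidth x viewHeight. One axis
    // is over-scaled so the image fills the view and overflows on that axis,
    // which is what scaleType = CENTER_CROP does for an ImageView.
    public static float[] scaleFor(int viewWidth, int viewHeight,
                                   int bufferWidth, int bufferHeight) {
        float viewAspect = (float) viewWidth / viewHeight;
        float bufferAspect = (float) bufferWidth / bufferHeight;
        if (bufferAspect > viewAspect) {
            // Buffer is wider than the view: over-scale horizontally.
            return new float[] { bufferAspect / viewAspect, 1f };
        } else {
            // Buffer is taller than the view: over-scale vertically.
            return new float[] { 1f, viewAspect / bufferAspect };
        }
    }
}
```

Applying these factors via setScaleX/setScaleY on the SurfaceView (with the buffer size left at a supported capture resolution) avoids the "size not in the supported list" error, because the Surface itself keeps a supported size and only the on-screen transform changes.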
I'm starting to learn Java and I came across an exercise where I need to fade away one image and then display another image by fading it in.
My solution to this exercise is to have one ImageView, fade out the first image, switch the image source to the second image, and fade the ImageView back in so it displays the new image. Repeating this would display all the images I want, fading out and then in with a new image.
This is my code for the program:
public void fade(View view) {
    ImageView simpsonImageView = findViewById(R.id.simpsonsImageView);
    simpsonImageView.animate().alpha(0f).setDuration(3000);
    simpsonImageView.setImageResource(R.drawable.bart);
    simpsonImageView.animate().alpha(1f).setDuration(3000);
    simpsonImageView.animate().alpha(0f).setDuration(3000);
    simpsonImageView.setImageResource(R.drawable.lisa);
    simpsonImageView.animate().alpha(1f).setDuration(3000);
}
Now, I have seen that the tutor in the tutorial I'm learning from used a different ImageView for each image. I wanted to know which of these two solutions is correct, or at least acceptable. Or does it not really matter, and both solutions are fine?
There is one thing which you can only achieve when using two ImageViews: you can crossfade the two images so that the screen is never entirely empty.
In the context of your exercise however, you only want to exchange images sequentially.
From a performance point of view, one ImageView may be better than two because it will obviously take less memory and CPU time but I doubt that this will have a noticeable impact on modern devices.
So as long as you don't animate lots of pictures simultaneously (think of football teams instead of the Simpsons), both solutions are fine.
Please note that with your code as-is there will be no animation visible at all and the ImageView will appear to only show the second picture. This is because animate() triggers an animation but it does not wait until the animation is finished. So you need to work with an AnimationListener or use Handler.postDelayed() to swap pictures and start the next animation only as soon as the previous one is finished.
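One way to chain the animations is ViewPropertyAnimator's withEndAction, which runs a callback when the animation finishes. This sketch assumes it lives in the same Activity as the question's code and reuses its resource ids:

```java
import android.view.View;
import android.widget.ImageView;

public void fade(View view) {
    final ImageView simpsonImageView = findViewById(R.id.simpsonsImageView);
    simpsonImageView.setImageResource(R.drawable.bart);
    simpsonImageView.animate().alpha(0f).setDuration(3000).withEndAction(() -> {
        // Swap the picture only after the fade-out has actually finished.
        simpsonImageView.setImageResource(R.drawable.lisa);
        simpsonImageView.animate().alpha(1f).setDuration(3000);
    });
}
```

Each withEndAction can start the next fade, so an arbitrary sequence of images can be chained this way without a Handler.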
Here is a scenario I have researched a lot on Google, without success. I am going to develop a 2D game in which I want to use multiple backgrounds and translate them as the player moves forward. It must look like one background is near and translating/moving fast, while the other background is farther away and translating/moving a little more slowly. The nearer background has almost full intensity and the farther background has somewhat lower intensity; you could say they have different transparency levels. The question is: how can I accomplish this, and is it possible to use multiple backgrounds at all?
Any suggestion will be appreciated.
As an example, see the image of the BADLAND game below.
As far as I understand your question, you want to put two or more images one over another. If you are trying to overlap multiple backgrounds, then yes, it can be done easily.
What you need to do is use a FrameLayout:
"FrameLayout represents a simple layout for the user interface of
Android applications. It is usually used for displaying single Views
at a specific area on the screen or overlapping its child views."
You can then implement animations on the layers and translate them; different types of translation are available.
Here is more about using the FrameLayout: frameLayout1, framelayout2, and for animations and translation here are some links: link, link2, link3
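A rough sketch of the parallax effect described above, with two ImageView layers stacked in a FrameLayout (the method name, durations, and alpha values are my own placeholders):

```java
import android.animation.ObjectAnimator;
import android.animation.ValueAnimator;
import android.widget.ImageView;

public final class Parallax {
    // Translate two stacked background layers at different speeds and
    // intensities to fake depth: the far layer is dimmer and moves half
    // as far as the near layer over the same duration.
    public static void start(ImageView farLayer, ImageView nearLayer, float distance) {
        farLayer.setAlpha(0.5f);   // distant layer: lower intensity
        nearLayer.setAlpha(1.0f);  // near layer: full intensity

        ObjectAnimator far = ObjectAnimator.ofFloat(
                farLayer, "translationX", 0f, -distance / 2f);
        far.setDuration(10000);
        far.setRepeatCount(ValueAnimator.INFINITE);

        ObjectAnimator near = ObjectAnimator.ofFloat(
                nearLayer, "translationX", 0f, -distance);
        near.setDuration(10000);
        near.setRepeatCount(ValueAnimator.INFINITE);

        far.start();
        near.start();
    }
}
```

Using wider-than-screen images for each layer hides the edges as they scroll; for a real game loop, a SurfaceView-based renderer would give more control over frame timing.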
So, I'm trying to make the background to one of my apps look "futuristic." I thought of an idea to make the screen look almost transparent yet have views over it. So, it would look something like this:
(source: rackspacecloud.com)
I'm thinking that I can use the camera to capture the background of the phone (without taking a picture, just having the real-time view in the background) and then, if possible, place a semi-transparent, slightly blurred ImageView over that. Finally, on top of that I can place the other views, including the ImageButtons.
So, my question is: how would I go about doing this? I have searched but haven't found anything relevant. It must be possible; it's just a question of how. I don't expect you to give me all the code as an answer, just any ideas, links, or code that can point me in the right direction. It would be greatly appreciated! Thanks!
It shouldn't be too hard to get started. There are samples located here that show you how to open the camera and draw the preview onto a SurfaceView.
Since you want to overlay your other Views on top of the camera preview, just make sure that the SurfaceView you are using for the camera preview is contained inside a FrameLayout (docs located here). The FrameLayout lets you insert child views, and they are z-indexed in the order they are inserted. Therefore, if you insert your SurfaceView and then insert a Button of some kind, it will be z-ordered in front of the SurfaceView, and you can set its alpha value so that it is more or less transparent.
All that said, you will have to do some trial and error for how you want to position the views rendered in front of the camera preview, because a FrameLayout used on different screen sizes might position the Views differently. Also, I'd stay away from layering too many Views on top of one another, because the compositor has to figure out how to render all of it into a single window, which could impact performance.
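The layering can be sketched programmatically like this (assumed to run in an Activity's onCreate; the button text and alpha value are placeholders):

```java
import android.view.Gravity;
import android.view.SurfaceView;
import android.widget.Button;
import android.widget.FrameLayout;

// Children of a FrameLayout are z-ordered by insertion order, so the
// Button added second ends up drawn in front of the camera preview.
FrameLayout root = new FrameLayout(this);

SurfaceView preview = new SurfaceView(this); // target for the camera preview
root.addView(preview, new FrameLayout.LayoutParams(
        FrameLayout.LayoutParams.MATCH_PARENT,
        FrameLayout.LayoutParams.MATCH_PARENT));

Button overlay = new Button(this);
overlay.setText("Capture");
overlay.setAlpha(0.6f); // semi-transparent so the preview shows through
root.addView(overlay, new FrameLayout.LayoutParams(
        FrameLayout.LayoutParams.WRAP_CONTENT,
        FrameLayout.LayoutParams.WRAP_CONTENT,
        Gravity.CENTER));

setContentView(root);
```

The same structure can of course be declared in an XML layout; the key point is only the FrameLayout parent and the child order.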