I’m building a program and I’m stuck on picking a random image from those already defined in the program.
Here’s the code:
Image Opic = new Image(getClass().getResourceAsStream("Resource/O1.png"));
Image Xpic = new Image(getClass().getResourceAsStream("Resource/X1.png"));
Image PlayerPic = new Random[Opic,Xpic];
Image AiPic = new Random[Opic,Xpic];
Image Opic = new Image(getClass().getResourceAsStream("Resource/O1.png"));
Image Xpic = new Image(getClass().getResourceAsStream("Resource/X1.png"));
Image PlayerPic = Math.random() > 0.5 ? Opic : Xpic;
Not sure what language you are actually coding in, since it doesn't look like JavaScript (and the Random[...] syntax isn't valid Java either). But this is just a demonstration in Java of how this problem can be approached. You can almost certainly find a similar method in any language.
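If you later have more than two images, the same idea generalizes with java.util.Random over an array (just a sketch; the resource paths are copied from your question):

import java.util.Random;

Image[] pics = {
    new Image(getClass().getResourceAsStream("Resource/O1.png")),
    new Image(getClass().getResourceAsStream("Resource/X1.png"))
};
Random rng = new Random();
Image playerPic = pics[rng.nextInt(pics.length)]; // uniform random pick
Image aiPic = pics[rng.nextInt(pics.length)];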
I was trying to create and save an image with AnyChart for an Android application. The image is created without being rendered on screen, since the idea is to use the data from a vector to generate the image and save it to the phone's memory. I am using Java for programming the Android application.
I have been testing the saveAsPng() and saveAsSVG() functions from the AnyChart library, but with no success: I don't receive an error, but I don't get the image either, and I don't know exactly how to proceed.
I tried to follow this guideline (https://docs.anychart.com/Common_Settings/Server-Side_Rendering) but, as I said, I haven't succeeded in generating and saving the file.
This is the code that I have been using:
private class CustomDataEntry2 extends ValueDataEntry {
    CustomDataEntry2(double x, Number value) {
        super(x, value);
    }
}

List<DataEntry> dataLateral = new ArrayList<>();
for (int p = 0; p < DataX.size(); p++) {
    dataLateral.add(new CustomDataEntry2(DataY.get(p), DataX.get(p)));
}

AnyChartView anyChartViewLateral = new com.anychart.AnyChartView(this);
APIlib.getInstance().setActiveAnyChartView(anyChartViewLateral);
anyChartViewLateral.setProgressBar(new ProgressBar(this));

Polar polarLateralImage = AnyChart.polar();
polarLateralImage.startAngle(90);

Linear xScaleLateral = Linear.instantiate();
xScaleLateral.minimum(-180).maximum(180);
xScaleLateral.ticks().interval(90);
polarLateralImage.xScale(xScaleLateral);

Line polarSeriesLine = polarLateralImage.line(dataLateral);
polarSeriesLine.closed(false).markers(true);
polarSeriesLine.markers().size(3);

polarLateralImage.autoRedraw(true);
anyChartViewLateral.setChart(polarLateralImage);
polarLateralImage.saveAsPng(400, 400, 0.3, "testImage.png");
Could anyone tell me what I am missing or what I am doing wrong? I know I might be asking too much, but if possible, I would be happy if someone could provide a code snippet that works.
Thank you very much!
Unfortunately, the current version of the AnyChart Android native library doesn't support exporting features; they have not been implemented yet.
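As a possible workaround (not AnyChart's API, just a generic Android technique), you can capture whatever the AnyChartView has rendered into a Bitmap and save that as a PNG yourself. Note this assumes the view has actually been laid out and drawn on screen, which may not match your render-nothing use case:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

// Draw an already-laid-out view into a bitmap and write it out as a PNG.
private void saveViewAsPng(View view, File outFile) throws IOException {
    Bitmap bitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
            Bitmap.Config.ARGB_8888);
    view.draw(new Canvas(bitmap));
    try (FileOutputStream out = new FileOutputStream(outFile)) {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
    }
}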
Struggling with Android MediaCodec, I'm looking for a straightforward process to change the resolution of a video file in Android.
For now I'm trying a single-threaded transcoding method that does all the work step by step so I can understand it well. At a high level it looks as follows:
public void TranscodeVideo()
{
    // Extract
    MediaTrack[] tracks = ExtractTracks(InputPath);
    // Decode
    MediaTrack videoTrack = tracks.Where(o => o.IsVideo).FirstOrDefault();
    MediaTrack rawVideoTrack = DecodeTrack(videoTrack);
    // Edit?
    // ResizeVideoTrack(rawVideoTrack);
    // Encode
    MediaFormat newFormat = MediaHelper.CreateVideoOutputFormat(videoTrack.Format);
    MediaTrack encodedVideoTrack = EncodeTrack(rawVideoTrack, newFormat);
    // Mux
    encodedVideoTrack.Index = videoTrack.Index;
    tracks[Array.IndexOf(tracks, videoTrack)] = encodedVideoTrack;
    MuxTracks(OutputPath, tracks);
}
Extraction works fine, returning a track with audio only and a track with video only. Muxing works fine, combining the two previous tracks again. Decoding works, but I don't know how to check it; the raw frames on the track weigh much more than the originals, so I assume it's right.
Problem
The encoder input buffer size is smaller than the raw frame size, and is also related to the configured encoding format, so I assume I need to resize the frames in some way, but I can't find anything useful. Am I correct on this? Am I missing something? What is the way to go about resizing raw video frames? Any help? :S
PS
Maybe you will notice that I'm using C# (Xamarin.Android) for more fun, but the underlying API is of course Java.
I'm using ByteBuffers, not Surfaces, because it seems easier. Using Surfaces will be the next step; any advice is welcome.
I know that the single-threaded process is highly inefficient, but it keeps things simple. Connecting the decoder output buffer to the encoder input buffer will be another next step.
I dug through the PhilLab, Grafika and Bigflake examples, but nothing seems to be very useful for me.
I'm avoiding the use of ffmpeg on Android.
Thank you everyone for your time.
Going off of the comment above, here is how to implement libVLC.
Add this to your app root's build.gradle:
allprojects {
    repositories {
        ...
        maven {
            url 'https://jitpack.io'
        }
    }
}
Add this to your dependent app's build.gradle:
dependencies {
    ...
    implementation 'com.github.masterwok:libvlc-android-sdk:3.0.13'
}
Here is an example of loading an RTSP stream in an activity:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.camera_stream_layout);

    // Get URL
    this.rtspUrl = getIntent().getExtras().getString(RTSP_URL);
    Log.d(TAG, "Playing back " + rtspUrl);

    this.mSurface = findViewById(R.id.camera_surface);
    this.holder = this.mSurface.getHolder();

    ArrayList<String> options = new ArrayList<>();
    options.add("-vvv"); // verbosity
    // Add VLC transcoder options here
    this.libvlc = new LibVLC(getApplicationContext(), options);
    this.holder.setKeepScreenOn(true);
    //this.holder.setFixedSize();

    // Create media player
    this.mMediaPlayer = new MediaPlayer(this.libvlc);
    this.mMediaPlayer.setEventListener(this.mPlayerListener);

    // Set up video output
    final IVLCVout vout = this.mMediaPlayer.getVLCVout();
    vout.setVideoView(this.mSurface);

    // Size the video to fit the app screen
    DisplayMetrics displayMetrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
    ViewGroup.LayoutParams videoParams = this.mSurface.getLayoutParams();
    videoParams.width = displayMetrics.widthPixels;
    videoParams.height = displayMetrics.heightPixels;
    vout.setWindowSize(videoParams.width, videoParams.height);
    vout.addCallback(this);
    vout.attachViews();

    final Media m = new Media(this.libvlc, Uri.parse(this.rtspUrl));
    // Use m.addOption("...") to add VLC transcoder options
    this.mMediaPlayer.setMedia(m);
    this.mMediaPlayer.play();
}
Here is the documentation for VLC transcoder options:
https://wiki.videolan.org/Documentation:Streaming_HowTo_New/
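As an illustration (the option strings follow the syntax from the wiki page above and are a starting point, not tested values), transcoder options are attached to the Media object before playback:

// Hypothetical example: transcode the stream to H.264/MP3 while displaying it.
final Media m = new Media(this.libvlc, Uri.parse(this.rtspUrl));
m.addOption(":sout=#transcode{vcodec=h264,vb=800,acodec=mp3,ab=128}:display");
m.addOption(":network-caching=300"); // buffering in ms, optional
this.mMediaPlayer.setMedia(m);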
You are right, the input buffer size of the encoder is smaller because it expects input of the specified dimensions. The encoder only encodes, like the name suggests.
I read your question as more of a "why" than a "how" question, so I'll only point you to where you'll find the "why"s.
The decoded frame is a YUV image (I suggest quickly skimming through the Wikipedia article), usually NV21 if I'm not mistaken, but it might differ from device to device. To resize it, I suggest you use a library, as every plane of the image needs to be scaled down differently, and a library usually takes care of filtering. Check out libYUV; a rough scaling sketch follows below. If you are interested in the actual resizing algorithms, check out this, and for implementations, this.
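To make the "every plane scales differently" point concrete, here is a rough nearest-neighbour sketch for NV21 (illustration only, no filtering; in practice let libYUV do this):

// NV21: full-resolution Y plane, followed by one interleaved V/U plane
// subsampled 2x2. Nearest-neighbour only; real code should filter (libYUV).
static byte[] scaleNv21(byte[] src, int sw, int sh, int dw, int dh) {
    byte[] dst = new byte[dw * dh * 3 / 2];
    // Y plane: one byte per pixel, full resolution
    for (int y = 0; y < dh; y++) {
        int sy = y * sh / dh;
        for (int x = 0; x < dw; x++) {
            dst[y * dw + x] = src[sy * sw + x * sw / dw];
        }
    }
    // VU plane: one interleaved V,U pair per 2x2 block of pixels
    int srcUv = sw * sh;
    int dstUv = dw * dh;
    for (int y = 0; y < dh / 2; y++) {
        int sy = y * (sh / 2) / (dh / 2);
        for (int x = 0; x < dw / 2; x++) {
            int sx = x * (sw / 2) / (dw / 2);
            dst[dstUv + y * dw + 2 * x] = src[srcUv + sy * sw + 2 * sx];         // V
            dst[dstUv + y * dw + 2 * x + 1] = src[srcUv + sy * sw + 2 * sx + 1]; // U
        }
    }
    return dst;
}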
If you are not required to handle the decoding and encoding with ByteBuffers, I suggest using a Surface, as you already mentioned. It has multiple benefits over decoding to ByteBuffers:
1. It is more memory efficient, as there is no copy between the native buffer and an app-allocated buffer; the native buffers are simply getting swapped from and to the surface.
2. If you plan to render the frame, be it for resizing or displaying, it can be done by the device's graphics processor. On how to do that, check out BigFlake's DecodeEditEncode test; a rough sketch of the wiring follows below.
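As a minimal sketch of the surface-to-surface path (assuming H.264 output; dstWidth, dstHeight, srcMime and srcFormat stand in for your extracted track's values), the encoder's input surface is handed straight to the decoder:

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

MediaFormat outFormat = MediaFormat.createVideoFormat("video/avc", dstWidth, dstHeight);
outFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
outFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
outFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
outFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(outFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // must be called after configure(), before start()

MediaCodec decoder = MediaCodec.createDecoderByType(srcMime);
// Rendering the decoder into a surface avoids the ByteBuffer copy; the actual
// scaling/editing happens in a GL stage in between (see BigFlake's DecodeEditEncodeTest).
decoder.configure(srcFormat, inputSurface, null, 0);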
I hope this answers some of your questions.
I came across functions in the OpenIMAJ library (LuoTangSubjectRegion and AchantaSaliency) that I would love to use; however, the problem is that Java is far from my first language. So I wanted to ask if somebody could help me with a simple piece of code that reads in an image, computes its saliency map and saves that saliency map.
Cheers.
Here is a sample. I am not sure it works, but I suspect it is closer than what you have.
I am not a Maven expert. Apparently, you need Maven to download the library: http://openimaj.org/UseLibrary.html. Sadly, this means I can't test this sample.
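For what it's worth, a dependency along these lines should pull OpenIMAJ in (the artifact name and version here are my guesses; verify them against the UseLibrary page above):

<!-- pom.xml: artifactId and version are assumptions; check openimaj.org -->
<dependency>
    <groupId>org.openimaj</groupId>
    <artifactId>image-feature-extraction</artifactId>
    <version>1.3.10</version>
</dependency>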
Good luck; for more code samples, see http://openimaj.org/tutorial/processing-your-first-image.html.
import java.io.File;
import java.io.IOException;

import org.openimaj.image.DisplayUtilities;
import org.openimaj.image.FImage;
import org.openimaj.image.ImageUtilities;
import org.openimaj.image.MBFImage;
import org.openimaj.image.saliency.AchantaSaliency;

public class SampleImage {
    public static void main(String[] args) throws IOException {
        // Read image in
        MBFImage image = ImageUtilities.readMBF(new File("c:\\file.jpg"));
        // Print out some information
        System.out.println(image.colourSpace);
        // Create the object that performs the work
        AchantaSaliency test = new AchantaSaliency();
        // Compute the saliency map
        test.analyseImage(image);
        FImage newImage = test.getSaliencyMap();
        // Display the original image
        DisplayUtilities.display(image);
        // Display the saliency map
        DisplayUtilities.display(newImage);
        // Save the saliency map to a file
        ImageUtilities.write(newImage, new File("C:\\test_output.jpg"));
    }
}
I need to change the JDialog title bar icon. By default it uses the Java coffee-cup image.
I have searched the internet and tried many code snippets:
1. Image im = Toolkit.getDefaultToolkit().getImage("/org/qmon/generate/Images/JDialog -2.ico");
   dialog.setIconImage(im);
2. Toolkit kit = Toolkit.getDefaultToolkit();
   Image img = kit.getImage("/org/qmon/generate/Images/Create File Tag-16x16.png");
   dialog.setIconImage(img);
Nothing works properly. Kindly help me. Thanks in advance.
Firstly, ICO is not a supported image format in Java.
The likely reason you're having issues with the second approach is that getImage is expecting a file reference, and the image you seem to be referencing looks like it's embedded (stored within your application).
Try using something more like...
Image img = kit.getImage (getClass().getResource("/org/qmon/generate/Images/Create File Tag-16x16.png"));
Instead.
Personally, I prefer ImageIO.read, as it throws an IOException when something goes wrong...
Image img = ImageIO.read(getClass().getResource("/org/qmon/generate/Images/Create File Tag-16x16.png"));
But that's me...
You should also consider taking a look at Convert List<BufferedImage> to Image, which demonstrates the use of an ICO file (via a 3rd-party API) and the setIconImages method.
Image image = ImageIO.read(new URL(
        "http://www.gravatar.com/avatar/f1d58f7932b6ae8027c4e1d84f440ffe?s=128&d=identicon&r=PG"));
dialog.setIconImage(image);
dialog.setVisible(true);
I am using this in my application and it works fine:
java.net.URL url = ClassLoader.getSystemResource("res/java.png");
ImageIcon icon = new ImageIcon(url);
JOptionPane.showMessageDialog(null, jep, "UroSync",JOptionPane.INFORMATION_MESSAGE, icon);
To add to what MadProgrammer has said: I ran into this problem and solved it when instantiating a JDialog by using the static Toolkit method getDefaultToolkit().getImage(URL url).
JDialog dialog = new JDialog();
dialog.setIconImage(Toolkit.getDefaultToolkit().getImage(MyMainClass.class.getResource("/myIcon.png")));
For this to work, you need to have added the image to the build path of the project beforehand.
I am trying to add a map to my libgdx app as a proof of concept. It seems that no matter how I make a packfile, the com.badlogic.gdx.graphics.g2d.tiled.TileAtlas constructor TileAtlas(TiledMap map, FileHandle inputDir) will not read it correctly. My tile map is simple and has only 2 tiles, and both the external GUI and the internal system will generate a packed file.
Here's the issue: either I name the packfile to match one of my images, to satisfy line 2 below, or the method errors out. If I add 2 packfiles, one for each image name in my tile set, I find the atlas isn't constructed correctly in memory. What am I missing here? Should there only ever be one tile image in a tilemap?
Code from Libgdx:
for (TileSet set : map.tileSets) {
    FileHandle packfile = getRelativeFileHandle(inputDir, removeExtension(set.imageName) + " packfile");
    TextureAtlas textureAtlas = new TextureAtlas(packfile, packfile.parent(), false);
    Array<AtlasRegion> atlasRegions = textureAtlas.findRegions(removeExtension(removePath(set.imageName)));
    for (AtlasRegion reg : atlasRegions) {
        regionsMap.put(reg.index + set.firstgid, reg);
        if (!textures.contains(reg.getTexture())) {
            textures.add(reg.getTexture());
        }
    }
}
com.badlogic.gdx.graphics.g2d.tiled --> It looks like you're using the old Tiled API. I don't even think that package exists anymore, so you should probably download a newer version.
Check out this blog article. I haven't used the new API yet, but at a quick glance it looks much easier to load maps with; see the sketch below.
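For reference, loading a Tiled map with the newer maps API looks roughly like this (a sketch I haven't tested; "level1.tmx" is a hypothetical asset name):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

public class MapScreen extends ApplicationAdapter {
    private TiledMap map;
    private OrthogonalTiledMapRenderer renderer;
    private OrthographicCamera camera;

    @Override
    public void create() {
        // The loader reads the .tmx and its tileset images directly; no packfile needed.
        map = new TmxMapLoader().load("level1.tmx"); // hypothetical asset name
        renderer = new OrthogonalTiledMapRenderer(map);
        camera = new OrthographicCamera();
        camera.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    }

    @Override
    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        camera.update();
        renderer.setView(camera);
        renderer.render();
    }
}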