I have a fullscreen Java app that will run on an 8-monitor digital-signage-type display on a Windows 7 machine. I need to be able to display content on specific physical monitors. Ideally I would like the displays ordered 1-8 in Display Properties -> Settings; however, many attempts at unplugging/replugging and reordering have failed to get the physical monitors to appear in any deterministic order via Display Properties -> Settings. I can reorder them fine, but when my Java program retrieves information on the displays it is not in the layout/order that Windows has them configured in.
The GraphicsDevice ID strings come back as Device0 and Device1, but these do not match the Windows display numbering as seen in the Display Properties. For instance, if the layout is 7,4,1,2,3,4,5,6 I still get back Device0, Device1, ... in which Device0 corresponds to identified screen 1 (not 7, which is the first screen on the left). Is there a way to query the OS to determine what layout the displays are in, and/or some other technique to display fullscreen on a specific physical monitor?
You can get the bounds of the screens relative to the big virtual desktop:
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
for (GraphicsDevice gd : ge.getScreenDevices()) {
    Rectangle bounds = gd.getDefaultConfiguration().getBounds();
    System.out.println(bounds.toString());
}
For my setup this gives:
java.awt.Rectangle[x=0,y=0,width=1280,height=1024]
java.awt.Rectangle[x=1280,y=304,width=1280,height=720]
You can use this information to determine their order. E.g., if you are absolutely sure that your monitors are laid out in a nice grid, you can go fullscreen on the upper-right monitor like this:
GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice gdUpperRight = null;
Rectangle bUpperRight = null;
for (GraphicsDevice gd : ge.getScreenDevices()) {
    Rectangle b = gd.getDefaultConfiguration().getBounds();
    // upper-right = largest x; on a tie, smallest y (the y axis grows downwards)
    if (bUpperRight == null || b.x > bUpperRight.x
            || (b.x == bUpperRight.x && b.y < bUpperRight.y)) {
        bUpperRight = b;
        gdUpperRight = gd;
    }
}
gdUpperRight.setFullScreenWindow(myFrame);
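If the goal is simply a deterministic mapping from physical position to GraphicsDevice (as in the original eight-monitor question), here is a minimal sketch, assuming the monitors sit in a single horizontal row as signage panels usually do; the class name is just illustrative:

import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.util.Arrays;
import java.util.Comparator;
import javax.swing.JFrame;

public class LeftToRightScreens {
    public static void main(String[] args) {
        GraphicsDevice[] screens =
                GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices();
        // Sort by each screen's x position on the virtual desktop so index 0 is the
        // leftmost physical monitor, regardless of the Device0/Device1 naming.
        Arrays.sort(screens, Comparator.comparingInt(
                (GraphicsDevice gd) -> gd.getDefaultConfiguration().getBounds().x));

        JFrame frame = new JFrame();
        frame.setUndecorated(true);
        screens[0].setFullScreenWindow(frame); // fullscreen on the leftmost monitor
    }
}

For a grid layout you would sort by y first and then x, but the idea is the same: derive the order from the bounds, not from the device names.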
I was looking for a way to properly choose the monitor I want to fullscreen on based on the position of the window prior to fullscreening.
I looked online for ages but couldn't find anything, so I ended up trying a bunch of things until I got something working.
I figured someone will eventually try to look this problem up, so I might as well share my solution.
What I ended up doing is getting the virtual-screen coordinates of the center of the window using LWJGL2's Display class, like so:
int x = Display.getX() + Display.getWidth() / 2,
    y = Display.getY() + Display.getHeight() / 2;
I then used AWT to get all available monitors:
GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices()
I iterated through them and got their virtual bounds (and applied any DPI scaling they might have):
Rectangle bounds = screenDevice.getDefaultConfiguration()
        .getDefaultTransform()
        .createTransformedShape(screenDevice.getDefaultConfiguration().getBounds())
        .getBounds();
EDIT: Slightly altered line so it can support Windows' DPI scaling properly.
If the bounds contained the center of the window, it means that most of the window is probably within that monitor:
if (bounds.contains(x, y))
    return bounds; // getMonitorFromWindow()
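Putting those pieces together, here is a rough sketch of the getMonitorFromWindow() helper used below (imports: java.awt.* and org.lwjgl.opengl.Display; the primary-monitor fallback is my own assumption, not part of the original post):

public static Rectangle getMonitorFromWindow() {
    // Virtual-screen coordinates of the window's center (LWJGL2 Display).
    int x = Display.getX() + Display.getWidth() / 2;
    int y = Display.getY() + Display.getHeight() / 2;

    for (GraphicsDevice device :
            GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices()) {
        GraphicsConfiguration config = device.getDefaultConfiguration();
        // Apply the monitor's DPI transform to its bounds, as in the line above.
        Rectangle bounds = config.getDefaultTransform()
                .createTransformedShape(config.getBounds())
                .getBounds();
        if (bounds.contains(x, y)) {
            return bounds;
        }
    }
    // Fallback (assumption): the primary monitor's bounds.
    return GraphicsEnvironment.getLocalGraphicsEnvironment()
            .getDefaultScreenDevice().getDefaultConfiguration().getBounds();
}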
Then to toggle between windowed borderless fullscreen and normal windowed in LibGDX I did the following:
// config is the LwjglApplicationConfiguration of the application
// upon changing using alt+enter
if (fullscreen) {
    config.resizable = false;
    Gdx.graphics.setUndecorated(true);
    Rectangle monitor = getMonitorFromWindow();
    // set to full screen in current monitor
    Gdx.graphics.setWindowedMode(monitor.width, monitor.height);
    Display.setLocation(monitor.x, monitor.y);
} else {
    config.resizable = true;
    Gdx.graphics.setUndecorated(false);
    Rectangle monitor = getMonitorFromWindow();
    // set to windowed centered in current monitor
    Gdx.graphics.setWindowedMode((int) (monitor.width * 0.8f), (int) (monitor.height * 0.8f));
    Display.setLocation(monitor.x + (int) (monitor.width * 0.1f),
            monitor.y + (int) (monitor.height * 0.1f));
}
I hope someone finds this useful.
This is my code:
shell.setFullScreen(true);
shell.setMaximized(true);
shell.setText("SD Cyber Cafe");
shell.setLayout(new FormLayout());
Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
Image oriImage = new Image(display, "C:\\Users\\LAPTOP-SYAMSOUL\\Desktop\\lockscreen_app\\main_bg.jpeg"); //should get from database
//System.out.println(screenSize.width);
Image newImage = new Image(display, oriImage.getImageData(100).scaledTo(screenSize.width, screenSize.height));
shell.setBackgroundImage(newImage);
When I run the app via Eclipse, it works fine...
But after I exported it to a runnable JAR, the background image is not scaled... why?
This is what I expected:
(screenshot omitted)
But currently it appears like below (I don't want this):
(screenshot omitted)
It is hard to say for sure from this code, but Toolkit is a Swing/AWT class and should not be used with SWT. It may well be giving the wrong values.
Get the primary display size using something like:
Rectangle displayArea = shell.getDisplay().getPrimaryMonitor().getBounds();
which tells you about the main (primary) monitor or
Rectangle displayArea = shell.getMonitor().getBounds();
which tells you about the monitor on which the shell will appear (may be different if there are several monitors).
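For example, a minimal rework of the snippet above using the shell's monitor bounds instead of Toolkit (the image path is shortened for illustration):

// Scale the background to the monitor the shell is on, using SWT's own bounds.
Rectangle displayArea = shell.getMonitor().getBounds(); // org.eclipse.swt.graphics.Rectangle
Image oriImage = new Image(display, "main_bg.jpeg"); // illustrative path
Image newImage = new Image(display,
        oriImage.getImageData().scaledTo(displayArea.width, displayArea.height));
shell.setBackgroundImage(newImage);
oriImage.dispose(); // the unscaled original is no longer needed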
I am new to Java and trying to determine how GraphicsDevice.getDefaultConfiguration() differs from what GraphicsDevice.getConfigurations() returns.
Some users are getting a serious slowdown when they start our application. I have found one machine that reproduces the issue, which I think is caused by this: https://bugs.openjdk.java.net/browse/JDK-6477756
This led me to see if I can avoid using GraphicsDevice.getConfigurations(). I just need to know the bounds of the desktop and I see two ways to do it. They seem to do the same thing in all my testing but I hope someone knowledgeable can tell me if there will be different behavior in some situations.
Method 1
GraphicsEnvironment graphicsEnv = GraphicsEnvironment.getLocalGraphicsEnvironment();
Rectangle result = new Rectangle();
for (GraphicsDevice graphicsDevice : graphicsEnv.getScreenDevices()) {
    for (GraphicsConfiguration graphicsConfiguration : graphicsDevice.getConfigurations()) {
        Rectangle2D.union(result, graphicsConfiguration.getBounds(), result);
    }
}
Method 2
GraphicsEnvironment graphicsEnv = GraphicsEnvironment.getLocalGraphicsEnvironment();
Rectangle result = new Rectangle();
for (GraphicsDevice graphicsDevice : graphicsEnv.getScreenDevices()) {
    Rectangle2D.union(result, graphicsDevice.getDefaultConfiguration().getBounds(), result);
}
Method 2 is much faster on the machine that reproduces the users' slowdown. Is there any reason not to use it instead?
I am trying to write a JFrame app which I want fullscreen (and by this I do not mean maximized); however, the application UI is very small (about 500x600). Is there a way I could set the resolution of a fullscreen JFrame to 1024x768 that will work on Linux and Windows?
I was simply using this code:
setUndecorated(true);
Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
setBounds(0,0,screenSize.width, screenSize.height);
However I could not find a way to modify the resolution and it still displays the task panel.
I am developing in eclipse on Linux Mint 14 KDE.
Thanks in advance!
EDIT: I got a little further using this code:
setUndecorated(true);
setResizable(false);
setAlwaysOnTop(true);
setVisible(true);
GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment();
DisplayMode dm = new DisplayMode(1024, 768, 16, DisplayMode.REFRESH_RATE_UNKNOWN);
vc = env.getDefaultScreenDevice(); // vc is a GraphicsDevice field of this JFrame subclass
vc.setFullScreenWindow(this);
if (dm != null && vc.isDisplayChangeSupported()) {
    try {
        vc.setDisplayMode(dm);
    } catch (Exception e) {
        System.exit(0);
    }
}
That code was inside the constructor of my class that extends JFrame. However, it does not change the resolution; it just runs at the default 1080p.
is there a possible way I could set the resolution of a fullscreen
JFrame to 1024x768 that will work on Linux and Windows.
If you want your JFrame to be shown on the entire screen then you can use this:
setUndecorated(true);
setExtendedState(JFrame.MAXIMIZED_BOTH);
toFront();
Take a look at the Full Screen Exclusive Mode API for full details, and take special note of Display Mode.
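If you actually want the screen to switch to 1024x768 rather than just filling the current resolution, here is a minimal sketch using full-screen exclusive mode. Picking one of the device's reported modes (instead of constructing a DisplayMode by hand) is a guess at the fix, not a confirmed cause of the original problem:

import java.awt.DisplayMode;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;

public class FullScreen1024 {
    public static void main(String[] args) {
        JFrame frame = new JFrame();
        frame.setUndecorated(true);
        frame.setResizable(false);

        GraphicsDevice device = GraphicsEnvironment
                .getLocalGraphicsEnvironment().getDefaultScreenDevice();
        device.setFullScreenWindow(frame); // enter exclusive mode first

        if (device.isDisplayChangeSupported()) {
            // Only use a 1024x768 mode the device actually reports as supported.
            for (DisplayMode mode : device.getDisplayModes()) {
                if (mode.getWidth() == 1024 && mode.getHeight() == 768) {
                    device.setDisplayMode(mode);
                    break;
                }
            }
        }
    }
}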
The code below demonstrates that the OpenCV framework is loaded in my C code; it renders the text "Police watching". This is just to show that it works very smoothly and is clean code to write.
Target:
My webcam is connected to a USB port. I would like to capture the live webcam image and match it against a local file (/tmp/myface.png); if the live webcam image matches myface.png, it should show the text "Police watching".
My questions to fix:
1) How can I capture my webcam in the following code?
2) When the webcam is captured, how can I load the file and check whether it matches, showing that text only on a match?
#include "cv.h"
#include "highgui.h"
int main()
{
CvPoint pt = cvPoint( 620/4, 440/2 ); // width, height
IplImage* hw = cvCreateImage(cvSize(620, 440), 8,3); // width, height
CvFont font; // cvSet(hw,cvScalar(0,0,0)); // optional
cvInitFont (&font, CV_FONT_HERSHEY_COMPLEX, 1.0, 1.0, 0, 1, CV_AA);
cvPutText (hw, "Police watching", pt, &font, CV_RGB(150, 0, 150));
cvShowImage("Police watching", hw); //cvNamedWindow("Police watching", 0); // optional
cvWaitKey (0);
}
Note: When this model works, I will then convert it into a JNI Java model.
Capturing a video frame is simple; just follow this example. The essential part is:
IplImage *img = cvLoadImage( argv[1], CV_LOAD_IMAGE_COLOR );
CvCapture* capture = cvCaptureFromCAM( CV_CAP_ANY );
while ( 1 ) {
    IplImage* frame = cvQueryFrame( capture );
    //match(img,frame);
}
cvReleaseCapture( &capture );
The second part is probably much harder and depends on what exactly you are trying to do. If you want to simply compare images, you can use cvNorm. If you want face detection or face recognition, you really need to know what you are doing.
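Since the plan is to move this to Java via JNI anyway, here is a rough sketch of that norm-based comparison using the OpenCV Java bindings (org.opencv, OpenCV 3.x or later) rather than the C API shown above; the L2-norm threshold is a made-up placeholder, and a plain norm is a very crude measure of similarity:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

public class FaceMatchSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat reference = Imgcodecs.imread("/tmp/myface.png");
        if (reference.empty()) {
            System.err.println("Could not load /tmp/myface.png");
            return;
        }

        VideoCapture capture = new VideoCapture(0); // first webcam
        Mat frame = new Mat();
        Mat resized = new Mat();
        while (capture.read(frame)) {
            // Bring the frame to the reference size so the norm is well defined.
            Imgproc.resize(frame, resized, reference.size());
            double diff = Core.norm(reference, resized, Core.NORM_L2);
            if (diff < 10000) { // threshold is a made-up placeholder
                System.out.println("Police watching");
            }
        }
        capture.release();
    }
}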
This is how I get a webcam feed... (the code is in Python but is easily translated)
# create capture device
device = 0  # assume we want first device
capture = cv.CreateCameraCapture(device)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 640)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 480)

# check if capture device is OK
if not capture:
    print "Error opening capture device"
    sys.exit(1)

# capture the current frame (this part belongs inside the main capture loop)
frame = cv.QueryFrame(capture)
if frame is None:
    break

# mirror
cv.Flip(frame, None, 1)

# Do face detection here
I think you'll have an incredibly hard time trying to match a face from a single file to a live video stream. Look into cv.HaarDetectObjects for some cool feature detection algorithms.