I am looking to display an alphabet that is not covered by the character encoding or fonts currently applied to my view elements and Swing components. To display these new characters, would I simply have to make each one its own image and present the images adjacent to one another in a character-like pattern, or is there some way to import letters from pictures so they can be added to something like a text area when a button is pressed?
Basically, if I have a pictograph system, may I import these images as characters, or would I have to maintain them as pictures?
To give some specificity, picture Klingon writing or the Dragon language: something that is certainly not defined in the standard character sets.
Thank you!
The best way I can think of to do this is to create a font file (.ttf, .otf, etc.) representing your special alphabet and then follow the instructions in this answer here.
The downside is that there really isn't an easy way to create font files. It usually involves many hours of manually tracing symbols in a vector graphics editor and compiling them into a font file.
If your characters are already vector images, then most of the work will have already been done.
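For what it's worth, once such a font file exists, loading and applying it in Swing takes only a few lines; a minimal sketch, where klingon.ttf is just a placeholder file name:

import java.awt.Font;
import java.awt.GraphicsEnvironment;
import java.io.File;
import javax.swing.JTextArea;

public class CustomFontDemo {
    public static void main(String[] args) throws Exception {
        // Load the custom alphabet from a font file (placeholder path).
        Font base = Font.createFont(Font.TRUETYPE_FONT, new File("klingon.ttf"));

        // Registering it makes the font available by family name as well.
        GraphicsEnvironment.getLocalGraphicsEnvironment().registerFont(base);

        // Any text whose code points map to glyphs in the font will render with them.
        JTextArea area = new JTextArea("text mapped to your custom glyphs");
        area.setFont(base.deriveFont(24f));
    }
}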
Modern text editors like Notepad++ can visualize control characters like CR, LF, STX, ETX, EOT. I have started to wonder how text editors visualize these characters so neatly.
Note: I am familiar with how encodings and character sets work. And I'm also familiar with the reason why these characters exist.
Some ideas:
Does it apply a special font for these specific characters?
i.e. a font which contains a representation of all characters.
Or does it use an advanced text-field control/GUI component that renders (i.e. draws) them on the canvas?
Or does it just replace the characters? (e.g. replacing a 0x0D with the Unicode character 0x240D, i.e. ␍)
This seems to be the easiest, but then how does copying the text still preserve the original text?
The reason for my question: I would like to create a java application that does the same thing.
The third idea should be correct: you can replace the characters with Unicode control pictures, using a font that contains those glyphs.
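A minimal sketch of that substitution, mapping the C0 range onto the U+2400 "Control Pictures" block; the component displays the converted string while the original is kept around for copying:

// Replace C0 control codes with their Unicode Control Pictures (U+2400..U+241F, U+2421 for DEL).
static String toControlPictures(String text) {
    StringBuilder sb = new StringBuilder(text.length());
    for (int i = 0; i < text.length(); i++) {
        char c = text.charAt(i);
        if (c < 0x20) {
            sb.append((char) (0x2400 + c));  // e.g. 0x0D becomes U+240D "␍"
        } else if (c == 0x7F) {
            sb.append('\u2421');             // "␡", symbol for delete
        } else {
            sb.append(c);
        }
    }
    return sb.toString();
}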
There are some inherent problems with assigning glyphs ('images') to control codes; most have to do with the fact that they already have a particular use! For example, if you send a Tab code to your display, you'd typically expect the cursor to move by a certain number of positions, not to see a character ○ pop up.
Also, typically, fonts use Unicode as their native encoding. Unicode does not allow a glyph to be assigned to the control codes:
Sixty-five code points (U+0000–U+001F and U+007F–U+009F) are reserved as control codes (https://en.wikipedia.org/wiki/Unicode)
There is an 'alias' sort of set defined: U+2400 to U+241F for 0x00 to 0x1f, U+2420 "␠" for "symbol for space", and U+2421 "␡" for "symbol for Delete" (your #3) but then you need to make sure the user has a font that contains these glyphs.
The most configurable way is to 'manually' draw whatever you like. This means you can use any font you want (without the need for a special font), and character replacement is not necessary (only the drawing code needs to filter out 'specials'). A drawback, though, is that you are also in charge of drawing regular text.
If that is overkill or you don't have sufficient control over the text draw area, you can simply use different foreground and background colors for the control characters only. This is a screenshot of a quick-and-dirty hex viewer I wrote a while ago – I only change the colors here, but I could have written out custom text for all as well.
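If the text lives in a JTextPane, the colors-only variant can be done with character attributes instead of custom painting; a rough sketch that simply recolors every control character in the document:

import java.awt.Color;
import javax.swing.JTextPane;
import javax.swing.text.BadLocationException;
import javax.swing.text.SimpleAttributeSet;
import javax.swing.text.StyleConstants;
import javax.swing.text.StyledDocument;

class ControlCharHighlighter {
    static void highlight(JTextPane pane) throws BadLocationException {
        StyledDocument doc = pane.getStyledDocument();
        String text = doc.getText(0, doc.getLength());

        SimpleAttributeSet control = new SimpleAttributeSet();
        StyleConstants.setForeground(control, Color.WHITE);
        StyleConstants.setBackground(control, Color.RED);

        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (c < 0x20 || c == 0x7F) {
                // Recolor just this character; 'false' keeps its other attributes.
                doc.setCharacterAttributes(i, 1, control, false);
            }
        }
    }
}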
For a good overview of what it takes, see James Brown's Design & Implementation of a Win32 Text Editor; it focuses on using Win32 API calls but there is a lot of background as well. Drawing neat Control Codes is addressed in the section Enhanced Drawing & Painting.
I'll start saying what I'm doing:
I'll take a photo with a webcam; in this photo there will be an object, always the same object, in a square format with letters inside it. I need to identify those letters. The step of identifying those letters is already done; the problem is the quality of the image coming from the webcam: it won't be the best, nor will the positioning, and the API I'm using to identify those letters requires good positioning and quality.
The reason I have a square is to help identify where those letters are, so I can 'look for a square' in the image and then do what I've already done to identify the letters. My question is: are there more things I have to do in order to achieve this, or is it only 'reposition the image, look for the square and then it's done'? If I need to study image processing there is no problem; I'm here because I don't even know what I have to look for.
I'm developing in Java because of 'school things', so if there's already an API (I've heard of and tried OpenCV, but I don't know what to do with it) it would really help me.
Thanks in advance.
Edit 1: As asked by Springfield762, I took some photos and I'll explain them below.
First let me explain the photos: the 'square thing' that will contain the letters isn't done yet (another department is taking care of it), so I had to improvise something here with pens and batteries. The letters will all be made of wood in a nice shape; I had to replace them with some Magicka cards since I don't have them yet, but the cards fit well enough to explain the example. I also made an example of the square (which actually ended up as a rectangle) in Paint, so it's not pretty at all.
I took 3 photos: one using the light coming from the window, the second using the light of my room, and the third using the flash of the webcam. (Sorry about the links; I can't post images or links. Although I'm always here, this is the first time I've posted a question...)
Window light:
Room light:
Flash:
Square (rectangle) example:
You can ignore the mock-up of the square; I made it so that you can understand the images. The reason I took 3 different photos was just to show the different conditions the webcam might be in. Also, the quality of the Magicka cards isn't a problem: since each card represents one letter, it'll be easy to 'see' them.
Well, I found answers to most of this question; I'll explain them below.
First, it's not a square but a rectangle, and it still hasn't been made. So I started testing the software using anything that was a rectangle. First I had to locate the rectangle in the frame captured by the camera and then show it in the original image seen by the user. I accomplished that by:
Capturing the current frame;
Converting that frame to HSV;
Applying a threshold (using the Core.inRange function, so that I could find a specific color within the range specified in the function);
Applying Imgproc.findContours to find the contours of the rectangle;
Finally, drawing a rectangle using the points found by findContours.
How it ended: i.imgur.com/wmNVai0.jpg
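For reference, a rough sketch of those steps with the OpenCV 3+ Java bindings; the HSV range is only a placeholder and needs tuning to the color of the actual rectangle:

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

public class RectangleFinder {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        VideoCapture camera = new VideoCapture(0);
        Mat frame = new Mat();
        camera.read(frame);                                   // 1. capture the current frame

        Mat hsv = new Mat();
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);  // 2. convert to HSV

        Mat mask = new Mat();                                 // 3. threshold on a color range
        Core.inRange(hsv, new Scalar(100, 100, 50), new Scalar(130, 255, 255), mask);

        List<MatOfPoint> contours = new ArrayList<>();        // 4. find contours in the mask
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        for (MatOfPoint contour : contours) {                 // 5. draw a bounding rectangle
            Rect box = Imgproc.boundingRect(contour);
            Imgproc.rectangle(frame, box.tl(), box.br(), new Scalar(0, 255, 0), 2);
        }
        camera.release();
    }
}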
After that I knew that I could place the rectangle in a way that all the letters inside it would be in a straight line, so I didn't need to care about the positioning of the letters. Now I had to fight with the OCR.
I chose Tesseract as it is open source and seems to be a strong tool (backed by Google, which certainly counts for something), and then I started to test some images.
In the beginning it was tough and I thought I'd have to train the OCR even more, but the thing is that it has a dictionary and tries to find words listed in that dictionary, and I didn't need that as I was looking for characters that could appear in a completely random order. I had to turn the dictionary off by adding the following lines to a config file:
load_system_dawg F
load_freq_dawg F
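If you call Tesseract from Java through the Tess4J wrapper, the same switches can also be set programmatically; a sketch under that assumption, with placeholder paths and file names:

import java.io.File;
import net.sourceforge.tess4j.ITesseract;
import net.sourceforge.tess4j.Tesseract;

public class OcrWithoutDictionary {
    public static void main(String[] args) throws Exception {
        ITesseract ocr = new Tesseract();
        ocr.setDatapath("tessdata");  // directory containing the traineddata files

        // Disable the word dictionaries so random character sequences are not "corrected".
        ocr.setTessVariable("load_system_dawg", "F");
        ocr.setTessVariable("load_freq_dawg", "F");

        System.out.println(ocr.doOCR(new File("letters.png")));
    }
}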
After that I had to change some things in the image as well:
Transform into Grayscale;
Resize it by ~80%;
Original images (I can't post links...):
i.imgur.com/DFqNSYB.jpg
i.imgur.com/2Ntfqy3.jpg
Grayscale:
imgur.com/XUZ9b1Z.jpg
i.imgur.com/yjXMH5Q.jpg
Resized:
i.imgur.com/zgX9bKF.jpg
i.imgur.com/CWPRU3I.jpg
(Sometimes I had problems with resized images and at other times I didn't; that's something I have to test further.)
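The grayscale and resize steps themselves need nothing beyond Java2D; a minimal sketch, where the 0.8 factor simply mirrors the "~80%" mentioned above:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class Preprocess {
    public static void main(String[] args) throws Exception {
        BufferedImage source = ImageIO.read(new File("letters.jpg"));  // placeholder input

        // 1. Convert to grayscale by drawing into a TYPE_BYTE_GRAY image.
        BufferedImage gray = new BufferedImage(
                source.getWidth(), source.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = gray.createGraphics();
        g.drawImage(source, 0, 0, null);
        g.dispose();

        // 2. Resize to ~80% of the original size.
        int w = (int) (gray.getWidth() * 0.8);
        int h = (int) (gray.getHeight() * 0.8);
        BufferedImage resized = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g2 = resized.createGraphics();
        g2.drawImage(gray, 0, 0, w, h, null);
        g2.dispose();

        ImageIO.write(resized, "png", new File("letters-prepared.png"));
    }
}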
Then I could get some good results, though I'm still wary since the ambient light makes a big difference. I still have to test it more, and mainly I still need the god da** base; I'll post it as an edit later.
If I did anything wrong or if anyone wants to correct me, please feel free to say it!
I'm using JOGL (OpenGL for Java) for my application and I need to draw tons of strings on screen at once, and my current solution is far too slow. Right now I'm drawing the strings with TextRenderer using the draw3D method, and for even a moderate number of strings (around 300-500) it just kills the FPS. I started experimenting with drawing text onto the objects' textures, which is much faster, but there are a few problems with it. The first is that allocating all those textures requires a lot of memory. The second is that I need to find a way to size each texture so it's only as big as its string and then map it to the object without stretching. The problem there is that all these thousands of boxes use a single model rendered with a display list. I'm not sure it's possible to change the texture mapping for each object in that situation.
I don't mind if the text appears flat or 3D, it just has to be positioned in 3D space. I would prefer to render the text in the highest quality possible without sacrificing too much speed, since readability of the text is the most important part of the application. Also, nearly all of the strings are different, there aren't many duplicates.
So, my question: Am I going down the right path with drawing the strings on the textures, and if so, how can I overcome those 2 problems? Or is there another method that would suit my needs?
Depending on exactly how TextRenderer works, you might be able to use display lists to batch up your text drawing commands.
If TextRenderer works by keeping a texture of individual character glyphs and piecing together a string one glyph at a time, it'll be fine: just bookend your text drawing code with glNewList and glEndList. Once a list is defined, just use glCallList to replay it.
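In JOGL that bookending might look roughly like the sketch below (assuming a GL2 profile; as discussed next, whether it is actually safe depends on how TextRenderer manages its cache):

import com.jogamp.opengl.GL2;
import com.jogamp.opengl.util.awt.TextRenderer;
import java.util.List;

class BatchedText {
    private int textList = -1;

    // Record the TextRenderer calls into a display list once (e.g. when the strings change).
    void buildList(GL2 gl, TextRenderer renderer, List<String> strings) {
        textList = gl.glGenLists(1);
        gl.glNewList(textList, GL2.GL_COMPILE);
        renderer.begin3DRendering();
        float y = 0f;
        for (String s : strings) {
            renderer.draw3D(s, 0f, y, 0f, 0.05f);  // placeholder positions and scale
            y += 1f;
        }
        renderer.end3DRendering();
        gl.glEndList();
    }

    // Replay the recorded calls each frame.
    void draw(GL2 gl) {
        gl.glCallList(textList);
    }
}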
If, however, TextRenderer works by drawing complete strings into a texture and using one quad per string, display lists may not work. If the strings in one batch do not all fit within TextRenderer's cache, it will delete the least-recently used one to reclaim some space. Display lists only replay the OpenGL calls that were recorded, so the work done by TextRenderer to update the string cache texture will be lost and you'll get incorrect output. From a quick scan of the source, I suspect that TextRenderer works in this manner.
To summarise: display lists will greatly speed up your rendering, but only if you don't overflow TextRenderer's string cache texture and don't use the TextRenderer again after the display list has been defined.
If you can't meet these constraints you're going to have to go a bit hardcore and write your own text renderer that renders glyph-by-glyph - it'll then be trivial to cache the output geometry and extremely quick to re-render. There's an example of such a system here, with the tool to create a font here. It uses LWJGL rather than JOGL, but the translation between the two will be the least of your worries if you want to integrate it - it's meshed with the texture management etc.
I'm working on an application in Java that will maintain a database of song lyrics in plain text and print out songbooks/chordbooks (that is, create a PDF file from selected songs). I was planning for the Java application to generate source code for pdflatex, and after compiling this source the user would get a PDF file.
Lately I've run into a lot of problems because of LaTeX's limitations: a fixed memory size (some pictures will also be drawn into the PDF) that raises an error when exceeded, no way to query the end of a line or page dynamically, and the difficulty of overriding LaTeX's placement algorithm in a complex way; see also some of my other questions regarding LaTeX. I've come to the conclusion that LaTeX is not a good option for automated PDF generation.
So I need a replacement. I need to be able to typeset:
Chords over lyrics, where the lyrics are set in a variable-width font, so I need to be able to measure text width
Chord diagrams, which means I'll have to draw quite complex pictures
Each song on separate double page
Different fonts etc.
Thanks for all the answers.
Here are some open source PDF APIs:
http://java-source.net/open-source/pdf-libraries
This has been asked many times; you might want to look at this post.
iText is a free library which offers lots of capabilities for creating PDFs programmatically.
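For the chords-over-lyrics requirement in particular, iText lets you measure string widths and place text at absolute positions; a rough sketch against the iText 5 API, where the chord and its offsets are purely illustrative:

import com.itextpdf.text.Document;
import com.itextpdf.text.pdf.BaseFont;
import com.itextpdf.text.pdf.PdfContentByte;
import com.itextpdf.text.pdf.PdfWriter;
import java.io.FileOutputStream;

public class ChordLine {
    public static void main(String[] args) throws Exception {
        Document doc = new Document();
        PdfWriter writer = PdfWriter.getInstance(doc, new FileOutputStream("songbook.pdf"));
        doc.open();

        BaseFont font = BaseFont.createFont(BaseFont.HELVETICA, BaseFont.CP1252, BaseFont.NOT_EMBEDDED);
        PdfContentByte cb = writer.getDirectContent();

        String lyric = "Amazing grace, how sweet the sound";
        float x = 72f, y = 700f, size = 12f;

        // Measure the width of the text before the target syllable, then place the chord above it.
        float offset = font.getWidthPoint("Amazing ", size);

        cb.beginText();
        cb.setFontAndSize(font, size);
        cb.setTextMatrix(x, y);
        cb.showText(lyric);                    // the lyric line
        cb.setTextMatrix(x + offset, y + 14f);
        cb.showText("G7");                     // the chord, above "grace"
        cb.endText();

        doc.close();
    }
}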
Rather than try to manage/calculate the complexities of the desired layout, you could try Docmosis. It will let you layout a document as a template using doc or odt formats. This means if you could make a doc or odt look like you want, you can turn it into a template and get Docmosis to render it as a PDF. Text and images can be placed inside or outside tables which makes layout fairly easy to manage.
ConTeXt is another TeX system, but it is easier to control the layout than with LaTeX. For drawing you could use PGF/TikZ or MetaPost. Support for both is available in ConTeXt. With ConTeXt's built in Lua scripting you could draw the chords automatically, assuming you have them stored in some sort of data structure.
Why not just use LilyPond with LaTeX? It's meant for typesetting music.
Well, I've written a basic lossless JPEG joiner in Java now, but I'd like to compare the files it produces with the original files.
I can only compare so much in a hex editor. Does anyone know of an easy way, software or Java based (preferably software, as I don't feel like any more coding for now!), to compare two images and produce a "difference map" of where the pixels aren't the same?
Thanks.
Thanks for the suggestions.
I tried the Gimp approach first, which works well except when the differences between the images are very small. I couldn't find an "enhance differences" option to make the differences obvious, and the histogram only gives a rough representation of the differences.
In the end I used ImageMagick, something I'd installed a while ago and forgotten all about. Creating a difference/comparison image is as easy as typing:
compare first.jpg second.png difference.gif
in the command line.
It's all nicely explained here.
TortoiseIDiff is a free image diff viewer:
http://tortoisesvn.tigris.org/TortoiseIDiff.html
It is part of TortoiseSVN, but can be used without Subversion.
Depending on your project, not all files which are under version control are text files. Most likely you will have images too, for example screenshots and diagrams for the documentation/helpfile. For those files it's not possible to use a common file diff tool, because they only work with text files and diff line-by-line. Here is where the Tortoise Image Diff tool (TortoiseIDiff) comes to the rescue. It can show two images side-by-side, or even show the images over each other alpha blended.
You could do a lot worse than Perceptual Diff.
The best approach would be to use Pix for Windows (it comes with the DirectX SDK). It supports Bitmap, PNG and JPEG... Enjoy!
Use an image editor with multiple layers, like Photoshop or the Gimp. Create an image where each source image is in a separate layer.
At this point, you can visually compare the images by toggling the top layer's visibility off and on.
In most decent editors, you can also set the top layer to "difference" mode. Now each image pixel's value is the absolute difference of the pixel values in the underlying images. You can use e.g. a histogram tool to see if the images are identical. If they're identical, then all the pixel values will be exactly 0.
For stuff like this, I love the netpbm/pbmplus toolkit. You can use djpeg and pnmtoplainpnm to convert each image into a simple ASCII format. You then just read both files and emit a new image which shows where pixels differ. You could, for example, compute the Euclidean distance in RGB space between old and new pixels and emit a white pixel for zero difference, light gray for a small difference, darker for larger differences, and so on. The ASCII format is simple and is well documented on the man pages, and all the standard viewer programs can view it directly.
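Since the joiner itself is in Java, the same idea fits in a few lines with ImageIO: compute a per-pixel RGB distance and write it out as a grayscale difference map (a sketch assuming both images have identical dimensions):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class DiffMap {
    public static void main(String[] args) throws Exception {
        BufferedImage a = ImageIO.read(new File("original.jpg"));   // placeholder file names
        BufferedImage b = ImageIO.read(new File("joined.jpg"));

        BufferedImage diff = new BufferedImage(a.getWidth(), a.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int p = a.getRGB(x, y), q = b.getRGB(x, y);
                int dr = ((p >> 16) & 0xFF) - ((q >> 16) & 0xFF);
                int dg = ((p >> 8) & 0xFF) - ((q >> 8) & 0xFF);
                int db = (p & 0xFF) - (q & 0xFF);
                // Euclidean distance in RGB; white means identical, darker means a bigger difference.
                int d = (int) Math.min(255, Math.sqrt(dr * dr + dg * dg + db * db));
                int v = 255 - d;
                diff.setRGB(x, y, (0xFF << 24) | (v << 16) | (v << 8) | v);
            }
        }
        ImageIO.write(diff, "png", new File("difference.png"));
    }
}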
The latest version of Araxis Merge will do image diffs ( http://www.araxis.com/merge/topic_comparing_image_files.html ).
Unfortunately it's not a free app so whether or not you're willing to pay for it is another thing...
There's also a convenient web app called Resemble.js, which analyzes and compares images pixel by pixel. The different pixels in the images (if any) are highlighted with pink or yellow color depending on your preference.