I'm working on a project involving enlisting a large number of
relatively unskilled workers to do repetitive image analysis using
ImageJ. I've written a set of macros which walk them through the
analysis process, but in order to increase throughput and reduce
errors, I'd like to figure out how to hide as much of the GUI/menu
interface as possible.
An optimal solution would show just the image in question and a set of
icons to select the correct macro. To further complicate things, I'm
planning on delivering the applet and the image to be analyzed through a
website (though my understanding is that this shouldn't change things
too much).
I've searched a fair bit and can't seem to find an example of how to
do this interface simplification. If anyone can point me in the right
direction I'd be quite grateful. I'm open to any suggestion that works, though since my Java is a bit rusty, a macro/script/configuration solution might be easier.
I solved my problem with the excellent Action Bar Plugin:
http://imagejdocu.tudor.lu/doku.php?id=plugin:utilities:action_bar:start
I want to build an image editing application. I've gone through convolution matrices for creating basic color filters, but I want the app to also have advanced editing capabilities like highlight/shadow adjustment, vignette, curves adjustments, etc.
Is there any chance I might find some examples of these to learn more about them? Also, any kind of helpful resources would be appreciated.
P.S. If there is an existing image editing library/SDK that can get the job done, that would be great too.
You should look at OpenCV and VXL. They are libraries of computer vision functions and have open-source communities around them. OpenCV is a big library/community. I was looking into image processing libraries for some ideas I have (permanently stuck in pre-development due to lack of time) and I have played with both of them a bit on Linux. I'm still very much an OpenCV/VXL n00b though.
https://en.wikipedia.org/wiki/OpenCV
I found VXL a bit faster to get started with.
https://en.wikipedia.org/wiki/VXL
There is support for OpenCV on Android:
http://opencv.org/platforms/android.html
There is no support for VXL on Android, as far as I can see.
Now, both of these are pretty big projects. I would say there is a lot to learn and it will take a while, but I think it is well worthwhile. There are many tutorials and examples.
Get the source code first:
$ git clone https://github.com/Itseez/opencv
$ git clone http://git.code.sf.net/p/vxl/git vxl
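Once the OpenCV Java bindings are built, the convolution-matrix idea you already know carries over directly. Here is a minimal sketch of a 3x3 sharpening filter, assuming the OpenCV 3.x Java bindings are on the classpath and the native library is on java.library.path (in the 2.4 series the imread/imwrite calls live in Highgui instead of Imgcodecs); the file names are placeholders.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class SharpenDemo {
    public static void main(String[] args) {
        // Load the native OpenCV library (must be on java.library.path).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat src = Imgcodecs.imread("input.jpg");   // placeholder input file

        // 3x3 sharpening kernel: the same convolution-matrix idea as basic color filters.
        Mat kernel = new Mat(3, 3, CvType.CV_32F);
        kernel.put(0, 0,
                 0, -1,  0,
                -1,  5, -1,
                 0, -1,  0);

        Mat dst = new Mat();
        Imgproc.filter2D(src, dst, -1, kernel);    // -1 keeps the source depth

        Imgcodecs.imwrite("sharpened.jpg", dst);   // placeholder output file
    }
}

Curves and highlight/shadow adjustments are lookup-table operations rather than convolutions, so something like a LUT call would cover those once you have built the mapping, but check the docs for the exact API in your version.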
Especially on a mobile platform it will be important to get the image processing right, so that it doesn't kill the device's battery. So... lots of experimenting, testing, and learning to do!
Have fun!
I have a project where I need to analyse data via cluster analysis. Basically, the data should be visualised like this picture shows.
Each dataset (for example, a person) is one horizontal row, where vertical lines show the attributes like sex, age, and so on.
Once this data is shown, I also want to be able to move the rows horizontally and vertically, a) via code and b) via drag & drop.
Does anybody know a good library for that?
Important:
Target is a desktop application
Expected datasets: around 500
Attributes per dataset: around 60
There is an app in Java/SWT already, so solutions in this direction would be preferred
The OS is Win7, so C# or similar would be an acceptable stopgap
I really like d3.js and would prefer a similar look & feel (but in 3D)
If somebody has recommendations for a library which helps to analyze the data, please step forward too!
Check What is the best open-source java charting library? and Libraries for pretty charts in SWT? for more info.
I did use JFreeChart with SWT (2 years ago). The code is quite horrible (you have to write tons of it), but it works and renders directly with SWT components (no need for the SWT_AWT bridge).
EDIT
When I thought about it again, I realized that you can use the JavaScript library through the SWT Browser widget. It's quite a heavyweight solution, but it might work.
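To make that route concrete, here is a minimal sketch: a plain SWT shell hosting a Browser widget that loads a local HTML page, which in turn would pull in d3.js and render the matrix. The viz.html file name is a placeholder for your own page; for the code-driven row moves you could call into the page through Browser.evaluate() or expose Java callbacks with a BrowserFunction.

import java.io.File;

import org.eclipse.swt.SWT;
import org.eclipse.swt.browser.Browser;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class D3InSwt {
    public static void main(String[] args) {
        Display display = new Display();
        Shell shell = new Shell(display);
        shell.setText("Cluster matrix");
        shell.setLayout(new FillLayout());

        // The Browser widget embeds the platform browser engine inside the SWT shell.
        Browser browser = new Browser(shell, SWT.NONE);
        // viz.html is a placeholder: a local page that loads d3.js and draws the rows.
        browser.setUrl(new File("viz.html").toURI().toString());

        shell.setSize(900, 600);
        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch()) {
                display.sleep();
            }
        }
        display.dispose();
    }
}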
You can do this in d3, but it is a very involved process in which you need to deal with the isometric perspective and the rest. It shouldn't be terribly complicated, but it will not be an out-of-the-box solution.
I am creating an Android application for musicians because I am in a band and saw that I needed something of this sort, but could not find one that could do what I wanted. I was wondering how someone would be able to take input from a microphone, turn that input into a file, and compare it with all of the prerecorded sound files in a database to determine the note(s) or chord(s) being played. I'm not having trouble with getting the input, but I'm stumped on how one would compare one sound file to another in terms of frequency or something of the kind. I haven't yet been able to find an answer that could really be used to help with the problem, nor have I been able to find a Java library that handles sound comparison. I know this is an extremely hard task to accomplish, but I also know it can be done and would like to have a go at it. If anyone could offer advice, a link to a library that could do such a thing, or even if someone has already done it and could show me exactly how to do it, I would be extremely grateful. Thank you for your time, and any feedback is appreciated!
I recommend you check out the musicg API, hosted on Google Code. It's an open-source library written in Java that you can integrate with an Android app. It provides sound similarity metrics.
http://code.google.com/p/musicg/
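If it helps, the comparison boils down to something like the sketch below. This is only a rough sketch from memory of the project's examples, so double-check the class names against the musicg docs; the WAV file names are placeholders for your recorded clip and one of your prerecorded reference files.

import com.musicg.fingerprint.FingerprintSimilarity;
import com.musicg.wave.Wave;

public class CompareClips {
    public static void main(String[] args) {
        // Recorded input and one of the prerecorded reference notes/chords (placeholders).
        Wave recorded = new Wave("recorded.wav");
        Wave reference = new Wave("reference-chord.wav");

        // Fingerprint-based similarity: values closer to 1.0 mean a better match.
        FingerprintSimilarity similarity = recorded.getFingerprintSimilarity(reference);
        System.out.println("similarity = " + similarity.getSimilarity());
    }
}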
While I don't think we can close this as a duplicate (as it isn't, technically), please do a search before posting. What you are asking isn't specific to your platform so much as to the kinds of algorithms you need to implement.
From my post here: How can I do real-time pitch detection in .Net?
See these references: http://cnx.org/content/m11714/latest/
http://www.gamedev.net/community/forums/topic.asp?topic_id=506592&whichpage=1
Line 48 in Spectrum.cpp in the Audacity source code seems to be close to what you want. They also reference an IEEE paper by Tolonen and Karjalainen.
Basically, you need to start with some FFT, but it is much more complicated than that. I think you will find that the near-impossibility of this task (especially for a whole band, a non-clear audio input source, etc.) will make this project not worth it. Psychoacoustics, particularly with distorted guitars, will make this very difficult.
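To give a feel for that first step only, here is a minimal sketch that estimates the single dominant frequency of a mono sample buffer with a brute-force DFT peak search. The class and method names here are just for illustration; real pitch detection (let alone chords) needs far more than this, such as autocorrelation or harmonic product spectrum, windowing, and octave-error handling, as the links above discuss.

public class PeakFrequency {

    // Returns the frequency (Hz) of the strongest DFT bin in a mono buffer.
    // samples: PCM samples scaled roughly to [-1, 1]; sampleRate in Hz.
    public static double estimate(double[] samples, double sampleRate) {
        int n = samples.length;
        int bestBin = 0;
        double bestMagnitude = -1.0;

        // Brute-force O(n^2) DFT magnitude scan; use a real FFT library in practice.
        for (int k = 1; k < n / 2; k++) {
            double re = 0.0, im = 0.0;
            for (int t = 0; t < n; t++) {
                double angle = 2.0 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            double magnitude = re * re + im * im;
            if (magnitude > bestMagnitude) {
                bestMagnitude = magnitude;
                bestBin = k;
            }
        }
        return bestBin * sampleRate / n;
    }
}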
There are tons of really solid posts on this topic here: https://stackoverflow.com/search?q=pitch+detection
I am working on an application for college music majors. A feature I am considering is slowing down music playback without changing its pitch. I have seen this done in commercial software, but cannot find any libraries or open-source apps that do anything like this.
Are there libraries out there?
How could this be done from scratch from various file formats?
Note: I am working in Java but am not opposed to changing languages.
Timestretching is quite hard. The more you slow down or speed up the sound, the more artifacts you get. If you want to know what they sound like, listen to "The Rockafeller Skank" by Fat Boy Slim. There are a lot of ways to do it, and they all have their own strengths and weaknesses. The math can get really complex. That's why there are so many proprietary algorithms.
This page explains things a bit clearer than I can and links to the Dirac library.
http://www.dspdimension.com/admin/time-pitch-overview/
I found this link to Java code that does pitch shifting/timestretching:
http://www.adetorres.com/keychanger/KeyChangerReadme.html
I use soundstretch to speed up podcasts, which works quite well; I haven't tried it on music, though.
This site explains how it's done in the physical world:
http://www.wendycarlos.com/other/Eltro-1967/index.html
I don't know how you would emulate that in software though... I'll keep looking
One way to do it would be to double the playback sampling rate without changing the sampling rate of your source. (A low-quality approach, but easy to implement. Note: you can also decrease the sampling rate as well.)
Check out any math related to phase vocoders.
Another common method is to create an array of FFT bins that store data for scheduled intervals of your sound. Then you can choose how quickly to iterate through the bins and re-synthesize that audio data for as long as you choose, thus letting you stretch out one short segment of your sound for as long as you like.
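As a rough illustration of that resynthesis idea, here is a bare overlap-add stretch on a mono float buffer. The window and hop sizes are arbitrary choices, and it deliberately leaves out the phase alignment a real phase vocoder or WSOLA implementation does, which is exactly where the audible artifacts mentioned above come from; treat it as a sketch, not a usable algorithm.

public class OlaStretch {

    // Stretch a mono buffer by the given factor (> 1.0 slows it down).
    public static float[] stretch(float[] in, double factor) {
        int window = 2048;                                    // analysis window length
        if (in.length < window) {
            return in.clone();                                // too short to bother
        }
        int synthesisHop = window / 4;
        int analysisHop = Math.max(1, (int) Math.round(synthesisHop / factor));
        int frames = (in.length - window) / analysisHop + 1;

        float[] out = new float[(frames - 1) * synthesisHop + window];
        float[] weight = new float[out.length];

        for (int f = 0; f < frames; f++) {
            int inPos = f * analysisHop;
            int outPos = f * synthesisHop;
            for (int i = 0; i < window; i++) {
                // Hann window keeps the overlapping frames from clicking at the seams.
                float w = 0.5f - 0.5f * (float) Math.cos(2.0 * Math.PI * i / (window - 1));
                out[outPos + i] += in[inPos + i] * w;
                weight[outPos + i] += w;
            }
        }
        for (int i = 0; i < out.length; i++) {
            if (weight[i] > 1e-6f) {
                out[i] /= weight[i];
            }
        }
        return out;
    }
}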
Audacity does it out of the box, and it's free. There are several free plug-ins for MP3 players as well. Apparently it's pretty easy to do with an MP3, since it's already coded in the frequency domain.
I am trying to make an application in which one component captures the user's screen (for screencasting). I am aware that there are two options to achieve this using a Java applet (please correct me if I am wrong). The first is to use the Java applet to take screenshots continuously, convert them into a video, and upload it as a video file. The second is to create a Java VNC server, record the session as a .fbs file, and play it using a player like: http://www.wizhelp.com/flashlight-vnc/index.html
I would like to know the best solution in terms of video quality, file size, cross-platform compatibility (Windows and Mac), firewall problems, and finally ease of implementation.
I am very new to Java. Please tell me what's the best solution for my problem. Also, is it easy enough for me to program it on my own, or should I get it developed by a freelancer? I have tons of programming experience (5+ years in LAMP) but none in Java.
Thank you very much.
I agree that this is pretty hard. I implemented those two solutions (VNC and onboard screen capture) plus a third (capture from an external VGA source via an Epiphan grabber) for a former employer. I had the best bandwidth-to-quality ratio with VNC, but I got a higher framerate with VGA capture. In all three cases, I reduced the frames + capture times to PNGs and sequenced them in a QuickTime reference movie. Then I made a flattened video (MPEG4 or SWF) of the results. In my case, I then synchronized the screen video with a DV stream.
In the end the technology worked (see a sample of the output) but our business model failed.
From what I know, older versions of the applet sandbox had security restrictions that may not allow screen capture. A Java application may be feasible instead.
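For the onboard-capture route from a trusted Java application (or a properly signed applet with the right permissions), the standard JDK building block is java.awt.Robot. Here is a minimal single-frame sketch; a real screencast would repeat this in a timed loop and feed the frames to a video encoder, and the output file name is just a placeholder.

import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.File;

import javax.imageio.ImageIO;

public class ScreenGrab {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();

        // Capture one full-screen frame; a screencast loops this at a fixed rate.
        BufferedImage frame = robot.createScreenCapture(new Rectangle(screenSize));

        // Placeholder output; in practice you would hand the frames to an encoder.
        ImageIO.write(frame, "png", new File("frame-0001.png"));
    }
}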
Regarding build-it-yourself vs. hire-a-coder, it depends on how you value your time compared to what you can find on a freelancer site.
I think you can find someone from India/Romania/Poland/other countries who can make it for an affordable price.
Given your Java knowledge and the difficulty of the task, have you considered taking an alternative approach? For example, how about a native VNC server for the end user, which is just a small download that they click "Run" on? That native server is programmed to capture the screen and send it straight to your web server, which has a client like vnc2swf or some other means of converting the VNC stream to a video or .fbs file. Does all that make sense?
Admittedly, without Java you have to prepare one executable program per platform you want to support; however, I don't know, that still sounds easier to me. Consider Copilot.com: they are doing VNC, but they still use small native apps for each platform.
Sorry, but this seems like the kind of job that requires a lot of experience. Even if you find code snippets all around the net to fix this and that, the overall result may be far worse than simply hiring an experienced Java programmer.