Hints and tips for curing and preventing Gephi instability? - java

I am experiencing massive unreliability with my Gephi install after it complained about being out of memory and tried to reset the memory limit itself. That didn't work and the VM wouldn't start, so I manually reset the memory to 1024 MB, which didn't work, and then to 512 MB, by editing the config file reached via my Start menu.
The system is now hugely unreliable: it freezes and crashes every time I try to work with it. I tried abandoning my project (which is not huge - 11k nodes) in case the project file had gotten corrupted, and tried round-tripping a checkpoint, reading edge and node lists into a blank project from a "just in case of disaster" CSV edge and node list export I had made earlier. It read the node list OK, but wouldn't read the edge list and froze up again. The log file contains lots of warnings about deprecated NetBeans usages and then a final SEVERE warning about a Java array index out of bounds. That doesn't sound like something I can do anything about... but hope springs eternal.
In addition to wiping it all out and doing a reinstall, are there any practical tips anyone can offer to help keep Gephi happy?
I am on XP SP3 with up-to-date Java, a dual-core CPU and 2 GB of RAM.
The config file I edited was on a different system path from the one referenced in the log file messages: the config file was under a general (system-wide) path, while the log file was specific to me as a user. As far as I can tell from the docs that is how it should be, but it is something that strikes me as potentially suspicious. I am wondering whether my memory-allowance change actually took effect properly, but I don't know how to inspect this except via the config file.
I really really really like Gephi - when it works right. (But to do what I need to do today, I'm going to need to go back to R...)
thanks!

A few tips for when you are absolutely memory-starved:
- Minimize the number of attributes on your nodes and edges. Ideally, you'd have none.
- Fine-tune the Gephi RAM settings. The Gephi installation page says:
On computers with 2 GB of memory, you can set -Xmx1400m to get maximum performance.
If 1024 was too high for your machine and 512 is too low, try intermediate values. And of course, kill any unnecessary processes running on your machine. The limit itself lives in Gephi's startup config, as sketched below.
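On Windows that config is typically <Gephi install dir>\etc\gephi.conf; the exact contents vary by Gephi version, but the line that matters usually looks something like this (values purely illustrative), with -J-Xmx being the maximum heap to raise or lower:

    # etc/gephi.conf (illustrative - leave any other flags on the line untouched)
    default_options="... -J-Xms64m -J-Xmx768m ..."

If the VM refuses to start at a given -J-Xmx, the 32-bit JVM on that machine cannot reserve that much contiguous address space, so step the value back down and restart Gephi fully before judging whether the change took effect.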

Related

Is there any way to analyze a truncated Java Heap Dump (hprof file)?

In my work, we are running into a difficult-to-reproduce OOM issue. Or, more accurately, it is very easy to reproduce on one system, making that system unusable, but difficult to reproduce anywhere else, given the same inputs.
The application is being run as a service using a service wrapper. We did manage to get the configuration changed to launch it with the option of writing a heap dump file on OOM but, unfortunately, the dumps were truncated, most likely because the service wrapper timed out and killed the process as it wrote the file. This is readily apparent, since the max memory is set to 1 GB and the hprof files are as small as 700 MB, which is too small to be the entire heap at the point of OOM.
It would take a lot of jumping through hoops to additionally configure the wrapper to give the Java process a longer time to write out the heap, but we are pursuing this using these two options:
wrapper.jvm_exit.timeout=600
wrapper.shutdown.timeout=600
The question is: is there anything useful I can do with the truncated hprof files I have? Eclipse MAT chokes on them. jhat appears to load them, but then shows only three instances of java.lang.Object of size 0 and nothing else. I tried YourKit and it couldn't write its oids file.
It seems to me like these files should have some useful, accessible information in them. Is there a tool that can read what's there?
Thank you for your time!
The best option for analyzing a truncated dump file that I have come across to date is a text editor such as vim.
Use JProfiler (https://www.ej-technologies.com/products/jprofiler/overview.html). It's not free, but it has a trial period.
The live memory and CPU views are your best bet for isolating your issue. It generally runs reasonably well even on large dumps.
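If the on-OOM dumps keep getting truncated by the wrapper, another angle is to take a dump on demand from inside the application before memory is fully exhausted. A minimal sketch, assuming a HotSpot JVM (HeapDumper and the output file name are made-up names, not from the thread):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {
        // Writes an hprof snapshot of the current JVM to the given path.
        public static void dumpHeap(String path, boolean liveObjectsOnly) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // live = true dumps only reachable objects, which keeps the file smaller
            bean.dumpHeap(path, liveObjectsOnly);
        }

        public static void main(String[] args) throws Exception {
            dumpHeap("manual-dump.hprof", true);
        }
    }

A dump taken this way finishes while the process is still healthy, so it should open cleanly in MAT or jhat even if the on-OOM dumps never do.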

Entity Classes from Database hangs when "retrieving the keys" from data source (NetBeans/Hibernate)

Inside NetBeans 8.0.2:
Steps: New File > Hibernate > Hibernate Reverse Engineering.
Retrieval of the tables and views begins, gets to 98% and then hangs. It freezes on the same view or the very next one.
I've tried on multiple machines - same result.
Is there a limit on the size of the input data in the wizard? Or maybe a problem with the database itself?
Here is a VisualVM snapshot.
thanks
One guess: try increasing the IDE's heap size. For NetBeans that means editing etc/netbeans.conf (the counterpart of Eclipse's .ini file) and raising the -Xmx and -Xms values. It is a pure guess, but who knows :)
Second suggestion :) Use VisualVM, YourKit, or whatever profiler you have, attach it to the NetBeans process, and find out exactly where it is blocking and on which operation. Then we/you will know more about the nature of the error and how to resolve it :D
I would recommend cleaning the NetBeans cache directory (details), then reading this topic, and you should probably open a bug on the NetBeans Bugzilla.
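To make the first suggestion concrete: in netbeans.conf the JVM flags are carried on the netbeans_default_options line, each one prefixed with -J so it is passed through to the JVM. A sketch with purely illustrative values (keep whatever other flags are already on the line):

    # <NetBeans install dir>/etc/netbeans.conf (illustrative)
    netbeans_default_options="... -J-Xms256m -J-Xmx2048m ..."

Restart NetBeans afterwards so the new limits take effect.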

IntelliJ is very slow when handling big files

I'm using Guidewire Development Studio (an IntelliJ-based IDE), and it is very slow when handling big text files (roughly 1500 lines and above). I tried an out-of-the-box IntelliJ Community Edition as well, but hit the same problem.
When I open those files, it takes a second to type a character, even though I can see clearly that there is plenty of memory left (1441 MB / 3959 MB used). It also quickly sucks up all the memory if I open multiple files (I allocate 4 GB just for IntelliJ). Code completion and the other automatic features are painfully slow as well.
I love IntelliJ, but working in those conditions is just so hard. Is there any way to work around this problem? I have thought of some alternatives, like:
Edit big files in another editor (e.g. Notepad++), then reload them in IntelliJ
Open another small file, copy the bit of code you need there, edit it, then copy it back. This helps because completion and code highlighting are maintained, but it is troublesome
I did turn off all unnecessary plugins, leaving only the necessary ones, but nothing improved much.
I am also wondering whether I can "embed" an outside editor in IntelliJ - Notepad++ or Notepad2, for example. I did my homework and googled around, but found no plugin or configuration that allows that.
Can anyone with experience give me some advice on how to work with big files in IntelliJ (without going mad)?
UPDATE: Through my research I learned that IntelliJ can break down on very large files (20 MB or so). But my files aren't that big - they are only about 100 KB - 1 MB, just very long text.
UPDATE 2: After trying to increase the heap as Sajidkhan advised (I changed both idea64.vmoptions and idea.vmoptions), I realized that somehow IntelliJ doesn't pick up the change; the heap stays capped at 3 GB.
On another note, the slowness is noticeable even when only around 1 GB of heap is in use, so I don't think the problem is memory-related.
After beating around the bush for a while, I found a workaround, kind of.
When I checked other answers to similar questions, I found that people only start having trouble when their files are at least several MB. That didn't make sense, since I hit the problem with files of only a few hundred KB. After more careful checking, I found that the Gosu plugin is the culprit: after I marked my Gosu file as plain text, the speed became normal.
So I guess the problem has something to do with the code highlighter and syntax assistance. For now, the best workaround I have is:
Right-click the file and mark it as plain text.
Close the file and open it again, then edit.
Note: since this applies to all the file types in the Guidewire development suite, you may want to permanently mark some long files as plain text, especially the *.properties files (i.e. the i18n/internationalization files). The benefit of auto-completion just isn't worth the trouble.
Can you try editing idea64.vmoptions in the bin folder? You could set the max heap and max PermGen to higher values (see the sketch below).
Don't forget to restart!
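For what it's worth, a typical tweak looks something like this (values are purely illustrative, and -XX:MaxPermSize only matters on Java 7 or earlier):

    -Xms512m
    -Xmx2048m
    -XX:MaxPermSize=512m
    -XX:ReservedCodeCacheSize=240m

If the file in the bin folder seems to be ignored (as in UPDATE 2 above), check whether a copy of the .vmoptions file in your user configuration directory is overriding it - IntelliJ prefers that copy when one exists.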
Tested on different PCs. Even on fast processors the editor is painfully slow when working with large files (2000+ lines of code).
Eclipse and NetBeans are absolutely OK. Tuning .vmoptions will not help.
This bug is still not fixed: https://intellij-support.jetbrains.com/hc/en-us/community/posts/206999515-PhpStorm-extremely-slow-on-large-source-files
UPDATE: Try the 32-bit version with default settings. The 32-bit IDEA usually runs faster and eats less memory.

BerkeleyDB JE DbDump extraordinary memory usage

We use BDB JE in one of our applications, and DbDump for backing up the database. Then one day something interesting happened: DbDump started throwing an OutOfMemoryError. Postmortem analysis showed that a lot of memory was being used by internal BDB nodes (INs). It looks as if BerkeleyDB reads the whole dataset into memory while backing it up, which seems quite strange to me.
Another strange fact is that this behavior is only visible when the environment is also open by the application itself. When DbDump is the only client with the environment open, everything seems to be fine.
Have you considered using DbBackup instead? I know they do two different things, but if all you're looking to do is back up a database, there's no need to pull it all into memory when simply copying the files elsewhere will do. Or is the command-line ability the deciding factor here?
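For reference, a minimal sketch of the DbBackup hot-backup pattern, assuming BDB JE's com.sleepycat.je.util.DbBackup and an already-open Environment (HotBackup and the directory arguments are made-up names):

    import com.sleepycat.je.Environment;
    import com.sleepycat.je.util.DbBackup;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class HotBackup {
        // Copies the environment's log files to backupDir without materializing records in the cache.
        public static void backup(Environment env, File envHome, File backupDir) throws Exception {
            DbBackup backup = new DbBackup(env);
            backup.startBackup();                      // freeze the set of files to copy
            try {
                for (String name : backup.getLogFilesInBackupSet()) {
                    Path src = new File(envHome, name).toPath();
                    Path dst = new File(backupDir, name).toPath();
                    Files.createDirectories(dst.getParent());
                    Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
                }
            } finally {
                backup.endBackup();                    // re-enable log cleaning
            }
        }
    }

Because only the .jdb log files are copied, nothing has to be pulled through the JE cache, which avoids the IN build-up described above.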

How to reliably detect disk is down on Linux via Java

Is there a good way to detect that particular disk went offline on server on Linux, via Java?
I have an application that, for performance reasons, writes to all disks directly (without any RAID in the middle).
I need to detect at run-time that Linux has unmounted a disk because of a crash, so that I can stop using it. The problem is that each mount point is just a directory on the root filesystem, so without proper detection the application will simply fill up the root partition.
I will appreciate any advice on this.
In Linux, almost everything is accessible through text files. I don't really understand exactly what information you require, but check /proc/diskstats, /proc/mounts, /proc/mdstat (for RAID), etc.
As anyone with sysadmin experience could tell you, disks crashing or otherwise going away have a nasty habit of making any process that touches anything under the mount point wait in uninterruptible sleep. In my experience, this can include things like trying to read /proc/mounts or running the 'df' command.
My recommendation would be to use RAID and, if necessary, invest your way out of the problem. Say, if performance is limited by small random writes, a RAID card with a battery-backed write cache can do wonders.
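Given the uninterruptible-sleep caveat above, one defensive pattern is to probe each data directory with a small write that is bounded by a timeout on a separate thread, so a dead disk marks itself bad instead of hanging the writer. A minimal sketch (DiskProbe and the marker file name are hypothetical, not from the thread):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;

    public class DiskProbe {
        private static final ExecutorService POOL = Executors.newCachedThreadPool();

        // Returns true if a small synced write under dir completes within timeoutMs.
        public static boolean isWritable(File dir, long timeoutMs) {
            Future<Boolean> probe = POOL.submit(() -> {
                File marker = new File(dir, ".disk-probe");
                try (FileOutputStream out = new FileOutputStream(marker)) {
                    out.write(1);
                    out.getFD().sync();   // force the write down to the device
                }
                marker.delete();
                return Boolean.TRUE;
            });
            try {
                return probe.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (Exception e) {       // timeout, I/O failure, interruption
                probe.cancel(true);       // a thread stuck in uninterruptible sleep will stay stuck
                return false;
            }
        }
    }

A probe that times out tells you to stop writing to that directory, which also addresses the root-partition concern; note, though, that per the answer above, even the probing threads can pile up in uninterruptible sleep, so they are deliberately disposable here.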
