I'm using Guidewire Development Studio (an IntelliJ-based IDE), and it is very slow when handling big text files (~1500 lines and above). I tried an out-of-the-box IntelliJ Community Edition as well, but ran into the same problem.
When I open those files, it takes a second to type a character, even though there is clearly plenty of memory left (1441 MB used of 3959 MB). It also quickly sucks up all the memory if I open multiple files (I allocated 4 GB just for IntelliJ). IntelliSense and other automatic features are painfully slow as well.
I love IntelliJ, but working in those conditions is just so hard. Is there any way to work around this problem? I have thought of some alternatives, like:
Edit big files in another editor (e.g. Notepad++), then reload them in IntelliJ.
Open another, smaller file, copy the bit of code I need into it, edit it there, then copy it back. This keeps IntelliSense and code highlighting working, but it is troublesome.
I did turn off all unnecessary plugins, leaving only the essential ones, but nothing improved much.
I am also wondering if I can "embed" an external editor (Notepad++ or Notepad2, for example) inside IntelliJ. I did my homework and googled around, but found no plugin or configuration that allows that.
Can anyone with experience give me some advice on how to work with big files in IntelliJ (without going mad)?
UPDATE: Through my research I learned that IntelliJ can break down on very large files (around 20 MB or so). But my files aren't that big: they are only about 100 KB - 1 MB, just with very long text.
UPDATE 2: After trying to increase the heap memory as Sajidkhan advised (I changed both idea64.vmoptions and idea.vmoptions), I realized that somehow IntelliJ doesn't pick up the change. The heap is stuck at a maximum of 3 GB.
On another note, the slowness is noticeable even when the system is using only around 1 GB of heap, so I don't think the problem is memory-related.
After beating around the bush for a while, I found a workaround, of sorts.
When I checked answers to similar questions, I found that other people only start running into trouble when file sizes reach several MB. That didn't make sense, since I hit the problem with files of only several KB. After more careful checking, I found that the Gosu plugin is the culprit: after I marked my Gosu file as plain text, the speed became normal.
So I guess the problem has something to do with the code highlighting and syntax checking. For now, the best workaround I have is:
Right-click the file and mark it as plain text.
Close the file and open it again, then edit.
Note: since this applies to all file types in the Guidewire development suite, you may want to permanently mark some long files as plain text, especially *.properties files (i.e. i18n/internationalization files). The benefit of code auto-completion just isn't worth the trouble.
Can you try editing idea64.vmoptions in the bin folder? You could set the max heap and max PermGen size to higher values.
Don't forget to restart!
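For example, a sketch of what the relevant lines in idea64.vmoptions might look like (the values are only illustrations, so tune them to your machine, and the PermGen option only applies to Java 7 and earlier):
-Xms512m
-Xmx2048m
-XX:MaxPermSize=512m
Also note that the 32-bit launcher reads idea.vmoptions instead, and some IntelliJ versions prefer a copy of the .vmoptions file in the user's config directory over the one in bin, which can make edits in bin appear to be ignored.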
Tested on different PCs. Even on fast processors the editor is painfully slow when working with large files (2000+ lines of code).
Eclipse and NetBeans are absolutely fine. Tuning .vmoptions will not help.
This bug is still not fixed: https://intellij-support.jetbrains.com/hc/en-us/community/posts/206999515-PhpStorm-extremely-slow-on-large-source-files
UPDATE: Try the 32-bit version with default settings. Usually the 32-bit IDEA works faster and eats less memory.
In my work, we are running into a difficult-to-reproduce OOM issue. Or, more accurately, it is very easy to reproduce on one system, making that system unusable, but difficult to reproduce anywhere else, given the same inputs.
The application is being run as a service using a service wrapper. We did manage to get the configuration changed to launch it with the option of outputting a heap dump file on OOM, but unfortunately the dumps were truncated, most likely because the service wrapper timed out and killed the process as it was writing the file. This is readily apparent, since the max memory is set to 1 GB, yet the hprof files are as small as 700 MB, which is too small to be the entire heap at OOM.
It would take a lot of jumping through hoops to additionally configure the wrapper to give the Java process a longer time to write out the heap, but we are pursuing this using these two options:
wrapper.jvm_exit.timeout=600
wrapper.shutdown.timeout=600
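For context, with a Tanuki-style Java Service Wrapper, the heap-dump JVM option itself is typically passed through numbered wrapper.java.additional entries, along these lines (the indices and dump path are placeholders, not our actual config):
wrapper.java.additional.3=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.4=-XX:HeapDumpPath=D:/dumps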
The question is: is there anything useful I can do with the truncated hprof files I have? Eclipse MAT chokes on them. jhat appears to load them, but then only shows 3 instances of java.lang.Object of size 0 and nothing else. I tried YourKit and it couldn't write its oids file.
It seems to me like these files should have some useful, accessible information in them. Is there a tool that can read what's there?
Thank you for your time!
The best option I have come across so far for analyzing the dump file is a text editor like vim.
Use JProfiler (https://www.ejtechnologies.com/products/jprofiler/overview.html). It's not free, but it has a trial period.
The live memory and CPU view options are your best bet to isolate your issues. It generally runs reasonably well even on large dumps.
OK, this is driving me insane! It used to be infrequent, and now practically every character I type causes Eclipse to go into 'Not Responding' while the CPU rockets towards 100% and stays there for a minute. Sometimes this is accompanied by node.exe taking half the CPU and a LOT of memory. I kill node.exe, and sometimes it stays off, but mostly it comes back.
I've looked up node.exe and can't figure out what it has to do with my application. I'm writing a webapp using Tomcat, Struts, Java, JSP, and jQuery. I disabled every plugin under Preferences -> Startup and Shutdown, with no effect.
Help! I can't develop when every keypress takes a minute!
Take a look at https://bugs.eclipse.org/bugs/show_bug.cgi?id=442049 or https://github.com/Nodeclipse/nodeclipse-1/issues/159
I overcame this by removing <nature>org.eclipse.wst.jsdt.core.jsNature</nature>.
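That <nature> entry lives in the project's .project file; a trimmed sketch of the section to edit (the project name and the other nature shown are just placeholders):
<projectDescription>
  <name>mywebapp</name>
  <natures>
    <nature>org.eclipse.jdt.core.javanature</nature>
    <nature>org.eclipse.wst.jsdt.core.jsNature</nature>  <!-- remove this line -->
  </natures>
</projectDescription>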
But you may have other JSDT-related issues.
And you need to know exactly which process is consuming CPU, and at what rate.
I would really suggest not using Eclipse for Node. Try Notepad++ on Windows, or Sublime or Atom, or something similar...
I am experiencing massive unreliability with my Gephi install after it complained about being out of memory and tried to reset the memory limit itself. That didn't work and the VM wouldn't start, so I manually reset the memory limit to 1024, which didn't work, and then to 512, by editing the config file accessed via my Start menu.
The system is now hugely unreliable: it freezes and crashes every time I try to work with it. I tried abandoning my project (which is not huge, about 11k nodes) in case the project file had become corrupted, and tried round-tripping a checkpoint by reading edge and node lists into a blank project from a "just in case of disaster" CSV edge and node list export I had done. It read the node list OK, but it wouldn't read the edge list and froze up again. The logfile contains lots of warnings about deprecated NetBeans usages and then a final "SEVERE" warning about a Java array index out of bounds, which doesn't sound like something I can do anything about... but hope springs eternal.
In addition to wiping it all out and doing a reinstall, are there any practical tips anyone can offer to help keep Gephi happy?
I am on XP SP3 with up-to-date Java, a dual-core CPU, and 2 GB of RAM.
The config file I edited was on a different system path from the one in the logfile messages: the config file was on a general path, while the logfile was specific to me as a user. I think that is as it should be, as far as I can tell from the docs, but it is something that could potentially be suspicious. I am wondering if my memory allowance reset might not actually have taken effect properly, but I don't know how to check this except via the config file.
I really really really like Gephi - when it works right. (But to do what I need to do today, I'm going to need to go back to R...)
Thanks!
A few tips when you are absolutely memory-starved:
- Minimize the number of attributes on your nodes and edges. Ideally, you'd have none.
- Fine-tune the Gephi RAM settings (see the example after these tips). The Gephi installation page says:
On computers with 2GB of memory, you can set -Xmx1400 to get maximum performance.
If 1024 was too high for your machine and 512 is too low, try intermediate values. And of course, kill any unnecessary processes running on your machine.
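For concreteness, a sketch of the setting being edited, assuming the usual NetBeans-platform layout of Gephi (the file is typically etc/gephi.conf under the install directory; the exact path and default flags may differ by version, so treat this line as an illustration rather than your real config):
default_options="--branding gephi -J-Xms64m -J-Xmx768m"
The -J-Xmx value is the maximum heap; on a 2 GB XP machine a value between 512 and 1024 (such as 768) is about as high as is realistic.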
I'm making an application in Java using Eclipse Indigo. When I run it from Eclipse, Task Manager shows javaw.exe using 50 MB of memory. When I export the application as a runnable .jar and execute the .jar, Task Manager shows javaw.exe using 500 MB.
Why is this? How can I fix it?
Edit: I'm using Windows 7 64-bit, and my system says I have Java 1.7 installed. Apparently the memory problem is caused by a while loop. I'll study what's inside the while loop to find what's causing the problem.
Edit: Problem found. At one point in the while loop, new BufferedImage instances were being created instead of reusing the same BufferedImage.
Without any additional details about your code, I would suggest using a profiler to analyze the problem. I know YourKit and the one that is available for NetBeans are very good.
Once you run your app from the profiler, you should initially look at the objects and listeners created by your application's packages. If the issue is not there, you can expand your search to other packages until you identify things that are growing out of control, and then look at the code that handles those entities.
If you run certain parts of the code multiple times and still see memory being held after that code has stopped running, you might have a leak and may want to consider nulling out or emptying variables/listeners on exit.
That should be a good starting point, but please report your results back so we know how it goes. By the way, what operating system are you using, and what version of Java?
--Luiz
You need to profile your code to get an exact answer, but in my experience, when I see things like this, I usually put it down to garbage collection. For example, I ran the same job twice, giving one run 10 GB and the other 2 GB. Both ran to completion, but the 10 GB run used more memory (and finished faster), while the 2 GB run, I believe, garbage-collected more, so it still completed but took a bit more time with less memory. I'm a bit new to Java, so I may be wrong in putting this down to garbage collection, but I have seen what you are describing.
You need to profile your code (check out JConsole, which is included with the JDK, or VisualVM).
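Both can be started from the JDK's bin directory; a quick sketch (the pid is a placeholder for your application's process id, and jvisualvm is only bundled with some JDK versions):
jconsole <jvm_pid>
jvisualvm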
That sounds most peculiar.
I can think of two possible explanations:
You looked at the wrong javaw.exe instance. Perhaps you looked at the instance that is running Eclipse ... which is likely to be that big, or bigger.
You have (somehow) managed to configure Java to run with a large heap by default. On Linux you could do this with a wrapper script, a shell function or a shell alias. You can do at least the first of those on Windows.
I don't think it is the JAR file itself. AFAIK, you can't set JVM parameters in a JAR file. It is possible that you've somehow included a different version of something in the JAR file, but that's a bit of a stretch ...
If none of these ideas help, try profiling.
Problem found. At one point in the while loop new BufferedImage instances were created, instead of reusing the same BufferedImage.
Ah yes. BufferedImage uses large amounts of out-of-heap memory and that needs to be managed carefully.
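To make that concrete, here is a minimal sketch of the reuse pattern (the image size and the drawing done in the loop are invented purely for illustration): allocate the BufferedImage once outside the loop and draw into it on each iteration, rather than constructing a new one every pass.
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class FrameLoop {
    public static void main(String[] args) {
        // Allocate the image once, outside the loop.
        BufferedImage frame = new BufferedImage(800, 600, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = frame.createGraphics();
        for (int i = 0; i < 1000; i++) {
            // Clear and redraw into the same image instead of
            // doing "frame = new BufferedImage(...)" here.
            g.setColor(Color.BLACK);
            g.fillRect(0, 0, frame.getWidth(), frame.getHeight());
            g.setColor(Color.WHITE);
            g.drawString("frame " + i, 10, 20);
        }
        g.dispose(); // release the graphics context when done
    }
}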
But this doesn't explain why your application used more memory when run from the JAR than when launched from Eclipse ... unless you were telling the application to do different things.
What tools are available to view the output of the built-in JVM profiler? For example, I'm starting my JVM with:
-agentlib:hprof=cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt
This generates output in the hprof ("JAVA PROFILE 1.0.1") format.
I have had success in the past using HPjmeter to view these output files in a reasonable way. However, for whatever reason the files that are generated using the current version of the Sun JVM fail to load in the current version of HPjmeter:
java.lang.NullPointerException
at com.hp.jmeter.f.jb.a(Unknown Source)
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
Exception in thread "HPeprofDataFileReaderThread" java.lang.AssertionError: null pointer exception from loader
at com.hp.jmeter.f.a.a(Unknown Source)
at com.hp.c.a.j.z.run(Unknown Source)
(Why would they obfuscate the bytecode for a free product?!)
Two questions arise from this:
Does anyone know the cause of this HPjmeter error? (EDIT: Yes--see below)
What other tools exist to read hprof files? And why are there none from Sun (are there)?
I know the Eclipse TPTP and other tools can monitor JVMTI data on the fly, but I need a solution that can process the generated hprof files after the fact, since the deployed machine only has a JRE (not a JDK) installed.
EDIT: A very helpful HPjmeter developer replied to my question on an HP ITRC forum indicating that heap=dump needs to be included in the -agentlib options temporarily until a bug in HPjmeter is fixed. This information makes HPjmeter viable again, but I will still leave the question open to see if anyone knows of any other tools.
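In other words, the working launch option becomes something like this (the original option string from above with heap=dump added):
-agentlib:hprof=heap=dump,cpu=times,thread=y,cutoff=0,format=a,file=someFile.hprof.txt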
EDIT: As of version 4.0.00 of HPjmeter (available 05/2009) this bug has been fixed.
YourKit Java Profiler is able to read hprof snapshots (I am not sure if only for memory profiling or for CPU as well). It is not free, but it is by far the best Java profiler I have ever used. It presents the results in a clear, intuitive way and performs well on large data sets. The documentation is also pretty good.
For viewing and analyzing the output of hprof=samples or hprof=cpu I have used PerfAnal with good results. The GUI is a bit spartan, but very useful.
PerfAnal is a free download (GPL, originally an example project in the book Java Programming on Linux).
See this article:
http://www.oracle.com/technetwork/articles/javase/perfanal-137231.html
for more information and the download.
Normally you can just run
java -jar PerfAnal.jar hprof.java.txt
You may need to fiddle with -Xmx for large hprof files.
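For example (the 2 GB heap here is just an illustration):
java -Xmx2g -jar PerfAnal.jar hprof.java.txt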
I am not 100% sure it'll work (it sounds like it will) and I am not sure it'll show it in the format you want... but have you thought about VisualVM?
I believe it'll open up the resulting file.
I have been using Eclipse Memory Analyzer successfully to analyze various performance problems. First of all, install the tool in Eclipse as described on the project web page.
After that, you can create a dump file if you know the pid of the JVM to be analyzed:
jmap -dump:format=b,file=<filename>.hprof <jvm_pid>
Then just import the .hprof file into Eclipse. It has some automatic reports that try to point out what the possible problems might be (for me they usually do not work).
Edit:
Answering the comment: you are right, it is more of a leak finder for Java. For performance problems, I have played with JRat on small projects. It shows the time consumed per method, the number of times a method is called, the hierarchy of calls, etc. The only problem is that, as far as I know, it does not support .hprof files. To use it, you need to run your program with an extra VM argument:
-javaagent:<path>/shiftone-jrat.jar
This will generate a directory with the profile captured by the tool. Then execute:
java -jar shiftone-jrat.jar
And open the trace. Even though it is a simple tool, I think it can be useful.