I'm trying to spawn an entity from my class that extends EntityZombie.
World bukkit; // ...
CraftWorld craft = (CraftWorld)bukkit;
net.minecraft.server.v1_7_R3.World world = craft.getHandle();
Boss boss = new Boss(world);
Location spawn; // ...
boss.setLocation(spawn.getX(), spawn.getY(), spawn.getZ(), spawn.getYaw(), spawn.getPitch());
world.addEntity(boss);
public class Boss extends EntityZombie {
public Boss(World world) {
super(world);
}
// Empty right now, nothing more than the normal zombie
}
The result:
If I run the code above while standing far away, nothing is there when I come back.
If I run it while standing nearby, the client crashes.
Nothing relevant shows up in the console; here's the full client crash report.
---- Minecraft Crash Report ----
// Who set us up the TNT?
Time: 06/03/15 14.57
Description: Ticking screen
java.lang.NullPointerException: Ticking screen
at bjb.a(SourceFile:514)
at fz.a(SourceFile:97)
at fz.a(SourceFile:15)
at ej.a(SourceFile:174)
at bcx.e(SourceFile:78)
at bao.p(SourceFile:1343)
at bao.ak(SourceFile:774)
at bao.f(SourceFile:728)
at net.minecraft.client.main.Main.main(SourceFile:148)
A detailed walkthrough of the error, its code path and all known details is as follows:
---------------------------------------------------------------------------------------
-- Head --
Stacktrace:
at bjb.a(SourceFile:514)
at fz.a(SourceFile:97)
at fz.a(SourceFile:15)
at ej.a(SourceFile:174)
at bcx.e(SourceFile:78)
-- Affected screen --
Details:
Screen name: ~~ERROR~~ NullPointerException: null
-- Affected level --
Details:
Level name: MpServer
All players: 1 total; [bjk['System_'/1, l='MpServer', x=-518,50, y=5,62, z=754,50]]
Chunk stats: MultiplayerChunkCache: 10, 10
Level seed: 0
Level generator: ID 01 - flat, ver 0. Features enabled: false
Level generator options:
Level spawn location: World: (-519,4,756), Chunk: (at 9,0,4 in -33,47; contains blocks -528,0,752 to -513,255,767), Region: (-2,1; contains chunks -64,32 to -33,63, blocks -1024,0,512 to -513,255,1023)
Level time: 45 game time, 45 day time
Level dimension: 0
Level storage version: 0x00000 - Unknown?
Level weather: Rain time: 0 (now: false), thunder time: 0 (now: false)
Level game mode: Game mode: survival (ID 0). Hardcore: false. Cheats: false
Forced entities: 1 total; [bjk['System_'/1, l='MpServer', x=-518,50, y=5,62, z=754,50]]
Retry entities: 0 total; []
Server brand: Spigot
Server type: Non-integrated multiplayer server
Stacktrace:
at bjf.a(SourceFile:289)
at bao.b(SourceFile:1972)
at bao.f(SourceFile:737)
at net.minecraft.client.main.Main.main(SourceFile:148)
-- System Details --
Details:
Minecraft Version: 1.7.10
Operating System: Windows 7 (amd64) version 6.1
Java Version: 1.8.0_25, Oracle Corporation
Java VM Version: Java HotSpot(TM) 64-Bit Server VM (mixed mode), Oracle Corporation
Memory: 136194320 bytes (129 MB) / 186380288 bytes (177 MB) up to 1060372480 bytes (1011 MB)
JVM Flags: 6 total; -XX:HeapDumpPath=MojangTricksIntelDriversForPerformance_javaw.exe_minecraft.exe.heapdump -Xmx1G -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:-UseAdaptiveSizePolicy -Xmn128M
AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
Launched Version: 1.7.10
LWJGL: 2.9.1
OpenGL: GeForce GT 440/PCIe/SSE2 GL version 4.4.0, NVIDIA Corporation
GL Caps: Using GL 1.3 multitexturing.
Using framebuffer objects because OpenGL 3.0 is supported and separate blending is supported.
Anisotropic filtering is supported and maximum anisotropy is 16.
Shaders are available because OpenGL 2.1 is supported.
Is Modded: Probably not. Jar signature remains and client brand is untouched.
Type: Client (map_client.txt)
Resource Packs: []
Current Language: English (US)
Profiler Position: N/A (disabled)
Vec3 Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
Anisotropic Filtering: On (2)
Do I perhaps need to register the custom mob first (for each biome, maybe)? If so, how?
You could download the RemoteEntities API from:
HERE
You can use another custom-mob API if you want, but I recommend this one.
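If you want to stay with plain NMS instead: yes, the usual cause of this kind of client-side "Ticking screen" NullPointerException with a custom subclass is that the class was never registered in EntityTypes, so the spawn packet carries an entity type the client cannot resolve. The internals of EntityTypes are obfuscated and version-dependent, so the following is only a sketch for v1_7_R3; it copies the vanilla EntityZombie mappings onto the custom class by walking the static maps via reflection (the class and method names here are mine, not part of any API, so verify against your exact build):
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.Map;

import net.minecraft.server.v1_7_R3.EntityTypes;

public final class CustomEntityRegistry {

    // Sketch only: for every static Map in EntityTypes that has an entry for
    // the vanilla class, add the same entry under the custom class, so the
    // class -> network ID lookup used when building the spawn packet works.
    @SuppressWarnings({"unchecked", "rawtypes"})
    public static void register(Class<?> customClass, Class<?> vanillaClass) {
        try {
            for (Field field : EntityTypes.class.getDeclaredFields()) {
                if (!Modifier.isStatic(field.getModifiers())
                        || !Map.class.isAssignableFrom(field.getType())) {
                    continue;
                }
                field.setAccessible(true);
                Map map = (Map) field.get(null);
                if (map.containsKey(vanillaClass)) {
                    map.put(customClass, map.get(vanillaClass));
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not register " + customClass, e);
        }
    }
}
You would call this once, e.g. in onEnable(), as CustomEntityRegistry.register(Boss.class, EntityZombie.class), before spawning any Boss. An API like RemoteEntities hides these version-specific details for you.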
I'm working with OpenJDK 11 and a very simple Spring Boot application; just about the only thing it has is the Spring Boot actuator enabled, so I can call /actuator/health etc.
I also have a very simple Kubernetes cluster on GCE, with just one pod running one container (containing this app, of course).
My configuration has some key points I want to highlight. It has resource requests and limits:
resources:
  limits:
    memory: 600Mi
  requests:
    memory: 128Mi
And it has a readiness probe
readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: /actuator/health
    port: 8080
I'm also setting JVM_OPTS like this (which my program obviously uses):
env:
  - name: JVM_OPTS
    value: "-XX:MaxRAM=512m"
The problem
I launch this and it gets OOMKilled in about 3 hours every time!
I'm never calling anything myself; the only calls are the readiness probes Kubernetes makes every 30 seconds. Is that enough to exhaust the memory? I also haven't implemented anything out of the ordinary, just a GET method that says hello world, along with all the Spring Boot imports needed for the actuators.
If I run kubectl top pod XXXXXX I can actually watch it gradually get bigger and bigger.
I have tried a lot of different configurations, tips, etc., but nothing seems to work with a basic Spring Boot app.
Is there a way to actually hard-limit the memory so that Java can raise an OutOfMemory exception? Or to prevent this from happening?
Thanks in advance
EDIT: After 15h running
NAME READY STATUS RESTARTS AGE
pod/test-79fd5c5b59-56654 1/1 Running 4 15h
describe pod says...
State: Running
Started: Wed, 27 Feb 2019 10:29:09 +0000
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Wed, 27 Feb 2019 06:27:39 +0000
Finished: Wed, 27 Feb 2019 10:29:08 +0000
That last span of time is about 4 hours, with only 483 calls to /actuator/health; apparently that was enough to make Java exceed the MaxRAM hint?
EDIT: Almost 17h
it's about to die again
$ kubectl top pod test-79fd5c5b59-56654
NAME CPU(cores) MEMORY(bytes)
test-79fd5c5b59-56654 43m 575Mi
EDIT: losing any hope at 23h
NAME READY STATUS RESTARTS AGE
pod/test-79fd5c5b59-56654 1/1 Running 6 23h
describe pod:
State: Running
Started: Wed, 27 Feb 2019 18:01:45 +0000
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Wed, 27 Feb 2019 14:12:09 +0000
Finished: Wed, 27 Feb 2019 18:01:44 +0000
EDIT: A new finding
Last night I did some interesting reading:
https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
https://banzaicloud.com/blog/java10-container-sizing/
https://medium.com/adorsys/jvm-memory-settings-in-a-container-environment-64b0840e1d9e
TL;DR: I decided to remove the memory limit and start the process again; the result was quite interesting (after about 11 hours running):
NAME CPU(cores) MEMORY(bytes)
test-84ff9d9bd9-77xmh 218m 1122Mi
So... what's going on with that CPU? I was kind of expecting a big number for memory usage, but what's happening with the CPU?
The only thing I can think of is that the GC is running like crazy, thinking that MaxRAM is 512m while the process is actually using more than 1G. I'm wondering: is Java detecting the ergonomics correctly? (I'm starting to doubt it.)
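A quick way to check what the JVM ergonomics actually detected inside the container is a tiny class like the following (just a minimal sketch, run with the same JVM_OPTS as the app):
public class ErgonomicsCheck {
    public static void main(String[] args) {
        // Print what the JVM believes about its environment; compare these
        // numbers with the container limits to see whether cgroups were honored.
        Runtime rt = Runtime.getRuntime();
        System.out.println("availableProcessors = " + rt.availableProcessors());
        System.out.println("maxMemory (MiB)     = " + rt.maxMemory() / (1024 * 1024));
        System.out.println("totalMemory (MiB)   = " + rt.totalMemory() / (1024 * 1024));
    }
}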
To test my theory I set a limit of 512m and deployed the app that way, and I found that from the start there is an unusual CPU load, which has to be the GC running very frequently.
kubectl create ...
limitrange/mem-limit-range created
pod/test created
kubectl exec -it test-64ccb87fd7-5ltb6 /usr/bin/free
total used free shared buff/cache available
Mem: 7658200 1141412 4132708 19948 2384080 6202496
Swap: 0 0 0
kubectl top pod ..
NAME CPU(cores) MEMORY(bytes)
test-64ccb87fd7-5ltb6 522m 283Mi
522m is too much vCPU, so my logical next step was to ensure I was using the most appropriate GC for this case. I changed the JVM_OPTS this way:
env:
  - name: JVM_OPTS
    value: "-XX:MaxRAM=512m -Xmx128m -XX:+UseSerialGC"
...
resources:
  requests:
    memory: 256Mi
    cpu: 0.15
  limits:
    memory: 700Mi
And that brings the vCPU usage back to a reasonable level. After kubectl top pod:
NAME CPU(cores) MEMORY(bytes)
test-84f4c7445f-kzvd5 13m 305Mi
Messing with Xmx on top of MaxRAM obviously affects the JVM, but how is it not possible to control the amount of memory we have in virtualized containers? I know the free command reports the host's available RAM, but OpenJDK should be using cgroups, right?
I'm still monitoring the memory ...
EDIT: A new hope
I did two things. The first was to remove my container limit again, because I want to analyze how much it will grow; I also added a new flag to see how the process uses native memory: -XX:NativeMemoryTracking=summary.
At the beginning everything was normal; the process started out consuming around 300MB according to kubectl top pod, so I let it run for about 4 hours and then...
kubectl top pod
NAME CPU(cores) MEMORY(bytes)
test-646864bc48-69wm2 54m 645Mi
Kind of expected, right? But then I checked the native memory usage:
jcmd <PID> VM.native_memory summary
Native Memory Tracking:
Total: reserved=2780631KB, committed=536883KB
- Java Heap (reserved=131072KB, committed=120896KB)
(mmap: reserved=131072KB, committed=120896KB)
- Class (reserved=203583KB, committed=92263KB)
(classes #17086)
( instance classes #15957, array classes #1129)
(malloc=2879KB #44797)
(mmap: reserved=200704KB, committed=89384KB)
( Metadata: )
( reserved=77824KB, committed=77480KB)
( used=76069KB)
( free=1411KB)
( waste=0KB =0.00%)
( Class space:)
( reserved=122880KB, committed=11904KB)
( used=10967KB)
( free=937KB)
( waste=0KB =0.00%)
- Thread (reserved=2126472KB, committed=222584KB)
(thread #2059)
(stack: reserved=2116644KB, committed=212756KB)
(malloc=7415KB #10299)
(arena=2413KB #4116)
- Code (reserved=249957KB, committed=31621KB)
(malloc=2269KB #9949)
(mmap: reserved=247688KB, committed=29352KB)
- GC (reserved=951KB, committed=923KB)
(malloc=519KB #1742)
(mmap: reserved=432KB, committed=404KB)
- Compiler (reserved=1913KB, committed=1913KB)
(malloc=1783KB #1343)
(arena=131KB #5)
- Internal (reserved=7798KB, committed=7798KB)
(malloc=7758KB #28415)
(mmap: reserved=40KB, committed=40KB)
- Other (reserved=32304KB, committed=32304KB)
(malloc=32304KB #3030)
- Symbol (reserved=20616KB, committed=20616KB)
(malloc=17475KB #212850)
(arena=3141KB #1)
- Native Memory Tracking (reserved=5417KB, committed=5417KB)
(malloc=347KB #4494)
(tracking overhead=5070KB)
- Arena Chunk (reserved=241KB, committed=241KB)
(malloc=241KB)
- Logging (reserved=4KB, committed=4KB)
(malloc=4KB #184)
- Arguments (reserved=17KB, committed=17KB)
(malloc=17KB #469)
- Module (reserved=286KB, committed=286KB)
(malloc=286KB #2704)
Wait, what? 2.1 GB reserved for threads, and 222 MB committed? What is this? I currently don't know, I just saw it...
I need some time to understand why this is happening.
I finally found my issue and I want to share it so others can benefit from it in some way.
As I found in my last edit, I had a thread problem that was causing all the memory consumption over time; specifically, we were using an asynchronous method from a third-party library without properly taking care of those resources (in this case, ensuring those calls were ending correctly).
I was able to detect the issue because I used a memory limit on my Kubernetes deployment from the beginning (which is good practice in production environments) and then monitored my app's memory consumption very closely using tools like jstat, jcmd, VisualVM, kill -3, and most importantly the -XX:NativeMemoryTracking=summary flag, which gave me so much detail in this regard.
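I can't share the library code, but the kind of pattern that produces this NMT signature looks roughly like the sketch below (all names are invented for illustration): an executor created per call and never shut down, so each call leaves a live thread and its native stack behind, which is exactly what NMT reports under "Thread".
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncCallPatterns {

    // Leaky pattern: a new executor (and thread) per call that is never shut down.
    static void leakyCall(Runnable work) {
        ExecutorService perCall = Executors.newSingleThreadExecutor();
        perCall.submit(work);
        // missing: perCall.shutdown(); -> the worker thread stays alive forever
    }

    // Bounded pattern: one shared pool, shut down when the application stops.
    private static final ExecutorService SHARED = Executors.newFixedThreadPool(4);

    static void boundedCall(Runnable work) {
        SHARED.submit(work);
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100; i++) {
            boundedCall(() -> System.out.println("handled"));
        }
        SHARED.shutdown();
        SHARED.awaitTermination(10, TimeUnit.SECONDS);
    }
}
The actual fix in our case was simply making sure the asynchronous calls finished and released their resources, so their threads could terminate.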
I have a setup with 1 master node and 1 slave node.
My issue is when running MapReduce processing: the slave node doesn't seem to be working. Can anyone provide help on how to check, change, and ensure the slave is working?
The config file info can also be found at the URL below:
https://drive.google.com/file/d/1ULEe6k2zYnfQDQUQIbz_xR29WgT1DJhB/view
Here are my observations:
1) When I check CPU utilization, the slave doesn't seem to be working: its CPU is at 0% while running the MapReduce job, while the master is at 44%. Refer to the attachment.
2) When I run the dfs report it shows 2 live nodes, but the cluster web UI shows only 1. Refer to the attachment and below.
3) The total MapReduce processing time is the same with or without the slave.
-------------------------------------------------
Live datanodes (2):
Name: 192.168.249.128:9866 (node-master)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 20587741184 (19.17 GB)
DFS Used: 174785723 (166.69 MB)
Non DFS Used: 60308293 (57.51 MB)
DFS Remaining: 20352647168 (18.95 GB)
DFS Used%: 0.85%
DFS Remaining%: 98.86%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Oct 23 11:17:39 PDT 2018
Last Block Report: Tue Oct 23 11:07:32 PDT 2018
Num of Blocks: 93
Name: 192.168.249.129:9866 (node1)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 20587741184 (19.17 GB)
DFS Used: 85743 (83.73 KB)
Non DFS Used: 33775889 (32.21 MB)
DFS Remaining: 20553879552 (19.14 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.84%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Oct 23 11:17:38 PDT 2018
Last Block Report: Tue Oct 23 11:03:59 PDT 2018
Num of Blocks: 4
You're showing datanodes with the dfs report, not the nodemanagers that actually process the data. In the YARN UI, you will want to take note of the "Active Nodes" counter, which in your case is 1. That would make sense if the master is a namenode and resourcemanager while the slave is a datanode and nodemanager.
Other than that, if you have a non-splittable file, for example a ZIP, or your file is smaller than the block size (128 MB by default), then only one mapper will process it. Plus, it's not guaranteed that mappers (or reducers) will be distributed evenly over all available resources.
Outside of a learning environment, though, 40 GB of storage and 8 GB of RAM would be better spent on multi-threading rather than distributed computing (or on a proper database, i.e. parse the files and load them into a queryable store). Or use Spark or Pig, which don't require Hadoop but are much easier to work with than MapReduce.
I have a Java application that uses a native library for some of its functionality. It uses JNI to control the native library and also receives asynchronous callback from the library. You can think of it as a Java frontend and native backend that communicate with each other.
I am facing a memory leak. Shortly after I start the application, the memory slowly but steadily increases. So I tried to investigate what could be causing the leak.
First, I tried replacing the Java frontend with a simple C++ text interface. That way, the application doesn't use Java in any way - and the leaks stopped. So the problem must be in Java frontend.
So I fired up jvisualvm to see if the heap increases - and it turned out it doesn't. The Java heap size was fairly constant. I even launched the program with -Xmx32m, but the memory kept increasing well past 100m without any OutOfMemoryErrors. In fact, jvisualvm showed the Java heap at about 7m.
So I dug deeper into the program with WinDbg. I analyzed the heap patterns with the !heap -s command and got this:
Heaps on a freshly run program:
0:059> !heap -s
LFH Key : 0x382288b9
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
00330000 00000002 2048 1704 2048 22 71 2 0 0 LFH
005b0000 00001002 1088 212 1088 68 3 2 0 0 LFH
00aa0000 00001002 1088 108 1088 15 7 2 0 0 LFH
004f0000 00001002 15424 12876 15424 1372 89 9 0 1 LFH
...
0:059> !heap -stat -h 004f0000
heap # 004f0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
2b110 20 - 562200 (60.36)
98 166e - d5150 (9.33)
6cd20 1 - 6cd20 (4.77)
...
Heaps on a program that has been running for about half an hour:
0:046> !heap -s
LFH Key : 0x5e47ba72
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
006b0000 00000002 2048 1744 2048 46 92 2 0 0 LFH
00200000 00001002 1088 220 1088 68 3 2 0 0 LFH
00950000 00001002 1088 108 1088 15 7 2 0 0 LFH
001b0000 00001002 47808 31936 47808 1855 102 12 0 0 LFH
...
0:046> !heap -stat -h 001b0000
heap # 001b0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
98 59d1 - 355418 (36.67)
2b110 10 - 2b1100 (29.61)
6cd20 1 - 6cd20 (4.68)
...
Now it can be clearly seen that the leaks are caused by a growing number of blocks with size 98. But when I try to analyze one of the blocks with !heap -p -a, I get:
*** ERROR: Symbol file could not be found. Defaulted to export symbols for jvm.dll
without any stack trace. So the blocks are allocated somewhere inside jvm.dll, and because there are no PDBs for the JVM, I cannot debug the leak further.
I managed to pinpoint where the leak is occurring in my code. All callbacks to the Java frontend pass through one function:
void callback(JNIEnv *env, int stream, double value, char *callbackName){
jclass jni = env->FindClass("nativ/Callbacks");
jmethodID callbackMethodID = env->GetStaticMethodID(jni, callbackName, "(ID)V");
jvalue params[2];
params[0].i = (long)(stream);
params[1].d = value;
env->CallStaticVoidMethodA(jni, callbackMethodID, params); //commenting this out stops the leaks
}
When I comment out that last call, the leaks stop, but then I get no feedback to the frontend.
Could this be a JVM bug? How do I find out?
malloc() internally calls HeapAlloc(). I guess you need to call a 'Release' method to free the memory allocated by the JVM, as long as your library holds references to the JVM's internal state.
I have deployed Java code on two different servers. The code does file-writing operations.
On the local server, the parameters are:
uname -a
SunOS snmi5001 5.10 Generic_120011-14 sun4u sparc SUNW,SPARC-Enterprise
ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 389296
coredump(blocks) unlimited
nofiles(descriptors) 20000
vmemory(kbytes) unlimited
Java Version:
java version "1.5.0_12"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_12-b04)
Java HotSpot(TM) Server VM (build 1.5.0_12-b04, mixed mode)
On a different (let's say MIT) server:
uname -a
SunOS au11qapcwbtels2 5.10 Generic_147440-05 sun4u sparc SUNW,Sun-Fire-15000
ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 256
vmemory(kbytes) unlimited
java -version
java version "1.5.0_32"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_32-b05)
Java HotSpot(TM) Server VM (build 1.5.0_32-b05, mixed mode)
The problem is that the code runs significantly slower on the MIT server.
Because of the difference in nofiles and stack between the two OSes, I thought that changing ulimit -s and ulimit -n might make a difference.
I cannot change the parameters on the MIT server without confirming the problem, so I decreased the ulimit parameters on the local server and retested. But the code finished execution in the same time.
I have no idea which difference between the OS parameters could be causing this.
Any help is appreciated. I will post more parameters if anyone tells me what to look for.
EDIT:
For the MIT server
Number of CPUs: psrinfo -p
24
psrinfo -pv
The physical processor has 2 virtual processors (0 4)
UltraSPARC-IV+ (portid 0 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (1 5)
UltraSPARC-IV+ (portid 1 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (2 6)
UltraSPARC-IV+ (portid 2 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (3 7)
UltraSPARC-IV+ (portid 3 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (32 36)
UltraSPARC-IV+ (portid 32 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (33 37)
UltraSPARC-IV+ (portid 33 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (34 38)
UltraSPARC-IV+ (portid 34 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (35 39)
UltraSPARC-IV+ (portid 35 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (64 68)
UltraSPARC-IV+ (portid 64 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (65 69)
UltraSPARC-IV+ (portid 65 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (66 70)
UltraSPARC-IV+ (portid 66 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (67 71)
UltraSPARC-IV+ (portid 67 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (96 100)
UltraSPARC-IV+ (portid 96 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (97 101)
UltraSPARC-IV+ (portid 97 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (98 102)
UltraSPARC-IV+ (portid 98 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (99 103)
UltraSPARC-IV+ (portid 99 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (128 132)
UltraSPARC-IV+ (portid 128 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (129 133)
UltraSPARC-IV+ (portid 129 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (130 134)
UltraSPARC-IV+ (portid 130 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (131 135)
UltraSPARC-IV+ (portid 131 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (224 228)
UltraSPARC-IV+ (portid 224 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (225 229)
UltraSPARC-IV+ (portid 225 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (226 230)
UltraSPARC-IV+ (portid 226 impl 0x19 ver 0x24 clock 1800 MHz)
The physical processor has 2 virtual processors (227 231)
UltraSPARC-IV+ (portid 227 impl 0x19 ver 0x24 clock 1800 MHz)
kstat cpu_info :
module: cpu_info instance: 231
name: cpu_info231 class: misc
brand UltraSPARC-IV+
chip_id 227
clock_MHz 1800
core_id 231
cpu_fru hc:///component=SB7
cpu_type sparcv9
crtime 587.102844985
current_clock_Hz 1799843256
device_ID 9223937394446500460
fpu_type sparcv9
implementation UltraSPARC-IV+ (portid 227 impl 0x19 ver 0x24 clock 1800 MHz)
pg_id 48
snaptime 19846866.5310415
state on-line
state_begin 1334854522
For the local server I could only get the kstat info:
module: cpu_info instance: 0
name: cpu_info0 class: misc
brand SPARC64-VI
chip_id 1024
clock_MHz 2150
core_id 0
cpu_fru hc:///component=/MBU_A/CPUM0
cpu_type sparcv9
crtime 288.5675516
device_ID 250691889836161
fpu_type sparcv9
implementation SPARC64-VI (portid 1024 impl 0x6 ver 0x93 clock 2150 MHz)
snaptime 207506.8330168
state on-line
state_begin 1354493257
module: cpu_info instance: 1
name: cpu_info1 class: misc
brand SPARC64-VI
chip_id 1024
clock_MHz 2150
core_id 0
cpu_fru hc:///component=/MBU_A/CPUM0
cpu_type sparcv9
crtime 323.4572206
device_ID 250691889836161
fpu_type sparcv9
implementation SPARC64-VI (portid 1024 impl 0x6 ver 0x93 clock 2150 MHz)
snaptime 207506.8336113
state on-line
state_begin 1354493292
Similarly, there are 59 instances in total.
Also, the memory for the local server: vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s4 s1 in sy cs us sy id
0 0 0 143845984 93159232 431 895 1249 30 29 0 2 6 0 -0 1 3284 72450 6140 11 3 86
The memory for the MIT server : vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr m0 m1 m2 m3 in sy cs us sy id
0 0 0 180243376 184123896 81 786 248 15 15 0 0 3 14 -0 4 1854 7563 2072 1 1 98
df -h for MIT server:
Filesystem Size Used Available Capacity Mounted on
/dev/md/dsk/d0 7.9G 6.7G 1.1G 86% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 171G 1.7M 171G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap2.so.1
7.9G 6.7G 1.1G 86% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
7.9G 6.7G 1.1G 86% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
/dev/md/dsk/d3 7.9G 6.6G 1.2G 85% /var
swap 6.0G 56K 6.0G 1% /tmp
swap 171G 40K 171G 1% /var/run
swap 171G 0K 171G 0% /dev/vx/dmp
swap 171G 0K 171G 0% /dev/vx/rdmp
/dev/md/dsk/d5 2.0G 393M 1.5G 21% /home
/dev/vx/dsk/appdg/oravl
2.0G 17M 2.0G 1% /ora
/dev/md/dsk/d60 1.9G 364M 1.5G 19% /apps/stats
/dev/md/dsk/d4 16G 2.1G 14G 14% /var/crash
/dev/md/dsk/d61 1005M 330M 594M 36% /opt/controlm6
/dev/vx/dsk/appdg/oraproductvl
10G 2.3G 7.6G 24% /ora/product
/dev/md/dsk/d63 963M 1.0M 904M 1% /var/opt/app
/dev/vx/dsk/dmldg/appsdmlsvtvl
1.0T 130G 887G 13% /apps/dml/svt
/dev/vx/dsk/appdg/homeappusersvl
20G 19G 645M 97% /home/app/users
/dev/vx/dsk/dmldg/appsdmlmit2vl
20G 66M 20G 1% /apps/dml/mit2
/dev/vx/dsk/dmldg/datadmlmit2vl
1.9T 1.1T 773G 61% /data/dml/mit2
/dev/md/dsk/d62 9.8G 30M 9.7G 1% /usr/openv/netbackup/logs
df -h for local server :
Filesystem Size Used Available Capacity Mounted on
/dev/dsk/c0t0d0s0 20G 7.7G 12G 40% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 140G 1.6M 140G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c0t0d0s5 9.8G 9.3G 483M 96% /var
swap 140G 504K 140G 1% /tmp
swap 140G 80K 140G 1% /var/run
swap 140G 0K 140G 0% /dev/vx/dmp
swap 140G 0K 140G 0% /dev/vx/rdmp
/dev/dsk/c0t0d0s6 9.8G 9.4G 403M 96% /opt
/dev/vx/dsk/eva8k/tlkhome
2.0G 66M 1.8G 4% /tlkhome
/dev/vx/dsk/eva8k/tlkuser4
48G 26G 20G 57% /tlkuser4
/dev/vx/dsk/eva8k/ST82
1.1G 17M 999M 2% /ST_A_82
/dev/vx/dsk/eva8k/tlkuser11
37G 37G 176M 100% /tlkuser11
/dev/vx/dsk/eva8k/oravl97
20G 12G 7.3G 63% /oravl97
/dev/vx/dsk/eva8k/tlkuser5
32G 23G 8.3G 74% /tlkuser5
/dev/vx/dsk/eva8k/mbtlkproj1
2.0G 18M 1.9G 1% /mbtlkproj1
/dev/vx/dsk/eva8k/Oravol98
38G 25G 12G 68% /oravl98
/dev/vx/dsk/eva8k_new/tlkuser15
57G 57G 0K 100% /tlkuser15
/dev/vx/dsk/eva8k/Oravol1
39G 16G 22G 42% /oravl01
/dev/vx/dsk/eva8k/Oravol99
30G 8.3G 20G 30% /oravl99
/dev/vx/dsk/eva8k/tlkuser9
18G 13G 4.8G 73% /tlkuser9
/dev/vx/dsk/eva8k/oravl08
32G 25G 6.3G 81% /oravl08
/dev/vx/dsk/eva8k/oravl07
46G 45G 1.2G 98% /oravl07
/dev/vx/dsk/eva8k/Oravol3
103G 90G 13G 88% /oravl03
/dev/vx/dsk/eva8k_new/tlkuser12
79G 79G 0K 100% /tlkuser12
/dev/vx/dsk/eva8k/Oravol4
88G 83G 4.3G 96% /oravl04
/dev/vx/dsk/eva8k/oravl999
10G 401M 9.0G 5% /oravl999
/dev/vx/dsk/eva8k_new/tlkuser14
54G 39G 15G 73% /tlkuser14
/dev/vx/dsk/eva8k/Oravol2
85G 69G 14G 84% /oravl02
/dev/vx/dsk/eva8k/sdkhome
1.0G 17M 944M 2% /sdkhome
/dev/vx/dsk/eva8k/tlkuser7
44G 36G 7.8G 83% /tlkuser7
/dev/vx/dsk/eva8k/tlkproj1
1.0G 17M 944M 2% /tlkproj1
/dev/vx/dsk/eva8k/tlkuser3
35G 29G 5.9G 84% /tlkuser3
/dev/vx/dsk/eva8k/tlkuser10
29G 29G 2.7M 100% /tlkuser10
/dev/vx/dsk/eva8k/oravl05
30G 29G 1.2G 97% /oravl05
/dev/vx/dsk/eva8k/oravl06
36G 34G 1.6G 96% /oravl06
/dev/vx/dsk/eva8k/tlkuser6
29G 27G 2.1G 93% /tlkuser6
/dev/vx/dsk/eva8k/tlkuser2
36G 30G 5.8G 84% /tlkuser2
/dev/vx/dsk/eva8k/tlkuser1
66G 49G 16G 75% /tlkuser1
/dev/vx/dsk/eva8k_new/tlkuser13
84G 77G 7.0G 92% /tlkuser13
/dev/vx/dsk/eva8k_new/tlkuser16
44G 37G 6.4G 86% /tlkuser16
/dev/vx/dsk/eva8k/db2
1.0G 593M 404M 60% /opt/db2V8.1
/dev/vx/dsk/eva8k/WebSphere6029
3.0G 2.2G 776M 75% /opt/WebSphere6029
/dev/vx/dsk/eva8k/websphere6
2.0G 88M 1.8G 5% /opt/websphere6
/dev/vx/dsk/eva8k/wli
4.0G 1.4G 2.5G 36% /opt/wli10gR3MP1
/dev/vx/dsk/eva8k/user
2.0G 19M 1.9G 1% /user/telstra/history
dvcinasdm3:/oracle_cdrom/data
576G 576G 206M 100% /oracle_cdrom
dvcinasdm2:/system_kits
822G 818G 4.2G 100% /system_kits
dvcinasdm2:/db_share 295G 283G 13G 96% /db_share
dvcinas2dm2:/system_data/data
315G 283G 32G 90% /system_data
dvcinas2dm2:/ossinfra/data
49G 18G 32G 36% /ossinfra
For the local server, the command /usr/sbin/prtpicl -v | egrep "devfs-path|driver-name|subsystem-id" | nawk '/:subsystem-id/ { print $0; getline; print $0; getline; print $0; }' | nawk -F: '{ print $2 }' gives:
subsystem-id 0x13a1
devfs-path /pci#0,600000/pci#0/pci#8/pci#0/scsi#1
driver-name mpt
subsystem-id 0x1648
devfs-path /pci#0,600000/pci#0/pci#8/pci#0/network#2
driver-name bge
subsystem-id 0x1648
devfs-path /pci#0,600000/pci#0/pci#8/pci#0/network#2,1
driver-name bge
subsystem-id 0xfc11
devfs-path /pci#0,600000/pci#0/pci#8/pci#0,1/SUNW,emlxs#1
driver-name emlxs
subsystem-id 0x125e
devfs-path /pci#3,700000/network
driver-name e1000g
subsystem-id 0x125e
devfs-path /pci#3,700000/network
driver-name e1000g
subsystem-id 0x13a1
devfs-path /pci#10,600000/pci#0/pci#8/pci#0/scsi#1
driver-name mpt
subsystem-id 0x1648
devfs-path /pci#10,600000/pci#0/pci#8/pci#0/network
driver-name bge
subsystem-id 0x1648
devfs-path /pci#10,600000/pci#0/pci#8/pci#0/network
driver-name bge
subsystem-id 0xfc11
devfs-path /pci#10,600000/pci#0/pci#8/pci#0,1/SUNW,emlxs#1
driver-name emlxs
For the MIT server it gives:
subsystem-id 0xfc00
devfs-path /pci#3d,600000/SUNW,emlxs#1
driver-name emlxs
subsystem-id 0xfc00
devfs-path /pci#3d,600000/SUNW,emlxs#1,1
driver-name emlxs
subsystem-id 0xfc00
devfs-path /pci#5d,600000/SUNW,emlxs#1
driver-name emlxs
subsystem-id 0xfc00
devfs-path /pci#5d,600000/SUNW,emlxs#1,1
driver-name emlxs
At the start of the I/O-consuming code, iostat -d c3t50001FE1502613A9d7 5 shows:
1161 37 134 0 0 0 0 0 0 329 24 2
3 2 3 0 0 0 0 0 0 554 71 10
195 26 6 0 0 0 0 0 0 853 108 19
37 6 4 0 0 0 0 0 0 1134 143 10
140 8 7 0 0 0 0 0 0 3689 86 7
173 24 85 0 0 0 0 0 0 9914 74 9
0 0 0 0 0 0 0 0 0 12323 114 2
13 9 41 0 0 0 0 0 0 10609 117 2
0 0 0 0 0 0 0 0 0 10746 72 2
sd0 sd1 sd4 ssd134
kps tps serv kps tps serv kps tps serv kps tps serv
1 0 3 0 0 0 0 0 0 11376 137 2
2 0 10 0 0 0 0 0 0 11980 157 3
231 39 14 0 0 0 0 0 0 10584 140 3
785 175 5 0 0 0 0 0 0 13503 170 2
9 4 32 0 0 0 0 0 0 11597 168 2
7 1 6 0 0 0 0 0 0 11555 106 2
On the MIT server iostat shows :
0.0 460.4 0.0 4029.2 0.4 0.6 0.9 1.2 2 11 c6t5006048452A79BD6d206
0.0 885.2 0.0 8349.3 0.5 0.8 0.6 0.9 3 24 c4t5006048452A79BD9d206
0.0 660.0 0.0 5618.8 0.5 0.7 0.7 1.0 2 18 c6t5006048452A79BD6d206
0.0 779.1 0.0 7408.6 0.3 0.7 0.4 0.8 2 21 c4t5006048452A79BD9d206
0.0 569.8 0.0 4893.9 0.3 0.5 0.5 1.0 2 15 c6t5006048452A79BD6d206
0.0 521.5 0.0 5433.6 0.2 0.5 0.3 0.9 1 16 c4t5006048452A79BD9d206
0.0 362.8 0.0 3134.8 0.2 0.4 0.6 1.1 1 10 c6t5006048452A79BD6d206
So we can see that the kps for the local server is much higher than that of the MIT server during the period of maximum I/O operations.
Conclusions on the local and MIT servers
A quick glance at your machines:
Local server is a small-chassis Sun Enterprise machine on SPARC64 VI, possibly an M4000. You are writing data on an external file system (called eva8k_new) over multipathed PCIe slots using a direct SCSI connection. This machine is 3-5 years old.
MIT server is a SunFire 15000 - an old, mainframe-class Solaris server. It has 12 dual-core UltraSPARC IV+ CPUs in the hardware partition that you are running in (the physical chassis can be logically split into several different hardware partitions which cannot see each other at all). You are writing to a SAN over a 1Gb/s or 2Gb/s fibre channel (the LUN might be called dmldg) on multipathed PCI slots. This machine is at least 7 years old, but the technology is 10 years old.
The storage system used on the local and MIT servers are both external. The performance of the storage is dependent on a number of factors including the I/O speed of the physical interface (PCI vs. PCIe) and the interconnect (1 or 2Gb/s fibre channel on the SunFire). This article explains how to get this information.
Theoretical performance problems
The performance of your application may be gated on one of several bottlenecks (assuming no code problems and network latencies/bottlenecks):
CPU: If your CPU were faster, you could get the application to go faster.
Single-threaded: Some applications are bottlenecked on a single thread, and so adding threads/cores does not improve performance.
Multi-thread capable: Sometimes, if the application is multi-threaded, adding more threads/cores can improve performance
Storage IO bandwidth or IOPS: The application is reading from or writing to storage system (including disks). Adding disks, changing RAID type, adding disk cache and other things may improve IO or IOPS; alternatively you might change to another storage subsystem.
IO bandwidth is the maximum amount of data that can pass in a given second, which may saturate first if streaming data to or from a disk
IOPS (IO operations per second) is the maximum number of IO commands (read or write) that can be processed per second. Typically this saturates first for processes that are searching for or in files, or (re)writing small chunks.
Looking at your issue, we can do a quick check:
If the issue is CPU, then:
You should see the CPU utilisation for the java process in top to be very high during program execution (90-99%)
The problem is not likely threading, because the SunFire MIT server has a good number of cores available; the problem would therefore be single-thread performance.
The UltraSPARC IV+ is quite a lot slower than the SPARC64 VI. This is easily a noticeable drop, so this might be the reason the MIT server is slower.
If the issue is IO, then:
You will see the CPU utilization for the java process in top to be low (probably 50% or lower, but possibly as high as 80% or so as a rule of thumb)
You will see the IO to the disk subsystem using iostat saturate - that is immediately rise to a fixed number and not really 'peak' over this number. The following options might be useful: iostat -d <disk> 5. The throughput value and number of operations/sec will be higher on the local server, and lower on the MIT server
You need to speak to the administrator to see if a faster storage system is available for the MIT server.
All the above is assuming that other processes on the servers are not interfering with the operation of your program - clearly another high-cpu process or one writing a lot to the same disk will affect the performance greatly.
Conclusions
From the CPU data you provide, there is no evidence of a CPU bottleneck.
From the iostat data you provide, as you comment, the IO on the SunFire is significantly below that of the local server. This is likely the result of the attached storage, namely at least one of:
Lower performance of PCI vs. PCIe in the local server
Probable 1Gb/s fibre channel slower than the (possibly faster) SCSI attached storage on the local server
Older and slower disks on the SunFire vs. the local attached storage
(Note that the same SAN appears connected to the local server, so this could be tested).
With clear evidence of the hardware being the cause of the performance difference, there is little that can be done.
Some things may improve the general performance of the application, though. It's a good idea to run a Java profiler on the application. Examples include Netbeans and JProfiler.
The profiler will identify which IO operations are the problem. You might be able to:
Generally improve the algorithm at the bottleneck
Use a caching layer to aggregate multiple write operations before writing once
If using the original Java I/O classes (in java.io), you could rewrite the application to use Java NIO
EDIT: Thoughts on a caching layer
Assumption: That the problematic IO operation is either repeatedly writing small chunks to disk and flushing them, or keeps performing random-access write-to-disk operations. Your application may already be streaming to disk efficiently, in which case caching would not be useful.
When you have an expensive or slow operation in an application, you will want to minimize the number of times it is invoked - ideally to the theoretical minimum which hopefully is 1. However your code may not be doing so - for example you are using an OutputStream and writing small chunks to it and flushing to disk. In this case, you may write each disk block (8k) many times, each time with just a little more data.
Instead, you could use a RAM cache to consolidate all the writes; when you know there will be no more writes to the block, then you write it exactly once to disk. For streaming, Java has the BufferedOutputStream for this for simplistic cases. When you obtain the FileOutputStream instance from the File, wrap the FileOutputStream in the BufferedOutputStream and use only the BufferedOutputStream.
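A minimal sketch of that wrapping (the file name and buffer size below are arbitrary):
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BufferedWriteExample {
    public static void main(String[] args) throws IOException {
        // The 64 KB buffer consolidates many small writes; the disk only sees
        // a write when the buffer fills or when flush()/close() is called.
        OutputStream out = new BufferedOutputStream(
                new FileOutputStream("output.dat"), 64 * 1024);
        try {
            for (int i = 0; i < 1000000; i++) {
                out.write(i & 0xFF); // cheap: goes to the in-memory buffer
            }
        } finally {
            out.close(); // flushes the remaining buffered bytes once
        }
    }
}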
If, however, you are performing true random-access writes (eg using a java.io.RandomAccessFile), and moving the file pointer with RandomAccessFile.seek(), you may want to consider writing a write cache in RAM. Precisely what this would look like depends wholly on your file data structure, but you might want to start with a block paging mechanism. Chapter 1 of Java NIO has an introduction to those concepts, but hopefully you either don't need to go there or you find a close match in the NIO API.
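A deliberately simplified sketch of such a write-back block cache (no eviction, no handling of a partial last block, not thread-safe) might look like this:
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

public class BlockWriteCache {
    private static final int BLOCK_SIZE = 8 * 1024;

    private final RandomAccessFile file;
    private final Map<Long, byte[]> dirtyBlocks = new HashMap<Long, byte[]>();

    public BlockWriteCache(RandomAccessFile file) {
        this.file = file;
    }

    // Buffer a single-byte random-access write in RAM instead of seeking and
    // writing to disk immediately.
    public void write(long position, byte value) throws IOException {
        Long blockIndex = Long.valueOf(position / BLOCK_SIZE);
        byte[] block = dirtyBlocks.get(blockIndex);
        if (block == null) {
            block = new byte[BLOCK_SIZE];
            long blockStart = blockIndex.longValue() * BLOCK_SIZE;
            if (blockStart < file.length()) {
                file.seek(blockStart);   // preload existing data so it is not lost
                file.read(block);
            }
            dirtyBlocks.put(blockIndex, block);
        }
        block[(int) (position % BLOCK_SIZE)] = value;
    }

    // Write each dirty block to disk exactly once, then drop it from the cache.
    public void flush() throws IOException {
        for (Map.Entry<Long, byte[]> entry : dirtyBlocks.entrySet()) {
            file.seek(entry.getKey().longValue() * BLOCK_SIZE);
            file.write(entry.getValue());
        }
        dirtyBlocks.clear();
    }
}
Whether this actually helps depends on the write pattern; if most writes touch a block only once, the cache just adds copying.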
If you are concerned about performance, I wouldn't use such an old version of Java. It's quite likely that the OS calls and native code generated for one architecture are sub-optimal. I would expect the newer architecture to suffer.
Can you compare Java 7 between these machines?
The ulimit output suggests the first machine has far more resources. Which CPU models and how much memory do the two machines have?
I have a standalone Java application which has:
-Xmx1024m -Xms1024m -XX:MaxPermSize=256m -XX:PermSize=256m
Over the course of time it hogs more and more memory, starts to swap (and slow down), and has eventually died a number of times (no OOM + dump, it just died; nothing in /var/log/messages).
What I've tried so far:
Heap dumps: live objects take 200-300Mb out of the 1G heap --> OK with the heap
Number of live threads is rather constant (~60-70) --> OK with thread stacks
JMX stops answering at some point (maybe it answers but the timeout is too low)
Turning off swap - it dies faster
strace - everything seems to slow down a bit, the app still hasn't died, and I'm not sure what to look for there
Checking top: VIRT grows to 5.5Gb, RSS to 3.7Gb
Checking vmstat (obviously we start to swap):
--------------------------procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
Sun Jul 22 16:10:26 2012: r b swpd free buff cache si so bi bo in cs us sy id wa st
Sun Jul 22 16:48:41 2012: 0 0 138652 2502504 40360 706592 1 0 169 21 1047 206 20 1 74 4 0
. . .
Sun Jul 22 18:10:59 2012: 0 0 138648 24816 58600 1609212 0 0 124 669 913 24436 43 22 34 2 0
Sun Jul 22 19:10:22 2012: 33 1 138644 33304 4960 1107480 0 0 100 536 810 19536 44 22 23 10 0
Sun Jul 22 20:10:28 2012: 54 1 213916 26928 2864 578832 3 360 100 710 639 12702 43 16 30 11 0
Sun Jul 22 21:10:43 2012: 0 0 629256 26116 2992 467808 84 176 278 1320 1293 24243 50 19 29 3 0
Sun Jul 22 22:10:55 2012: 4 0 772168 29136 1240 165900 203 94 435 1188 1278 21851 48 16 33 2 0
Sun Jul 22 23:10:57 2012: 0 1 2429536 26280 1880 169816 6875 6471 7081 6878 2146 8447 18 37 1 45 0
sar also shows steady system% growth = swapping:
15:40:02 CPU %user %nice %system %iowait %steal %idle
17:40:01 all 51.00 0.00 7.81 3.04 0.00 38.15
19:40:01 all 48.43 0.00 18.89 2.07 0.00 30.60
20:40:01 all 43.93 0.00 15.84 5.54 0.00 34.70
21:40:01 all 46.14 0.00 15.44 6.57 0.00 31.85
22:40:01 all 44.25 0.00 20.94 5.43 0.00 29.39
23:40:01 all 18.24 0.00 52.13 21.17 0.00 8.46
12:40:02 all 22.03 0.00 41.70 15.46 0.00 20.81
Checking pmap gives the following largest contributors:
000000005416c000 1505760K rwx-- [ anon ]
00000000b0000000 1310720K rwx-- [ anon ]
00002aaab9001000 2079748K rwx-- [ anon ]
Trying to correlate the addresses I got from pmap with the output dumped by strace gave me no matches.
Adding more memory is not practical (it just makes the problem appear later).
Switching JVMs is not possible (the environment is not under our control).
And the question is:
What else can I try to track down the problem's cause or try to work around it?
Something in your JVM is using an "unbounded" amount of non-Heap memory. Some possible candidates are:
Thread stacks.
Native heap allocated by some native code library.
Memory-mapped files.
The first possibility will show up as a large (and increasing) number of threads when you take a thread stack dump. (Just check it ... OK? A minimal way to watch the thread count is sketched below.)
The second one you can (probably) eliminate if your application (or some 3rd-party library it uses) doesn't use any native libraries.
The third one you can eliminate if your application (or some 3rd-party library it uses) doesn't use memory-mapped files.
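For that first check, a minimal sketch that just watches the live thread count from inside the application (you could also read the same values over JMX):
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountLogger {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            // A steadily growing count points at leaked threads (and their stacks).
            System.out.println("live threads = " + threads.getThreadCount()
                    + ", peak = " + threads.getPeakThreadCount());
            Thread.sleep(60000);
        }
    }
}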
I would guess that the reason that you are not seeing OOME's is that your JVM is being killed by the Linux OOM killer. It is also possible that the JVM is bailing out in native code (e.g. due to a malloc failure not being handled properly), but I'd have thought that a JVM crash dump would be the more likely outcome ...
The problem was in an attached profiler library - it recorded CPU calls/allocation sites and thus required memory to store them.
So, human factor here :)
There is a known problem with Java and glibc >= 2.10 (includes Ubuntu >= 10.04, RHEL >= 6).
The cure is to set this env. variable:
export MALLOC_ARENA_MAX=4
There is an IBM article about setting MALLOC_ARENA_MAX
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
This blog post says:
"resident memory has been known to creep in a manner similar to a memory leak or memory fragmentation."
Search for MALLOC_ARENA_MAX on Google or SO for more references.
You might also want to tune other malloc options to optimize for low fragmentation of allocated memory:
# tune glibc memory allocation, optimize for low fragmentation
# limit the number of arenas
export MALLOC_ARENA_MAX=2
# disable dynamic mmap threshold, see M_MMAP_THRESHOLD in "man mallopt"
export MALLOC_MMAP_THRESHOLD_=131072
export MALLOC_TRIM_THRESHOLD_=131072
export MALLOC_TOP_PAD_=131072
export MALLOC_MMAP_MAX_=65536