I have a setup with 1 master node and 1 slave node.
My issue occurs when running MapReduce jobs: the slave node doesn't seem to be doing any work. Can anyone help me check, change, and ensure the slave is working?
The config file details can also be found at the URL below:
https://drive.google.com/file/d/1ULEe6k2zYnfQDQUQIbz_xR29WgT1DJhB/view
Here are my observations:
1) When I check CPU utilization, the slave doesn't seem to be working: its CPU usage is at 0% while the MapReduce job is running, whereas the master is at 44%. Refer to the attachment.
2) When I run the DFS report it shows 2 live nodes, but the cluster web UI shows only 1. Refer to the attachment and the output below.
3) The total MapReduce processing time is the same with or without the slave.
-------------------------------------------------
Live datanodes (2):
Name: 192.168.249.128:9866 (node-master)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 20587741184 (19.17 GB)
DFS Used: 174785723 (166.69 MB)
Non DFS Used: 60308293 (57.51 MB)
DFS Remaining: 20352647168 (18.95 GB)
DFS Used%: 0.85%
DFS Remaining%: 98.86%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Oct 23 11:17:39 PDT 2018
Last Block Report: Tue Oct 23 11:07:32 PDT 2018
Num of Blocks: 93
Name: 192.168.249.129:9866 (node1)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 20587741184 (19.17 GB)
DFS Used: 85743 (83.73 KB)
Non DFS Used: 33775889 (32.21 MB)
DFS Remaining: 20553879552 (19.14 GB)
DFS Used%: 0.00%
DFS Remaining%: 99.84%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Oct 23 11:17:38 PDT 2018
Last Block Report: Tue Oct 23 11:03:59 PDT 2018
Num of Blocks: 4
You're showing DataNodes with the dfsadmin report, not the NodeManagers that actually process the data. In the YARN UI, you will want to take note of the "Active Nodes" counter, which in your case is 1. That would make sense if the master is a NameNode and ResourceManager while the slave is a DataNode and NodeManager.
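As a quick check (a sketch; these are standard Hadoop/YARN CLI commands, though the output format varies by version), you can compare the NodeManagers the ResourceManager knows about with the DataNodes you already listed:
yarn node -list -all     # NodeManagers and their state (RUNNING, LOST, ...)
hdfs dfsadmin -report    # DataNodes, which is what your output above shows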
Other than that, if you have a non-splittable file, for example a ZIP, or your file is smaller than the block size (128 MB by default), then only one mapper will process it. Plus, it's not guaranteed that mappers (or reducers) will be distributed evenly over all available resources.
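To see whether your input can even produce more than one map task, you can check its size and block layout; a rough sketch (the path is a placeholder):
hdfs dfs -stat "size %b bytes, block size %o" /user/hadoop/input/data.txt
hdfs fsck /user/hadoop/input/data.txt -files -blocks    # lists the blocks the file occupies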
Outside of a learning environment, though, 40 GB of storage and 8 GB of RAM would be better spent on multithreading rather than distributed computing (or on a proper database, i.e. parse the files and load them into a queryable store). Or use Spark or Pig, which don't require Hadoop, but are much easier to work with than MapReduce.
Related
I'm working with OpenJDK 11 and a very simple Spring Boot application; pretty much the only thing it has is the Spring Boot Actuator enabled, so I can call /actuator/health etc.
I also have a very simple Kubernetes cluster on GCE, with just one pod running one container (containing this app, of course).
My configuration has some key points I want to highlight; it has resource requests and limits:
resources:
  limits:
    memory: 600Mi
  requests:
    memory: 128Mi
And it has a readiness probe
readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: /actuator/health
    port: 8080
I'm also setting JVM_OPTS like this (which my program obviously uses):
env:
  - name: JVM_OPTS
    value: "-XX:MaxRAM=512m"
The problem
I launch this and it gets OOMKilled in about 3 hours every time!
I'm never calling anything myself; the only call is the readiness probe that Kubernetes makes every 30 seconds, and that is enough to exhaust the memory? I have also not implemented anything out of the ordinary, just a GET method that says hello world, along with all the Spring Boot imports needed to have the actuators.
If I run kubectl top pod XXXXXX I can actually see it gradually getting bigger and bigger.
I have tried a lot of different configurations, tips, etc., but nothing seems to work with a basic Spring Boot app.
Is there a way to actually hard-limit the memory so that Java can raise an OutOfMemoryError? Or to prevent this from happening?
Thanks in advance
EDIT: After 15h running
NAME READY STATUS RESTARTS AGE
pod/test-79fd5c5b59-56654 1/1 Running 4 15h
describe pod says...
State: Running
Started: Wed, 27 Feb 2019 10:29:09 +0000
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Wed, 27 Feb 2019 06:27:39 +0000
Finished: Wed, 27 Feb 2019 10:29:08 +0000
That last span of time is about 4 hours, with only 483 calls to /actuator/health; apparently that was enough to make Java exceed the MaxRAM hint?
EDIT: Almost 17h
It's about to die again:
$ kubectl top pod test-79fd5c5b59-56654
NAME CPU(cores) MEMORY(bytes)
test-79fd5c5b59-56654 43m 575Mi
EDIT: Losing hope at 23h
NAME READY STATUS RESTARTS AGE
pod/test-79fd5c5b59-56654 1/1 Running 6 23h
describe pod:
State: Running
Started: Wed, 27 Feb 2019 18:01:45 +0000
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Wed, 27 Feb 2019 14:12:09 +0000
Finished: Wed, 27 Feb 2019 18:01:44 +0000
EDIT: A new finding
Yesterday night I was doing some interesting reading:
https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
https://banzaicloud.com/blog/java10-container-sizing/
https://medium.com/adorsys/jvm-memory-settings-in-a-container-environment-64b0840e1d9e
TL;DR: I decided to remove the memory limit and start the process again; the result was quite interesting (after about 11 hours of running):
NAME CPU(cores) MEMORY(bytes)
test-84ff9d9bd9-77xmh 218m 1122Mi
So... what's with that CPU? I was kind of expecting a big number for memory usage, but what's going on with the CPU?
The one thing I can think of is that the GC is running like crazy, thinking that MaxRAM is 512m while it is actually using more than 1G. I'm wondering: is Java detecting the ergonomics correctly? (I'm starting to doubt it.)
To test my theory I set a limit of 512m and deployed the app that way, and I found that from the start there is an unusual CPU load, which has to be the GC running very frequently.
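To confirm that, something like this should show the GC activity directly inside the container (just a sketch; it assumes the JDK tools are present in the image and <PID> is the Java process id inside the pod):
kubectl exec -it test-64ccb87fd7-5ltb6 -- jstat -gcutil <PID> 1000
# if the FGC/FGCT columns climb quickly, the collector really is running non-stop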
kubectl create ...
limitrange/mem-limit-range created
pod/test created
kubectl exec -it test-64ccb87fd7-5ltb6 /usr/bin/free
total used free shared buff/cache available
Mem: 7658200 1141412 4132708 19948 2384080 6202496
Swap: 0 0 0
kubectl top pod ..
NAME CPU(cores) MEMORY(bytes)
test-64ccb87fd7-5ltb6 522m 283Mi
522m is too much vCPU, so my logical next step was to ensure I was using the most appropriate GC for this case. I changed the JVM_OPTS this way:
env:
  - name: JVM_OPTS
    value: "-XX:MaxRAM=512m -Xmx128m -XX:+UseSerialGC"
...
resources:
  requests:
    memory: 256Mi
    cpu: 0.15
  limits:
    memory: 700Mi
And that brings the vCPU usage back to a reasonable level; after kubectl top pod:
NAME CPU(cores) MEMORY(bytes)
test-84f4c7445f-kzvd5 13m 305Mi
Messing with Xmx on top of MaxRAM is obviously affecting the JVM, but how is it not possible to control the amount of memory we have in virtualized containers? I know the free command will report the host's available RAM, but OpenJDK should be using cgroups, right?
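A quick way to see what the ergonomics actually resolved to inside the container (a sketch; the flags exist in OpenJDK 11, but the exact output fields may differ between builds):
kubectl exec -it test-79fd5c5b59-56654 -- java -XX:+PrintFlagsFinal -version | grep -Ei 'maxheapsize|maxram'
# MaxHeapSize is the default heap the JVM derived from MaxRAM and/or the detected cgroup memory limit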
I'm still monitoring the memory ...
EDIT: A new hope
I did two things: the first was to remove my container limit again, because I want to analyze how much it will grow; I also added a new flag to see how the process is using native memory: -XX:NativeMemoryTracking=summary.
At the beginning everything was normal; the process started out consuming about 300MB according to kubectl top pod, so I let it run for about 4 hours and then...
kubectl top pod
NAME CPU(cores) MEMORY(bytes)
test-646864bc48-69wm2 54m 645Mi
Kind of expected, right? But then I checked the native memory usage:
jcmd <PID> VM.native_memory summary
Native Memory Tracking:
Total: reserved=2780631KB, committed=536883KB
- Java Heap (reserved=131072KB, committed=120896KB)
(mmap: reserved=131072KB, committed=120896KB)
- Class (reserved=203583KB, committed=92263KB)
(classes #17086)
( instance classes #15957, array classes #1129)
(malloc=2879KB #44797)
(mmap: reserved=200704KB, committed=89384KB)
( Metadata: )
( reserved=77824KB, committed=77480KB)
( used=76069KB)
( free=1411KB)
( waste=0KB =0.00%)
( Class space:)
( reserved=122880KB, committed=11904KB)
( used=10967KB)
( free=937KB)
( waste=0KB =0.00%)
- Thread (reserved=2126472KB, committed=222584KB)
(thread #2059)
(stack: reserved=2116644KB, committed=212756KB)
(malloc=7415KB #10299)
(arena=2413KB #4116)
- Code (reserved=249957KB, committed=31621KB)
(malloc=2269KB #9949)
(mmap: reserved=247688KB, committed=29352KB)
- GC (reserved=951KB, committed=923KB)
(malloc=519KB #1742)
(mmap: reserved=432KB, committed=404KB)
- Compiler (reserved=1913KB, committed=1913KB)
(malloc=1783KB #1343)
(arena=131KB #5)
- Internal (reserved=7798KB, committed=7798KB)
(malloc=7758KB #28415)
(mmap: reserved=40KB, committed=40KB)
- Other (reserved=32304KB, committed=32304KB)
(malloc=32304KB #3030)
- Symbol (reserved=20616KB, committed=20616KB)
(malloc=17475KB #212850)
(arena=3141KB #1)
- Native Memory Tracking (reserved=5417KB, committed=5417KB)
(malloc=347KB #4494)
(tracking overhead=5070KB)
- Arena Chunk (reserved=241KB, committed=241KB)
(malloc=241KB)
- Logging (reserved=4KB, committed=4KB)
(malloc=4KB #184)
- Arguments (reserved=17KB, committed=17KB)
(malloc=17KB #469)
- Module (reserved=286KB, committed=286KB)
(malloc=286KB #2704)
Wait, what? 2.1 GB reserved for threads? And 222 MB being used? What is this? I currently don't know, I just saw it...
I need time to understand why this is happening.
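A rough way to see where all those threads come from would be something like this (a sketch, assuming the JDK tools are available next to the process):
jcmd <PID> Thread.print > /tmp/threads.txt          # full thread dump with names and stacks
grep -c 'java.lang.Thread.State' /tmp/threads.txt   # rough count of live Java threads
grep '^"' /tmp/threads.txt | cut -d'"' -f2 | tr -d '0-9' | sort | uniq -c | sort -rn | head
# the last line groups thread names (ignoring numeric suffixes); the biggest group usually points at the pool or library creating them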
I finally found my issue and I want to share it so others can benefit in some way from this.
As I found in my last edit, I had a thread problem that was causing all the memory consumption over time; specifically, we were using an asynchronous method from a third-party library without properly taking care of those resources (ensuring those calls were ending correctly, in this case).
I was able to detect the issue because I used a memory limit on my Kubernetes deployment from the beginning (which is good practice in production environments), and then I monitored my app's memory consumption very closely using tools like jstat, jcmd, VisualVM, kill -3, and most importantly the -XX:NativeMemoryTracking=summary flag, which gave me a lot of detail in this regard.
I am running a Spark job on a Hadoop cluster, and the job fails occasionally with this exception:
exception : Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, begin > end in range (begin, end): (1494159709088, 1494159706071)
The job ran successfully on the rerun.
After searching on Google, it seems it might be clock skew between the Oozie server host and the launcher host.
Is there a way I can check if there is clock skew? Or how can I check the time on all the nodes to see whether they are in sync or not?
Thanks
ntptime command output :
ntp_gettime() returns code 0 (OK)
time dcb9b19b.a2328f64 Sun, May 7 2017 14:45:47.633, (.633584090),
maximum error 434990 us, estimated error 815 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
modes 0x0 (),
offset 176.871 us, frequency -25.666 ppm, interval 1 s,
maximum error 434990 us, estimated error 815 us,
status 0x2001 (PLL,NANO),
time constant 10, precision 0.001 us, tolerance 500 ppm,
ntpstat command output:
synchronised to NTP server (174.68.168.57) at stratum 3
time correct to within 77 ms
polling server every 1024 s
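Something like this is roughly what I have in mind for comparing all the nodes at once (a sketch; the hostnames are placeholders and it assumes passwordless ssh to each node):
for h in node1 node2 node3; do
  echo -n "$h: "
  ssh "$h" 'date -u +%s.%N'
done
# comparing the epoch timestamps (including the Oozie server host) should reveal any skew;
# a difference of a few seconds would explain the "begin > end in range" error above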
I am having a problem with Tomcat since switching to a different package provider (Bitnami -> official Debian).
Someone seems to be hitting our servers with a request (with malicious intent):
59.111.29.6 - - [04/Feb/2017:16:17:58 +0000] "-" 400 -
where "-" is the request path, which coincides with
Feb 04, 2017 4:17:58 PM org.apache.coyote.http11.AbstractHttp11Processor process
INFO: Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
which coincides with the increased CPU usage.
The server status shows the following:
JVM
Free memory: 355.58 MB  Total memory: 833.13 MB  Max memory: 2900.00 MB

Memory Pool      Type             Initial    Total      Maximum     Used
Eden Space       Heap memory      34.12 MB   229.93 MB  800.00 MB   12.47 MB (1%)
Survivor Space   Heap memory      4.25 MB    28.68 MB   100.00 MB   2.22 MB (2%)
Tenured Gen      Heap memory      85.37 MB   574.51 MB  2000.00 MB  462.84 MB (23%)
Code Cache       Non-heap memory  2.43 MB    7.00 MB    48.00 MB    6.89 MB (14%)
Perm Gen         Non-heap memory  128.00 MB  128.00 MB  512.00 MB   52.57 MB (10%)

"http-nio-8080"
Max threads: 200  Current thread count: 10  Current threads busy: 3  Keep-alive sockets count: 1
Max processing time: 301 ms  Processing time: 71.068 s  Request count: 10021  Error count: 2996  Bytes received: 0.00 MB  Bytes sent: 3.18 MB

Stage  Time              B Sent  B Recv  Client (Forwarded)  Client (Actual)  VHost            Request
F      1486364749526 ms  0 KB    0 KB    185.40.4.169        185.40.4.169     ?                ? ? ?
F      1486364749526 ms  0 KB    0 KB    185.40.4.169        185.40.4.169     ?                ? ? ?
R      ?                 ?       ?       ?                   ?                ?                ?
S      36 ms             0 KB    0 KB    106.51.39.130       106.51.39.130    104.197.119.177  GET /manager/status?org.apache.catalina.filters.CSRF_NONCE=072F9F6884D94C5D7B30D1D34CE61BD9 HTTP/1.1
R      ?                 ?       ?       ?                   ?                ?                ?

P: Parse and prepare request  S: Service  F: Finishing  R: Ready  K: Keepalive
So it doesn't seem like an out of memory issue (but I could be wrong).
How can I stop someone from making the request in the first place to avoid the issues I'm facing? My webapp running on Tomcat restricts HTTP methods to GET/POST, but how can I configure Tomcat as a whole to restrict them?
I would advise you to obtain a thread dump of your server:
Isolate the PID of the Tomcat server using:
jps -l
Obtain a thread dump using:
kill -3 PID
or jstack PID
Then check the thread dump; you should find the cause of the hogging thread.
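For example, a minimal way to capture and skim the dump (jstack ships with the JDK; the file path is just an example):
jstack <PID> > /tmp/tomcat-threads.txt                        # capture the dump to a file
grep -B 1 -A 10 'RUNNABLE' /tmp/tomcat-threads.txt | less     # focus on the threads actually burning CPU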
I have a 2-node cluster (Hadoop 2.6.0) with a master (also acting as a slave) and 1 slave listed in etc/hadoop/slaves on the master machine.
After calling
start-yarn.sh
jps shows
:~$ jps
11821 SecondaryNameNode
11114 NameNode
12037 ResourceManager
11432 DataNode
12674 Jps
12363 NodeManager
and within a minute the NodeManager is gone, with a BindException: Address already in use in the log file (pasted below). Port 8040 is open and used by the ResourceManager.
Log file on master:
2014-12-19 17:21:25,185 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NodeManager metrics system...
2014-12-19 17:21:25,186 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system stopped.
2014-12-19 17:21:25,186 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system shutdown complete.
2014-12-19 17:21:25,186 FATAL org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:139)
at org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.createServer(ResourceLocalizationService.java:356)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.serviceStart(ResourceLocalizationService.java:334)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceStart(ContainerManagerImpl.java:450)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:264)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:463)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:509)
Caused by: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:720)
at org.apache.hadoop.ipc.Server.bind(Server.java:424)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:573)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2205)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:931)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:537)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:512)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:776)
at org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.createServer(RpcServerFactoryPBImpl.java:169)
at org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:132)
... 13 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:407)
... 21 more
2014-12-19 17:21:25,189 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG
dfsreport
:~$ hdfs dfsadmin -report
Configured Capacity: 216116072448 (201.27 GB)
Present Capacity: 55109070848 (51.32 GB)
DFS Remaining: 55109021696 (51.32 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (2):
Name: 192.168.179.3:50010 (slave)
Hostname: xyz
Decommission Status : Normal
Configured Capacity: 83369902080 (77.64 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 63215587328 (58.87 GB)
DFS Remaining: 20154290176 (18.77 GB)
DFS Used%: 0.00%
DFS Remaining%: 24.17%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Name: 192.168.179.58:50010 (master)
Hostname: abc
Decommission Status : Normal
Configured Capacity: 132746170368 (123.63 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 97791414272 (91.08 GB)
DFS Remaining: 34954731520 (32.55 GB)
DFS Used%: 0.00%
DFS Remaining%: 26.33%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
BindException - Address already in use - this indicates that port 8040 is busy and in use by some other process. You can overcome this by changing the default YARN localizer port in etc/hadoop/yarn-site.xml as below:
<property>
  <name>yarn.nodemanager.localizer.address</name>
  <value>192.168.179.58:10200</value>
</property>
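Before (or instead of) moving the port, it is worth confirming which process is actually holding 8040 on that node; a quick check (assuming netstat or lsof is installed):
sudo netstat -tlnp | grep 8040
# or, if lsof is available:
sudo lsof -i :8040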
I have a concurrent system with many machines/nodes involved. Each machine runs several JVMs doing different stuff. It is a "layered" architecture where each layer consists of many JVMs running across the machines. Basically the top-layer JVM receives input from the outside via files, parses the input, and sends it as many small records for "storage" in layer two. Layer two doesn't actually persist the data itself; it persists it in layer three (HBase and Solr), and HBase doesn't actually persist it itself either, since it sends it to layer four (HDFS) for persistence.
Most of the communication among the layers is synchronous, so of course it ends up with a lot of threads waiting for the lower layers to complete. But I would expect those waiting threads to be "free" with respect to CPU usage.
I see a very high iowait (%wa in top) though - something like 80-90% iowait and only 10-20% sys/usr CPU usage. The system seems exhausted - slow to log in via ssh and slow to respond to commands, etc.
My question is whether all those JVM threads waiting for lower layers to complete can cause this. Isn't waiting for responses (sockets) supposed to be "free"? Does it matter, with respect to this, whether the different layers use blocking or non-blocking (NIO) I/O? In exactly what situations does Linux count something as iowait (%wa in top)? Only when all threads in all JVMs on the machine are waiting (counted because there is no other thread to run and do something meaningful in the meantime)? Or do waiting threads also count towards %wa even when there are other processes ready to use the CPU for real processing?
I would really like a thorough explanation of how it works and how to interpret this high %wa. At first I guessed that it counted as %wa when all threads were waiting but there was actually plenty of room to do more, so I tried to increase the number of threads expecting to get more throughput, but that didn't happen. So it is a real problem, not just a "visual" problem when looking at top.
The output below is taken from a machine running only HBase and HDFS. It is on machines with HBase and/or HDFS that the problem is showing (most clearly).
--- jps ---
19498 DataNode
19690 HRegionServer
19327 SecondaryNameNode
---- typical top -------
top - 11:13:21 up 14 days, 18:20, 1 user, load average: 4.83, 4.50, 4.25
Tasks: 99 total, 1 running, 98 sleeping, 0 stopped, 0 zombie
Cpu(s): 14.1%us, 4.3%sy, 0.0%ni, 5.4%id, 74.8%wa, 0.0%hi, 1.3%si, 0.0%st
Mem: 7133800k total, 7099632k used, 34168k free, 55540k buffers
Swap: 487416k total, 248k used, 487168k free, 2076804k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
COMMAND
19690 hbase 20 0 4629m 4.2g 9244 S 51 61.7 194:08.84 java
19498 hdfs 20 0 1030m 116m 9076 S 16 1.7 75:29.26 java
---- iostat -kd 1 ----
root#edrxen1-2:~# iostat -kd 1
Linux 2.6.32-29-server (edrxen1-2) 02/22/2012 _x86_64_ (2 CPU)
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
xvda 3.53 3.36 15.66 4279502 19973226
dm-0 319.44 6959.14 422.37 8876213913 538720280
dm-1 0.00 0.00 0.00 912 624
xvdb 229.03 6955.81 406.71 8871957888 518747772
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
xvda 0.00 0.00 0.00 0 0
dm-0 122.00 3852.00 0.00 3852 0
dm-1 0.00 0.00 0.00 0 0
xvdb 105.00 3252.00 0.00 3252 0
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
xvda 0.00 0.00 0.00 0 0
dm-0 57.00 1712.00 0.00 1712 0
dm-1 0.00 0.00 0.00 0 0
xvdb 78.00 2428.00 0.00 2428 0
--- iostat -x ---
Linux 2.6.32-29-server (edrxen1-2) 02/22/2012 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
8.06 0.00 3.29 65.14 0.08 23.43
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
xvda 0.00 0.74 0.35 3.18 6.72 31.32 10.78 0.11 30.28 6.24 2.20
dm-0 0.00 0.00 213.15 106.59 13866.95 852.73 46.04 1.29 14.41 2.83 90.58
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.00 5.78 1.12 0.00
xvdb 0.07 86.97 212.73 15.69 13860.27 821.42 64.27 2.44 25.21 3.96 90.47
--- free -o ----
total used free shared buffers cached
Mem: 7133800 7099452 34348 0 55612 2082364
Swap: 487416 248 487168
IO wait on Linux indicates that processes are blocked on uninterruptible I/O. In practice, it typically means that the process is performing disk access -- in this case, I'd guess one of the following:
hdfs is performing a lot of disk accesses, and it's making other disk access slow as a result. (Checking iostat -x may help, as it'll show an extra "%util" column which indicates what percentage of the time the disk is "busy".)
You're running low on system memory under load, and are ending up dipping into swap sometimes.
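To narrow down which process is generating the disk traffic, a rough sketch (assumes the sysstat and iotop packages are installed):
iostat -x 5       # watch the %util column to find the saturated device
pidstat -d 5      # per-process read/write rates; shows whether the DataNode or the RegionServer dominates
sudo iotop -o     # only list processes currently doing I/O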