Error "Failed to place enough replicas" Is Reported When HDFS Reads or Writes Files
Symptom
When a user writes a file to HDFS, the error message "Failed to place enough replicas: expected…" is reported.
Cause Analysis
- The data receiver of the DataNode is unavailable.
The DataNode log is as follows:
2016-03-17 18:51:44,721 | WARN | org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5386659f | hadoopc1h2:25009:DataXceiverServer: | DataXceiverServer.java:158
java.io.IOException: Xceiver count 4097 exceeds the limit of concurrent xcievers: 4096
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:140)
    at java.lang.Thread.run(Thread.java:745)
A sketch for checking the configured limit is provided after this list.
- The disk space configured for the DataNode is insufficient.
- DataNode heartbeats are delayed.
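The limit of 4096 in the log is set by the HDFS parameter dfs.datanode.max.transfer.threads (the successor of the deprecated dfs.datanode.max.xcievers property), which caps the number of concurrent DataXceiver threads on each DataNode. The following is a minimal sketch, assuming the Hadoop client libraries and the cluster's hdfs-site.xml are on the classpath, for reading the effective value; the class name CheckXceiverLimit is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class CheckXceiverLimit {
    public static void main(String[] args) {
        // HdfsConfiguration loads hdfs-default.xml and hdfs-site.xml from the classpath.
        Configuration conf = new HdfsConfiguration();
        // 4096 is the shipped default and matches the limit reported in the log above.
        int limit = conf.getInt("dfs.datanode.max.transfer.threads", 4096);
        System.out.println("dfs.datanode.max.transfer.threads = " + limit);
    }
}

If the concurrent read/write load on a DataNode exceeds the value printed here, the "Xceiver count ... exceeds the limit" warning is logged and block placement can fail.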
Solution
- If the DataNode data receiver is unavailable, increase the value of the HDFS parameter dfs.datanode.max.transfer.threads on Manager.
- If disk space or CPU resources are insufficient, add DataNodes or free up disk space and CPU resources on the existing DataNodes. The remaining capacity and heartbeat status of each DataNode can be checked with the sketch after this list.
- If the network is faulty, restore the network so that DataNode heartbeats are reported on time.
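To confirm whether low remaining disk space or delayed heartbeats are the problem, the DataNode report can be queried from a client. The following is a minimal sketch, assuming the client runs with the cluster configuration (core-site.xml and hdfs-site.xml) on the classpath and as a user permitted to query HDFS; the class name DataNodeStatusCheck and the output format are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DataNodeStatusCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS is taken from core-site.xml on the classpath.
        try (FileSystem fs = FileSystem.get(conf)) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            long now = System.currentTimeMillis();
            // One entry per DataNode known to the NameNode.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                long remainingGb = dn.getRemaining() / (1024L * 1024 * 1024);
                long secondsSinceHeartbeat = (now - dn.getLastUpdate()) / 1000;
                System.out.printf("%s remaining=%d GB lastHeartbeat=%d s ago%n",
                        dn.getHostName(), remainingGb, secondsSinceHeartbeat);
            }
        }
    }
}

DataNodes with little remaining space, or whose last heartbeat is far in the past, are the ones to investigate first; the same information is also visible on the NameNode web UI.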