ALM-19006 HBase Replication Synchronization Failed

Description

This alarm is generated when disaster recovery (DR) data fails to be synchronized to a standby cluster. It is cleared when DR data is successfully synchronized.

Attribute

  • Alarm ID: 19006
  • Alarm Severity: Major
  • Automatically Cleared: Yes

Parameters

  • ServiceName: Specifies the service for which the alarm is generated.
  • RoleName: Specifies the role for which the alarm is generated.
  • HostName: Specifies the host for which the alarm is generated.

Impact on the System

HBase data in the active cluster fails to be synchronized to the standby cluster, causing data inconsistency between the active and standby clusters.

Possible Causes

  • The HBase service on the standby cluster is abnormal.
  • The network is abnormal.

Procedure

  1. Check whether the alarm is automatically cleared.

    1. Log in to MRS Manager of the active cluster, and click Alarm.
    2. In the alarm list, click the alarm and obtain the alarm generation time from Generated On in Alarm Details. Check whether the alarm persists for over 5 minutes.
      • If yes, go to 2.a.
      • If no, go to 1.c.
    3. Wait 5 minutes and check whether the alarm is automatically cleared.
      • If yes, no further action is required.
      • If no, go to 2.a.

  2. Check the HBase service status of the standby cluster.

    1. Log in to MRS Manager of the active cluster, and click Alarm.
    2. In the alarm list, click the alarm and obtain HostName from Location in Alarm Details.
    3. Log in to the node where the HBase client resides in the active cluster. Run the following commands to switch to user omm:

      sudo su - root

      su - omm

    4. In the HBase shell, run the status 'replication', 'source' command to check the replication synchronization status of the faulty node (a scripted example is provided at the end of this step).

      The replication synchronization status of a node is as follows.

      10-10-10-153: 
       SOURCE: PeerID=abc, SizeOfLogQueue=0, ShippedBatches=2, ShippedOps=2, ShippedBytes=320, LogReadInBytes=1636, LogEditsRead=5, LogEditsFiltered=3, SizeOfLogToReplicate=0, TimeForLogToReplicate=0, ShippedHFiles=0, SizeOfHFileRefsQueue=0, AgeOfLastShippedOp=0, TimeStampsOfLastShippedOp=Mon Jul 18 09:53:28 CST 2016, Replication Lag=0, FailedReplicationAttempts=0 
       SOURCE: PeerID=abc1, SizeOfLogQueue=0, ShippedBatches=1, ShippedOps=1, ShippedBytes=160, LogReadInBytes=1636, LogEditsRead=5, LogEditsFiltered=3, SizeOfLogToReplicate=0, TimeForLogToReplicate=0, ShippedHFiles=0, SizeOfHFileRefsQueue=0, AgeOfLastShippedOp=16788, TimeStampsOfLastShippedOp=Sat Jul 16 13:19:00 CST 2016, Replication Lag=16788, FailedReplicationAttempts=5
    5. Obtain the PeerID corresponding to a record whose FailedReplicationAttempts value is greater than 0.

      In the preceding output, data on the faulty node 10-10-10-153 fails to be synchronized to the standby cluster whose PeerID is abc1.

    6. Run the list_peers command to find the cluster and the HBase instance corresponding to the PeerID.
      PEER_ID CLUSTER_KEY STATE TABLE_CFS 
       abc1 10.10.10.110,10.10.10.119,10.10.10.133:24002:/hbase2 ENABLED  
       abc 10.10.10.110,10.10.10.119,10.10.10.133:24002:/hbase ENABLED 

      /hbase2 indicates that data is synchronized to the HBase2 instance of the standby cluster.

    7. In the service list on MRS Manager of the standby cluster, check whether the health status of the HBase instance obtained in 2.f is Good.
      • If yes, go to 3.a.
      • If no, go to 2.h.
    8. In the alarm list, check whether alarm ALM-19000 HBase Service Unavailable exists.
      • If yes, go to 2.i.
      • If no, go to 3.a.
    9. Rectify the fault by following the steps provided in ALM-19000 HBase Service Unavailable.
    10. Wait several minutes and check whether the alarm is cleared.
      • If yes, no further action is required.
      • If no, go to 3.a.
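
    The checks in 2.d and 2.f can also be scripted from the HBase client node instead of being typed interactively in the HBase shell. The following is a minimal sketch; the client directory /opt/client and the environment file bigdata_env are assumptions and may differ in your cluster.

      # Load the HBase client environment (path and file name are assumptions;
      # adjust them to match your client installation).
      source /opt/client/bigdata_env
      # 2.d: print the replication source status and keep only the records that
      # report FailedReplicationAttempts greater than 0.
      echo "status 'replication', 'source'" | hbase shell | grep -E "FailedReplicationAttempts=[1-9]"
      # 2.f: list the replication peers to map the failing PeerID (abc1 in the
      # example above) to the standby cluster and HBase instance it replicates to.
      echo "list_peers" | hbase shell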

  3. Check the network connection between RegionServers on active and standby clusters.

    1. Log in to MRS Manager of the active cluster, and click Alarm.
    2. In the alarm list, click the alarm and obtain HostName from Location in Alarm Details.
    3. Log in to the faulty RegionServer node.
    4. Run the ping command to check whether the network connection between the faulty RegionServer node and the host where the RegionServer of the standby cluster resides is normal (see the example at the end of this step).
      • If yes, go to 4.a.
      • If no, go to 3.e.
    5. Contact public cloud O&M personnel to restore the network.
    6. After the network recovers, check whether the alarm is cleared in the alarm list.
      • If yes, no further action is required.
      • If no, go to 4.a.
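
    The reachability check in 3.d can be scripted as shown below. The IP addresses are placeholders only, not actual standby RegionServer hosts; replace them with the hosts where the RegionServer instances of the standby cluster reside.

      # 3.d: from the faulty RegionServer node, check basic reachability to each
      # RegionServer host of the standby cluster (placeholder IP addresses).
      for peer_host in 10.10.10.110 10.10.10.119; do
        if ping -c 3 "$peer_host" > /dev/null; then
          echo "$peer_host is reachable"
        else
          echo "$peer_host is unreachable; go to 3.e"
        fi
      done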

  4. Collect fault information.

    1. On MRS Manager, choose System > Export Log.
    2. Contact technical support engineers for help. For details, see Technical Support.

Related Information

N/A