Events Supported by Event Monitoring
Note
Events in Event Monitoring come from operations on cloud service resources and are not collected by the Agent in Server Monitoring.
Event Source | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---
ECS | Delete ECS | deleteServer | Major | The ECS was deleted. | Check whether the deletion was performed intentionally by a user. | Services are interrupted.
ECS | Reboot ECS | rebootServer | Minor | The ECS was restarted. | Check whether the restart was performed intentionally by a user. | Services are interrupted.
ECS | Resize ECS | resizeServer | Minor | The ECS was resized. | | Services are interrupted.
ECS | Restart triggered due to system faults | startAutoRecovery | Major | ECSs on a faulty host are automatically migrated to another, properly running host. During the migration, the ECSs are restarted. | Wait for the event to end and check whether services are affected. | Services may be interrupted.
ECS | Restart completed due to system faults | endAutoRecovery | Major | The ECS was recovered after the automatic migration. | This event indicates that the ECS has recovered and is working properly. | None
ECS | Auto recovery timeout (being processed on the backend) | faultAutoRecovery | Major | Migrating the ECS to a normal host timed out. | Migrate services to other ECSs. | Services are interrupted.
ECS | Improper ECS running | vmIsRunningImproperly | Major | The ECS was faulty or the ECS NIC was abnormal. | Deploy service applications in HA mode. After the fault is rectified, check whether services recover. | Services are interrupted.
ECS | Improper ECS running recovered | vmIsRunningImproperlyRecovery | Major | The ECS was restored to the normal status. | Wait for the ECS status to become normal and check whether services are affected. | None
ECS | VM faults caused by host process exceptions | VMFaultsByHostProcessExceptions | Critical | The host where the ECS resides is faulty. The system automatically tries to start the ECS. | After the ECS is started, check whether this ECS and the services on it run properly. | The ECS is faulty.
ECS | Restarted GuestOS | RestartGuestOS | Minor | The guest OS was restarted. | Contact O&M personnel. | Services may be interrupted.
Note
Once a physical host running ECSs breaks down, the ECSs are automatically migrated to a functional physical host. During the migration, the ECSs will be restarted.
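The auto recovery events above arrive in start/end pairs (startAutoRecovery and endAutoRecovery), with faultAutoRecovery reported when the migration times out. Below is a minimal Python sketch of pairing such records to spot ECSs whose recovery has run past a timeout; the record layout, timestamps, and resource IDs are illustrative assumptions, not a Cloud Eye API payload.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical event records mirroring the table fields above
# (not a Cloud Eye API response format).
events = [
    {"event_id": "startAutoRecovery", "resource": "ecs-001",
     "time": datetime(2024, 5, 1, 10, 0, tzinfo=timezone.utc)},
    {"event_id": "endAutoRecovery", "resource": "ecs-001",
     "time": datetime(2024, 5, 1, 10, 7, tzinfo=timezone.utc)},
    {"event_id": "startAutoRecovery", "resource": "ecs-002",
     "time": datetime(2024, 5, 1, 10, 5, tzinfo=timezone.utc)},
]

def unrecovered(events, now, timeout=timedelta(minutes=30)):
    """Return ECSs with a start event but no matching end event within the timeout."""
    open_recoveries = {}
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["event_id"] == "startAutoRecovery":
            open_recoveries[ev["resource"]] = ev["time"]
        elif ev["event_id"] == "endAutoRecovery":
            open_recoveries.pop(ev["resource"], None)
    return [r for r, t in open_recoveries.items() if now - t > timeout]

now = datetime(2024, 5, 1, 11, 0, tzinfo=timezone.utc)
print(unrecovered(events, now))  # ['ecs-002'] -- still migrating past the timeout
```

The same pairing logic can drive an alert that fires only when no endAutoRecovery follows a startAutoRecovery within the expected window.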
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
AAD | SYS.DDOS | DDoS Attack Events | ddosAttackEvents | Major | A DDoS attack occurred on the AAD protected lines. | Judge the impact on services based on the attack traffic and attack type. If the attack traffic exceeds your purchased elastic bandwidth, change to another line or increase your bandwidth. | Services may be interrupted.
AAD | SYS.DDOS | Domain name scheduling event | domainNameDispatchEvents | Major | The high-defense CNAME corresponding to the domain name was scheduled, and the domain name was resolved to another high-defense IP address. | Pay attention to the workloads involving the domain name. | Services are not affected.
AAD | SYS.DDOS | Blackhole event | blackHoleEvents | Major | The attack traffic exceeded the purchased AAD protection threshold. | A blackhole is canceled after 30 minutes by default. The actual blackhole duration depends on how many times the blackhole has been triggered and the peak attack traffic on the current day, up to a maximum of 24 hours. If you need to permit access before the blackhole is canceled, contact technical support. | Services may be interrupted.
AAD | SYS.DDOS | Cancel Blackhole | cancelBlackHole | Informational | The customer's AAD instance recovered from the blackhole state. | This is only a prompt; no action is required. | Customer services recover.
AAD | SYS.DDOS | IP address scheduling triggered | ipDispatchEvents | Major | The IP route changed. | Check the workloads of the IP address. | Services are not affected.
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
ELB | SYS.ELB | The backend servers are unhealthy. | healthCheckUnhealthy | Major | Generally, this event occurs because the services on the backend servers are offline. After being reported several times, the event is no longer reported. | Ensure that the backend servers are running properly. | ELB does not forward requests to unhealthy backend servers. If all backend servers in a backend server group are detected unhealthy, services are interrupted.
ELB | SYS.ELB | The backend server is detected healthy. | healthCheckRecovery | Minor | The backend server was detected healthy. | No further action is required. | The load balancer can properly route requests to the backend server.
Event Source | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---
CBR | Failed to create the backup. | backupFailed | Critical | The backup failed to be created. | Manually create a backup or contact customer service. | Data loss may occur.
CBR | Failed to restore the resource using a backup. | restorationFailed | Critical | The resource failed to be restored using a backup. | Restore the resource using another backup or contact customer service. | Data loss may occur.
CBR | Failed to delete the backup. | backupDeleteFailed | Critical | The backup failed to be deleted. | Try again later or contact customer service. | Charging may be abnormal.
CBR | Failed to delete the vault. | vaultDeleteFailed | Critical | The vault failed to be deleted. | Try again later or contact technical support. | Charging may be abnormal.
CBR | Replication failure | replicationFailed | Critical | The backup failed to be replicated. | Try again later or contact technical support. | Data loss may occur.
CBR | The backup is created successfully. | backupSucceeded | Major | The backup was created. | None | None
CBR | Resource restoration using a backup succeeded. | restorationSucceeded | Major | The resource was restored using a backup. | Check whether the data is successfully restored. | None
CBR | The backup is deleted successfully. | backupDeletionSucceeded | Major | The backup was deleted. | None | None
CBR | The vault is deleted successfully. | vaultDeletionSucceeded | Major | The vault was deleted. | None | None
CBR | Replication success | replicationSucceeded | Major | The backup was replicated successfully. | None | None
CBR | Client offline | agentOffline | Critical | The backup client was offline. | Ensure that the Agent status is normal and that the backup client has network connectivity. | Backup tasks may fail.
CBR | Client online | agentOnline | Major | The backup client was online. | None | None
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
RDS | SYS.RDS | DB instance creation failure | createInstanceFailed | Major | Generally, the cause is that the number of disks is insufficient due to quota limits, or underlying resources are exhausted. | The selected resource specifications are insufficient. Select other available specifications and try again. | DB instances cannot be created.
RDS | SYS.RDS | Full backup failure | fullBackupFailed | Major | A single full backup failure does not affect the files that have been successfully backed up, but it prolongs the incremental backup time during point-in-time recovery (PITR). | Try again. | Restoration using backups will be affected.
RDS | SYS.RDS | Read replica promotion failure | activeStandBySwitchFailed | Major | The standby DB instance does not take over workloads from the primary DB instance due to network or server failures. The original primary DB instance continues to provide services within a short time. | Perform the operation again during off-peak hours. | Read replica promotion failed.
RDS | SYS.RDS | Replication status abnormal | abnormalReplicationStatus | Major | The possible causes are as follows: the replication delay between the primary instance and the standby instance or a read replica is too long, which usually occurs when a large amount of data is being written to databases or a large transaction is being processed (during peak hours, data may be blocked); or the network between the primary instance and the standby instance or a read replica is disconnected (see the sketch after this table). | The issue is being fixed. Please wait for our notifications. | The replication status is abnormal.
RDS | SYS.RDS | Replication status recovered | replicationStatusRecovered | Major | The replication delay between the primary and standby instances is within the normal range, or the network connection between them has been restored. | Check whether services are running properly. | The replication status is recovered.
RDS | SYS.RDS | DB instance faulty | faultyDBInstance | Major | A single or primary DB instance was faulty due to a catastrophic failure, for example, a server failure. | The issue is being fixed. Please wait for our notifications. | The instance status is abnormal.
RDS | SYS.RDS | DB instance recovered | DBInstanceRecovered | Major | RDS rebuilds the standby DB instance using its high-availability mechanism. After the instance is rebuilt, this event is reported. | The DB instance status is normal. Check whether services are running properly. | The instance is recovered.
RDS | SYS.RDS | Failure of changing single DB instance to primary/standby | singleToHaFailed | Major | A fault occurs when RDS is creating the standby DB instance or configuring replication between the primary and standby DB instances. The fault may occur because resources are insufficient in the data center where the standby DB instance is located. | Automatic retry is in progress. | Changing a single DB instance to primary/standby failed.
RDS | SYS.RDS | Database process restarted | DatabaseProcessRestarted | Major | The database process was stopped due to insufficient memory or high load. | Check whether services are running properly. | The primary instance is restarted. Services are interrupted for a short period of time.
RDS | SYS.RDS | Instance storage full | instanceDiskFull | Major | Generally, the cause is that the data space usage is too high. | Scale up the storage. | The instance storage is used up, and no data can be written into databases.
RDS | SYS.RDS | Instance storage full recovered | instanceDiskFullRecovered | Major | The instance disk is recovered. | Check whether services are running properly. | The instance has available storage.
RDS | SYS.RDS | Kafka connection failed | kafkaConnectionFailed | Major | The network is unstable or the Kafka server does not work properly. | Check whether services are affected. | None
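The abnormalReplicationStatus and replicationStatusRecovered rows describe a two-state condition driven by replication delay and connectivity. Here is a minimal sketch of that state mapping; the 300-second delay tolerance and the function signature are illustrative assumptions, since RDS does not publish the exact threshold.

```python
# Assumed tolerance for primary/standby replication delay (illustrative only).
LAG_THRESHOLD_SECONDS = 300

def replication_status(lag_seconds: float, network_connected: bool) -> str:
    """Map replica lag and connectivity onto the two events in the table."""
    if not network_connected or lag_seconds > LAG_THRESHOLD_SECONDS:
        return "abnormalReplicationStatus"
    return "replicationStatusRecovered"

print(replication_status(620.0, True))   # abnormalReplicationStatus
print(replication_status(12.0, True))    # replicationStatusRecovered
print(replication_status(12.0, False))   # abnormalReplicationStatus (network down)
```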
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
DDS | SYS.DDS | DB instance creation failure | DDSCreateInstanceFailed | Major | A DDS instance fails to be created due to insufficient disks, quota, or underlying resources. | Check the disk quantity and quota. Release resources and create the DDS instance again. | DDS instances cannot be created.
DDS | SYS.DDS | Replication failed | DDSAbnormalReplicationStatus | Major | The possible causes are as follows: the replication delay between the primary instance and the standby instance or a read replica is too long, which usually occurs when a large amount of data is being written to databases or a large transaction is being processed (during peak hours, data may be blocked); or the network between the primary instance and the standby instance or a read replica is disconnected. | Submit a service ticket. | Your applications are not affected because this event does not interrupt data reads and writes.
DDS | SYS.DDS | Replication recovered | DDSReplicationStatusRecovered | Major | The replication delay between the primary and standby instances is within the normal range, or the network connection between them has been restored. | No action is required. | None
DDS | SYS.DDS | DB instance failed | DDSFaultyDBInstance | Major | This key alarm event is reported when an instance is faulty due to a disaster or a server failure. | Submit a service ticket. | The database service may be unavailable.
DDS | SYS.DDS | DB instance recovered | DDSDBInstanceRecovered | Major | If a disaster occurs, DDS provides an HA tool to automatically or manually rectify the fault. After the fault is rectified, this event is reported. | No action is required. | None
DDS | SYS.DDS | Faulty node | DDSFaultyDBNode | Major | This key alarm event is reported when a database node is faulty due to a disaster or a server failure. | Check whether the database service is available and submit a service ticket. | The database service may be unavailable.
DDS | SYS.DDS | Node recovered | DDSDBNodeRecovered | Major | If a disaster occurs, DDS provides an HA tool to automatically or manually rectify the fault. After the fault is rectified, this event is reported. | No action is required. | None
DDS | SYS.DDS | Primary/standby switchover or failover | DDSPrimaryStandbySwitched | Major | A primary/standby switchover was performed or a failover was triggered. | No action is required. | None
DDS | SYS.DDS | Insufficient storage space | DDSRiskyDataDiskUsage | Major | The storage space is insufficient. | Scale up the storage space. For details, see "Scaling Up Storage Space" in the corresponding user guide. | The instance is set to read-only and data cannot be written to it.
DDS | SYS.DDS | Data disk expanded and being writable | DDSDataDiskUsageRecovered | Major | The capacity of a data disk has been expanded and the disk is writable again. | No further action is required. | None
DDS | SYS.DDS | Schedule for deleting a KMS key | DDSplanDeleteKmsKey | Major | A request to schedule deletion of a KMS key was submitted. | Either decrypt the data encrypted by the KMS key in a timely manner or cancel the key deletion. | After the KMS key is deleted, users cannot encrypt disks.
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
GaussDB(for MySQL) | SYS.GAUSSDB | Incremental backup failure | TaurusIncrementalBackupInstanceFailed | Major | The network between the instance and the management plane (or OBS) is disconnected, or the backup environment created for the instance is abnormal. | Submit a service ticket. | Backup jobs fail.
GaussDB(for MySQL) | SYS.GAUSSDB | Read replica creation failure | addReadonlyNodesFailed | Major | The quota is insufficient or underlying resources are exhausted. | Check the read replica quota. Release resources and create read replicas again. | Read replicas fail to be created.
GaussDB(for MySQL) | SYS.GAUSSDB | DB instance creation failure | createInstanceFailed | Major | The instance quota or underlying resources are insufficient. | Check the instance quota. Release resources and create instances again. | DB instances fail to be created.
GaussDB(for MySQL) | SYS.GAUSSDB | Read replica promotion failure | activeStandBySwitchFailed | Major | The read replica fails to be promoted to primary due to network or server failures. The original primary node takes over services quickly. | Submit a service ticket. | The read replica fails to be promoted to primary.
GaussDB(for MySQL) | SYS.GAUSSDB | Instance specifications change failure | flavorAlterationFailed | Major | The quota is insufficient or underlying resources are exhausted. | Submit a service ticket. | Instance specifications fail to be changed.
GaussDB(for MySQL) | SYS.GAUSSDB | Faulty DB instance | TaurusInstanceRunningStatusAbnormal | Major | The instance process is faulty or the communication between the instance and the DFV storage is abnormal. | Submit a service ticket. | Services may be affected.
GaussDB(for MySQL) | SYS.GAUSSDB | DB instance recovered | TaurusInstanceRunningStatusRecovered | Major | The instance is recovered. | Observe the service running status. | None
GaussDB(for MySQL) | SYS.GAUSSDB | Faulty node | TaurusNodeRunningStatusAbnormal | Major | The node process is faulty or the communication between the node and the DFV storage is abnormal. | Observe the instance and service running statuses. | A read replica may be promoted to primary.
GaussDB(for MySQL) | SYS.GAUSSDB | Node recovered | TaurusNodeRunningStatusRecovered | Major | The node is recovered. | Observe the service running status. | None
GaussDB(for MySQL) | SYS.GAUSSDB | Read replica deletion failure | TaurusDeleteReadOnlyNodeFailed | Major | The communication between the management plane and the read replica is abnormal, or the VM fails to be deleted from IaaS. | Submit a service ticket. | Read replicas fail to be deleted.
GaussDB(for MySQL) | SYS.GAUSSDB | Password reset failure | TaurusResetInstancePasswordFailed | Major | The communication between the management plane and the instance is abnormal, or the instance is abnormal. | Check the instance status and try again. If the fault persists, submit a service ticket. | Passwords fail to be reset for instances.
GaussDB(for MySQL) | SYS.GAUSSDB | DB instance reboot failure | TaurusRestartInstanceFailed | Major | The network between the management plane and the instance is abnormal, or the instance is abnormal. | Check the instance status and try again. If the fault persists, submit a service ticket. | Instances fail to be rebooted.
GaussDB(for MySQL) | SYS.GAUSSDB | Restoration to new DB instance failure | TaurusRestoreToNewInstanceFailed | Major | The instance quota is insufficient, underlying resources are exhausted, or the data restoration logic is incorrect. | If the new instance fails to be created, check the instance quota, release resources, and try the restoration again. In other cases, submit a service ticket. | Backup data fails to be restored to new instances.
GaussDB(for MySQL) | SYS.GAUSSDB | EIP binding failure | TaurusBindEIPToInstanceFailed | Major | The binding task fails. | Submit a service ticket. | EIPs fail to be bound to instances.
GaussDB(for MySQL) | SYS.GAUSSDB | EIP unbinding failure | TaurusUnbindEIPFromInstanceFailed | Major | The unbinding task fails. | Submit a service ticket. | EIPs fail to be unbound from instances.
GaussDB(for MySQL) | SYS.GAUSSDB | Parameter modification failure | TaurusUpdateInstanceParameterFailed | Major | The network between the management plane and the instance is abnormal, or the instance is abnormal. | Check the instance status and try again. If the fault persists, submit a service ticket. | Instance parameters fail to be modified.
GaussDB(for MySQL) | SYS.GAUSSDB | Parameter template application failure | TaurusApplyParameterGroupToInstanceFailed | Major | The network between the management plane and the instances is abnormal, or the instances are abnormal. | Check the instance status and try again. If the fault persists, submit a service ticket. | Parameter templates fail to be applied to instances.
GaussDB(for MySQL) | SYS.GAUSSDB | Full backup failure | TaurusBackupInstanceFailed | Major | The network between the instance and the management plane (or OBS) is disconnected, or the backup environment created for the instance is abnormal. | Submit a service ticket. | Backup jobs fail.
GaussDB(for MySQL) | SYS.GAUSSDB | Primary/standby failover | TaurusActiveStandbySwitched | Major | When the network, physical machine, or database of the primary node is faulty, the system promotes a read replica to primary based on the failover priority to ensure service continuity. | | During the failover, the database connection is interrupted for a short period of time. After the failover is complete, you can reconnect to the database.
GaussDB(for MySQL) | SYS.GAUSSDB | Database read-only | NodeReadonlyMode | Major | The database supports only query operations. | Submit a service ticket. | After the database becomes read-only, write operations cannot be processed.
GaussDB(for MySQL) | SYS.GAUSSDB | Database read/write | NodeReadWriteMode | Major | The database supports both read and write operations. | Submit a service ticket. | None
GaussDB(for MySQL) | SYS.GAUSSDB | Instance DR switchover | DisasterSwitchOver | Major | If an instance is faulty and unavailable, a switchover is performed to ensure that the instance continues to provide services. | Contact technical support. | The database connection is intermittently interrupted. The HA service switches workloads from the primary node to a read replica and continues to provide services.
GaussDB(for MySQL) | SYS.GAUSSDB | Database process restarted | TaurusDatabaseProcessRestarted | Major | The database process was stopped due to insufficient memory or high load. | Log in to the Cloud Eye console and check whether the memory usage increases sharply or the CPU usage stays too high for a long time. You can increase the specifications or optimize the service logic. | When the database process is suspended, workloads on the node are interrupted. The HA service automatically restarts the database process and attempts to recover the workloads.
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
GaussDB | SYS.GAUSSDBV5 | Process status alarm | ProcessStatusAlarm | Major | Key processes exit, including CMS/CMA, ETCD, GTM, CN, and DN processes. | Wait until the process is automatically recovered or a primary/standby failover is automatically performed. Check whether services are recovered. If not, contact SRE engineers. | If processes on primary nodes are faulty, services are interrupted and then rolled back. If processes on standby nodes are faulty, services are not affected.
GaussDB | SYS.GAUSSDBV5 | Component status alarm | ComponentStatusAlarm | Major | Key components do not respond, including CMA, ETCD, GTM, CN, and DN components. | Wait until the process is automatically recovered or a primary/standby failover is automatically performed. Check whether services are recovered. If not, contact SRE engineers. | If processes on primary nodes do not respond, services do not respond either. If processes on standby nodes are faulty, services are not affected.
GaussDB | SYS.GAUSSDBV5 | Cluster status alarm | ClusterStatusAlarm | Major | The cluster status is abnormal. For example, the cluster is read-only, the majority of ETCDs are faulty, or the cluster resources are unevenly distributed. | Contact SRE engineers. | If the cluster is read-only, only read services are processed. If the majority of ETCDs are faulty, the cluster is unavailable. If resources are unevenly distributed, instance performance and reliability deteriorate.
GaussDB | SYS.GAUSSDBV5 | Hardware resource alarm | HardwareResourceAlarm | Major | A major hardware fault occurs in the instance, such as disk damage or a GTM network fault. | Contact SRE engineers. | Some or all services are affected.
GaussDB | SYS.GAUSSDBV5 | Status transition alarm | StateTransitionAlarm | Major | Any of the following events occurs in the instance: DN build failure, forcible DN promotion, primary/standby DN switchover/failover, or primary/standby GTM switchover/failover. | Wait until the fault is automatically rectified and check whether services are recovered. If not, contact SRE engineers. | Some services are interrupted.
GaussDB | SYS.GAUSSDBV5 | Other abnormal alarm | OtherAbnormalAlarm | Major | A disk usage threshold alarm is generated. | Monitor service changes and scale up storage space as needed. | If the used storage space exceeds the threshold, storage space cannot be scaled up.
GaussDB | SYS.GAUSSDBV5 | Faulty DB instance | TaurusInstanceRunningStatusAbnormal | Major | This key alarm event is reported when an instance is faulty due to a disaster or a server failure. | Submit a service ticket. | The database service may be unavailable.
GaussDB | SYS.GAUSSDBV5 | DB instance recovered | TaurusInstanceRunningStatusRecovered | Major | GaussDB(openGauss) provides an HA tool for automated or manual fault rectification. After the fault is rectified, this event is reported. | No further action is required. | None
GaussDB | SYS.GAUSSDBV5 | Faulty DB node | TaurusNodeRunningStatusAbnormal | Major | This key alarm event is reported when a database node is faulty due to a disaster or a server failure. | Check whether the database service is available and submit a service ticket. | The database service may be unavailable.
GaussDB | SYS.GAUSSDBV5 | DB node recovered | TaurusNodeRunningStatusRecovered | Major | GaussDB(openGauss) provides an HA tool for automated or manual fault rectification. After the fault is rectified, this event is reported. | No further action is required. | None
GaussDB | SYS.GAUSSDBV5 | DB instance creation failure | GaussDBV5CreateInstanceFailed | Major | Instances fail to be created because the quota is insufficient or underlying resources are exhausted. | Release instances that are no longer used and try to provision them again, or submit a service ticket to adjust the quota. | DB instances cannot be created.
GaussDB | SYS.GAUSSDBV5 | Node adding failure | GaussDBV5ExpandClusterFailed | Major | The underlying resources are insufficient. | Submit a service ticket. After the O&M personnel coordinate resources in the background, delete the node that failed to be added and add a new node. | None
GaussDB | SYS.GAUSSDBV5 | Storage scale-up failure | GaussDBV5EnlargeVolumeFailed | Major | The underlying resources are insufficient. | Submit a service ticket. After the O&M personnel coordinate resources in the background, scale up the storage space again. | Services may be interrupted.
GaussDB | SYS.GAUSSDBV5 | Reboot failure | GaussDBV5RestartInstanceFailed | Major | The network is abnormal. | Retry the reboot or submit a service ticket to the O&M personnel. | The database service may be unavailable.
GaussDB | SYS.GAUSSDBV5 | Full backup failure | GaussDBV5FullBackupFailed | Major | The backup files fail to be exported or uploaded. | Submit a service ticket to the O&M personnel. | Data cannot be backed up.
GaussDB | SYS.GAUSSDBV5 | Differential backup failure | GaussDBV5DifferentialBackupFailed | Major | The backup files fail to be exported or uploaded. | Submit a service ticket to the O&M personnel. | Data cannot be backed up.
GaussDB | SYS.GAUSSDBV5 | Backup deletion failure | GaussDBV5DeleteBackupFailed | Major | This function does not need to be implemented. | N/A | N/A
GaussDB | SYS.GAUSSDBV5 | EIP binding failure | GaussDBV5BindEIPFailed | Major | The EIP is bound to another resource. | Submit a service ticket to the O&M personnel. | The instance cannot be accessed from the Internet.
GaussDB | SYS.GAUSSDBV5 | EIP unbinding failure | GaussDBV5UnbindEIPFailed | Major | The network is faulty or the EIP is abnormal. | Unbind the IP address again or submit a service ticket to the O&M personnel. | IP addresses may be residual.
GaussDB | SYS.GAUSSDBV5 | Parameter template application failure | GaussDBV5ApplyParamFailed | Major | Applying a parameter template times out. | Apply the parameter template again. | None
GaussDB | SYS.GAUSSDBV5 | Parameter modification failure | GaussDBV5UpdateInstanceParamGroupFailed | Major | Modifying a parameter template times out. | Modify the parameter template again. | None
GaussDB | SYS.GAUSSDBV5 | Backup and restoration failure | GaussDBV5RestoreFromBcakupFailed | Major | The underlying resources are insufficient or backup files fail to be downloaded. | Submit a service ticket. | The database service may be unavailable during the restoration failure.
GaussDB | SYS.GAUSSDBV5 | Failed to upgrade the hot patch | GaussDBV5UpgradeHotfixFailed | Major | Generally, this fault is caused by an error reported during the kernel upgrade. | View the error information about the workflow and redo or skip the job. | None
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
DDM | SYS.DDM | Failed to create a DDM instance | createDdmInstanceFailed | Major | The underlying resources are insufficient. | Release resources and create the instance again. | DDM instances cannot be created.
DDM | SYS.DDM | Failed to change class of a DDM instance | resizeFlavorFailed | Major | The underlying resources are insufficient. | Submit a service ticket to the O&M personnel to coordinate resources and try again. | Services on some nodes are interrupted.
DDM | SYS.DDM | Failed to scale out a DDM instance | enlargeNodeFailed | Major | The underlying resources are insufficient. | Submit a service ticket to the O&M personnel to coordinate resources, delete the node that failed to be added, and add a node again. | The instance fails to be scaled out.
DDM | SYS.DDM | Failed to scale in a DDM instance | reduceNodeFailed | Major | The underlying resources fail to be released. | Submit a service ticket to the O&M personnel to release resources. | The instance fails to be scaled in.
DDM | SYS.DDM | Failed to restart a DDM instance | restartInstanceFailed | Major | The associated DB instances are abnormal. | Check whether the associated DB instances are normal. If they are, submit a service ticket to the O&M personnel. | Services on some nodes are interrupted.
DDM | SYS.DDM | Failed to create a schema | createLogicDbFailed | Major | The possible causes are as follows: | Check whether | Services cannot run properly.
DDM | SYS.DDM | Failed to bind an EIP | bindEipFailed | Major | The EIP is abnormal. | Try again later. In case of emergency, contact O&M personnel to rectify the fault. | The DDM instance cannot be accessed from the Internet.
DDM | SYS.DDM | Failed to scale out a schema | migrateLogicDbFailed | Major | The underlying resources fail to be processed. | Submit a service ticket to the O&M personnel. | The schema cannot be scaled out.
DDM | SYS.DDM | Failed to re-scale out a schema | retryMigrateLogicDbFailed | Major | The underlying resources fail to be processed. | Submit a service ticket to the O&M personnel. | The schema cannot be scaled out.
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
EVS | SYS.EVS | Update disk | updateVolume | Minor | The name or description of an EVS disk was updated. | No further action is required. | None
EVS | SYS.EVS | Expand disk | extendVolume | Minor | An EVS disk was expanded. | No further action is required. | None
EVS | SYS.EVS | Delete disk | deleteVolume | Major | An EVS disk was deleted. | No further action is required. | Deleted disks cannot be recovered.
EVS | SYS.EVS | QoS upper limit reached | reachQoS | Major | The I/O latency increases because the QoS upper limits of the disk are frequently reached and flow control is triggered. | Change the disk type to one with higher specifications. | The current disk may fail to meet service requirements.
Event Source | Namespace | Event Name | Event ID | Event Severity
---|---|---|---|---
KMS | SYS.KMS | Key disabled | disableKey | Major
KMS | SYS.KMS | Key deletion scheduled | scheduleKeyDeletion | Minor
KMS | SYS.KMS | Grant retired | retireGrant | Major
KMS | SYS.KMS | Grant revoked | revokeGrant | Major
Event Source | Event Name | Event ID | Event Severity | Description | Solution
---|---|---|---|---|---
Cloud Eye | Agent heartbeat interruption | agentHeartbeatInterrupted | Major | The Agent sends a heartbeat message to Cloud Eye every minute. If Cloud Eye does not receive any heartbeat for 3 minutes, Agent Status is displayed as Faulty (see the sketch after this table). |
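The heartbeat rule above is simple to restate in code. A minimal sketch, assuming UTC timestamps and the documented one-minute interval and three-minute fault threshold:

```python
from datetime import datetime, timedelta, timezone

HEARTBEAT_INTERVAL = timedelta(minutes=1)  # Agent sends one heartbeat per minute
FAULT_THRESHOLD = timedelta(minutes=3)     # no heartbeat for 3 minutes => Faulty

def agent_status(last_heartbeat: datetime, now: datetime) -> str:
    """Reproduce the documented rule: Faulty after three silent minutes."""
    return "Faulty" if now - last_heartbeat >= FAULT_THRESHOLD else "Running"

now = datetime.now(timezone.utc)
print(agent_status(now - timedelta(minutes=5), now))   # Faulty
print(agent_status(now - timedelta(seconds=40), now))  # Running
```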
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
DCS | SYS.DCS | Full sync retry during online migration | migrationFullResync | Minor | If online migration fails, full synchronization is triggered because incremental synchronization cannot be performed. | Check whether full sync retries are triggered repeatedly, whether the source instance is connected, and whether it is overloaded. If full sync retries are triggered repeatedly, contact O&M personnel. | The migration task is disconnected from the source instance, triggering another full sync. As a result, the CPU usage of the source instance may increase sharply.
DCS | SYS.DCS | Master/standby switchover | masterStandbyFailover | Minor | The master node was abnormal, promoting a replica to master. | |
DCS | SYS.DCS | Memcached master/standby switchover | memcachedMasterStandbyFailover | Minor | The master node was abnormal, promoting the standby node to master. | Check whether services can recover by themselves. If applications cannot recover, restart them. | Persistent connections to the instance are interrupted.
DCS | SYS.DCS | Redis server abnormal | redisNodeStatusAbnormal | Major | The Redis server status was abnormal. | Check whether services are affected. If yes, contact O&M personnel. | If the master node is abnormal, an automatic failover is performed. If a standby node is abnormal and the client directly connects to the standby node for read/write splitting, no data can be read.
DCS | SYS.DCS | Redis server recovered | redisNodeStatusNormal | Major | The Redis server status recovered. | Check whether services can recover. If the applications are not reconnected, restart them. | The instance has recovered from an exception.
DCS | SYS.DCS | Sync failure in data migration | migrateSyncDataFail | Major | Online migration failed. | Reconfigure the migration task and migrate data again. If the fault persists, contact O&M personnel. | Data migration fails.
DCS | SYS.DCS | Memcached instance abnormal | memcachedInstanceStatusAbnormal | Major | The Memcached node status was abnormal. | Check whether services are affected. If yes, contact O&M personnel. | The Memcached instance is abnormal and may be inaccessible.
DCS | SYS.DCS | Memcached instance recovered | memcachedInstanceStatusNormal | Major | The Memcached node status recovered. | Check whether services can recover. If the applications are not reconnected, restart them. | The instance has recovered from an exception.
DCS | SYS.DCS | Instance backup failure | instanceBackupFailure | Major | The DCS instance failed to be backed up due to an OBS access failure. | Retry the backup manually. | Automated backup fails.
DCS | SYS.DCS | Instance node abnormal restart | instanceNodeAbnormalRestart | Major | DCS nodes restarted unexpectedly when they became faulty. | Check whether services can recover. If the applications are not reconnected, restart them. | Persistent connections to the instance are interrupted.
DCS | SYS.DCS | Long-running Lua scripts stopped | scriptsStopped | Informational | Lua scripts that had timed out automatically stopped running. | Optimize Lua scripts to prevent execution timeout (see the sketch after this table). | If Lua scripts take a long time to execute, they are forcibly stopped to avoid blocking the entire instance.
DCS | SYS.DCS | Node restarted | nodeRestarted | Informational | After write operations had been performed, the node automatically restarted to stop Lua scripts that had timed out. | Check whether services can recover by themselves. If applications cannot recover, restart them. | Persistent connections to the instance are interrupted.
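The two Lua-related rows above mirror open-source Redis behavior: a script that exceeds the time limit can be stopped with SCRIPT KILL if it has not yet written data; otherwise the node restarts, which is what the nodeRestarted event records. A hedged redis-py sketch follows; the host, port, and password are placeholders, and DCS instances may expose the timeout only as a console parameter rather than via CONFIG SET.

```python
import redis

# Connection details are placeholders, not a real DCS endpoint.
r = redis.Redis(host="192.0.2.10", port=6379, password="***")

# lua-time-limit (ms) is the open-source Redis knob behind the
# "Long-running Lua scripts stopped" behavior described above.
try:
    r.config_set("lua-time-limit", 5000)
except redis.ResponseError as exc:
    print(f"CONFIG SET not permitted on this instance: {exc}")

# If a script has exceeded the limit and has not yet written data,
# SCRIPT KILL stops it without a node restart.
try:
    r.script_kill()
except redis.ResponseError as exc:
    # Raised when no script is running, or when the script has already
    # written data (the nodeRestarted case in the table then applies).
    print(f"SCRIPT KILL rejected: {exc}")
```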
Event Source | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---
RMS | Configuration noncompliance notification | configurationNoncomplianceNotification | Major | The assignment evaluation result is Non-compliant. | Modify the noncompliant configuration items of the resource. | None
RMS | Configuration compliance notification | configurationComplianceNotification | Informational | The assignment evaluation result changed to Compliant. | None | None
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
HSS | SYS.HSS | HSS agent disconnected | hssAgentAbnormalOffline | Major | The communication between the agent and the server is abnormal, or the agent process on the server is abnormal. | Fix your network connection. If the agent remains offline for a long time after the network recovers, the agent process may be abnormal; in this case, log in to the server and restart the agent process. | Services are interrupted.
HSS | SYS.HSS | Abnormal HSS agent status | hssAgentAbnormalProtection | Major | The agent is abnormal, probably because it does not have sufficient resources. | Log in to the server and check your resources. If the usage of memory or other system resources is too high, increase their capacity first. If resources are sufficient but the fault persists after the agent process is restarted, submit a service ticket to the O&M personnel. | Services are interrupted.
Event Source | Namespace | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---|---
IMS | SYS.IMS | Create Image | createImage | Major | An image was created. | None | You can use this image to create cloud servers.
IMS | SYS.IMS | Update Image | updateImage | Major | Metadata of an image was modified. | None | Cloud servers may fail to be created from this image.
IMS | SYS.IMS | Delete Image | deleteImage | Major | An image was deleted. | None | This image is unavailable on the management console.
Event Source | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---
BMS | ECC uncorrectable errors generated on GPU SRAM | SRAMUncorrectableEccError | Major | Uncorrectable ECC errors occurred on the GPU SRAM. | If services are affected, submit a service ticket. | The GPU hardware may be faulty. As a result, the GPU memory is faulty, and services exit abnormally.
BMS | OS shutdown | osShutdown | Major | The BMS was stopped. | | Services are interrupted.
BMS | Abnormal shutdown | serverShutdown | Major | The BMS was stopped unexpectedly. | | Services are interrupted.
BMS | Abnormal reboot | serverReboot | Major | The BMS restarted unexpectedly. | | Services are interrupted.
BMS | Network interruption | linkDown | Major | The BMS network was disconnected. | | Services are interrupted.
BMS | PCIe error | pcieError | Major | The PCIe devices or the mainboard of the BMS were faulty. | | The network or disk read/write services are affected.
BMS | Disk error | diskError | Major | The disk backplane or disks of the BMS were faulty. | | Data read/write services are affected, or the BMS cannot be started.
BMS | Storage error | storageError | Major | The BMS failed to connect to EVS disks. | | Data read/write services are affected, or the BMS cannot be started.
BMS | OS reboot | osReboot | Major | The BMS was restarted. | | Services are interrupted.
BMS | InfoROM alarm generated on GPU | gpuInfoROMAlarm | Major | The driver failed to read infoROM information due to GPU faults. | Non-critical services can continue to use the GPU card. For critical services, submit a service ticket to resolve this issue. | Services are not affected if infoROM information cannot be read. If error correction code (ECC) errors are reported on the GPU, faulty pages may not be automatically retired and services are affected.
BMS | Double-bit ECC alarm generated on GPU | doubleBitEccError | Major | A double-bit ECC error occurred on the GPU. | | Services may be interrupted. After faulty pages are retired, the GPU card can continue to be used.
BMS | Too many retired pages | gpuTooManyRetiredPagesAlarm | Major | An ECC page retirement error occurred on the GPU. | If services are affected, submit a service ticket. | Services may be affected.
BMS | ECC alarm generated on GPU A100 | gpuA100EccAlarm | Major | An ECC error occurred on the GPU. | | Services may be interrupted. After faulty pages are retired, the GPU card can continue to be used.
BMS | GPU ECC memory page retirement failure | eccPageRetirementRecordingFailure | Major | Automatic page retirement failed due to ECC errors. | | Services may be interrupted, and memory page retirement fails. As a result, services can no longer use the GPU card.
BMS | GPU ECC page retirement alarm generated | eccPageRetirementRecordingEvent | Minor | Memory pages were automatically retired due to ECC errors. | | Generally, this alarm is generated together with the ECC error alarm. If this alarm is generated independently, services are not affected.
BMS | Too many single-bit ECC errors on GPU | highSingleBitEccErrorRate | Major | There are too many single-bit ECC errors. | | Single-bit errors can be automatically rectified and do not affect GPU-related applications.
BMS | GPU card not found | gpuDriverLinkFailureAlarm | Major | The GPU link is normal, but the NVIDIA driver cannot find the GPU card. | | The GPU card cannot be found.
BMS | GPU link faulty | gpuPcieLinkFailureAlarm | Major | GPU hardware information cannot be queried through lspci due to a GPU link fault. | If services are affected, submit a service ticket. | The driver cannot use the GPU.
BMS | GPU card lost | vmLostGpuAlarm | Major | The number of GPU cards on the VM is less than the number specified in the specifications. | If services are affected, submit a service ticket. | GPU cards are lost.
BMS | GPU memory page faulty | gpuMemoryPageFault | Major | A GPU memory page is faulty, which may be caused by applications, drivers, or hardware. | If services are affected, submit a service ticket. | The GPU hardware may be faulty. As a result, the GPU memory is faulty, and services exit abnormally.
BMS | GPU image engine faulty | graphicsEngineException | Major | The GPU image engine is faulty, which may be caused by applications, drivers, or hardware. | If services are affected, submit a service ticket. | The GPU hardware may be faulty. As a result, the image engine is faulty, and services exit abnormally.
BMS | GPU temperature too high | highTemperatureEvent | Major | The GPU temperature is too high. | If services are affected, submit a service ticket. | If the GPU temperature exceeds the threshold, the GPU performance may deteriorate.
BMS | GPU NVLink faulty | nvlinkError | Major | A hardware fault occurred on the NVLink. | If services are affected, submit a service ticket. | The NVLink link is faulty and unavailable.
BMS | nvidia-smi suspended | nvidiaSmiHangEvent | Major | nvidia-smi timed out. | If services are affected, submit a service ticket. | The driver may report an error during service running.
Event Source | Event Name | Event ID | Event Severity
---|---|---|---
Elastic IP and bandwidth | Delete VPC | deleteVpc | Major
Elastic IP and bandwidth | Modify VPC | modifyVpc | Minor
Elastic IP and bandwidth | Delete subnet | deleteSubnet | Minor
Elastic IP and bandwidth | Modify subnet | modifySubnet | Minor
Elastic IP and bandwidth | Modify bandwidth | modifyBandwidth | Minor
Elastic IP and bandwidth | Delete VPN | deleteVpn | Major
Elastic IP and bandwidth | Modify VPN | modifyVpn | Minor
Event Source | Event Name | Event ID | Event Severity
---|---|---|---
OBS | Delete bucket | deleteBucket | Major
OBS | Delete bucket policy | deleteBucketPolicy | Major
OBS | Set bucket ACL | setBucketAcl | Minor
OBS | Set bucket policy | setBucketPolicy | Minor
Event Source | Event Name | Event ID | Event Severity | Description | Solution | Impact
---|---|---|---|---|---|---
EIP | EIP bandwidth overflow | EIPBandwidthOverflow | Major | The used bandwidth exceeded the purchased bandwidth, which may slow down the network or cause packet loss. The value of this event is the maximum value in a monitoring period; the values of the EIP inbound and outbound bandwidth are taken at a specific time point in the period. Metrics: egressDropBandwidth (dropped outbound packets, bytes), egressAcceptBandwidth (accepted outbound packets, bytes), egressMaxBandwidthPerSec (peak outbound bandwidth, byte/s), ingressAcceptBandwidth (accepted inbound packets, bytes), ingressMaxBandwidthPerSec (peak inbound bandwidth, byte/s), ingressDropBandwidth (dropped inbound packets, bytes). See the sketch after this table. | Check whether the EIP bandwidth keeps increasing and whether services are normal. Increase the bandwidth if necessary. | The network becomes slow or packets are lost.
EIP | Delete EIP | deleteEip | Minor | The EIP was released. | Check whether the EIP was released by mistake. | The server that has the EIP bound cannot access the Internet.
EIP | EIP blocked | blockEIP | Critical | The used bandwidth of an EIP exceeded 5 Gbit/s, so the EIP was blocked and packets were discarded. Such an event may be caused by DDoS attacks. | Replace the EIP to prevent services from being affected, then locate and deal with the fault. | Services are impacted.
EIP | EIP unblocked | unblockEIP | Critical | The EIP was unblocked. | Use the previous EIP again. | None
EIP | Start DDoS traffic scrubbing | ddosCleanEIP | Major | Traffic scrubbing on the EIP was started to prevent DDoS attacks. | Check whether the EIP was attacked. | Services may be interrupted.
EIP | Stop DDoS traffic scrubbing | ddosEndCleanEip | Major | Traffic scrubbing on the EIP to prevent DDoS attacks was ended. | Check whether the EIP was attacked. | Services may be interrupted.
EIP | Enterprise-class QoS bandwidth limit exceeded | EIPBandwidthRuleOverflow | Major | The used QoS bandwidth exceeded the allocated bandwidth, which may slow down the network or cause packet loss. The value of this event is the maximum value in a monitoring period; the values of the EIP inbound and outbound bandwidth are taken at a specific time point in the period. The metrics are the same as those described for EIPBandwidthOverflow above. | Check whether the EIP bandwidth keeps increasing and whether services are normal. Increase the bandwidth if necessary. | The network becomes slow or packets are lost.
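The bandwidth-overflow rows list byte counters for accepted and dropped traffic plus a per-second peak. Below is a minimal sketch of combining them, as referenced in the table; the sample values and the assumed 1 Gbit/s purchased bandwidth are made up for illustration.

```python
# Assumed purchased bandwidth: 1 Gbit/s expressed in byte/s (illustrative).
PURCHASED_BYTES_PER_SEC = 1_000_000_000 // 8

# Metric names come from the table above; the values are made up.
sample = {
    "egressAcceptBandwidth": 1_250_000_000,   # accepted outbound bytes in the period
    "egressDropBandwidth": 37_500_000,        # dropped outbound bytes in the period
    "egressMaxBandwidthPerSec": 131_072_000,  # peak outbound bandwidth (byte/s)
}

def overflow_report(metrics: dict) -> str:
    """Flag whether the peak exceeded the purchased rate and how much traffic was dropped."""
    total = metrics["egressAcceptBandwidth"] + metrics["egressDropBandwidth"]
    drop_ratio = metrics["egressDropBandwidth"] / total if total else 0.0
    over = metrics["egressMaxBandwidthPerSec"] > PURCHASED_BYTES_PER_SEC
    return f"peak over limit: {over}, dropped-byte ratio: {drop_ratio:.2%}"

print(overflow_report(sample))  # peak over limit: True, dropped-byte ratio: 2.91%
```

A sustained nonzero dropped-byte ratio alongside a peak above the purchased rate is the signature of the overflow events described above and usually means the bandwidth should be increased.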