Events Supported by Event Monitoring

Note

Events in Event Monitoring come from operations on cloud service resources and are not collected by the Agent in Server Monitoring.
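
Every table below describes events with the same set of fields: event source, namespace (where applicable), event name, event ID, severity, description, solution, and impact. As a minimal sketch of how a consumer might represent and filter such records, assuming a hypothetical record shape and an illustrative severity ordering (neither is an official API definition):

```python
from dataclasses import dataclass

# Severity levels as used in the tables below, ordered least to most severe.
# This ordering is an assumption for illustration, not an official definition.
SEVERITY_ORDER = ["Informational", "Minor", "Major", "Critical"]

@dataclass
class MonitoredEvent:
    """Hypothetical record mirroring the columns of the tables in this section."""
    source: str       # e.g. "ECS"
    event_id: str     # e.g. "deleteServer"
    severity: str     # one of SEVERITY_ORDER
    description: str

def at_least(events, min_severity):
    """Return only the events at or above the given severity."""
    threshold = SEVERITY_ORDER.index(min_severity)
    return [e for e in events if SEVERITY_ORDER.index(e.severity) >= threshold]

events = [
    MonitoredEvent("ECS", "deleteServer", "Major", "The ECS was deleted."),
    MonitoredEvent("ECS", "rebootServer", "Minor", "The ECS was restarted."),
]
print(at_least(events, "Major"))  # -> only the deleteServer event
```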

Table 1 Elastic Cloud Server (ECS)

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

ECS

Delete ECS

deleteServer

Major

The ECS was deleted

  • on the management console.

  • by calling APIs.

Check whether the deletion was performed intentionally by a user.

Services are interrupted.

Reboot ECS

rebootServer

Minor

The ECS was restarted

  • on the management console.

  • by calling APIs.

Check whether the restart was performed intentionally by a user.

  • Deploy service applications in HA mode.

  • After the ECS starts up, check whether services recover.

Services are interrupted.

Resize ECS

resizeServer

Minor

The ECS was resized

  • on the management console.

  • by calling APIs.

  • Check whether the operation was performed by a user.

  • Deploy service applications in HA mode.

  • After the ECS is resized, check whether services have recovered.

Services are interrupted.

Restart triggered due to system faults

startAutoRecovery

Major

ECSs on a faulty host are automatically migrated to another properly running host. During the migration, the ECSs are restarted.

Wait for the event to end and check whether services are affected.

Services may be interrupted.

Restart completed due to system faults

endAutoRecovery

Major

The ECS was restored to normal after the automatic migration.

This event indicates that the ECS has recovered and been working properly.

None

Auto recovery timeout (being processed on the backend)

faultAutoRecovery

Major

Migrating the ECS to a normal host timed out.

Migrate services to other ECSs.

Services are interrupted.

Improper ECS running

vmIsRunningImproperly

Major

The ECS was faulty or the ECS NIC was abnormal.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

Services are interrupted.

Improper ECS running recovered

vmIsRunningImproperlyRecovery

Major

The ECS was restored to the normal status.

Wait for the ECS status to become normal and check whether services are affected.

None

VM faults caused by host process exceptions

VMFaultsByHostProcessExceptions

Critical

The processes of the host accommodating the ECS were abnormal.

Contact O&M personnel.

The ECS is faulty.

Restarted GuestOS

RestartGuestOS

Minor

The guest OS was restarted.

Contact O&M personnel.

Services may be interrupted.

Note

Once a physical host running ECSs breaks down, the ECSs are automatically migrated to a functional physical host. During the migration, the ECSs will be restarted.

Table 2 Advanced Anti-DDoS (AAD)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

AAD

SYS.DDOS

DDoS Attack Events

ddosAttackEvents

Major

A DDoS attack occurs on the lines protected by AAD.

Assess the impact on services based on the attack traffic and attack type. If the attack traffic exceeds your purchased elastic bandwidth, switch to another line or increase your bandwidth.

Services may be interrupted.

Domain name scheduling event

domainNameDispatchEvents

Major

The high-defense CNAME corresponding to the domain name is scheduled, and the domain name is resolved to another high-defense IP address.

Pay attention to the workloads involving the domain name.

Services are not affected.

Blackhole event

blackHoleEvents

Major

The attack traffic exceeds the purchased AAD protection threshold.

A blackhole is canceled after 30 minutes by default. The actual duration depends on how many times blackholes have been triggered on the current day and on the peak attack traffic, up to a maximum of 24 hours. If you need access to be restored before the blackhole is deactivated, contact technical support. (The timing arithmetic is illustrated after this table.)

Services may be interrupted.

Cancel Blackhole

cancelBlackHole

Informational

The customer's AAD instance recovers from the blackhole state.

This is only a prompt and no action is required.

Customer services recover.

IP address scheduling triggered

ipDispatchEvents

Major

The IP route changed.

Check the workloads of the IP address.

Services are not affected.
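
The blackhole row above states three timing rules: a 30-minute default duration, a longer duration when blackholes are triggered repeatedly or attack traffic peaks higher, and a 24-hour cap. The sketch below illustrates only that timing arithmetic; the doubling per repeated trigger is a placeholder assumption, since the actual formula is determined by the service:

```python
from datetime import datetime, timedelta

DEFAULT_BLACKHOLE = timedelta(minutes=30)  # default deactivation time
MAX_BLACKHOLE = timedelta(hours=24)        # stated maximum duration

def estimated_unblock_time(triggered_at, triggers_today):
    """Estimate when a blackhole lifts.

    Doubling the duration per repeated trigger is a placeholder assumption
    for illustration; the real duration also depends on peak attack traffic
    and is determined by the service.
    """
    duration = DEFAULT_BLACKHOLE * (2 ** max(triggers_today - 1, 0))
    return triggered_at + min(duration, MAX_BLACKHOLE)

# Third blackhole of the day, triggered at noon -> estimated lift at 14:00.
print(estimated_unblock_time(datetime(2024, 1, 1, 12, 0), triggers_today=3))
```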

Table 3 Elastic Load Balance (ELB)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

ELB

SYS.ELB

The backend servers are unhealthy.

healthCheckUnhealthy

Major

Generally, this problem occurs because backend server services are offline. This event stops being reported after it has been reported several times.

Ensure that the backend servers are running properly.

ELB does not forward requests to unhealthy backend servers. If all backend servers in the backend server group are detected as unhealthy, services will be interrupted.

The backend server is detected healthy.

healthCheckRecovery

Minor

The backend server is detected healthy.

No further action is required.

The load balancer can properly route requests to the backend server.
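
The two ELB rows above describe a standard health-check contract: a backend that fails consecutive probes stops receiving requests, and it rejoins rotation once probes succeed again. A minimal sketch of that contract using plain HTTP probes; the threshold and probe parameters are illustrative assumptions, not ELB's actual configuration:

```python
import urllib.request

UNHEALTHY_THRESHOLD = 3  # consecutive failures before a backend leaves rotation (assumed)

def probe(url, timeout=2.0):
    """Return True if the backend answers its health-check URL with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def healthy_backends(failure_counts):
    """failure_counts maps a backend health-check URL to its consecutive failures."""
    for url, failures in list(failure_counts.items()):
        failure_counts[url] = 0 if probe(url) else failures + 1
    # A backend leaves rotation only once the failure threshold is reached,
    # and rejoins as soon as a probe succeeds again (the recovery event above).
    return [u for u, f in failure_counts.items() if f < UNHEALTHY_THRESHOLD]
```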

Table 4 Cloud Backup and Recovery (CBR)

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

CBR

Failed to create the backup.

backupFailed

Critical

The backup failed to be created.

Manually create a backup or contact customer service.

Data loss may occur.

Failed to restore the resource using a backup.

restorationFailed

Critical

The resource failed to be restored using a backup.

Restore the resource using another backup or contact customer service.

Data loss may occur.

Failed to delete the backup.

backupDeleteFailed

Critical

The backup failed to be deleted.

Try again later or contact customer service.

Charging may be abnormal.

Failed to delete the vault.

vaultDeleteFailed

Critical

The vault failed to be deleted.

Try again later or contact technical support.

Charging may be abnormal.

Replication failure

replicationFailed

Critical

The backup failed to be replicated.

Try again later or contact technical support.

Data loss may occur.

The backup is created successfully.

backupSucceeded

Major

The backup was created.

None

None

Resource restoration using a backup succeeded.

restorationSucceeded

Major

The resource was restored using a backup.

Check whether the data is successfully restored.

None

The backup is deleted successfully.

backupDeletionSucceeded

Major

The backup was deleted.

None

None

The vault is deleted successfully.

vaultDeletionSucceeded

Major

The vault was deleted.

None

None

Replication success

replicationSucceeded

Major

The backup was replicated successfully.

None

None

Client offline

agentOffline

Critical

The backup client was offline.

Ensure that the Agent status is normal and that the backup client can be connected.

Backup tasks may fail.

Client online

agentOnline

Major

The backup client was online.

None

None

Table 5 Relational Database Service (RDS) — resource exception

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

RDS

SYS.RDS

DB instance creation failure

createInstanceFailed

Major

Generally, the cause is that the number of disks is insufficient due to quota limits, or underlying resources are exhausted.

The selected resource specifications are insufficient. Select other available specifications and try again.

DB instances cannot be created.

Full backup failure

fullBackupFailed

Major

A single full backup failure does not affect the files that have been successfully backed up, but it prolongs the incremental backup time during point-in-time restore (PITR).

Try again.

Restoration using backups will be affected.

Read replica promotion failure

activeStandBySwitchFailed

Major

The standby DB instance does not take over workloads from the primary DB instance due to network or server failures. The original primary DB instance continues to provide services within a short time.

Perform the operation again during off-peak hours.

Read replica promotion failed.

Replication status abnormal

abnormalReplicationStatus

Major

The possible causes are as follows:

The replication delay between the primary instance and the standby instance or a read replica is too long, which usually occurs when a large amount of data is being written to databases or a large transaction is being processed. During peak hours, data may be blocked.

The network between the primary instance and the standby instance or a read replica is disconnected.

The issue is being fixed. Please wait for our notifications.

The replication status is abnormal.

Replication status recovered

replicationStatusRecovered

Major

The replication delay between the primary and standby instances is within the normal range, or the network connection between them has been restored.

Check whether services are running properly.

Replication status is recovered.

DB instance faulty

faultyDBInstance

Major

A single or primary DB instance was faulty due to a catastrophic failure, for example, server failure.

The issue is being fixed. Please wait for our notifications.

The instance status is abnormal.

DB instance recovered

DBInstanceRecovered

Major

RDS rebuilds the standby DB instance using its high availability capability. After the instance is rebuilt, this event is reported.

The DB instance status is normal. Check whether services are running properly.

The instance is recovered.

Failure of changing single DB instance to primary/standby

singleToHaFailed

Major

A fault occurs when RDS is creating the standby DB instance or configuring replication between the primary and standby DB instances. The fault may occur because resources are insufficient in the data center where the standby DB instance is located.

Automatic retry is in progress.

Changing a single DB instance to primary/standby failed.

Database process restarted

DatabaseProcessRestarted

Major

The database process is stopped due to insufficient memory or high load.

Check whether services are running properly.

The primary instance is restarted. Services are interrupted for a short period of time.

Instance storage full

instanceDiskFull

Major

Generally, the cause is that the data space usage is too high.

Scale up the storage.

The instance storage is used up. No data can be written into databases.

Instance storage full recovered

instanceDiskFullRecovered

Major

The instance disk is recovered.

Check whether services are running properly.

The instance has available storage.

Kafka connection failed

kafkaConnectionFailed

Major

The network is unstable or the Kafka server does not work properly.

Check whether services are affected.

None
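
For the replication events in Table 5 (abnormalReplicationStatus and replicationStatusRecovered), the underlying signal on a MySQL-compatible instance is replica lag. A minimal sketch of reading it directly, assuming a reachable read replica, the PyMySQL driver, and an illustrative alarm threshold:

```python
import pymysql

LAG_ALARM_SECONDS = 60  # illustrative threshold, not the service's own

def replica_lag_seconds(host, user, password):
    """Return the replica's reported lag in seconds, or None if replication
    is broken (e.g. the network to the primary is disconnected)."""
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone()
            # Seconds_Behind_Master is NULL (None) when replication is broken.
            return row["Seconds_Behind_Master"] if row else None
    finally:
        conn.close()

lag = replica_lag_seconds("replica.example.com", "monitor", "***")
if lag is None or lag > LAG_ALARM_SECONDS:
    print("replication delay abnormal")
```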

Table 6 Document Database Service (DDS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

DDS

SYS.DDS

DB instance creation failure

DDSCreateInstanceFailed

Major

A DDS instance fails to be created due to insufficient disk space, quota, or underlying resources.

Check the number and quota of disks. Release resources and create DDS instances again.

DDS instances cannot be created.

Replication failed

DDSAbnormalReplicationStatus

Major

The possible causes are as follows:

The replication delay between the primary instance and the standby instance or a read replica is too long, which usually occurs when a large amount of data is being written to databases or a large transaction is being processed. During peak hours, data may be blocked.

The network between the primary instance and the standby instance or a read replica is disconnected.

Submit a service ticket.

Your applications are not affected because this event does not interrupt data read and write.

Replication recovered

DDSReplicationStatusRecovered

Major

The replication delay between the primary and standby instances is within the normal range, or the network connection between them has been restored.

No action is required.

None

DB instance failed

DDSFaultyDBInstance

Major

This event is a key alarm event and is reported when an instance is faulty due to a disaster or a server failure.

Submit a service ticket.

The database service may be unavailable.

DB instance recovered

DDSDBInstanceRecovered

Major

If a disaster occurs, DDS provides an HA tool to automatically or manually rectify the fault. After the fault is rectified, this event is reported.

No action is required.

None

Faulty node

DDSFaultyDBNode

Major

This event is a key alarm event and is reported when a database node is faulty due to a disaster or a server failure.

Check whether the database service is available and submit a service ticket.

The database service may be unavailable.

Node recovered

DDSDBNodeRecovered

Major

If a disaster occurs, DDS provides an HA tool to automatically or manually rectify the fault. After the fault is rectified, this event is reported.

No action is required.

None

Primary/standby switchover or failover

DDSPrimaryStandbySwitched

Major

A primary/standby switchover is performed or a failover is triggered.

No action is required.

None

Insufficient storage space

DDSRiskyDataDiskUsage

Major

The storage space is insufficient.

Scale up storage space. For details, see section "Scaling Up Storage Space" in the corresponding user guide.

The instance is set to read-only and data cannot be written to the instance.

Data disk expanded and being writable

DDSDataDiskUsageRecovered

Major

The capacity of a data disk has been expanded and the data disk becomes writable.

No further action is required.

No adverse impact.

Schedule for deleting a KMS key

DDSplanDeleteKmsKey

Major

A request to schedule deletion of a KMS key was submitted.

After the KMS key is scheduled for deletion, either decrypt the data encrypted by the KMS key in a timely manner or cancel the key deletion.

After the KMS key is deleted, users cannot encrypt disks.

Table 7 GaussDB(for MySQL)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

GaussDB(for MySQL)

SYS.GAUSSDB

Incremental backup failure

TaurusIncrementalBackupInstanceFailed

Major

The network between the instance and the management plane (or the OBS) is disconnected, or the backup environment created for the instance is abnormal.

Submit a service ticket.

Backup jobs fail.

Read replica creation failure

addReadonlyNodesFailed

Major

The quota is insufficient or underlying resources are exhausted.

Check the read replica quota. Release resources and create read replicas again.

Read replicas fail to be created.

DB instance creation failure

createInstanceFailed

Major

The instance quota or underlying resources are insufficient.

Check the instance quota. Release resources and create instances again.

DB instances fail to be created.

Read replica promotion failure

activeStandBySwitchFailed

Major

The read replica fails to be promoted to the primary node due to network or server failures. The original primary node takes over services quickly.

Submit a service ticket.

The read replica fails to be promoted to the primary node.

Instance specifications change failure

flavorAlterationFailed

Major

The quota is insufficient or underlying resources are exhausted.

Submit a service ticket.

Instance specifications fail to be changed.

Faulty DB instance

TaurusInstanceRunningStatusAbnormal

Major

The instance process is faulty or the communications between the instance and the DFV storage are abnormal.

Submit a service ticket.

Services may be affected.

DB instance recovered

TaurusInstanceRunningStatusRecovered

Major

The instance is recovered.

Observe the service running status.

None

Faulty node

TaurusNodeRunningStatusAbnormal

Major

The node process is faulty or the communications between the node and the DFV storage are abnormal.

Observe the instance and service running statuses.

A read replica may be promoted to the primary node.

Node recovered

TaurusNodeRunningStatusRecovered

Major

The node is recovered.

Observe the service running status.

None

Read replica deletion failure

TaurusDeleteReadOnlyNodeFailed

Major

The communications between the management plane and the read replica are abnormal or the VM fails to be deleted from IaaS.

Submit a service ticket.

Read replicas fail to be deleted.

Password reset failure

TaurusResetInstancePasswordFailed

Major

The communications between the management plane and the instance are abnormal or the instance is abnormal.

Check the instance status and try again. If the fault persists, submit a service ticket.

Passwords fail to be reset for instances.

DB instance reboot failure

TaurusRestartInstanceFailed

Major

The network between the management plane and the instance is abnormal or the instance is abnormal.

Check the instance status and try again. If the fault persists, submit a service ticket.

Instances fail to be rebooted.

Restoration to new DB instance failure

TaurusRestoreToNewInstanceFailed

Major

The instance quota is insufficient, underlying resources are exhausted, or the data restoration logic is incorrect.

If the new instance fails to be created, check the instance quota, release resources, and try to restore to a new instance again. In other cases, submit a service ticket.

Backup data fails to be restored to new instances.

EIP binding failure

TaurusBindEIPToInstanceFailed

Major

The binding task fails.

Submit a service ticket.

EIPs fail to be bound to instances.

EIP unbinding failure

TaurusUnbindEIPFromInstanceFailed

Major

The unbinding task fails.

Submit a service ticket.

EIPs fail to be unbound from instances.

Parameter modification failure

TaurusUpdateInstanceParameterFailed

Major

The network between the management plane and the instance is abnormal or the instance is abnormal.

Check the instance status and try again. If the fault persists, submit a service ticket.

Instance parameters fail to be modified.

Parameter template application failure

TaurusApplyParameterGroupToInstanceFailed

Major

The network between the management plane and instances is abnormal or the instances are abnormal.

Check the instance status and try again. If the fault persists, submit a service ticket.

Parameter templates fail to be applied to instances.

Full backup failure

TaurusBackupInstanceFailed

Major

The network between the instance and the management plane (or the OBS) is disconnected, or the backup environment created for the instance is abnormal.

Submit a service ticket.

Backup jobs fail.

Primary/standby failover

TaurusActiveStandbySwitched

Major

When the network, physical machine, or database of the primary node is faulty, the system promotes a read replica to primary based on the failover priority to ensure service continuity.

  1. Check whether the service is running properly.

  2. Check whether an alarm is generated, indicating that the read replica failed to be promoted to primary.

During the failover, database connection is interrupted for a short period of time. After the failover is complete, you can reconnect to the database.

Database read-only

NodeReadonlyMode

Major

The database supports only query operations.

Submit a service ticket.

After the database becomes read-only, write operations cannot be processed.

Database read/write

NodeReadWriteMode

Major

The database supports both write and read operations.

Submit a service ticket.

None.

Instance DR switchover

DisasterSwitchOver

Major

If an instance is faulty and unavailable, a switchover is performed to ensure that the instance continues to provide services.

Contact technical support.

The database connection is intermittently interrupted. The HA service switches workloads from the primary node to a read replica and continues to provide services.

Database process restarted

TaurusDatabaseProcessRestarted

Major

The database process is stopped due to insufficient memory or high load.

Log in to the Cloud Eye console. Check whether the memory usage increases sharply or the CPU usage is too high for a long time. You can increase the specifications or optimize the service logic.

When the database process is suspended, workloads on the node are interrupted. In this case, the HA service automatically restarts the database process and attempts to recover the workloads.
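
Several rows above (TaurusActiveStandbySwitched, DisasterSwitchOver, TaurusDatabaseProcessRestarted) share one client-side consequence: the database connection drops briefly and can be re-established once the failover completes. A minimal reconnect-with-backoff sketch, assuming a MySQL-compatible endpoint and the PyMySQL driver; the retry budget is an illustrative choice:

```python
import time
import pymysql

def connect_with_retry(host, user, password, attempts=5, base_delay=1.0):
    """Reconnect after a brief failover-induced interruption.

    Exponential backoff keeps retry pressure low while the HA service
    promotes a read replica; all parameters are illustrative assumptions.
    """
    for attempt in range(attempts):
        try:
            return pymysql.connect(host=host, user=user, password=password)
        except pymysql.err.OperationalError:
            if attempt == attempts - 1:
                raise  # failover took longer than the retry budget
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```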

Table 8 GaussDB

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

GaussDB

SYS.GAUSSDBV5

Process status alarm

ProcessStatusAlarm

Major

Key processes exit, including CMS/CMA, ETCD, GTM, CN, and DN processes.

Wait until the process is automatically recovered or a primary/standby failover is automatically performed. Check whether services are recovered. If not, contact SRE engineers.

If processes on primary nodes are faulty, services are interrupted and then rolled back. If processes on standby nodes are faulty, services are not affected.

Component status alarm

ComponentStatusAlarm

Major

Key components do not respond, including CMA, ETCD, GTM, CN, and DN components.

Wait until the process is automatically recovered or a primary/standby failover is automatically performed. Check whether services are recovered. If not, contact SRE engineers.

If processes on primary nodes do not respond, services do not respond either. If processes on standby nodes are faulty, services are not affected.

Cluster status alarm

ClusterStatusAlarm

Major

The cluster status is abnormal. For example, the cluster is read-only, a majority of ETCDs are faulty, or the cluster resources are unevenly distributed.

Contact SRE engineers.

If the cluster status is read-only, only read services are processed.

If a majority of ETCDs are faulty, the cluster is unavailable.

If resources are unevenly distributed, the instance performance and reliability deteriorate.

Hardware resource alarm

HardwareResourceAlarm

Major

A major hardware fault occurs in the instance, such as disk damage or GTM network fault.

Contact SRE engineers.

Some or all services are affected.

Status transition alarm

StateTransitionAlarm

Major

The following events occur in the instance: DN build failure, forcible DN promotion, primary/standby DN switchover/failover, or primary/standby GTM switchover/failover.

Wait until the fault is automatically rectified and check whether services are recovered. If not, contact SRE engineers.

Some services are interrupted.

Other abnormal alarm

OtherAbnormalAlarm

Major

Disk usage threshold alarm

Focus on service changes and scale up storage space as needed.

If the used storage space exceeds the threshold, storage space cannot be scaled up.

Faulty DB instance

TaurusInstanceRunningStatusAbnormal

Major

This event is a key alarm event and is reported when an instance is faulty due to a disaster or a server failure.

Submit a service ticket.

The database service may be unavailable.

DB instance recovered

TaurusInstanceRunningStatusRecovered

Major

GaussDB(openGauss) provides an HA tool for automated or manual rectification of faults. After the fault is rectified, this event is reported.

No further action is required.

None

Faulty DB node

TaurusNodeRunningStatusAbnormal

Major

This event is a key alarm event and is reported when a database node is faulty due to a disaster or a server failure.

Check whether the database service is available and submit a service ticket.

The database service may be unavailable.

DB node recovered

TaurusNodeRunningStatusRecovered

Major

GaussDB(openGauss) provides an HA tool for automated or manual rectification of faults. After the fault is rectified, this event is reported.

No further action is required.

None

DB instance creation failure

GaussDBV5CreateInstanceFailed

Major

Instances fail to be created because the quota is insufficient or underlying resources are exhausted.

Release the instances that are no longer used and try to provision them again, or submit a service ticket to adjust the quota.

DB instances cannot be created.

Node adding failure

GaussDBV5ExpandClusterFailed

Major

The underlying resources are insufficient.

Submit a service ticket. The O&M personnel will coordinate resources in the background; then delete the node that failed to be added and add a new one.

None

Storage scale-up failure

GaussDBV5EnlargeVolumeFailed

Major

The underlying resources are insufficient.

Submit a service ticket. The O&M personnel will coordinate resources in the background; then scale up the storage space again.

Services may be interrupted.

Reboot failure

GaussDBV5RestartInstanceFailed

Major

The network is abnormal.

Retry the reboot operation or submit a service ticket to the O&M personnel.

The database service may be unavailable.

Full backup failure

GaussDBV5FullBackupFailed

Major

The backup files fail to be exported or uploaded.

Submit a service ticket to the O&M personnel.

Data cannot be backed up.

Differential backup failure

GaussDBV5DifferentialBackupFailed

Major

The backup files fail to be exported or uploaded.

Submit a service ticket to the O&M personnel.

Data cannot be backed up.

Backup deletion failure

GaussDBV5DeleteBackupFailed

Major

This function does not need to be implemented.

N/A

N/A

EIP binding failure

GaussDBV5BindEIPFailed

Major

The EIP is bound to another resource.

Submit a service ticket to the O&M personnel.

The instance cannot be accessed from the Internet.

EIP unbinding failure

GaussDBV5UnbindEIPFailed

Major

The network is faulty or EIP is abnormal.

Unbind the IP address again or submit a service ticket to the O&M personnel.

Residual IP addresses may remain.

Parameter template application failure

GaussDBV5ApplyParamFailed

Major

Modifying a parameter template times out.

Modify the parameter template again.

None

Parameter modification failure

GaussDBV5UpdateInstanceParamGroupFailed

Major

Modifying a parameter template times out.

Modify the parameter template again.

None

Backup and restoration failure

GaussDBV5RestoreFromBcakupFailed

Major

The underlying resources are insufficient or backup files fail to be downloaded.

Submit a service ticket.

The database service may be unavailable if restoration fails.

Failed to upgrade the hot patch

GaussDBV5UpgradeHotfixFailed

Major

Generally, this fault is caused by an error reported during kernel upgrade.

View the error information about the workflow and redo or skip the job.

None

Table 9 Distributed Database Middleware (DDM)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

DDM

SYS.DDM

Failed to create a DDM instance

createDdmInstanceFailed

Major

The underlying resources are insufficient.

Release resources and create the instance again.

DDM instances cannot be created.

Failed to change class of a DDM instance

resizeFlavorFailed

Major

The underlying resources are insufficient.

Submit a service ticket to the O&M personnel to coordinate resources and try again.

Services on some nodes are interrupted.

Failed to scale out a DDM instance

enlargeNodeFailed

Major

The underlying resources are insufficient.

Submit a service ticket to the O&M personnel to coordinate resources, delete the node that fails to be added, and add a node again.

The instance fails to be scaled out.

Failed to scale in a DDM instance

reduceNodeFailed

Major

The underlying resources fail to be released.

Submit a service ticket to the O&M personnel to release resources.

The instance fails to be scaled in.

Failed to restart a DDM instance

restartInstanceFailed

Major

The associated DB instances are abnormal.

Check whether the associated DB instances are normal. If they are, submit a service ticket to the O&M personnel.

Services on some nodes are interrupted.

Failed to create a schema

createLogicDbFailed

Major

The possible causes are as follows:

  • The password for the DB instance account is incorrect.

  • The security groups of the DDM instance and the associated DB instance are incorrectly configured. As a result, the DDM instance cannot communicate with the associated DB instance.

Check whether:

  • The username and password of the DB instance are correct.

  • The security groups associated with the DDM instance and underlying database instance are correctly configured.

Services cannot run properly.

Failed to bind an EIP

bindEipFailed

Major

The EIP is abnormal.

Try again later. In case of emergency, contact O&M personnel to rectify the fault.

The DDM instance cannot be accessed from the Internet.

Failed to scale out a schema

migrateLogicDbFailed

Major

The underlying resources fail to be processed.

Submit a service ticket to the O&M personnel.

The schema cannot be scaled out.

Failed to re-scale out a schema

retryMigrateLogicDbFailed

Major

The underlying resources fail to be processed.

Submit a service ticket to the O&M personnel.

The schema cannot be scaled out.

Table 10 Layer 2 Connection Gateway (L2CG)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

L2CG

SYS.ESW

IP addresses conflicted

IPConflict

Major

A cloud server and an on-premises server that need to communicate use the same IP address.

Check the ARP and switch information to locate the servers that have the same IP address and change the IP address.

The communications between the on-premises and cloud servers may be abnormal.

Table 11 Elastic IP and bandwidth

Event Source

Namespace

Event Name

Event ID

Event Severity

Elastic IP and bandwidth

SYS.VPC

VPC deleted

deleteVpc

Major

VPC modified

modifyVpc

Minor

Subnet deleted

deleteSubnet

Minor

Subnet modified

modifySubnet

Minor

Bandwidth modified

modifyBandwidth

Minor

VPN deleted

deleteVpn

Major

VPN modified

modifyVpn

Minor

Table 12 Elastic Volume Service (EVS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

EVS

SYS.EVS

Update disk

updateVolume

Minor

Update the name and description of an EVS disk.

No further action is required.

None

Expand disk

extendVolume

Minor

Expand an EVS disk.

No further action is required.

None

Delete disk

deleteVolume

Major

Delete an EVS disk.

No further action is required.

Deleted disks cannot be recovered.

QoS upper limit reached

reachQoS

Major

The I/O latency increases because the QoS upper limits of the disk are frequently reached and flow control is triggered.

Change the disk type to one with a higher specification.

The current disk may fail to meet service requirements.

Table 13 Key Management Service (KMS)

Event Source

Namespace

Event Name

Event ID

Event Severity

KMS

SYS.KMS

Key disabled

disableKey

Major

Key deletion scheduled

scheduleKeyDeletion

Minor

Grant retired

retireGrant

Major

Grant revoked

revokeGrant

Major

Table 14 Object Storage Service (OBS)

Event Source

Namespace

Event Name

Event ID

Event Severity

OBS

SYS.OBS

Bucket deleted

deleteBucket

Major

Bucket policy deleted

deleteBucketPolicy

Major

Bucket ACL configured

setBucketAcl

Minor

Bucket policy configured

setBucketPolicy

Minor

Table 15 Cloud Eye

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Cloud Eye

Agent heartbeat interruption

agentHeartbeatInterrupted

Major

The Agent sends a heartbeat message to Cloud Eye every minute. If Cloud Eye cannot receive a heartbeat for 3 minutes, Agent Status is displayed as Faulty.

  • Check whether the Agent domain name can be resolved.

  • Check whether your account is in arrears.

  • If the Agent process is faulty, restart the Agent. If the Agent is still faulty after the restart, the Agent files may be damaged; in this case, reinstall the Agent.

  • Check whether the server time is consistent with the local standard time.

  • Update the Agent to the latest version.
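
The heartbeat rule above is concrete: the Agent reports once per minute, and the status turns Faulty once no heartbeat has arrived for 3 minutes. A minimal staleness check implementing exactly that rule:

```python
from datetime import datetime, timedelta

FAULT_THRESHOLD = timedelta(minutes=3)  # no heartbeat for 3 minutes => Faulty

def agent_status(last_heartbeat, now=None):
    """Return 'Running' or 'Faulty' per the rule stated in Table 15."""
    now = now or datetime.utcnow()
    return "Faulty" if now - last_heartbeat > FAULT_THRESHOLD else "Running"

# The Agent reports once per minute, so a 5-minute-old heartbeat is Faulty.
print(agent_status(datetime.utcnow() - timedelta(minutes=5)))
```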

Table 16 DataSpace

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

DataSpace

SYS.HWDS

New revision

newRevision

Minor

An updated version was released.

After receiving the notification, export the data of the updated version as required.

None.

Table 17 Distributed Cache Service (DCS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

DCS

SYS.DCS

Full sync retry during online migration

migrationFullResync

Minor

If online migration fails, full synchronization will be triggered because incremental synchronization cannot be performed.

Check whether full sync retries are triggered repeatedly. Check whether the source instance is connected and whether it is overloaded. If full sync retries are triggered repeatedly, contact O&M personnel.

The migration task is disconnected from the source instance, triggering another full sync. As a result, the CPU usage of the source instance may increase sharply.

Master/standby switchover

masterStandbyFailover

Minor

The master node was abnormal, promoting a replica to master.

Check whether services can recover by themselves. If applications cannot recover, restart them.

Persistent connections to the instance are interrupted.

Memcached master/standby switchover

memcachedMasterStandbyFailover

Minor

The master node was abnormal, promoting the standby node to master.

Check whether services can recover by themselves. If applications cannot recover, restart them.

Persistent connections to the instance are interrupted.

Redis server abnormal

redisNodeStatusAbnormal

Major

The Redis server status was abnormal.

Check whether services are affected. If yes, contact O&M personnel.

If the master node is abnormal, an automatic failover is performed. If a standby node is abnormal and the client directly connects to the standby node for read/write splitting, no data can be read.

Redis server recovered

redisNodeStatusNormal

Major

The Redis server status recovered.

Check whether services can recover. If the applications are not reconnected, restart them.

The instance recovers from the exception.

Sync failure in data migration

migrateSyncDataFail

Major

Online migration failed.

Reconfigure the migration task and migrate data again. If the fault persists, contact O&M personnel.

Data migration fails.

Memcached instance abnormal

memcachedInstanceStatusAbnormal

Major

The Memcached node status was abnormal.

Check whether services are affected. If yes, contact O&M personnel.

The Memcached instance is abnormal and may not be accessed.

Memcached instance recovered

memcachedInstanceStatusNormal

Major

The Memcached node status recovered.

Check whether services can recover. If the applications are not reconnected, restart them.

The instance recovers from the exception.

Instance backup failure

instanceBackupFailure

Major

The DCS instance fails to be backed up due to an OBS access failure.

Retry backup manually.

Automated backup fails.

Instance node abnormal restart

instanceNodeAbnormalRestart

Major

DCS nodes restarted unexpectedly when they became faulty.

Check whether services can recover. If the applications are not reconnected, restart them.

Persistent connections to the instance are interrupted.

Long-running Lua scripts stopped

scriptsStopped

Informational

Lua scripts that had timed out automatically stopped running.

Optimize Lua scripts to prevent execution timeouts.

If Lua scripts take a long time to execute, they will be forcibly stopped to avoid blocking the entire instance.

Node restarted

nodeRestarted

Informational

After write operations had been performed, the node automatically restarted to stop Lua scripts that had timed out.

Check whether services can recover by themselves. If applications cannot recover, restart them.

Persistent connections to the instance are interrupted.
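
The scriptsStopped and nodeRestarted rows above mirror standard Redis behavior: a Lua script that exceeds the execution time limit can be killed with SCRIPT KILL only while it has not yet written data; once it has written, the node must restart to stop it, which drops persistent connections. A client-side sketch using redis-py, which is an assumed toolchain rather than a DCS-specific API:

```python
import redis

r = redis.Redis(host="redis.example.com", port=6379)  # placeholder endpoint

def stop_runaway_script():
    """Try to stop a long-running Lua script without restarting the node."""
    try:
        r.script_kill()  # succeeds only if the script has not written yet
        print("script killed; instance unaffected")
    except redis.exceptions.ResponseError as e:
        # Redis refuses SCRIPT KILL after the script has performed writes;
        # the server is then restarted, which matches the nodeRestarted
        # event above and interrupts persistent connections.
        print(f"cannot kill script: {e}")
```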

Table 18 Intelligent Cloud Access (ICA)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

ICA

SYS.ICA

BGP peer disconnection

BgpPeerDisconnection

Major

The BGP peer is disconnected.

Log in to the gateway and locate the cause.

Service traffic may be interrupted.

BGP peer connection success

BgpPeerConnectionSuccess

Major

The BGP peer is successfully connected.

None

None

Abnormal GRE tunnel status

AbnormalGreTunnelStatus

Major

The GRE tunnel status is abnormal.

Log in to the gateway and locate the cause.

Service traffic may be interrupted.

Normal GRE tunnel status

NormalGreTunnelStatus

Major

The GRE tunnel status is normal.

None

None

WAN interface goes up

EquipmentWanGoingOnline

Major

The WAN interface goes online.

None

None

WAN interface goes down

EquipmentWanGoingOffline

Major

The WAN interface goes offline.

Check whether the event is caused by a manual operation or device fault.

The device cannot be used.

Intelligent enterprise gateway going online

IntelligentEnterpriseGatewayGoingOnline

Major

The intelligent enterprise gateway goes online.

None

None

Intelligent enterprise gateway going offline

IntelligentEnterpriseGatewayGoingOffline

Major

The intelligent enterprise gateway goes offline.

Check whether the event is caused by a manual operation or device fault.

The device cannot be used.

Table 19 Multi-Site High Availability Service (MAS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

MAS

SYS.MAS

Abnormal database instance

dbError

Major

An abnormal database instance is detected by MAS.

Log in to the MAS console to view the cause and rectify the fault.

Services are interrupted.

Database instance recovered

dbRecovery

Major

The database instance is recovered.

None

Services are recovered.

Abnormal Redis instance

redisError

Major

An abnormal Redis instance is detected by MAS.

Log in to the MAS console to view the cause and rectify the fault.

Services are interrupted.

Redis instance recovered

redisRecovery

Major

The Redis instance is recovered.

None

Services are recovered.

Abnormal MongoDB database

mongodbError

Major

An abnormal MongoDB database is detected by MAS.

Log in to the MAS console to view the cause and rectify the fault.

Services are interrupted.

MongoDB database recovered

mongodbRecovery

Major

The MongoDB database is recovered.

None

Services are recovered.

Abnormal Elasticsearch instance

esError

Major

An abnormal Elasticsearch instance is detected by MAS.

Log in to the MAS console to view the cause and rectify the fault.

Services are interrupted.

Elasticsearch instance recovered

esRecovery

Major

The Elasticsearch instance is recovered.

None

Services are recovered.

Abnormal API

apiError

Major

An abnormal API is detected by MAS.

Log in to the MAS console to view the cause and rectify the fault.

Services are interrupted.

API recovered

apiRecovery

Major

The API is recovered.

None

Services are recovered.

Area status changed

netChange

Major

Area status changes are detected by MAS.

Log in to the MAS console to view the cause and rectify the fault.

The network of the multi-active areas may change.

Table 20 Config

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

RMS

Configuration noncompliance notification

configurationNoncomplianceNotification

Major

The assignment evaluation result is Non-compliant.

Modify the noncompliant configuration items of the resource.

None

Configuration compliance notification

configurationComplianceNotification

Informational

The assignment evaluation result changed to Compliant.

None

None

Table 21 SecMaster

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

SecMaster

SYS.SecMaster

Exclusive engine creation failed

createEngineFailed

Major

The underlying resources are insufficient.

Submit a ticket to request sufficient resources from the O&M personnel and try again.

The exclusive engine cannot be created.

Exclusive engine exception

engineException

Critical

The traffic is too heavy or there are malicious processes or plug-ins.

  1. Check the execution of plug-ins and processes to see whether they occupy too many resources.

  2. Check the instance monitoring information to see whether there is a sharp increase in the number of instances.

The instance cannot be executed.

Playbook instance execution failed

playbookInstanceExecFailed

Minor

Playbooks or processes are incorrectly configured.

Check the instance monitoring information to find the cause of the failure, and modify the playbook and process configuration.

None

Playbook instance increased sharply

playbookInstanceIncreaseSharply

Minor

Playbooks or processes are incorrectly configured.

Check the instance monitoring information to find the cause of the increase, and modify the playbook and process configuration.

None

Log messages increased sharply

logIncrease

Major

The upstream services suddenly generate a large number of log messages.

Check whether the upstream services are normal.

None

Log messages decreased sharply

logsDecrease

Major

Logs generated by the upstream services suddenly decrease.

Check whether the upstream services are normal.

None

Table 22 Key Pair Service

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

KPS

SYS.KPS

Key pair deleted

KPSDeleteKeypair

Informational

A key pair was deleted. This operation cannot be undone.

If this event occurred frequently within a short period of time, check whether malicious deletion took place.

Deleted key pairs cannot be restored.

Table 23 Host Security Service

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

HSS

SYS.HSS

HSS agent disconnected

hssAgentAbnormalOffline

Major

The communication between the agent and the server is abnormal, or the agent process on the server is abnormal.

Fix your network connection. If the agent is still offline for a long time after the network recovers, the agent process may be abnormal. In this case, log in to the server and restart the agent process.

Services are interrupted.

Abnormal HSS agent status

hssAgentAbnormalProtection

Major

The agent is abnormal, probably because it does not have sufficient resources.

Log in to the server and check your resources. If the usage of memory or other system resources is too high, increase their capacity first. If the resources are sufficient but the fault persists after the agent process is restarted, submit a service ticket to the O&M personnel.

Services are interrupted.

Table 24 Image Management Service

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

IMS

SYS.IMS

Create Image

createImage

Major

An image was created.

None

You can use this image to create cloud servers.

Update Image

updateImage

Major

Metadata of an image was modified.

None

Cloud servers may fail to be created from this image.

Delete Image

deleteImage

Major

An image was deleted.

None

This image will be unavailable on the management console.

Table 25 Cloud Storage Gateway (CSG)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

CSG

SYS.CSG

Abnormal CSG process status

gatewayProcessStatusAbnormal

Major

This event is triggered when an exception occurs in the CSG process status.

Abnormal CSG connection status

gatewayToServiceConnectAbnormal

Major

This event is triggered when no CSG status report is returned for five consecutive periods.

Abnormal connection status between CSG and OBS

gatewayToObsConnectAbnormal

Major

This event is triggered when CSG cannot connect to OBS.

Read-only file system

gatewayFileSystemReadOnly

Major

This event is triggered when the partition file system on CSG becomes read-only.

Read-only file share

gatewayFileShareReadOnly

Major

This event is triggered when the file share becomes read-only due to insufficient cache disk storage space.

Table 26 Global Accelerator

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

GA

SYS.GA

Anycast IP address blocked

blockAIP

Critical

The used bandwidth of an EIP exceeded 5 Gbit/s, so the EIP was blocked and packets were discarded. Such an event may be caused by DDoS attacks.

Locate the root cause and rectify the fault.

Services are affected. The traffic will not be properly forwarded.

Anycast IP address unblocked

unblockAIP

Critical

The anycast IP address was unblocked.

Ensure that traffic can be properly forwarded.

None

Unhealthy endpoint

healthCheckError

Major

The health check detected that the endpoint was unhealthy.

Submit a service ticket.

If an endpoint is considered unhealthy, traffic will not be forwarded to it until the endpoint recovers.

Table 27 Cloud Certificate Manager (CCM)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

CCM

SYS.CCM

Certificate revocation

CCMRevokeCertificate

Major

The certificate enters the revocation process. Once revoked, the certificate can no longer be used.

Check whether the certificate really needs to be revoked. Certificate revocation can be canceled.

If a certificate is revoked, the website is inaccessible using HTTPS.

Certificate auto-deployment failure

CCMAutoDeploymentFailure

Major

The certificate fails to be automatically deployed.

Check service resources whose certificates need to be replaced.

If no new certificate is deployed after a certificate expires, the website is inaccessible using HTTPS.

Certificate expiration

CCMCertificateExpiration

Major

An SSL certificate has expired.

Purchase a new certificate in a timely manner.

If no new certificate is deployed after a certificate expires, the website is inaccessible using HTTPS.

Certificate about to expire

CCMcertificateAboutToExpiration

Major

This alarm is generated when an SSL certificate is about to expire within one week, one month, or two months.

Renew or purchase a new certificate in a timely manner.

If no new certificate is deployed after a certificate expires, the website is inaccessible using HTTPS.
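
The expiration rows above fire at fixed look-ahead windows (one week, one month, and two months). The remaining lifetime of a deployed certificate can be checked directly with the Python standard library; the host below is a placeholder, and the windows simply restate the ones above:

```python
import socket
import ssl
import time

def days_until_expiry(host, port=443):
    """Return the number of days before the server's certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires_at - time.time()) // 86400)

remaining = days_until_expiry("www.example.com")  # placeholder host
for window in (7, 30, 60):  # the one-week, one-month, and two-month windows above
    if remaining <= window:
        print(f"certificate expires in {remaining} days; renew or replace it")
        break
```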

Table 28 Bare Metal Server (BMS)

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

BMS

ECC uncorrectable errors generated on GPU SRAM

SRAMUncorrectableEccError

Major

There are ECC uncorrectable errors generated on GPU SRAM.

If services are affected, submit a service ticket.

The GPU hardware may be faulty. As a result, the GPU memory is faulty, and services exit abnormally.

OS shutdown

osShutdown

Major

The BMS was stopped

  • on the management console.

  • by calling APIs.

  • Deploy service applications in HA mode.

  • After the BMS is started, check whether services recover.

Services are interrupted.

Abnormal shutdown

serverShutdown

Major

The BMS was stopped unexpectedly, which may be caused by

  • unexpected power-off.

  • hardware faults.

  • Deploy service applications in HA mode.

  • After the BMS is started, check whether services recover.

Services are interrupted.

Abnormal reboot

serverReboot

Major

The BMS restarted unexpectedly, which may be caused by

  • OS faults.

  • hardware faults.

  • Deploy service applications in HA mode.

  • After the BMS is restarted, check whether services recover.

Services are interrupted.

Network interruption

linkDown

Major

The BMS network was disconnected. Possible causes are as follows:

  • The BMS was unexpectedly stopped or restarted.

  • The switch was faulty.

  • The gateway was faulty.

  • Deploy service applications in HA mode.

  • After the BMS is started, check whether services recover.

Services are interrupted.

PCIE error

pcieError

Major

The PCIe devices or main board of the BMS was faulty.

  • Deploy service applications in HA mode.

  • After the BMS is started, check whether services recover.

The network or disk read/write services are affected.

Disk error

diskError

Major

The disk backplane or disks of the BMS were faulty.

  • Deploy service applications in HA mode.

  • After the fault is rectified, check whether services recover.

Data read/write services are affected, or the BMS cannot be started.

Storage error

storageError

Major

The BMS failed to connect to EVS disks. Possible causes are as follows:

  • The SDI card was faulty.

  • Remote storage devices were faulty.

  • Deploy service applications in HA mode.

  • After the fault is rectified, check whether services recover.

Data read/write services are affected, or the BMS cannot be started.

OS reboot

osReboot

Major

The BMS was restarted

  • on the management console.

  • by calling APIs.

  • Deploy service applications in HA mode.

  • After the BMS is restarted, check whether services recover.

Services are interrupted.

Inforom alarm generated on GPU

gpuInfoROMAlarm

Major

The driver failed to read inforom information due to GPU faults.

Non-critical services can continue to use the GPU card. For critical services, submit a service ticket to resolve this issue.

Services will not be affected if inforom information cannot be read. If error correction code (ECC) errors are reported on GPU, faulty pages may not be automatically retired and services are affected.

Double-bit ECC alarm generated on GPU

doubleBitEccError

Major

A double-bit ECC error occurred on GPU.

  1. If services are interrupted, restart the services to restore.

  2. If services cannot be restarted, restart the VM where services are running.

  3. If services still cannot be restored, submit a service ticket.

Services may be interrupted. After faulty pages are retired, the GPU card can continue to be used.

Too many retired pages

gpuTooManyRetiredPagesAlarm

Major

An ECC page retirement error occurred on GPU.

If services are affected, submit a service ticket.

Services may be affected.

ECC alarm generated on GPU A100

gpuA100EccAlarm

Major

An ECC error occurred on GPU.

  1. If services are interrupted, restart the services to restore.

  2. If services cannot be restarted, restart the VM where services are running.

  3. If services still cannot be restored, submit a service ticket.

Services may be interrupted. After faulty pages are retired, the GPU card can continue to be used.

GPU ECC memory page retirement failure

eccPageRetirementRecordingFailure

Major

Automatic page retirement failed due to ECC errors.

  1. If services are interrupted, restart the services to restore.

  2. If services cannot be restarted, restart the VM where services are running.

  3. If services still cannot be restored, submit a service ticket.

Services may be interrupted, and memory page retirement fails. As a result, services can no longer use the GPU card.

GPU ECC page retirement alarm generated

eccPageRetirementRecordingEvent

Minor

Memory pages are automatically retired due to ECC errors.

  1. If services are interrupted, restart the services to restore.

  2. If services cannot be restarted, restart the VM where services are running.

  3. If services still cannot be restored, submit a service ticket.

Generally, this alarm is generated together with the ECC error alarm. If this alarm is generated independently, services are not affected.

Too many single-bit ECC errors on GPU

highSingleBitEccErrorRate

Major

There are too many single-bit ECC errors.

  1. If services are interrupted, restart the services to restore.

  2. If services cannot be restarted, restart the VM where services are running.

  3. If services still cannot be restored, submit a service ticket.

Single-bit errors can be automatically rectified and do not affect GPU-related applications.

GPU card not found

gpuDriverLinkFailureAlarm

Major

A GPU link is normal, but the NVIDIA driver cannot find the GPU card.

  1. Restart the VM to restore services.

  2. If services still cannot be restored, submit a service ticket.

The GPU card cannot be found.

GPU link faulty

gpuPcieLinkFailureAlarm

Major

GPU hardware information cannot be queried through lspci due to a GPU link fault.

If services are affected, submit a service ticket.

The driver cannot use GPU.

GPU card lost

vmLostGpuAlarm

Major

The number of GPU cards on the VM is less than the number specified in the specifications.

If services are affected, submit a service ticket.

GPU cards are lost.

GPU memory page faulty

gpuMemoryPageFault

Major

The GPU memory page is faulty, which may be caused by applications, drivers, or hardware.

If services are affected, submit a service ticket.

The GPU hardware may be faulty. As a result, the GPU memory is faulty, and services exit abnormally.

GPU image engine faulty

graphicsEngineException

Major

The GPU image engine is faulty, which may be caused by applications, drivers, or hardware.

If services are affected, submit a service ticket.

The GPU hardware may be faulty. As a result, the image engine is faulty, and services exit abnormally.

GPU temperature too high

highTemperatureEvent

Major

The GPU temperature is too high.

If services are affected, submit a service ticket.

If the GPU temperature exceeds the threshold, the GPU performance may deteriorate.

GPU NVLink faulty

nvlinkError

Major

A hardware fault occurs on the NVLink.

If services are affected, submit a service ticket.

The NVLink link is faulty and unavailable.

nvidia-smi suspended

nvidiaSmiHangEvent

Major

nvidia-smi timed out.

If services are affected, submit a service ticket.

The driver may report an error during service running.
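
Many of the GPU rows above (doubleBitEccError, gpuA100EccAlarm, and the retired-page alarms) surface through the NVIDIA driver, whose aggregate ECC counters can be read on the server with nvidia-smi. A loose sketch that shells out to nvidia-smi -q -d ECC and flags non-zero counters; the text-parsing heuristic is an assumption, since the output layout varies by driver version:

```python
import subprocess

def ecc_error_lines():
    """Return ECC-related counter lines with non-zero values, if any."""
    out = subprocess.run(
        ["nvidia-smi", "-q", "-d", "ECC"],
        capture_output=True, text=True, check=True,
    ).stdout
    suspicious = []
    for line in out.splitlines():
        # Counter lines look roughly like "    Double Bit    : 0";
        # the exact layout varies by driver version (assumption).
        if ":" in line:
            _, _, value = line.rpartition(":")
            if value.strip().isdigit() and int(value) != 0:
                suspicious.append(line.strip())
    return suspicious

for line in ecc_error_lines():
    print("non-zero ECC counter:", line)
```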

Table 29 Elastic IP and bandwidth

Event Source

Event Name

Event ID

Event Severity

Elastic IP and bandwidth

Delete VPC

deleteVpc

Major

Modify VPC

modifyVpc

Minor

Delete subnet

deleteSubnet

Minor

Modify subnet

modifySubnet

Minor

Modify bandwidth

modifyBandwidth

Minor

Delete VPN

deleteVpn

Major

Modify VPN

modifyVpn

Minor

Table 30 Cloud Phone Host (CPH)

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

CPH

Server shutdown

cphServerOsShutdown

Major

The cloud phone server was stopped

  • on the management console.

  • by calling APIs.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

Services are interrupted.

Server abnormal shutdown

cphServerShutdown

Major

The cloud phone server was stopped unexpectedly. Possible causes are as follows:

  • The cloud phone server was powered off unexpectedly.

  • The cloud phone server was stopped due to hardware faults.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

Services are interrupted.

Server reboot

cphServerOsReboot

Major

The cloud phone server was rebooted

  • on the management console.

  • by calling APIs.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

Services are interrupted.

Server abnormal reboot

cphServerReboot

Major

The cloud phone server was rebooted unexpectedly due to

  • OS faults.

  • hardware faults.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

Services are interrupted.

Network interruption

cphServerlinkDown

Major

The network where the cloud phone server was deployed was disconnected. Possible causes are as follows:

  • The cloud phone server was stopped unexpectedly and rebooted.

  • The switch was faulty.

  • The gateway node was faulty.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

Services are interrupted.

PCIE error

cphServerPcieError

Major

The PCIe device or main board on the cloud phone server was faulty.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

The network or disk read/write is affected.

Disk error

cphServerDiskError

Major

The disk on the cloud phone server was faulty due to

  • disk backplane faults.

  • disk faults.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

Data read/write services are affected, or the cloud phone server cannot be started.

Storage error

cphServerStorageError

Major

The cloud phone server could not connect to EVS disks. Possible causes are as follows:

  • SDI card faults

  • Remote storage devices were faulty.

Deploy service applications in HA mode.

After the fault is rectified, check whether services recover.

Data read/write services are affected, or the cloud phone server cannot be started.

GPU offline

cphServerGpuOffline

Major

The GPU of the cloud phone server was loose and got disconnected.

Stop the cloud phone server and reboot it.

Faults occur on cloud phones whose GPUs are disconnected. Cloud phones cannot run properly even if they are restarted or reconfigured.

GPU timeout

cphServerGpuTimeOut

Major

The GPU of the cloud phone server timed out.

Reboot the cloud phone server.

Cloud phones whose GPUs timed out cannot run properly and are still faulty even if they are restarted or reconfigured.

Disk full

cphServerDiskFull

Major

Disk space of the cloud phone server was used up.

Clear the application data in the cloud phone to release space.

The cloud phone becomes sub-healthy, is prone to faults, and may fail to start.

Disk read-only

cphServerDiskReadOnly

Major

The disk of the cloud phone server became read-only.

Reboot the cloud phone server.

The cloud phone becomes sub-healthy, is prone to faults, and may fail to start.

Phone metadata damage

cphPhoneMetaDataDamage

Major

Cloud phone metadata was damaged.

Contact O&M personnel.

The cloud phone cannot run properly even if it is restarted or reconfigured.

GPU failure

gpuAbnormal

Critical

The GPU was faulty.

Submit a service ticket.

Services are interrupted.

GPU back to normal

gpuNormal

Informational

The GPU was running properly.

No further action is required.

N/A

Kernel crash

kernelCrash

Critical

The kernel log indicated a kernel crash (see the log-matching sketch after this table).

Submit a service ticket.

Services are interrupted during the crash.

Kernel OOM

kernelOom

Major

The kernel log indicated an out-of-memory (OOM) condition.

Submit a service ticket.

Services are interrupted.

Hardware malfunction

hardwareError

Critical

The kernel log indicated a hardware error.

Submit a service ticket.

Services are interrupted.

PCIe error

pcieAer

Critical

The kernel log indicated a PCIe bus error.

Submit a service ticket.

Services are interrupted.

SCSI error

scsiError

Critical

The kernel log indicated a SCSI error.

Submit a service ticket.

Services are interrupted.

Image storage became read-only

partReadOnly

Critical

The image storage became read-only.

Submit a service ticket.

Services are interrupted.

Image storage superblock damaged

badSuperBlock

Critical

The superblock of the file system of the image storage was damaged.

Submit a service ticket.

Services are interrupted.

Image storage /.sharedpath/master became read-only

isuladMasterReadOnly

Critical

Mount point /.sharedpath/master of the image storage became read-only.

Submit a service ticket.

Services are interrupted.

Cloud phone data disk became read-only

cphDiskReadOnly

Critical

The cloud phone data disk became read-only.

Submit a service ticket.

Services are interrupted.

Cloud phone data disk superblock damaged

cphDiskBadSuperBlock

Critical

The superblock of the file system of the cloud phone data disk was damaged.

Submit a service ticket.

Services are interrupted.
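
The kernel-log events above (kernelCrash, kernelOom, hardwareError, pcieAer, scsiError) are raised when characteristic fault signatures appear in the kernel log. The exact patterns and detection pipeline used by Event Monitoring are not documented here, so the following Python sketch is only an illustration of that style of matching: the regular expressions are assumptions, run against local dmesg output (which usually requires root).

# Illustrative only: scan kernel log output for signatures similar to the
# ones behind the CPH kernel-log events. The regexes are assumptions for
# demonstration, not the patterns the service actually uses.
import re
import subprocess

SIGNATURES = {
    "kernelCrash":   re.compile(r"Kernel panic|kernel BUG", re.IGNORECASE),
    "kernelOom":     re.compile(r"Out of memory|oom-killer", re.IGNORECASE),
    "hardwareError": re.compile(r"Hardware Error", re.IGNORECASE),
    "pcieAer":       re.compile(r"AER:|PCIe Bus Error", re.IGNORECASE),
    "scsiError":     re.compile(r"SCSI error|I/O error", re.IGNORECASE),
}

def scan_kernel_log():
    """Return (event_id, line) pairs for kernel-log lines matching a signature."""
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return [
        (event_id, line)
        for line in log.splitlines()
        for event_id, pattern in SIGNATURES.items()
        if pattern.search(line)
    ]

if __name__ == "__main__":
    for event_id, line in scan_kernel_log():
        print(f"{event_id}: {line}")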

Table 31 Object Storage Service (OBS)

Event Source

Event Name

Event ID

Event Severity

OBS

Delete bucket

deleteBucket

Major

Delete bucket policy

deleteBucketPolicy

Major

Set bucket ACL

setBucketAcl

Minor

Set bucket policy

setBucketPolicy

Minor

Table 32 Enterprise Connection (EC)

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

EC

The WAN interface goes up

EquipmentWanGoesOnline

Major

The WAN interface goes online.

None

None

The WAN interface goes down

EquipmentWanGoesOffline

Major

The WAN interface goes offline.

Check whether the event is caused by a manual operation or device fault.

The device cannot be used.

BGP peer disconnection

BgpPeerDisconnection

Major

The BGP peer was disconnected.

Check whether the event is caused by a manual operation or device fault.

The device cannot be used.

BGP peer connection success

BgpPeerConnectionSuccess

Major

The BGP peer is successfully connected.

None

None

Abnormal GRE tunnel status

AbnormalGreTunnelStatus

Major

The GRE tunnel status is abnormal.

Check whether the event is caused by a manual operation or device fault.

The device cannot be used.

Normal GRE tunnel status

NormalGreTunnelStatus

Major

The GRE tunnel status is normal.

None

None

The intelligent enterprise gateway goes online

IntelligentEnterpriseGatewayGoesOnline

Major

The intelligent enterprise gateway goes online.

None

None

The intelligent enterprise gateway goes offline

IntelligentEnterpriseGatewayGoesOffline

Major

The intelligent enterprise gateway goes offline.

Check whether the event is caused by a manual operation or device fault.

The device cannot be used.

Table 33 Elastic IP (EIP)

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

EIP

EIP bandwidth overflow

EIPBandwidthOverflow

Major

The used bandwidth exceeded the purchased bandwidth, which may slow down the network or cause packet loss. The value reported by this event is the maximum within a monitoring period, whereas the EIP inbound and outbound bandwidth metrics report values at specific time points in that period.

The metrics are described as follows (see the sketch after this table):

  • egressDropBandwidth: dropped outbound traffic (bytes)

  • egressAcceptBandwidth: accepted outbound traffic (bytes)

  • egressMaxBandwidthPerSec: peak outbound bandwidth (bytes/s)

  • ingressAcceptBandwidth: accepted inbound traffic (bytes)

  • ingressMaxBandwidthPerSec: peak inbound bandwidth (bytes/s)

  • ingressDropBandwidth: dropped inbound traffic (bytes)

Check whether the EIP bandwidth keeps increasing and whether services are normal. Increase bandwidth if necessary.

The network becomes slow or packets are lost.

Delete EIP

deleteEip

Minor

The EIP was released.

Check whether the EIP was released by mistake.

The server that has the EIP bound cannot access the Internet.

EIP blocked

blockEIP

Critical

The used bandwidth of an EIP exceeded 5 Gbit/s, so the EIP was blocked and its packets were discarded. Such an event is typically caused by DDoS attacks.

Replace the EIP to prevent services from being affected.

Locate and rectify the fault.

Services are impacted.

EIP unblocked

unblockEIP

Critical

The EIP was unblocked.

Use the previous EIP again.

None

Start DDoS traffic scrubbing

ddosCleanEIP

Major

Traffic scrubbing on the EIP was started to prevent DDoS attacks.

Check whether the EIP was attacked.

Services may be interrupted.

Stop DDoS traffic scrubbing

ddosEndCleanEip

Major

Traffic scrubbing on the EIP against DDoS attacks was completed.

Check whether the EIP was attacked.

Services may be interrupted.

Enterprise-class QoS bandwidth limit exceeded

EIPBandwidthRuleOverflow

Major

The used QoS bandwidth exceeded the allocated bandwidth, which may slow down the network or cause packet loss. The value reported by this event is the maximum within a monitoring period, whereas the EIP inbound and outbound bandwidth metrics report values at specific time points in that period.

The metrics are described as follows (see the sketch after this table):

  • egressDropBandwidth: dropped outbound traffic (bytes)

  • egressAcceptBandwidth: accepted outbound traffic (bytes)

  • egressMaxBandwidthPerSec: peak outbound bandwidth (bytes/s)

  • ingressAcceptBandwidth: accepted inbound traffic (bytes)

  • ingressMaxBandwidthPerSec: peak inbound bandwidth (bytes/s)

  • ingressDropBandwidth: dropped inbound traffic (bytes)

Check whether the EIP bandwidth keeps increasing and whether services are normal. Increase bandwidth if necessary.

The network becomes slow or packets are lost.
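
Both bandwidth-overflow events in this table report their metrics as byte counters over a monitoring period, while purchased or allocated bandwidth is usually quoted in bits per second. The following Python sketch shows one way to relate the two; the payload shape is an assumption for illustration, not a documented API response.

# Illustrative only: interpret the metrics attached to EIPBandwidthOverflow /
# EIPBandwidthRuleOverflow events. The dictionary shape below is assumed.
sample_event = {
    "egressDropBandwidth": 1_250_000,      # dropped outbound traffic (bytes)
    "egressAcceptBandwidth": 98_750_000,   # accepted outbound traffic (bytes)
    "egressMaxBandwidthPerSec": 680_000,   # peak outbound rate (bytes/s)
    "ingressDropBandwidth": 0,
    "ingressAcceptBandwidth": 42_000_000,
    "ingressMaxBandwidthPerSec": 310_000,
}

def summarize(metrics, purchased_bit_per_s):
    """Print drop ratios and compare peak rates with the purchased bandwidth."""
    for direction in ("egress", "ingress"):
        dropped = metrics[f"{direction}DropBandwidth"]
        accepted = metrics[f"{direction}AcceptBandwidth"]
        total = dropped + accepted
        drop_ratio = dropped / total if total else 0.0
        # The per-second metric is in bytes/s; purchased bandwidth is in bits/s.
        peak_bit_per_s = metrics[f"{direction}MaxBandwidthPerSec"] * 8
        print(f"{direction}: drop ratio {drop_ratio:.2%}, "
              f"peak {peak_bit_per_s / 1e6:.2f} Mbit/s of "
              f"{purchased_bit_per_s / 1e6:.0f} Mbit/s purchased")

summarize(sample_event, purchased_bit_per_s=5_000_000)  # e.g. a 5 Mbit/s EIP

A sustained non-zero drop ratio, or a peak rate sitting at the purchased limit, is the signal the Solution column describes: consider increasing the bandwidth.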

Table 34 Enterprise Switch

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

Enterprise Switch

IP Address Conflict

IPConflict

Major

A cloud server and an on-premises server that need to communicate use the same IP address.

Check the ARP and switch information to locate the servers that share the IP address and change one of them. A sketch of duplicate address detection follows this table.

The communications between the on-premises and cloud servers may be abnormal.
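
A common way to confirm an IP address conflict such as the one in Table 34 is duplicate address detection (DAD) with the iputils arping tool. The Python sketch below wraps it; the interface and IP address are placeholders, and arping must be installed on the host running the check.

# Illustrative only: confirm a suspected IP address conflict with arping's
# duplicate address detection mode (-D). Interface and IP are placeholders.
import subprocess

def has_ip_conflict(ip, interface):
    """Return True if another host answers ARP probes for `ip`."""
    # In -D (DAD) mode, iputils arping exits with 0 when no reply was
    # received (address free) and non-zero when a conflicting host replied.
    result = subprocess.run(
        ["arping", "-D", "-I", interface, "-c", "3", ip],
        capture_output=True,
        text=True,
    )
    return result.returncode != 0

if __name__ == "__main__":
    if has_ip_conflict("192.168.0.10", "eth0"):  # placeholder values
        print("Conflict: another device answered for this address.")
    else:
        print("No conflict detected on this network segment.")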