
Events Supported by Event Monitoring

Table 1 Elastic Cloud Server (ECS)

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

ECS

Restart triggered due to system faults

startAutoRecovery

Major

ECSs on a faulty host were automatically migrated to another properly running host. During the migration, the ECSs were restarted.

Wait for the event to end and check whether services are affected.

Services may be interrupted.

Restart completed due to system faults

endAutoRecovery

Major

The ECS was recovered after the automatic migration.

This event indicates that the ECS has recovered and been working properly.

None

Auto recovery timeout (being processed on the backend)

faultAutoRecovery

Major

Migrating the ECS to a normal host timed out.

Migrate services to other ECSs.

Services are interrupted.

GPU link fault

GPULinkFault

Critical

The GPU of the host on which the ECS is located was faulty or was recovering from a fault.

Deploy service applications in HA mode.

After the GPU fault is rectified, check whether services are restored.

Services are interrupted.

ECS deleted

deleteServer

Major

The ECS was deleted

  • on the management console.
  • by calling APIs.

Check whether the deletion was performed intentionally by a user.

Services are interrupted.

ECS restarted

rebootServer

Minor

The ECS was restarted

  • on the management console.
  • by calling APIs.

Check whether the restart was performed intentionally by a user.

  • Deploy service applications in HA mode.
  • After the ECS starts up, check whether services recover.

Services are interrupted.

ECS stopped

stopServer

Minor

The ECS was stopped

  • on the management console.
  • by calling APIs.
NOTE:

This event is reported only after CTS is enabled. For details, see Cloud Trace Service User Guide.

  • Check whether the stop operation was performed intentionally by a user.
  • Deploy service applications in HA mode.
  • After the ECS starts up, check whether services recover.

Services are interrupted.

NIC deleted

deleteNic

Major

The ECS NIC was deleted

  • on the management console.
  • by calling APIs.

  • Check whether the deletion was performed intentionally by a user.
  • Deploy service applications in HA mode.
  • After the NIC is deleted, check whether services recover.

Services may be interrupted.

ECS resized

resizeServer

Minor

The ECS specifications were resized

  • on the management console.
  • by calling APIs.

  • Check whether the operation was performed by a user.
  • Deploy service applications in HA mode.
  • After the ECS is resized, check whether services have recovered.

Services are interrupted.

GuestOS restarted

RestartGuestOS

Minor

The guest OS was restarted.

Contact O&M personnel.

Services may be interrupted.

ECS failure caused by system faults

VMFaultsByHostProcessExceptions

Critical

The host where the ECS resides is faulty. The system will automatically try to start the ECS.

After the ECS is started, check whether this ECS and services on it can run properly.

The ECS is faulty.

Startup failure

faultPowerOn

Major

The ECS failed to start.

Start the ECS again. If the problem persists, contact O&M personnel.

The ECS cannot start.

Host breakdown risk

hostMayCrash

Major

The host where the ECS resides may break down, and live migration cannot be performed to avoid the risk.

Migrate services running on the ECS first and delete or stop the ECS. Start the ECS only after the O&M personnel eliminate the risk.

The host may break down, causing service interruption.

Scheduled migration completed

instance_migrate_completed

Major

Scheduled ECS migration is completed.

Wait until the ECSs become available and check whether services are affected.

Services may be interrupted.

Scheduled migration being executed

instance_migrate_executing

Major

ECSs are being migrated as scheduled.

Wait until the event is complete and check whether services are affected.

Services may be interrupted.

Scheduled migration canceled

instance_migrate_canceled

Major

Scheduled ECS migration is canceled.

None

None

Scheduled migration failed

instance_migrate_failed

Major

ECSs failed to be migrated as scheduled.

Contact O&M personnel.

Services are interrupted.

Scheduled migration to be executed

instance_migrate_scheduled

Major

ECSs will be migrated as scheduled.

Check the impact on services during the execution window.

None

Scheduled specification modification failed

instance_resize_failed

Major

Specifications failed to be modified as scheduled.

Contact O&M personnel.

Services are interrupted.

Scheduled specification modification completed

instance_resize_completed

Major

Scheduled specifications modification is completed.

None

None

Scheduled specification modification being executed

instance_resize_executing

Major

Specifications are being modified as scheduled.

Wait until the event is completed and check whether services are affected.

Services are interrupted.

Scheduled specification modification canceled

instance_resize_canceled

Major

Scheduled specifications modification is canceled.

None

None

Scheduled specification modification to be executed

instance_resize_scheduled

Major

Specifications will be modified as scheduled.

Check the impact on services during the execution window.

None

Scheduled redeployment to be executed

instance_redeploy_scheduled

Major

ECSs will be redeployed on new hosts as scheduled.

Check the impact on services during the execution window.

None

Scheduled restart to be executed

instance_reboot_scheduled

Major

ECSs will be restarted as scheduled.

Check the impact on services during the execution window.

None

Scheduled stop to be executed

instance_stop_scheduled

Major

ECSs will be stopped as scheduled because they are affected by underlying hardware or system O&M.

Check the impact on services during the execution window.

None

Live migration started

liveMigrationStarted

Major

The host where the ECS is located may be faulty. The ECS is live migrated in advance to prevent service interruptions caused by a host breakdown.

Wait for the event to end and check whether services are affected.

Services may be interrupted for less than 1s.

Live migration completed

liveMigrationCompleted

Major

The live migration is complete, and the ECS is running properly.

Check whether services are running properly.

None

Live migration failure

liveMigrationFailed

Major

An error occurred during the live migration of an ECS.

Check whether services are running properly.

There is a low probability that services are interrupted.

ECC uncorrectable error alarm generated on GPU SRAM

SRAMUncorrectableEccError

Major

There are ECC uncorrectable errors generated on GPU SRAM.

If services are affected, submit a service ticket.

The GPU hardware may be faulty. As a result, the SRAM is faulty, and services exit abnormally.

FPGA link fault

FPGALinkFault

Critical

The FPGA of the host on which the ECS is located was

  • faulty.
  • recovering from a fault.

Deploy service applications in HA mode.

After the FPGA fault is rectified, check whether services are restored.

Services are interrupted.

Scheduled redeployment to be authorized

instance_redeploy_inquiring

Major

Affected by underlying hardware or system O&M, ECSs will be redeployed on new hosts as scheduled.

Authorize scheduled redeployment.

None

Local disk replacement canceled

localdisk_recovery_canceled

Major

Local disk failure

None

None

Local disk replacement to be executed

localdisk_recovery_scheduled

Major

Local disk failure

Check the impact on services during the execution window.

None

Xid event alarm generated on GPU

commonXidError

Major

An Xid event alarm occurred on the GPU.

If services are affected, submit a service ticket.

GPU hardware, driver, or application problems can cause Xid events, which may cause services to exit abnormally.

nvidia-smi suspended

nvidiaSmiHangEvent

Major

nvidia-smi timed out.

If services are affected, submit a service ticket.

The driver may report an error during service running.

NPU: uncorrectable ECC error

UncorrectableEccErrorCount

Major

There are uncorrectable ECC errors generated on the NPU.

If services are affected, replace the NPU with another one.

Services may be interrupted.

Scheduled redeployment canceled

instance_redeploy_canceled

Major

Affected by underlying hardware or system O&M, ECSs will be redeployed on new hosts as scheduled.

None

None

Scheduled redeployment being executed

instance_redeploy_executing

Major

Affected by underlying hardware or system O&M, ECSs will be redeployed on new hosts as scheduled.

Wait until the event is complete and check whether services are affected.

Services are interrupted.

Scheduled redeployment completed

instance_redeploy_completed

Major

Affected by underlying hardware or system O&M, ECSs will be redeployed on new hosts as scheduled.

Wait until the redeployed ECSs are available and check whether services are affected.

None

Scheduled redeployment failed

instance_redeploy_failed

Major

Affected by underlying hardware or system O&M, ECSs will be redeployed on new hosts as scheduled.

Contact O&M personnel.

Services are interrupted.

Local disk replacement to be authorized

localdisk_recovery_inquiring

Major

Local disks are faulty.

Authorize local disk replacement.

Local disks are unavailable.

Local disks being replaced

localdisk_recovery_executing

Major

Local disk failure

Wait until the local disks are replaced and check whether the local disks are available.

Local disks are unavailable.

Local disks replaced

localdisk_recovery_completed

Major

Local disk failure

Wait until the services are running properly and check whether local disks are available.

None

Local disk replacement failed

localdisk_recovery_failed

Major

Local disks are faulty.

Contact O&M personnel.

Local disks are unavailable.

Note

Once a physical host running ECSs breaks down, the ECSs are automatically migrated to a functional physical host. During the migration, the ECSs will be restarted.
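
The events above differ mainly in severity and in whether their documented impact involves a service interruption. Below is a minimal, hypothetical Python sketch (not part of any service SDK) showing one way to triage a subset of the event IDs from Table 1 by the severities listed there; the triage policy itself is an illustrative assumption.

```python
# Hypothetical triage sketch for some ECS event IDs from Table 1.
# Event IDs and severities are copied from the table; the policy of paging
# only on Critical events is an assumption for illustration.

ECS_EVENT_SEVERITY = {
    "startAutoRecovery": "Major",
    "endAutoRecovery": "Major",
    "faultAutoRecovery": "Major",
    "GPULinkFault": "Critical",
    "deleteServer": "Major",
    "rebootServer": "Minor",
    "stopServer": "Minor",
    "deleteNic": "Major",
    "resizeServer": "Minor",
    "RestartGuestOS": "Minor",
    "VMFaultsByHostProcessExceptions": "Critical",
    "faultPowerOn": "Major",
    "hostMayCrash": "Major",
}

def needs_immediate_attention(event_id: str) -> bool:
    """Treat Critical events as page-worthy; everything else as ticket-worthy."""
    return ECS_EVENT_SEVERITY.get(event_id) == "Critical"

for event_id in ("GPULinkFault", "rebootServer"):
    action = "page on-call" if needs_immediate_attention(event_id) else "open a ticket"
    print(event_id, "->", action)
```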

Table 2 Elastic IP (EIP)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

EIP

SYS.EIP

EIP bandwidth exceeded

EIPBandwidthOverflow

Major

The used bandwidth exceeded the purchased bandwidth, which may slow down the network or cause packet loss. The value reported for this event is the maximum value in a monitoring period; the EIP inbound and outbound bandwidth values are taken at a specific point in time within that period.

The metrics are described as follows:

  • egressDropBandwidth: dropped outbound packets (bytes)
  • egressAcceptBandwidth: accepted outbound packets (bytes)
  • egressMaxBandwidthPerSec: peak outbound bandwidth (byte/s)
  • ingressAcceptBandwidth: accepted inbound packets (bytes)
  • ingressMaxBandwidthPerSec: peak inbound bandwidth (byte/s)
  • ingressDropBandwidth: dropped inbound packets (bytes)

Check whether the EIP bandwidth keeps increasing and whether services are normal. Increase bandwidth if necessary (see the sketch after this table).

The network becomes slow or packets are lost.

EIP released

deleteEip

Minor

The EIP was released.

Check whether the EIP was released by mistake.

The server that has the EIP bound cannot access the Internet.

EIP blocked

blockEIP

Critical

Because the used bandwidth of the EIP exceeded 5 Gbit/s, the EIP was blocked and packets were discarded. Such an event may be caused by DDoS attacks.

Replace the EIP to prevent services from being affected.

Locate and deal with the fault.

Services are impacted.

EIP unblocked

unblockEIP

Critical

The EIP was unblocked.

Use the previous EIP again.

None

EIP traffic scrubbing started

ddosCleanEIP

Major

Traffic scrubbing on the EIP was started to prevent DDoS attacks.

Check whether the EIP was attacked.

Services may be interrupted.

EIP traffic scrubbing ended

ddosEndCleanEip

Major

Traffic scrubbing on the EIP, which was performed to prevent DDoS attacks, has ended.

Check whether the EIP was attacked.

Services may be interrupted.

QoS bandwidth exceeded

EIPBandwidthRuleOverflow

Major

The used QoS bandwidth exceeded the allocated bandwidth, which may slow down the network or cause packet loss. The value reported for this event is the maximum value in a monitoring period; the EIP inbound and outbound bandwidth values are taken at a specific point in time within that period.

The metrics are described as follows:

  • egressDropBandwidth: dropped outbound packets (bytes)
  • egressAcceptBandwidth: accepted outbound packets (bytes)
  • egressMaxBandwidthPerSec: peak outbound bandwidth (byte/s)
  • ingressAcceptBandwidth: accepted inbound packets (bytes)
  • ingressMaxBandwidthPerSec: peak inbound bandwidth (byte/s)
  • ingressDropBandwidth: dropped inbound packets (bytes)

Check whether the EIP bandwidth keeps increasing and whether services are normal. Increase bandwidth if necessary.

The network becomes slow or packets are lost.
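
The EIPBandwidthOverflow and EIPBandwidthRuleOverflow rows above expose per-period drop counters and peak rates. Below is a minimal, hypothetical Python sketch of how such a sample might be interpreted when deciding whether to increase bandwidth; the function name, threshold logic, and sample values are illustrative assumptions, not part of the monitoring service.

```python
# Hypothetical interpretation of the metrics listed for EIPBandwidthOverflow.
# Metric names come from Table 2; everything else is an assumption.

def bandwidth_upgrade_recommended(sample: dict, purchased_mbit_per_s: float) -> bool:
    """Recommend an upgrade if packets were dropped or the peak rate hit the purchased limit."""
    purchased_bytes_per_s = purchased_mbit_per_s * 1_000_000 / 8  # Mbit/s -> byte/s
    dropped = sample.get("egressDropBandwidth", 0) + sample.get("ingressDropBandwidth", 0)
    peak = max(sample.get("egressMaxBandwidthPerSec", 0),
               sample.get("ingressMaxBandwidthPerSec", 0))
    return dropped > 0 or peak >= purchased_bytes_per_s

sample = {
    "egressDropBandwidth": 4096,            # bytes dropped outbound in the period
    "egressAcceptBandwidth": 10_000_000,    # bytes accepted outbound in the period
    "egressMaxBandwidthPerSec": 1_310_720,  # ~10 Mbit/s peak
    "ingressDropBandwidth": 0,
}
print(bandwidth_upgrade_recommended(sample, purchased_mbit_per_s=10))  # True
```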

Table 3 Elastic Load Balance (ELB)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

ELB

SYS.ELB

The backend servers are unhealthy.

healthCheckUnhealthy

Major

Generally, this problem occurs because backend server services are offline. This event is no longer reported after it has been reported several times.

Ensure that the backend servers are running properly.

ELB does not forward requests to unhealthy backend servers. If all backend servers in the backend server group are detected as unhealthy, services will be interrupted (see the sketch after this table).

The backend server is detected healthy.

healthCheckRecovery

Minor

The backend server is detected healthy.

No further action is required.

The load balancer can properly route requests to the backend server.
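
As the rows above describe, the load balancer forwards requests only to backend servers that pass health checks, and services are interrupted when every backend in a group is unhealthy. The short Python sketch below illustrates that routing rule; the data structure and function are hypothetical and are not ELB APIs.

```python
# Hypothetical illustration of the health-check routing rule in Table 3.
import random
from typing import Dict, Optional

def pick_backend(backends: Dict[str, bool]) -> Optional[str]:
    """backends maps server address -> health-check result; None means no healthy backend."""
    healthy = [addr for addr, ok in backends.items() if ok]
    return random.choice(healthy) if healthy else None

group = {"192.168.0.10:80": True, "192.168.0.11:80": False}
print(pick_backend(group))                        # only healthy members are chosen
print(pick_backend({"192.168.0.11:80": False}))   # None: all unhealthy, requests cannot be served
```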

Table 4 Cloud Backup and Recovery (CBR)

Event Source

Event Name

Event ID

Event Severity

Description

Solution

Impact

CBR

Failed to create the backup.

backupFailed

Critical

The backup failed to be created.

Manually create a backup or contact customer service.

Data loss may occur.

Failed to restore the resource using a backup.

restorationFailed

Critical

The resource failed to be restored using a backup.

Restore the resource using another backup or contact customer service.

Data loss may occur.

Failed to delete the backup.

backupDeleteFailed

Critical

The backup failed to be deleted.

Try again later or contact customer service.

Charging may be abnormal.

Failed to delete the vault.

vaultDeleteFailed

Critical

The vault failed to be deleted.

Try again later or contact technical support.

Charging may be abnormal.

Replication failure

replicationFailed

Critical

The backup failed to be replicated.

Try again later or contact technical support.

Data loss may occur.

The backup is created successfully.

backupSucceeded

Major

The backup was created.

None

None

Resource restoration using a backup succeeded.

restorationSucceeded

Major

The resource was restored using a backup.

Check whether the data is successfully restored.

None

The backup is deleted successfully.

backupDeletionSucceeded

Major

The backup was deleted.

None

None

The vault is deleted successfully.

vaultDeletionSucceeded

Major

The vault was deleted.

None

None

Replication success

replicationSucceeded

Major

The backup was replicated successfully.

None

None

Client offline

agentOffline

Critical

The backup client was offline.

Ensure that the Agent status is normal and that the backup client can be connected.

Backup tasks may fail.

Client online

agentOnline

Major

The backup client was online.

None

None

Table 5 Relational Database Service (RDS) — resource exception

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

RDS

SYS.RDS

DB instance creation failure

createInstanceFailed

Major

Generally, the cause is that the number of disks is insufficient due to quota limits, or underlying resources are exhausted.

The selected resource specifications are insufficient. Select other available specifications and try again.

DB instances cannot be created.

Full backup failure

fullBackupFailed

Major

A single full backup failure does not affect the files that have been successfully backed up, but it prolongs the incremental backup time during point-in-time restore (PITR).

Try again.

Restoration using backups will be affected.

Read replica promotion failure

activeStandBySwitchFailed

Major

The standby DB instance does not take over workloads from the primary DB instance due to network or server failures. The original primary DB instance continues to provide services within a short time.

Perform the operation again during off-peak hours.

Read replica promotion failed.

Replication status abnormal

abnormalReplicationStatus

Major

The possible causes are as follows:

The replication delay between the primary instance and the standby instance or a read replica is too long, which usually occurs when a large amount of data is being written to databases or a large transaction is being processed. During peak hours, data may be blocked.

The network between the primary instance and the standby instance or a read replica is disconnected.

The issue is being fixed. Please wait for our notifications.

The replication status is abnormal.

Replication status recovered

replicationStatusRecovered

Major

The replication delay between the primary and standby instances is within the normal range, or the network connection between them has been restored.

Check whether services are running properly.

Replication status is recovered.

DB instance faulty

faultyDBInstance

Major

A single or primary DB instance was faulty due to a catastrophic failure, for example, server failure.

The issue is being fixed. Please wait for our notifications.

The instance status is abnormal.

DB instance recovered

DBInstanceRecovered

Major

RDS rebuilds the standby DB instance using its high availability capability. After the instance is rebuilt, this event is reported.

The DB instance status is normal. Check whether services are running properly.

The instance is recovered.

Failure of changing single DB instance to primary/standby

singleToHaFailed

Major

A fault occurs when RDS is creating the standby DB instance or configuring replication between the primary and standby DB instances. The fault may occur because resources are insufficient in the data center where the standby DB instance is located.

Automatic retry is in progress.

Changing a single DB instance to primary/standby failed.

Database process restarted

DatabaseProcessRestarted

Major

The database process is stopped due to insufficient memory or high load.

Check whether services are running properly.

The primary instance is restarted. Services are interrupted for a short period of time.

Instance storage full

instanceDiskFull

Major

Generally, the cause is that the data space usage is too high.

Scale up the storage.

The instance storage is used up. No data can be written into databases.

Instance storage full recovered

instanceDiskFullRecovered

Major

The instance disk is recovered.

Check whether services are running properly.

The instance has available storage.

Kafka connection failed

kafkaConnectionFailed

Major

The network is unstable or the Kafka server does not work properly.

Check whether services are affected.

None

Table 6 Document Database Service (DDS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

DDS

SYS.DDS

DB instance creation failure

DDSCreateInstanceFailed

Major

A DDS instance fails to be created due to insufficient disks, quota, or underlying resources.

Check the number and quota of disks. Release resources and create DDS instances again.

DDS instances cannot be created.

Replication failed

DDSAbnormalReplicationStatus

Major

The possible causes are as follows:

The replication delay between the primary instance and the standby instance or a read replica is too long, which usually occurs when a large amount of data is being written to databases or a large transaction is being processed. During peak hours, data may be blocked.

The network between the primary instance and the standby instance or a read replica is disconnected.

Submit a service ticket.

Your applications are not affected because this event does not interrupt data read and write.

Replication recovered

DDSReplicationStatusRecovered

Major

The replication delay between the primary and standby instances is within the normal range, or the network connection between them has been restored.

No action is required.

None

DB instance failed

DDSFaultyDBInstance

Major

This event is a key alarm event and is reported when an instance is faulty due to a disaster or a server failure.

Submit a service ticket.

The database service may be unavailable.

DB instance recovered

DDSDBInstanceRecovered

Major

If a disaster occurs, NoSQL provides an HA tool to automatically or manually rectify the fault. After the fault is rectified, this event is reported.

No action is required.

None

Faulty node

DDSFaultyDBNode

Major

This event is a key alarm event and is reported when a database node is faulty due to a disaster or a server failure.

Check whether the database service is available and submit a service ticket.

The database service may be unavailable.

Node recovered

DDSDBNodeRecovered

Major

If a disaster occurs, NoSQL provides an HA tool to automatically or manually rectify the fault. After the fault is rectified, this event is reported.

No action is required.

None

Primary/standby switchover or failover

DDSPrimaryStandbySwitched

Major

A primary/standby switchover is performed or a failover is triggered.

No action is required.

None

Insufficient storage space

DDSRiskyDataDiskUsage

Major

The storage space is insufficient.

Scale up storage space. For details, see section "Scaling Up Storage Space" in the corresponding user guide.

The instance is set to read-only and data cannot be written to the instance.

Data disk expanded and being writable

DDSDataDiskUsageRecovered

Major

The capacity of a data disk has been expanded and the data disk becomes writable.

No further action is required.

No adverse impact.

Schedule for deleting a KMS key

DDSplanDeleteKmsKey

Major

A request to schedule deletion of a KMS key was submitted.

After the KMS key is scheduled to be deleted, either decrypt the data encrypted by the KMS key in a timely manner or cancel the key deletion.

After the KMS key is deleted, users cannot encrypt disks.

Table 7 Distributed Database Middleware (DDM)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

DDM

SYS.DDM

Failed to create a DDM instance

createDdmInstanceFailed

Major

The underlying resources are insufficient.

Release resources and create the instance again.

DDM instances cannot be created.

Failed to change class of a DDM instance

resizeFlavorFailed

Major

The underlying resources are insufficient.

Submit a service ticket to the O&M personnel to coordinate resources and try again.

Services on some nodes are interrupted.

Failed to scale out a DDM instance

enlargeNodeFailed

Major

The underlying resources are insufficient.

Submit a service ticket to the O&M personnel to coordinate resources, delete the node that fails to be added, and add a node again.

The instance fails to be scaled out.

Failed to scale in a DDM instance

reduceNodeFailed

Major

The underlying resources fail to be released.

Submit a service ticket to the O&M personnel to release resources.

The instance fails to be scaled in.

Failed to restart a DDM instance

restartInstanceFailed

Major

The DB instances associated are abnormal.

Check whether DB instances associated are normal. If the instances are normal, submit a service ticket to the O&M personnel.

Services on some nodes are interrupted.

Failed to create a schema

createLogicDbFailed

Major

The possible causes are as follows:

  • The password for the DB instance account is incorrect.
  • The security groups of the DDM instance and the associated DB instance are incorrectly configured. As a result, the DDM instance cannot communicate with the associated DB instance.

Check whether

  • The username and password of the DB instance are correct.
  • The security groups associated with the DDM instance and underlying database instance are correctly configured.

Services cannot run properly.

Failed to bind an EIP

bindEipFailed

Major

The EIP is abnormal.

Try again later. In case of emergency, contact O&M personnel to rectify the fault.

The DDM instance cannot be accessed from the Internet.

Failed to scale out a schema

migrateLogicDbFailed

Major

The underlying resources fail to be processed.

Submit a service ticket to the O&M personnel.

The schema cannot be scaled out.

Failed to re-scale out a schema

retryMigrateLogicDbFailed

Major

The underlying resources fail to be processed.

Submit a service ticket to the O&M personnel.

The schema cannot be scaled out.

Table 8 Elastic IP and bandwidth

Event Source

Namespace

Event Name

Event ID

Event Severity

Elastic IP and bandwidth

SYS.VPC

VPC deleted

deleteVpc

Major

VPC modified

modifyVpc

Minor

Subnet deleted

deleteSubnet

Minor

Subnet modified

modifySubnet

Minor

Bandwidth modified

modifyBandwidth

Minor

VPN deleted

deleteVpn

Major

VPN modified

modifyVpn

Minor

Table 9 Elastic Volume Service (EVS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

EVS

SYS.EVS

Update disk

updateVolume

Minor

Update the name and description of an EVS disk.

No further action is required.

None

Expand disk

extendVolume

Minor

Expand an EVS disk.

No further action is required.

None

Delete disk

deleteVolume

Major

Delete an EVS disk.

No further action is required.

Deleted disks cannot be recovered.

QoS upper limit reached

reachQoS

Major

The I/O latency increases as the QoS upper limits of the disk are frequently reached and flow control is triggered.

Change the disk type to one with a higher specification.

The current disk may fail to meet service requirements.

Table 10 Identity and Access Management (IAM)

Event Source

Namespace

Event Name

Event ID

Event Severity

IAM

SYS.IAM

Login

login

Minor

Logout

logout

Minor

Password changed

changePassword

Major

User created

createUser

Minor

User deleted

deleteUser

Major

User updated

updateUser

Minor

User group created

createUserGroup

Minor

User group deleted

deleteUserGroup

Major

User group updated

updateUserGroup

Minor

Identity provider created

createIdentityProvider

Minor

Identity provider deleted

deleteIdentityProvider

Major

Identity provider updated

updateIdentityProvider

Minor

Metadata updated

updateMetadata

Minor

Security policy updated

updateSecurityPolicies

Major

Credential added

addCredential

Major

Credential deleted

deleteCredential

Major

Project created

createProject

Minor

Project updated

updateProject

Minor

Project suspended

suspendProject

Major

Table 11 Key Management Service (KMS)

Event Source

Namespace

Event Name

Event ID

Event Severity

KMS

SYS.KMS

Key disabled

disableKey

Major

Key deletion scheduled

scheduleKeyDeletion

Minor

Grant retired

retireGrant

Major

Grant revoked

revokeGrant

Major

Table 12 Object Storage Service (OBS)

Event Source

Namespace

Event Name

Event ID

Event Severity

OBS

SYS.OBS

Bucket deleted

deleteBucket

Major

Bucket policy deleted

deleteBucketPolicy

Major

Bucket ACL configured

setBucketAcl

Minor

Bucket policy configured

setBucketPolicy

Minor

Table 13 Cloud Eye

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Cloud Eye

SYS.CES

Agent heartbeat interruption

agentHeartbeatInterrupted

Major

The Agent sends a heartbeat message to Cloud Eye every minute. If Cloud Eye cannot receive a heartbeat for 3 minutes, Agent Status is displayed as Faulty.

  • Check whether the Agent domain name can be resolved.
  • Check whether your account is in arrears.
  • The Agent process may be faulty. Restart the Agent. If the Agent process is still faulty after the restart, the Agent files may be damaged. In this case, reinstall the Agent.
  • Check whether the server time is consistent with the local standard time.
  • Update the Agent to the latest version.

Agent back to normal

agentResumed

Informational

The Agent was back to normal.

No further action is required.

Agent faulty

agentFaulty

Major

The Agent was faulty and this status was reported to Cloud Eye.

The Agent process is faulty. Restart the Agent. If the Agent process is still faulty after the restart, the Agent files may be damaged. In this case, reinstall the Agent.

Update the Agent to the latest version.

Agent disconnected

agentDisconnected

Major

The Agent sends a heartbeat message to Cloud Eye every minute. If Cloud Eye cannot receive a heartbeat for 3 minutes, Agent Status is displayed as Faulty (see the sketch after this table).

  • Check whether the Agent domain name can be resolved.
  • Check whether your account is in arrears.
  • The Agent process may be faulty. Restart the Agent. If the Agent process is still faulty after the restart, the Agent files may be damaged. In this case, reinstall the Agent.
  • Check whether the server time is consistent with the local standard time.
  • Update the Agent to the latest version.
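
Both Agent heartbeat rows in Table 13 describe the same rule: the Agent reports a heartbeat every minute, and a gap of 3 minutes or more is displayed as Faulty. The Python sketch below restates that staleness rule; the function and its inputs are illustrative assumptions, not a Cloud Eye API.

```python
# Hypothetical restatement of the heartbeat rule described in Table 13.
from datetime import datetime, timedelta, timezone

HEARTBEAT_TIMEOUT = timedelta(minutes=3)  # no heartbeat for 3 minutes -> status shown as Faulty

def agent_status(last_heartbeat: datetime, now=None) -> str:
    """Return the displayed Agent status based on the last received heartbeat."""
    now = now or datetime.now(timezone.utc)
    return "Faulty" if now - last_heartbeat >= HEARTBEAT_TIMEOUT else "Running"

# Example: the last heartbeat arrived five minutes ago.
last = datetime.now(timezone.utc) - timedelta(minutes=5)
print(agent_status(last))  # Faulty: check DNS resolution, arrears, the Agent process, and clock skew
```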

Table 14 Distributed Cache Service (DCS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

DCS

SYS.DCS

Full sync retry during online migration

migrationFullResync

Minor

If online migration fails, full synchronization will be triggered because incremental synchronization cannot be performed.

Check whether full sync retries are triggered repeatedly. Check whether the source instance is connected and whether it is overloaded. If full sync retries are triggered repeatedly, contact O&M personnel.

The migration task is disconnected from the source instance, triggering another full sync. As a result, the CPU usage of the source instance may increase sharply.

Redis master/standby switchover

masterStandbyFailover

Minor

The master node was abnormal, promoting a replica to master.

Check the status of the original master node and rectify the fault.

None

Memcached master/standby switchover

memcachedMasterStandbyFailover

Minor

The master node was abnormal, promoting the standby node to master.

Check whether services can recover by themselves. If applications cannot recover, restart them.

Persistent connections to the instance will be interrupted.

Redis server abnormal

redisNodeStatusAbnormal

Major

The Redis server status was abnormal.

Check whether services are affected. If yes, contact O&M personnel.

If the master node is abnormal, an automatic failover is performed. If a standby node is abnormal and the client directly connects to the standby node for read/write splitting, no data can be read.

Redis server recovered

redisNodeStatusNormal

Major

The Redis server status recovered.

Check whether services can recover. If the applications are not reconnected, restart them.

The instance has recovered from the exception.

Sync failure in data migration

migrateSyncDataFail

Major

Online migration failed.

Reconfigure the migration task and migrate data again. If the fault persists, contact O&M personnel.

Data migration fails.

Memcached instance abnormal

memcachedInstanceStatusAbnormal

Major

The Memcached node status was abnormal.

Check whether services are affected. If yes, contact O&M personnel.

The Memcached instance is abnormal and may not be accessed.

Memcached instance recovered

memcachedInstanceStatusNormal

Major

The Memcached node status recovered.

Check whether services can recover. If the applications are not reconnected, restart them.

The instance has recovered from the exception.

Instance backup failure

instanceBackupFailure

Major

The DCS instance fails to be backed up due to an OBS access failure.

Retry backup manually.

Automated backup fails.

Instance node abnormal restart

instanceNodeAbnormalRestart

Major

DCS nodes restarted unexpectedly when they became faulty.

Check whether services can recover. If the applications are not reconnected, restart them.

Persistent connections to the instance will be interrupted.

Long-running Lua scripts stopped

scriptsStopped

Informational

Lua scripts that had timed out automatically stopped running.

Optimize Lua scripts to prevent execution timeout.

If Lua scripts take a long time to execute, they will be forcibly stopped to avoid blocking the entire instance.

Node restarted

nodeRestarted

Informational

After write operations had been performed, the node automatically restarted to stop Lua scripts that had timed out.

Check whether services can recover by themselves. If applications cannot recover, restart them.

Persistent connections to the instance will be interrupted.

Table 15 Host Security Service (HSS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

HSS

SYS.HSS

HSS agent disconnected

hssAgentAbnormalOffline

Major

The communication between the agent and the server is abnormal, or the agent process on the server is abnormal.

Fix your network connection. If the agent is still offline for a long time after the network recovers, the agent process may be abnormal. In this case, log in to the server and restart the agent process.

Services are interrupted.

Abnormal HSS agent status

hssAgentAbnormalProtection

Major

The agent is abnormal probably because it does not have sufficient resources.

Log in to the server and check your resources. If the usage of memory or other system resources is too high, increase their capacity first. If the resources are sufficient but the fault persists after the agent process is restarted, submit a service ticket to the O&M personnel.

Services are interrupted.

Table 16 Image Management Service (IMS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

IMS

SYS.IMS

Create Image

createImage

Major

An image was created.

None

You can use this image to create cloud servers.

Update Image

updateImage

Major

Metadata of an image was modified.

None

Cloud servers may fail to be created from this image.

Delete Image

deleteImage

Major

An image was deleted.

None

This image will be unavailable on the management console.

Table 17 MapReduce Service (MRS)

Event Source

Namespace

Event Name

Event ID

Event Severity

Description

Solution

Impact

MRS

SYS.MRS

DBServer Switchover

dbServerSwitchover

Minor

The DBServer switchover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

Consecutive active/standby switchovers may affect Hive service availability.

Flume Channel overflow

flumeChannelOverflow

Minor

Flume Channel overflow

Check whether the Flume channel configuration is proper and whether the service volume increases sharply.

Flume tasks cannot write data to the backend.

NameNode Switchover

namenodeSwitchover

Minor

The NameNode switchover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

Consecutive active/standby switchovers may cause HDFS file read/write failures.

ResourceManager Switchover

resourceManagerSwitchover

Minor

ResourceManager Switchover

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

Consecutive active/standby switchovers may cause exceptions or even failures of YARN tasks.

JobHistoryServer Switchover

jobHistoryServerSwitchover

Minor

The JobHistoryServer switchover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

Consecutive active/standby switchovers may cause failures to read MapReduce task logs.

HMaster Failover

hmasterFailover

Minor

The HMaster failover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

Consecutive active/standby switchovers may affect HBase service availability.

Hue Failover

hueFailover

Minor

The Hue failover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

The active/standby switchover may affect the display of the HUE page.

Impala HaProxy Failover

impalaHaProxyFailover

Minor

The Impala HaProxy switchover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

Consecutive active/standby switchovers may affect Impala service availability.

Impala StateStoreCatalog Failover

impalaStateStoreCatalogFailover

Minor

The Impala StateStoreCatalog failover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

Consecutive active/standby switchovers may affect Impala service availability.

LdapServer Failover

ldapServerFailover

Minor

The LdapServer failover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

Consecutive active/standby switchovers may affect LdapServer service availability.

Loader Switchover

loaderSwitchover

Minor

The Loader switchover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

The active/standby switchover may affect Loader service availability.

Manager Switchover

managerSwitchover

Informational

The Manager switchover occurs.

Confirm with O&M personnel whether the active/standby switchover is caused by normal operations.

The active/standby Manager switchover may make the Manager page inaccessible and cause abnormal values of some monitoring items.

Job Running Failed

jobRunningFailed

Informational

A job fails to be executed.

On the Jobs tab page, check whether the failed task is normal.

The job fails to be executed.

Job Killed

jobkilled

Informational

The job is terminated.

Check whether the task is manually terminated.

The job execution process is terminated.

Oozie Workflow Execution Failure

oozieWorkflowExecutionFailure

Minor

Oozie workflows fail to execute.

View Oozie logs to locate the failure cause.

Oozie workflows fail to execute.

Oozie Scheduled Job Execution Failure

oozieScheduledJobExecutionFailure

Minor

Oozie scheduled tasks fail to execute.

View Oozie logs to locate the failure cause.

Oozie scheduled tasks fail to execute.

ClickHouse Service Unavailable

clickHouseServiceUnavailable

Critical

The ClickHouse service is unavailable.

For details, see section "ALM-45425 ClickHouse Service Unavailable" in MapReduce Service User Guide.

The ClickHouse service is abnormal. Cluster operations cannot be performed on the ClickHouse service on FusionInsight Manager, and the ClickHouse service function cannot be used.

DBService Service Unavailable

dbServiceServiceUnavailable

Critical

DBService is unavailable.

For details, see section "ALM-27001 DBService Service Unavailable" in MapReduce Service User Guide.

The database service is unavailable and cannot provide data import and query functions for upper-layer services. As a result, service exceptions occur.

DBService Heartbeat Interruption Between the Active and Standby Nodes

dbServiceHeartbeatInterruptionBetweentheActiveAndStandbyNodes

Major

DBService Heartbeat Interruption Between the Active and Standby Nodes

For details, see section "ALM-27003 Heartbeat Interruption Between the Active and Standby Nodes" in MapReduce Service User Guide.

During the DBService heartbeat interruption, only one node can provide the service. If this node is faulty, no standby node is available for failover and the service is unavailable.

Data Inconsistency Between Active and Standby DBServices

dataInconsistencyBetweenActiveAndStandbyDBServices

Critical

Data Inconsistency Between Active and Standby DBServices

For details, see section "ALM-27004 Data Inconsistency Between Active and Standby DBService" in MapReduce Service User Guide.

When data is not synchronized between the active and standby DBServices, the data may be lost or abnormal if the active instance becomes abnormal.

Database Enters the Read-Only Mode

databaseEnterstheReadOnlyMode

Critical

The database enters the read-only mode.

For details, see section "ALM-27007 Database Enters the Read-Only Mode" in MapReduce Service User Guide.

The database enters the read-only mode, causing service data loss.

Flume Service Unavailable

flumeServiceUnavailable

Critical

Flume Service Unavailable

For details, see section "ALM-24000 Flume Service Unavailable" in MapReduce Service User Guide.

Flume is running abnormally and the data transmission service is interrupted.

Flume Agent Exception

flumeAgentException

Major

The Flume Agent is abnormal.

For details, see section "ALM-24001 Flume Agent Exception" in MapReduce Service User Guide.

The Flume agent instance for which the alarm is generated cannot provide services properly, and the data transmission tasks of the instance are temporarily interrupted. Real-time data is lost during real-time data transmission.

Flume Client Disconnection Alarm

flumeClientDisconnected

Major

Flume Client Disconnection Alarm

For details, see section "ALM-24003 Flume Client Interrupted" in MapReduce Service User Guide.

The Flume Client for which the alarm is generated cannot communicate with the Flume Server and the data of the Flume Client cannot be sent to the Flume Server.

Exception Occurs When Flume Reads Data

exceptionOccursWhenFlumeReadsData

Major

Exceptions occur when Flume reads data.

For details, see section "ALM-24004 Exception Occurs When Flume Reads Data" in MapReduce Service User Guide.

If data is found in the data source and Flume Source continuously fails to read data, the data collection is stopped.

Exception Occurs When Flume Transmits Data

exceptionOccursWhenFlumeTransmitsData

Major

Exceptions occur when Flume transmits data.

For details, see section "ALM-24005 Exception Occurs When Flume Transmits Data" in MapReduce Service User Guide.

If the disk usage of Flume Channel increases continuously, the time required for importing data to a specified destination prolongs. When the disk usage of Flume Channel reaches 100%, the Flume agent process pauses.

Flume Certificate File Is Invalid

flumeCertificateFileIsinvalid

Major

The Flume certificate file is invalid or damaged.

For details, see section "ALM-24010 Flume Certificate File Is Invalid or Damaged" in MapReduce Service User Guide.

The Flume certificate file is invalid or damaged, and the Flume client cannot access the Flume server.

Flume Certificate File Is About to Expire

flumeCertificateFileIsAboutToExpire

Major

The Flume certificate file is about to expire.

For details, see section "ALM-24011 Flume Certificate File Is About to Expire" in MapReduce Service User Guide.

The Flume certificate file is about to expire, which has no adverse impact on the system.

Flume Certificate File Is Expired

flumeCertificateFileIsExpired

Major

The Flume certificate file has expired.

For details, see section "ALM-24012 Flume Certificate File Has Expired" in MapReduce Service User Guide.

The Flume certificate file has expired and functions are restricted. The Flume client cannot access the Flume server.

Flume MonitorServer Certificate File Is Invalid

flumeMonitorServerCertificateFileIsInvalid

Major

The Flume MonitorServer certificate file is invalid.

For details, see section "ALM-24013 Flume MonitorServer Certificate File Is Invalid or Damaged" in MapReduce Service User Guide.

The MonitorServer certificate file is invalid or damaged, and the Flume client cannot access the Flume server.

Flume MonitorServer Certificate File Is About to Expire

flumeMonitorServerCertificateFileIsAboutToExpire

Major

The Flume MonitorServer certificate file is about to expire.

For details, see section "ALM-24014 Flume MonitorServer Certificate Is About to Expire" in MapReduce Service User Guide.

The MonitorServer certificate is about to expire, which has no adverse impact on the system.

Flume MonitorServer Certificate File Is Expired

flumeMonitorServerCertificateFileIsExpired

Major

The Flume MonitorServer certificate file has expired.

For details, see section "ALM-24015 Flume MonitorServer Certificate File Has Expired" in MapReduce Service User Guide.

The MonitorServer certificate file has expired and functions are restricted. The Flume client cannot access the Flume server.

HDFS Service Unavailable

hdfsServiceUnavailable

Critical

The HDFS service is unavailable.

For details, see section "ALM-14000 HDFS Service Unavailable" in MapReduce Service User Guide.

HDFS fails to provide services for HDFS service-based upper-layer components, such as HBase and MapReduce. As a result, users cannot read or write files.

NameService Service Unavailable

nameServiceServiceUnavailable

Major

The NameService service is abnormal.

For details, see section "ALM-14010 NameService Service Is Abnormal" in MapReduce Service User Guide.

HDFS fails to provide services for upper-layer components based on the NameService service, such as HBase and MapReduce. As a result, users cannot read or write files.

DataNode Data Directory Is Not Configured Properly

datanodeDataDirectoryIsNotConfiguredProperly

Major

The DataNode data directory is not configured properly.

For details, see section "ALM-14011 DataNode Data Directory Is Not Configured Properly" in MapReduce Service User Guide.

If the DataNode data directory is mounted on critical directories such as the root directory, the disk space of the root directory will be used up after running for a long time. This causes a system fault.

If the DataNode data directory is not configured properly, HDFS performance will deteriorate.

Journalnode Is Out of Synchronization

journalnodeIsOutOfSynchronization

Major

The Journalnode data is not synchronized.

For details, see section "ALM-14012 JournalNode Is Out of Synchronization" in MapReduce Service User Guide.

When a JournalNode is working incorrectly, data on the node is not synchronized with that on other JournalNodes. If data on more than half of JournalNodes is not synchronized, the NameNode cannot work correctly, making the HDFS service unavailable.

Failed to Update the NameNode FsImage File

failedToUpdateTheNameNodeFsImageFile

Major

The NameNode FsImage file failed to be updated.

For details, see section "ALM-14013 Failed to Update the NameNode FsImage File" in MapReduce Service User Guide.

If the FsImage file in the data directory of the active NameNode is not updated, the HDFS metadata combination function is abnormal and requires rectification. If it is not rectified, the Editlog files increase continuously after HDFS runs for a period. In this case, HDFS restart is time-consuming because a large number of Editlog files need to be loaded. In addition, this alarm also indicates that the standby NameNode is abnormal and the NameNode high availability (HA) mechanism becomes invalid. When the active NameNode is faulty, the HDFS service becomes unavailable.

DataNode Disk Fault

datanodeDiskFault

Major

The DataNode disk is faulty.

For details, see section "ALM-14027 DataNode Disk Fault" in MapReduce Service User Guide.

If a DataNode disk fault alarm is reported, a faulty disk partition exists on the DataNode. As a result, files that have been written may be lost.

Yarn Service Unavailable

yarnServiceUnavailable

Critical

The Yarn service is unavailable.

For details, see section "ALM-18000 Yarn Service Unavailable" in MapReduce Service User Guide.

The cluster cannot provide the Yarn service. Users cannot run new applications. Submitted applications cannot be run.

NodeManager Heartbeat Lost

nodemanagerHeartbeatLost

Major

The NodeManager heartbeat is lost.

For details, see section "ALM-18002 NodeManager Heartbeat Lost" in MapReduce Service User Guide.

The lost NodeManager node cannot provide the Yarn service.

The number of containers decreases, so the cluster performance deteriorates.

NodeManager Unhealthy

nodemanagerUnhealthy

Major

The NodeManager is unhealthy.

For details, see section "ALM-18003 NodeManager Unhealthy" in MapReduce Service User Guide.

The faulty NodeManager node cannot provide the Yarn service.

The number of containers decreases, so the cluster performance deteriorates.

Yarn Application Timeout

yarnApplicationTimeout

Minor

Yarn task execution timed out.

For details, see section "ALM-18020 Yarn Task Execution Timeout" in MapReduce Service User Guide.

The alarm persists after task execution times out. However, the task can still be properly executed, so this alarm does not exert any impact on the system.

MapReduce Service Unavailable

mapreduceServiceUnavailable

Critical

The MapReduce service is unavailable.

For details, see section "ALM-18021 MapReduce Service Unavailable" in MapReduce Service User Guide.

The cluster cannot provide the MapReduce service. For example, MapReduce cannot be used to view task logs and the log archive function is unavailable.

Insufficient Yarn Queue Resources

insufficientYarnQueueResources

Minor

Yarn queue resources are insufficient.

For details, see section "ALM-18022 Insufficient Yarn Queue Resources" in MapReduce Service User Guide.

It takes a long time to end an application.

A new application cannot run for a long time after submission.

HBase Service Unavailable

hbaseServiceUnavailable

Critical

The HBase service is unavailable.

For details, see section "ALM-19000 HBase Service Unavailable" in MapReduce Service User Guide.

Operations cannot be performed, such as reading or writing data and creating tables.

System Table Path or File of HBase Is Missing

systemTablePathOrFileOfHBaseIsMissing

Critical

The table directories or files of the HBase System are lost.

For details, see section "ALM-19012 HBase System Table Directory or File Lost" in MapReduce Service User Guide.

The HBase service fails to restart or start.

Hive Service Unavailable

hiveServiceUnavailable

Critical

The Hive service is unavailable.

For details, see section "ALM-16004 Hive Service Unavailable" in MapReduce Service User Guide.

Hive cannot provide data loading, query, and extraction services.

Hive Data Warehouse Is Deleted

hiveDataWarehouseIsDeleted

Critical

The Hive data warehouse is deleted.

For details, see section "ALM-16045 Hive Data Warehouse Is Deleted" in MapReduce Service User Guide.

If the default Hive data warehouse is deleted, databases and tables fail to be created in the default data warehouse, affecting service usage.

Hive Data Warehouse Permission Is Modified

hiveDataWarehousePermissionIsModified

Critical

The Hive data warehouse permissions are modified.

For details, see section "ALM-16046 Hive Data Warehouse Permission Is Modified" in MapReduce Service User Guide.

If the permissions on the Hive default data warehouse are modified, the permissions for users or user groups to create databases or tables in the default data warehouse are affected. The permissions will be expanded or reduced.

HiveServer has been deregistered from zookeeper

hiveServerHasBeenDeregisteredFromZookeeper

Major

HiveServer has been deregistered from zookeeper.

For details, see section "ALM-16047 HiveServer Has Been Deregistered from ZooKeeper" in MapReduce Service User Guide.

If Hive configurations cannot be read from ZooKeeper, HiveServer will be unavailable.

Tez or Spark Library Path Does Not Exist

tezlibOrSparklibIsNotExist

Major

The tez or spark library path does not exist.

For details, see section "ALM-16048 Tez or Spark Library Path Does Not Exist" in MapReduce Service User Guide.

The Hive on Tez and Hive on Spark functions are affected.

Hue Service Unavailable

hueServiceUnavailable

Critical

The Hue service is unavailable.

For details, see section "ALM-20002 Hue Service Unavailable" in MapReduce Service User Guide.

The system cannot provide data loading, query, and extraction services.

Impala Service Unavailable

impalaServiceUnavailable

Critical

The Impala service is unavailable.

For details, see section "ALM-29000 Impala Service Unavailable" in MapReduce Service User Guide.

The Impala service is abnormal. Cluster operations cannot be performed on Impala on FusionInsight Manager, and Impala service functions cannot be used.

Kafka Service Unavailable

kafkaServiceUnavailable

Critical

The Kafka service is unavailable.

For details, see section "ALM-38000 Kafka Service Unavailable" in MapReduce Service User Guide.

The cluster cannot provide the Kafka service, and users cannot perform new Kafka tasks.

Status of Kafka Default User Is Abnormal

statusOfKafkaDefaultUserIsAbnormal

Critical

The status of Kafka default user is abnormal.

For details, see section "ALM-38007 Status of Kafka Default User Is Abnormal" in MapReduce Service User Guide.

If the Kafka default user status is abnormal, metadata synchronization between Brokers and interaction between Kafka and ZooKeeper will be affected, affecting service production, consumption, and topic creation and deletion.

Abnormal Kafka Data Directory Status

abnormalKafkaDataDirectoryStatus

Major

The status of Kafka data directory is abnormal.

For details, see section "ALM-38008 Abnormal Kafka Data Directory Status" in MapReduce Service User Guide.

If the Kafka data directory status is abnormal, the current replicas of all partitions in the data directory are brought offline. If the data directory status of multiple nodes is abnormal at the same time, some partitions may become unavailable.

Topics with Single Replica

topicsWithSingleReplica

Warning

A topic with a single replica exists.

For details, see section "ALM-38010 Topics with Single Replica" in MapReduce Service User Guide.

There is the single point of failure (SPOF) risk for topics with only one replica. When the node where the replica resides becomes abnormal, the partition does not have a leader, and services on the topic are affected.

KrbServer Service Unavailable

krbServerServiceUnavailable

Critical

The KrbServer service is unavailable.

For details, see section "ALM-25500 KrbServer Service Unavailable" in MapReduce Service User Guide.

When this alarm is generated, no operation can be performed for the KrbServer component in the cluster. The authentication of KrbServer in other components will be affected. The running status of components that depend on KrbServer in the cluster is faulty.

Kudu Service Unavailable

kuduServiceUnavailable

Critical

The Kudu service is unavailable.

For details, see section "ALM-29100 Kudu Service Unavailable" in MapReduce Service User Guide.

Users cannot use the Kudu service.

LdapServer Service Unavailable

ldapServerServiceUnavailable

Critical

The LdapServer service is unavailable.

For details, see section "ALM-25000 LdapServer Service Unavailable" in MapReduce Service User Guide.

When this alarm is generated, no operation can be performed for the KrbServer users and LdapServer users in the cluster. For example, users, user groups, or roles cannot be added, deleted, or modified, and user passwords cannot be changed on the FusionInsight Manager portal. The authentication for existing users in the cluster is not affected.

Abnormal LdapServer Data Synchronization

abnormalLdapServerDataSynchronization

Critical

The LdapServer data synchronization is abnormal.

For details, see section "ALM-25004 Abnormal LdapServer Data Synchronization" in MapReduce Service User Guide.

LdapServer data inconsistency occurs because LdapServer data on Manager or in the cluster is damaged. The LdapServer process with damaged data cannot provide services externally, and the authentication functions of Manager and the cluster are affected.

Nscd Service Is Abnormal

nscdServiceIsAbnormal

Major

The Nscd service is abnormal.

For details, see section "ALM-25005 nscd Service Exception" in MapReduce Service User Guide.

If the Nscd service is abnormal, the node may fail to synchronize data from an LDAP server. In this case, running the id command may fail to obtain data from an LDAP server, affecting upper-layer services.

Sssd Service Is Abnormal

sssdServiceIsAbnormal

Major

The Sssd service is abnormal.

For details, see section "ALM-25006 Sssd Service Exception" in MapReduce Service User Guide.

If the Sssd service is abnormal, the node may fail to synchronize data from LdapServer. In this case, running the id command may fail to obtain LDAP data, affecting upper-layer services.

Loader Service Unavailable

loaderServiceUnavailable

Critical

The Loader service is unavailable.

For details, see section "ALM-23001 Loader Service Unavailable" in MapReduce Service User Guide.

When the Loader service is unavailable, the data loading, import, and conversion functions are unavailable.

Oozie Service Unavailable

oozieServiceUnavailable

Critical

The Oozie service is unavailable.

For details, see section "ALM-17003 Oozie Service Unavailable" in MapReduce Service User Guide.

The Oozie service cannot be used to submit jobs.

Ranger Service Unavailable

rangerServiceUnavailable

Critical

The Ranger service is unavailable.

For details, see section "ALM-45275 Ranger Service Unavailable" in MapReduce Service User Guide.

When the Ranger service is unavailable, Ranger cannot work properly and the Ranger native UI cannot be accessed.

Abnormal RangerAdmin status

abnormalRangerAdminStatus

Major

The RangerAdmin status is abnormal.

For details, see section "ALM-45276 Abnormal RangerAdmin Status" in MapReduce Service User Guide.

If the status of a single RangerAdmin is abnormal, the access to the Ranger native UI is not affected. If the status of two RangerAdmins is abnormal, the Ranger native UI cannot be accessed and operations such as creating, modifying, and deleting policies cannot be performed.

Spark2x Service Unavailable

spark2xServiceUnavailable

Critical

The Spark2x service is unavailable.

For details, see section "ALM-43001 Spark2x Service Unavailable" in MapReduce Service User Guide.

The Spark tasks submitted by users fail to be executed.

Storm Service Unavailable

stormServiceUnavailable

Critical

The Storm service is unavailable.

For details, see section "ALM-26051 Storm Service Unavailable" in MapReduce Service User Guide.

The cluster cannot provide the Storm service externally, and users cannot execute new Storm tasks.

ZooKeeper Service Unavailable

zooKeeperServiceUnavailable

Critical

The ZooKeeper service is unavailable.

For details, see section "ALM-13000 ZooKeeper Service Unavailable" in MapReduce Service User Guide.

ZooKeeper fails to provide coordination services for upper-layer components and the components depending on ZooKeeper may not run properly.

Failed to Set the Quota of Top Directories of ZooKeeper Component

failedToSetTheQuotaOfTopDirectoriesOfZooKeeperComponent

Minor

The quota of top directories of ZooKeeper components failed to be configured.

For details, see section "ALM-13005 Failed to Set the Quota of Top Directories of ZooKeeper Components" in MapReduce Service User Guide.

Components may write a large amount of data to the top-level directory of ZooKeeper. As a result, the ZooKeeper service becomes unavailable.