
MapReduce Service


Flink Log Overview¶

Log Description¶

Log path:

  • Run logs of a Flink job: ${BIGDATA_DATA_HOME}/hadoop/data${i}/nm/containerlogs/application_${appid}/container_${contid}

    Note

    Logs of running tasks are stored in the preceding path. After a task finishes, whether its logs are aggregated to an HDFS directory is determined by the Yarn log aggregation configuration.

  • FlinkResource run logs: /var/log/Bigdata/flink/flinkResource

  • FlinkServer run logs: /var/log/Bigdata/flink/flinkserver

  • FlinkServer audit logs: /var/log/Bigdata/audit/flink/flinkserver

Log archive rules:

  1. FlinkResource run logs:

    • By default, a service log file is rolled over each time it reaches 20 MB, and a maximum of 20 log files are retained. The retained files are not compressed.

    • You can change the log file size and the number of compressed logs to retain on the Manager page, or modify the corresponding configuration items in log4j-cli.properties, log4j.properties, and log4j-session.properties in /opt/client/Flink/flink/conf/ on the client, where /opt/client is the client installation directory.
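    The rolling behavior described above corresponds to standard log4j 1.x rolling-appender settings. The fragment below is only an illustrative sketch of such settings; the appender name file is a placeholder, not the actual appender name used in the product's configuration files.

    ```properties
    # Illustrative log4j 1.x rolling configuration matching the defaults above.
    # The appender name "file" is a placeholder.
    log4j.appender.file=org.apache.log4j.RollingFileAppender
    log4j.appender.file.MaxFileSize=20MB
    log4j.appender.file.MaxBackupIndex=20
    ```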

    Table 1 FlinkResource log list¶

    FlinkResource run logs:

      • checkService.log: Health check log
      • kinit.log: Initialization log
      • postinstall.log: Service installation log
      • prestart.log: Prestart script log
      • start.log: Startup log

  2. FlinkServer service logs and audit logs

    • By default, FlinkServer service logs and audit logs are rolled over each time the log size reaches 100 MB. Service logs are retained for a maximum of 30 days, and audit logs for a maximum of 90 days.

    • You can change the log file size and the number of compressed logs to retain on the Manager page, or modify the corresponding configuration items in log4j-cli.properties, log4j.properties, and log4j-session.properties in /opt/client/Flink/flink/conf/ on the client, where /opt/client is the client installation directory.

    Table 2 FlinkServer log list¶

    FlinkServer run logs:

      • checkService.log: Health check log
      • cleanup.log: Cleanup log for instance installation and uninstallation
      • flink-omm-client-IP.log: Job startup log
      • flinkserver_yyyymmdd-x.log.gz: Service archive log
      • flinkserver.log: Service log
      • flinkserver---pidxxxx-gc.log.x.current: GC log
      • kinit.log: Initialization log
      • postinstall.log: Service installation log
      • prestart.log: Prestart script log
      • start.log: Startup log
      • stop.log: Stop log

    FlinkServer audit logs:

      • flinkserver_audit_yyyymmdd-x.log.gz: Audit archive log
      • flinkserver_audit.log: Audit log

Log Level¶

Table 3 describes the log levels supported by Flink. In descending order of severity, the levels are ERROR, WARN, INFO, and DEBUG. Only logs at or above the configured level are printed, so configuring a more severe level produces fewer log entries.

Table 3 Log levels¶

  • ERROR: Error information about the current event processing
  • WARN: Exception information about the current event processing
  • INFO: Normal running status information about the system and events
  • DEBUG: System information and system debugging information

To modify log levels, perform the following steps:

  1. Go to the All Configurations page of Flink by referring to Modifying Cluster Service Configuration Parameters.

  2. On the menu bar on the left, select the log menu of the target role.

  3. Select a desired log level.

  4. Save the configuration. In the displayed dialog box, click OK to make the configurations take effect.

Note

  • After the configuration is complete, you do not need to restart the service. Download the client again for the configuration to take effect.

  • You can also change the configuration items corresponding to the log level in log4j-cli.properties, log4j.properties, and log4j-session.properties in /opt/client/Flink/flink/conf/ on the client. /opt/client is the client installation directory.

  • When a job is submitted using a client, a log file is generated in the log folder on the client. The default umask value is 0022, so log files are created with permission 644 by default. To change the file permission, change the umask value. For example, to change the umask of user omm:

    • Add umask 0026 to the end of the /home/omm/.bashrc file.

    • Run the source /home/omm/.bashrc command for the new umask to take effect.
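The relationship between the umask and the resulting file permission follows the standard rule mode = default_mode & ~umask, with files created at a default mode of 0666. A small sketch of that arithmetic:

```python
# Permission bits of a newly created file: the umask clears bits
# from the default creation mode (0o666 for regular files).
def resulting_mode(default_mode: int, umask: int) -> int:
    return default_mode & ~umask

print(oct(resulting_mode(0o666, 0o022)))  # 0o644: rw-r--r--
print(oct(resulting_mode(0o666, 0o026)))  # 0o640: rw-r-----
```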

Log Format¶

Table 4 Log formats¶

Run log format:

  <yyyy-MM-dd HH:mm:ss,SSS>|<Log level>|<Name of the thread that generates the log>|<Message in the log>|<Location where the log event occurs>

Example:

  2019-06-27 21:30:31,778 | INFO | [flink-akka.actor.default-dispatcher-3] | TaskManager container_e10_1498290698388_0004_02_000007 has started. | org.apache.flink.yarn.YarnFlinkResourceManager (FlinkResourceManager.java:368)
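A run log line in this format can be split into its fields with a regular expression. The sketch below is illustrative; the field names (timestamp, level, thread, message, location) are labels chosen here, not names defined by the product.

```python
import re

# Pipe-separated run log format from Table 4; the message field is
# matched lazily so the final "|" before the location acts as the split.
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s*\|\s*"
    r"(?P<level>[A-Z]+)\s*\|\s*"
    r"\[(?P<thread>[^\]]+)\]\s*\|\s*"
    r"(?P<message>.*?)\s*\|\s*"
    r"(?P<location>\S+ \([^)]+\))$"
)

line = ("2019-06-27 21:30:31,778 | INFO | [flink-akka.actor.default-dispatcher-3] | "
        "TaskManager container_e10_1498290698388_0004_02_000007 has started. | "
        "org.apache.flink.yarn.YarnFlinkResourceManager (FlinkResourceManager.java:368)")

m = LOG_PATTERN.match(line)
print(m.group("level"))   # INFO
print(m.group("thread"))  # flink-akka.actor.default-dispatcher-3
```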

last updated: 2025-07-09 15:07 UTC - commit: cb943fa3145d5c3e150bb4fa1a987d24c3077fe9
© T-Systems International GmbH