Link to HDFS

CDM supports the following HDFS data sources:

  • MRS HDFS

  • FusionInsight HDFS

  • Apache HDFS

    Note

    Do not change the password or username while a job is running. If you do, the new password does not take effect immediately and the job fails.

MRS HDFS

When connecting CDM to HDFS of MRS, configure the parameters as described in Table 1.

Note

  • Before creating an MRS link, add a Kerberos-authenticated user on MRS and log in to the MRS management page to change that user's initial password. Then use the new user to create the MRS link.

  • To connect to an MRS 2.x cluster, create a CDM cluster of version 2.x first. CDM 1.8.x clusters cannot connect to MRS 2.x clusters.

  • If the connection fails after you select a cluster, check whether the MRS cluster can communicate with the CDM instance, which functions as the agent (a simple TCP reachability probe is sketched after these notes). They can communicate with each other in the following scenarios:

    • If the CDM cluster in the DataArts Studio instance and the MRS cluster are in different regions, a public network or a dedicated connection is required. If the Internet is used for communication, ensure that an EIP has been bound to the CDM cluster, that the MRS cluster can access the Internet, and that the required port has been opened in the firewall rules.

    • If the CDM cluster in the DataArts Studio instance and the cloud service are in the same region, VPC, subnet, and security group, they can communicate with each other by default. If they are in the same VPC but in different subnets or security groups, you must configure routing rules and security group rules. For details about how to configure routing rules, see "Adding a Custom Route" in the Virtual Private Cloud (VPC) Usage Guide. For details about how to configure security group rules, see "Security Group" > "Adding a Security Group Rule" in the Virtual Private Cloud (VPC) Usage Guide.

    • The MRS cluster and the DataArts Studio workspace must belong to the same enterprise project. If they do not, you can change the enterprise project of the workspace.
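
To verify reachability before creating the link, you can run a minimal TCP probe from a host in the same VPC/subnet as the CDM cluster (or from the agent host). The sketch below is illustrative only; the IP address and port are placeholders, and the port you need to test depends on your deployment (for example, the MRS Manager port or the HDFS NameNode RPC port).

    # Minimal TCP reachability probe (illustrative only).
    # MANAGER_IP and PORT are placeholders; use the MRS Manager floating IP and
    # the port your deployment actually needs to reach.
    import socket

    MANAGER_IP = "192.168.0.10"   # placeholder: MRS Manager floating IP
    PORT = 28443                  # placeholder: adjust to your cluster

    try:
        with socket.create_connection((MANAGER_IP, PORT), timeout=5):
            print(f"TCP connection to {MANAGER_IP}:{PORT} succeeded")
    except OSError as err:
        print(f"TCP connection to {MANAGER_IP}:{PORT} failed: {err}")

If the probe fails, revisit the EIP, firewall, routing, and security group requirements listed in the notes above.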

Table 1 MRS HDFS link parameters

  • Name
    Link name. Define it based on the data source type so that it is easy to tell what the link is for.
    Example value: mrs_hdfs_link

  • Manager IP
    Floating IP address of MRS Manager. Click Select next to the Manager IP text box to select an MRS cluster. CDM automatically fills in the authentication information.
    Note: DataArts Studio supports only MRS clusters whose Kerberos encryption type is aes256-sha1,aes128-sha1. Clusters whose Kerberos encryption type is aes256-sha2,aes128-sha2 are not supported.
    Example value: 127.0.0.1

  • Username
    If Authentication Method is set to KERBEROS, you must provide the username and password used for logging in to MRS Manager. If you need to create a snapshot when exporting a directory from HDFS, the user configured here must have the administrator permission on HDFS.
    To create a data connection for an MRS security cluster, do not use user admin. The admin user is the default management page user and cannot be used as the authentication user of a security cluster. Instead, create an MRS user and set Username and Password to that user's credentials when creating the MRS data connection.
    Note:
    • If the CDM cluster version is 2.9.0 or later and the MRS cluster version is 3.1.0 or later, the created user must have the permissions of the Manager_viewer role to create links on CDM. To perform operations on databases, tables, and columns of an MRS component, the user also needs the database, table, and column permissions of that component, added by following the instructions in the MRS documentation.
    • If the CDM cluster version is earlier than 2.9.0 or the MRS cluster version is earlier than 3.1.0, the created user must have the Manager_administrator or System_administrator permissions to create links on CDM.
    • A user with only the Manager_tenant or Manager_auditor permission cannot create connections.
    Example value: cdm

  • Password
    Password used for logging in to MRS Manager.
    Example value: -

  • Authentication Method
    Authentication method used for accessing MRS:
    • SIMPLE: Select this for non-security mode.
    • KERBEROS: Select this for security mode.
    Example value: SIMPLE

  • Run Mode
    Run mode of the HDFS link. The options are as follows:
    • EMBEDDED: The link instance runs with CDM. This mode delivers better performance.
    • STANDALONE: The link instance runs in an independent process. If CDM needs to connect to multiple Hadoop data sources (MRS, Hadoop, or CloudTable) with both Kerberos and Simple authentication modes, select STANDALONE or configure different agents.
      Note: The STANDALONE mode is used to solve version conflicts. If the connector versions of the source and destination ends of the same link differ, a JAR file conflict occurs. In this case, run the source or destination end in the STANDALONE process to prevent the migration failure caused by the conflict.
    • Agent: The link instance runs on an agent.
    If Agent is not used, and the CDM cluster connects to two or more Kerberos-enabled clusters in the same realm, only one of those clusters can be connected in EMBEDDED mode; the others must use STANDALONE mode.
    Example value: STANDALONE

  • Agent
    Click Select and select the agent created in Connecting to an Agent. This parameter is displayed only when Run Mode is set to Agent.
    Example value: -

  • Use Cluster Config
    You can use a cluster configuration to simplify parameter settings for the Hadoop connection.
    Example value: No

  • Cluster Config Name
    This parameter is valid only when Use Cluster Config is set to Yes. Select a cluster configuration that has been created.
    For details about how to configure a cluster, see "DataArts Migration" > "Managing Links" > "Managing Cluster Configurations" in the User Guide.
    Example value: hdfs_01

Click Show Advanced Attributes and then click Add to add configuration attributes of other clients. Configure a name and a value for each attribute. You can click Delete to remove attributes that are no longer needed.
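
If you prefer to create the link programmatically rather than through the console, CDM also exposes a link-creation REST API. The following Python sketch shows roughly how the Table 1 values could be assembled into such a request. It is a sketch under assumptions: the connector name and the linkConfig.* key names are not specified on this page and are taken as placeholders from the general CDM API pattern, so verify them against the CDM API reference before use.

    # Illustrative sketch only: create an MRS HDFS link through the CDM REST API.
    # The connector name and linkConfig.* keys are assumptions; check the CDM API
    # reference for the exact names. All credentials and IDs are placeholders.
    import json
    import urllib.request

    ENDPOINT = "https://cdm.example.com"   # placeholder CDM endpoint
    PROJECT_ID = "your-project-id"         # placeholder
    CLUSTER_ID = "your-cdm-cluster-id"     # placeholder
    TOKEN = "your-iam-token"               # placeholder IAM token

    payload = {
        "links": [{
            "name": "mrs_hdfs_link",
            "connector-name": "hdfs-connector",  # assumed connector name
            "link-config-values": {
                "configs": [{
                    "name": "linkConfig",
                    "inputs": [
                        {"name": "linkConfig.hadoopType", "value": "MRS"},     # assumed key
                        {"name": "linkConfig.host", "value": "127.0.0.1"},     # Manager IP
                        {"name": "linkConfig.user", "value": "cdm"},           # MRS Manager user
                        {"name": "linkConfig.password", "value": "********"},
                        {"name": "linkConfig.authType", "value": "KERBEROS"},  # SIMPLE or KERBEROS
                        {"name": "linkConfig.runMode", "value": "STANDALONE"},
                    ],
                }],
            },
        }]
    }

    req = urllib.request.Request(
        f"{ENDPOINT}/v1.1/{PROJECT_ID}/clusters/{CLUSTER_ID}/cdm/link",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode("utf-8"))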

FusionInsight HDFS

When connecting CDM to HDFS of FusionInsight HD, configure the parameters as described in Table 2.

Table 2 FusionInsight HDFS link parameters

  • Name
    Link name. Define it based on the data source type so that it is easy to tell what the link is for.
    Example value: FI_hdfs_link

  • Manager IP
    IP address of FusionInsight Manager.
    Example value: 127.0.0.1

  • Manager Port
    Port number of FusionInsight Manager.
    Example value: 28443

  • CAS Server Port
    Port number of the CAS server used to connect to FusionInsight.
    Example value: 20009

  • Username
    Username used for logging in to FusionInsight Manager.
    If you need to create a snapshot when exporting a directory from HDFS, the user configured here must have the administrator permission on HDFS.
    Example value: cdm

  • Password
    Password used for logging in to FusionInsight Manager.
    Example value: -

  • Authentication Method
    Authentication method used for accessing the cluster:
    • SIMPLE: Select this for non-security mode.
    • KERBEROS: Select this for security mode.
    Example value: KERBEROS

  • Run Mode
    Run mode of the HDFS link. The options are as follows:
    • EMBEDDED: The link instance runs with CDM. This mode delivers better performance.
    • STANDALONE: The link instance runs in an independent process. If CDM needs to connect to multiple Hadoop data sources (MRS, Hadoop, or CloudTable) with both Kerberos and Simple authentication modes, select STANDALONE or configure different agents.
      Note: The STANDALONE mode is used to solve version conflicts. If the connector versions of the source and destination ends of the same link differ, a JAR file conflict occurs. In this case, run the source or destination end in the STANDALONE process to prevent the migration failure caused by the conflict.
    • Agent: The link instance runs on an agent.
    Example value: STANDALONE

  • Agent
    Click Select and select the agent created in Connecting to an Agent. This parameter is displayed only when Run Mode is set to Agent.
    Example value: -

  • Use Cluster Config
    You can use a cluster configuration to simplify parameter settings for the Hadoop connection.
    Example value: No

  • Cluster Config Name
    This parameter is valid only when Use Cluster Config is set to Yes. Select a cluster configuration that has been created.
    For details about how to configure a cluster, see "DataArts Migration" > "Managing Links" > "Managing Cluster Configurations" in the User Guide.
    Example value: hdfs_01

Click Show Advanced Attributes and then click Add to add configuration attributes of other clients. Configure a name and a value for each attribute. You can click Delete to remove attributes that are no longer needed.

Apache HDFS

When connecting CDM to HDFS of Apache Hadoop, configure the parameters as described in Table 3.

Table 3 Apache HDFS link parameters

  • Name
    Link name. Define it based on the data source type so that it is easy to tell what the link is for.
    Example value: hadoop_hdfs_link

  • URI
    NameNode URI. Enter hdfs://<IP address of the NameNode instance>:8020.
    Example value: hdfs://IP:8020

  • Authentication Method
    Authentication method used for accessing the cluster:
    • SIMPLE: Select this for non-security mode.
    • KERBEROS: Select this for security mode.
    Example value: KERBEROS

  • Run Mode
    Run mode of the HDFS link. The options are as follows:
    • EMBEDDED: The link instance runs with CDM. This mode delivers better performance.
    • STANDALONE: The link instance runs in an independent process. If CDM needs to connect to multiple Hadoop data sources (MRS, Hadoop, or CloudTable) with both Kerberos and Simple authentication modes, select STANDALONE or configure different agents.
      Note: The STANDALONE mode is used to solve version conflicts. If the connector versions of the source and destination ends of the same link differ, a JAR file conflict occurs. In this case, run the source or destination end in the STANDALONE process to prevent the migration failure caused by the conflict.
    • Agent: The link instance runs on an agent. For Apache HDFS, you can select Agent only if Authentication Method is set to SIMPLE.
    Example value: STANDALONE

  • IP and Host Name Mapping
    This parameter is used only when Run Mode is set to EMBEDDED or STANDALONE.
    If the HDFS configuration file uses host names, configure the mapping between IP addresses and host names. Separate an IP address from its host name with a space, and separate mappings with semicolons (;), carriage returns, or line feeds (see the parsing sketch after this table).
    Example value:
    10.1.6.9 hostname01
    10.2.7.9 hostname02

  • Agent
    This parameter is required when Authentication Method is set to SIMPLE and Run Mode is set to Agent. Select the agent created in Connecting to an Agent.
    Example value: -

  • Use Cluster Config
    You can use a cluster configuration to simplify parameter settings for the Hadoop connection.
    Example value: No

  • Cluster Config Name
    This parameter is valid when Use Cluster Config is set to Yes or Authentication Method is set to KERBEROS. Select a cluster configuration that has been created.
    For details about how to configure a cluster, see "DataArts Migration" > "Managing Links" > "Managing Cluster Configurations" in the User Guide.
    Example value: hdfs_01
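
As a small aid for the IP and Host Name Mapping field in Table 3, the following sketch shows a hypothetical helper (not part of CDM) that parses a mapping string in the documented format and flags obviously malformed entries before you paste the value into the link configuration.

    # Hypothetical helper: parses an "IP and Host Name Mapping" string in the
    # format described above ("IP hostname" pairs separated by semicolons or
    # line breaks) and reports malformed entries.
    import ipaddress
    import re

    def parse_host_mapping(text: str) -> dict:
        mapping = {}
        for entry in re.split(r"[;\r\n]+", text.strip()):
            entry = entry.strip()
            if not entry:
                continue
            parts = entry.split()
            if len(parts) != 2:
                raise ValueError(f"Malformed entry (expected 'IP hostname'): {entry!r}")
            ip, hostname = parts
            ipaddress.ip_address(ip)  # raises ValueError if the IP address is invalid
            mapping[hostname] = ip
        return mapping

    print(parse_host_mapping("10.1.6.9 hostname01;10.2.7.9 hostname02"))
    # -> {'hostname01': '10.1.6.9', 'hostname02': '10.2.7.9'}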
