
MapReduce Service

  • Using CarbonData
    • Spark CarbonData Overview
    • Configuration Reference
    • CarbonData Operation Guide
    • CarbonData Performance Tuning
    • CarbonData Access Control
    • CarbonData Syntax Reference
    • CarbonData Troubleshooting
    • CarbonData FAQ
  • Using CDL
  • Using ClickHouse
  • Using DBService
  • Using Flink
  • Using Flume
  • Using HBase
  • Using HDFS
  • Using HetuEngine
  • Using Hive
  • Using Hudi
  • Using Hue
  • Using IoTDB
  • Using Kafka
  • Using Loader
  • Using MapReduce
  • Using Oozie
  • Using Ranger
  • Using Spark2x
  • Using Tez
  • Using Yarn
  • Using ZooKeeper
  • Appendix
  • Change History

CarbonData FAQ

  • Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values?

  • How Do I Avoid Minor Compaction for Historical Data?

  • How Do I Change the Default Group Name for CarbonData Data Loading?

  • Why Does the INSERT INTO CARBON TABLE Command Fail?

  • Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters?

  • Why Does Data Load Performance Decrease Due to Bad Records?

  • Why Is INSERT INTO/LOAD DATA Task Distribution Incorrect, with Fewer Opened Tasks Than Available Executors, When the Number of Initial Executors Is Zero?

  • Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed?

  • Why Does Data Loading Fail When Off-Heap Memory Is Used?

  • Why Do I Fail to Create a Hive Table?

  • Why Do CarbonData Tables Created in V100R002C50RC1 Not Reflect the Hive Privileges Granted to Non-Owners?

  • How Do I Logically Split Data Across Different Namespaces?

  • Why Is a Missing Privileges Exception Reported When I Drop a Database?

  • Why Can't the UPDATE Command Be Executed in Spark Shell?

  • How Do I Configure Unsafe Memory in CarbonData?

  • Why Does an Exception Occur in CarbonData When a Disk Space Quota Is Set for the HDFS Storage Directory?

  • Why Does Data Query or Loading Fail and "org.apache.carbondata.core.memory.MemoryException: Not enough memory" Is Displayed?
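Several of the entries above (unsafe memory configuration, off-heap data-loading failures, and the "Not enough memory" MemoryException) revolve around CarbonData's unsafe (off-heap) memory settings. As a hedged sketch based on Apache CarbonData's configuration reference — the exact values shown are illustrative, not taken from this page — these are typically tuned in `carbon.properties`:

```properties
# Use unsafe (off-heap) memory for sorting during data loading
enable.unsafe.sort=true
enable.offheap.sort=true
# Working memory available to unsafe operations, in MB; raising this
# is a common response to "Not enough memory" MemoryExceptions
carbon.unsafe.working.memory.in.mb=512
```

The FAQ pages linked above cover when and how each property applies in detail.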

last updated: 2025-07-09 15:07 UTC - commit: cb943fa3145d5c3e150bb4fa1a987d24c3077fe9
© T-Systems International GmbH