Published at: 2025-10-30

Object Import Rules


Function Description

When importing data, you can run APL code before the import to perform preprocessing tasks. For example:

Scenario 1: When importing orders, you may need to calculate the sum of all line item amounts and assign the total to the “Total Amount” field in the order header.

Scenario 2: During import preprocessing, you may want to process data in batches (e.g., 20 records per batch) to prevent a single error from rolling back the entire import.
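As a rough illustration of Scenario 1, preprocessing code could sum the line item amounts and write the total onto the order header. This is only a hedged sketch, not code from the product documentation: the location of the detail list (order_lines) and the field API names (field_amount__c, field_total_amount__c) are hypothetical placeholders; only the context.data object shown in the examples later in this article is assumed.

```
// Sketch: total up line item amounts and assign the result to the order header
// (order_lines, field_amount__c and field_total_amount__c are hypothetical names)
def lines = context.data.order_lines as List
def total = 0

lines.each { line ->
    def amount = line.field_amount__c as BigDecimal
    if (amount != null) {
        total += amount
    }
}

// Assign the computed total to the "Total Amount" field on the order header
context.data.field_total_amount__c = total
```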

Function Overview

Navigation Path: [Admin Console] → [Object Management] → [Custom Object Management] → [Import Settings]

picture coming soon:

1. Import Data Preprocessing

There are two preprocessing methods for imports: Pre-validation and Preprocessing.

picture coming soon:

1.1 Pre-validation

  • The pre-validation function is similar to the validation rules that run when a record is created or edited. It can display validation messages and block the import when validation fails.
  • If the data import fails or a validation error occurs, the error message is written to the Excel failure list.
  • The return value type must be ValidateResult.

Pre-validation APL Code Example:

```
// Assign values to imported data
context.data.owner = ["1000"]

// Validation logic
ValidateResult validate = ValidateResult.builder()
    .success(false)                  // Validation success flag
    .errorMessage("Error message")   // Error prompt when validation fails
    .build()
return validate
```

1.2 Preprocessing

  • Import preprocessing APL code executes before the pre-validation APL code.
  • Complex validation logic can be computed in the preprocessing APL code and the results stored in the cache.
  • During pre-validation execution, the system can retrieve the cached results for validation (see the retrieval sketch after the example below).

Preprocessing APL Code Example:

```
def taskId = context.task.taskId as String
log.info(context.task.taskId)      // Get import task ID
log.info(context.task.lastBatch)   // Check whether this is the final batch

// Import preprocessing receives data in batches (20 records per batch)
List<Map> dataList = context.dataList as List

// Cache information for the pre-validation function
Cache cache = Fx.cache.defaultCache
dataList.each { data ->
    def rowNo = data.RowNo as String
    def name = data.field_MG1ch__c as String
    def key = taskId + "" + rowNo
    log.info(key)
    def value = "" + name
    cache.put(key, value, 30)
}

return ValidateResult.builder()
    .success(false)          // Returning false terminates the import
    .errorMessage("test")
    .build()
```
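For reference, the pre-validation side could read the cached values back roughly as follows. This is a minimal sketch rather than code from the product documentation: it assumes the object returned by Fx.cache.defaultCache exposes a get(key) method, that the row number is available on context.data.RowNo during pre-validation, and it reuses the taskId + RowNo key convention from the example above.

```
// Pre-validation sketch: read back the value cached by the preprocessing code
def taskId = context.task.taskId as String
def rowNo = context.data.RowNo as String   // Assumption: row number is exposed on context.data
def key = taskId + "" + rowNo              // Same key convention as the preprocessing example

Cache cache = Fx.cache.defaultCache
def cachedName = cache.get(key)            // Assumption: the cache exposes get(key)

// Validate against the cached result
return ValidateResult.builder()
    .success(cachedName != null)           // Fail the row if no cached value was found
    .errorMessage("No cached value found for row " + rowNo)
    .build()
```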

2. Processing Timing

There are two processing timings for imports: during new record import and during record update. Both timings support adding preprocessing and pre-validation APL code.

picture coming soon:

These two import timings correspond to the frontend import methods: Adding new data and updating existing data.

When using these import methods in the frontend, the corresponding configured APL code will execute.

picture coming soon:

Each processing timing allows only one preprocessing APL script and one pre-validation APL script, so combine all of your processing logic into a single piece of code, as sketched below.
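For example, several independent checks can be folded into one pre-validation script that returns a single ValidateResult. This is a minimal sketch, assuming hypothetical field API names (field_amount__c, field_owner__c); only the context.data and ValidateResult APIs shown earlier in this article are used.

```
// Combined pre-validation sketch: run every check, collect the failures, return one result
def errors = []

// Check 1 (hypothetical field): amount must be positive
def amount = context.data.field_amount__c as BigDecimal
if (amount == null || amount <= 0) {
    errors.add("Amount must be greater than zero")
}

// Check 2 (hypothetical field): owner must be filled in
def owner = context.data.field_owner__c
if (owner == null) {
    errors.add("Owner is required")
}

// Return a single ValidateResult covering all checks
return ValidateResult.builder()
    .success(errors.isEmpty())
    .errorMessage(errors.join("; "))
    .build()
```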

picture coming soon:

3. Import Control Methods

Both processing timings support “Import Control Methods”, with different control scopes:

New Record Import:
  • [Trigger Workflows and Pipeline]
  • [Trigger Approval Processes]

Record Update Import:
  • [Trigger Workflows]

picture coming soon:

These settings in the Admin Console are tenant-level controls. Once they are configured here, the corresponding options in the frontend import dialog are synchronized with these settings and become disabled, so they can no longer be changed from the frontend.

picture coming soon:
