DLI Spark jobs provide fully managed Spark computing services.
Click Create Job in the upper right corner of the Spark Jobs tab on the Overview page, or in the upper right corner of the Spark Jobs page. The Spark job editing page is displayed.
On the Spark job editing page, a message is displayed, indicating that a temporary DLI data bucket will be created. This bucket stores temporary data generated by DLI, such as job logs and job results. If you choose not to create the bucket, you cannot view job logs. If you confirm the message, the bucket is created with the default bucket name.
If you do not need to create a DLI temporary data bucket and do not want to receive this message, select Do not show again and click Cancel.
On the Resources > Queue Management page, locate the queue you have created, click More in the Operation column, and select Test Address Connectivity to check if the network connection between the queue and the data source is normal. For details, see Testing the Network Connectivity Between a Queue and a Data Source.
Click Create Job in the upper right corner. In the job editing window, you can set parameters in Fill Form mode or Write API mode.
The following uses Fill Form mode as an example. For Write API mode, see the Data Lake Insight API Reference for parameter settings.
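In Write API mode, the settings described below are supplied as fields of a request body rather than as console form items. The following Python sketch illustrates that flow; the endpoint URL, project ID, token handling, body field names, bucket, and class names are assumptions for illustration only, so verify them against the Data Lake Insight API Reference before use.

```python
# Minimal sketch of submitting a Spark batch job in Write API mode.
# The endpoint, project ID, token, and body field names are assumptions
# for illustration -- verify them against the Data Lake Insight API Reference.
import json
import urllib.request

ENDPOINT = "https://dli.example-region.example.com"  # hypothetical DLI endpoint
PROJECT_ID = "your-project-id"                       # hypothetical project ID
TOKEN = "your-iam-token"                             # token obtained from IAM

payload = {
    "name": "demo_spark_job",                        # Job Name (--name)
    "queue": "your_queue",                           # queue that runs the job
    "file": "obs://your-bucket/jobs/demo.jar",       # Application package
    "className": "com.example.Main",                 # Main Class (--class)
    "args": ["2024-01-01", "100"],                   # Application Parameters
    "conf": {                                        # Spark Arguments (--conf)
        "spark.executor.memory": "4G",
        "spark.executor.cores": "2",
    },
}

req = urllib.request.Request(
    url=f"{ENDPOINT}/v2.0/{PROJECT_ID}/batches",     # assumed batch-job path
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```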
You are advised not to mix different Spark versions over a long period of time.
Parameter | Description |
|---|---|
Application | Select the package to be executed. The value can be a .jar or .py file. JAR files can be managed in several ways; for Spark 3.3.x or later, you can only select packages in OBS paths. |
Agency | Before using Spark 3.3.1 or later (Spark general queue scenario) to run jobs, you need to create an agency on the IAM console and add the new agency information. For details, see Customizing DLI Agency Permissions. Common scenarios for creating an agency: DLI is allowed to read and write data from and to OBS to transfer logs. DLI is allowed to access DEW to obtain data access credentials and access catalogs to obtain metadata. |
Main Class (--class) | Enter the name of the main class. When the application type is .jar, the main class name cannot be empty. |
Application Parameters | User-defined parameters. Press Enter to separate multiple parameters. These parameters can be replaced with global variables. For example, if you create a global variable batch_num on the Global Configuration > Global Variables page, you can use {{batch_num}} as a parameter; it is replaced with the variable's value after the job is submitted. |
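As an illustration of how Application Parameters reach the program, the following is a minimal PySpark sketch. The input path and processing logic are hypothetical; the only point taken from the table above is that each parameter (including the resolved value of a global variable such as {{batch_num}}) arrives as an ordinary command-line argument.

```python
# Minimal sketch of a .py application that consumes Application Parameters.
# The input path and processing logic are hypothetical; each console
# parameter (including the resolved value of a global variable such as
# {{batch_num}}) arrives as a plain command-line argument.
import sys

from pyspark.sql import SparkSession


def main() -> None:
    # Example Application Parameters: {{batch_num}}  obs://your-bucket/input/
    batch_num = sys.argv[1]    # resolved value of the batch_num global variable
    input_path = sys.argv[2]   # hypothetical OBS input path

    spark = SparkSession.builder.appName("demo_spark_job").getOrCreate()
    rows = spark.read.parquet(input_path).count()
    print(f"batch {batch_num}: {rows} rows read")
    spark.stop()


if __name__ == "__main__":
    main()
```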
Parameter | Description |
|---|---|
Job Name (--name) | Set a job name. |
Spark Arguments (--conf) | Enter a parameter in the format of key=value. Press Enter to separate multiple key-value pairs. These parameters can be replaced with global variables. For example, if you create a global variable custom_class on the Global Configuration > Global Variables page, you can use "spark.sql.catalog"={{custom_class}} to replace a parameter with this variable after the job is submitted. NOTE: If you select 3.3.1 for Spark Version, you can configure compute resource specification parameters in Spark Arguments (--conf). The configuration priority of Spark Arguments (--conf) is higher than that of Resource Specifications (Optional) in Advanced Settings. Table 4 describes the parameter mapping. When configuring compute resource specification parameters in Spark Arguments (--conf), you can use the unit M, G, or K. If no unit is specified, the default unit is byte. |
Access Metadata | Choose whether to enable access to metadata for a Spark job. Set it to Yes if you need to configure the metadata type accessed by the job. DLI metadata is accessed by default. When set to Yes, you also need to set Metadata Source. |
Retry upon Failure | Whether to retry a failed job. If you select Yes, you need to set the following parameters: Maximum Retries: maximum number of retries. The maximum value is 100. |
Datasource | Example Value |
|---|---|
CSS | spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/css/* spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/css/* |
DWS | spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/dws/* spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/dws/* |
HBase | spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/hbase/* spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/hbase/* |
OpenTSDB | spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/opentsdb/* spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/opentsdb/* |
RDS | spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/* spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/* |
Redis | spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/redis/* spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/redis/* |
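As a sketch only, the RDS example values above could be merged into a job's conf settings (in Fill Form mode, the same two key=value lines would be entered in Spark Arguments (--conf)); the keys and classpath values are copied from the table, while the surrounding settings are illustrative.

```python
# Sketch: merging the RDS example values from the table above into a job's
# conf settings. The two extraClassPath entries are copied from the table;
# the other settings are illustrative.
rds_datasource_conf = {
    "spark.driver.extraClassPath":
        "/usr/share/extension/dli/spark-jar/datasource/rds/*",
    "spark.executor.extraClassPath":
        "/usr/share/extension/dli/spark-jar/datasource/rds/*",
}

job_conf = {"spark.executor.memory": "4G"}  # other job settings (illustrative)
job_conf.update(rds_datasource_conf)        # add the datasource classpath entries
print(job_conf)
```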
Console Parameter | Spark Argument(--conf) | Description | Notes and Constraints |
|---|---|---|---|
Executor Memory (complete executor memory = spark.executor.memory + spark.executor.memoryOverhead) | spark.executor.memory | Executor memory, which is configurable. | - |
spark.executor.memoryOverhead | Amount of off-heap memory for each executor in a Spark application. This parameter is not configurable. spark.executor.memoryOverhead=spark.executor.memory * spark.executor.memoryOverheadFactor | The minimum value is 384 MB. That is, when the value of spark.executor.memory multiplied by spark.executor.memoryOverheadFactor is less than 384 MB, the system automatically sets the value to 384 MB. | |
spark.executor.memoryOverheadFactor | This parameter determines the ratio of off-heap memory allocation to on-heap memory allocation. The default value is 0.1 for Spark applications run with the JAR file and 0.4 for those run with Python. This parameter is configurable. | The priority of spark.executor.memoryOverheadFactor is higher than that of spark.kubernetes.memoryOverheadFactor. | |
Executor Cores | spark.executor.cores | Number of executor cores, which is configurable. | - |
Executors | spark.executor.instances | Number of executors, which is configurable. | - |
Driver Cores | spark.driver.cores | Number of driver cores, which is configurable. | - |
Driver Memory (complete driver memory = spark.driver.memory + spark.driver.memoryOverhead) | spark.driver.memory | Driver memory, which is configurable. | - |
spark.driver.memoryOverhead | Amount of off-heap memory for each driver in a Spark application. This parameter is not configurable. spark.driver.memoryOverhead= spark.driver.memory * spark.driver.memoryOverheadFactor | The minimum value is 384 MB. That is, when the value of spark.driver.memory multiplied by spark.driver.memoryOverheadFactor is less than 384 MB, the system automatically sets the value to 384 MB. | |
spark.driver.memoryOverheadFactor | This parameter determines the ratio of off-heap memory allocation to on-heap memory allocation. The default value is 0.1 for Spark applications run with the JAR file and 0.4 for those run with Python. This parameter is configurable. | The priority of spark.driver.memoryOverheadFactor is higher than that of spark.kubernetes.memoryOverheadFactor. | |
- | spark.kubernetes.memoryOverheadFactor | Amount of memory allocated outside the memory assigned to Spark executors. The default value is 0.1 for Spark applications run with the JAR file and 0.4 for those run with Python. This parameter is configurable. | The priority of spark.executor.memoryOverheadFactor and spark.driver.memoryOverheadFactor is higher than that of spark.kubernetes.memoryOverheadFactor. |
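As a worked illustration of the overhead rules in Table 4 (overhead = memory x memoryOverheadFactor, subject to a 384 MB floor), the following sketch computes the complete memory for two illustrative settings.

```python
# Worked example of the overhead rules in Table 4:
#   overhead = memory * memoryOverheadFactor, but never less than 384 MB
#   complete memory = memory + overhead
def complete_memory_mb(memory_mb: float, overhead_factor: float) -> float:
    overhead_mb = max(384.0, memory_mb * overhead_factor)  # 384 MB floor
    return memory_mb + overhead_mb

# JAR-based job with spark.executor.memory=4G and the default factor 0.1:
print(complete_memory_mb(4096, 0.1))  # 4096 + 409.6 = 4505.6 MB
# Smaller executor: 2048 * 0.1 = 204.8 MB < 384 MB, so the floor applies:
print(complete_memory_mb(2048, 0.1))  # 2048 + 384 = 2432 MB
```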
Parameter | Description |
|---|---|
JAR Package Dependencies (--jars) | JAR file on which the Spark job depends. You can enter the JAR file name or the OBS path of the JAR file in the format of obs://Bucket name/Folder path/JAR file name. |
Python File Dependencies (--py-files) | py-files on which the Spark job depends. You can enter the Python file name or the corresponding OBS path of the Python file. The format is as follows: obs://Bucket name/Folder name/File name. |
Other Dependencies (--files) | Other files on which the Spark job depends. You can enter the name of the dependency file or the corresponding OBS path of the dependency file. The format is as follows: obs://Bucket name/Folder name/File name. |
Group Name | If you select a group when creating a package, you can select all the packages and files in the group. For how to create a package, see Creating a DLI Package. Spark 3.3.x or later does not support group information configuration. |
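For illustration only, the dependency settings might look as follows when expressed as the jars, pyFiles, and files fields of an API-mode request body (the field names mentioned under Resource Package below); the bucket and file names are hypothetical, and only the obs://Bucket name/Folder path/File name format comes from the table above.

```python
# Sketch of the dependency settings as API-mode request body fields.
# Bucket and file names are hypothetical; only the OBS path format
# (obs://Bucket name/Folder path/File name) comes from the table above.
dependencies = {
    "jars": ["obs://your-bucket/deps/demo-udf.jar"],      # --jars
    "pyFiles": ["obs://your-bucket/deps/helpers.py"],     # --py-files
    "files": ["obs://your-bucket/conf/app.properties"],   # --files
}
print(dependencies)
```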
The parallelism of a Spark job is determined by the number of executors and the number of CPU cores per executor.
Maximum number of tasks that can be concurrently executed = Number of Executors x Number of Executor CPU cores
You can plan compute resource specifications based on the compute CUs of the queue you have purchased.
Note that Spark tasks are executed by multiple roles, such as the driver and executors. Therefore, the number of executors multiplied by the number of executor CPU cores must be less than the number of compute CUs of the queue; otherwise, the other roles may fail to start. For more information about Spark roles, see Apache Spark.
You can estimate a job's resource usage from its parameters, for example as sketched below.
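The following minimal sketch applies these rules; the queue size and job values are illustrative, and the check counts CPU cores only (memory also consumes queue resources).

```python
# Minimal planning sketch: maximum concurrency = executors x executor cores,
# and the cores requested by all roles should stay below the queue's CUs.
# The queue size and job values are illustrative; this check counts cores only.
queue_cus = 16          # compute CUs of the purchased queue (illustrative)
driver_cores = 1
executors = 3
executor_cores = 4

max_concurrent_tasks = executors * executor_cores            # 12 tasks at most
requested_cores = driver_cores + executors * executor_cores  # 13 cores in total

print(f"maximum concurrent tasks: {max_concurrent_tasks}")
if requested_cores >= queue_cus:
    print("Reduce Executors or Executor Cores so the driver can also start.")
```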
Parameter | Description |
|---|---|
modules | If the Spark version is 3.1.1, you do not need to select a module; configure Spark parameters (--conf) instead. Dependency modules provided by DLI for executing datasource connection jobs. To access different services, you need to select the corresponding modules. |
Resource Package | JAR package on which the Spark job depends. Spark 3.3.x or later does not support this parameter. Configure resource package information in jars, pyFiles, and files. |
Parameter | Description |
|---|---|
Resource Specifications | Select a resource specification from the drop-down list box. The system provides three resource specification options for you to choose from. Resource specifications involve the following parameters: Executor Memory, Executor Cores, Executors, Driver Cores, and Driver Memory. If you modify any of these items, your modified settings are used. |
Executor Memory | Memory of each Executor, which you can customize based on the selected resource specifications. It is recommended that the ratio of Executor CPU cores to Executor memory be 1:4. |
Executor Cores | Number of CPU cores of each Executor applied for by Spark jobs, which determines the capability of each Executor to execute tasks concurrently. |
Executors | Number of Executors applied for by a Spark job. |
Driver Cores | Number of CPU cores of the driver. |
Driver Memory | Driver memory size. It is recommended that the ratio of the number of driver CPU cores to the driver memory be 1:4. |
Table 4 describes the parameter mapping.
When configuring compute resource specification parameters in Spark Argument(--conf), you can use the unit M, G, or K. If the unit is not specified, the default unit is byte.
If the compute resource specification is set too high, beyond the resource allocation capacity of the cluster or project, the job may fail to run due to resource request failures.
Parameter | Value Range After Modification (Elastic Resource Pool of Standard Edition) | Value Range After Modification (Elastic Resource Pool of Basic Edition) |
|---|---|---|
Executor Memory | 450 MB to 64 GB | 450 MB to 16 GB |
Executor Cores | 0 to 16 | 0 to 4 |
Executors | Unlimited | Unlimited |
Driver Cores | 0 to 16 | 0 to 4 |
Driver Memory | 450 MB to 64 GB | 450 MB to 16 GB |
Job CU Quota | Unlimited | Unlimited |
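To tie the recommendation and value ranges together, the following sketch checks an executor specification against the 1:4 core-to-memory recommendation and the standard edition ranges above; the job values passed in are illustrative.

```python
# Sketch: check an executor specification against the 1:4 core-to-memory
# recommendation and the standard edition ranges in the table above.
# The job values passed in are illustrative.
def check_executor_spec(cores: int, memory_gb: float) -> list[str]:
    issues = []
    if not (0 < cores <= 16):          # Executor Cores: 0 to 16 (standard edition)
        issues.append("executor cores out of the standard edition range")
    if not (0.45 <= memory_gb <= 64):  # Executor Memory: 450 MB to 64 GB
        issues.append("executor memory out of the standard edition range")
    if memory_gb < cores * 4:          # recommended CPU-to-memory ratio of 1:4
        issues.append("memory is below the recommended 4 GB per core")
    return issues

print(check_executor_spec(cores=4, memory_gb=16))  # [] -> meets the recommendation
print(check_executor_spec(cores=8, memory_gb=16))  # ratio warning
```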
After the message "Batch processing job submitted successfully" is displayed, you can view the status and logs of the submitted job on the Spark Jobs page.
During the Spark job submission process, if the job cannot acquire resources for an extended period, the job status changes to Failed after approximately 3 hours of waiting, indicating that the session has exited. For details about Spark job statuses, see Viewing Basic Information.