Creating a Spark Job
DLI Spark jobs provide fully managed Spark computing services.
On the Overview page, click Create Job in the upper right corner of the Spark Jobs tab or click Create Job in the upper right corner of the Spark Jobs page. The Spark job editing page is displayed.
On the Spark job editing page, a message is displayed, indicating that a temporary DLI data bucket will be created. The bucket is used to store temporary data generated by DLI, such as job logs and job results. You cannot view job logs if you choose not to create the bucket. If you proceed, the bucket is created with the default bucket name.
If you do not need to create a DLI temporary data bucket and do not want to receive this message, select Do not show again and click Cancel.
Prerequisites
- You have uploaded the dependencies to the corresponding OBS bucket on the Data Management > Package Management page.
- Before creating a Spark job to access other external data sources, such as OpenTSDB, HBase, Kafka, GaussDB(DWS), RDS, CSS, CloudTable, DCS Redis, and DDS, you need to create a datasource connection to enable the network between the job running queue and external data sources.
- For details about the external data sources that can be accessed by Spark jobs, see Common Development Methods for DLI Cross-Source Analysis.
- For how to create a datasource connection, see Configuring the Network Connection Between DLI and Data Sources (Enhanced Datasource Connection).
On the Resources > Queue Management page, locate the queue you have created, click More in the Operation column, and select Test Address Connectivity to check if the network connection between the queue and the data source is normal. For details, see Testing Address Connectivity.
Procedure
- In the left navigation pane of the DLI management console, choose Job Management > Spark Jobs. The Spark Jobs page is displayed.
Click Create Job in the upper right corner. In the job editing window, you can set parameters in Fill Form mode or Write API mode.
The following uses Fill Form mode as an example. For parameter settings in Write API mode, refer to the Data Lake Insight API Reference.
- Select a queue.
- Queues: Select a queue from the drop-down list.
- Spark Version: Select a Spark version from the drop-down list. The latest version is recommended.
NOTE:
You are advised not to run jobs with different Spark versions over a long period of time.
- Doing so can lead to code incompatibility, which can negatively impact job execution efficiency.
- Doing so may result in job execution failures due to dependency conflicts, as jobs rely on specific versions of libraries or components.
- Configure the application.
Table 1 Application configuration parameters
Parameter
Description
Application
Select the package to be executed. The value can be .jar or .py.
You can manage JAR files in the following ways:
- Upload packages to OBS: Upload JAR files to an OBS bucket in advance and select the corresponding OBS path.
- Upload packages to DLI: Upload JAR files to an OBS bucket in advance and create a package on the Data Management > Package Management page of the DLI management console. For details, see Creating a DLI Package.
For Spark 3.3.x or later, you can only select packages in OBS paths.
Agency
Before using Spark 3.3.1 or later (Spark general queue scenario) to run jobs, you need to create an agency on the IAM console and add the new agency information. For details, see Customizing DLI Agency Permissions.
Common scenarios for creating an agency: allowing DLI to read data from and write data (such as logs) to OBS, to access DEW to obtain data access credentials, and to access catalogs to obtain metadata.
Main Class (--class)
Enter the name of the main class. When the application type is .jar, the main class name cannot be empty.
Application Parameters
User-defined parameters. Press Enter to separate multiple parameters.
These parameters can be replaced with global variables. For example, if you create a global variable batch_num on the Global Configuration > Global Variables page, you can use {{batch_num}} to replace a parameter with this variable after the job is submitted.
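For example, a hypothetical application configuration might look as follows (the bucket, package, and class names below are placeholders, and batch_num is assumed to already exist as a global variable):
Application: obs://your-bucket/jars/sales-etl-1.0.jar
Main Class (--class): com.example.SalesEtl
Application Parameters:
2024-01-01
{{batch_num}}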
- Configure the job.
Table 2 Job configuration parameters
Parameter
Description
Job Name (--name)
Set a job name.
Spark Arguments(--conf)
Enter a parameter in the format of key=value. Press Enter to separate multiple key-value pairs.
These parameters can be replaced with global variables. For example, if you create a global variable custom_class on the Global Configuration > Global Variables page, you can use "spark.sql.catalog"={{custom_class}} to replace a parameter with this variable after the job is submitted.
NOTE:
- The JVM garbage collection algorithm cannot be customized for Spark jobs.
- If the Spark version is 3.1.1, configure Spark parameters (--conf) to select a dependency module. For details about the example configuration, see Table 3.
If you select 3.3.1 for Spark Version, you can configure compute resource specification parameters in Spark Argument(--conf). Note that the configuration priority of Spark Argument(--conf) is higher than that of Resource Specifications(Optional) in Advanced Settings.
Table 4 describes the parameter mapping.
NOTE: When configuring compute resource specification parameters in Spark Argument(--conf), you can use the unit M, G, or K. If the unit is not specified, the default unit is byte.
Access Metadata
Whether to access metadata through Spark jobs.
Retry upon Failure
Indicates whether to retry a failed job.
If you select Yes, you need to set the following parameters:
Maximum Retries: Maximum number of retries. The maximum value is 100.
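For reference, each Spark Arguments (--conf) entry is a key=value pair on its own line. A minimal sketch for a hypothetical Spark 3.3.1 job (the values are illustrative only, not recommendations):
spark.executor.memory=4G
spark.executor.cores=2
spark.executor.instances=6
spark.driver.cores=2
spark.driver.memory=4G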
Table 3 Spark Parameter (--conf) configuration
Datasource
Example Value
CSS
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/css/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/css/*
GaussDB(DWS)
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/dws/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/dws/*
HBase
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/hbase/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/hbase/*
OpenTSDB
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/opentsdb/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/opentsdb/*
RDS
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/rds/*
Redis
spark.driver.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/redis/*
spark.executor.extraClassPath=/usr/share/extension/dli/spark-jar/datasource/redis/*
Table 4 Mapping between compute resource specification parameters on the console and Spark Argument(--conf)
Console Parameter
Spark Argument(--conf)
Description
Notes and Constraints
Executor Memory
Complete executor memory = spark.executor.memory + spark.executor.memoryOverhead
spark.executor.memory
Executor memory, which is configurable.
-
spark.executor.memoryOverhead
Amount of off-heap memory for each executor in a Spark application. This parameter is not configurable.
spark.executor.memoryOverhead=spark.executor.memory * spark.executor.memoryOverheadFactor
The minimum value is 384 MB.
That is, when the value of spark.executor.memory multiplied by spark.executor.memoryOverheadFactor is less than 384 MB, the system automatically sets the value to 384 MB.
spark.executor.memoryOverheadFactor
This parameter determines the ratio of off-heap memory allocation to on-heap memory allocation. The default value is 0.1 for Spark applications run with the JAR file and 0.4 for those run with Python. This parameter is configurable.
The priority of spark.executor.memoryOverheadFactor is higher than that of spark.kubernetes.memoryOverheadFactor.
Executor Cores
spark.executor.cores
Number of executor cores, which is configurable.
-
Executors
spark.executor.instances
Number of executors, which is configurable.
-
Driver Cores
spark.driver.cores
Number of driver cores, which is configurable.
-
Driver Memory
Complete driver memory = spark.driver.memory + spark.driver.memoryOverhead
spark.driver.memory
Driver memory, which is configurable.
-
spark.driver.memoryOverhead
Amount of off-heap memory for each driver in a Spark application.
This parameter is not configurable.
spark.driver.memoryOverhead=spark.driver.memory * spark.driver.memoryOverheadFactor
The minimum value is 384 MB. That is, when the value of spark.driver.memory multiplied by spark.driver.memoryOverheadFactor is less than 384 MB, the system automatically sets the value to 384 MB.
spark.driver.memoryOverheadFactor
This parameter determines the ratio of off-heap memory allocation to on-heap memory allocation. The default value is 0.1 for Spark applications run with the JAR file and 0.4 for those run with Python. This parameter is configurable.
The priority of spark.driver.memoryOverheadFactor is higher than that of spark.kubernetes.memoryOverheadFactor.
-
spark.kubernetes.memoryOverheadFactor
Factor used to allocate memory in addition to the memory assigned to Spark executor and driver processes. The default value is 0.1 for Spark applications run with the JAR file and 0.4 for those run with Python. This parameter is configurable.
The priority of spark.executor.memoryOverheadFactor and spark.driver.memoryOverheadFactor is higher than that of spark.kubernetes.memoryOverheadFactor.
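As a worked example of the formulas above (values are illustrative): for a JAR-based job with spark.executor.memory=4G and the default spark.executor.memoryOverheadFactor of 0.1, spark.executor.memoryOverhead = 4096 MB x 0.1 = 409.6 MB, so the complete executor memory is about 4506 MB. With spark.executor.memory=2G, 2048 MB x 0.1 = 204.8 MB is below the 384 MB minimum, so the overhead is set to 384 MB and the complete executor memory is 2432 MB.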
- (Optional) Configure dependencies.
Table 5 Dependency configuration parameters
Parameter
Description
JAR Package Dependencies (--jars)
JAR file on which the Spark job depends. You can enter the JAR file name or the OBS path of the JAR file in the format of obs://Bucket name/Folder path/JAR file name.
Python File Dependencies (--py-files)
py-files on which the Spark job depends. You can enter the Python file name or the corresponding OBS path of the Python file. The format is as follows: obs://Bucket name/Folder name/File name.
Other Dependencies (--files)
Other files on which the Spark job depends. You can enter the name of the dependency file or the corresponding OBS path of the dependency file. The format is as follows: obs://Bucket name/Folder name/File name.
Group Name
If you select a group when creating a package, you can select all the packages and files in the group. For how to create a package, see Creating a DLI Package.
Spark 3.3.x or later does not support group information configuration.
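For example, dependency paths might be entered as follows (the bucket, folder, and file names are placeholders only):
JAR Package Dependencies (--jars): obs://your-bucket/deps/common-utils.jar
Python File Dependencies (--py-files): obs://your-bucket/deps/helpers.py
Other Dependencies (--files): obs://your-bucket/conf/job.properties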
- Set the following parameters in advanced settings:
- Select Dependency Resources: For details about the parameters, see Table 6.
- Configure Resources: For details about the parameters, see Table 7.
NOTE:
The parallelism degree of Spark resources is jointly determined by the number of Executors and the number of Executor CPU cores.
Maximum number of tasks that can be concurrently executed = Number of Executors x Number of Executor CPU cores
You can properly plan compute resource specifications based on the compute CUs of the queue you have purchased.
Note that Spark tasks need to be jointly executed by multiple roles, such as driver and executor. So, the number of executors multiplied by the number of executor CPU cores must be less than the number of compute CUs of the queue to prevent other roles from failing to start Spark tasks. For more information about roles for Spark tasks, see Apache Spark.
Calculation formula for Spark job parameters:
- Number of CUs (actual CUs used) = Max{(Driver Cores + Executors x Executor Cores), [(Driver Memory + Executors x Executor Memory)/4]}
- Memory = Driver Memory + (Executors x Executor Memory)
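For example (illustrative values): with Driver Cores = 2, Driver Memory = 7 GB, Executors = 6, Executor Cores = 2, and Executor Memory = 8 GB, the number of CUs = Max{(2 + 6 x 2), [(7 + 6 x 8)/4]} = Max{14, 13.75} = 14, and the memory = 7 + 6 x 8 = 55 GB.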
Table 6 Parameters for selecting dependency resources
Parameter
Description
modules
If the Spark version is 3.1.1, you do not need to select a module. Configure Spark parameters (--conf) instead.
Dependency modules provided by DLI for executing datasource connection jobs. To access different services, you need to select different modules.
- MRS HBase: sys.datasource.hbase
- DDS: sys.datasource.mongo
- MRS OpenTSDB: sys.datasource.opentsdb
- DWS: sys.datasource.dws
- RDS MySQL: sys.datasource.rds
- RDS PostgreSQL: sys.datasource.rds
- DCS: sys.datasource.redis
- CSS: sys.datasource.css
Resource Package
JAR package on which the Spark job depends.
Spark 3.3.x or later does not support this parameter. Configure resource package information in jars, pyFiles, and files.
Table 7 Resource specification parameters
Parameter
Description
Resource Specifications
Select a resource specification from the drop-down list box. The system provides three resource specification options for you to choose from.
Resource specifications involve the following parameters:
- Executor Memory
- Executor Cores
- Executors
- Driver Cores
- Driver Memory
If you modify any of these items, your modified settings are used.
Executor Memory
Customize the configuration item based on the selected resource specifications.
Memory of each Executor. It is recommended that the ratio of Executor CPU cores to Executor memory be 1:4.
Executor Cores
Number of CPU cores of each Executor applied for by Spark jobs, which determines the capability of each Executor to execute tasks concurrently.
Executors
Number of Executors applied for by a Spark job
Driver Cores
Number of CPU cores of the driver
Driver Memory
Driver memory size. It is recommended that the ratio of the number of driver CPU cores to the driver memory be 1:4.
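For example (illustrative values): following the recommended 1:4 ratio, 2 Executor cores would be paired with 8 GB of Executor memory, and 2 driver cores with 8 GB of driver memory.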
- If you select 3.3.1 for Spark Version, you can configure compute resource specification parameters in Spark Argument(--conf). Note that the configuration priority of Spark Argument(--conf) is higher than that of Resource Specifications(Optional) in Advanced Settings.
Table 4 describes the parameter mapping.
NOTE: When configuring compute resource specification parameters in Spark Argument(--conf), you can use the unit M, G, or K. If the unit is not specified, the default unit is byte.
- Spark 3.3.1 or later includes notes and constraints on the compute resource specifications for jobs. For details, see Table 8.
Table 8 Value ranges of compute resource specifications
Parameter
Elastic Resource Pool of Standard Edition
Elastic Resource Pool of Basic Edition
Executor Memory
450 MB to 64 GB
450 MB to 16 GB
Executor Cores
0 to 16
0 to 4
Executors
Unlimited
Unlimited
Driver Cores
0 to 16
0 to 4
Driver Memory
450 MB to 64 GB
450 MB to 16 GB
Job CU Quota
Unlimited
Unlimited
- Click Execute in the upper right corner of the Spark job editing page.
After the message "Batch processing job submitted successfully" is displayed, you can view the status and logs of the submitted job on the Spark Jobs page.
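For reference, a .py application submitted as a Spark job can be a standard PySpark script. The following is a minimal sketch only; the OBS path, application name, and data format are illustrative assumptions, and reading from OBS assumes the job's agency grants OBS access:
from pyspark.sql import SparkSession
import sys

def main():
    # The input path is taken from the first application parameter if provided;
    # the OBS bucket and path below are placeholders only.
    input_path = sys.argv[1] if len(sys.argv) > 1 else "obs://your-bucket/input/"

    # DLI supplies queue and cluster settings when the job is submitted.
    spark = SparkSession.builder.appName("dli_spark_demo").getOrCreate()

    # Minimal transformation: count the rows in the input data set.
    df = spark.read.json(input_path)
    print("row count:", df.count())

    spark.stop()

if __name__ == "__main__":
    main()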