Logstash Configuration File Templates
Introduction to the Templates
Table 1 lists commonly used Logstash configuration file templates.
Name | Description | Details |
---|---|---|
redis | Imports data from a Redis database to an Elasticsearch cluster. | |
elasticsearch | Migrates data between Elasticsearch clusters. | |
jdbc | Imports data from JDBC to an Elasticsearch cluster. | |
kafka | Imports data from Kafka to an Elasticsearch cluster. | |
beats | Imports data from Beats to an Elasticsearch cluster. | |
dis | Imports data from DIS to an Elasticsearch cluster. | |
Redis Template
Imports data from a Redis database to an Elasticsearch cluster.
```
input {
  redis {
    data_type => "pattern_channel" # One of ["list", "channel", "pattern_channel"].
    key => "lgs-*" # Name of the Redis list or channel.
    host => "xxx.xxx.xxx.xxxx"
    port => 6379
  }
}
filter {
  # Delete some fields added by Logstash.
  mutate {
    remove_field => ["@timestamp", "@version"]
  }
}
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200"]
    # user => "xxxx" # Username, required only for a security cluster.
    # password => "xxxx" # Password, required only for a security cluster.
    index => "xxxxxx" # Destination index.
  }
}
```
Configuration Item | Mandatory | Description |
---|---|---|
data_type | Yes | Data source type. The options are list, channel, and pattern_channel. |
key | Yes | Name of the Redis list or channel. |
host | Yes | IP address of the Redis server. |
port | No | Port to connect to. Default value: 6379. |
hosts | Yes | Address for accessing the Elasticsearch cluster. |
user | No | Username for accessing the Elasticsearch cluster. Generally, the value is admin. This parameter is required only for a security cluster. |
password | No | Password for accessing the Elasticsearch cluster. The password is set when the cluster is created. This parameter is required only for a security cluster. |
index | Yes | Index to which the data is to be migrated. Only one index can be configured. |
For more information, see the Logstash document Redis input plugin.
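If the source is a Redis list rather than a pub/sub channel, the input can be adjusted as in the following sketch. The key name and batch_count value are illustrative assumptions; batch_count controls how many events the plugin fetches from the list per batch.

```
input {
  redis {
    data_type => "list"   # Read from a Redis list instead of subscribing to a channel.
    key => "lgs-list"     # Hypothetical list name.
    host => "xxx.xxx.xxx.xxx"
    port => 6379
    batch_count => 125    # Number of events returned per batch (assumed value).
  }
}
```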
Elasticsearch Template
Migrates data between Elasticsearch clusters.
```
input {
  elasticsearch {
    # Source ES cluster IP addresses. When SSL is enabled, use 7.10.0 and do not add a protocol; otherwise, an error is reported.
    hosts => ["http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200"]
    # user => "xxxx" # Username, required only for a security cluster.
    # password => "xxxx" # Password, required only for a security cluster.
    index => "xxxx,xxx,xxx" # List of indexes to be migrated, separated with commas (,).
    docinfo => true
    # Source ES certificate. For a cluster on the cloud, retain this value, or enter the corresponding path when using a custom certificate. For a self-built Logstash cluster, download the certs file from the ES cluster details page and enter its path here.
    # ca_file => "/rds/datastore/logstash/v7.10.0/package/logstash-7.10.0/extend/certs" # for 7.10.0
    # ssl => true # Set to true when SSL is enabled.
  }
}
filter {
  # Delete some fields added by Logstash.
  mutate {
    remove_field => ["@timestamp", "@version"]
  }
}
output {
  elasticsearch {
    # Destination ES cluster IP addresses. When SSL is enabled, use 7.10.0 and do not add a protocol.
    hosts => ["http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200"]
    # user => "xxx" # Username, required only for a security cluster.
    # password => "xxxx" # Password, required only for a security cluster.
    index => "%{[@metadata][_index]}" # Destination index. This setting keeps the index name consistent with the source. You can also specify an index name.
    # document_type => "%{[@metadata][_type]}" # Destination _type, kept consistent with the source.
    # document_id => "%{[@metadata][_id]}" # Destination _id. If you do not need to retain the original _id, delete this line to improve performance.
    # Destination ES certificate. For a cluster on the cloud, retain this value, or enter the corresponding path when using a custom certificate. For a self-built Logstash cluster, download the certs file from the ES cluster details page and enter its path here.
    # cacert => "/rds/datastore/logstash/v7.10.0/package/logstash-7.10.0/extend/certs" # for 7.10.0
    # ssl => true # Set to true when SSL is enabled.
    # ssl_certificate_verification => false # Set to false to skip server certificate verification when SSL is enabled.
  }
}
```
Configuration Item | Mandatory | Description |
---|---|---|
hosts | Yes | Address for accessing the source Elasticsearch cluster from which data is migrated. |
user | No | Username for accessing the Elasticsearch cluster. Generally, the value is admin. This parameter is required only for a security cluster. |
password | No | Password for accessing the Elasticsearch cluster. The password is set when the cluster is created. This parameter is required only for a security cluster. |
index | Yes | Index from which data is to be migrated. |
docinfo | No | Whether to include Elasticsearch document metadata in the event. Value: true or false. If set to true, document information such as the index, type, and ID is included in the event. |
ca_file | No | Certificate path of the source cluster. Default value: /rds/datastore/logstash/v7.10.0/package/logstash-7.10.0/extend/certs. For a cloud-based Logstash cluster, retain the default value, or enter the path of your custom certificate. For a self-built Logstash cluster, download the certificate on the details page of the SSL-enabled Elasticsearch cluster and enter its path here. |
ssl | No | Set this parameter to true if SSL is enabled for the source Elasticsearch cluster. |
hosts | Yes | Address for accessing the destination Elasticsearch cluster to which data is imported. |
user | No | Username for accessing the Elasticsearch cluster. Generally, the value is admin. This parameter is required only for a security cluster. |
password | No | Password for accessing the Elasticsearch cluster. The password is set when the cluster is created. This parameter is required only for a security cluster. |
index | Yes | Index to which the data is to be migrated. Only one index can be configured. |
document_type | No | Document type at the destination. Set this parameter to %{[@metadata][_type]} to keep the type consistent with the source. |
document_id | No | Document ID at the destination. Set this parameter to %{[@metadata][_id]} to retain the original _id. If you do not need to retain the original _id, delete this configuration to improve performance. |
cacert | No | Certificate path of the destination cluster. Default value: /rds/datastore/logstash/v7.10.0/package/logstash-7.10.0/extend/certs. For a cloud-based Logstash cluster, retain the default value, or enter the path of your custom certificate. For a self-built Logstash cluster, download the certificate on the details page of the SSL-enabled Elasticsearch cluster and enter its path here. |
ssl | No | Set this parameter to true if SSL is enabled for the destination Elasticsearch cluster. |
ssl_certificate_verification | No | Set this parameter to false to skip server certificate verification when SSL is enabled. |
For more information, see the Logstash document Elasticsearch input plugin.
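To migrate only a subset of documents instead of a full index, the elasticsearch input also accepts a query parameter. The following sketch is illustrative; the index name and the create_time field are assumptions and must match your own data.

```
input {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200"]
    index => "my_index"   # Hypothetical index name.
    docinfo => true
    # Migrate only documents matching this query instead of the whole index.
    query => '{ "query": { "range": { "create_time": { "gte": "2023-01-01" } } } }'
  }
}
```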
JDBC Template
Imports data from JDBC to an Elasticsearch cluster.
```
input {
  jdbc {
    # For 7.10.0: jdbc_driver_library => "/rds/datastore/logstash/v7.10.0/package/logstash-7.10.0/extend/jars/mariadb-java-client-2.7.0.jar"
    jdbc_driver_library => "xxxxxxxxxxx"
    jdbc_driver_class => "org.mariadb.jdbc.Driver"
    jdbc_connection_string => "jdbc:mariadb://xxx.xxx.xxx.xxx:xxx/data_base_name"
    jdbc_user => "xxxx"
    jdbc_password => "xxxx"
    statement => "SELECT * from table_name" # This SQL statement determines the data to be imported.
  }
}
filter {
  # Delete some fields added by Logstash.
  mutate {
    remove_field => ["@timestamp", "@version"]
  }
}
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200"]
    # user => "xxxx" # Username, required only for a security cluster.
    # password => "xxxx" # Password, required only for a security cluster.
    index => "xxxxxx" # Destination index.
  }
}
```
Configuration Item | Mandatory | Description |
---|---|---|
jdbc_driver_library | Yes | Path of the JDBC driver library. Currently, only the existing built-in drivers are supported. Uploading custom drivers is not supported. |
jdbc_driver_class | Yes | JDBC driver class to be loaded, for example, org.mariadb.jdbc.Driver. |
jdbc_connection_string | Yes | JDBC connection string. |
jdbc_user | Yes | JDBC username. |
jdbc_password | Yes | JDBC password. |
statement | Yes | SQL statement of the input data. |
hosts | Yes | Address for accessing the Elasticsearch cluster to which data is imported. |
user | No | Username for accessing the Elasticsearch cluster. Generally, the value is admin. This parameter is required only for a security cluster. |
password | No | Password for accessing the Elasticsearch cluster. The password is set when the cluster is created. This parameter is required only for a security cluster. |
index | Yes | Index to which the data is to be migrated. Only one index can be configured. |
For more information, see the Logstash document Jdbc input plugin.
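For continuous, incremental imports rather than a one-off query, the jdbc input supports a schedule and a tracking column. The following is a sketch; the cron schedule and the id column are illustrative assumptions, and :sql_last_value is substituted by the plugin with the last tracked value.

```
input {
  jdbc {
    jdbc_driver_library => "xxxxxxxxxxx"
    jdbc_driver_class => "org.mariadb.jdbc.Driver"
    jdbc_connection_string => "jdbc:mariadb://xxx.xxx.xxx.xxx:xxx/data_base_name"
    jdbc_user => "xxxx"
    jdbc_password => "xxxx"
    schedule => "*/5 * * * *"        # Run the query every 5 minutes (cron syntax, assumed interval).
    use_column_value => true
    tracking_column => "id"          # Hypothetical auto-increment column.
    tracking_column_type => "numeric"
    statement => "SELECT * FROM table_name WHERE id > :sql_last_value"
  }
}
```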
Kafka Template
Imports data from Kafka to an Elasticsearch cluster.
```
input {
  kafka {
    bootstrap_servers => "xxx.xxx.xxx.xxx:xxxx"
    topics => ["xxxxxxxx"]
    group_id => "kafka_es_test"
    auto_offset_reset => "earliest"
  }
}
filter {
  # Delete some fields added by Logstash.
  mutate {
    remove_field => ["@timestamp", "@version"]
  }
}
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200"]
    # user => "xxxx" # Username, required only for a security cluster.
    # password => "xxxx" # Password, required only for a security cluster.
    index => "xxxxxx" # Destination index.
  }
}
```
Configuration Item | Mandatory | Description |
---|---|---|
bootstrap_servers | Yes | IP address and port number of the Kafka instance. |
topics | Yes | List of topics to be subscribed to. |
group_id | Yes | Identifier of the group to which the consumer belongs. |
auto_offset_reset | Yes | Initial offset in Kafka. The options include earliest (consume from the earliest available offset) and latest (consume only messages produced after startup). |
hosts | Yes | Address for accessing the Elasticsearch cluster to which data is imported. |
user | No | Username for accessing the Elasticsearch cluster. Generally, the value is admin. This parameter is required only for a security cluster. |
password | No | Password for accessing the Elasticsearch cluster. The password is set when the cluster is created. This parameter is required only for a security cluster. |
index | Yes | Index to which the data is to be migrated. Only one index can be configured. |
For more information, see the Logstash document Kafka input plugin.
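If the Kafka messages are JSON documents, the input can parse them directly, and multiple consumer threads can be used to read partitions in parallel. The following sketch assumes a JSON payload and three topic partitions; adjust both to your setup.

```
input {
  kafka {
    bootstrap_servers => "xxx.xxx.xxx.xxx:xxxx"
    topics => ["xxxxxxxx"]
    group_id => "kafka_es_test"
    auto_offset_reset => "earliest"
    consumer_threads => 3   # Should not exceed the number of topic partitions (assumed value).
    codec => "json"         # Parse each Kafka message as a JSON document.
  }
}
```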
DIS Template
Imports data from DIS to an Elasticsearch cluster.
```
input {
  dis {
    streams => ["YOUR_DIS_STREAM_NAME"]
    endpoint => "https://dis.xxxxx.myhuxxxoud.com" # Replace xxxxx with your region name.
    ak => "YOUR_ACCESS_KEY_ID"
    sk => "YOUR_SECRET_KEY_ID"
    region => "YOUR_Region"
    project_id => "YOUR_PROJECT_ID"
    group_id => "YOUR_APP_ID"
    client_id => "YOUR_CLIENT_ID"
    auto_offset_reset => "earliest"
  }
}
filter {
  # Delete some fields added by Logstash.
  mutate {
    remove_field => ["@timestamp", "@version"]
  }
}
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200"]
    # user => "xxxx" # Username, required only for a security cluster.
    # password => "xxxx" # Password, required only for a security cluster.
    index => "xxxxxx" # Destination index.
  }
}
```
Configuration Item | Mandatory | Description |
---|---|---|
streams | Yes | Name of the DIS stream. It must be the same as the stream name specified when the DIS stream was created on the DIS console. |
endpoint | Yes | Data API address of the region where DIS resides. |
ak | Yes | User's access key (AK). |
sk | Yes | User's secret key (SK). |
region | Yes | Region where DIS is supported. |
project_id | Yes | Project ID of the region where DIS resides. |
group_id | Yes | DIS App name, used to identify a consumer group. The value can be any character string. |
client_id | No | Client ID, which identifies a consumer in a consumer group. If multiple pipelines or Logstash instances are started for consumption, set this parameter to different values. For example, the value of instance 1 is client1, and the value of instance 2 is client2. |
auto_offset_reset | No | Position where data starts to be consumed from the stream, for example, earliest (consume from the start of the stream). |
hosts | Yes | Address for accessing the Elasticsearch cluster to which data is imported. |
user | No | Username for accessing the Elasticsearch cluster. Generally, the value is admin. This parameter is required only for a security cluster. |
password | No | Password for accessing the Elasticsearch cluster. The password is set when the cluster is created. This parameter is required only for a security cluster. |
index | Yes | Index to which the data is to be migrated. Only one index can be configured. |
Beats Template
Imports data from Beats to an Elasticsearch cluster.
```
input {
  beats {
    port => 5044 # Port that Logstash listens on for Beats connections.
  }
}
filter {
  # Delete some fields added by Logstash.
  mutate {
    remove_field => ["@timestamp", "@version"]
  }
}
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200", "http://xxx.xxx.xxx.xxx:9200"]
    # user => "xxxx" # Username, required only for a security cluster.
    # password => "xxxx" # Password, required only for a security cluster.
    index => "xxxxxx" # Destination index.
  }
}
```
Configuration Item | Mandatory | Description |
---|---|---|
port | Yes | Port that Logstash listens on for connections from Beats. Port 5044 is used in this template. |
hosts | Yes | Address for accessing the Elasticsearch cluster to which data is imported. |
user | No | Username for accessing the Elasticsearch cluster. Generally, the value is admin. This parameter is required only for a security cluster. |
password | No | Password for accessing the Elasticsearch cluster. The password is set when the cluster is created. This parameter is required only for a security cluster. |
index | Yes | Index to which the data is to be migrated. Only one index can be configured. |
For more information, see the Logstash document Beats input plugin.
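When multiple Beats (for example, Filebeat and Metricbeat) send to the same Logstash pipeline, a common pattern is to name the destination index after the Beat and the event date. The following output sketch uses metadata fields that Beats attach to each event; the date pattern is an illustrative choice.

```
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200"]
    # One index per Beat per day, e.g. filebeat-2023.01.01 (assumed naming scheme).
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```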