Mastering Real-Time Alerting with ElastAlert2: Detecting DOS Attacks from WAF Logs
Introduction
ElastAlert 2 is a simple framework for alerting on anomalies, spikes, and other patterns of interest in data from Elasticsearch and OpenSearch. It monitors your data in near real time and fires alerts whenever a matching pattern is found.
ElastAlert 2 supports many alert types, including:
- AWS SES (Amazon Simple Email Service)
- AWS SNS (Amazon Simple Notification Service)
- Chatwork
- Command
- Datadog
- GoogleChat
- Jira
- Slack
- Telegram
It is a powerful monitoring tool, but realizing its full potential demands a matching amount of effort.
Use Case: Real-Time DOS Attack Detection on WAF Logs with ElastAlert2
In a recent project, I used ElastAlert2 to detect Denial of Service (DoS) attacks by monitoring WAF data stored in Elasticsearch. The goal was to send email alerts in near real time whenever suspicious traffic patterns indicating a DoS attack were observed.
Challenge
While ElastAlert2 is an effective framework for anomaly detection and alerting on Elasticsearch/OpenSearch data, the setup process can be challenging. The documentation is minimal, and the configuration requires careful attention to detail, especially when custom pipelines and email notifications are involved.
Solution
Here’s how I structured the solution:
1. Alerting Mechanism
The alert type was set to email (SMTP). When the alert condition is met (for example, a DoS pattern is detected in the logs), an email is automatically sent to the incident response team with the full details.
2. Rule Type: Frequency
I developed a frequency rule that triggers when at least one DoS-related log entry appears within a 60-minute timeframe. The rule uses wildcard filters to capture variations such as DOS* and Behavioral* in the dos_attack_name field.
3. Pipeline for Parsing Nested JSON Logs
Because the essential log data was nested within the _source field and stored as raw JSON in event.original, I built a custom ingest pipeline in Elasticsearch to convert the JSON into structured fields. This enabled ElastAlert2 to query and evaluate the actual log content.
4. Testing Before Deployment
Before running ElastAlert2 as a service, I used elastalert-test-rule to validate the configuration and rule logic, which helped me catch syntax errors early.
5. Service Integration
Finally, I set up ElastAlert2 as a systemd service so that it runs continuously in the background and restarts automatically on reboot. Each of these steps is detailed below.
Key Takeaways
- ElastAlert2 is great for custom alerting but has a steep setup curve.
- JSON parsing through Elasticsearch ingest pipelines is essential when logs are nested.
- Always validate your rules with elastalert-test-rule before deploying to production.
Prerequisites
- Python >= 3.9, plus the build dependencies:
sudo yum groupinstall "Development Tools"
sudo yum install libffi-devel
sudo yum install python3-devel
Note that the commands above are for Red Hat-based Linux systems. Here are the equivalents for Ubuntu-based systems:
sudo apt-get install build-essential
sudo apt-get install libffi-dev
sudo apt-get install python3-dev
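If you want to confirm that your interpreter meets the version requirement before continuing, a quick check (assuming python3 is on your PATH) is:
python3 --version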
- ElastAlert2 Installation:
git clone https://github.com/jertel/elastalert2.git
cd elastalert2
python3 setup.py install
elastalert-create-index
New index name (Default elastalert_status)
Name of existing index to copy (Default None)
New index elastalert_status created
Done!
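If you prefer not to build from source, ElastAlert2 is also published on PyPI, so a pip-based install (assuming pip3 is available on your system) should work as well:
pip3 install elastalert2
elastalert-create-index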
ElastAlert2 Configuration Setup:
ElastAlert2 uses several configuration files; you can keep everything in one file or split the configuration by purpose. Here, I split mine into two parts:
- Main Configuration
- Rules Configuration
Main Config:
Path: /root/elastalert2/examples
# This is the folder that contains the rule yaml files
# This can also be a list of directories
# Any .yaml file will be loaded as a rule
rules_folder: /root/elastalert2/examples/rules

# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 2

# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15

# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: <es_host_ip>

# The Elasticsearch port
es_port: 9200

# The AWS profile to use. Use this if you are using an AWS CLI profile.
# See http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
# for details
#profile: test

# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch

# Optional prefix for statsd metrics
#statsd_instance_tag: elastalert

# Optional statsd host
#statsd_host: dogstatsd

# Connect with TLS to Elasticsearch
#use_ssl: True

# Verify TLS certificates
#verify_certs: True

# Show TLS or certificate related warnings
#ssl_show_warn: True

# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See https://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
es_send_get_body_as: GET

# Optional basic-auth username and password for Elasticsearch
#es_username: someusername
#es_password: somepassword

# Use SSL authentication with client certificates. client_cert must be
# a pem file containing both cert and key for client
#ca_certs: /path/to/cacert.pem
#client_cert: /path/to/client_cert.pem
#client_key: /path/to/client_key.key

# The index on es_host which is used for metadata storage
# This can be an unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status

# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  seconds: 10

_source_enabled: true

logging:
  version: 1
  formatters:
    simple:
      format: '%(asctime)s %(levelname)s %(message)s'
  handlers:
    console:
      class: logging.StreamHandler
      formatter: simple
      level: DEBUG
    file:
      class: logging.FileHandler
      formatter: simple
      level: DEBUG
      filename: /var/log/elastalert.log  # Adjust the path as needed
  root:
    level: DEBUG
    handlers: [console, file]

smtp_auth_file: /root/elastalert2/examples/smtp_auth_file.yaml

# Alerting (Email)
smtp_host: <smtp_host_address>
smtp_port: <smtp_port>
#smtp_ssl: false
#verify_certs: false
I am using the SMTP configuration to send notifications via email.
smtp_auth_file: this file contains the username and password for the SMTP server.
_source_enabled: this setting is vital for this use case, so make sure you include it in your configuration.
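For reference, the smtp_auth_file is a small YAML file holding the SMTP credentials. A minimal sketch (the values below are placeholders, not my real credentials) looks like this:
user: "alerts@example.com"
password: "<smtp_password>"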
rules_folder: this points to the directory that holds my rules configuration files.
You can learn more about the other parameters in the references at the end of this post. Every variable plays a role when testing your rule; ElastAlert2 rarely flags errors or missing variables on its own, so it is important to pay attention to them.
Testing Your Rule
One bonus: you can test your config and rule before enabling the ElastAlert2 service, so any exception or syntax issue surfaces right away.
elastalert-test-rule --config configuration/config.yaml my_rules/rule1.yaml
Rule Configuration:
Path: /root/elastalert2/examples/rules
# Name of the rule
name: "DOS Attack Detection"

# Type of alert rule
type: frequency

# The Elasticsearch index to search
index: big_ip-waf-logs-*

# The filter to match DOS attack logs
filter:
- bool:
    should:
    - wildcard:
        dos_attack_name: "DOS*"
    - wildcard:
        dos_attack_name: "Behavioral*"

# The timeframe to search for the keyword
timeframe:
  minutes: 60

# Alert if at least 1 event is found
num_events: 1

# Use local time for alerts
use_local_time: false

# SMTP authentication file
smtp_auth_file: /root/elastalert2/examples/smtp_auth_file.yaml

# Alert actions
alert:
- email

# Email alert settings
from_addr: "cloudapm@tothenew.com"
email:
- "chetan.singh1@tothenew.com"

# Additional alert details to include in the alert message
alert_subject: "DOS Attack Detected"
alert_text: |
  DEVICE TYPE : ELK
  APPLICATION NAME : {0}
  ACTION : {1}
  DOS ATTACK ID : {2}
  HOST NAME : {3}
  CONTEXT NAME : {4}
  ATTACK IP ADDRESS : {5}
  DATE : {6}
  ITEM NAME : {7}
  VALUE : {8}
  SEVERITY : Exception
  GROUP : L2 Wintel
alert_text_args: ["_index", "action", "dos_attack_id", "hostname", "context_name", "source_ip", "@timestamp", "dos_attack_name", "dos_attack_tps"]
alert_text_type: alert_text_only
alert_missing_value: "NOT FOUND"

# Disable the inclusion of the full event
#include: []

# Define how often this rule can trigger
#realert:
#  minutes: 1

# Aggregation period
#aggregation:
#  minutes: 5
I’m using frequency as the rule type in this configuration, and I’m looking for DoS attacks in my Elasticsearch index, or more specifically, in the WAF logs.
The rule and configuration were running properly; however, one step failed. Any ideas?
The _source Block Challenge
I was unable to extract the values I needed because the data in the Elasticsearch index was contained only within the _source block.
https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
If you look at the official documentation for the _source field, you will notice that it is stored but not indexed, so it is not searchable; you can only view it, for example in Kibana Discover.
To address this issue, I enabled the _source_enabled variable and created a separate ingest pipeline to parse the original event into structured fields.
PUT _ingest/pipeline/parse_json
{
  "description": "Parse JSON string in event.original field",
  "processors": [
    {
      "json": {
        "field": "event.original",
        "target_field": "parsed_event"
      }
    }
  ]
}
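Before relying on the pipeline, you can dry-run it with the _simulate API. This sketch assumes a sample document whose event.original holds the raw WAF JSON; the field values are made up purely for illustration:
POST _ingest/pipeline/parse_json/_simulate
{
  "docs": [
    {
      "_source": {
        "event": {
          "original": "{\"dos_attack_name\": \"DOS Attack Detected\", \"source_ip\": \"203.0.113.10\"}"
        }
      }
    }
  ]
}
The response shows the parsed_event object with the structured fields the pipeline would produce.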
With that pipeline in place, I was able to extract values from the Elasticsearch index through the ElastAlert2 configuration.
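For the pipeline to run on incoming WAF documents, it has to be attached to the index. One way to do this (my assumption here, since the attachment step is not shown above) is to set it as the default pipeline on the matching indices:
PUT big_ip-waf-logs-*/_settings
{
  "index.default_pipeline": "parse_json"
}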
Running ElastAlert as a Service
To run ElastAlert2 as a service, create the file /etc/systemd/system/elastalert.service with the following contents:
[Unit]
Description=elastalert
After=multi-user.target

[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/root/elastalert2/
ExecStart=/usr/bin/python3 -m elastalert.elastalert --verbose --config /root/elastalert2/examples/config.yaml --rule /root/elastalert2/examples/rules/example_frequency.yaml
StandardOutput=syslog
StandardError=syslog
KillSignal=SIGKILL

[Install]
WantedBy=multi-user.target
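After saving the unit file, reload systemd and enable the service so it starts on boot (standard systemd commands; the service name matches the file created above):
sudo systemctl daemon-reload
sudo systemctl enable elastalert
sudo systemctl start elastalert
sudo systemctl status elastalert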
Final Thoughts
Setting up ElastAlert2 can be intimidating at first, owing to its sparse documentation and setup complexity. However, once you understand its structure and quirks, particularly those involving the _source field, it becomes an extremely useful alerting tool.
By breaking down configurations into manageable parts, testing rules thoroughly, and leveraging tools like ingest pipelines, you can turn ElastAlert2 into a powerful ally for proactive monitoring.
If you work with Elasticsearch and need dependable, configurable alerting, ElastAlert2 is well worth the setup effort.
References
https://elastalert2.readthedocs.io/en/latest/ruletypes.html
https://github.com/jertel/elastalert2/tree/master/examples/rules
https://elastalert2.readthedocs.io/_/downloads/en/latest/pdf/