Compare commits
20 Commits
| SHA1 |
|---|
| e64b880c97 |
| e559e16e35 |
| 12e6ba0135 |
| 0f0bdd2da7 |
| 07c768a4cf |
| 0b64f2ed03 |
| d102dc30f4 |
| 6f7e99639c |
| bebedb1e15 |
| ff7bbb98d0 |
| 57d7688c3a |
| 83b25d81a6 |
| 7e24379fa1 |
| d03018de9b |
| f65b2d468d |
| e119bc7194 |
| c5a446ea65 |
| b8b91880d6 |
| e7730ebde5 |
| 63ee043f34 |
.gitignore (vendored, 7 changes) — Normal file → Executable file

@@ -1,4 +1,5 @@
__pycache__/*
*.pyc
__pycache__/
monitoring_data.json
log_position.txt
.DS_Store
monitoring.db
*.log
AGENTS.md (47 changes) — Executable file

@@ -0,0 +1,47 @@
# AGENTS.md

This document outlines the autonomous and human agents involved in the LLM-Powered Monitoring Agent project.

## Human Agents

### Inanis

- **Role**: Primary Operator, Project Owner
- **Responsibilities**:
  - Defines project goals and requirements.
  - Provides high-level guidance and approval for major changes.
  - Reviews agent outputs and provides feedback.
  - Manages overall project direction.
- **Contact**: [If Inanis wants to provide contact info, it would go here]

## Autonomous Agents

### Blight (LLM-Powered Monitoring Agent)

- **Role**: Autonomous Monitoring and Anomaly Detection Agent
- **Type**: Large Language Model (LLM) based agent
- **Capabilities**:
  - Collects system and network metrics (logs, temperatures, network performance, Nmap scans).
  - Analyzes collected data against historical baselines.
  - Detects anomalies using an integrated LLM (Llama3.1).
  - Generates actionable reports on detected anomalies.
  - Sends alerts via Discord and Google Home.
  - Provides daily recaps of events.
- **Interaction**:
  - Receives instructions and context from Inanis via CLI.
  - Provides analysis and reports in JSON format.
  - Operates continuously in the background (unless in test mode).
- **Dependencies**:
  - `ollama` (for LLM inference)
  - `nmap`
  - `lm-sensors`
  - Python libraries (as listed in `requirements.txt`)
- **Configuration**: Configured via `config.py`, `CONSTRAINTS.md`, and `known_issues.json`.
- **Status**: Operational and continuously evolving.

## Agent Interactions

- **Inanis -> Blight**: Inanis provides high-level tasks, reviews Blight's output, and refines its behavior through code modifications and configuration updates.
- **Blight -> Inanis**: Blight reports detected anomalies, system status, and daily summaries to Inanis through configured alerting channels (Discord, Google Home) and logs.
- **Blight <-> System**: Blight interacts with the local operating system to collect data (reading logs, running commands like `sensors` and `nmap`).
- **Blight <-> LLM**: Blight sends collected and processed data to the local Ollama LLM for intelligent analysis and receives anomaly reports.
CONSTRAINTS.md (2 changes) — Normal file → Executable file

@@ -1,6 +1,8 @@
## LLM Constraints and Guidelines
- Not everything is an anomaly. Err on the side of caution when selecting severity. It's OK not to report anything. You don't have to say anything if you don't want to, or don't need to.
- Please do not report on anything that is older than 24 hours.
- The server uses a custom DNS server at 192.168.2.112.
- Please think carefully about whether the measured values exceed the averages by any significant margin. A few seconds, or a few degrees, of difference do not amount to a significant margin. Only report anomalies with delta values greater than 10 (see the sketch below).

### Important Things to Focus On:
- Security-related events such as failed login attempts, unauthorized access, or unusual network connections.
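A minimal sketch of the delta check this guideline implies (function name and threshold parameter are illustrative, not the agent's actual code):

```python
def exceeds_significant_margin(current, baseline, threshold=10):
    """Return True only when the deviation from the baseline is significant."""
    if current is None or baseline is None:
        return False
    return abs(current - baseline) > threshold

# A 3-degree temperature bump is ignored; a 15-unit jump is reported.
assert not exceeds_significant_margin(48, 45)
assert exceeds_significant_margin(60, 45)
```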
PROGRESS.md (104 changes) — Normal file → Executable file

@@ -13,52 +13,84 @@

## Phase 2: Data Storage

9. [x] Create `data_storage.py`
10. [x] Implement data storage functions in `data_storage.py`
11. [x] Update `monitor_agent.py` to use data storage
12. [x] Update `SPEC.md` to reflect data storage functionality
9. [x] Implement data storage functions in `data_storage.py`
10. [x] Update `monitor_agent.py` to use data storage
11. [x] Update `SPEC.md` to reflect data storage functionality

## Phase 3: Expanded Monitoring

13. [x] Implement CPU temperature monitoring
14. [x] Implement GPU temperature monitoring
15. [x] Implement system login attempt monitoring
16. [x] Update `monitor_agent.py` to include new metrics
17. [x] Update `SPEC.md` to reflect new metrics
18. [x] Extend `calculate_baselines` to include system temps
12. [x] Implement CPU temperature monitoring
13. [x] Implement GPU temperature monitoring
14. [x] Implement system login attempt monitoring
15. [x] Update `monitor_agent.py` to include new metrics
16. [x] Update `SPEC.md` to reflect new metrics
17. [x] Extend `calculate_baselines` to include system temps

## Phase 4: Troubleshooting

19. [x] Investigated and resolved issue with `jc` library
20. [x] Removed `jc` library as a dependency
21. [x] Implemented manual parsing of `sensors` command output
18. [x] Investigated and resolved issue with `jc` library
19. [x] Removed `jc` library as a dependency
20. [x] Implemented manual parsing of `sensors` command output

## Tasks Already Done
## Phase 5: Network Scanning (Nmap Integration)

[x] Ensure we aren't using mock data for get_system_logs() and get_network_metrics()
[x] Improve `get_system_logs()` to read new lines since last check
[x] Improve `get_network_metrics()` by using a library like `pingparsing`
[x] Ensure we are including CONSTRAINTS.md in our analyze_data_with_llm() function
[x] Summarize entire report into a single sentence to be said to Home Assistant
[x] Figure out why Home Assistant isn't using the speaker

## Keeping track of Current Objectives

[x] Improve "high" priority detection by explicitly instructing LLM to output severity in structured JSON format.
[x] Implement dynamic contextual information (Known/Resolved Issues Feed) for LLM to improve severity detection.

## Network Scanning (Nmap Integration)

1. [x] Add `python-nmap` to `requirements.txt` and install.
2. [x] Define `NMAP_TARGETS` and `NMAP_SCAN_OPTIONS` in `config.py`.
3. [x] Create a new function `get_nmap_scan_results()` in `monitor_agent.py`:
21. [x] Add `python-nmap` to `requirements.txt` and install.
22. [x] Define `NMAP_TARGETS` and `NMAP_SCAN_OPTIONS` in `config.py`.
23. [x] Create a new function `get_nmap_scan_results()` in `monitor_agent.py`:
    * [x] Use `python-nmap` to perform a scan on the defined targets with the specified options.
    * [x] Return the parsed results.
4. [x] Integrate `get_nmap_scan_results()` into the main monitoring loop:
24. [x] Integrate `get_nmap_scan_results()` into the main monitoring loop:
    * [x] Call this function periodically (e.g., less frequently than other metrics).
    * [x] Add the `nmap` results to the `combined_data` dictionary.
5. [x] Update `data_storage.py` to store `nmap` results.
6. [x] Extend `calculate_baselines()` in `data_storage.py` to include `nmap` baselines:
25. [x] Update `data_storage.py` to store `nmap` results.
26. [x] Extend `calculate_baselines()` in `data_storage.py` to include `nmap` baselines:
    * [x] Compare current `nmap` results with historical data to identify changes.
7. [x] Modify `analyze_data_with_llm()` prompt to include `nmap` scan results for analysis.
8. [x] Consider how to handle `nmap` permissions.
27. [x] Modify `analyze_data_with_llm()` prompt to include `nmap` scan results for analysis.
28. [x] Consider how to handle `nmap` permissions.
29. [x] Improve Nmap data logging to include IP addresses, open ports, and service details.

## Phase 6: Code Refactoring and Documentation

30. [x] Remove duplicate `pingparsing` import in `monitor_agent.py`.
31. [x] Refactor `get_cpu_temperature` and `get_gpu_temperature` to call the `sensors` command only once.
32. [x] Refactor `get_login_attempts` to use a position file for efficient log reading.
33. [x] Simplify JSON parsing in `analyze_data_with_llm`.
34. [x] Move LLM prompt to a separate function `build_llm_prompt`.
35. [x] Refactor main loop into smaller functions (`run_monitoring_cycle`, `main`).
36. [x] Create helper function in `data_storage.py` for calculating average metrics.
37. [x] Update `README.md` with current project status and improvements.
38. [x] Create `AGENTS.md` to document human and autonomous agents.

## Previous TODO

- [x] Improve "high" priority detection by explicitly instructing LLM to output severity in structured JSON format.
- [x] Implement dynamic contextual information (Known/Resolved Issues Feed) for LLM to improve severity detection.
- [x] Change baseline calculations to only use integers instead of floats.
- [x] Add a log file that only keeps records for the past 24 hours.
- [x] Log all LLM responses to the console.
- [x] Reduce alerts to only happen between 9am and 12am.
- [x] Get hostnames of devices in Nmap scan.
- [x] Filter out RTT fluctuations below 10 seconds.
- [x] Filter out temperature fluctuations with differences less than 5 degrees.
- [x] Create a list of known port numbers and their applications for the LLM to check against, to see if an open port is a threat.
- [x] When calculating averages, round up to the nearest integer (see the sketch after this list). We only want to deliver whole integers to the LLM, nothing with decimal points; it gets confused by decimal points.
- [x] In the Discord message, include the exact specific details and the log of the problem that prompted the alert.
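A minimal sketch of that rounding rule (helper name is illustrative, not the agent's actual code):

```python
import math

def average_as_int(values):
    """Average a list of numbers, rounded up to a whole integer for the LLM."""
    if not values:
        return 0
    return math.ceil(sum(values) / len(values))

print(average_as_int([44.2, 45.1, 46.0]))  # 46 — no decimal points reach the LLM
```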
## Phase 7: Offloading Analysis from LLM

39. [x] Create a new function `analyze_data_locally` in `monitor_agent.py`.
    39.1. [x] This function will take `data`, `baselines`, `known_issues`, and `port_applications` as input.
    39.2. [x] It will contain the logic to compare current data with baselines and predefined thresholds.
    39.3. [x] It will be responsible for identifying anomalies for various metrics (CPU/GPU temp, network RTT, failed logins, Nmap changes).
    39.4. [x] It will return a list of dictionaries, where each dictionary represents an anomaly and contains 'severity' and 'reason' keys.
40. [x] Refactor `analyze_data_with_llm` into a new function called `generate_llm_report`.
    40.1. [x] This function will take the list of anomalies from `analyze_data_locally` as input.
    40.2. [x] It will construct a simple prompt to ask the LLM to generate a human-readable summary of the anomalies.
    40.3. [x] The LLM will no longer be making analytical decisions.
41. [x] Update `run_monitoring_cycle` to orchestrate the new workflow (sketched after this list).
    41.1. [x] Call `analyze_data_locally` to get the list of anomalies.
    41.2. [x] If anomalies are found, call `generate_llm_report` to create the report.
    41.3. [x] Use the output of `generate_llm_report` for alerting.
42. [x] Remove the detailed analytical instructions from `build_llm_prompt` as they will be handled by `analyze_data_locally`.
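A minimal sketch of the orchestration described in item 41, assuming the function names above (the real `run_monitoring_cycle` also collects the data and handles scheduling; `send_alerts` is a hypothetical helper):

```python
def run_monitoring_cycle(data, baselines, known_issues, port_applications):
    # 41.1: rule-based local analysis replaces the LLM's analytical role
    anomalies = analyze_data_locally(data, baselines, known_issues, port_applications)
    if not anomalies:
        return  # nothing to report this cycle
    # 41.2: the LLM only summarizes the anomalies into a readable report
    report = generate_llm_report(anomalies)
    # 41.3: alert using the generated report
    send_alerts(report, anomalies)
```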
## TODO
README.md (147 changes) — Normal file → Executable file

@@ -1,104 +1,93 @@
# LLM-Powered Monitoring Agent

This project is a self-hosted monitoring agent that uses a local Large Language Model (LLM) to detect anomalies in system and network data. It's designed to be a simple, self-contained Python script that can be easily deployed on a server.
This project implements an LLM-powered monitoring agent designed to continuously collect system and network data, analyze it against historical baselines, and alert on anomalies. The agent leverages a local Large Language Model (LLM) for intelligent anomaly detection and integrates with Discord and Google Home for notifications.

## 1. Installation
## Features

To get started, you'll need to have Python 3.8 or newer installed. Then, follow these steps:
- **System Log Monitoring**: Tracks new entries in `/var/log/syslog` and `/var/log/auth.log` (for login attempts).
- **Network Metrics**: Gathers network performance data by pinging a public IP (e.g., 8.8.8.8).
- **Hardware Monitoring**: Collects CPU and GPU temperature data.
- **Nmap Scanning**: Periodically performs network scans to discover hosts and open ports.
- **Historical Baseline Analysis**: Compares current data against a 24-hour rolling baseline to identify deviations.
- **LLM-Powered Anomaly Detection**: Utilizes a local LLM (Ollama with Llama3.1) to analyze combined system data, baselines, and Nmap changes for anomalies.
- **Alerting**: Sends high-severity anomaly alerts to Discord and Google Home speakers (via Home Assistant).
- **Daily Recap**: Provides a daily summary of detected events.

1. **Clone the repository or download the files:**

## Recent Improvements

```bash
git clone <repository_url>
cd <repository_directory>
```
- **Enhanced Nmap Data Logging**: The Nmap scan results are now processed and stored in a more structured format, including:
  - Discovered IP addresses.
  - Status of each host.
  - Detailed list of open ports for each host, including service, product, and version information.
  This significantly improves the clarity and utility of Nmap data for anomaly detection.
- **Code Refactoring (`monitor_agent.py`)**:
  - **Optimized Sensor Data Collection**: CPU and GPU temperature data are now collected with a single call to the `sensors` command, improving efficiency.
  - **Efficient Login Attempt Logging**: The agent now tracks its position in `/var/log/auth.log`, preventing redundant reads of the entire file and improving performance for large log files.
  - **Modular Main Loop**: The core monitoring logic has been broken down into smaller, more manageable functions, enhancing readability and maintainability.
  - **Separated LLM Prompt Building**: The complex LLM prompt construction logic has been moved into a dedicated function, making `analyze_data_with_llm` more focused.
- **Code Refactoring (`data_storage.py`)**:
  - **Streamlined Baseline Calculations**: Helper functions have been introduced to reduce code duplication and improve clarity in the calculation of average metrics for baselines.

2. **Create and activate a Python virtual environment:**

```bash
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
```

3. **Install the required Python libraries:**

```bash
pip install -r requirements.txt
```

## 2. Setup

Before running the agent, you need to configure it and ensure the necessary services are running.
## Setup and Installation

### Prerequisites

- **Ollama:** The agent requires that [Ollama](https://ollama.com/) is installed and running on the server.
- **LLM Model:** You must have the `llama3.1:8b` model pulled and available in Ollama. You can pull it with the following command:
- Python 3.x
- `ollama` installed and running with the `llama3.1:8b` model pulled (`ollama pull llama3.1:8b`)
- `nmap` installed
- `lm-sensors` installed (for CPU/GPU temperature monitoring)
- Discord webhook URL
- (Optional) Home Assistant instance with a long-lived access token and a Google Home speaker configured.

### Installation

1. Clone the repository:
```bash
ollama pull llama3.1:8b
git clone <repository_url>
cd LLM-Powered-Monitoring-Agent
```
2. Install Python dependencies:
```bash
pip install -r requirements.txt
```
3. Configure the agent:
   - Open `config.py` and update the following variables:
     - `DISCORD_WEBHOOK_URL`
     - `HOME_ASSISTANT_URL` (if using Google Home alerts)
     - `HOME_ASSISTANT_TOKEN` (if using Google Home alerts)
     - `GOOGLE_HOME_SPEAKER_ID` (if using Google Home alerts)
     - `NMAP_TARGETS` (e.g., "192.168.1.0/24" or "192.168.1.100")
     - `NMAP_SCAN_OPTIONS` (default is "-sS -T4")
     - `DAILY_RECAP_TIME` (e.g., "20:00" for 8 PM)
     - `TEST_MODE` (set to `True` for a single run, `False` for continuous operation)

### Configuration
## Usage

All configuration is done in the `config.py` file. You will need to replace the placeholder values with your actual credentials and URLs.

- `DISCORD_WEBHOOK_URL`: Your Discord channel's webhook URL. This is used to send alerts.
- `HOME_ASSISTANT_URL`: The URL of your Home Assistant instance (e.g., `http://192.168.1.50:8123`).
- `HOME_ASSISTANT_TOKEN`: A Long-Lived Access Token for your Home Assistant instance. You can generate this in your Home Assistant profile settings.
- `GOOGLE_HOME_SPEAKER_ID`: The `media_player` entity ID for your Google Home speaker in Home Assistant (e.g., `media_player.kitchen_speaker`).
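Pulled together, a minimal `config.py` sketch with placeholder values (the shipped file also defines `NMAP_TARGETS`, `NMAP_SCAN_OPTIONS`, `DAILY_RECAP_TIME`, `DOCKER_CONTAINERS_TO_MONITOR`, and `TEST_MODE`):

```python
# config.py — placeholders only; replace with your own credentials
DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"
HOME_ASSISTANT_URL = "http://192.168.1.50:8123"
HOME_ASSISTANT_TOKEN = "<long-lived access token>"
GOOGLE_HOME_SPEAKER_ID = "media_player.kitchen_speaker"
```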
## 3. Usage

Once the installation and setup are complete, you can run the monitoring agent with the following command:
To run the monitoring agent:

```bash
python monitor_agent.py
```

The script will start a continuous monitoring loop. Every 5 minutes, it will:
### Test Mode

1. Collect simulated system and network data.
2. Send the data to the local LLM for analysis.
3. If the LLM detects a **high-severity** anomaly, it will send an alert to your configured Discord channel and broadcast a message to your Google Home speaker via Home Assistant.
4. At the time specified in `DAILY_RECAP_TIME`, a summary of all anomalies for the day will be sent to the Discord channel.
Set `TEST_MODE = True` in `config.py` to run the agent once and exit. This is useful for testing configurations and initial setup.

The script will print its status and any detected anomalies to the console.
## Extending and Customizing

### Nmap Scans
- **Adding New Metrics**: You can add new data collection functions in `monitor_agent.py` and include their results in the `combined_data` dictionary.
- **Customizing LLM Analysis**: Modify the `CONSTRAINTS.md` file to provide specific instructions or constraints to the LLM for anomaly detection.
- **Known Issues**: Update `known_issues.json` with any known or expected system behaviors to prevent the LLM from flagging them as anomalies.
- **Alerting Mechanisms**: Implement additional alerting functions (e.g., email, SMS) in `monitor_agent.py` and integrate them into the anomaly detection logic.

The agent uses `nmap` to scan the network for open ports. By default, it uses a TCP SYN scan (`-sS`), which requires root privileges. If the script is not run as root, it will fall back to a TCP connect scan (`-sT`), which does not require root privileges but is slower and more likely to be detected.
## Project Structure

To run the agent with root privileges, use the `sudo` command:

```bash
sudo python monitor_agent.py
```

## 4. Features

### Priority System

The monitoring agent uses a priority system to classify anomalies. The LLM is instructed to return a severity level for each anomaly it detects. The possible severity levels are:

- **high**: Indicates a critical issue that requires immediate attention. An alert is sent to Discord and Google Home.
- **medium**: Indicates a non-critical issue that should be investigated. No alert is sent.
- **low**: Indicates a minor issue or a potential false positive. No alert is sent.
- **none**: Indicates that no anomaly was detected.

### Known Issues Feed

The agent uses a `known_issues.json` file to provide the LLM with a list of known issues and their resolutions. This helps the LLM avoid flagging resolved or expected issues as anomalies.

You can add new issues to the `known_issues.json` file by following the existing format. Each issue should have an "issue" and a "resolution" key. For example:

```json
[
    {
        "issue": "CPU temperature spikes to 80C under heavy load",
        "resolution": "This is normal behavior for this CPU model and is not a cause for concern."
    }
]
```

**Note on Mock Data:** The current version of the script uses mock data for system logs and network metrics. To use this in a real-world scenario, you would need to replace the mock data with actual data from your systems.
- `monitor_agent.py`: Main script for data collection, LLM interaction, and alerting.
- `data_storage.py`: Handles loading, storing, and calculating baselines from historical data.
- `config.py`: Stores configurable parameters for the agent.
- `requirements.txt`: Lists Python dependencies.
- `CONSTRAINTS.md`: Defines constraints and guidelines for the LLM's analysis.
- `known_issues.json`: A JSON file containing a list of known issues to be considered by the LLM.
- `monitoring_data.json`: (Generated) Stores historical monitoring data.
- `log_position.txt`: (Generated) Stores the last read position for `/var/log/syslog`.
- `auth_log_position.txt`: (Generated) Stores the last read position for `/var/log/auth.log`.
SPEC.md (65 changes) — Normal file → Executable file

@@ -14,6 +14,10 @@ The project will be composed of the following files:
- **`README.md`**: A documentation file providing an overview of the project, setup instructions, and usage examples.
- **`.gitignore`**: A file to specify which files and directories should be ignored by Git.
- **`PROGRESS.md`**: A file to track the development progress of the project.
- **`data_storage.py`**: Handles loading, storing, and calculating baselines from historical data.
- **`CONSTRAINTS.md`**: Defines constraints and guidelines for the LLM's analysis.
- **`known_issues.json`**: A JSON file containing a list of known issues to be considered by the LLM.
- **`AGENTS.md`**: Documents the human and autonomous agents involved in the project.

## 3. Functional Requirements

@@ -26,10 +30,12 @@ The project will be composed of the following files:
- `HOME_ASSISTANT_TOKEN`
- `GOOGLE_HOME_SPEAKER_ID`
- `DAILY_RECAP_TIME`
- `NMAP_TARGETS`
- `NMAP_SCAN_OPTIONS`

### 3.2. Data Ingestion and Parsing

- The agent must be able to collect and parse system logs.
- The agent must be able to collect and parse system logs (syslog and auth.log).
- The agent must be able to collect and parse network metrics.
- The parsing of this data should result in a structured format (JSON or Python dictionary).

@@ -38,24 +44,25 @@ The project will be composed of the following files:
- **CPU Temperature**: The agent will monitor the CPU temperature.
- **GPU Temperature**: The agent will monitor the GPU temperature.
- **System Login Attempts**: The agent will monitor system login attempts.
- **Network Scan Results (Nmap)**: The agent will periodically perform Nmap scans to discover hosts and open ports, logging detailed information including IP addresses, host status, and open ports with service details.

### 3.3. LLM Analysis
### 3.4. LLM Analysis

- The agent must use a local LLM (via Ollama) to analyze the collected data.
- The agent must construct a specific prompt to guide the LLM in identifying anomalies.
- The LLM's response will be either "OK" (no anomaly) or a natural language paragraph describing the anomaly, including a severity level (high, medium, low).
- The agent must construct a specific prompt to guide the LLM in identifying anomalies, incorporating historical baselines and known issues.
- The LLM's response will be a structured JSON object with `severity` (high, medium, low, none) and `reason` fields.
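For illustration, a minimal example of such a response, assuming only the two fields the spec names (the `reason` text is invented):

```json
{
    "severity": "high",
    "reason": "3 failed login attempts detected in /var/log/auth.log since the last check."
}
```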
### 3.4. Alerting
### 3.5. Alerting

- The agent must be able to send alerts to a Discord webhook.
- The agent must be able to trigger a text-to-speech (TTS) alert on a Google Home speaker via Home Assistant.

### 3.5. Alerting Logic
### 3.6. Alerting Logic

- Immediate alerts (Discord and Home Assistant) will only be sent for "high" severity anomalies.
- A daily recap of all anomalies (high, medium, and low) will be sent at a configurable time.

### 3.6. Main Loop
### 3.7. Main Loop

- The agent will run in a continuous loop.
- The loop will execute the data collection, analysis, and alerting steps periodically.

@@ -64,30 +71,66 @@ The project will be composed of the following files:
## 4. Data Storage and Baselining

- **4.1. Data Storage**: The agent will store historical monitoring data in a JSON file (`monitoring_data.json`).
- **4.2. Baselining**: The agent will calculate baseline averages for key metrics (e.g., RTT, packet loss) from the stored historical data. This baseline will be used by the LLM to improve anomaly detection accuracy.
- **4.2. Baselining**: The agent will calculate baseline averages for key metrics (e.g., RTT, packet loss, temperatures, open ports) from the stored historical data. This baseline will be used by the LLM to improve anomaly detection accuracy.

## 5. Technical Requirements

- **Language**: Python 3.8+
- **LLM**: `llama3.1:8b` running on a local Ollama instance.
- **Prerequisites**: `nmap`, `lm-sensors`
- **Libraries**:
  - `ollama`
  - `discord-webhook`
  - `requests`
  - `syslog-rfc5424-parser`
  - `apachelogs`
  - `jc`
  - `pingparsing`
  - `python-nmap`

## 6. Project Structure

```
/
├── .gitignore
├── AGENTS.md
├── config.py
├── CONSTRAINTS.md
├── data_storage.py
├── known_issues.json
├── log_position.txt
├── auth_log_position.txt
├── monitor_agent.py
├── PROMPT.md
├── README.md
├── requirements.txt
├── PROGRESS.md
└── SPEC.md
```

## 7. Testing and Debugging
The script is equipped with a test mode that runs the script only once, rather than continuously. To enable it, change the `TEST_MODE` variable in `config.py` to `True`. Once finished testing, change the variable back to `False`.

## 8. Future Enhancements

### 8.1. Process Monitoring

**Description:** The agent will be able to monitor a list of critical processes to ensure they are running. If a process is not running, an anomaly will be generated.

**Implementation Plan:**

1. **Configuration:** Add a new list variable to `config.py` named `PROCESSES_TO_MONITOR` which will contain the names of the processes to be monitored.
2. **Data Ingestion:** Create a new function in `monitor_agent.py` called `get_running_processes()` that uses the `psutil` library to get a list of all running processes (sketched below).
3. **Data Analysis:** In `analyze_data_locally()`, compare the list of running processes with the `PROCESSES_TO_MONITOR` list from the configuration. If a process from the configured list is not found in the running processes, generate a "high" severity anomaly.
4. **LLM Integration:** The existing `generate_llm_report()` function will be used to generate a report for the new anomaly type.
5. **Alerting:** The existing alerting system will be used to send alerts for the new anomaly type.
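A minimal sketch of steps 1–3, assuming the names from the plan above (this is a proposal, not shipped code):

```python
import psutil

PROCESSES_TO_MONITOR = ["sshd", "dockerd"]  # would live in config.py

def get_running_processes():
    """Return the names of all currently running processes."""
    return {p.info["name"] for p in psutil.process_iter(["name"])}

def check_monitored_processes(running, monitored=PROCESSES_TO_MONITOR):
    """Generate a high-severity anomaly for each monitored process not running."""
    return [
        {"severity": "high", "reason": f"Monitored process '{name}' is not running."}
        for name in monitored
        if name not in running
    ]
```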
### 8.2. Docker Container Monitoring

**Description:** The agent will be able to monitor a list of critical Docker containers to ensure they are running and healthy. If a container is not running or is in an unhealthy state, an anomaly will be generated.

**Implementation Plan:**

1. **Configuration:** Add a new list variable to `config.py` named `DOCKER_CONTAINERS_TO_MONITOR` which will contain the names of the Docker containers to be monitored.
2. **Data Ingestion:** Create a new function in `monitor_agent.py` called `get_docker_container_status()` that uses the `docker` Python library to get the status of all running containers.
3. **Data Analysis:** In `analyze_data_locally()`, iterate through the `DOCKER_CONTAINERS_TO_MONITOR` list and check each container's status. If a container's status is not "running", generate a "high" severity anomaly.
4. **LLM Integration:** The existing `generate_llm_report()` function will be used to generate a report for the new anomaly type.
5. **Alerting:** The existing alerting system will be used to send alerts for the new anomaly type.
auth_log_position.txt (1 change) — Executable file

@@ -0,0 +1 @@
449823
config.py (11 changes) — Normal file → Executable file

@@ -9,11 +9,14 @@ HOME_ASSISTANT_TOKEN = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJjOGRmZjI
GOOGLE_HOME_SPEAKER_ID = "media_player.spencer_room_speaker"

# Daily Recap Time (in 24-hour format, e.g., "20:00")
DAILY_RECAP_TIME = "20:00"
DAILY_RECAP_TIME = "22:00"

# Nmap Configuration
NMAP_TARGETS = "192.168.1.0/24"
NMAP_SCAN_OPTIONS = "-sS -T4"
NMAP_TARGETS = "192.168.2.0/24"
NMAP_SCAN_OPTIONS = "-sS -T4 -R"

# Docker Configuration
DOCKER_CONTAINERS_TO_MONITOR = ["gitea", "portainer", "gluetun", "mealie", "n8n", "minecraft"]

# Test Mode (True to run once and exit, False to run continuously)
TEST_MODE = False
TEST_MODE = False
data_storage.py (deleted)

@@ -1,56 +0,0 @@
import json
import os
from datetime import datetime, timedelta, timezone

DATA_FILE = 'monitoring_data.json'

def load_data():
    if os.path.exists(DATA_FILE):
        with open(DATA_FILE, 'r') as f:
            return json.load(f)
    return []

def store_data(new_data):
    data = load_data()
    data.append(new_data)
    with open(DATA_FILE, 'w') as f:
        json.dump(data, f, indent=4)

def calculate_baselines():
    data = load_data()
    if not data:
        return {}

    # For simplicity, we'll average the last 24 hours of data
    # More complex logic can be added here
    recent_data = [d for d in data if 'timestamp' in d and datetime.fromisoformat(d['timestamp'].replace('Z', '')).replace(tzinfo=timezone.utc) > datetime.now(timezone.utc) - timedelta(hours=24)]

    if not recent_data:
        return {}

    baseline_metrics = {
        'avg_rtt': sum(d['network_metrics']['rtt_avg'] for d in recent_data if 'rtt_avg' in d['network_metrics']) / len(recent_data),
        'packet_loss': sum(d['network_metrics']['packet_loss_rate'] for d in recent_data if 'packet_loss_rate' in d['network_metrics']) / len(recent_data),
        'avg_cpu_temp': sum(d['cpu_temperature']['cpu_temperature'] for d in recent_data if d['cpu_temperature']['cpu_temperature'] != "N/A") / len(recent_data),
        'avg_gpu_temp': sum(d['gpu_temperature']['gpu_temperature'] for d in recent_data if d['gpu_temperature']['gpu_temperature'] != "N/A") / len(recent_data),
    }

    # Baseline for open ports from nmap scans
    host_ports = {}
    for d in recent_data:
        if 'nmap_results' in d and 'scan' in d['nmap_results']:
            for host, scan_data in d['nmap_results']['scan'].items():
                if host not in host_ports:
                    host_ports[host] = set()
                if 'tcp' in scan_data:
                    for port, port_data in scan_data['tcp'].items():
                        if port_data['state'] == 'open':
                            host_ports[host].add(port)

    # Convert sets to sorted lists for JSON serialization
    for host, ports in host_ports.items():
        host_ports[host] = sorted(list(ports))

    baseline_metrics['host_ports'] = host_ports

    return baseline_metrics
database.py (262 changes) — Executable file

@@ -0,0 +1,262 @@
import sqlite3
import json
from datetime import datetime, timedelta, timezone
import logging

logger = logging.getLogger(__name__)

DATABASE_FILE = 'monitoring.db'

def initialize_database():
    """Initializes the database and creates tables if they don't exist."""
    try:
        conn = sqlite3.connect(DATABASE_FILE)
        cursor = conn.cursor()

        # Main table for monitoring data
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS monitoring_data (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                timestamp TEXT NOT NULL
            )
        """)

        # Table for network metrics
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS network_metrics (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                monitoring_data_id INTEGER,
                rtt_avg REAL,
                packet_loss_rate REAL,
                FOREIGN KEY (monitoring_data_id) REFERENCES monitoring_data (id)
            )
        """)

        # Table for temperatures
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS temperatures (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                monitoring_data_id INTEGER,
                cpu_temp REAL,
                gpu_temp REAL,
                FOREIGN KEY (monitoring_data_id) REFERENCES monitoring_data (id)
            )
        """)

        # Table for login attempts
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS login_attempts (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                monitoring_data_id INTEGER,
                log_line TEXT,
                FOREIGN KEY (monitoring_data_id) REFERENCES monitoring_data (id)
            )
        """)

        # Table for Nmap scans
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS nmap_scans (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                monitoring_data_id INTEGER,
                scan_data TEXT,
                FOREIGN KEY (monitoring_data_id) REFERENCES monitoring_data (id)
            )
        """)

        # Table for Docker status
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS docker_status (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                monitoring_data_id INTEGER,
                container_name TEXT,
                status TEXT,
                FOREIGN KEY (monitoring_data_id) REFERENCES monitoring_data (id)
            )
        """)

        # Table for syslog
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS syslog (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                monitoring_data_id INTEGER,
                log_data TEXT,
                FOREIGN KEY (monitoring_data_id) REFERENCES monitoring_data (id)
            )
        """)

        # Table for ufw logs
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS ufw_logs (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                monitoring_data_id INTEGER,
                log_line TEXT,
                FOREIGN KEY (monitoring_data_id) REFERENCES monitoring_data (id)
            )
        """)

        conn.commit()
        conn.close()
        logger.info("Database initialized successfully.")
    except sqlite3.Error as e:
        logger.error(f"Error initializing database: {e}")

def store_data(new_data):
    """Stores new monitoring data in the database."""
    try:
        conn = sqlite3.connect(DATABASE_FILE)
        cursor = conn.cursor()

        # Insert into main table
        cursor.execute("INSERT INTO monitoring_data (timestamp) VALUES (?)", (new_data['timestamp'],))
        monitoring_data_id = cursor.lastrowid

        # Insert into network_metrics
        if 'network_metrics' in new_data:
            nm = new_data['network_metrics']
            cursor.execute("INSERT INTO network_metrics (monitoring_data_id, rtt_avg, packet_loss_rate) VALUES (?, ?, ?)",
                           (monitoring_data_id, nm.get('rtt_avg'), nm.get('packet_loss_rate')))

        # Insert into temperatures
        if 'cpu_temperature' in new_data or 'gpu_temperature' in new_data:
            cpu_temp = new_data.get('cpu_temperature', {}).get('cpu_temperature')
            gpu_temp = new_data.get('gpu_temperature', {}).get('gpu_temperature')
            cursor.execute("INSERT INTO temperatures (monitoring_data_id, cpu_temp, gpu_temp) VALUES (?, ?, ?)",
                           (monitoring_data_id, cpu_temp, gpu_temp))

        # Insert into login_attempts
        if 'login_attempts' in new_data and new_data['login_attempts'].get('failed_login_attempts'):
            for line in new_data['login_attempts']['failed_login_attempts']:
                cursor.execute("INSERT INTO login_attempts (monitoring_data_id, log_line) VALUES (?, ?)",
                               (monitoring_data_id, line))

        # Insert into nmap_scans
        if 'nmap_results' in new_data:
            cursor.execute("INSERT INTO nmap_scans (monitoring_data_id, scan_data) VALUES (?, ?)",
                           (monitoring_data_id, json.dumps(new_data['nmap_results'])))

        # Insert into docker_status
        if 'docker_container_status' in new_data:
            for name, status in new_data['docker_container_status'].get('docker_container_status', {}).items():
                cursor.execute("INSERT INTO docker_status (monitoring_data_id, container_name, status) VALUES (?, ?, ?)",
                               (monitoring_data_id, name, status))

        # Insert into syslog
        if 'system_logs' in new_data:
            for log in new_data['system_logs'].get('syslog', []):
                cursor.execute("INSERT INTO syslog (monitoring_data_id, log_data) VALUES (?, ?)",
                               (monitoring_data_id, json.dumps(log)))

        # Insert into ufw_logs
        if 'ufw_logs' in new_data:
            for line in new_data['ufw_logs']:
                cursor.execute("INSERT INTO ufw_logs (monitoring_data_id, log_line) VALUES (?, ?)",
                               (monitoring_data_id, line))

        conn.commit()
        conn.close()
    except sqlite3.Error as e:
        logger.error(f"Error storing data: {e}")

def calculate_baselines():
    """Calculates baseline metrics from data in the last 24 hours."""
    try:
        conn = sqlite3.connect(DATABASE_FILE)
        cursor = conn.cursor()

        twenty_four_hours_ago = (datetime.now(timezone.utc) - timedelta(hours=24)).isoformat()

        # Calculate average RTT and packet loss
        cursor.execute("""
            SELECT AVG(nm.rtt_avg), AVG(nm.packet_loss_rate)
            FROM network_metrics nm
            JOIN monitoring_data md ON nm.monitoring_data_id = md.id
            WHERE md.timestamp > ?
        """, (twenty_four_hours_ago,))
        avg_rtt, avg_packet_loss = cursor.fetchone()

        # Calculate average temperatures
        cursor.execute("""
            SELECT AVG(t.cpu_temp), AVG(t.gpu_temp)
            FROM temperatures t
            JOIN monitoring_data md ON t.monitoring_data_id = md.id
            WHERE md.timestamp > ?
        """, (twenty_four_hours_ago,))
        avg_cpu_temp, avg_gpu_temp = cursor.fetchone()

        # Get baseline open ports
        cursor.execute("""
            SELECT ns.scan_data
            FROM nmap_scans ns
            JOIN monitoring_data md ON ns.monitoring_data_id = md.id
            WHERE md.timestamp > ?
            ORDER BY md.timestamp DESC
            LIMIT 1
        """, (twenty_four_hours_ago,))
        latest_nmap_scan = cursor.fetchone()

        host_ports = {}
        if latest_nmap_scan:
            scan_data = json.loads(latest_nmap_scan[0])
            if 'hosts' in scan_data:
                for host_info in scan_data['hosts']:
                    host_ip = host_info['ip']
                    if host_ip not in host_ports:
                        host_ports[host_ip] = set()
                    for port_info in host_info.get('open_ports', []):
                        host_ports[host_ip].add(port_info['port'])

        for host, ports in host_ports.items():
            host_ports[host] = sorted(list(ports))

        conn.close()

        return {
            'avg_rtt': avg_rtt or 0,
            'packet_loss': avg_packet_loss or 0,
            'avg_cpu_temp': avg_cpu_temp or 0,
            'avg_gpu_temp': avg_gpu_temp or 0,
            'host_ports': host_ports
        }

    except sqlite3.Error as e:
        logger.error(f"Error calculating baselines: {e}")
        return {}

def enforce_retention_policy(retention_days=7):
    """Enforces the data retention policy by deleting old data."""
    try:
        conn = sqlite3.connect(DATABASE_FILE)
        cursor = conn.cursor()

        retention_cutoff = (datetime.now(timezone.utc) - timedelta(days=retention_days)).isoformat()

        # Find old monitoring_data IDs
        cursor.execute("SELECT id FROM monitoring_data WHERE timestamp < ?", (retention_cutoff,))
        old_ids = [row[0] for row in cursor.fetchall()]

        if not old_ids:
            logger.info("No old data to delete.")
            conn.close()
            return

        # Create a placeholder string for the IN clause
        placeholders = ','.join('?' for _ in old_ids)

        # Delete from child tables
        cursor.execute(f"DELETE FROM network_metrics WHERE monitoring_data_id IN ({placeholders})", old_ids)
        cursor.execute(f"DELETE FROM temperatures WHERE monitoring_data_id IN ({placeholders})", old_ids)
        cursor.execute(f"DELETE FROM login_attempts WHERE monitoring_data_id IN ({placeholders})", old_ids)
        cursor.execute(f"DELETE FROM nmap_scans WHERE monitoring_data_id IN ({placeholders})", old_ids)
        cursor.execute(f"DELETE FROM docker_status WHERE monitoring_data_id IN ({placeholders})", old_ids)
        cursor.execute(f"DELETE FROM syslog WHERE monitoring_data_id IN ({placeholders})", old_ids)
        cursor.execute(f"DELETE FROM ufw_logs WHERE monitoring_data_id IN ({placeholders})", old_ids)

        # Delete from the main table
        cursor.execute(f"DELETE FROM monitoring_data WHERE id IN ({placeholders})", old_ids)

        conn.commit()
        conn.close()
        logger.info(f"Deleted {len(old_ids)} old records.")
    except sqlite3.Error as e:
        logger.error(f"Error enforcing retention policy: {e}")
known_issues.json (20 changes) — Normal file → Executable file

@@ -1,10 +1,26 @@
[
    {
        "issue": "CPU temperature spikes to 90C under heavy load",
        "resolution": "This is normal behavior for this CPU model and is not a cause for concern."
        "issue": "CPU temperatures less than the average",
        "resolution": "This is normal behavior for CPUs when not in use. Lower temps are usually a good thing."
    },
    {
        "issue": "Access attempts from unknown IP Addresses",
        "resolution": "ufw has been enabled, and blocks all connections by default. The only IP addresses allowed are 192.168.2.0/24 and 100.64.0.0/10"
    },
    {
        "issue": "Network timing values are lower than average",
        "resolution": "In networking, timing values lower than the average are often good things, and do not need to be considered an anomaly"
    },
    {
        "issue": "Port 62078 is open",
        "resolution": "This is normal behavior for Apple devices. Do not report."
    },
    {
        "issue": "RTT averages are higher than average",
        "resolution": "Fluctuation is normal, and there is no need to report anything within 5s of the average RTT."
    },
    {
        "issue": "Temperatures are higher than average",
        "resolution": "Fluctuation is normal, and there is no need to report anything within 5°C of the average temperature."
    }
]
log_position.txt (1 change) — Executable file

@@ -0,0 +1 @@
82868478
monitor_agent.log (1167 changes) — file diff suppressed because it is too large
556
monitor_agent.py
Normal file → Executable file
556
monitor_agent.py
Normal file → Executable file
@@ -6,22 +6,71 @@ import subprocess
|
||||
import ollama
|
||||
from discord_webhook import DiscordWebhook
|
||||
import requests
|
||||
import data_storage
|
||||
import database as data_storage
|
||||
import re
|
||||
import os
|
||||
from datetime import datetime, timezone
|
||||
import pingparsing
|
||||
import nmap
|
||||
import logging
|
||||
from logging.handlers import TimedRotatingFileHandler
|
||||
import docker
|
||||
|
||||
import schedule
|
||||
|
||||
# Load configuration
|
||||
import config
|
||||
|
||||
from syslog_rfc5424_parser import parser
|
||||
|
||||
# --- Logging Configuration ---
|
||||
LOG_FILE = "./tmp/monitoring_agent.log"
|
||||
logger = logging.getLogger(__name__)
|
||||
logger.setLevel(logging.INFO)
|
||||
|
||||
# Create a handler that rotates logs daily, keeping 1 backup
|
||||
file_handler = TimedRotatingFileHandler(LOG_FILE, when="midnight", interval=1, backupCount=1)
|
||||
file_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
|
||||
|
||||
# Create a handler for console output
|
||||
console_handler = logging.StreamHandler()
|
||||
console_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
|
||||
|
||||
logger.addHandler(file_handler)
|
||||
logger.addHandler(console_handler)
|
||||
|
||||
|
||||
LOG_POSITION_FILE = 'log_position.txt'
|
||||
AUTH_LOG_POSITION_FILE = 'auth_log_position.txt'
|
||||
UFW_LOG_POSITION_FILE = 'ufw_log_position.txt'
|
||||
|
||||
# --- Data Ingestion & Parsing Functions ---
|
||||
|
||||
def get_ufw_logs():
|
||||
"""Gets new lines from /var/log/ufw.log since the last check."""
|
||||
try:
|
||||
last_position = 0
|
||||
if os.path.exists(UFW_LOG_POSITION_FILE):
|
||||
with open(UFW_LOG_POSITION_FILE, 'r') as f:
|
||||
last_position = int(f.read())
|
||||
|
||||
with open("/var/log/ufw.log", "r") as f:
|
||||
f.seek(last_position)
|
||||
log_lines = f.readlines()
|
||||
current_position = f.tell()
|
||||
|
||||
with open(UFW_LOG_POSITION_FILE, 'w') as f:
|
||||
f.write(str(current_position))
|
||||
|
||||
return log_lines
|
||||
except FileNotFoundError:
|
||||
logger.error("/var/log/ufw.log not found.")
|
||||
return []
|
||||
except Exception as e:
|
||||
logger.error(f"Error reading ufw.log: {e}")
|
||||
return []
|
||||
|
||||
|
||||
def get_system_logs():
|
||||
"""Gets new lines from /var/log/syslog since the last check."""
|
||||
try:
|
||||
@@ -48,13 +97,12 @@ def get_system_logs():
|
||||
|
||||
return {"syslog": parsed_logs}
|
||||
except FileNotFoundError:
|
||||
print("Error: /var/log/syslog not found.")
|
||||
logger.error("/var/log/syslog not found.")
|
||||
return {"syslog": []}
|
||||
except Exception as e:
|
||||
print(f"Error reading syslog: {e}")
|
||||
logger.error(f"Error reading syslog: {e}")
|
||||
return {"syslog": []}
|
||||
|
||||
import pingparsing
|
||||
|
||||
def get_network_metrics():
|
||||
"""Gets network metrics by pinging 8.8.8.8."""
|
||||
@@ -66,48 +114,61 @@ def get_network_metrics():
|
||||
result = transmitter.ping()
|
||||
return ping_parser.parse(result).as_dict()
|
||||
except Exception as e:
|
||||
print(f"Error getting network metrics: {e}")
|
||||
logger.error(f"Error getting network metrics: {e}")
|
||||
return {"error": "ping command failed"}
|
||||
|
||||
def get_cpu_temperature():
|
||||
"""Gets the CPU temperature using the sensors command."""
|
||||
def get_sensor_data():
|
||||
"""Gets all sensor data at once."""
|
||||
try:
|
||||
sensors_output = subprocess.check_output(["sensors"], text=True)
|
||||
# Use regex to find the CPU temperature
|
||||
match = re.search(r"Package id 0:\s+\+([\d\.]+)", sensors_output)
|
||||
if match:
|
||||
return {"cpu_temperature": float(match.group(1))}
|
||||
else:
|
||||
return {"cpu_temperature": "N/A"}
|
||||
return subprocess.check_output(["sensors"], text=True)
|
||||
except (subprocess.CalledProcessError, FileNotFoundError):
|
||||
print("Error: 'sensors' command not found. Please install lm-sensors.")
|
||||
logger.error("'sensors' command not found. Please install lm-sensors.")
|
||||
return None
|
||||
|
||||
def get_cpu_temperature(sensors_output):
|
||||
"""Gets the CPU temperature from the sensors output."""
|
||||
if not sensors_output:
|
||||
return {"cpu_temperature": "N/A"}
|
||||
# Use regex to find the CPU temperature
|
||||
match = re.search(r"Package id 0:\s+\+([\d\.]+)", sensors_output)
|
||||
if match:
|
||||
return {"cpu_temperature": float(match.group(1))}
|
||||
else:
|
||||
return {"cpu_temperature": "N/A"}
|
||||
|
||||
def get_gpu_temperature():
|
||||
"""Gets the GPU temperature using the sensors command."""
|
||||
try:
|
||||
sensors_output = subprocess.check_output(["sensors"], text=True)
|
||||
# Use regex to find the GPU temperature for amdgpu
|
||||
match = re.search(r"edge:\s+\+([\d\.]+)", sensors_output)
|
||||
def get_gpu_temperature(sensors_output):
|
||||
"""Gets the GPU temperature from the sensors output."""
|
||||
if not sensors_output:
|
||||
return {"gpu_temperature": "N/A"}
|
||||
# Use regex to find the GPU temperature for amdgpu
|
||||
match = re.search(r"edge:\s+\+([\d\.]+)", sensors_output)
|
||||
if match:
|
||||
return {"gpu_temperature": float(match.group(1))}
|
||||
else:
|
||||
# if amdgpu not found, try radeon
|
||||
match = re.search(r"temp1:\s+\+([\d\.]+)", sensors_output)
|
||||
if match:
|
||||
return {"gpu_temperature": float(match.group(1))}
|
||||
else:
|
||||
# if amdgpu not found, try radeon
|
||||
match = re.search(r"temp1:\s+\+([\d\.]+)", sensors_output)
|
||||
if match:
|
||||
return {"gpu_temperature": float(match.group(1))}
|
||||
else:
|
||||
return {"gpu_temperature": "N/A"}
|
||||
except (subprocess.CalledProcessError, FileNotFoundError):
|
||||
print("Error: 'sensors' command not found. Please install lm-sensors.")
|
||||
return {"gpu_temperature": "N/A"}
|
||||
return {"gpu_temperature": "N/A"}
|
||||
|
||||
|
||||
def get_login_attempts():
|
||||
"""Gets system login attempts from /var/log/auth.log."""
|
||||
"""Gets system login attempts from /var/log/auth.log since the last check."""
|
||||
try:
|
||||
last_position = 0
|
||||
if os.path.exists(AUTH_LOG_POSITION_FILE):
|
||||
with open(AUTH_LOG_POSITION_FILE, 'r') as f:
|
||||
last_position = int(f.read())
|
||||
|
||||
with open("/var/log/auth.log", "r") as f:
|
||||
f.seek(last_position)
|
||||
log_lines = f.readlines()
|
||||
|
||||
current_position = f.tell()
|
||||
|
||||
with open(AUTH_LOG_POSITION_FILE, 'w') as f:
|
||||
f.write(str(current_position))
|
||||
|
||||
failed_logins = []
|
||||
for line in log_lines:
|
||||
if "Failed password" in line:
|
||||
@@ -115,129 +176,241 @@ def get_login_attempts():
|
||||
|
||||
return {"failed_login_attempts": failed_logins}
|
||||
except FileNotFoundError:
|
||||
print("Error: /var/log/auth.log not found.")
|
||||
logger.error("/var/log/auth.log not found.")
|
||||
return {"failed_login_attempts": []}
|
||||
except Exception as e:
|
||||
print(f"Error reading login attempts: {e}")
|
||||
logger.error(f"Error reading login attempts: {e}")
|
||||
return {"failed_logins": []}
|
||||
|
||||
def get_nmap_scan_results():
|
||||
"""Performs an Nmap scan and returns the results."""
|
||||
"""Performs an Nmap scan and returns a structured summary."""
|
||||
try:
|
||||
nm = nmap.PortScanner()
|
||||
scan_options = config.NMAP_SCAN_OPTIONS
|
||||
if os.geteuid() != 0 and "-sS" in scan_options:
|
||||
print("Warning: Nmap -sS scan requires root privileges. Falling back to -sT.")
|
||||
logger.warning("Nmap -sS scan requires root privileges. Falling back to -sT.")
|
||||
scan_options = scan_options.replace("-sS", "-sT")
|
||||
|
||||
scan_results = nm.scan(hosts=config.NMAP_TARGETS, arguments=scan_options)
|
||||
return scan_results
|
||||
|
||||
# Process the results into a more structured format
|
||||
processed_results = {"hosts": []}
|
||||
if "scan" in scan_results:
|
||||
for host, scan_data in scan_results["scan"].items():
|
||||
host_info = {
|
||||
"ip": host,
|
||||
"status": scan_data.get("status", {}).get("state", "unknown"),
|
||||
"hostname": scan_data.get("hostnames", [{}])[0].get("name", ""),
|
||||
"open_ports": []
|
||||
}
|
||||
if "tcp" in scan_data:
|
||||
for port, port_data in scan_data["tcp"].items():
|
||||
if port_data.get("state") == "open":
|
||||
host_info["open_ports"].append({
|
||||
"port": port,
|
||||
"service": port_data.get("name", ""),
|
||||
"product": port_data.get("product", ""),
|
||||
"version": port_data.get("version", "")
|
||||
})
|
||||
processed_results["hosts"].append(host_info)
|
||||
|
||||
return processed_results
|
||||
except Exception as e:
|
||||
print(f"Error performing Nmap scan: {e}")
|
||||
logger.error(f"Error performing Nmap scan: {e}")
|
||||
return {"error": "Nmap scan failed"}
|
||||
|
||||
# --- LLM Interaction Function ---
|
||||
def get_docker_container_status():
|
||||
"""Gets the status of configured Docker containers."""
|
||||
if not config.DOCKER_CONTAINERS_TO_MONITOR:
|
||||
return {"docker_container_status": {}}
|
||||
|
||||
def analyze_data_with_llm(data, baselines):
|
||||
"""Analyzes data with the local LLM."""
|
||||
with open("CONSTRAINTS.md", "r") as f:
|
||||
constraints = f.read()
|
||||
try:
|
||||
client = docker.from_env()
|
||||
containers = client.containers.list(all=True)
|
||||
status = {}
|
||||
for container in containers:
|
||||
if container.name in config.DOCKER_CONTAINERS_TO_MONITOR:
|
||||
status[container.name] = container.status
|
||||
return {"docker_container_status": status}
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting Docker container status: {e}")
|
||||
return {"docker_container_status": {}}
|
||||
|
||||
with open("known_issues.json", "r") as f:
|
||||
known_issues = json.load(f)
|
||||
# --- Data Analysis ---
|
||||
|
||||
# Compare current nmap results with baseline
|
||||
nmap_changes = {"new_hosts": [], "changed_ports": {}}
|
||||
def analyze_data_locally(data, baselines, known_issues, port_applications):
|
||||
"""Analyzes the collected data to find anomalies without using an LLM."""
|
||||
anomalies = []
|
||||
|
    # Temperature checks
    cpu_temp = data.get("cpu_temperature", {}).get("cpu_temperature")
    gpu_temp = data.get("gpu_temperature", {}).get("gpu_temperature")
    baseline_cpu_temp = baselines.get("average_cpu_temperature")
    baseline_gpu_temp = baselines.get("average_gpu_temperature")

    if isinstance(cpu_temp, (int, float)) and isinstance(baseline_cpu_temp, (int, float)):
        if abs(cpu_temp - baseline_cpu_temp) > 5:
            anomalies.append({
                "severity": "medium",
                "reason": f"CPU temperature deviation detected. Current: {cpu_temp}°C, Baseline: {baseline_cpu_temp}°C"
            })

    if isinstance(gpu_temp, (int, float)) and isinstance(baseline_gpu_temp, (int, float)):
        if abs(gpu_temp - baseline_gpu_temp) > 5:
            anomalies.append({
                "severity": "medium",
                "reason": f"GPU temperature deviation detected. Current: {gpu_temp}°C, Baseline: {baseline_gpu_temp}°C"
            })

    # Network RTT check
    current_rtt = data.get("network_metrics", {}).get("rtt_avg")
    baseline_rtt = baselines.get("average_rtt_avg")

    if isinstance(current_rtt, (int, float)) and isinstance(baseline_rtt, (int, float)):
        if abs(current_rtt - baseline_rtt) > 10000:
            anomalies.append({
                "severity": "high",
                "reason": f"High network RTT fluctuation detected. Current: {current_rtt}ms, Baseline: {baseline_rtt}ms"
            })

    # Failed login attempts check
    failed_logins = data.get("login_attempts", {}).get("failed_login_attempts")
    if failed_logins:
        anomalies.append({
            "severity": "high",
            "reason": f"{len(failed_logins)} failed login attempts detected."
        })

    # Nmap scan changes check
    if "nmap_results" in data and "host_ports" in baselines:
        current_hosts_info = {host['ip']: host for host in data["nmap_results"].get("hosts", [])}
        current_hosts = set(current_hosts_info.keys())
        baseline_hosts = set(baselines["host_ports"].keys())

        # New hosts
        new_hosts = sorted(list(current_hosts - baseline_hosts))
        for host in new_hosts:
            anomalies.append({
                "severity": "high",
                "reason": f"New host detected on the network: {host}"
            })

        # Changed ports on existing hosts
        for host_ip in current_hosts.intersection(baseline_hosts):
            current_ports = set(p['port'] for p in current_hosts_info[host_ip].get("open_ports", []))
            baseline_ports = set(baselines["host_ports"].get(host_ip, []))

            newly_opened = sorted(list(current_ports - baseline_ports))

            for port in newly_opened:
                port_info = port_applications.get(str(port), "Unknown")
                anomalies.append({
                    "severity": "medium",
                    "reason": f"New port opened on {host_ip}: {port} ({port_info})"
                })

    # Docker container status check
    docker_status = data.get("docker_container_status", {}).get("docker_container_status")
    if docker_status:
        for container_name, status in docker_status.items():
            if status != "running":
                anomalies.append({
                    "severity": "high",
                    "reason": f"Docker container '{container_name}' is not running. Current status: {status}"
                })

    # UFW log analysis
    ufw_logs = data.get("ufw_logs", [])
    if ufw_logs:
        blocked_ips = {}
        for log_line in ufw_logs:
            if "[UFW BLOCK]" in log_line:
                match = re.search(r"SRC=([\d\.]+)", log_line)
                if match:
                    ip = match.group(1)
                    blocked_ips[ip] = blocked_ips.get(ip, 0) + 1

        for ip, count in blocked_ips.items():
            if count > 10:
                anomalies.append({
                    "severity": "medium",
                    "reason": f"High number of blocked connections ({count}) from IP address: {ip}"
                })

    return anomalies
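
# Editor's sketch: exercising analyze_data_locally with minimal invented inputs.
# With a 45 °C CPU baseline, a 52 °C reading trips the 5-degree rule (|52 - 45| > 5),
# and the exited container trips the Docker check, so two anomalies come back.
sample_data = {
    "cpu_temperature": {"cpu_temperature": 52.0},
    "gpu_temperature": {"gpu_temperature": 44.0},
    "network_metrics": {"rtt_avg": 12.0},
    "login_attempts": {"failed_login_attempts": []},
    "docker_container_status": {"docker_container_status": {"minecraft": "exited"}},
    "ufw_logs": [],
}
sample_baselines = {
    "average_cpu_temperature": 45.0,
    "average_gpu_temperature": 43.0,
    "average_rtt_avg": 11.0,
}
print(analyze_data_locally(sample_data, sample_baselines, {}, {}))
# -> one "medium" CPU-deviation anomaly and one "high" Docker anomaly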

# --- LLM Interaction Function ---

def build_llm_prompt(anomalies):
    """Builds the prompt for the LLM to generate a report from anomalies."""
    with open("CONSTRAINTS.md", "r") as f:
        constraints = f.read()

    return f"""
**Role:** You are a dedicated and expert system administrator. Your primary role is to provide a concise, actionable report based on a list of pre-identified anomalies.

**Instruction:** Please synthesize the following list of anomalies into a single, human-readable report. The report should be a single JSON object with two keys: "severity" and "reason". The "severity" should be the highest severity from the list of anomalies. The "reason" should be a summary of all the anomalies.

**Anomalies:**
{json.dumps(anomalies, indent=2)}

**Constraints and Guidelines:**
{constraints}

**Reasoning Hint:** Think step by step to come to your conclusion. This is very important.

**Output Request:** Provide a report as a single JSON object with two keys: "severity" and "reason". The "severity" must be one of "high", "medium", "low", or "none". The "reason" must be a natural language explanation of the anomaly. If no anomaly is found, return a single JSON object with "severity" set to "none" and "reason" as an empty string. Do not wrap the JSON in markdown or any other formatting. Only return the JSON, and nothing else.
"""

def generate_llm_report(anomalies):
    """Generates a report from a list of anomalies using the local LLM."""
    logger.info("Generating LLM report...")
    if not anomalies:
        return {"severity": "none", "reason": ""}

    prompt = build_llm_prompt(anomalies)

    try:
        response = ollama.generate(model="phi4-mini", prompt=prompt)
        # Sanitize the response to ensure it's valid JSON
        sanitized_response = response['response'].strip()

        # Extract JSON from the response
        try:
            # Find the first '{' and the last '}' to extract the JSON object
            start_index = sanitized_response.find('{')
            end_index = sanitized_response.rfind('}')
            if start_index != -1 and end_index != -1:
                json_string = sanitized_response[start_index:end_index+1]
                llm_response = json.loads(json_string)
                logger.info(f"LLM Response: {llm_response}")
                return llm_response
            else:
                # Handle cases where the response is not valid JSON
                logger.warning(f"LLM returned a non-JSON response: {sanitized_response}")
                return {"severity": "low", "reason": sanitized_response}
        except json.JSONDecodeError as e:
            logger.error(f"Error decoding LLM response: {e}")
            # Fallback for invalid JSON
            return {"severity": "low", "reason": sanitized_response}

    except Exception as e:
        logger.error(f"Error interacting with LLM: {e}")
        return None
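
# Editor's sketch: the brace scan above also rescues replies where the model
# wraps the JSON in markdown fences despite the prompt. Invented reply:
raw_reply = '```json\n{"severity": "none", "reason": ""}\n```'
start, end = raw_reply.find('{'), raw_reply.rfind('}')
print(json.loads(raw_reply[start:end + 1]))  # {'severity': 'none', 'reason': ''}
# Caveat: with multiple JSON objects in one reply, first-'{' to last-'}' spans
# them all and json.loads raises, which is what the JSONDecodeError branch catches.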


# --- Alerting Functions ---

def send_discord_alert(llm_response, combined_data):
    """Sends an alert to Discord."""
    reason = llm_response.get('reason', 'No reason provided.')
    message = f"""**High Severity Alert:**
> {reason}

**Relevant Data:**
```json
{json.dumps(combined_data, indent=2)}
```"""
    webhook = DiscordWebhook(url=config.DISCORD_WEBHOOK_URL, content=message)
    try:
        response = webhook.execute()
        if response.status_code == 200:
            print("Discord alert sent successfully.")
            logger.info("Discord alert sent successfully.")
        else:
            print(f"Error sending Discord alert: {response.status_code} - {response.content}")
            logger.error(f"Error sending Discord alert: {response.status_code} - {response.content}")
    except Exception as e:
        print(f"Error sending Discord alert: {e}")
        logger.error(f"Error sending Discord alert: {e}")
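
# Editor's note: Discord rejects message content over 2,000 characters, and the
# full combined_data JSON above can easily exceed that. A minimal guard (sketch):
def truncate_for_discord(message, limit=2000):
    """Truncates a Discord message to the platform's content limit."""
    return message if len(message) <= limit else message[:limit - 4] + "\n..."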

def send_google_home_alert(message):
    """Sends an alert to a Google Home speaker via Home Assistant."""
@@ -246,8 +419,8 @@ def send_google_home_alert(message):
        response = ollama.generate(model="llama3.1:8b", prompt=f"Summarize the following message in a single sentence: {message}")
        simplified_message = response['response'].strip()
    except Exception as e:
        print(f"Error summarizing message: {e}")
        logger.error(f"Error summarizing message: {e}")
        simplified_message = message.split('.')[0]  # Take the first sentence as a fallback

    url = f"{config.HOME_ASSISTANT_URL}/api/services/tts/speak"
    headers = {
@@ -257,99 +430,120 @@ def send_google_home_alert(message):
    data = {
        "entity_id": "all",
        "media_player_entity_id": config.GOOGLE_HOME_SPEAKER_ID,
        "message": simplified_message,
    }
    try:
        response = requests.post(url, headers=headers, json=data)
        if response.status_code == 200:
            print("Google Home alert sent successfully.")
            logger.info("Google Home alert sent successfully.")
        else:
            print(f"Error sending Google Home alert: {response.status_code} - {response.text}")
            logger.error(f"Error sending Google Home alert: {response.status_code} - {response.text}")
    except Exception as e:
        print(f"Error sending Google Home alert: {e}")
        logger.error(f"Error sending Google Home alert: {e}")


# --- Main Script Logic ---

def is_alerting_time():
    """Checks if the current time is within the alerting window (9am - 12am)."""
    current_hour = datetime.now().hour
    return 9 <= current_hour < 24


daily_events = []

def send_daily_recap():
    """Sends a daily recap of events to Discord."""
    global daily_events
    if daily_events:
        recap_message = "**Daily Recap:**\n" + "\n".join(daily_events)

        # Split the message into chunks of 2000 characters
        message_chunks = [recap_message[i:i+2000] for i in range(0, len(recap_message), 2000)]

        for chunk in message_chunks:
            webhook = DiscordWebhook(url=config.DISCORD_WEBHOOK_URL, content=chunk)
            try:
                response = webhook.execute()
                if response.status_code == 200:
                    logger.info("Daily recap chunk sent successfully.")
                else:
                    logger.error(f"Error sending daily recap chunk: {response.status_code} - {response.content}")
            except Exception as e:
                logger.error(f"Error sending daily recap chunk: {e}")
            time.sleep(1)  # Wait 1 second between chunks to avoid rate limiting

        daily_events = []  # Reset for the next day
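
# Editor's sketch: fixed 2000-character slices can split an event mid-line; an
# alternative is to pack whole lines per chunk (assumes no single event exceeds
# the limit on its own).
def chunk_on_line_boundaries(lines, limit=2000):
    """Groups lines into chunks that each stay under the Discord content limit."""
    chunks, current = [], ""
    for line in lines:
        candidate = f"{current}\n{line}" if current else line
        if len(candidate) > limit and current:
            chunks.append(current)
            current = line
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks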


def run_monitoring_cycle(nmap_scan_counter):
    """Runs a single monitoring cycle."""
    logger.info("Running monitoring cycle...")
    system_logs = get_system_logs()
    network_metrics = get_network_metrics()
    sensors_output = get_sensor_data()
    cpu_temp = get_cpu_temperature(sensors_output)
    gpu_temp = get_gpu_temperature(sensors_output)
    login_attempts = get_login_attempts()
    docker_container_status = get_docker_container_status()
    ufw_logs = get_ufw_logs()

    nmap_results = None
    if nmap_scan_counter == 0:
        nmap_results = get_nmap_scan_results()

    nmap_scan_counter = (nmap_scan_counter + 1) % 4  # Run nmap scan every 4th cycle (20 minutes)

    if system_logs and network_metrics:
        combined_data = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_logs": system_logs,
            "network_metrics": network_metrics,
            "cpu_temperature": cpu_temp,
            "gpu_temperature": gpu_temp,
            "login_attempts": login_attempts,
            "docker_container_status": docker_container_status,
            "ufw_logs": ufw_logs
        }

        if nmap_results:
            combined_data["nmap_results"] = nmap_results

        data_storage.store_data(combined_data)
        data_storage.enforce_retention_policy()

        with open("known_issues.json", "r") as f:
            known_issues = json.load(f)

        with open("port_applications.json", "r") as f:
            port_applications = json.load(f)

        baselines = data_storage.calculate_baselines()
        anomalies = analyze_data_locally(combined_data, baselines, known_issues, port_applications)

        if anomalies:
            logger.info(f"Detected {len(anomalies)} anomalies: {anomalies}")
            llm_response = generate_llm_report(anomalies)
            if llm_response and llm_response.get('severity') != "none":
                print(f"Anomaly detected: {llm_response.get('reason')}")
                daily_events.append(llm_response.get('reason'))
                if llm_response.get('severity') == "high" and is_alerting_time():
                    send_discord_alert(llm_response, combined_data)
                    send_google_home_alert(llm_response.get('reason'))
            else:
                print("No anomaly detected.")

    return nmap_scan_counter


def main():
    """Main function to run the monitoring agent."""
    data_storage.initialize_database()
    if config.TEST_MODE:
        logger.info("Running in test mode...")
        run_monitoring_cycle(0)
    else:
        schedule.every().day.at(config.DAILY_RECAP_TIME).do(send_daily_recap)
        nmap_scan_counter = 0
        while True:
            nmap_scan_counter = run_monitoring_cycle(nmap_scan_counter)
            schedule.run_pending()
            time.sleep(300)  # Run every 5 minutes


if __name__ == "__main__":
    main()
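
# Editor's sketch: schedule.run_pending() only fires jobs at the moment it is
# called, so with a fixed 300-second sleep the daily recap can land up to five
# minutes after DAILY_RECAP_TIME. Sleeping until the next pending job tightens
# that without busy-waiting.
def sleep_until_next_job(default=300):
    """Sleeps until schedule's next job is due, capped at the normal cycle length."""
    idle = schedule.idle_seconds()  # None when no jobs are scheduled
    time.sleep(min(default, idle) if idle and idle > 0 else default)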
19
port_applications.json
Executable file
@@ -0,0 +1,19 @@
{
    "20": "FTP",
    "21": "FTP",
    "22": "SSH",
    "23": "Telnet",
    "25": "SMTP",
    "53": "DNS",
    "80": "HTTP",
    "110": "POP3",
    "143": "IMAP",
    "443": "HTTPS",
    "445": "SMB",
    "587": "SMTP",
    "993": "IMAPS",
    "995": "POP3S",
    "3306": "MySQL",
    "3389": "RDP"
}
8
requirements.txt
Normal file → Executable file
@@ -1,6 +1,8 @@
discord-webhook
ollama
pingparsing
python-nmap
requests
schedule
syslog-rfc5424-parser
docker
22
test_output.log
Executable file
@@ -0,0 +1,22 @@
Traceback (most recent call last):
  File "/home/artanis/Documents/LLM-Powered-Monitoring-Agent/monitor_agent.py", line 31, in <module>
    file_handler = TimedRotatingFileHandler(LOG_FILE, when="midnight", interval=1, backupCount=1)
  File "/home/artanis/.pyenv/versions/3.13.1/lib/python3.13/logging/handlers.py", line 223, in __init__
    BaseRotatingHandler.__init__(self, filename, 'a', encoding=encoding,
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                delay=delay, errors=errors)
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/artanis/.pyenv/versions/3.13.1/lib/python3.13/logging/handlers.py", line 64, in __init__
    logging.FileHandler.__init__(self, filename, mode=mode,
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                encoding=encoding, delay=delay,
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                errors=errors)
                                ^^^^^^^^^^^^^^
  File "/home/artanis/.pyenv/versions/3.13.1/lib/python3.13/logging/__init__.py", line 1218, in __init__
    StreamHandler.__init__(self, self._open())
                                 ~~~~~~~~~~^^
  File "/home/artanis/.pyenv/versions/3.13.1/lib/python3.13/logging/__init__.py", line 1247, in _open
    return open_func(self.baseFilename, self.mode,
                     encoding=self.encoding, errors=self.errors)
PermissionError: [Errno 13] Permission denied: '/home/artanis/Documents/LLM-Powered-Monitoring-Agent/monitoring_agent.log'
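
One defensive fix for the crash above (a sketch; the fallback path is an editor's assumption, not what the repo does; the repo instead moved the log under tmp/, as the next file shows) is to fall back to a writable location instead of dying at import time:

    import os
    import tempfile
    from logging.handlers import TimedRotatingFileHandler

    LOG_FILE = "monitoring_agent.log"
    try:
        file_handler = TimedRotatingFileHandler(LOG_FILE, when="midnight", interval=1, backupCount=1)
    except PermissionError:
        # Fall back to the system temp directory rather than crashing.
        fallback = os.path.join(tempfile.gettempdir(), "monitoring_agent.log")
        file_handler = TimedRotatingFileHandler(fallback, when="midnight", interval=1, backupCount=1)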
479
tmp/monitoring_agent.log
Executable file
@@ -0,0 +1,479 @@
2025-09-15 00:01:21,407 - INFO - Running monitoring cycle...
2025-09-15 00:31:11,922 - INFO - Running monitoring cycle...
2025-09-15 00:36:14,048 - INFO - Running monitoring cycle...
2025-09-15 00:41:16,122 - INFO - Running monitoring cycle...
2025-09-15 00:46:18,223 - INFO - Running monitoring cycle...
2025-09-15 00:53:17,684 - INFO - Running monitoring cycle...
2025-09-15 00:58:19,786 - INFO - Running monitoring cycle...
2025-09-15 01:03:21,873 - INFO - Running monitoring cycle...
2025-09-15 01:08:23,956 - INFO - Running monitoring cycle...
2025-09-15 01:15:53,304 - INFO - Running monitoring cycle...
2025-09-15 01:20:55,400 - INFO - Running monitoring cycle...
2025-09-15 01:25:57,573 - INFO - Running monitoring cycle...
2025-09-15 01:30:59,656 - INFO - Running monitoring cycle...
2025-09-15 01:49:24,983 - INFO - Running monitoring cycle...
2025-09-15 01:54:27,106 - INFO - Running monitoring cycle...
2025-09-15 01:59:29,198 - INFO - Running monitoring cycle...
2025-09-15 02:04:31,335 - INFO - Running monitoring cycle...
2025-09-15 02:05:49,829 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:05:49,829 - INFO - Generating LLM report...
2025-09-15 02:05:54,309 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues with a high severity level because it has exited unexpectedly."}
2025-09-15 02:10:54,309 - INFO - Running monitoring cycle...
2025-09-15 02:10:56,390 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:10:56,390 - INFO - Generating LLM report...
2025-09-15 02:11:00,906 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped (exited). This may lead to Minecraft service disruptions."}
2025-09-15 02:16:00,906 - INFO - Running monitoring cycle...
2025-09-15 02:16:02,986 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:16:02,986 - INFO - Generating LLM report...
2025-09-15 02:16:07,417 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited unexpectedly without starting."}
2025-09-15 02:21:07,417 - INFO - Running monitoring cycle...
2025-09-15 02:21:09,515 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:21:09,515 - INFO - Generating LLM report...
2025-09-15 02:21:13,947 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' has exited unexpectedly; it is currently stopped."}
2025-09-15 02:26:13,948 - INFO - Running monitoring cycle...
2025-09-15 02:28:09,890 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:28:09,890 - INFO - Generating LLM report...
2025-09-15 02:28:14,339 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped; it exited unexpectedly."}
2025-09-15 02:33:14,339 - INFO - Running monitoring cycle...
2025-09-15 02:33:16,482 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:33:16,482 - INFO - Generating LLM report...
2025-09-15 02:33:20,965 - INFO - LLM Response: {'severity': 'high', 'reason': "The Docker container named 'minecraft' is currently stopped; its status shows it has exited."}
2025-09-15 02:38:20,965 - INFO - Running monitoring cycle...
2025-09-15 02:38:23,059 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:38:23,059 - INFO - Generating LLM report...
2025-09-15 02:38:27,574 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing a critical failure; it has exited unexpectedly without proper shutdown."}
2025-09-15 02:43:27,574 - INFO - Running monitoring cycle...
2025-09-15 02:43:29,681 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:43:29,681 - INFO - Generating LLM report...
2025-09-15 02:43:34,112 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently exited; it should be running."}
2025-09-15 02:48:34,112 - INFO - Running monitoring cycle...
2025-09-15 02:50:08,317 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:50:08,317 - INFO - Generating LLM report...
2025-09-15 02:50:12,959 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing a high-severity issue due to it being currently stopped; its status indicates that it's exited."}
2025-09-15 02:55:12,959 - INFO - Running monitoring cycle...
2025-09-15 02:55:15,068 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 02:55:15,068 - INFO - Generating LLM report...
2025-09-15 02:55:19,562 - INFO - LLM Response: {'severity': 'high', 'reason': "The Docker container named 'minecraft' has exited; it is currently stopped."}
2025-09-15 03:00:19,563 - INFO - Running monitoring cycle...
2025-09-15 03:00:21,651 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 03:00:21,651 - INFO - Generating LLM report...
2025-09-15 03:00:26,074 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently exited; it needs restarting."}
2025-09-15 03:05:26,074 - INFO - Running monitoring cycle...
2025-09-15 03:05:28,216 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 03:05:28,216 - INFO - Generating LLM report...
2025-09-15 03:05:32,610 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently exited but expected to be running."}
2025-09-15 03:10:32,610 - INFO - Running monitoring cycle...
2025-09-15 03:13:12,236 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 03:13:12,236 - INFO - Generating LLM report...
2025-09-15 03:13:16,630 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited prematurely."}
2025-09-15 03:18:16,630 - INFO - Running monitoring cycle...
2025-09-15 03:18:18,787 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 03:18:18,787 - INFO - Generating LLM report...
2025-09-15 03:18:23,312 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing a critical issue; it has exited unexpectedly without starting."}
2025-09-15 03:23:23,312 - INFO - Running monitoring cycle...
2025-09-15 03:23:25,413 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 03:23:25,413 - INFO - Generating LLM report...
2025-09-15 03:23:29,917 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues with its operational status; it has exited unexpectedly."}
2025-09-15 03:28:29,917 - INFO - Running monitoring cycle...
2025-09-15 03:28:32,051 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 03:28:32,052 - INFO - Generating LLM report...
2025-09-15 03:28:36,665 - INFO - LLM Response: {'severity': 'high', 'reason': "The Docker container named 'minecraft' is currently stopped with status 'exited', which could indicate a failure to start correctly."}
2025-09-15 03:33:36,665 - INFO - Running monitoring cycle...
2025-09-15 03:54:15,994 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 03:54:15,994 - INFO - Generating LLM report...
2025-09-15 03:54:20,384 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is down; it has exited."}
2025-09-15 03:59:20,384 - INFO - Running monitoring cycle...
2025-09-15 03:59:22,474 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 03:59:22,474 - INFO - Generating LLM report...
2025-09-15 03:59:26,867 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped with status exited."}
2025-09-15 04:04:26,867 - INFO - Running monitoring cycle...
2025-09-15 04:04:28,958 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:04:28,958 - INFO - Generating LLM report...
2025-09-15 04:04:33,343 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped (exited)."}
2025-09-15 04:09:33,344 - INFO - Running monitoring cycle...
2025-09-15 04:09:35,442 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:09:35,442 - INFO - Generating LLM report...
2025-09-15 04:09:39,882 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently exited; it needs restarting."}
2025-09-15 04:14:39,882 - INFO - Running monitoring cycle...
2025-09-15 04:17:37,763 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:17:37,763 - INFO - Generating LLM report...
2025-09-15 04:17:42,223 - INFO - LLM Response: {'severity': 'high', 'reason': "The Docker container 'minecraft' is currently stopped with a status of exited."}
2025-09-15 04:22:42,224 - INFO - Running monitoring cycle...
2025-09-15 04:22:44,301 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:22:44,301 - INFO - Generating LLM report...
2025-09-15 04:22:48,808 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing a high severity issue because it has exited unexpectedly."}
2025-09-15 04:27:48,808 - INFO - Running monitoring cycle...
2025-09-15 04:27:50,896 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:27:50,896 - INFO - Generating LLM report...
2025-09-15 04:27:55,278 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently exited but should be running."}
2025-09-15 04:32:55,279 - INFO - Running monitoring cycle...
2025-09-15 04:32:57,383 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:32:57,383 - INFO - Generating LLM report...
2025-09-15 04:33:01,780 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited unexpectedly."}
2025-09-15 04:38:01,781 - INFO - Running monitoring cycle...
2025-09-15 04:44:04,873 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:44:04,873 - INFO - Generating LLM report...
2025-09-15 04:44:09,313 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues since it has exited unexpectedly."}
2025-09-15 04:49:09,313 - INFO - Running monitoring cycle...
2025-09-15 04:49:11,409 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:49:11,410 - INFO - Generating LLM report...
2025-09-15 04:49:15,896 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited without completing its intended function."}
2025-09-15 04:54:15,896 - INFO - Running monitoring cycle...
2025-09-15 04:54:17,996 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:54:17,996 - INFO - Generating LLM report...
2025-09-15 04:54:22,383 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped because it exited unexpectedly."}
2025-09-15 04:59:22,383 - INFO - Running monitoring cycle...
2025-09-15 04:59:24,512 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 04:59:24,512 - INFO - Generating LLM report...
2025-09-15 04:59:28,919 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped; it exited unexpectedly."}
2025-09-15 05:04:28,919 - INFO - Running monitoring cycle...
2025-09-15 05:06:54,084 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 05:06:54,085 - INFO - Generating LLM report...
2025-09-15 05:06:58,635 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is stopped with status exited; current state indicates it did not start properly."}
2025-09-15 05:11:58,635 - INFO - Running monitoring cycle...
2025-09-15 05:12:00,747 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 05:12:00,747 - INFO - Generating LLM report...
2025-09-15 05:12:05,264 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped (exited). It needs to be restarted."}
2025-09-15 05:17:05,265 - INFO - Running monitoring cycle...
2025-09-15 05:17:07,399 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 05:17:07,399 - INFO - Generating LLM report...
2025-09-15 05:17:11,941 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is stopped with status exited; this can cause application downtime if it was running."}
2025-09-15 05:22:11,941 - INFO - Running monitoring cycle...
2025-09-15 05:22:14,045 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 05:22:14,045 - INFO - Generating LLM report...
2025-09-15 05:22:18,427 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is down because it has exited unexpectedly."}
2025-09-15 05:27:18,428 - INFO - Running monitoring cycle...
2025-09-15 05:33:49,638 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 05:33:49,638 - INFO - Generating LLM report...
2025-09-15 05:33:54,110 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited unexpectedly."}
2025-09-15 05:38:54,111 - INFO - Running monitoring cycle...
2025-09-15 05:38:56,191 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 05:38:56,191 - INFO - Generating LLM report...
2025-09-15 05:39:00,598 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited without running."}
2025-09-15 05:44:00,598 - INFO - Running monitoring cycle...
2025-09-15 05:44:02,752 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 05:44:02,752 - INFO - Generating LLM report...
2025-09-15 05:44:07,209 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is not running due to its current status being exited."}
2025-09-15 05:49:07,210 - INFO - Running monitoring cycle...
2025-09-15 05:49:09,336 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 05:49:09,336 - INFO - Generating LLM report...
2025-09-15 05:49:13,748 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped with status exited."}
2025-09-15 05:54:13,749 - INFO - Running monitoring cycle...
2025-09-15 06:01:11,734 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:01:11,735 - INFO - Generating LLM report...
2025-09-15 06:01:16,281 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited without completing its intended task."}
2025-09-15 06:06:16,281 - INFO - Running monitoring cycle...
2025-09-15 06:06:18,358 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:06:18,358 - INFO - Generating LLM report...
2025-09-15 06:06:22,810 - INFO - LLM Response: {'severity': 'high', 'reason': "The Docker container 'minecraft' is currently not running; it exited unexpectedly."}
2025-09-15 06:11:22,810 - INFO - Running monitoring cycle...
2025-09-15 06:11:24,896 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:11:24,896 - INFO - Generating LLM report...
2025-09-15 06:11:29,368 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues with its operational status; it has exited unexpectedly."}
2025-09-15 06:16:29,368 - INFO - Running monitoring cycle...
2025-09-15 06:16:31,452 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:16:31,452 - INFO - Generating LLM report...
2025-09-15 06:16:35,863 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently exited; it needs restarting."}
2025-09-15 06:21:35,864 - INFO - Running monitoring cycle...
2025-09-15 06:26:27,967 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:26:27,967 - INFO - Generating LLM report...
2025-09-15 06:26:32,378 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited unexpectedly."}
2025-09-15 06:31:32,378 - INFO - Running monitoring cycle...
2025-09-15 06:31:34,493 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:31:34,494 - INFO - Generating LLM report...
2025-09-15 06:31:39,022 - INFO - LLM Response: {'severity': 'high', 'reason': "The Docker container named 'minecraft' is currently stopped; its status indicates that it has exited."}
2025-09-15 06:36:39,022 - INFO - Running monitoring cycle...
2025-09-15 06:36:41,124 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:36:41,124 - INFO - Generating LLM report...
2025-09-15 06:36:45,614 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently exited; it was previously running but has stopped without apparent cause."}
2025-09-15 06:41:45,614 - INFO - Running monitoring cycle...
2025-09-15 06:41:47,715 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:41:47,715 - INFO - Generating LLM report...
2025-09-15 06:41:52,176 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited without starting."}
2025-09-15 06:46:52,177 - INFO - Running monitoring cycle...
2025-09-15 06:47:20,506 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:47:20,506 - INFO - Generating LLM report...
2025-09-15 06:47:24,980 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped with status 'exited'."}
2025-09-15 06:52:24,980 - INFO - Running monitoring cycle...
2025-09-15 06:52:27,071 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:52:27,071 - INFO - Generating LLM report...
2025-09-15 06:52:31,558 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing a critical issue since it exited; it's currently non-operational."}
2025-09-15 06:57:31,559 - INFO - Running monitoring cycle...
2025-09-15 06:57:33,644 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 06:57:33,644 - INFO - Generating LLM report...
2025-09-15 06:57:38,061 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues since it exited unexpectedly without running."}
2025-09-15 07:02:38,061 - INFO - Running monitoring cycle...
2025-09-15 07:02:40,160 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:02:40,160 - INFO - Generating LLM report...
2025-09-15 07:02:44,585 - INFO - LLM Response: {'severity': 'high', 'reason': "The Docker container named 'minecraft' is currently stopped because it has exited."}
2025-09-15 07:07:44,585 - INFO - Running monitoring cycle...
2025-09-15 07:08:51,220 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:08:51,220 - INFO - Generating LLM report...
2025-09-15 07:08:55,675 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped; it exited unexpectedly."}
2025-09-15 07:13:55,675 - INFO - Running monitoring cycle...
2025-09-15 07:13:57,772 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:13:57,773 - INFO - Generating LLM report...
2025-09-15 07:14:02,247 - INFO - LLM Response: {'severity': 'high', 'reason': "The Docker container named 'minecraft' has exited unexpectedly; it is currently stopped."}
2025-09-15 07:19:02,247 - INFO - Running monitoring cycle...
2025-09-15 07:19:04,378 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:19:04,378 - INFO - Generating LLM report...
2025-09-15 07:19:08,835 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped because it exited unexpectedly."}
2025-09-15 07:24:08,836 - INFO - Running monitoring cycle...
2025-09-15 07:24:10,941 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:24:10,941 - INFO - Generating LLM report...
2025-09-15 07:24:15,376 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing a critical issue: it has exited unexpectedly."}
2025-09-15 07:29:15,376 - INFO - Running monitoring cycle...
2025-09-15 07:31:35,749 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:31:35,749 - INFO - Generating LLM report...
2025-09-15 07:31:40,194 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues; it has exited unexpectedly."}
2025-09-15 07:36:40,195 - INFO - Running monitoring cycle...
2025-09-15 07:36:42,291 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:36:42,291 - INFO - Generating LLM report...
2025-09-15 07:36:46,704 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is reported missing; it exited unexpectedly."}
2025-09-15 07:41:46,705 - INFO - Running monitoring cycle...
2025-09-15 07:41:48,797 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:41:48,797 - INFO - Generating LLM report...
2025-09-15 07:41:53,308 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently exited; it was previously running but has stopped unexpectedly."}
2025-09-15 07:46:53,309 - INFO - Running monitoring cycle...
2025-09-15 07:46:55,406 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:46:55,406 - INFO - Generating LLM report...
2025-09-15 07:46:59,887 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is currently stopped (exited), which may lead to service disruption."}
2025-09-15 07:51:59,887 - INFO - Running monitoring cycle...
2025-09-15 07:54:25,483 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:54:25,483 - INFO - Generating LLM report...
2025-09-15 07:54:30,100 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing a high severity issue due to it being non-operational with its current status reported as exited."}
2025-09-15 07:59:30,100 - INFO - Running monitoring cycle...
2025-09-15 07:59:32,238 - INFO - Detected 1 anomalies: [{'severity': 'high', 'reason': "Docker container 'minecraft' is not running. Current status: exited"}]
2025-09-15 07:59:32,238 - INFO - Generating LLM report...
2025-09-15 07:59:36,730 - INFO - LLM Response: {'severity': 'high', 'reason': "Docker container 'minecraft' is experiencing issues since it exited without completing its intended tasks."}
2025-09-15 08:04:36,731 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:09:38,841 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:14:40,943 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:22:01,659 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:27:03,759 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:32:05,908 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:37:08,055 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:45:34,653 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:50:36,768 - INFO - Running monitoring cycle...
|
||||
2025-09-15 08:55:38,898 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:00:40,997 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:07:54,915 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:12:57,048 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:17:59,145 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:23:01,297 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:28:39,356 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:33:41,445 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:38:43,524 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:43:45,620 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:49:26,414 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:54:28,554 - INFO - Running monitoring cycle...
|
||||
2025-09-15 09:59:30,653 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:04:32,778 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:13:01,370 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:18:03,453 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:23:05,550 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:28:07,634 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:36:19,972 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:41:22,091 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:46:24,244 - INFO - Running monitoring cycle...
|
||||
2025-09-15 10:51:26,346 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:00:24,637 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:05:26,720 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:10:28,819 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:15:30,897 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:24:21,912 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:29:23,994 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:34:26,089 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:39:28,234 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:50:22,435 - INFO - Running monitoring cycle...
|
||||
2025-09-15 11:55:24,575 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:00:26,724 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:05:28,874 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:12:34,647 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:17:36,748 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:22:38,907 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:27:40,996 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:34:57,190 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:39:59,344 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:42:28,467 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:43:10,948 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:43:13,084 - WARNING - Nmap -sS scan requires root privileges. Falling back to -sT.
|
||||
2025-09-15 12:45:11,051 - INFO - Running in test mode...
|
||||
2025-09-15 12:45:11,051 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:45:13,146 - WARNING - Nmap -sS scan requires root privileges. Falling back to -sT.
|
||||
2025-09-15 12:45:44,457 - INFO - Running in test mode...
|
||||
2025-09-15 12:45:44,457 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:45:46,590 - WARNING - Nmap -sS scan requires root privileges. Falling back to -sT.
|
||||
2025-09-15 12:46:33,528 - INFO - Running in test mode...
|
||||
2025-09-15 12:46:33,529 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:46:35,614 - WARNING - Nmap -sS scan requires root privileges. Falling back to -sT.
|
||||
2025-09-15 12:47:39,333 - INFO - Running in test mode...
|
||||
2025-09-15 12:47:39,333 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:47:41,432 - WARNING - Nmap -sS scan requires root privileges. Falling back to -sT.
|
||||
2025-09-15 12:58:20,016 - DEBUG - Entering main
|
||||
2025-09-15 12:58:20,016 - INFO - Running in test mode...
|
||||
2025-09-15 12:58:20,016 - DEBUG - Entering run_monitoring_cycle
|
||||
2025-09-15 12:58:20,016 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:58:20,016 - DEBUG - Entering get_system_logs
|
||||
2025-09-15 12:58:20,016 - DEBUG - Exiting get_system_logs
|
||||
2025-09-15 12:58:20,016 - DEBUG - Entering get_network_metrics
|
||||
2025-09-15 12:58:22,047 - DEBUG - Exiting get_network_metrics
|
||||
2025-09-15 12:58:22,061 - DEBUG - Entering get_sensor_data
|
||||
2025-09-15 12:58:22,078 - DEBUG - Exiting get_sensor_data
|
||||
2025-09-15 12:58:22,078 - DEBUG - Entering get_cpu_temperature
|
||||
2025-09-15 12:58:22,078 - DEBUG - Exiting get_cpu_temperature
|
||||
2025-09-15 12:58:22,078 - DEBUG - Entering get_gpu_temperature
|
||||
2025-09-15 12:58:22,078 - DEBUG - Exiting get_gpu_temperature
|
||||
2025-09-15 12:58:22,079 - DEBUG - Entering get_login_attempts
|
||||
2025-09-15 12:58:22,079 - DEBUG - Exiting get_login_attempts
|
||||
2025-09-15 12:58:22,079 - DEBUG - Entering get_docker_container_status
|
||||
2025-09-15 12:58:22,111 - DEBUG - Exiting get_docker_container_status
|
||||
2025-09-15 12:58:22,113 - DEBUG - Entering get_nmap_scan_results
|
||||
2025-09-15 12:58:22,117 - WARNING - Nmap -sS scan requires root privileges. Falling back to -sT.
|
||||
2025-09-15 12:58:28,544 - DEBUG - Exiting get_nmap_scan_results
|
||||
2025-09-15 12:58:28,552 - DEBUG - Entering analyze_data_locally
|
||||
2025-09-15 12:58:28,553 - DEBUG - Exiting analyze_data_locally
|
||||
2025-09-15 12:58:28,553 - DEBUG - Exiting run_monitoring_cycle
|
||||
2025-09-15 12:58:28,553 - DEBUG - Exiting main
|
||||
2025-09-15 12:58:31,241 - DEBUG - Entering main
|
||||
2025-09-15 12:58:31,242 - INFO - Running in test mode...
|
||||
2025-09-15 12:58:31,242 - DEBUG - Entering run_monitoring_cycle
|
||||
2025-09-15 12:58:31,242 - INFO - Running monitoring cycle...
|
||||
2025-09-15 12:58:31,242 - DEBUG - Entering get_system_logs
|
||||
2025-09-15 12:58:31,242 - DEBUG - Exiting get_system_logs
|
||||
2025-09-15 12:58:31,242 - DEBUG - Entering get_network_metrics
|
||||
2025-09-15 12:58:33,272 - DEBUG - Exiting get_network_metrics
|
||||
2025-09-15 12:58:33,275 - DEBUG - Entering get_sensor_data
|
||||
2025-09-15 12:58:33,289 - DEBUG - Exiting get_sensor_data
|
||||
2025-09-15 12:58:33,289 - DEBUG - Entering get_cpu_temperature
|
||||
2025-09-15 12:58:33,289 - DEBUG - Exiting get_cpu_temperature
|
||||
2025-09-15 12:58:33,289 - DEBUG - Entering get_gpu_temperature
|
||||
2025-09-15 12:58:33,289 - DEBUG - Exiting get_gpu_temperature
|
||||
2025-09-15 12:58:33,289 - DEBUG - Entering get_login_attempts
|
||||
2025-09-15 12:58:33,290 - DEBUG - Exiting get_login_attempts
|
||||
2025-09-15 12:58:33,290 - DEBUG - Entering get_docker_container_status
2025-09-15 12:58:33,319 - DEBUG - Exiting get_docker_container_status
2025-09-15 12:58:33,320 - DEBUG - Entering get_nmap_scan_results
2025-09-15 12:58:33,324 - WARNING - Nmap -sS scan requires root privileges. Falling back to -sT.
2025-09-15 12:59:20,558 - DEBUG - Exiting get_nmap_scan_results
2025-09-15 12:59:20,568 - DEBUG - Entering analyze_data_locally
2025-09-15 12:59:20,569 - DEBUG - Exiting analyze_data_locally
2025-09-15 12:59:20,569 - DEBUG - Exiting run_monitoring_cycle
2025-09-15 12:59:20,569 - DEBUG - Exiting main
2025-09-15 12:59:45,756 - DEBUG - __main__ - Entering main
2025-09-15 12:59:45,756 - INFO - database - Database initialized successfully.
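At 12:59:45 the record format gains a logger name (`__main__`, `database`, `docker.utils.config`, `urllib3.connectionpool`): the code has moved from the root logger to per-module loggers, which is also why third-party library chatter now shows up. The conventional setup for this (assumed here; the logging configuration itself is not in this diff):

```python
import logging

# In every module: a named logger, so each record says where it came from.
logger = logging.getLogger(__name__)

# Once at startup: include %(name)s in the format, and use DEBUG so
# library loggers such as docker and urllib3 are visible too.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
)
```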
2025-09-15 12:59:45,756 - INFO - __main__ - Running in test mode...
2025-09-15 12:59:45,756 - DEBUG - __main__ - Entering run_monitoring_cycle
2025-09-15 12:59:45,756 - INFO - __main__ - Running monitoring cycle...
2025-09-15 12:59:45,757 - DEBUG - __main__ - Entering get_system_logs
2025-09-15 12:59:45,757 - DEBUG - __main__ - Exiting get_system_logs
2025-09-15 12:59:45,757 - DEBUG - __main__ - Entering get_network_metrics
2025-09-15 12:59:47,785 - DEBUG - __main__ - Exiting get_network_metrics
2025-09-15 12:59:47,795 - DEBUG - __main__ - Entering get_sensor_data
2025-09-15 12:59:47,819 - DEBUG - __main__ - Exiting get_sensor_data
2025-09-15 12:59:47,820 - DEBUG - __main__ - Entering get_cpu_temperature
2025-09-15 12:59:47,820 - DEBUG - __main__ - Exiting get_cpu_temperature
2025-09-15 12:59:47,820 - DEBUG - __main__ - Entering get_gpu_temperature
2025-09-15 12:59:47,821 - DEBUG - __main__ - Exiting get_gpu_temperature
2025-09-15 12:59:47,821 - DEBUG - __main__ - Entering get_login_attempts
2025-09-15 12:59:47,821 - DEBUG - __main__ - Exiting get_login_attempts
2025-09-15 12:59:47,822 - DEBUG - __main__ - Entering get_docker_container_status
2025-09-15 12:59:47,822 - DEBUG - docker.utils.config - Trying paths: ['/home/artanis/.docker/config.json', '/home/artanis/.dockercfg']
2025-09-15 12:59:47,822 - DEBUG - docker.utils.config - No config file found
2025-09-15 12:59:47,823 - DEBUG - docker.utils.config - Trying paths: ['/home/artanis/.docker/config.json', '/home/artanis/.dockercfg']
2025-09-15 12:59:47,823 - DEBUG - docker.utils.config - No config file found
2025-09-15 12:59:47,833 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /version HTTP/1.1" 200 822
2025-09-15 12:59:47,836 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/json?limit=-1&all=1&size=0&trunc_cmd=0 HTTP/1.1" 200 None
2025-09-15 12:59:47,838 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/6fe246915fcd7e9ba47ab659c2bded702a248ba7ba0bea67d5440a429059ecf9/json HTTP/1.1" 200 None
2025-09-15 12:59:47,839 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/db9267cbc792fd3b42cbe3c91a81c9e9d9c8f10784264bbaa5dd6c8443f1ebec/json HTTP/1.1" 200 None
2025-09-15 12:59:47,840 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/04947c346ebea841c3ff66821fb02cceb1ce6fc1e249dda03f6cfcc7ab1387ee/json HTTP/1.1" 200 None
2025-09-15 12:59:47,841 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/892ca3318ca6c7f59efdafb7c7fe72c2fd29b2163ba93bd7a96b08bdf11149c7/json HTTP/1.1" 200 None
2025-09-15 12:59:47,842 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/e4c49da7ccd7dbe046e4b16b44da696c7ff6dbe2bfce332f55830677c8bb5385/json HTTP/1.1" 200 None
2025-09-15 12:59:47,843 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/eaf91d09a18ebc4c4a5273ea3e40ee5b235ff601b36df03b622ef7d4c711e14d/json HTTP/1.1" 200 None
2025-09-15 12:59:47,845 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/8ee77507e001ffa2e3c49fd0dff574b560301c74fe897e44d1b64bb30891b5dd/json HTTP/1.1" 200 None
2025-09-15 12:59:47,846 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/193897be46b32bbdcd70d9f8f00f4bb3a0ba4a9ad23222620a15b65aaa9407ea/json HTTP/1.1" 200 None
2025-09-15 12:59:47,847 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/ea66b86039b4d69764c32380e51f437cff7f5edd693c08343a6a305caf52d329/json HTTP/1.1" 200 None
2025-09-15 12:59:47,848 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/3af5798ed8340c94591efaa44b4beed306c4b753380f8fde0fd66dafcbf7491b/json HTTP/1.1" 200 None
2025-09-15 12:59:47,849 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/9bada910535adab609ae61c561e3373b2f7c5749fe831406f4f95d4262c40768/json HTTP/1.1" 200 None
2025-09-15 12:59:47,850 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/c8349318a9b41ee73228fd8017e54bfda30f09e196688b0e1adfdfe88d0e7809/json HTTP/1.1" 200 None
2025-09-15 12:59:47,851 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/dcaec110abb26aebf65c0dd85daccc345283ec3d6bacf3d64e42fbe8187ec005/json HTTP/1.1" 200 None
2025-09-15 12:59:47,852 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/2e4b6585210f65df2ec680fe3df7673fc7c5078d24e2103677409ece211b71c4/json HTTP/1.1" 200 None
2025-09-15 12:59:47,853 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/cd875071300812e4c3a15e2c84b9b73b36f67a236c1fdd46c5a49f3992aa429f/json HTTP/1.1" 200 None
2025-09-15 12:59:47,854 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/393705e06222d67c9de37dce4b03c036bc3774deb9d8a39bda8096481be569c3/json HTTP/1.1" 200 None
2025-09-15 12:59:47,856 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/0ca3adee66289acbaff8a2cae54e888b3fffe2f8b645ce326cf9072023f2d81c/json HTTP/1.1" 200 None
2025-09-15 12:59:47,858 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/1a4d4abeea6d3488f754679bde7063749213120e9f243c56f060a636ae5ea187/json HTTP/1.1" 200 None
2025-09-15 12:59:47,859 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/ae68bc651bf3188f354038b4acc819b30960bb0ce6e6569b132562f15b9d54e8/json HTTP/1.1" 200 None
2025-09-15 12:59:47,859 - DEBUG - __main__ - Exiting get_docker_container_status
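The `docker.utils.config` and `urllib3.connectionpool` records above are the Docker SDK for Python talking to the local daemon: one `GET /containers/json?all=1` to list everything, then one inspect call per container (the SDK hydrates each container object this way by default). `get_docker_container_status` is therefore plausibly little more than the following sketch; only the wire traffic is visible here:

```python
import docker  # pip install docker

def get_docker_container_status() -> dict[str, str]:
    """Map container name -> status for all containers, running or not."""
    client = docker.from_env()  # talks to /var/run/docker.sock
    # list(all=True) issues GET /containers/json?all=1, then the SDK
    # inspects each container (the per-ID GETs logged above).
    return {c.name: c.status for c in client.containers.list(all=True)}
```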
2025-09-15 12:59:47,861 - DEBUG - __main__ - Entering get_nmap_scan_results
2025-09-15 12:59:47,865 - WARNING - __main__ - Nmap -sS scan requires root privileges. Falling back to -sT.
2025-09-15 13:00:16,585 - DEBUG - __main__ - Exiting get_nmap_scan_results
2025-09-15 13:00:16,588 - INFO - database - Retention cutoff: 2025-09-15T18:00:15.588626+00:00
2025-09-15 13:00:16,589 - INFO - database - Found 1 old records to delete.
2025-09-15 13:00:16,591 - INFO - database - Deleted 1 old records.
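After collection, the `database` module runs a retention pass: compute a UTC cutoff, count rows older than it, delete them, and log each step. Against a SQLite store that is roughly the following sketch (table and column names are hypothetical, for illustration only):

```python
import logging
import sqlite3
from datetime import datetime, timedelta, timezone

logger = logging.getLogger("database")

def purge_old_records(conn: sqlite3.Connection, retention_days: int = 7) -> None:
    """Delete monitoring rows older than the retention window."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=retention_days)).isoformat()
    logger.info("Retention cutoff: %s", cutoff)
    # 'metrics' and 'timestamp' are placeholder names.
    count = conn.execute(
        "SELECT COUNT(*) FROM metrics WHERE timestamp < ?", (cutoff,)
    ).fetchone()[0]
    logger.info("Found %d old records to delete.", count)
    conn.execute("DELETE FROM metrics WHERE timestamp < ?", (cutoff,))
    conn.commit()
    logger.info("Deleted %d old records.", count)
```

One detail worth double-checking against the log above: if the host clock sits at UTC-5, the logged cutoff (18:00:15 UTC) lands within a second of the run itself, which would make the retention window effectively zero rather than several days.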
2025-09-15 13:00:16,591 - DEBUG - __main__ - Entering analyze_data_locally
2025-09-15 13:00:16,591 - DEBUG - __main__ - Exiting analyze_data_locally
2025-09-15 13:00:16,591 - DEBUG - __main__ - Exiting run_monitoring_cycle
2025-09-15 13:00:16,591 - DEBUG - __main__ - Exiting main
2025-09-15 13:00:19,271 - DEBUG - __main__ - Entering main
2025-09-15 13:00:19,271 - INFO - database - Database initialized successfully.
2025-09-15 13:00:19,271 - INFO - __main__ - Running in test mode...
2025-09-15 13:00:19,271 - DEBUG - __main__ - Entering run_monitoring_cycle
2025-09-15 13:00:19,271 - INFO - __main__ - Running monitoring cycle...
2025-09-15 13:00:19,271 - DEBUG - __main__ - Entering get_system_logs
2025-09-15 13:00:19,271 - DEBUG - __main__ - Exiting get_system_logs
2025-09-15 13:00:19,272 - DEBUG - __main__ - Entering get_network_metrics
2025-09-15 13:00:21,297 - DEBUG - __main__ - Exiting get_network_metrics
2025-09-15 13:00:21,299 - DEBUG - __main__ - Entering get_sensor_data
2025-09-15 13:00:21,314 - DEBUG - __main__ - Exiting get_sensor_data
2025-09-15 13:00:21,314 - DEBUG - __main__ - Entering get_cpu_temperature
2025-09-15 13:00:21,315 - DEBUG - __main__ - Exiting get_cpu_temperature
2025-09-15 13:00:21,315 - DEBUG - __main__ - Entering get_gpu_temperature
2025-09-15 13:00:21,315 - DEBUG - __main__ - Exiting get_gpu_temperature
2025-09-15 13:00:21,315 - DEBUG - __main__ - Entering get_login_attempts
2025-09-15 13:00:21,315 - DEBUG - __main__ - Exiting get_login_attempts
2025-09-15 13:00:21,315 - DEBUG - __main__ - Entering get_docker_container_status
2025-09-15 13:00:21,315 - DEBUG - docker.utils.config - Trying paths: ['/home/artanis/.docker/config.json', '/home/artanis/.dockercfg']
2025-09-15 13:00:21,315 - DEBUG - docker.utils.config - No config file found
2025-09-15 13:00:21,315 - DEBUG - docker.utils.config - Trying paths: ['/home/artanis/.docker/config.json', '/home/artanis/.dockercfg']
2025-09-15 13:00:21,315 - DEBUG - docker.utils.config - No config file found
2025-09-15 13:00:21,321 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /version HTTP/1.1" 200 822
2025-09-15 13:00:21,324 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/json?limit=-1&all=1&size=0&trunc_cmd=0 HTTP/1.1" 200 None
2025-09-15 13:00:21,326 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/6fe246915fcd7e9ba47ab659c2bded702a248ba7ba0bea67d5440a429059ecf9/json HTTP/1.1" 200 None
2025-09-15 13:00:21,327 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/db9267cbc792fd3b42cbe3c91a81c9e9d9c8f10784264bbaa5dd6c8443f1ebec/json HTTP/1.1" 200 None
2025-09-15 13:00:21,328 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/04947c346ebea841c3ff66821fb02cceb1ce6fc1e249dda03f6cfcc7ab1387ee/json HTTP/1.1" 200 None
2025-09-15 13:00:21,329 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/892ca3318ca6c7f59efdafb7c7fe72c2fd29b2163ba93bd7a96b08bdf11149c7/json HTTP/1.1" 200 None
2025-09-15 13:00:21,331 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/e4c49da7ccd7dbe046e4b16b44da696c7ff6dbe2bfce332f55830677c8bb5385/json HTTP/1.1" 200 None
2025-09-15 13:00:21,332 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/eaf91d09a18ebc4c4a5273ea3e40ee5b235ff601b36df03b622ef7d4c711e14d/json HTTP/1.1" 200 None
2025-09-15 13:00:21,334 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/8ee77507e001ffa2e3c49fd0dff574b560301c74fe897e44d1b64bb30891b5dd/json HTTP/1.1" 200 None
2025-09-15 13:00:21,335 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/193897be46b32bbdcd70d9f8f00f4bb3a0ba4a9ad23222620a15b65aaa9407ea/json HTTP/1.1" 200 None
2025-09-15 13:00:21,336 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/ea66b86039b4d69764c32380e51f437cff7f5edd693c08343a6a305caf52d329/json HTTP/1.1" 200 None
2025-09-15 13:00:21,337 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/3af5798ed8340c94591efaa44b4beed306c4b753380f8fde0fd66dafcbf7491b/json HTTP/1.1" 200 None
2025-09-15 13:00:21,338 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/9bada910535adab609ae61c561e3373b2f7c5749fe831406f4f95d4262c40768/json HTTP/1.1" 200 None
2025-09-15 13:00:21,339 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/c8349318a9b41ee73228fd8017e54bfda30f09e196688b0e1adfdfe88d0e7809/json HTTP/1.1" 200 None
2025-09-15 13:00:21,340 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/dcaec110abb26aebf65c0dd85daccc345283ec3d6bacf3d64e42fbe8187ec005/json HTTP/1.1" 200 None
2025-09-15 13:00:21,341 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/2e4b6585210f65df2ec680fe3df7673fc7c5078d24e2103677409ece211b71c4/json HTTP/1.1" 200 None
2025-09-15 13:00:21,343 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/cd875071300812e4c3a15e2c84b9b73b36f67a236c1fdd46c5a49f3992aa429f/json HTTP/1.1" 200 None
2025-09-15 13:00:21,344 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/393705e06222d67c9de37dce4b03c036bc3774deb9d8a39bda8096481be569c3/json HTTP/1.1" 200 None
2025-09-15 13:00:21,345 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/0ca3adee66289acbaff8a2cae54e888b3fffe2f8b645ce326cf9072023f2d81c/json HTTP/1.1" 200 None
2025-09-15 13:00:21,346 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/1a4d4abeea6d3488f754679bde7063749213120e9f243c56f060a636ae5ea187/json HTTP/1.1" 200 None
2025-09-15 13:00:21,347 - DEBUG - urllib3.connectionpool - http://localhost:None "GET /v1.51/containers/ae68bc651bf3188f354038b4acc819b30960bb0ce6e6569b132562f15b9d54e8/json HTTP/1.1" 200 None
2025-09-15 13:00:21,347 - DEBUG - __main__ - Exiting get_docker_container_status
2025-09-15 13:00:21,349 - DEBUG - __main__ - Entering get_nmap_scan_results
2025-09-15 13:00:21,353 - WARNING - __main__ - Nmap -sS scan requires root privileges. Falling back to -sT.
2025-09-15 13:05:10,688 - DEBUG - __main__ - Exiting get_nmap_scan_results
2025-09-15 13:05:10,691 - INFO - database - Retention cutoff: 2025-09-15T18:05:09.691390+00:00
2025-09-15 13:05:10,691 - INFO - database - Found 1 old records to delete.
2025-09-15 13:05:10,693 - INFO - database - Deleted 1 old records.
2025-09-15 13:05:10,694 - DEBUG - __main__ - Entering analyze_data_locally
2025-09-15 13:05:10,695 - DEBUG - __main__ - Exiting analyze_data_locally
2025-09-15 13:05:10,695 - DEBUG - __main__ - Exiting run_monitoring_cycle
2025-09-15 13:05:10,695 - DEBUG - __main__ - Exiting main
2025-09-15 13:21:41,948 - INFO - Running in test mode...
2025-09-15 13:21:41,949 - INFO - Running monitoring cycle...
2025-09-15 13:21:44,096 - WARNING - Nmap -sS scan requires root privileges. Falling back to -sT.
2025-09-15 13:21:56,641 - INFO - Detected 9 anomalies: [{'severity': 'high', 'reason': 'High number of blocked connections (1477) from IP address: 23.28.198.165'}, {'severity': 'high', 'reason': 'High number of blocked connections (33) from IP address: 84.252.134.217'}, {'severity': 'high', 'reason': 'High number of blocked connections (140) from IP address: 51.250.10.6'}, {'severity': 'high', 'reason': 'High number of blocked connections (48) from IP address: 158.160.20.113'}, {'severity': 'high', 'reason': 'High number of blocked connections (13) from IP address: 182.93.50.90'}, {'severity': 'high', 'reason': 'High number of blocked connections (82) from IP address: 172.22.0.2'}, {'severity': 'high', 'reason': 'High number of blocked connections (591) from IP address: 192.168.2.117'}, {'severity': 'high', 'reason': 'High number of blocked connections (12) from IP address: 172.23.0.2'}, {'severity': 'high', 'reason': 'High number of blocked connections (11) from IP address: 192.168.2.104'}]
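All nine anomalies are one pattern: an unusually high count of blocked connections from a single source IP. The local analysis step plausibly tallies firewall `BLOCK` lines per `SRC=` address and flags anything above a threshold; the smallest reported count is 11, which would be consistent with a threshold of 10. A sketch under those assumptions (the log format and threshold value are illustrative):

```python
import re
from collections import Counter

BLOCK_THRESHOLD = 10  # illustrative; the agent's actual threshold is not shown

def find_blocked_ip_anomalies(ufw_lines: list[str]) -> list[dict]:
    """Flag source IPs with a high count of UFW BLOCK entries."""
    counts = Counter()
    for line in ufw_lines:
        if "[UFW BLOCK]" in line:
            match = re.search(r"SRC=(\S+)", line)
            if match:
                counts[match.group(1)] += 1
    return [
        {
            "severity": "high",
            "reason": f"High number of blocked connections ({n}) "
                      f"from IP address: {ip}",
        }
        for ip, n in counts.items()
        if n > BLOCK_THRESHOLD
    ]
```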
2025-09-15 13:21:56,642 - INFO - Generating LLM report...
2025-09-15 13:22:04,084 - INFO - LLM Response: {'severity': 'high', 'reason': 'High number of blocked connections detected from multiple IP addresses: 23.28.198.165 (1477), 84.252.134.217 (33), 51.250.10.6 (140), 158.160.20.113 (48), 182.93.50.90 (13), 172.22.0.2 (82), 192.168.2.117 (591), 172.23.0.2 (12), and 192.168.2.104 (11). This indicates a potential coordinated attack or misconfigured system.'}
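For the report, the anomaly list is handed to the local Ollama instance, which returns the consolidated JSON verdict logged above. Against Ollama's HTTP API that is roughly the following (model name, prompt wording, and endpoint are assumptions, not taken from this diff):

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def generate_llm_report(anomalies: list[dict]) -> dict:
    """Ask the local LLM to merge anomalies into one severity/reason report."""
    prompt = (
        "Combine these monitoring anomalies into a single JSON object "
        "with 'severity' and 'reason' keys:\n" + json.dumps(anomalies)
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama3.1", "prompt": prompt,
              "stream": False, "format": "json"},
        timeout=120,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["response"])
```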
2025-09-15 13:22:04,982 - ERROR - Error sending Discord alert: 400 - b'{"content": ["Must be 2000 or fewer in length."]}'
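The 400 from Discord is a hard API limit: webhook message `content` may be at most 2000 characters, and the combined nine-anomaly report exceeded it. Chunking (or truncating) before posting avoids the error; a minimal sketch:

```python
import requests

DISCORD_LIMIT = 2000  # Discord rejects webhook content longer than this

def send_discord_alert(webhook_url: str, message: str) -> None:
    """Post an alert to a Discord webhook, split into <=2000-char chunks."""
    for i in range(0, len(message), DISCORD_LIMIT):
        resp = requests.post(
            webhook_url, json={"content": message[i:i + DISCORD_LIMIT]}
        )
        resp.raise_for_status()
```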
2025-09-15 13:22:11,390 - INFO - Google Home alert sent successfully.
2025-09-15 13:25:08,619 - INFO - Running monitoring cycle...
32
tmp/monitoring_agent.log.2025-09-14
Executable file
@@ -0,0 +1,32 @@
2025-09-14 20:27:49,614 - INFO - Running monitoring cycle...
2025-09-14 20:34:15,578 - INFO - Running monitoring cycle...
2025-09-14 20:39:17,650 - INFO - Running monitoring cycle...
2025-09-14 20:44:19,738 - INFO - Running monitoring cycle...
2025-09-14 20:49:21,809 - INFO - Running monitoring cycle...
2025-09-14 20:55:57,821 - INFO - Running monitoring cycle...
2025-09-14 21:00:59,895 - INFO - Running monitoring cycle...
2025-09-14 21:06:02,000 - INFO - Running monitoring cycle...
2025-09-14 21:11:04,092 - INFO - Running monitoring cycle...
2025-09-14 21:46:00,340 - INFO - Running monitoring cycle...
2025-09-14 21:51:02,413 - INFO - Running monitoring cycle...
2025-09-14 21:56:04,515 - INFO - Running monitoring cycle...
2025-09-14 22:01:06,608 - INFO - Running monitoring cycle...
2025-09-14 22:08:01,730 - INFO - Running monitoring cycle...
2025-09-14 22:13:03,882 - INFO - Running monitoring cycle...
2025-09-14 22:18:06,032 - INFO - Running monitoring cycle...
2025-09-14 22:23:08,183 - INFO - Running monitoring cycle...
2025-09-14 22:29:47,066 - INFO - Running monitoring cycle...
2025-09-14 22:34:49,156 - INFO - Running monitoring cycle...
2025-09-14 22:39:51,311 - INFO - Running monitoring cycle...
2025-09-14 22:44:53,423 - INFO - Running monitoring cycle...
2025-09-14 22:53:51,148 - INFO - Running monitoring cycle...
2025-09-14 22:58:53,301 - INFO - Running monitoring cycle...
2025-09-14 23:03:55,388 - INFO - Running monitoring cycle...
2025-09-14 23:08:57,530 - INFO - Running monitoring cycle...
2025-09-14 23:18:07,849 - INFO - Running monitoring cycle...
2025-09-14 23:23:09,993 - INFO - Running monitoring cycle...
2025-09-14 23:28:12,167 - INFO - Running monitoring cycle...
2025-09-14 23:33:14,332 - INFO - Running monitoring cycle...
2025-09-14 23:46:15,054 - INFO - Running monitoring cycle...
2025-09-14 23:51:17,204 - INFO - Running monitoring cycle...
2025-09-14 23:56:19,308 - INFO - Running monitoring cycle...
1
ufw_log_position.txt
Normal file
@@ -0,0 +1 @@
822805
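The single value in `ufw_log_position.txt` is a byte offset: the agent records how far into the firewall log it has already read, so each cycle processes only newly appended lines. The usual pattern, with a reset when the log rotates (file paths are assumed):

```python
import os

POSITION_FILE = "ufw_log_position.txt"
UFW_LOG = "/var/log/ufw.log"  # assumed log path

def read_new_ufw_lines() -> list[str]:
    """Return lines appended since the last cycle, tracking a byte offset."""
    try:
        with open(POSITION_FILE) as f:
            pos = int(f.read().strip() or 0)
    except (FileNotFoundError, ValueError):
        pos = 0
    with open(UFW_LOG, "rb") as log:
        log.seek(0, os.SEEK_END)
        if log.tell() < pos:  # file shrank: log was rotated, start over
            pos = 0
        log.seek(pos)
        data = log.read()
        new_pos = log.tell()
    with open(POSITION_FILE, "w") as f:
        f.write(str(new_pos))
    return data.decode(errors="replace").splitlines()
```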