Compare commits
13 Commits
NMAP...6f7e99639c

Commits (SHA1):
- 6f7e99639c
- bebedb1e15
- ff7bbb98d0
- 57d7688c3a
- 83b25d81a6
- 7e24379fa1
- d03018de9b
- f65b2d468d
- e119bc7194
- c5a446ea65
- b8b91880d6
- e7730ebde5
- 63ee043f34
.gitignore (vendored) — 2 additions

@@ -2,3 +2,5 @@ __pycache__/*
 __pycache__/
 monitoring_data.json
 log_position.txt
+auth_log_position.txt
+monitoring_agent.log*
AGENTS.md — new file, 47 lines

@@ -0,0 +1,47 @@
+# AGENTS.md
+
+This document outlines the autonomous and human agents involved in the LLM-Powered Monitoring Agent project.
+
+## Human Agents
+
+### Inanis
+
+- **Role**: Primary Operator, Project Owner
+- **Responsibilities**:
+  - Defines project goals and requirements.
+  - Provides high-level guidance and approval for major changes.
+  - Reviews agent outputs and provides feedback.
+  - Manages overall project direction.
+- **Contact**: [If Inanis wants to provide contact info, it would go here]
+
+## Autonomous Agents
+
+### Blight (LLM-Powered Monitoring Agent)
+
+- **Role**: Autonomous Monitoring and Anomaly Detection Agent
+- **Type**: Large Language Model (LLM) based agent
+- **Capabilities**:
+  - Collects system and network metrics (logs, temperatures, network performance, Nmap scans).
+  - Analyzes collected data against historical baselines.
+  - Detects anomalies using an integrated LLM (Llama3.1).
+  - Generates actionable reports on detected anomalies.
+  - Sends alerts via Discord and Google Home.
+  - Provides daily recaps of events.
+- **Interaction**:
+  - Receives instructions and context from Inanis via CLI.
+  - Provides analysis and reports in JSON format.
+  - Operates continuously in the background (unless in test mode).
+- **Dependencies**:
+  - `ollama` (for LLM inference)
+  - `nmap`
+  - `lm-sensors`
+  - Python libraries (as listed in `requirements.txt`)
+- **Configuration**: Configured via `config.py`, `CONSTRAINTS.md`, and `known_issues.json`.
+- **Status**: Operational and continuously evolving.
+
+## Agent Interactions
+
+- **Inanis -> Blight**: Inanis provides high-level tasks, reviews Blight's output, and refines its behavior through code modifications and configuration updates.
+- **Blight -> Inanis**: Blight reports detected anomalies, system status, and daily summaries to Inanis through configured alerting channels (Discord, Google Home) and logs.
+- **Blight <-> System**: Blight interacts with the local operating system to collect data (reading logs, running commands like `sensors` and `nmap`).
+- **Blight <-> LLM**: Blight sends collected and processed data to the local Ollama LLM for intelligent analysis and receives anomaly reports.
CONSTRAINTS.md — 2 additions

@@ -1,6 +1,8 @@
 ## LLM Constraints and Guidelines
+- Not everything is an anomaly. Err on the side of caution when selecting severity. It's OK not to report anything; you don't have to say anything if you don't want to, or don't need to.
 - Please do not report on anything that is older than 24 hours.
 - The server uses a custom DNS server at 192.168.2.112.
+- Please think carefully about whether the measured values exceed the averages by any significant margin. A few seconds or a few degrees of difference do not constitute a significant margin. Only report anomalies with delta values greater than 10.

 ### Important Things to Focus On:
 - Security-related events such as failed login attempts, unauthorized access, or unusual network connections.
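Editor's note (not part of the diff): a minimal sketch of how the "delta greater than 10" guideline above translates into a plain numeric check, assuming the current and baseline values are simple numbers. The constant name `DELTA_THRESHOLD` and the helper name are illustrative, not taken from the repository.

```python
# Illustrative only -- shows the kind of delta check implied by the guideline.
DELTA_THRESHOLD = 10

def exceeds_margin(current, baseline, threshold=DELTA_THRESHOLD):
    """Return True only when the deviation from the baseline is significant."""
    if not isinstance(current, (int, float)) or not isinstance(baseline, (int, float)):
        return False
    return abs(current - baseline) > threshold
```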
PROGRESS.md — 104 lines changed

@@ -13,52 +13,84 @@

 ## Phase 2: Data Storage

-9. [x] Create `data_storage.py`
-10. [x] Implement data storage functions in `data_storage.py`
-11. [x] Update `monitor_agent.py` to use data storage
-12. [x] Update `SPEC.md` to reflect data storage functionality
+9. [x] Implement data storage functions in `data_storage.py`
+10. [x] Update `monitor_agent.py` to use data storage
+11. [x] Update `SPEC.md` to reflect data storage functionality

 ## Phase 3: Expanded Monitoring

-13. [x] Implement CPU temperature monitoring
-14. [x] Implement GPU temperature monitoring
-15. [x] Implement system login attempt monitoring
-16. [x] Update `monitor_agent.py` to include new metrics
-17. [x] Update `SPEC.md` to reflect new metrics
-18. [x] Extend `calculate_baselines` to include system temps
+12. [x] Implement CPU temperature monitoring
+13. [x] Implement GPU temperature monitoring
+14. [x] Implement system login attempt monitoring
+15. [x] Update `monitor_agent.py` to include new metrics
+16. [x] Update `SPEC.md` to reflect new metrics
+17. [x] Extend `calculate_baselines` to include system temps

 ## Phase 4: Troubleshooting

-19. [x] Investigated and resolved issue with `jc` library
-20. [x] Removed `jc` library as a dependency
-21. [x] Implemented manual parsing of `sensors` command output
+18. [x] Investigated and resolved issue with `jc` library
+19. [x] Removed `jc` library as a dependency
+20. [x] Implemented manual parsing of `sensors` command output

-## Tasks Already Done
-
-[x] Ensure we aren't using mock data for get_system_logs() and get_network_metrics()
-[x] Improve `get_system_logs()` to read new lines since last check
-[x] Improve `get_network_metrics()` by using a library like `pingparsing`
-[x] Ensure we are including CONSTRAINTS.md in our analyze_data_with_llm() function
-[x] Summarize entire report into a single sentence to be said to Home Assistant
-[x] Figure out why Home Assistant isn't using the speaker
-
-## Keeping track of Current Objectives
-
-[x] Improve "high" priority detection by explicitly instructing LLM to output severity in structured JSON format.
-[x] Implement dynamic contextual information (Known/Resolved Issues Feed) for LLM to improve severity detection.
-
-## Network Scanning (Nmap Integration)
-
-1. [x] Add `python-nmap` to `requirements.txt` and install.
-2. [x] Define `NMAP_TARGETS` and `NMAP_SCAN_OPTIONS` in `config.py`.
-3. [x] Create a new function `get_nmap_scan_results()` in `monitor_agent.py`:
+## Phase 5: Network Scanning (Nmap Integration)
+
+21. [x] Add `python-nmap` to `requirements.txt` and install.
+22. [x] Define `NMAP_TARGETS` and `NMAP_SCAN_OPTIONS` in `config.py`.
+23. [x] Create a new function `get_nmap_scan_results()` in `monitor_agent.py`:
     * [x] Use `python-nmap` to perform a scan on the defined targets with the specified options.
     * [x] Return the parsed results.
-4. [x] Integrate `get_nmap_scan_results()` into the main monitoring loop:
+24. [x] Integrate `get_nmap_scan_results()` into the main monitoring loop:
     * [x] Call this function periodically (e.g., less frequently than other metrics).
     * [x] Add the `nmap` results to the `combined_data` dictionary.
-5. [x] Update `data_storage.py` to store `nmap` results.
+25. [x] Update `data_storage.py` to store `nmap` results.
-6. [x] Extend `calculate_baselines()` in `data_storage.py` to include `nmap` baselines:
+26. [x] Extend `calculate_baselines()` in `data_storage.py` to include `nmap` baselines:
     * [x] Compare current `nmap` results with historical data to identify changes.
-7. [x] Modify `analyze_data_with_llm()` prompt to include `nmap` scan results for analysis.
+27. [x] Modify `analyze_data_with_llm()` prompt to include `nmap` scan results for analysis.
-8. [x] Consider how to handle `nmap` permissions.
+28. [x] Consider how to handle `nmap` permissions.
+29. [x] Improve Nmap data logging to include IP addresses, open ports, and service details.

+## Phase 6: Code Refactoring and Documentation
+
+30. [x] Remove duplicate `pingparsing` import in `monitor_agent.py`.
+31. [x] Refactor `get_cpu_temperature` and `get_gpu_temperature` to call the `sensors` command only once.
+32. [x] Refactor `get_login_attempts` to use a position file for efficient log reading.
+33. [x] Simplify JSON parsing in `analyze_data_with_llm`.
+34. [x] Move the LLM prompt to a separate function, `build_llm_prompt`.
+35. [x] Refactor the main loop into smaller functions (`run_monitoring_cycle`, `main`).
+36. [x] Create a helper function in `data_storage.py` for calculating average metrics.
+37. [x] Update `README.md` with current project status and improvements.
+38. [x] Create `AGENTS.md` to document human and autonomous agents.
+
+## Previous TODO
+
+- [x] Improve "high" priority detection by explicitly instructing the LLM to output severity in structured JSON format.
+- [x] Implement dynamic contextual information (Known/Resolved Issues Feed) for the LLM to improve severity detection.
+- [x] Change baseline calculations to only use integers instead of floats.
+- [x] Add a log file that only keeps records for the past 24 hours.
+- [x] Log all LLM responses to the console.
+- [x] Reduce alerts to only happen between 9am and 12am.
+- [x] Get hostnames of devices in the Nmap scan.
+- [x] Filter out RTT fluctuations below 10 seconds.
+- [x] Filter out temperature fluctuations with differences of less than 5 degrees.
+- [x] Create a list of known port numbers and their applications for the LLM to check against when deciding whether an open port is a threat (a sketch follows this file's diff).
+- [x] When calculating averages, round up to the nearest integer. We only want to deliver whole integers to the LLM, nothing with decimal points; it gets confused by decimal points.
+- [x] In the Discord message, include the exact details and the log entry of the problem that prompted the alert.
+
+## TODO
+
+## Phase 7: Offloading Analysis from LLM
+
+39. [x] Create a new function `analyze_data_locally` in `monitor_agent.py`.
+    39.1. [x] This function will take `data`, `baselines`, `known_issues`, and `port_applications` as input.
+    39.2. [x] It will contain the logic to compare current data with baselines and predefined thresholds.
+    39.3. [x] It will be responsible for identifying anomalies for various metrics (CPU/GPU temp, network RTT, failed logins, Nmap changes).
+    39.4. [x] It will return a list of dictionaries, where each dictionary represents an anomaly and contains 'severity' and 'reason' keys.
+40. [x] Refactor `analyze_data_with_llm` into a new function called `generate_llm_report`.
+    40.1. [x] This function will take the list of anomalies from `analyze_data_locally` as input.
+    40.2. [x] It will construct a simple prompt to ask the LLM to generate a human-readable summary of the anomalies.
+    40.3. [x] The LLM will no longer be making analytical decisions.
+41. [x] Update `run_monitoring_cycle` to orchestrate the new workflow.
+    41.1. [x] Call `analyze_data_locally` to get the list of anomalies.
+    41.2. [x] If anomalies are found, call `generate_llm_report` to create the report.
+    41.3. [x] Use the output of `generate_llm_report` for alerting.
+42. [x] Remove the detailed analytical instructions from `build_llm_prompt`, as they will be handled by `analyze_data_locally`.
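Editor's note (not part of the diff): the "known port numbers and their applications" list mentioned in the Previous TODO is not shown in this compare, but `analyze_data_locally()` later in the diff looks ports up with `port_applications.get(str(port), "Unknown")`, which implies a mapping from port-number strings to application names. A minimal sketch under that assumption; the variable name and the example entries other than 62078 (which comes from `known_issues.json`) are illustrative.

```python
# Illustrative sketch only -- the real list is not part of this compare.
port_applications = {
    "22": "SSH",
    "53": "DNS",
    "80": "HTTP",
    "443": "HTTPS",
    "62078": "Apple device sync service (normal on Apple devices)",
}

# Lookup mirrors analyze_data_locally(): unknown ports fall back to "Unknown".
print(port_applications.get(str(62078), "Unknown"))
```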
README.md — 147 lines changed

@@ -1,104 +1,93 @@
 # LLM-Powered Monitoring Agent

-This project is a self-hosted monitoring agent that uses a local Large Language Model (LLM) to detect anomalies in system and network data. It's designed to be a simple, self-contained Python script that can be easily deployed on a server.
+This project implements an LLM-powered monitoring agent designed to continuously collect system and network data, analyze it against historical baselines, and alert on anomalies. The agent leverages a local Large Language Model (LLM) for intelligent anomaly detection and integrates with Discord and Google Home for notifications.

-## 1. Installation
+## Features

-To get started, you'll need to have Python 3.8 or newer installed. Then, follow these steps:
+- **System Log Monitoring**: Tracks new entries in `/var/log/syslog` and `/var/log/auth.log` (for login attempts).
+- **Network Metrics**: Gathers network performance data by pinging a public IP (e.g., 8.8.8.8).
+- **Hardware Monitoring**: Collects CPU and GPU temperature data.
+- **Nmap Scanning**: Periodically performs network scans to discover hosts and open ports.
+- **Historical Baseline Analysis**: Compares current data against a 24-hour rolling baseline to identify deviations.
+- **LLM-Powered Anomaly Detection**: Utilizes a local LLM (Ollama with Llama3.1) to analyze combined system data, baselines, and Nmap changes for anomalies.
+- **Alerting**: Sends high-severity anomaly alerts to Discord and Google Home speakers (via Home Assistant).
+- **Daily Recap**: Provides a daily summary of detected events.

-1. **Clone the repository or download the files:**
-
-   ```bash
-   git clone <repository_url>
-   cd <repository_directory>
-   ```
+## Recent Improvements
+
+- **Enhanced Nmap Data Logging**: The Nmap scan results are now processed and stored in a more structured format, including:
+  - Discovered IP addresses.
+  - Status of each host.
+  - Detailed list of open ports for each host, including service, product, and version information.
+
+  This significantly improves the clarity and utility of Nmap data for anomaly detection.
+- **Code Refactoring (`monitor_agent.py`)**:
+  - **Optimized Sensor Data Collection**: CPU and GPU temperature data are now collected with a single call to the `sensors` command, improving efficiency.
+  - **Efficient Login Attempt Logging**: The agent now tracks its position in `/var/log/auth.log`, preventing redundant reads of the entire file and improving performance for large log files.
+  - **Modular Main Loop**: The core monitoring logic has been broken down into smaller, more manageable functions, enhancing readability and maintainability.
+  - **Separated LLM Prompt Building**: The complex LLM prompt construction logic has been moved into a dedicated function, making `analyze_data_with_llm` more focused.
+- **Code Refactoring (`data_storage.py`)**:
+  - **Streamlined Baseline Calculations**: Helper functions have been introduced to reduce code duplication and improve clarity in the calculation of average metrics for baselines.

-2. **Create and activate a Python virtual environment:**
+## Setup and Installation

-   ```bash
-   python -m venv venv
-   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
-   ```
-
-3. **Install the required Python libraries:**
-
-   ```bash
-   pip install -r requirements.txt
-   ```
-
-## 2. Setup
-
-Before running the agent, you need to configure it and ensure the necessary services are running.

 ### Prerequisites

-- **Ollama:** The agent requires that [Ollama](https://ollama.com/) is installed and running on the server.
-- **LLM Model:** You must have the `llama3.1:8b` model pulled and available in Ollama. You can pull it with the following command:
+- Python 3.x
+- `ollama` installed and running with the `llama3.1:8b` model pulled (`ollama pull llama3.1:8b`)
+- `nmap` installed
+- `lm-sensors` installed (for CPU/GPU temperature monitoring)
+- Discord webhook URL
+- (Optional) Home Assistant instance with a long-lived access token and a Google Home speaker configured.
+
+### Installation
+
+1. Clone the repository:

 ```bash
-ollama pull llama3.1:8b
+git clone <repository_url>
+cd LLM-Powered-Monitoring-Agent
 ```

+2. Install Python dependencies:
+
+```bash
+pip install -r requirements.txt
+```
+
+3. Configure the agent:
+
+   - Open `config.py` and update the following variables:
+     - `DISCORD_WEBHOOK_URL`
+     - `HOME_ASSISTANT_URL` (if using Google Home alerts)
+     - `HOME_ASSISTANT_TOKEN` (if using Google Home alerts)
+     - `GOOGLE_HOME_SPEAKER_ID` (if using Google Home alerts)
+     - `NMAP_TARGETS` (e.g., "192.168.1.0/24" or "192.168.1.100")
+     - `NMAP_SCAN_OPTIONS` (default is "-sS -T4")
+     - `DAILY_RECAP_TIME` (e.g., "20:00" for 8 PM)
+     - `TEST_MODE` (set to `True` for a single run, `False` for continuous operation)

-### Configuration
+## Usage

-All configuration is done in the `config.py` file. You will need to replace the placeholder values with your actual credentials and URLs.
-
-- `DISCORD_WEBHOOK_URL`: Your Discord channel's webhook URL. This is used to send alerts.
-- `HOME_ASSISTANT_URL`: The URL of your Home Assistant instance (e.g., `http://192.168.1.50:8123`).
-- `HOME_ASSISTANT_TOKEN`: A Long-Lived Access Token for your Home Assistant instance. You can generate this in your Home Assistant profile settings.
-- `GOOGLE_HOME_SPEAKER_ID`: The `media_player` entity ID for your Google Home speaker in Home Assistant (e.g., `media_player.kitchen_speaker`).
-
-## 3. Usage
-
-Once the installation and setup are complete, you can run the monitoring agent with the following command:
+To run the monitoring agent:

 ```bash
 python monitor_agent.py
 ```

-The script will start a continuous monitoring loop. Every 5 minutes, it will:
-
-1. Collect simulated system and network data.
-2. Send the data to the local LLM for analysis.
-3. If the LLM detects a **high-severity** anomaly, it will send an alert to your configured Discord channel and broadcast a message to your Google Home speaker via Home Assistant.
-4. At the time specified in `DAILY_RECAP_TIME`, a summary of all anomalies for the day will be sent to the Discord channel.
-
-The script will print its status and any detected anomalies to the console.
+### Test Mode
+
+Set `TEST_MODE = True` in `config.py` to run the agent once and exit. This is useful for testing configurations and initial setup.
+
+## Extending and Customizing

-### Nmap Scans
-
-The agent uses `nmap` to scan the network for open ports. By default, it uses a TCP SYN scan (`-sS`), which requires root privileges. If the script is not run as root, it will fall back to a TCP connect scan (`-sT`), which does not require root privileges but is slower and more likely to be detected.
-
-To run the agent with root privileges, use the `sudo` command:
-
-```bash
-sudo python monitor_agent.py
-```
-
-## 4. Features
-
-### Priority System
+- **Adding New Metrics**: You can add new data collection functions in `monitor_agent.py` and include their results in the `combined_data` dictionary (a sketch follows this file's diff).
+- **Customizing LLM Analysis**: Modify the `CONSTRAINTS.md` file to provide specific instructions or constraints to the LLM for anomaly detection.
+- **Known Issues**: Update `known_issues.json` with any known or expected system behaviors to prevent the LLM from flagging them as anomalies.
+- **Alerting Mechanisms**: Implement additional alerting functions (e.g., email, SMS) in `monitor_agent.py` and integrate them into the anomaly detection logic.
+
+## Project Structure
+
+- `monitor_agent.py`: Main script for data collection, LLM interaction, and alerting.
+- `data_storage.py`: Handles loading, storing, and calculating baselines from historical data.
+- `config.py`: Stores configurable parameters for the agent.
+- `requirements.txt`: Lists Python dependencies.
+- `CONSTRAINTS.md`: Defines constraints and guidelines for the LLM's analysis.
+- `known_issues.json`: A JSON file containing a list of known issues to be considered by the LLM.
+- `monitoring_data.json`: (Generated) Stores historical monitoring data.
+- `log_position.txt`: (Generated) Stores the last read position for `/var/log/syslog`.
+- `auth_log_position.txt`: (Generated) Stores the last read position for `/var/log/auth.log`.

-The monitoring agent uses a priority system to classify anomalies. The LLM is instructed to return a severity level for each anomaly it detects. The possible severity levels are:
-
-- **high**: Indicates a critical issue that requires immediate attention. An alert is sent to Discord and Google Home.
-- **medium**: Indicates a non-critical issue that should be investigated. No alert is sent.
-- **low**: Indicates a minor issue or a potential false positive. No alert is sent.
-- **none**: Indicates that no anomaly was detected.
-
-### Known Issues Feed
-
-The agent uses a `known_issues.json` file to provide the LLM with a list of known issues and their resolutions. This helps the LLM to avoid flagging resolved or expected issues as anomalies.
-
-You can add new issues to the `known_issues.json` file by following the existing format. Each issue should have an "issue" and a "resolution" key. For example:
-
-```json
-[
-  {
-    "issue": "CPU temperature spikes to 80C under heavy load",
-    "resolution": "This is normal behavior for this CPU model and is not a cause for concern."
-  }
-]
-```
-
-**Note on Mock Data:** The current version of the script uses mock data for system logs and network metrics. To use this in a real-world scenario, you would need to replace the mock data with actual data from your systems.
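Editor's note (not part of the diff): as a concrete illustration of the "Adding New Metrics" note in the new README, a minimal sketch of what an extra collection function might look like. `get_disk_usage` and the `disk_usage` key are hypothetical names; `combined_data` and `run_monitoring_cycle` are the names the project itself uses.

```python
# Illustrative sketch only. get_disk_usage() and the "disk_usage" key are made up;
# combined_data is the dictionary the agent already builds each monitoring cycle.
import shutil

def get_disk_usage(path="/"):
    """Collects disk usage for one mount point as a whole-number percentage."""
    usage = shutil.disk_usage(path)
    return {"disk_usage_percent": round(usage.used / usage.total * 100)}

# Inside run_monitoring_cycle(), the new metric would be merged like any other:
# combined_data["disk_usage"] = get_disk_usage()
```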
SPEC.md — 37 lines changed

@@ -14,6 +14,10 @@ The project will be composed of the following files:
 - **`README.md`**: A documentation file providing an overview of the project, setup instructions, and usage examples.
 - **`.gitignore`**: A file to specify which files and directories should be ignored by Git.
 - **`PROGRESS.md`**: A file to track the development progress of the project.
+- **`data_storage.py`**: Handles loading, storing, and calculating baselines from historical data.
+- **`CONSTRAINTS.md`**: Defines constraints and guidelines for the LLM's analysis.
+- **`known_issues.json`**: A JSON file containing a list of known issues to be considered by the LLM.
+- **`AGENTS.md`**: Documents the human and autonomous agents involved in the project.

 ## 3. Functional Requirements

@@ -26,10 +30,12 @@ The project will be composed of the following files:
 - `HOME_ASSISTANT_TOKEN`
 - `GOOGLE_HOME_SPEAKER_ID`
 - `DAILY_RECAP_TIME`
+- `NMAP_TARGETS`
+- `NMAP_SCAN_OPTIONS`

 ### 3.2. Data Ingestion and Parsing

-- The agent must be able to collect and parse system logs.
+- The agent must be able to collect and parse system logs (syslog and auth.log).
 - The agent must be able to collect and parse network metrics.
 - The parsing of this data should result in a structured format (JSON or Python dictionary).

@@ -38,24 +44,25 @@ The project will be composed of the following files:
 - **CPU Temperature**: The agent will monitor the CPU temperature.
 - **GPU Temperature**: The agent will monitor the GPU temperature.
 - **System Login Attempts**: The agent will monitor system login attempts.
+- **Network Scan Results (Nmap)**: The agent will periodically perform Nmap scans to discover hosts and open ports, logging detailed information including IP addresses, host status, and open ports with service details.

-### 3.3. LLM Analysis
+### 3.4. LLM Analysis

 - The agent must use a local LLM (via Ollama) to analyze the collected data.
-- The agent must construct a specific prompt to guide the LLM in identifying anomalies.
+- The agent must construct a specific prompt to guide the LLM in identifying anomalies, incorporating historical baselines and known issues.
-- The LLM's response will be either "OK" (no anomaly) or a natural language paragraph describing the anomaly, including a severity level (high, medium, low).
+- The LLM's response will be a structured JSON object with `severity` (high, medium, low, none) and `reason` fields (an example follows this hunk).

-### 3.4. Alerting
+### 3.5. Alerting

 - The agent must be able to send alerts to a Discord webhook.
 - The agent must be able to trigger a text-to-speech (TTS) alert on a Google Home speaker via Home Assistant.

-### 3.5. Alerting Logic
+### 3.6. Alerting Logic

 - Immediate alerts (Discord and Home Assistant) will only be sent for "high" severity anomalies.
 - A daily recap of all anomalies (high, medium, and low) will be sent at a configurable time.

-### 3.6. Main Loop
+### 3.7. Main Loop

 - The agent will run in a continuous loop.
 - The loop will execute the data collection, analysis, and alerting steps periodically.
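Editor's note (not part of the diff): an example of the structured response described in 3.4 above, shown as the parsed Python object. The values are invented; only the two key names and the allowed severity levels come from the spec.

```python
# Illustrative example of the structured report described in SPEC 3.4.
example_report = {
    "severity": "high",  # one of "high", "medium", "low", "none"
    "reason": "3 failed login attempts detected and a new host appeared on the network.",
}
```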
@@ -64,26 +71,33 @@ The project will be composed of the following files:
 ## 4. Data Storage and Baselining

 - **4.1. Data Storage**: The agent will store historical monitoring data in a JSON file (`monitoring_data.json`).
-- **4.2. Baselining**: The agent will calculate baseline averages for key metrics (e.g., RTT, packet loss) from the stored historical data. This baseline will be used by the LLM to improve anomaly detection accuracy.
+- **4.2. Baselining**: The agent will calculate baseline averages for key metrics (e.g., RTT, packet loss, temperatures, open ports) from the stored historical data. This baseline will be used by the LLM to improve anomaly detection accuracy (a sketch of the baseline structure follows this hunk).

 ## 5. Technical Requirements

 - **Language**: Python 3.8+
 - **LLM**: `llama3.1:8b` running on a local Ollama instance.
+- **Prerequisites**: `nmap`, `lm-sensors`
 - **Libraries**:
   - `ollama`
   - `discord-webhook`
   - `requests`
   - `syslog-rfc5424-parser`
-  - `apachelogs`
+  - `pingparsing`
-  - `jc`
+  - `python-nmap`

 ## 6. Project Structure

 ```
 /
 ├── .gitignore
+├── AGENTS.md
 ├── config.py
+├── CONSTRAINTS.md
+├── data_storage.py
+├── known_issues.json
+├── log_position.txt
+├── auth_log_position.txt
 ├── monitor_agent.py
 ├── PROMPT.md
 ├── README.md
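Editor's note (not part of the diff): a sketch of the baseline structure referenced in 4.2. The key names come from `calculate_baselines()` in `data_storage.py` later in this compare; the values here are invented. Averages are rounded up to whole integers with `math.ceil` before being handed to the LLM.

```python
# Illustrative baseline snapshot; key names from data_storage.py, values invented.
example_baselines = {
    "avg_rtt": 18,        # average round-trip time in ms
    "packet_loss": 0,     # average packet loss rate
    "avg_cpu_temp": 42,   # degrees Celsius
    "avg_gpu_temp": 48,   # degrees Celsius
    "host_ports": {
        "192.168.2.10": [22, 80],
        "192.168.2.15": [62078],
    },
}
```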
@@ -91,3 +105,6 @@ The project will be composed of the following files:
 ├── PROGRESS.md
 └── SPEC.md
 ```
+
+## 7. Testing and Debugging
+The script is equipped with a test mode that runs the script only once instead of continuously. To enable it, change the `TEST_MODE` variable in `config.py` to `True`. Once finished testing, change the variable back to `False`.
config.py

@@ -9,11 +9,11 @@ HOME_ASSISTANT_TOKEN = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJjOGRmZjI
 GOOGLE_HOME_SPEAKER_ID = "media_player.spencer_room_speaker"

 # Daily Recap Time (in 24-hour format, e.g., "20:00")
-DAILY_RECAP_TIME = "20:00"
+DAILY_RECAP_TIME = "18:28"

 # Nmap Configuration
-NMAP_TARGETS = "192.168.1.0/24"
+NMAP_TARGETS = "192.168.2.0/24"
-NMAP_SCAN_OPTIONS = "-sS -T4"
+NMAP_SCAN_OPTIONS = "-sS -T4 -R"

 # Test Mode (True to run once and exit, False to run continuously)
 TEST_MODE = False
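Editor's note (not part of the diff): `monitor_agent.py` consumes these settings with a plain `import config`, so the values above are the ones actually used at runtime. A minimal orientation sketch:

```python
# Minimal sketch of how monitor_agent.py reads the settings above.
import config

print(config.NMAP_TARGETS)       # "192.168.2.0/24" after this change
print(config.NMAP_SCAN_OPTIONS)  # "-sS -T4 -R" -- -R makes Nmap always do reverse DNS lookups
print(config.DAILY_RECAP_TIME)   # "18:28" after this change
```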
data_storage.py

@@ -1,6 +1,7 @@
 import json
 import os
 from datetime import datetime, timedelta, timezone
+import math

 DATA_FILE = 'monitoring_data.json'

@@ -16,6 +17,11 @@ def store_data(new_data):
     with open(DATA_FILE, 'w') as f:
         json.dump(data, f, indent=4)

+def _calculate_average(data, key1, key2):
+    """Helper function to calculate the average of a nested key in a list of dicts."""
+    values = [d[key1][key2] for d in data if key1 in d and key2 in d[key1] and d[key1][key2] != "N/A"]
+    return math.ceil(sum(values) / len(values)) if values else 0
+
 def calculate_baselines():
     data = load_data()
     if not data:

@@ -29,23 +35,23 @@ def calculate_baselines():
         return {}

     baseline_metrics = {
-        'avg_rtt': sum(d['network_metrics']['rtt_avg'] for d in recent_data if 'rtt_avg' in d['network_metrics']) / len(recent_data),
-        'packet_loss': sum(d['network_metrics']['packet_loss_rate'] for d in recent_data if 'packet_loss_rate' in d['network_metrics']) / len(recent_data),
-        'avg_cpu_temp': sum(d['cpu_temperature']['cpu_temperature'] for d in recent_data if d['cpu_temperature']['cpu_temperature'] != "N/A") / len(recent_data),
-        'avg_gpu_temp': sum(d['gpu_temperature']['gpu_temperature'] for d in recent_data if d['gpu_temperature']['gpu_temperature'] != "N/A") / len(recent_data),
+        'avg_rtt': _calculate_average(recent_data, 'network_metrics', 'rtt_avg'),
+        'packet_loss': _calculate_average(recent_data, 'network_metrics', 'packet_loss_rate'),
+        'avg_cpu_temp': _calculate_average(recent_data, 'cpu_temperature', 'cpu_temperature'),
+        'avg_gpu_temp': _calculate_average(recent_data, 'gpu_temperature', 'gpu_temperature'),
     }

     # Baseline for open ports from nmap scans
     host_ports = {}
     for d in recent_data:
-        if 'nmap_results' in d and 'scan' in d['nmap_results']:
-            for host, scan_data in d['nmap_results']['scan'].items():
-                if host not in host_ports:
-                    host_ports[host] = set()
-                if 'tcp' in scan_data:
-                    for port, port_data in scan_data['tcp'].items():
-                        if port_data['state'] == 'open':
-                            host_ports[host].add(port)
+        if 'nmap_results' in d and 'hosts' in d.get('nmap_results', {}):
+            for host_info in d['nmap_results']['hosts']:
+                host_ip = host_info['ip']
+                if host_ip not in host_ports:
+                    host_ports[host_ip] = set()
+                for port_info in host_info.get('open_ports', []):
+                    host_ports[host_ip].add(port_info['port'])

     # Convert sets to sorted lists for JSON serialization
     for host, ports in host_ports.items():
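Editor's note (not part of the diff): a quick worked example of the rounding behaviour of the new `_calculate_average` helper, with invented sample values. Because `math.ceil` is applied, the baselines handed to the LLM are always whole integers, matching the "no decimal points" requirement in PROGRESS.md.

```python
import math

# Invented sample: three RTT readings in ms.
values = [18.2, 19.7, 20.1]
print(sum(values) / len(values))             # 19.33... (raw float average)
print(math.ceil(sum(values) / len(values)))  # 20 -- what _calculate_average returns
```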
known_issues.json

@@ -1,10 +1,26 @@
 [
     {
-        "issue": "CPU temperature spikes to 90C under heavy load",
-        "resolution": "This is normal behavior for this CPU model and is not a cause for concern."
+        "issue": "CPU temperatures less than the average",
+        "resolution": "This is normal behavior for CPUs when not in use. Lower temps are usually a good thing."
     },
     {
         "issue": "Access attempts from unknown IP Addresses",
         "resolution": "ufw has been enabled, and blocks all connections by default. The only IP Addresses allowed are 192.168.2.0/24 and 100.64.0.0/10"
+    },
+    {
+        "issue": "Network timing values are lower than average",
+        "resolution": "In networking, timing values lower than the average are often good things, and do not need to be considered an anomaly"
+    },
+    {
+        "issue": "Port 62078 is open",
+        "resolution": "This is normal behavior for Apple devices. Do not report."
+    },
+    {
+        "issue": "RTT averages are higher than average",
+        "resolution": "Fluctuation is normal, and there is no need to report anything within 5s of the average RTT."
+    },
+    {
+        "issue": "Temperatures are higher than average",
+        "resolution": "Fluctuation is normal, and there is no need to report anything within 5 degrees Celsius of the average temperature."
     }
 ]
monitor_agent.log — 1167 lines changed (file diff suppressed because it is too large)
monitor_agent.py — 376 lines changed

@@ -12,13 +12,35 @@ import os
 from datetime import datetime, timezone
 import pingparsing
 import nmap
+import logging
+from logging.handlers import TimedRotatingFileHandler
+
+import schedule

 # Load configuration
 import config

 from syslog_rfc5424_parser import parser

+# --- Logging Configuration ---
+LOG_FILE = "monitoring_agent.log"
+logger = logging.getLogger(__name__)
+logger.setLevel(logging.INFO)
+
+# Create a handler that rotates logs daily, keeping 1 backup
+file_handler = TimedRotatingFileHandler(LOG_FILE, when="midnight", interval=1, backupCount=1)
+file_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
+
+# Create a handler for console output
+console_handler = logging.StreamHandler()
+console_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
+
+logger.addHandler(file_handler)
+logger.addHandler(console_handler)
+
 LOG_POSITION_FILE = 'log_position.txt'
+AUTH_LOG_POSITION_FILE = 'auth_log_position.txt'

 # --- Data Ingestion & Parsing Functions ---
@@ -48,13 +70,12 @@ def get_system_logs():

         return {"syslog": parsed_logs}
     except FileNotFoundError:
-        print("Error: /var/log/syslog not found.")
+        logger.error("/var/log/syslog not found.")
         return {"syslog": []}
     except Exception as e:
-        print(f"Error reading syslog: {e}")
+        logger.error(f"Error reading syslog: {e}")
         return {"syslog": []}

-import pingparsing

 def get_network_metrics():
     """Gets network metrics by pinging 8.8.8.8."""
@@ -66,27 +87,32 @@ def get_network_metrics():
         result = transmitter.ping()
         return ping_parser.parse(result).as_dict()
     except Exception as e:
-        print(f"Error getting network metrics: {e}")
+        logger.error(f"Error getting network metrics: {e}")
         return {"error": "ping command failed"}

-def get_cpu_temperature():
-    """Gets the CPU temperature using the sensors command."""
+def get_sensor_data():
+    """Gets all sensor data at once."""
     try:
-        sensors_output = subprocess.check_output(["sensors"], text=True)
+        return subprocess.check_output(["sensors"], text=True)
+    except (subprocess.CalledProcessError, FileNotFoundError):
+        logger.error("'sensors' command not found. Please install lm-sensors.")
+        return None
+
+def get_cpu_temperature(sensors_output):
+    """Gets the CPU temperature from the sensors output."""
+    if not sensors_output:
+        return {"cpu_temperature": "N/A"}
     # Use regex to find the CPU temperature
     match = re.search(r"Package id 0:\s+\+([\d\.]+)", sensors_output)
     if match:
         return {"cpu_temperature": float(match.group(1))}
     else:
         return {"cpu_temperature": "N/A"}
-    except (subprocess.CalledProcessError, FileNotFoundError):
-        print("Error: 'sensors' command not found. Please install lm-sensors.")
-        return {"cpu_temperature": "N/A"}

-def get_gpu_temperature():
-    """Gets the GPU temperature using the sensors command."""
-    try:
-        sensors_output = subprocess.check_output(["sensors"], text=True)
+def get_gpu_temperature(sensors_output):
+    """Gets the GPU temperature from the sensors output."""
+    if not sensors_output:
+        return {"gpu_temperature": "N/A"}
     # Use regex to find the GPU temperature for amdgpu
     match = re.search(r"edge:\s+\+([\d\.]+)", sensors_output)
     if match:
@@ -98,15 +124,23 @@ def get_gpu_temperature():
         return {"gpu_temperature": float(match.group(1))}
     else:
         return {"gpu_temperature": "N/A"}
-    except (subprocess.CalledProcessError, FileNotFoundError):
-        print("Error: 'sensors' command not found. Please install lm-sensors.")
-        return {"gpu_temperature": "N/A"}

 def get_login_attempts():
-    """Gets system login attempts from /var/log/auth.log."""
+    """Gets system login attempts from /var/log/auth.log since the last check."""
     try:
+        last_position = 0
+        if os.path.exists(AUTH_LOG_POSITION_FILE):
+            with open(AUTH_LOG_POSITION_FILE, 'r') as f:
+                last_position = int(f.read())
+
         with open("/var/log/auth.log", "r") as f:
+            f.seek(last_position)
             log_lines = f.readlines()
+            current_position = f.tell()
+
+        with open(AUTH_LOG_POSITION_FILE, 'w') as f:
+            f.write(str(current_position))

         failed_logins = []
         for line in log_lines:
@@ -115,129 +149,189 @@ def get_login_attempts():
|
|||||||
|
|
||||||
return {"failed_login_attempts": failed_logins}
|
return {"failed_login_attempts": failed_logins}
|
||||||
except FileNotFoundError:
|
except FileNotFoundError:
|
||||||
print("Error: /var/log/auth.log not found.")
|
logger.error("/var/log/auth.log not found.")
|
||||||
return {"failed_login_attempts": []}
|
return {"failed_login_attempts": []}
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
print(f"Error reading login attempts: {e}")
|
logger.error(f"Error reading login attempts: {e}")
|
||||||
return {"failed_logins": []}
|
return {"failed_logins": []}
|
||||||
|
|
||||||
def get_nmap_scan_results():
|
def get_nmap_scan_results():
|
||||||
"""Performs an Nmap scan and returns the results."""
|
"""Performs an Nmap scan and returns a structured summary."""
|
||||||
try:
|
try:
|
||||||
nm = nmap.PortScanner()
|
nm = nmap.PortScanner()
|
||||||
scan_options = config.NMAP_SCAN_OPTIONS
|
scan_options = config.NMAP_SCAN_OPTIONS
|
||||||
if os.geteuid() != 0 and "-sS" in scan_options:
|
if os.geteuid() != 0 and "-sS" in scan_options:
|
||||||
print("Warning: Nmap -sS scan requires root privileges. Falling back to -sT.")
|
logger.warning("Nmap -sS scan requires root privileges. Falling back to -sT.")
|
||||||
scan_options = scan_options.replace("-sS", "-sT")
|
scan_options = scan_options.replace("-sS", "-sT")
|
||||||
|
|
||||||
scan_results = nm.scan(hosts=config.NMAP_TARGETS, arguments=scan_options)
|
scan_results = nm.scan(hosts=config.NMAP_TARGETS, arguments=scan_options)
|
||||||
return scan_results
|
|
||||||
|
# Process the results into a more structured format
|
||||||
|
processed_results = {"hosts": []}
|
||||||
|
if "scan" in scan_results:
|
||||||
|
for host, scan_data in scan_results["scan"].items():
|
||||||
|
host_info = {
|
||||||
|
"ip": host,
|
||||||
|
"status": scan_data.get("status", {}).get("state", "unknown"),
|
||||||
|
"hostname": scan_data.get("hostnames", [{}])[0].get("name", ""),
|
||||||
|
"open_ports": []
|
||||||
|
}
|
||||||
|
if "tcp" in scan_data:
|
||||||
|
for port, port_data in scan_data["tcp"].items():
|
||||||
|
if port_data.get("state") == "open":
|
||||||
|
host_info["open_ports"].append({
|
||||||
|
"port": port,
|
||||||
|
"service": port_data.get("name", ""),
|
||||||
|
"product": port_data.get("product", ""),
|
||||||
|
"version": port_data.get("version", "")
|
||||||
|
})
|
||||||
|
processed_results["hosts"].append(host_info)
|
||||||
|
|
||||||
|
return processed_results
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
print(f"Error performing Nmap scan: {e}")
|
logger.error(f"Error performing Nmap scan: {e}")
|
||||||
return {"error": "Nmap scan failed"}
|
return {"error": "Nmap scan failed"}
|
||||||
|
|
||||||
# --- LLM Interaction Function ---
|
# --- Data Analysis ---
|
||||||
|
|
||||||
def analyze_data_with_llm(data, baselines):
|
def analyze_data_locally(data, baselines, known_issues, port_applications):
|
||||||
"""Analyzes data with the local LLM."""
|
"""Analyzes the collected data to find anomalies without using an LLM."""
|
||||||
with open("CONSTRAINTS.md", "r") as f:
|
anomalies = []
|
||||||
constraints = f.read()
|
|
||||||
|
|
||||||
with open("known_issues.json", "r") as f:
|
# Temperature checks
|
||||||
known_issues = json.load(f)
|
cpu_temp = data.get("cpu_temperature", {}).get("cpu_temperature")
|
||||||
|
gpu_temp = data.get("gpu_temperature", {}).get("gpu_temperature")
|
||||||
|
baseline_cpu_temp = baselines.get("average_cpu_temperature")
|
||||||
|
baseline_gpu_temp = baselines.get("average_gpu_temperature")
|
||||||
|
|
||||||
# Compare current nmap results with baseline
|
if isinstance(cpu_temp, (int, float)) and isinstance(baseline_cpu_temp, (int, float)):
|
||||||
nmap_changes = {"new_hosts": [], "changed_ports": {}}
|
if abs(cpu_temp - baseline_cpu_temp) > 5:
|
||||||
|
anomalies.append({
|
||||||
|
"severity": "medium",
|
||||||
|
"reason": f"CPU temperature deviation detected. Current: {cpu_temp}°C, Baseline: {baseline_cpu_temp}°C"
|
||||||
|
})
|
||||||
|
|
||||||
|
if isinstance(gpu_temp, (int, float)) and isinstance(baseline_gpu_temp, (int, float)):
|
||||||
|
if abs(gpu_temp - baseline_gpu_temp) > 5:
|
||||||
|
anomalies.append({
|
||||||
|
"severity": "medium",
|
||||||
|
"reason": f"GPU temperature deviation detected. Current: {gpu_temp}°C, Baseline: {baseline_gpu_temp}°C"
|
||||||
|
})
|
||||||
|
|
||||||
|
# Network RTT check
|
||||||
|
current_rtt = data.get("network_metrics", {}).get("rtt_avg")
|
||||||
|
baseline_rtt = baselines.get("average_rtt_avg")
|
||||||
|
|
||||||
|
if isinstance(current_rtt, (int, float)) and isinstance(baseline_rtt, (int, float)):
|
||||||
|
if abs(current_rtt - baseline_rtt) > 10000:
|
||||||
|
anomalies.append({
|
||||||
|
"severity": "high",
|
||||||
|
"reason": f"High network RTT fluctuation detected. Current: {current_rtt}ms, Baseline: {baseline_rtt}ms"
|
||||||
|
})
|
||||||
|
|
||||||
|
# Failed login attempts check
|
||||||
|
failed_logins = data.get("login_attempts", {}).get("failed_login_attempts")
|
||||||
|
if failed_logins:
|
||||||
|
anomalies.append({
|
||||||
|
"severity": "high",
|
||||||
|
"reason": f"{len(failed_logins)} failed login attempts detected."
|
||||||
|
})
|
||||||
|
|
||||||
|
# Nmap scan changes check
|
||||||
if "nmap_results" in data and "host_ports" in baselines:
|
if "nmap_results" in data and "host_ports" in baselines:
|
||||||
current_hosts = set(data["nmap_results"].get("scan", {}).keys())
|
current_hosts_info = {host['ip']: host for host in data["nmap_results"].get("hosts", [])}
|
||||||
|
current_hosts = set(current_hosts_info.keys())
|
||||||
baseline_hosts = set(baselines["host_ports"].keys())
|
baseline_hosts = set(baselines["host_ports"].keys())
|
||||||
|
|
||||||
# New hosts
|
# New hosts
|
||||||
nmap_changes["new_hosts"] = sorted(list(current_hosts - baseline_hosts))
|
new_hosts = sorted(list(current_hosts - baseline_hosts))
|
||||||
|
for host in new_hosts:
|
||||||
|
anomalies.append({
|
||||||
|
"severity": "high",
|
||||||
|
"reason": f"New host detected on the network: {host}"
|
||||||
|
})
|
||||||
|
|
||||||
# Changed ports on existing hosts
|
# Changed ports on existing hosts
|
||||||
for host in current_hosts.intersection(baseline_hosts):
|
for host_ip in current_hosts.intersection(baseline_hosts):
|
||||||
current_ports = set()
|
current_ports = set(p['port'] for p in current_hosts_info[host_ip].get("open_ports", []))
|
||||||
if "tcp" in data["nmap_results"]["scan"][host]:
|
baseline_ports = set(baselines["host_ports"].get(host_ip, []))
|
||||||
for port, port_data in data["nmap_results"]["scan"][host]["tcp"].items():
|
|
||||||
if port_data["state"] == "open":
|
|
||||||
current_ports.add(port)
|
|
||||||
|
|
||||||
baseline_ports = set(baselines["host_ports"].get(host, []))
|
|
||||||
|
|
||||||
newly_opened = sorted(list(current_ports - baseline_ports))
|
newly_opened = sorted(list(current_ports - baseline_ports))
|
||||||
newly_closed = sorted(list(baseline_ports - current_ports))
|
|
||||||
|
|
||||||
if newly_opened or newly_closed:
|
for port in newly_opened:
|
||||||
nmap_changes["changed_ports"][host] = {"opened": newly_opened, "closed": newly_closed}
|
port_info = port_applications.get(str(port), "Unknown")
|
||||||
|
anomalies.append({
|
||||||
|
"severity": "medium",
|
||||||
|
"reason": f"New port opened on {host_ip}: {port} ({port_info})"
|
||||||
|
})
|
||||||
|
|
||||||
prompt = f"""
|
return anomalies
|
||||||
**Role:** You are a dedicated and expert system administrator. Your primary role is to identify anomalies and provide concise, actionable reports.
|
|
||||||
|
|
||||||
**Instruction:** Analyze the following system and network data for any activity that appears out of place or different. Consider unusual values, errors, or unexpected patterns as anomalies. Compare the current data with the historical baseline data to identify significant deviations. Consult the known issues feed to avoid flagging resolved or expected issues. Pay special attention to the Nmap scan results for any new or unexpected open ports.
|
# --- LLM Interaction Function ---
|
||||||
|
|
||||||
**Context:**
|
def build_llm_prompt(anomalies):
|
||||||
Here is the system data in JSON format for your analysis: {json.dumps(data, indent=2)}
|
"""Builds the prompt for the LLM to generate a report from anomalies."""
|
||||||
|
return f"""
|
||||||
|
**Role:** You are a dedicated and expert system administrator. Your primary role is to provide a concise, actionable report based on a list of pre-identified anomalies.
|
||||||
|
|
||||||
**Historical Baseline Data:**
|
**Instruction:** Please synthesize the following list of anomalies into a single, human-readable report. The report should be a single JSON object with two keys: "severity" and "reason". The "severity" should be the highest severity from the list of anomalies. The "reason" should be a summary of all the anomalies.
|
||||||
{json.dumps(baselines, indent=2)}
|
|
||||||
|
|
||||||
**Nmap Scan Changes:**
|
**Anomalies:**
|
||||||
{json.dumps(nmap_changes, indent=2)}
|
{json.dumps(anomalies, indent=2)}
|
||||||
|
|
||||||
**Known Issues Feed:**
|
**Output Request:** Provide a report as a single JSON object with two keys: "severity" and "reason". The "severity" must be one of "high", "medium", "low", or "none". The "reason" must be a natural language explanation of the anomaly. If no anomaly is found, return a single JSON object with "severity" set to "none" and "reason" as an empty string. Do not wrap the JSON in markdown or any other formatting. Only return the JSON, and nothing else.
|
||||||
{json.dumps(known_issues, indent=2)}
|
|
||||||
|
|
||||||
**Constraints and Guidelines:**
|
|
||||||
{constraints}
|
|
||||||
|
|
||||||
**Output Request:** If you find an anomaly, provide a report as a single JSON object with two keys: "severity" and "reason". The "severity" must be one of "high", "medium", "low", or "none". The "reason" must be a natural language explanation of the anomaly. If no anomaly is found, return a single JSON object with "severity" set to "none" and "reason" as an empty string. Do not wrap the JSON in markdown or any other formatting.
|
|
||||||
|
|
||||||
**Reasoning Hint:** Think step by step to come to your conclusion. This is very important.
|
|
||||||
"""
|
"""
|
||||||
|
|
||||||
+def generate_llm_report(anomalies):
+    """Generates a report from a list of anomalies using the local LLM."""
+    if not anomalies:
+        return {"severity": "none", "reason": ""}
+
+    prompt = build_llm_prompt(anomalies)
+
     try:
         response = ollama.generate(model="llama3.1:8b", prompt=prompt)
-        # Sanitize the response to ensure it's valid JSON
         sanitized_response = response['response'].strip()

+        # Extract JSON from the response
+        try:
             # Find the first '{' and the last '}' to extract the JSON object
             start_index = sanitized_response.find('{')
             end_index = sanitized_response.rfind('}')
             if start_index != -1 and end_index != -1:
                 json_string = sanitized_response[start_index:end_index+1]
-                try:
-                    return json.loads(json_string)
-                except json.JSONDecodeError:
-                    # If parsing a single object fails, try parsing as a list
-                    try:
-                        json_list = json.loads(json_string)
-                        if isinstance(json_list, list) and json_list:
-                            return json_list[0] # Return the first object in the list
-                    except json.JSONDecodeError as e:
-                        print(f"Error decoding LLM response: {e}")
-                        # Fallback for invalid JSON
-                        return {{"severity": "low", "reason": response['response'].strip()}} # type: ignore
+                llm_response = json.loads(json_string)
+                logger.info(f"LLM Response: {llm_response}")
+                return llm_response
             else:
                 # Handle cases where the response is not valid JSON
-                print(f"LLM returned a non-JSON response: {sanitized_response}")
-                return {{"severity": "low", "reason": sanitized_response}} # type: ignore
+                logger.warning(f"LLM returned a non-JSON response: {sanitized_response}")
+                return {"severity": "low", "reason": sanitized_response}
+        except json.JSONDecodeError as e:
+            logger.error(f"Error decoding LLM response: {e}")
+            # Fallback for invalid JSON
+            return {"severity": "low", "reason": sanitized_response}

     except Exception as e:
-        print(f"Error interacting with LLM: {e}")
+        logger.error(f"Error interacting with LLM: {e}")
         return None

 # --- Alerting Functions ---

-def send_discord_alert(message):
+def send_discord_alert(llm_response, combined_data):
     """Sends an alert to Discord."""
+    reason = llm_response.get('reason', 'No reason provided.')
+    message = f"**High Severity Alert:**\n> {reason}\n\n**Relevant Data:**\n```json\n{json.dumps(combined_data, indent=2)}\n```"
     webhook = DiscordWebhook(url=config.DISCORD_WEBHOOK_URL, content=message)
     try:
         response = webhook.execute()
         if response.status_code == 200:
-            print("Discord alert sent successfully.")
+            logger.info("Discord alert sent successfully.")
         else:
-            print(f"Error sending Discord alert: {response.status_code} - {response.content}")
+            logger.error(f"Error sending Discord alert: {response.status_code} - {response.content}")
     except Exception as e:
-        print(f"Error sending Discord alert: {e}")
+        logger.error(f"Error sending Discord alert: {e}")

 def send_google_home_alert(message):
     """Sends an alert to a Google Home speaker via Home Assistant."""
@@ -246,8 +340,8 @@ def send_google_home_alert(message):
         response = ollama.generate(model="llama3.1:8b", prompt=f"Summarize the following message in a single sentence: {message}")
         simplified_message = response['response'].strip()
     except Exception as e:
-        print(f"Error summarizing message: {e}")
+        logger.error(f"Error summarizing message: {e}")
         simplified_message = message.split('.')[0] # Take the first sentence as a fallback

     url = f"{config.HOME_ASSISTANT_URL}/api/services/tts/speak"
     headers = {
@@ -262,55 +356,47 @@ def send_google_home_alert(message):
     try:
         response = requests.post(url, headers=headers, json=data)
         if response.status_code == 200:
-            print("Google Home alert sent successfully.")
+            logger.info("Google Home alert sent successfully.")
         else:
-            print(f"Error sending Google Home alert: {response.status_code} - {response.text}")
+            logger.error(f"Error sending Google Home alert: {response.status_code} - {response.text}")
     except Exception as e:
-        print(f"Error sending Google Home alert: {e}")
+        logger.error(f"Error sending Google Home alert: {e}")

 # --- Main Script Logic ---

+def is_alerting_time():
+    """Checks if the current time is within the alerting window (9am - 12am)."""
+    current_hour = datetime.now().hour
+    return 9 <= current_hour < 24

 daily_events = []

-if __name__ == "__main__":
-    if config.TEST_MODE:
-        print("Running in test mode...")
+def send_daily_recap():
+    """Sends a daily recap of events to Discord."""
+    global daily_events
+    if daily_events:
+        recap_message = "\n".join(daily_events)
+        webhook = DiscordWebhook(url=config.DISCORD_WEBHOOK_URL, content=f"**Daily Recap:**\n{recap_message}")
+        try:
+            response = webhook.execute()
+            if response.status_code == 200:
+                logger.info("Daily recap sent successfully.")
+            else:
+                logger.error(f"Error sending daily recap: {response.status_code} - {response.content}")
+        except Exception as e:
+            logger.error(f"Error sending daily recap: {e}")
+        daily_events = [] # Reset for the next day
+
+def run_monitoring_cycle(nmap_scan_counter):
+    """Runs a single monitoring cycle."""
+    logger.info("Running monitoring cycle...")
     system_logs = get_system_logs()
     network_metrics = get_network_metrics()
-        cpu_temp = get_cpu_temperature()
-        gpu_temp = get_gpu_temperature()
-        login_attempts = get_login_attempts()
-        nmap_results = get_nmap_scan_results()
+    sensors_output = get_sensor_data()
+    cpu_temp = get_cpu_temperature(sensors_output)
+    gpu_temp = get_gpu_temperature(sensors_output)

-        if system_logs and network_metrics:
-            combined_data = {
-                "timestamp": datetime.now(timezone.utc).isoformat(),
-                "system_logs": system_logs,
-                "network_metrics": network_metrics,
-                "cpu_temperature": cpu_temp,
-                "gpu_temperature": gpu_temp,
-                "login_attempts": login_attempts,
-                "nmap_results": nmap_results
-            }
-            data_storage.store_data(combined_data)
-
-            llm_response = analyze_data_with_llm(combined_data, data_storage.calculate_baselines())
-
-            if llm_response and llm_response.get('severity') != "none":
-                print(f"Anomaly detected: {llm_response.get('reason')}")
-                if llm_response.get('severity') == "high":
-                    send_discord_alert(llm_response.get('reason'))
-                    send_google_home_alert(llm_response.get('reason'))
-            else:
-                print("No anomaly detected.")
-    else:
-        nmap_scan_counter = 0
-        while True:
-            print("Running monitoring cycle...")
-            system_logs = get_system_logs()
-            network_metrics = get_network_metrics()
-            cpu_temp = get_cpu_temperature()
-            gpu_temp = get_gpu_temperature()
     login_attempts = get_login_attempts()

     nmap_results = None
@@ -334,22 +420,36 @@ if __name__ == "__main__":
     data_storage.store_data(combined_data)

-    llm_response = analyze_data_with_llm(combined_data, data_storage.calculate_baselines())
+    with open("known_issues.json", "r") as f:
+        known_issues = json.load(f)
+
+    with open("port_applications.json", "r") as f:
+        port_applications = json.load(f)
+
+    baselines = data_storage.calculate_baselines()
+    anomalies = analyze_data_locally(combined_data, baselines, known_issues, port_applications)
+
+    if anomalies:
+        llm_response = generate_llm_report(anomalies)
         if llm_response and llm_response.get('severity') != "none":
             daily_events.append(llm_response.get('reason'))
-            if llm_response.get('severity') == "high":
-                send_discord_alert(llm_response.get('reason'))
+            if llm_response.get('severity') == "high" and is_alerting_time():
+                send_discord_alert(llm_response, combined_data)
                 send_google_home_alert(llm_response.get('reason'))
+    return nmap_scan_counter

-    # Daily Recap Logic
-    current_time = time.strftime("%H:%M")
-    if current_time == config.DAILY_RECAP_TIME and daily_events:
-        recap_message = "\n".join(daily_events)
-        send_discord_alert(f"**Daily Recap:**\n{recap_message}")
-        daily_events = [] # Reset for the next day
+def main():
+    """Main function to run the monitoring agent."""
+    if config.TEST_MODE:
+        logger.info("Running in test mode...")
+        run_monitoring_cycle(0)
+    else:
+        schedule.every().day.at(config.DAILY_RECAP_TIME).do(send_daily_recap)
+        nmap_scan_counter = 0
+        while True:
+            nmap_scan_counter = run_monitoring_cycle(nmap_scan_counter)
+            schedule.run_pending()
             time.sleep(300) # Run every 5 minutes

+if __name__ == "__main__":
+    main()

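Aside (not from the repo): the recap scheduling above relies on the `schedule` library's polling model; `schedule.run_pending()` only fires a job when it is called, so with the 300-second sleep the daily recap goes out within roughly five minutes of `config.DAILY_RECAP_TIME`. A minimal standalone sketch of that pattern, with a hard-coded time standing in for the config value:

    import time
    import schedule

    def job():
        print("daily recap would be sent here")

    schedule.every().day.at("21:00").do(job)  # "21:00" stands in for config.DAILY_RECAP_TIME

    while True:
        schedule.run_pending()  # only runs the job once its scheduled time has passed
        time.sleep(300)         # poll every 5 minutes
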
19
port_applications.json
Normal file
@@ -0,0 +1,19 @@
+{
+  "20": "FTP",
+  "21": "FTP",
+  "22": "SSH",
+  "23": "Telnet",
+  "25": "SMTP",
+  "53": "DNS",
+  "80": "HTTP",
+  "110": "POP3",
+  "143": "IMAP",
+  "443": "HTTPS",
+  "445": "SMB",
+  "587": "SMTP",
+  "993": "IMAPS",
+  "995": "POP3S",
+  "3306": "MySQL",
+  "3389": "RDP"
+}
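Aside (illustrative, not from the repo): a table like this is presumably used to put a human-readable label on ports that show up in the Nmap scan before the local analysis step. A minimal sketch under that assumption; the helper name and the list-of-ports input are hypothetical:

    import json

    def label_open_ports(open_ports, mapping_path="port_applications.json"):
        """Map scanned port numbers to application names; unknown ports are flagged as such."""
        with open(mapping_path, "r") as f:
            port_applications = json.load(f)
        return {port: port_applications.get(str(port), "unknown") for port in open_ports}

    # Example: label_open_ports([22, 80, 4444]) -> {22: "SSH", 80: "HTTP", 4444: "unknown"}
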
requirements.txt
@@ -1,6 +1,7 @@
-discord-webhook
+pingparsing
 requests
+discord-webhook
 ollama
 syslog-rfc5424-parser
-pingparsing
 python-nmap
+schedule
22
test_output.log
Normal file
@@ -0,0 +1,22 @@
+Traceback (most recent call last):
+  File "/home/artanis/Documents/LLM-Powered-Monitoring-Agent/monitor_agent.py", line 31, in <module>
+    file_handler = TimedRotatingFileHandler(LOG_FILE, when="midnight", interval=1, backupCount=1)
+  File "/home/artanis/.pyenv/versions/3.13.1/lib/python3.13/logging/handlers.py", line 223, in __init__
+    BaseRotatingHandler.__init__(self, filename, 'a', encoding=encoding,
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+                                delay=delay, errors=errors)
+                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/home/artanis/.pyenv/versions/3.13.1/lib/python3.13/logging/handlers.py", line 64, in __init__
+    logging.FileHandler.__init__(self, filename, mode=mode,
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
+                                encoding=encoding, delay=delay,
+                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+                                errors=errors)
+                                ^^^^^^^^^^^^^^
+  File "/home/artanis/.pyenv/versions/3.13.1/lib/python3.13/logging/__init__.py", line 1218, in __init__
+    StreamHandler.__init__(self, self._open())
+    ~~~~~~~~~~^^
+  File "/home/artanis/.pyenv/versions/3.13.1/lib/python3.13/logging/__init__.py", line 1247, in _open
+    return open_func(self.baseFilename, self.mode,
+                     encoding=self.encoding, errors=self.errors)
+PermissionError: [Errno 13] Permission denied: '/home/artanis/Documents/LLM-Powered-Monitoring-Agent/monitoring_agent.log'
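Aside (illustrative, not from the repo): the PermissionError above comes from opening monitoring_agent.log in a directory the process cannot write to. One way to guard against it, sketched here under the assumption that LOG_FILE matches the name in the traceback, is to fall back to a user-writable path when constructing the handler:

    import os
    from logging.handlers import TimedRotatingFileHandler

    LOG_FILE = "monitoring_agent.log"  # assumed name, matching the traceback

    try:
        file_handler = TimedRotatingFileHandler(LOG_FILE, when="midnight", interval=1, backupCount=1)
    except PermissionError:
        # Fall back to a per-user location if the project directory is not writable.
        fallback = os.path.join(os.path.expanduser("~/.local/state"), "monitoring_agent.log")
        os.makedirs(os.path.dirname(fallback), exist_ok=True)
        file_handler = TimedRotatingFileHandler(fallback, when="midnight", interval=1, backupCount=1)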