LLM-Powered Monitoring Agent

This project is a self-hosted monitoring agent that uses a local Large Language Model (LLM) to detect anomalies in system and network data. It's designed to be a simple, self-contained Python script that can be easily deployed on a server.

1. Installation

To get started, you'll need to have Python 3.8 or newer installed. Then, follow these steps:

  1. Clone the repository or download the files:

    git clone <repository_url>
    cd <repository_directory>
    
  2. Create and activate a Python virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
    
  3. Install the required Python libraries:

    pip install -r requirements.txt
    

2. Setup

Before running the agent, you need to configure it and ensure the necessary services are running.

Prerequisites

  • Ollama: The agent requires that Ollama is installed and running on the server.

  • LLM Model: You must have the llama3.1:8b model pulled and available in Ollama. You can pull it with the following command:

    ollama pull llama3.1:8b
    
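The agent talks to the model through Ollama's local HTTP API, which listens on port 11434 by default. A minimal sketch of such a call, assuming the standard /api/generate endpoint with streaming disabled (the function names and prompt are illustrative, not taken from monitor_agent.py):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3.1:8b") -> dict:
    """Assemble a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the model's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        # Non-streaming responses carry the full reply in the "response" field.
        return json.loads(resp.read())["response"]
```

If Ollama is not running, or the model has not been pulled, the request will fail, so verify both before starting the agent.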

Configuration

All configuration is done in the config.py file. You will need to replace the placeholder values with your actual credentials and URLs.

  • DISCORD_WEBHOOK_URL: Your Discord channel's webhook URL. This is used to send alerts.
  • HOME_ASSISTANT_URL: The URL of your Home Assistant instance (e.g., http://192.168.1.50:8123).
  • HOME_ASSISTANT_TOKEN: A Long-Lived Access Token for your Home Assistant instance. You can generate this in your Home Assistant profile settings.
  • GOOGLE_HOME_SPEAKER_ID: The media_player entity ID for your Google Home speaker in Home Assistant (e.g., media_player.kitchen_speaker).
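Taken together, a filled-in config.py might look like the sketch below. Every value is a placeholder to replace with your own; DAILY_RECAP_TIME is the recap setting referenced in the Usage section, and "18:00" is only an example.

```python
# config.py -- placeholder values; replace with your own credentials and URLs.
DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"
HOME_ASSISTANT_URL = "http://192.168.1.50:8123"
HOME_ASSISTANT_TOKEN = "<long-lived-access-token>"
GOOGLE_HOME_SPEAKER_ID = "media_player.kitchen_speaker"
DAILY_RECAP_TIME = "18:00"  # 24-hour HH:MM, local time
```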

3. Usage

Once the installation and setup are complete, you can run the monitoring agent with the following command:

python monitor_agent.py

The script will start a continuous monitoring loop. Every 5 minutes, it will:

  1. Collect simulated system and network data.
  2. Send the data to the local LLM for analysis.
  3. If the LLM detects a high-severity anomaly, it will send an alert to your configured Discord channel and broadcast a message to your Google Home speaker via Home Assistant.
  4. At the time specified in DAILY_RECAP_TIME, a summary of all anomalies for the day will be sent to the Discord channel.

The script will print its status and any detected anomalies to the console.
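The once-per-day recap timing is the only subtle part of this loop. A sketch of how it could be implemented is below; the helper name, the placeholder functions in the comments, and the loop shape are illustrative, not lifted from monitor_agent.py:

```python
from datetime import datetime

def recap_due(now: datetime, recap_time: str, last_recap_date) -> bool:
    """True once per day, after the configured HH:MM recap time has passed.

    Comparing zero-padded "HH:MM" strings lexicographically matches
    chronological order, so no time parsing is needed.
    """
    return now.strftime("%H:%M") >= recap_time and last_recap_date != now.date()

# Skeleton of the 5-minute loop (collect_data/analyze_with_llm/send_* are
# placeholders for the script's real functions):
# last_recap = None
# while True:
#     result = analyze_with_llm(collect_data())
#     if result["severity"] == "high":
#         send_alerts(result)
#     if recap_due(datetime.now(), DAILY_RECAP_TIME, last_recap):
#         send_daily_recap()
#         last_recap = datetime.now().date()
#     time.sleep(300)  # 5 minutes
```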

4. Features

Priority System

The monitoring agent uses a priority system to classify anomalies. The LLM is instructed to return a severity level for each anomaly it detects. The possible severity levels are:

  • high: Indicates a critical issue that requires immediate attention. An alert is sent to Discord and Google Home.
  • medium: Indicates a non-critical issue that should be investigated. No alert is sent.
  • low: Indicates a minor issue or a potential false positive. No alert is sent.
  • none: Indicates that no anomaly was detected.
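A minimal sketch of the severity-to-action mapping this table describes (the function name is hypothetical; only the behavior follows the table above):

```python
def alert_targets(severity: str) -> list:
    """Map an LLM-reported severity to the channels that should be notified.

    Only "high" triggers alerts; "medium", "low", and "none" are logged
    to the console without notifying anyone.
    """
    return ["discord", "google_home"] if severity == "high" else []
```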

Known Issues Feed

The agent uses a known_issues.json file to provide the LLM with a list of known issues and their resolutions. This helps the LLM avoid flagging resolved or expected issues as anomalies.

You can add new issues to the known_issues.json file by following the existing format. Each issue should have an "issue" and a "resolution" key. For example:

[
    {
        "issue": "CPU temperature spikes to 80C under heavy load",
        "resolution": "This is normal behavior for this CPU model and is not a cause for concern."
    }
]
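One way the agent could fold this file into the LLM prompt is sketched below, under the assumption that every entry has the "issue" and "resolution" keys shown above; the function names are illustrative:

```python
import json

def load_known_issues(path: str = "known_issues.json") -> list:
    """Read and parse the known-issues file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def known_issues_block(issues: list) -> str:
    """Render the parsed entries as a plain-text list for the prompt."""
    lines = ["Known issues (do not flag these as anomalies):"]
    for entry in issues:
        lines.append(f"- {entry['issue']} -- {entry['resolution']}")
    return "\n".join(lines)
```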

Note on Mock Data: The current version of the script uses mock data for system logs and network metrics. To use this in a real-world scenario, you would need to replace the mock data with actual data from your systems.
