Compare commits

...

2 Commits

SHA1 | Message | Date
0b64f2ed03 | Switched to SQLite database | 2025-09-14 22:01:00 -05:00
d102dc30f4 | Offloaded data detection from the LLM and hardcoded it | 2025-08-24 13:30:21 -05:00
7 changed files with 41 additions and 6 deletions

AGENTS.md (0 lines changed) Normal file → Executable file

CONSTRAINTS.md (0 lines changed) Normal file → Executable file

PROGRESS.md (6 lines changed) Normal file → Executable file

@@ -76,8 +76,6 @@
- [x] When calculating averages, please round up to the nearest integer. We only want to deliver whole integers to the LLM, nothing with decimal points, since decimals confuse it (see the rounding sketch after this diff).
- [x] In the Discord message, please include the exact details and the log of the problem that prompted the alert
## TODO
## Phase 7: Offloading Analysis from LLM
39. [x] Create a new function `analyze_data_locally` in `monitor_agent.py`.
@@ -93,4 +91,6 @@
41.1. [x] Call `analyze_data_locally` to get the list of anomalies.
41.2. [x] If anomalies are found, call `generate_llm_report` to create the report.
41.3. [x] Use the output of `generate_llm_report` for alerting.
42. [x] Remove the detailed analytical instructions from `build_llm_prompt` as they will be handled by `analyze_data_locally`.
## TODO
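The rounding rule in the checklist item above (deliver only whole integers to the LLM, rounding averages up) could be implemented with `math.ceil`; the helper below is an illustrative sketch, not code from the repository.

```python
import math

def average_rounded_up(values):
    """Average a list of numbers, rounded up to the nearest whole integer."""
    if not values:
        return 0
    return math.ceil(sum(values) / len(values))
```

For example, `average_rounded_up([10, 11])` returns 11 rather than 10.5.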

README.md (0 lines changed) Normal file → Executable file

SPEC.md (26 lines changed) Normal file → Executable file

@@ -108,3 +108,29 @@ The project will be composed of the following files:
## 7. Testing and Debugging
The script is equipped with a test mode that runs the script once instead of continuously. To enable it, set the `TEST_MODE` variable in `config.py` to `True`. Once finished testing, change the variable back to `False`.
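As a rough sketch of how the `TEST_MODE` flag could gate the monitoring loop (the real `main()` in `monitor_agent.py` passes extra state such as the nmap scan counter, which is omitted here):

```python
# Sketch only: config.TEST_MODE gating the loop described above.
import time

import config  # assumed to define TEST_MODE = True or False

def run_monitoring_cycle():
    """Placeholder for one monitoring pass (the real function takes a scan counter)."""

def main():
    while True:
        run_monitoring_cycle()
        if config.TEST_MODE:
            break            # test mode: run a single cycle, then exit
        time.sleep(300)      # normal mode: repeat every 5 minutes

if __name__ == "__main__":
    main()
```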
## 8. Future Enhancements
### 8.1. Process Monitoring
**Description:** The agent will be able to monitor a list of critical processes to ensure they are running. If a process is not running, an anomaly will be generated.
**Implementation Plan** (a code sketch follows this list):
1. **Configuration:** Add a new list variable to `config.py` named `PROCESSES_TO_MONITOR` which will contain the names of the processes to be monitored.
2. **Data Ingestion:** Create a new function in `monitor_agent.py` called `get_running_processes()` that uses the `psutil` library to get a list of all running processes.
3. **Data Analysis:** In `analyze_data_locally()`, compare the list of running processes with the `PROCESSES_TO_MONITOR` list from the configuration. If a process from the configured list is not found in the running processes, generate a "high" severity anomaly.
4. **LLM Integration:** The existing `generate_llm_report()` function will be used to generate a report for the new anomaly type.
5. **Alerting:** The existing alerting system will be used to send alerts for the new anomaly type.
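A minimal sketch of steps 2 and 3, assuming `PROCESSES_TO_MONITOR` lives in `config.py` as described; the helper `check_processes()` and the anomaly dictionary fields are illustrative, not existing code.

```python
import psutil

def get_running_processes():
    """Return the set of names of all currently running processes."""
    names = set()
    for proc in psutil.process_iter(attrs=["name"], ad_value=None):
        if proc.info["name"]:  # skip processes whose name could not be read
            names.add(proc.info["name"])
    return names

def check_processes(running_names, processes_to_monitor):
    """Flag every configured process that is not in the running set."""
    anomalies = []
    for name in processes_to_monitor:
        if name not in running_names:
            anomalies.append({
                "type": "process_not_running",
                "severity": "high",
                "detail": f"Monitored process '{name}' is not running.",
            })
    return anomalies
```

In step 3, `analyze_data_locally()` would call both helpers and extend its existing anomaly list with the result.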
### 8.2. Docker Container Monitoring
**Description:** The agent will be able to monitor a list of critical Docker containers to ensure they are running and healthy. If a container is not running or is in an unhealthy state, an anomaly will be generated.
**Implementation Plan** (a code sketch follows this list):
1. **Configuration:** Add a new list variable to `config.py` named `DOCKER_CONTAINERS_TO_MONITOR` which will contain the names of the Docker containers to be monitored.
2. **Data Ingestion:** Create a new function in `monitor_agent.py` called `get_docker_container_status()` that uses the `docker` Python library to get the status of all containers, including stopped ones.
3. **Data Analysis:** In `analyze_data_locally()`, iterate through the `DOCKER_CONTAINERS_TO_MONITOR` list and check each container's status. If a configured container is not found or its status is not "running", generate a "high" severity anomaly.
4. **LLM Integration:** The existing `generate_llm_report()` function will be used to generate a report for the new anomaly type.
5. **Alerting:** The existing alerting system will be used to send alerts for the new anomaly type.
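A comparable sketch for steps 2 and 3 of the Docker plan, assuming `DOCKER_CONTAINERS_TO_MONITOR` lives in `config.py`; `check_containers()` and the anomaly fields are again illustrative.

```python
import docker

def get_docker_container_status():
    """Return a mapping of container name to status (e.g. 'running', 'exited')."""
    client = docker.from_env()
    # all=True includes stopped containers, so missing or exited containers are visible
    return {c.name: c.status for c in client.containers.list(all=True)}

def check_containers(statuses, containers_to_monitor):
    """Flag every configured container that is absent or not in the 'running' state."""
    anomalies = []
    for name in containers_to_monitor:
        status = statuses.get(name)
        if status != "running":
            anomalies.append({
                "type": "container_not_running",
                "severity": "high",
                "detail": f"Container '{name}' status: {status or 'not found'}.",
            })
    return anomalies
```

The "unhealthy" case from the description is not covered here; it would require inspecting the container's health data (exposed through the container's `attrs` in the `docker` library), which this sketch leaves out.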

monitor_agent.py (12 lines changed)

@@ -284,6 +284,7 @@ def build_llm_prompt(anomalies):
def generate_llm_report(anomalies):
"""Generates a report from a list of anomalies using the local LLM."""
logger.info("Generating LLM report...")
if not anomalies:
return {"severity": "none", "reason": ""}
@@ -322,7 +323,13 @@ def generate_llm_report(anomalies):
def send_discord_alert(llm_response, combined_data):
"""Sends an alert to Discord."""
reason = llm_response.get('reason', 'No reason provided.')
message = f"**High Severity Alert:**\n> {reason}\n\n**Relevant Data:**\n```json\n{json.dumps(combined_data, indent=2)}\n```"
message = f"""**High Severity Alert:**
> {reason}
**Relevant Data:**
```json
{json.dumps(combined_data, indent=2)}
```"""
    webhook = DiscordWebhook(url=config.DISCORD_WEBHOOK_URL, content=message)
    try:
        response = webhook.execute()
@@ -430,6 +437,7 @@ def run_monitoring_cycle(nmap_scan_counter):
    anomalies = analyze_data_locally(combined_data, baselines, known_issues, port_applications)
    if anomalies:
        logger.info(f"Detected {len(anomalies)} anomalies: {anomalies}")
        llm_response = generate_llm_report(anomalies)
        if llm_response and llm_response.get('severity') != "none":
            daily_events.append(llm_response.get('reason'))
@@ -452,4 +460,4 @@ def main():
        time.sleep(300) # Run every 5 minutes

if __name__ == "__main__":
    main()

requirements.txt (3 lines changed) Normal file → Executable file

@@ -4,4 +4,5 @@ discord-webhook
ollama
syslog-rfc5424-parser
python-nmap
schedule
docker