# Project Progress

## Phase 1: Initial Setup

1. [x] Create `monitor_agent.py`
2. [x] Create `config.py`
3. [x] Create `requirements.txt`
4. [x] Create `README.md`
5. [x] Create `.gitignore`
6. [x] Create `SPEC.md`
7. [x] Create `PROMPT.md`
8. [x] Create `CONSTRAINTS.md`

## Phase 2: Data Storage

9. [x] Implement data storage functions in `data_storage.py` (see the sketch after this list)
10. [x] Update `monitor_agent.py` to use data storage
11. [x] Update `SPEC.md` to reflect data storage functionality

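The storage layer in item 9 is not spelled out here. A minimal sketch of one plausible shape for `data_storage.py`, assuming an append-only JSON-lines file (the file name and function names are hypothetical, not the project's actual API):

```python
import json
import os

DATA_FILE = "metrics_history.jsonl"  # hypothetical filename

def save_metrics(record):
    """Append one monitoring snapshot as a single JSON line."""
    with open(DATA_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")

def load_metrics():
    """Load every stored snapshot, oldest first."""
    if not os.path.exists(DATA_FILE):
        return []
    with open(DATA_FILE) as f:
        return [json.loads(line) for line in f if line.strip()]
```
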
## Phase 3: Expanded Monitoring

12. [x] Implement CPU temperature monitoring
13. [x] Implement GPU temperature monitoring
14. [x] Implement system login attempt monitoring (see the sketch after this list)
15. [x] Update `monitor_agent.py` to include new metrics
16. [x] Update `SPEC.md` to reflect new metrics
17. [x] Extend `calculate_baselines` to include system temps

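A minimal sketch of how item 14's first-pass `get_login_attempts` might have looked, assuming a Debian-style `/var/log/auth.log` (the path and match string are assumptions; syslog formats vary by distribution). Note that it rereads the whole log every cycle, which the Phase 6 refactor later fixes:

```python
def get_login_attempts():
    """Count failed SSH logins recorded in the system auth log."""
    failed = 0
    with open("/var/log/auth.log") as log:  # assumed log location
        for line in log:
            if "Failed password" in line:  # sshd's failure message
                failed += 1
    return failed
```
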
## Phase 4: Troubleshooting

18. [x] Investigate and resolve issue with `jc` library
19. [x] Remove `jc` library as a dependency
20. [x] Implement manual parsing of `sensors` command output (see the sketch after this list)

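Item 20 swapped the `jc` dependency for hand-rolled parsing of `sensors` output. A sketch of what that could look like (the regex and label handling are assumptions; `sensors` output varies by machine):

```python
import re
import subprocess

# Matches readings such as "Package id 0:  +46.0°C  (high = +80.0°C, ...)".
# Non-temperature lines (fans in RPM, voltages in V) fail the trailing "C".
TEMP_RE = re.compile(r"^(?P<label>[^:]+):\s+\+?(?P<temp>-?\d+(?:\.\d+)?)\s*°?C\b")

def read_sensor_temps():
    """Run `sensors` once and return {label: temperature_celsius}."""
    output = subprocess.run(
        ["sensors"], capture_output=True, text=True, check=True
    ).stdout
    temps = {}
    for line in output.splitlines():
        match = TEMP_RE.match(line.strip())
        if match:
            temps[match.group("label").strip()] = float(match.group("temp"))
    return temps
```

Calling `sensors` once and dispatching labels to `get_cpu_temperature` and `get_gpu_temperature` is what item 31 below asks for.
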
## Phase 5: Network Scanning (Nmap Integration)

21. [x] Add `python-nmap` to `requirements.txt` and install.
22. [x] Define `NMAP_TARGETS` and `NMAP_SCAN_OPTIONS` in `config.py`.
23. [x] Create a new function `get_nmap_scan_results()` in `monitor_agent.py` (see the sketch after this list):
    * [x] Use `python-nmap` to perform a scan on the defined targets with the specified options.
    * [x] Return the parsed results.
24. [x] Integrate `get_nmap_scan_results()` into the main monitoring loop:
    * [x] Call this function periodically (e.g., less frequently than other metrics).
    * [x] Add the `nmap` results to the `combined_data` dictionary.
25. [x] Update `data_storage.py` to store `nmap` results.
26. [x] Extend `calculate_baselines()` in `data_storage.py` to include `nmap` baselines:
    * [x] Compare current `nmap` results with historical data to identify changes.
27. [x] Modify `analyze_data_with_llm()` prompt to include `nmap` scan results for analysis.
28. [x] Consider how to handle `nmap` permissions.
29. [x] Improve Nmap data logging to include IP addresses, open ports, and service details.

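Items 21-24 and 29 outline the scan function. A sketch of how `get_nmap_scan_results()` could be built on `python-nmap`, collecting the hostnames, open ports, and service names item 29 asks for (the target and option values below are placeholders, not the project's real `config.py` settings):

```python
import nmap  # provided by the python-nmap package

# Placeholder values; the real ones live in config.py.
NMAP_TARGETS = "192.168.1.0/24"
NMAP_SCAN_OPTIONS = "-sT --top-ports 100"

def get_nmap_scan_results():
    """Scan the configured targets and return a per-host summary."""
    scanner = nmap.PortScanner()
    scanner.scan(hosts=NMAP_TARGETS, arguments=NMAP_SCAN_OPTIONS)
    results = {}
    for host in scanner.all_hosts():
        open_ports = []
        for proto in scanner[host].all_protocols():
            for port, info in scanner[host][proto].items():
                if info.get("state") == "open":
                    open_ports.append({"port": port,
                                       "service": info.get("name", "")})
        results[host] = {
            "hostname": scanner[host].hostname(),
            "open_ports": open_ports,
        }
    return results
```

On permissions (item 28): an unprivileged process is limited to connect scans such as `-sT`; SYN scans (`-sS`) and some OS-detection options require root.
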
## Phase 6: Code Refactoring and Documentation

30. [x] Remove duplicate `pingparsing` import in `monitor_agent.py`.
31. [x] Refactor `get_cpu_temperature` and `get_gpu_temperature` to call the `sensors` command only once.
32. [x] Refactor `get_login_attempts` to use a position file for efficient log reading (see the sketch after this list).
33. [x] Simplify JSON parsing in `analyze_data_with_llm`.
34. [x] Move LLM prompt to a separate function `build_llm_prompt`.
35. [x] Refactor main loop into smaller functions (`run_monitoring_cycle`, `main`).
36. [x] Create helper function in `data_storage.py` for calculating average metrics.
37. [x] Update `README.md` with current project status and improvements.
38. [x] Create `AGENTS.md` to document human and autonomous agents.

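Item 32's position-file trick avoids rereading the whole log each cycle: persist the byte offset reached last time, seek to it, and read only what is new. A sketch under assumed file names, with a reset for when log rotation shrinks the file:

```python
import os

AUTH_LOG = "/var/log/auth.log"   # assumed log location
POSITION_FILE = "auth_log.pos"   # hypothetical position-file name

def read_new_log_lines():
    """Return only the auth.log lines added since the last call."""
    if not os.path.exists(AUTH_LOG):
        return []
    offset = 0
    if os.path.exists(POSITION_FILE):
        with open(POSITION_FILE) as f:
            offset = int(f.read().strip() or 0)
    # If the log was rotated it will be smaller than our offset; start over.
    if os.path.getsize(AUTH_LOG) < offset:
        offset = 0
    with open(AUTH_LOG) as log:
        log.seek(offset)
        lines = log.readlines()
        new_offset = log.tell()
    with open(POSITION_FILE, "w") as f:
        f.write(str(new_offset))
    return lines
```
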
## Previous TODO

- [x] Improve "high" priority detection by explicitly instructing the LLM to output severity in a structured JSON format.
- [x] Implement dynamic contextual information (Known/Resolved Issues Feed) for the LLM to improve severity detection.
- [x] Change baseline calculations to use only integers instead of floats.
- [x] Add a log file that only keeps records for the past 24 hours.
- [x] Log all LLM responses to the console.
- [x] Reduce alerts to only happen between 9am and 12am (midnight).
- [x] Get hostnames of devices in the Nmap scan.
- [x] Filter out RTT fluctuations below 10 seconds.
- [x] Filter out temperature fluctuations with differences less than 5 degrees.
- [x] Create a list of known port numbers and their applications for the LLM to check against when deciding whether an open port is a threat.
- [x] When calculating averages, round up to the nearest integer; deliver only whole numbers to the LLM, since values with decimal points confuse it (see the sketch after this list).
- [x] In the Discord message, include the specific details and the log of the problem that prompted the alert.

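The rounding rule above is plain arithmetic; a one-function sketch (the helper name is hypothetical):

```python
import math

def average_as_int(values):
    """Average a list of readings, rounded up to a whole integer for the LLM."""
    if not values:
        return 0
    return math.ceil(sum(values) / len(values))

# e.g. average_as_int([44.2, 45.1, 46.7]) -> 46   (136.0 / 3 ≈ 45.33, ceil -> 46)
```
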
## TODO

## Phase 7: Offloading Analysis from LLM

39. [x] Create a new function `analyze_data_locally` in `monitor_agent.py` (see the sketch after this list).
    39.1. [x] This function will take `data`, `baselines`, `known_issues`, and `port_applications` as input.
    39.2. [x] It will contain the logic to compare current data with baselines and predefined thresholds.
    39.3. [x] It will be responsible for identifying anomalies for various metrics (CPU/GPU temp, network RTT, failed logins, Nmap changes).
    39.4. [x] It will return a list of dictionaries, where each dictionary represents an anomaly and contains 'severity' and 'reason' keys.
40. [x] Refactor `analyze_data_with_llm` into a new function called `generate_llm_report`.
    40.1. [x] This function will take the list of anomalies from `analyze_data_locally` as input.
    40.2. [x] It will construct a simple prompt to ask the LLM to generate a human-readable summary of the anomalies.
    40.3. [x] The LLM will no longer be making analytical decisions.
41. [x] Update `run_monitoring_cycle` to orchestrate the new workflow.
    41.1. [x] Call `analyze_data_locally` to get the list of anomalies.
    41.2. [x] If anomalies are found, call `generate_llm_report` to create the report.
    41.3. [x] Use the output of `generate_llm_report` for alerting.
42. [x] Remove the detailed analytical instructions from `build_llm_prompt`, as they will be handled by `analyze_data_locally`.

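Item 39 pins down the interface: metrics in, a list of `{'severity', 'reason'}` dicts out, with the LLM reduced to prose generation. A sketch of the shape such a function could take; every threshold, key name, and the substring-based known-issue filter are illustrative assumptions, not the project's actual logic:

```python
def analyze_data_locally(data, baselines, known_issues, port_applications):
    """Compare current readings against baselines; no LLM involved.

    Returns a list of {'severity': ..., 'reason': ...} dicts.
    """
    anomalies = []

    # Temperatures: ignore fluctuations under 5 degrees, per the earlier TODO.
    for metric in ("cpu_temp", "gpu_temp"):
        current, baseline = data.get(metric), baselines.get(metric)
        if current is None or baseline is None:
            continue
        delta = current - baseline
        if delta >= 15:
            anomalies.append({"severity": "high",
                              "reason": f"{metric} is {delta}C above baseline"})
        elif delta >= 5:
            anomalies.append({"severity": "medium",
                              "reason": f"{metric} is {delta}C above baseline"})

    # Failed logins observed since the last cycle.
    if data.get("failed_logins", 0) > 0:
        anomalies.append({"severity": "high",
                          "reason": f"{data['failed_logins']} failed login attempts"})

    # Open ports not present in the baseline Nmap scan.
    for host, info in data.get("nmap", {}).items():
        baseline_ports = set(baselines.get("nmap", {}).get(host, []))
        for entry in info.get("open_ports", []):
            if entry["port"] not in baseline_ports:
                app = port_applications.get(entry["port"], "unknown service")
                anomalies.append({"severity": "medium",
                                  "reason": f"new open port {entry['port']} ({app}) on {host}"})

    # Drop anomalies matching a known/resolved issue (assumes a list of substrings).
    return [a for a in anomalies
            if not any(issue in a["reason"] for issue in known_issues)]
```

With this in place, `generate_llm_report` only has to format the `reason` strings into a prompt and ask for a readable summary, which is exactly the division of labor items 40-42 describe.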