# Project Progress

## Phase 1: Initial Setup

- Create `monitor_agent.py`
- Create `config.py`
- Create `requirements.txt`
- Create `README.md`
- Create `.gitignore`
- Create `SPEC.md`
- Create `PROMPT.md`
- Create `CONSTRAINTS.md`
## Phase 2: Data Storage

- Implement data storage functions in `data_storage.py`
- Update `monitor_agent.py` to use data storage
- Update `SPEC.md` to reflect data storage functionality
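A minimal sketch of what the `data_storage.py` functions could look like, assuming an append-only JSON-lines file; the function names `save_metrics`/`load_history` and the `metrics_history.jsonl` path are illustrative, not the project's actual API:

```python
import json
from pathlib import Path

DATA_FILE = Path("metrics_history.jsonl")  # hypothetical storage location

def save_metrics(record: dict, path: Path = DATA_FILE) -> None:
    """Append one monitoring snapshot as a single JSON line."""
    with path.open("a") as f:
        f.write(json.dumps(record) + "\n")

def load_history(path: Path = DATA_FILE) -> list[dict]:
    """Read all stored snapshots back into a list of dicts."""
    if not path.exists():
        return []
    with path.open() as f:
        return [json.loads(line) for line in f if line.strip()]
```

JSON-lines keeps each cycle's write cheap (a single append) and lets baseline calculations stream over history without loading a monolithic JSON document.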
## Phase 3: Expanded Monitoring

- Implement CPU temperature monitoring
- Implement GPU temperature monitoring
- Implement system login attempt monitoring
- Update `monitor_agent.py` to include new metrics
- Update `SPEC.md` to reflect new metrics
- Extend `calculate_baselines` to include system temps
## Phase 4: Troubleshooting

- Investigated and resolved issue with the `jc` library
- Removed `jc` as a dependency
- Implemented manual parsing of `sensors` command output
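One way the manual `sensors` parsing could be done is with a single regex over the command's text output. This is a sketch, not the project's actual parser: the sample format in the test is an assumption, since `sensors` output varies by hardware and lm-sensors version.

```python
import re

# Matches lines like "Core 0:  +45.0°C  (high = +80.0°C)".
# [^:\n]+ keeps the label on one line; only the first reading per
# line is captured, so the "(high = ...)" limits are ignored.
TEMP_RE = re.compile(
    r"^(?P<label>[^:\n]+):\s+\+?(?P<temp>-?\d+(?:\.\d+)?)\s*°?C",
    re.MULTILINE,
)

def parse_sensors_output(text: str) -> dict[str, float]:
    """Map each labelled reading (e.g. 'Core 0') to its temperature in °C."""
    return {m.group("label").strip(): float(m.group("temp"))
            for m in TEMP_RE.finditer(text)}
```

Parsing the text directly removes the `jc` dependency at the cost of owning the format assumptions, so the regex should be checked against the actual `sensors` output on the target machine.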
## Phase 5: Network Scanning (Nmap Integration)

- Add `python-nmap` to `requirements.txt` and install it.
- Define `NMAP_TARGETS` and `NMAP_SCAN_OPTIONS` in `config.py`.
- Create a new function `get_nmap_scan_results()` in `monitor_agent.py`:
  - Use `python-nmap` to perform a scan on the defined targets with the specified options.
  - Return the parsed results.
- Integrate `get_nmap_scan_results()` into the main monitoring loop:
  - Call this function periodically (e.g., less frequently than other metrics).
  - Add the `nmap` results to the `combined_data` dictionary.
- Update `data_storage.py` to store `nmap` results.
- Extend `calculate_baselines()` in `data_storage.py` to include `nmap` baselines:
  - Compare current `nmap` results with historical data to identify changes.
- Modify the `analyze_data_with_llm()` prompt to include `nmap` scan results for analysis.
- Consider how to handle `nmap` permissions.
- Improve Nmap data logging to include IP addresses, open ports, and service details.
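A sketch of `get_nmap_scan_results()` using `python-nmap`'s `PortScanner`. The summarizing helper is split out under a hypothetical name (`summarize_scan`) so the reduction logic can be tested without running `nmap`; the `{host: {port: service}}` result shape is an assumption, not the project's actual schema.

```python
def summarize_scan(nm, hosts):
    """Reduce a PortScanner-style object to {host: {open_port: service_name}}."""
    results = {}
    for host in hosts:
        ports = {}
        for proto in nm[host].all_protocols():
            for port, info in nm[host][proto].items():
                if info.get("state") == "open":
                    ports[port] = info.get("name", "unknown")
        results[host] = ports
    return results

def get_nmap_scan_results(targets, options):
    """Scan the configured targets and return a compact summary."""
    import nmap  # pip install python-nmap
    nm = nmap.PortScanner()
    nm.scan(hosts=targets, arguments=options)
    return summarize_scan(nm, nm.all_hosts())
```

Note that some scan types (e.g. SYN scans) require root, which is the permissions question flagged above; a TCP connect scan (`-sT`) avoids that at the cost of noisier scanning.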
## Phase 6: Code Refactoring and Documentation

- Remove duplicate `pingparsing` import in `monitor_agent.py`.
- Refactor `get_cpu_temperature` and `get_gpu_temperature` to call the `sensors` command only once.
- Refactor `get_login_attempts` to use a position file for efficient log reading.
- Simplify JSON parsing in `analyze_data_with_llm`.
- Move the LLM prompt to a separate function, `build_llm_prompt`.
- Refactor the main loop into smaller functions (`run_monitoring_cycle`, `main`).
- Create a helper function in `data_storage.py` for calculating average metrics.
- Update `README.md` with current project status and improvements.
- Create `AGENTS.md` to document human and autonomous agents.
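The position-file scheme for `get_login_attempts` can be sketched as follows, assuming a byte-offset stored in a small sidecar file (the function name `read_new_lines` and the rotation check are illustrative):

```python
from pathlib import Path

def read_new_lines(log_path: Path, pos_path: Path) -> list[str]:
    """Return only log lines added since the last call.

    The byte offset of the last read is kept in pos_path so each
    monitoring cycle skips straight to the new data.
    """
    offset = int(pos_path.read_text()) if pos_path.exists() else 0
    if log_path.stat().st_size < offset:
        offset = 0  # log was rotated or truncated: start over
    with log_path.open() as f:
        f.seek(offset)
        lines = f.readlines()
        pos_path.write_text(str(f.tell()))
    return lines
```

This keeps each cycle O(new data) instead of re-reading the whole auth log, and the size check resets cleanly after log rotation.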
## Previous TODO
- Improve "high" priority detection by explicitly instructing LLM to output severity in structured JSON format.
- Implement dynamic contextual information (Known/Resolved Issues Feed) for LLM to improve severity detection.
- Change baseline calculations to only use integers instead of floats.
- Add a log file that only keeps records for the past 24 hours.
- Log all LLM responses to the console.
- Reduce alerts to only happen between 9am and 12am.
- Get hostnames of devices in Nmap scan.
- Filter out RTT fluctuations below 10 seconds.
- Filter out temperature fluctuations with differences less than 5 degrees.
- Create a list of known port numbers and their applications for the LLM to check against when deciding whether an open port is a threat.
- When calculating averages, round up to the nearest integer. Deliver only whole numbers to the LLM, with no decimal points; decimal points confuse it.
- In the Discord message, include the specific details and the log entry that triggered the alert.
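The rounding rule above is a one-liner with `math.ceil`; the helper name `average_rounded_up` is illustrative:

```python
import math

def average_rounded_up(values):
    """Average a list of numbers, rounding up so only whole
    integers ever reach the LLM (per the TODO above)."""
    if not values:
        return 0
    return math.ceil(sum(values) / len(values))
```

`math.ceil` (rather than `round`) matches the "round up" wording, so 49.1 becomes 50, not 49.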
## Phase 7: Offloading Analysis from LLM

- Create a new function `analyze_data_locally` in `monitor_agent.py`.
  - [x] This function will take `data`, `baselines`, `known_issues`, and `port_applications` as input.
  - [x] It will contain the logic to compare current data with baselines and predefined thresholds.
  - [x] It will be responsible for identifying anomalies across metrics (CPU/GPU temperature, network RTT, failed logins, Nmap changes).
  - [x] It will return a list of dictionaries, each representing an anomaly with 'severity' and 'reason' keys.
- Refactor `analyze_data_with_llm` into a new function called `generate_llm_report`.
  - [x] This function will take the list of anomalies from `analyze_data_locally` as input.
  - [x] It will construct a simple prompt asking the LLM to generate a human-readable summary of the anomalies.
  - [x] The LLM will no longer make analytical decisions.
- Update `run_monitoring_cycle` to orchestrate the new workflow.
  - [x] Call `analyze_data_locally` to get the list of anomalies.
  - [x] If anomalies are found, call `generate_llm_report` to create the report.
  - [x] Use the output of `generate_llm_report` for alerting.
- Remove the detailed analytical instructions from `build_llm_prompt`, as they are now handled by `analyze_data_locally`.
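A minimal sketch of the local analysis step and its anomaly shape. The threshold values and the key names (`cpu_temp`, `failed_logins`) are assumptions for illustration, not the project's actual schema, and the `known_issues`/`port_applications` inputs are omitted here:

```python
def analyze_data_locally(data: dict, baselines: dict) -> list[dict]:
    """Compare current data to baselines; return anomalies as
    [{'severity': ..., 'reason': ...}, ...] for the LLM to summarize."""
    anomalies = []

    # Temperature: ignore fluctuations under 5 degrees (per the TODO above).
    delta = data.get("cpu_temp", 0) - baselines.get("cpu_temp", 0)
    if delta >= 5:
        anomalies.append({
            "severity": "medium",
            "reason": f"CPU temperature {delta} degrees above baseline",
        })

    # Failed logins: any growth over baseline is worth reporting.
    if data.get("failed_logins", 0) > baselines.get("failed_logins", 0):
        anomalies.append({
            "severity": "high",
            "reason": "failed login attempts above baseline",
        })

    return anomalies
```

With decisions made here in plain code, `generate_llm_report` only has to turn an explicit anomaly list into readable prose, which is exactly the division of labor Phase 7 describes.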