Chen Wang, Eun Kyung Lee, et al.
KubeCon EU 2024
Classical machine-learning auto-tuners for OS control struggle with semantic gaps, brittle rewards, and unsafe exploration. We introduce an online, LLM-driven agent that emulates expert reasoning for continuous OS optimization. When tuning the Linux Completely Fair Scheduler's hyperparameters, the agent outperforms Bayesian optimization by 5% in single-parameter tuning and by 7.1% in two-parameter co-tuning, and a human expert by 2.98% overall, while converging faster and adapting more quickly to workload changes. When application counters are unavailable, system-level proxies such as Instructions Per Cycle (IPC) preserve tail latency in our setup. Building on these results, we propose adopting the Model Context Protocol (MCP) for tool and resource discovery, invocation, and logging, and layering transactional apply-commit-revert semantics, host-mediated approval gates, and policy controls onto the OS-tuning server and host to ensure safe, auditable operation. Our results and reference design suggest a practical path toward safe, self-adapting OS control.
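As a concrete illustration of the transactional apply-commit-revert pattern, the sketch below wraps a proposed CFS setting in a snapshot/restore transaction and keeps it only if a system-level proxy (IPC, sampled with perf) does not regress. It assumes a pre-6.6 kernel that still exposes CFS knobs such as kernel.sched_latency_ns through sysctl; TuneTransaction, measure_ipc, the allow-list, and the chosen values are hypothetical names for illustration, not the paper's implementation.

```python
import subprocess

# CFS knobs reachable via sysctl on pre-6.6 kernels (later EEVDF-based
# kernels moved these under debugfs); values are in nanoseconds.
ALLOWED_KNOBS = {"kernel.sched_latency_ns", "kernel.sched_min_granularity_ns"}

def read_sysctl(knob: str) -> str:
    # `sysctl -n` prints just the value (root required for writes below).
    return subprocess.run(["sysctl", "-n", knob], check=True,
                          capture_output=True, text=True).stdout.strip()

def write_sysctl(knob: str, value: str) -> None:
    subprocess.run(["sysctl", "-w", f"{knob}={value}"], check=True,
                   capture_output=True, text=True)

def measure_ipc(seconds: int = 5) -> float:
    # System-wide IPC from hardware counters; `perf stat` writes its
    # CSV (-x,) report to stderr as value,unit,event,...
    out = subprocess.run(
        ["perf", "stat", "-a", "-x", ",", "-e", "instructions,cycles",
         "sleep", str(seconds)],
        check=True, capture_output=True, text=True).stderr
    counts = {}
    for line in out.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[2] in ("instructions", "cycles"):
            counts[fields[2]] = float(fields[0])
    return counts["instructions"] / counts["cycles"]

class TuneTransaction:
    """Snapshot current values on apply() so revert() is always possible."""

    def __init__(self, proposal: dict[str, str]):
        unknown = set(proposal) - ALLOWED_KNOBS
        if unknown:  # policy control: refuse anything off the allow-list
            raise ValueError(f"refusing to touch knobs: {unknown}")
        self.proposal = proposal
        self.snapshot: dict[str, str] = {}

    def apply(self) -> None:
        self.snapshot = {k: read_sysctl(k) for k in self.proposal}
        for knob, value in self.proposal.items():
            write_sysctl(knob, value)

    def commit(self) -> None:
        self.snapshot = {}  # keep the new settings

    def revert(self) -> None:
        for knob, value in self.snapshot.items():
            write_sysctl(knob, value)
        self.snapshot = {}

if __name__ == "__main__":
    baseline = measure_ipc()
    txn = TuneTransaction({"kernel.sched_latency_ns": "12000000"})
    txn.apply()
    try:
        if measure_ipc() >= baseline:
            txn.commit()
        else:
            txn.revert()
    except Exception:
        txn.revert()  # never leave the host half-tuned
        raise
```

In a full MCP deployment, apply, commit, and revert would be exposed as separate tools on the OS-tuning server, with the host's approval gate sitting between the agent's proposal and the apply step.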