Fengjie Wang, Xuye Liu, et al.
CHI 2023
Many enterprise systems, including large-scale deployment platforms such as Ansible, provide a declarative interface expressed in formats like JavaScript Object Notation (JSON). The integrity of such systems is maintained through validation rules, such as JSON schemas, enforced over the code. Enterprise tasks in these systems are often complex and involve multiple schemas, making it challenging for developers to choose the correct set of schemas and then write a schema-compliant code snippet for each part of the task. Recently, Large Language Models (LLMs) have shown promising performance on many declarative code generation tasks when used with constrained generation against a single known schema. However, real-world enterprise tasks often require generating multiple code snippets, each of which must comply with its own schema. We therefore introduce a novel framework that allows LLMs to generate multiple code snippets while choosing an appropriate schema for each snippet during constrained generation. To the best of our knowledge, we are the first to study this enterprise problem for declarative systems, and preliminary results on two real-world use cases demonstrate substantial improvements in both syntactic and semantic task performance. These findings highlight the potential of the approach to enhance the reliability and scalability of LLMs in declarative enterprise systems, indicating a promising direction for future research and development.
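The per-snippet schema-selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the schema names and fields are hypothetical, and the "schemas" are simplified to required keys and value types, whereas a real system would validate generated snippets against full JSON Schema documents.

```python
import json

# Hypothetical task schemas, simplified to {required key: expected Python type}.
# A real deployment would use full JSON Schema documents and a validator.
SCHEMAS = {
    "deploy_task": {"image": str, "replicas": int},
    "service_task": {"name": str, "port": int},
}


def pick_schema(snippet: str):
    """Return the name of the first schema the JSON snippet complies with,
    or None if the snippet is not a JSON object or matches no schema."""
    doc = json.loads(snippet)
    if not isinstance(doc, dict):
        return None
    for name, fields in SCHEMAS.items():
        if set(doc) == set(fields) and all(
            isinstance(doc[key], typ) for key, typ in fields.items()
        ):
            return name
    return None
```

In this sketch, a snippet such as `{"image": "nginx", "replicas": 3}` would be routed to `deploy_task`, while a snippet matching no candidate schema is flagged (`None`) rather than silently accepted, mirroring the integrity checks the abstract attributes to declarative enterprise systems.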
Rangeet Pan, Myeongsoo Kim, et al.
ICSE 2025
Jiaqin Yuan, Michele Merler, et al.
ACL 2023
Prashanth Vijayaraghavan, Apoorva Nitsure, et al.
DAC 2025