Toward Efficient Software Engineering Automation

Advancing Efficient and Sustainable Software Engineering Automation via Quantization, Knowledge Distillation, and PEFT

As the demand for automated solutions in software engineering continues to rise, the efficiency of training and deploying large language models (LLMs) becomes a critical concern. Efficient software engineering automation aims to tackle this challenge by adopting techniques that optimize resource utilization while preserving high performance.

Parameter-Efficient Fine-Tuning (PEFT) allows large models to be adapted to new tasks by updating only a small portion of their parameters, significantly lowering computational overhead. Quantization reduces model size and accelerates inference by decreasing numerical precision, all while maintaining acceptable levels of accuracy. Knowledge distillation transfers the capabilities of a large model into a smaller, faster one, preserving much of the original performance at a fraction of the computational cost.
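To make the PEFT idea concrete, here is a minimal sketch of the parameter arithmetic behind low-rank adapters in the LoRA style: instead of updating a full weight matrix W, only two small factors A and B are trained. The dimensions and rank below are illustrative, not taken from any particular model.

```python
# Sketch of the parameter savings from a LoRA-style low-rank adapter.
# For a weight matrix W of shape (d_in, d_out), full fine-tuning updates
# every entry; a rank-r adapter trains only A (d_in x r) and B (r x d_out).

def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Return (full fine-tuning params, adapter params) for one weight matrix."""
    full = d_in * d_out                # updating the whole matrix W
    adapter = rank * (d_in + d_out)    # only the low-rank factors A and B
    return full, adapter

# Illustrative transformer-like dimensions with a small rank.
full, adapter = lora_param_counts(d_in=4096, d_out=4096, rank=8)
print(full, adapter)   # 16777216 65536
print(adapter / full)  # 0.00390625 -- under 0.4% of the parameters are trained
```

Because the base weights stay frozen, the same pretrained model can serve many tasks, with only the tiny adapter swapped per task.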
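The quantization step can likewise be sketched in a few lines. The example below shows symmetric 8-bit quantization with a single scale factor; the weight values are made up for illustration, and real systems quantize per-channel or per-group rather than over a flat list.

```python
# Sketch of symmetric int8 quantization: floats are mapped to integers
# in [-127, 127] with one shared scale, then dequantized for use.
# Each stored value shrinks from a 4-byte float to a 1-byte integer.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

w = [0.12, -0.5, 0.33, 1.0, -0.98]          # illustrative weights
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q)        # small integers in [-127, 127]
print(max_err)  # rounding error is bounded by scale/2
```

The accuracy cost shows up only as the bounded rounding error per weight, which is why inference accuracy typically stays at acceptable levels.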
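Finally, the core of knowledge distillation can be sketched as matching softened output distributions: the student minimizes the KL divergence between its temperature-scaled softmax and the teacher's. The logits and temperature below are invented for illustration.

```python
import math

# Sketch of Hinton-style knowledge distillation: the student is trained
# to match the teacher's softened (high-temperature) output distribution.

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): the soft-target loss term the student minimizes."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [4.0, 1.0, 0.2]   # illustrative outputs of the large model
student_logits = [3.0, 1.5, 0.5]   # illustrative outputs of the small model
T = 2.0  # higher temperature softens both distributions

p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)
print(kl_divergence(p_teacher, p_student))  # loss to drive toward zero during training
```

In practice this soft-target loss is combined with the ordinary hard-label loss, letting the smaller student absorb much of the teacher's behavior at a fraction of the inference cost.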

Our ongoing research investigates how these techniques can be applied and combined to develop efficient, cost-effective, and scalable approaches for automating software engineering tasks, advancing practical, high-performance AI-driven development workflows.