Toward Sustainable Software Engineering Automation

Advancing Sustainable Software Engineering Automation via Quantization, Knowledge Distillation, and PEFT

As the demand for automated solutions in software engineering grows, the environmental impact of training and deploying large language models (LLMs) becomes increasingly significant. Sustainable software engineering automation seeks to address these challenges by leveraging techniques that enhance efficiency while maintaining high performance.

Parameter-Efficient Fine-Tuning (PEFT) adapts large models to new tasks by training only a small subset of parameters, drastically reducing computational requirements and resource usage. Quantization compresses models by lowering the precision of their numerical representations, reducing memory consumption and improving inference speed with minimal loss in accuracy. Knowledge distillation transfers knowledge from a large model to a smaller, more efficient one, using the larger model as a guide to retain performance at a fraction of the computational cost.
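To make the PEFT idea concrete, here is a minimal NumPy sketch of a LoRA-style adapter (one popular PEFT method). The shapes, rank, and scaling factor are illustrative assumptions, not values from any specific model: the frozen weight W is adapted as W_eff = W + (alpha / r) * B @ A, and only the low-rank factors A and B would be trained.

```python
import numpy as np

# Hypothetical layer dimensions and LoRA hyperparameters (illustrative only).
d_out, d_in, r, alpha = 64, 64, 4, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable; zero-init so W_eff == W at start

# Effective weight used at forward time; only A and B receive gradients.
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.3f}")
```

Because B starts at zero, the adapted layer initially behaves exactly like the pretrained one, and here only 12.5% of the layer's parameters are trainable; the gap widens further at realistic model sizes.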
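Quantization can likewise be sketched in a few lines. The following is a toy symmetric per-tensor int8 scheme on randomly generated weights, not a production recipe: floats are mapped to int8 with a single scale, then dequantized to measure the reconstruction error and the memory saving.

```python
import numpy as np

# Made-up weight tensor standing in for a model layer (illustrative only).
rng = np.random.default_rng(1)
w = rng.standard_normal(10_000).astype(np.float32)

scale = np.abs(w).max() / 127.0                        # one scale for the whole tensor
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale                   # dequantized approximation

max_err = np.abs(w - w_hat).max()
print(f"memory: {w.nbytes} B -> {q.nbytes} B, max error {max_err:.4f}")
```

The int8 tensor is 4x smaller than the float32 original, and the per-element error is bounded by half the scale, which is why moderate quantization typically costs little accuracy.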

Our ongoing research explores the application and integration of these techniques, aiming to develop greener, cost-effective, and scalable approaches for automating software engineering tasks, thereby promoting sustainable and responsible AI-driven solutions.