Blending Language Models and Domain-Specific Languages in Computer Science Education: A Case Study on RESTful APIs

Author
Keywords
Abstract
Since Computer Science students routinely apply both General-Purpose Programming Languages (GPPLs) and Domain-Specific Languages (DSLs), Generative Artificial Intelligence based on Language Models (LMs) can help them automate routine tasks, allowing them to focus on more creative work and higher-order skills. However, the teaching and evaluation of technical tasks in Computer Science can be inefficient and error-prone. Thus, the main objective of this article is to explore the performance of LMs compared with that of undergraduate Computer Science students in a specific case study: designing and implementing RESTful APIs using DSLs. This research aims to determine whether LMs can enhance the efficiency and accuracy of these processes. Our case study involved 39 students and 5 different LMs, all of which had to use the two DSLs we designed for the task assignment. To evaluate performance, we applied uniform criteria to the student- and LM-generated solutions, enabling a comparative analysis of accuracy and effectiveness. Through this comparison of students and LMs, the article contributes to assessing the extent to which LMs can carry out software development tasks involving new DSLs designed for highly specific settings as well as well-qualified Computer Science students can. The results underscore the importance of well-defined DSLs and effective prompting processes for optimal LM performance. Specifically, LMs showed high variability in task execution, with two GPT-based LMs achieving grades similar to those of the best students on every task: 0.78 and 0.92 on a normalized [0, 1] scale, with standard deviations of 0.23 and 0.14 for ChatGPT-4 and ChatGPT-4o, respectively. From this experience, we conclude that a well-defined DSL and a proper prompting process, providing the LM with metadata, persistent prompts, and a good knowledge base, are crucial for good LM performance. When LMs receive the right prompts, both large and small LMs can achieve excellent results, depending on the task.
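
As an illustration of the kind of prompting process the abstract refers to (supplying the LM with DSL metadata, persistent prompts, and a knowledge base of examples), the following minimal Python sketch assembles such a prompt. It is a hypothetical illustration only: the PromptContext class, its fields, and the one-line DSL example are assumptions for exposition, not the paper's actual DSLs or prompting pipeline.

    # Hypothetical sketch (not from the paper): bundling the three ingredients
    # the abstract highlights -- persistent instructions, DSL metadata, and a
    # small knowledge base of examples -- into a single prompt for the LM.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PromptContext:
        """Context handed to the LM for each DSL task (illustrative only)."""
        persistent_instructions: str                      # rules kept fixed across tasks
        dsl_metadata: str                                  # grammar/keywords of the DSL
        knowledge_base: List[str] = field(default_factory=list)  # worked examples

        def build_prompt(self, task: str) -> str:
            """Concatenate persistent context, DSL description, examples, and the task."""
            examples = "\n\n".join(self.knowledge_base)
            return (
                f"{self.persistent_instructions}\n\n"
                f"DSL specification:\n{self.dsl_metadata}\n\n"
                f"Examples:\n{examples}\n\n"
                f"Task:\n{task}"
            )

    if __name__ == "__main__":
        ctx = PromptContext(
            persistent_instructions="Answer only with code written in the DSL below.",
            dsl_metadata="resource <name> { get | post | put | delete } -> <response-type>",
            knowledge_base=["resource books { get } -> json"],
        )
        # The resulting string would be sent to the LM under evaluation.
        print(ctx.build_prompt("Define a RESTful resource for students with GET and POST."))
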
Year of Publication
In Press
Journal
International Journal of Interactive Multimedia and Artificial Intelligence
Volume
In press
Start Page
1
Issue
In press
Number
In press
Number of Pages
1-19
Date Published
09/2025
ISSN Number
1989-1660
URL
DOI