Can Generative AI Write Effective Abstracts for the ETC Community?

EasyChair Preprint 16015 · 7 pages · Date: April 2, 2026

Abstract

The proliferation of generative AI (GAI) raises the question of whether writing-from-sources tasks remain valid credit-bearing assessments of learning and writing achievement. This study probed whether select GAIs, using popular science articles as input texts, can produce effective abstracts for the engineering, technology, and computing (ETC) discourse community. We input two articles into three GAIs: M365 Copilot (GPT-5 model), ChatGPT (GPT-4 model), and Perplexity (GPT-5.1 model). Using two prompts, a basic single-shot prompt and a single-shot prompt that included a macro-structure for abstracts, we generated 12 abstracts. We present two sets of data: numerical scores for the GAI-generated abstracts and notable discourse features of those abstracts. All 12 abstracts received a passing score of 50% or higher. More guidance in the prompt did not lead to better abstracts overall; the macro-structure in the second prompt eroded the accuracy of the abstracts produced by Copilot and ChatGPT. In terms of discourse quality, the GAIs maintained an academic register, but their credibility as authors was questionable. By convention, the authors of a study are understood to be the authors of its abstract. All three GAIs failed to inhabit the role of author in their abstracts: they avoided the first person, made overt references to third parties, and used active-voice constructions.

While this study does not recommend that for-credit writing-from-sources tasks be retired, it does present evidence of the need to redesign this type of assessment.

Keyphrases: engineering writing, generative AI, abstract writing, writing from sources

