HIVEMIND kicks off to advance AI-powered, human-centric software development
This press release announces the launch of the HIVEMIND project, highlighting its goals, key technologies, and international partnerships aimed at advancing AI-driven software development.
Author: HIVEMIND
HIVEMIND NEWSLETTER_01
The first edition of the HIVEMIND newsletter provides a concise overview of the project’s initial six months. It highlights key milestones, including the kick-off meeting, engagement in sector events, and interviews with Work Package leaders on building a responsible and collaborative multi-agent AI system.
HIVEMIND project overview presentation
This presentation introduces the HIVEMIND project, outlining its objectives, technical architecture, specialised AI agents, and five industrial use cases. It also highlights the project’s anticipated scientific, industrial, and societal impacts.
HIVEMIND project trifold brochure
This trifold brochure provides an overview of the HIVEMIND project, presenting its vision, consortium, application domains, and key AI-powered agents supporting the software development lifecycle.
HIVEMIND project poster
This poster presents an overview of the HIVEMIND project, highlighting its human-centric, AI-driven multi-agent framework for accelerating the software development lifecycle. It outlines the project’s vision, architecture, specialised agents, data handling approach, fine-tuning methods, and real-world validation use cases.
What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews
This paper explores the underexamined area of fine-grained emotion classification in app reviews, extending beyond the traditional focus on sentiment polarity (positive, negative, neutral). To capture the complexity of users’ affective responses, the study adapts Plutchik’s emotion taxonomy and introduces a structured annotation framework and dataset tailored to app reviews. Through an iterative human annotation process, the authors establish clear guidelines, highlight challenges in interpreting emotions, and assess the feasibility of automation with large language models (LLMs). The results show that LLMs substantially reduce manual annotation effort and achieve notable agreement with human annotators, though full automation remains difficult due to the nuanced nature of emotions. This work provides structured guidelines, an annotated dataset, and insights for building semi-automated pipelines, offering valuable contributions to opinion mining, requirements engineering, and user feedback analysis.
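To illustrate how an LLM could assist such annotation, the sketch below prompts a model to tag a single app review with Plutchik's eight primary emotions. It is a minimal illustration of the semi-automated idea only, not the paper's actual pipeline: the `call_llm` placeholder, the prompt wording, and the canned reply are assumptions made for this example.

```python
# Illustrative sketch (not the paper's pipeline): asking an LLM to tag an app
# review with Plutchik's eight primary emotions. `call_llm` is a hypothetical
# stand-in for whichever chat-completion API is actually used.

PLUTCHIK_EMOTIONS = [
    "joy", "trust", "fear", "surprise",
    "sadness", "disgust", "anger", "anticipation",
]

PROMPT_TEMPLATE = (
    "You are annotating mobile app reviews with fine-grained emotions.\n"
    "Allowed labels: {labels}. A review may express several emotions, or none.\n"
    "Return a comma-separated list of labels, or 'none'.\n\n"
    "Review: \"{review}\"\n"
    "Labels:"
)

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call.
    Returns a canned answer so the sketch runs without network access."""
    return "sadness, anger"

def annotate_review(review: str) -> list[str]:
    prompt = PROMPT_TEMPLATE.format(
        labels=", ".join(PLUTCHIK_EMOTIONS), review=review
    )
    raw = call_llm(prompt)
    # Keep only labels from the taxonomy; anything else (or an empty result)
    # would be routed back to human annotators in a semi-automated setup.
    predicted = [label.strip().lower() for label in raw.split(",")]
    return [label for label in predicted if label in PLUTCHIK_EMOTIONS]

if __name__ == "__main__":
    print(annotate_review("The latest update keeps crashing and lost all my saved data."))
    # -> ['sadness', 'anger']  (canned output from the stub above)
```

In practice the LLM output would be compared against the human annotation guidelines described in the paper, with disagreements escalated to annotators rather than accepted automatically.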
Multi-Agent Debate Strategies to Enhance Requirements Engineering with Large Language Models
This paper investigates the potential of Multi-Agent Debate (MAD) strategies to enhance the performance of Large Language Model (LLM) agents in Requirements Engineering (RE) tasks. While prior research has focused on prompt engineering, fine-tuning, and retrieval-augmented generation, these methods often treat LLMs as isolated black boxes, relying on single-pass outputs with limited robustness and adaptability. Inspired by the way human debates improve accuracy by incorporating diverse perspectives, this study explores whether collaborative interactions among multiple LLM agents can yield similar benefits. The authors systematically analyse existing MAD strategies across different domains, identify their key characteristics, and develop a taxonomy of core attributes. Building on this foundation, they implement and evaluate a preliminary MAD-based framework for RE classification. The results demonstrate both the feasibility and potential advantages of applying MAD to RE, paving the way for more robust, adaptive, and accurate use of LLMs in engineering contexts.
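To make the debate idea concrete, the sketch below runs a toy MAD loop for a binary requirements-classification task: several agents answer independently, then revise after seeing their peers' answers, and a majority vote aggregates the final round. The agent and round counts, the label set, and the `query_agent` heuristic are assumptions for illustration only, not the framework evaluated in the paper.

```python
# Minimal sketch of a multi-agent debate (MAD) loop for requirements
# classification (functional vs. non-functional). Illustrative only:
# `query_agent` is a hypothetical stand-in for an LLM call, implemented here
# as a canned heuristic so the sketch runs offline.

from collections import Counter

LABELS = ("functional", "non-functional")

def query_agent(agent_id: int, requirement: str, peer_answers: list[str]) -> str:
    """Each agent sees the requirement and, from the second round on, its
    peers' previous answers, and returns one label from LABELS."""
    text = requirement.lower()
    # Canned heuristic standing in for an LLM: quality constraints such as
    # timing or availability are treated as non-functional.
    if any(keyword in text for keyword in ("within", "seconds", "uptime", "available")):
        return "non-functional"
    return "functional"

def debate(requirement: str, n_agents: int = 3, n_rounds: int = 2) -> str:
    answers = ["" for _ in range(n_agents)]
    for _ in range(n_rounds):
        # Every round, each agent answers with access to the other agents'
        # answers from the previous round (empty on the first round).
        answers = [
            query_agent(i, requirement,
                        [a for j, a in enumerate(answers) if j != i and a])
            for i in range(n_agents)
        ]
    # Aggregate the final round by majority vote.
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(debate("The system shall respond to search queries within 2 seconds."))
    # -> 'non-functional' (from the canned heuristic standing in for the agents)
```

A real MAD setup would replace the heuristic with distinct LLM agents (possibly with different roles or prompts) and could use richer aggregation than majority voting, which is one of the design dimensions captured in the paper's taxonomy.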
HIVEMIND NEWSLETTER_02
The second issue of the HIVEMIND project newsletter includes a brief first-year status update from the project coordinator, references to recent scientific publications, an animated introduction to the project’s core concept, and an overview of ongoing clustering and collaboration activities with related initiatives.
AI-Powered Software Testing Tools: Full Autonomy Remains a Distant Goal
This paper examines the current landscape of AI-powered software testing tools by systematically reviewing and classifying 56 commercially available solutions as of 2024. It analyses how these tools support different stages of the software testing process, ranging from test planning and test-case design to execution and maintenance, and highlights their potential to improve efficiency and effectiveness for test engineers. At the same time, the paper identifies key limitations, including false positives and insufficient contextual or domain understanding, which underscore the continued need for human oversight. The study argues that AI-assisted testing tools should be seen as complementary to human testers rather than fully autonomous solutions, with close human–AI collaboration remaining essential for the foreseeable future.