

International Journal of Multidisciplinary Research and Growth Evaluation

ISSN: 2582-7138 (Online) | Impact Factor: 9.54 | Open Access

On the Disagreement Problem in Human-in-the-Loop Federated Machine Learning


Abstract

The convergence of distributed model training with structured human oversight represents one of the most consequential methodological shifts in contemporary artificial intelligence, enabling organisations across heterogeneous domains to derive collective insight from decentralised data while preserving privacy and data sovereignty. However, when human judgement is introduced into collaborative training pipelines, a critical challenge emerges: participants frequently produce divergent annotations, conflicting model outputs, and incompatible preferences that resist straightforward reconciliation. This review examines the theoretical, methodological, and practical dimensions of reconciling inconsistent human and algorithmic judgements across decentralised computational networks. It synthesises evidence from multiple disciplines to characterise how divergence arises, how it propagates through aggregation mechanisms, and how it affects downstream model performance, trust, and regulatory compliance. The paper surveys foundational constructs of decentralised collaborative learning, delineates paradigms through which human expertise shapes algorithmic behaviour, and examines taxonomies, statistical signatures, and causal drivers of inconsistency across annotators, models, and client nodes. Methodological approaches ranging from Bayesian aggregation and weighted consensus to active learning and reinforcement-informed resolution are evaluated under non-independent, non-identically distributed data conditions. Governance, ethical, and explainability considerations are situated within a broader discussion of accountability in hybrid intelligent ecosystems. Application-oriented evidence from healthcare, finance, education, and cybersecurity demonstrates the contextual sensitivity of proposed remedies and exposes persisting gaps. The analysis charts emerging research frontiers, including large-model integration, uncertainty-aware consensus, and participatory governance. 
Collectively, the findings underscore that reconciling divergence is not merely a technical optimisation problem but a socio-technical challenge demanding sustained interdisciplinary engagement for trustworthy deployment of collaborative intelligent systems.
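The review discusses weighted-consensus aggregation as one remedy for divergent client judgements; the paper itself specifies no code, so the following is only an illustrative sketch of the general idea (all names and the weighting scheme are hypothetical, not the authors' method): client model updates are averaged with weights proportional to each client's historical agreement with pooled human annotations, so unreliable annotator-client pairs contribute less to the consensus model.

```python
# Illustrative sketch only: reliability-weighted consensus over divergent
# client model updates, in the spirit of the weighted-consensus schemes
# surveyed in the review. Names and weighting rule are hypothetical.

def reliability_weighted_consensus(client_updates, agreement_rates):
    """Average client parameter vectors, weighting each client by its
    historical agreement rate with pooled human annotations.

    client_updates : list of equal-length lists of floats (model parameters)
    agreement_rates: list of floats in [0, 1], one per client
    """
    total = sum(agreement_rates)
    if total == 0:
        raise ValueError("at least one client must have nonzero agreement")
    weights = [a / total for a in agreement_rates]
    dim = len(client_updates[0])
    # Weighted coordinate-wise average of the parameter vectors.
    return [
        sum(w * update[i] for w, update in zip(weights, client_updates))
        for i in range(dim)
    ]

# Example: two clients that agree often with annotators dominate the
# consensus; a client that rarely agrees is heavily down-weighted.
updates = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
rates = [0.9, 0.8, 0.1]
consensus = reliability_weighted_consensus(updates, rates)
```

Note that such static weights are themselves contested under non-IID conditions, since low agreement may reflect a genuinely different local data distribution rather than low annotator quality, which is precisely the socio-technical tension the review examines.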

How to Cite This Article

Olasunkanmi Oluwasanjo Ladapo, Demilade Jooda, Adetomiwa A Dosunmu, Toyosi O Abolaji (2026). On the Disagreement Problem in Human-in-the-Loop Federated Machine Learning. International Journal of Multidisciplinary Research and Growth Evaluation (IJMRGE), 7(3), 178-192. DOI: https://doi.org/10.54660/.IJMRGE.2026.7.3.178-192
