American Impact Review: A Peer-Reviewed Multidisciplinary Journal
Published by Global Talent Foundation, a 501(c)(3) nonprofit
Indexed in: Crossref · Google Scholar · OpenAlex · Wikidata · Scilit
CC BY 4.0 · Open Access
ISSN 3071-124X · EIN 33-2266959 · © 2026 American Impact Review
Computer Science & Information Systems · Theoretical Article · Published 5/4/2026 · DOI 10.66308/air.e2026041

Estimation as Laundering: How Machine Learning Converts Software Commitments into Apparent Predictions, and What an Estimation Hygiene Framework Can Recover

Yauheni Kanavalik, Solutions Architect, EPAM Systems, San Francisco, California, USA
Daria Firsova, Independent Senior QA Engineer & AI Testing Specialist
Andrei Dzeikalo, Independent Researcher
Vadim Goncharov, Jrnys Wellness Inc; Bachelor, Moscow Open Institute

Received 4/22/2026 · Accepted 5/4/2026

Keywords: software effort estimation, story points, machine learning, performativity, speech acts, algorithmic management, decision support, agile software development

Abstract

Software effort estimation has often been treated as if it were primarily a prediction problem. The framing was inherited from parametric cost models of the 1970s and has been carried forward, with substantial methodological refinement, into machine-learning and large-language-model estimators. The empirical record across this lineage is mixed: replication studies and honest-baseline comparisons consistently report smaller and less stable gains than headline numbers suggest. This paper develops two related claims. The diagnostic claim is that machine-learning estimators can launder what is structurally a human or team commitment into the appearance of an objective technical output, with consequences for contestability, authorship, and accountability that accuracy metrics do not capture. The constructive claim is that an estimation hygiene discipline (five practices and a single operational decision rule) retains the genuine value of machine learning as decision support without permitting it to displace human commitment. The study is a conceptual, theory-building paper. It draws on the Austin and Searle speech-act apparatus, the performativity-of-models tradition, the methodological-critique literature on machine learning in software engineering and adjacent fields, and the political economy of measured work. The paper should be read as a conceptual synthesis rather than a systematic review. Recurring failure modes in machine-learning effort estimation, including fragile baselines, leakage, target ambiguity, and the cost of estimation itself, are read here as symptoms of a deeper category error rather than as independent technical problems. Reframing the estimate as a commissive speech act, contingent on explicit felicity conditions, clarifies why methodologically careful models can still under-deliver once deployed inside real planning practices.
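To make the "honest-baseline comparison" concrete: the point is that an ML estimator should be judged against a trivial predictor, such as the median of historical efforts, before any headline gain is claimed. The sketch below is purely illustrative; the task data and model predictions are invented, not drawn from the paper, and a real comparison would use held-out or leave-one-out data rather than the in-sample median used here for brevity.

```python
import statistics

def mae(pred, actual):
    """Mean absolute error in effort units (e.g., hours)."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

# Hypothetical historical efforts (hours) and a hypothetical model's outputs.
actual = [12, 30, 8, 45, 20, 16, 60, 10]
model_pred = [15, 25, 10, 50, 18, 20, 40, 12]

# Honest baseline: always predict the median observed effort.
# (A rigorous study would compute this from prior data only.)
median_baseline = [statistics.median(actual)] * len(actual)

print("model MAE:   ", mae(model_pred, actual))
print("baseline MAE:", mae(median_baseline, actual))
```

The reported "gain" is the difference between the two MAE values; the critique summarized above is that this gap often shrinks or vanishes under careful baselines and clean data splits.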
Three structural features distinguish the software case from the canonical financial-model performativity case: the absence of arbitrage closure, the intra-firm setting, and the combinatorial novelty of software work. The paper offers a sustained conceptual critique of machine-learning software effort estimation that targets the ontology of the target rather than only the validity of the models; names and analyses a specific sociotechnical mechanism, ML-as-laundering, by which automated estimators obscure commitment under the appearance of measurement; and proposes an estimation hygiene framework, including a disagreement-flag rule, that defines a defensible role for machine learning as decision support alongside human commitment.
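The abstract names a disagreement-flag rule but does not specify it here; one plausible minimal form, sketched below under assumptions of our own (the relative-gap formulation and the 0.5 threshold are illustrative, not the paper's definition), flags a model/team divergence for discussion without letting the model override the team's commitment.

```python
def disagreement_flag(team_estimate, model_estimate, threshold=0.5):
    """Return True when the model's suggestion and the team's commitment
    diverge by more than `threshold`, measured as a fraction of the
    team's estimate. The flag triggers a conversation; it never
    replaces the team's number. (Illustrative rule, not the paper's.)"""
    if team_estimate <= 0:
        raise ValueError("team estimate must be positive")
    relative_gap = abs(model_estimate - team_estimate) / team_estimate
    return relative_gap > threshold

# Large divergence: worth discussing before committing.
print(disagreement_flag(8, 21))
# Small divergence: no flag raised.
print(disagreement_flag(8, 10))
```

On this reading, the model stays in a decision-support role: its output can prompt re-examination of a commitment, but the commitment itself remains a human speech act.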

Cite as: Yauheni Kanavalik, Daria Firsova, Andrei Dzeikalo, & Vadim Goncharov (2026). Estimation as Laundering: How Machine Learning Converts Software Commitments into Apparent Predictions, and What an Estimation Hygiene Framework Can Recover. American Impact Review. https://doi.org/10.66308/air.e2026041
