Journal of Applied Finance & Banking

Quantum-Inspired Counterfactual Explainable AI with Blockchain-Based Provenance for Governed Automated Decision-Making: An Empirical Evaluation on Credit Underwriting

Abstract


    Financial institutions deploy machine-learning models for high-stakes credit decisions, but deployed systems routinely fail to satisfy the joint requirements of explainability, auditability, and operational performance imposed by regulatory risk-management frameworks. Counterfactual explanations align with legal notions of contestability, yet existing generators are computationally expensive, unstable, and produce artifacts that are not independently verifiable. This paper presents an empirically evaluated, governance-oriented architecture that integrates (i) a quantum-inspired evolutionary algorithm for counterfactual search (QIEA-CF), (ii) a sensitivity-based local linear model for interpretable explanation, and (iii) a blockchain-based provenance layer that commits versioned hashes via Merkle-batched anchoring. The architecture is evaluated on the FICO HELOC dataset (10,459 applications, 23 features) against three baselines across eight metrics. QIEA-CF achieves 96.7% validity with mean L1 proximity of 2.418 and sparsity of 6.8, outperforming the best baseline's validity by 3.3 percentage points while reducing generation time from 1,847 ms to 198 ms per explanation. Batched Solana anchoring delivers a per-decision cost of US$9.75 × 10⁻⁷ at batch size 1,000 and a median verification latency of 47.9 ms. The results show that legally meaningful counterfactual explanation and cryptographically verifiable provenance are deliverable with sub-cent marginal cost and sub-250 ms latency.


    JEL classification numbers: C45, C61, G21, G28, K24, O33.

    Keywords: Explainable artificial intelligence, Counterfactual explanation, Quantum-inspired optimization, Blockchain provenance, AI governance, Credit underwriting.

ISSN: 1792-6599 (Online), 1792-6580 (Print)