digiqual

Statistical Toolkit for Reliability Assessment in NDT

digiqual is a Python library designed for Non-Destructive Evaluation (NDE) engineers. It provides a robust statistical framework for performing Model-Assisted Probability of Detection (MAPOD) studies and reliability assessments. The package is built to implement the Generalised \(\hat{a}\)-versus-\(a\) Method, allowing users to assess inspection reliability even when traditional assumptions (linearity, constant variance, Gaussian noise) are not met.

Core Features

1. Experimental Design

Before running expensive Finite Element (FE) simulations, digiqual helps you design your experiment efficiently.

  • Latin Hypercube Sampling (LHS): Generate space-filling experimental designs to cover your deterministic parameter space (e.g., defect size) and stochastic nuisance parameters (e.g., roughness, orientation).
  • Scale & Bound: Automatically scale samples to your specific variable bounds.
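As a sketch of the workflow above, the same design can be produced with SciPy's quasi-Monte Carlo module; digiqual's own sampling API may differ, and the variable names and bounds here are illustrative.

```python
from scipy.stats import qmc

# Two parameters: defect size (deterministic) and surface roughness (stochastic nuisance)
sampler = qmc.LatinHypercube(d=2, seed=0)
unit_samples = sampler.random(n=20)          # space-filling points in [0, 1)^2

# Scale to physical bounds: defect size 0.5-5.0 mm, roughness 0.0-0.2 mm
lower, upper = [0.5, 0.0], [5.0, 0.2]
design = qmc.scale(unit_samples, lower, upper)
```

Each row of `design` is then one FE simulation to run.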

2. Data Validation & Diagnostics

Ensure your simulation outputs are statistically valid before processing.

  • Sanity Checks: Detect overlap between variables, type errors, and insufficient sample sizes.
  • Sufficiency Diagnostics: Run rigorous statistical tests that flag issues such as “Input Coverage Gaps” or “Model Instability” before you trust the results.
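A minimal sketch of the kind of pre-processing checks described above; digiqual's actual validators, thresholds, and messages will differ.

```python
import numpy as np

def sanity_check(a, a_hat, min_samples=30):
    """Flag common data problems before a reliability analysis."""
    issues = []
    a, a_hat = np.asarray(a, dtype=float), np.asarray(a_hat, dtype=float)
    if a.shape != a_hat.shape:
        issues.append("size mismatch between inputs and responses")
    if a.size < min_samples:
        issues.append(f"insufficient sample size ({a.size} < {min_samples})")
    if not (np.isfinite(a).all() and np.isfinite(a_hat).all()):
        issues.append("non-finite values (NaN/inf) in the data")
    elif np.ptp(a) == 0:
        issues.append("no spread in defect size: input coverage gap")
    return issues
```

Running such checks before fitting avoids spending bootstrap cycles on data that can never pass the diagnostics.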

3. Adaptive Refinement & Automated Optimisation

digiqual closes the loop between analysis and design.

  • Smart Refinement: Use refine() to identify specific weaknesses in your data. It uses bootstrap committees to find regions of high uncertainty and suggests new points exactly where the model is “confused”.
  • Automated Workflows: Use the optimise() method to run a fully automated “Active Learning” loop. It generates an initial design, executes your external solver, checks diagnostics, and iteratively refines the model until statistical requirements are met.
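The bootstrap-committee idea behind refine() can be sketched as follows: fit a committee of models on resampled data, then propose the next sample where the members disagree most. This is an illustration of the technique, not digiqual's API; the data, model class, and committee size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.linspace(0.5, 5.0, 25)                      # defect sizes already simulated
a_hat = 0.8 * a**1.3 + rng.normal(0, 0.3, a.size)  # noisy simulated responses

grid = np.linspace(0.5, 5.0, 200)
committee = []
for _ in range(50):                                 # bootstrap committee of 50 fits
    idx = rng.integers(0, a.size, a.size)           # resample with replacement
    coeffs = np.polyfit(a[idx], a_hat[idx], deg=2)
    committee.append(np.polyval(coeffs, grid))

spread = np.std(committee, axis=0)                  # disagreement across members
next_point = grid[np.argmax(spread)]                # refine where uncertainty peaks
```

In an automated loop, `next_point` would be handed back to the external solver and the fit repeated until the spread falls below a tolerance.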

4. Generalised Reliability Analysis

The package includes a full statistical engine for calculating Probability of Detection (PoD) curves.

  • Relaxed Assumptions: Moves beyond the rigid constraints of the classical \(\hat{a}\)-versus-\(a\) method by handling non-linear signal responses and heteroscedastic noise.
  • Robust Statistics: Automatically selects the best polynomial degree and error distribution (e.g., Normal, Gumbel, Logistic) based on data fit (AIC).
  • Uncertainty Quantification: Uses bootstrap resampling to generate robust confidence bounds and \(a_{90/95}\) estimates.
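The core of such an analysis can be sketched under a simplifying Gaussian-noise assumption: select a polynomial degree by AIC, then turn the fitted signal model into a PoD curve. The data, detection threshold, and candidate degrees below are illustrative, not digiqual defaults, and the full method would also consider other error distributions and bootstrap the confidence bounds.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
a = np.linspace(0.2, 4.0, 60)                       # defect sizes
a_hat = 1.5 * a + 0.3 * a**2 + rng.normal(0, 0.4, a.size)

def aic_for_degree(deg):
    """AIC of a polynomial fit with Gaussian residuals."""
    resid = a_hat - np.polyval(np.polyfit(a, a_hat, deg), a)
    sigma2 = np.mean(resid**2)
    log_lik = -0.5 * a.size * (np.log(2 * np.pi * sigma2) + 1)
    k = deg + 2                                     # coefficients + noise variance
    return 2 * k - 2 * log_lik

best_deg = min(range(1, 4), key=aic_for_degree)     # pick degree 1, 2, or 3 by AIC
coeffs = np.polyfit(a, a_hat, best_deg)
sigma = np.std(a_hat - np.polyval(coeffs, a))

threshold = 2.0                                     # detection threshold on the signal
pod = norm.sf(threshold, loc=np.polyval(coeffs, a), scale=sigma)
a90 = a[np.argmax(pod >= 0.9)]                      # smallest size with PoD >= 0.9
```

Bootstrapping this whole procedure over resampled datasets yields the lower confidence bound needed for an \(a_{90/95}\) estimate.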

References

This package implements methods described in:

Malkiel, N., Croxford, A. J., & Wilcox, P. D. (2025). A generalized method for the reliability assessment of safety–critical inspection. Proceedings of the Royal Society A, 481: 20240654. https://doi.org/10.1098/rspa.2024.0654