GUI Desktop Manual
This manual provides a detailed breakdown of the functionality found inside the DigiQual Shiny application. For instructions on how to install or launch the app, see the Launch the App guide.
The DigiQual GUI is built with a modern “Fluent” design system and provides a visual alternative to the Python API. It is split into four primary tabs that map directly to the programmatic workflow.
1. Experimental Design
Purpose: Generate a statistically sound framework of sampling points for your physics solvers to evaluate.
Instead of writing scripts to generate arrays, you can visually define your parameters and automatically export a completed Latin Hypercube Sample (LHS).
- Add Variables: Click “Add Variable” for each continuous parameter in your simulation (e.g., `Length`, `Angle`).
- Define Bounds: Set the minimum and maximum physical limits for each parameter. The LHS algorithm will ensure these bounds are evenly covered.
- Specify Samples: Enter the total number of initial simulation runs you plan to execute (\(N\)).
- Generate Framework: The tool will instantly construct the multi-dimensional parameter space, attempting to maximize the minimum distance between points.
- Download CSV: Save `generated_sample.csv`. You can now feed this CSV into your external solver tool (e.g., MATLAB, Abaqus, ANSYS) to calculate the actual signal responses.
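The steps above can be sketched programmatically. The following is a minimal illustration using SciPy's quasi-Monte Carlo module; the parameter names, bounds, and sample count are hypothetical, and the GUI's actual backend may differ:

```python
import numpy as np
import pandas as pd
from scipy.stats import qmc

# Hypothetical parameters and bounds (not taken from the app itself)
names = ["Length", "Angle"]
lower = [1.0, 0.0]    # minimum physical limits
upper = [50.0, 90.0]  # maximum physical limits
n_samples = 30        # planned number of initial simulation runs (N)

# Latin Hypercube Sample; optimization="random-cd" nudges points apart,
# echoing the "maximize the minimum distance" goal
sampler = qmc.LatinHypercube(d=len(names), optimization="random-cd", seed=42)
unit_sample = sampler.random(n=n_samples)        # points in [0, 1)^d
scaled = qmc.scale(unit_sample, lower, upper)    # rescaled to physical bounds

pd.DataFrame(scaled, columns=names).to_csv("generated_sample.csv", index=False)
```

Each column of an LHS design lands exactly once in each of the \(N\) equal-width strata, which is what guarantees the even bound coverage described above.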
2. Simulation Diagnostics
Purpose: Validate your simulation inputs and catch problematic gaps before performing an expensive reliability analysis.
Once your external physics solver has calculated the outcomes for your generated framework, you will upload the results back into this tab for validation.
- Upload Data: Upload the merged CSV containing your input conditions and their resulting output signals.
- Assign Columns: Identify which columns represent your inputs and which single column represents your outcome.
- Diagnostic Report: The engine runs sanity checks across your data, testing for sufficient sample sizes and input coverage. A tabular report provides a clear Pass/Fail for every parameter.
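The Input Coverage test can be illustrated with a simple gap check. This is a sketch only: the GUI's actual diagnostic rules are not documented here, and the 25% gap threshold is an assumption chosen for the example:

```python
import numpy as np

def input_coverage_ok(values, max_gap_fraction=0.25):
    """Pass if no empty interval exceeds max_gap_fraction of the design range."""
    v = np.sort(np.asarray(values, dtype=float))
    span = v[-1] - v[0]
    if span == 0:
        return False  # a constant input covers nothing
    largest_gap = np.diff(v).max()
    return largest_gap / span <= max_gap_fraction

# An evenly covered input passes; an input with a hole fails
good = np.linspace(0, 10, 21)
bad = np.concatenate([np.linspace(0, 3, 10), np.linspace(8, 10, 10)])
print(input_coverage_ok(good), input_coverage_ok(bad))  # True False
```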
Adaptive Remediation
If the diagnostic engine detects a large gap or region of missing data (a failed Input Coverage test), the UI exposes a remediation tool.
Instead of guessing where to add new points, you can simply input how many new points you’d like to generate. The engine will target the specific empty regions and generate a new CSV containing targeted, gap-filling coordinates. Run these through your solver and merge the new data back into your main set!
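The gap-filling idea can be sketched in one dimension: locate the widest empty interval in the existing samples and place the requested number of new points inside it. The helper below is hypothetical, not the engine's actual algorithm:

```python
import numpy as np

def fill_largest_gap(values, n_new):
    """Place n_new evenly spaced points inside the largest empty interval."""
    v = np.sort(np.asarray(values, dtype=float))
    gaps = np.diff(v)
    i = int(np.argmax(gaps))       # widest empty region
    lo, hi = v[i], v[i + 1]
    # interior points only, so existing samples are not duplicated
    return np.linspace(lo, hi, n_new + 2)[1:-1]

existing = np.array([0.0, 1.0, 2.0, 9.0, 10.0])  # hole between 2 and 9
new_points = fill_largest_gap(existing, n_new=3)
print(new_points)  # [3.75 5.5  7.25]
```

Run the new coordinates through your solver, then append the resulting rows to your main CSV before re-running the diagnostics.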
3. Data Visualisation
Purpose: Deep-dive visual inspection of your data distribution and model stability.
This tab uses the data uploaded in the Simulation Diagnostics pane to generate interactive, exploratory plots.
- Summary Statistics: A complete breakdown of the Min, Median, Max, Mean, and Standard Deviation for every uploaded variable.
- Variable Inspector:
- Trace any single variable’s distribution (complete with Kernel Density Estimates and rug plots).
- Plot a variable against the outcome to visually inspect underlying trends (with linear projection).
- Note: If a variable failed the coverage check in Tab 2, its specific gap will be shaded in red and orange on these plots!
- Input Space Coverage Overview: A macro-panel that places histograms for all input parameters side-by-side, providing a fast visual confirmation of the entire experimental design’s health.
- Outcome Diagnostic Overview: Advanced evaluation of the underlying surrogate models:
- Actual vs Predicted (Model Fit CV R²): Visualises how well a degree-3 polynomial fits your raw data. Points hugging the diagonal indicate strong predictability.
- Bootstrap Convergence Trace: Traces the running average and maximum Relative Standard Deviation across 100 bootstrap iterations. A line flattening beneath the threshold indicates that your results are stable; a trace that is still falling or remains above the threshold suggests you need more physical samples.
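The convergence trace can be reproduced in miniature: resample the outcome with replacement, track the running Relative Standard Deviation of a summary statistic, and watch the trace flatten. A minimal sketch, assuming the mean as the resampled statistic (the manual does not specify which quantity the app bootstraps):

```python
import numpy as np

rng = np.random.default_rng(0)
outcome = rng.normal(loc=20.0, scale=2.0, size=200)  # stand-in signal data

boot_means = []
running_rsd = []
for _ in range(100):  # 100 bootstrap iterations, as in the GUI
    resample = rng.choice(outcome, size=outcome.size, replace=True)
    boot_means.append(resample.mean())
    m = np.mean(boot_means)
    s = np.std(boot_means)
    running_rsd.append(100.0 * s / m)  # Relative Standard Deviation (%)

# A trace that flattens beneath a small threshold indicates stable results
print(f"final running RSD: {running_rsd[-1]:.2f}%")
```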
4. PoD Analysis
Purpose: Construct the final Probability of Detection curves and surfaces.
With a validated and well-understood dataset, you define the parameters for the Generalized \(a\)-versus-\(\hat{a}\) Method.
- Variable Designation: Select your Parameters of Interest (PoI) & Nuisance Parameters (Optional). Nuisance Parameters are mathematically marginalised out via Monte Carlo Integration.
- Set Threshold: Define the critical threshold (e.g., a detection limit of `18.0`).
- Generate Outcomes: The system automatically identifies the best predictive polynomial structure, infers the error distribution, and generates bootstrap confidence intervals.
- Interactive Mapping: The app renders two interactive plots side-by-side:
- The Signal Response Model (The underlying Physics).
- The Probability of Detection Mapping (The resulting Reliability).
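The core of the \(a\)-versus-\(\hat{a}\) construction can be sketched in its simplest (linear, single-parameter) form: fit a signal-response model \(\hat{a} = \beta_0 + \beta_1 a + \varepsilon\), then convert the residual spread and the decision threshold into a PoD curve via the normal CDF. This toy example is only a sketch; the app's generalized, multi-parameter machinery with polynomial structure selection and nuisance-parameter marginalisation is more elaborate:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic data: signal response grows with parameter a, plus noise
a = np.linspace(1.0, 10.0, 60)
a_hat = 2.0 * a + rng.normal(scale=1.5, size=a.size)

# Fit the linear a-hat vs a model and estimate the residual spread
beta1, beta0 = np.polyfit(a, a_hat, 1)
sigma = np.std(a_hat - (beta0 + beta1 * a), ddof=2)

# PoD(a) = P(a_hat > threshold) under the fitted normal error model
threshold = 18.0  # the example detection limit from the Set Threshold step

def pod(size):
    return norm.sf((threshold - (beta0 + beta1 * size)) / sigma)

print(f"PoD at a=5: {pod(5.0):.3f}, PoD at a=10: {pod(10.0):.3f}")
```

The Signal Response Model plot corresponds to the fitted `beta0 + beta1 * a` line with its scatter; the PoD Mapping corresponds to `pod(a)` evaluated across the parameter range.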