# Learn Configuration
Source code:

- `bff/workflows/configs.py`
- `bff/workflows/learn.py`
- `bff/bayes/learning.py`
## Purpose
`bff learn` runs posterior inference using previously trained surrogate models.
The reference observation vectors and effective observation counts are read
from the surrogate files themselves, not from the original QoI datasets.
## Minimal Example

```yaml
log: out.log
specs: ../02-training-data/trainset/specs.yaml
models:
  rdf: ../04-train-lgp/models/rdf.lgp
  hb: ../04-train-lgp/models/hb.lgp
mcmc:
  total_steps: 10000
  warmup: 2000
  checkpoint: mcmc-checkpoint.pt
  posterior: posterior.pt
  priors: priors.pt
  restart: false
  device: cuda
```
## Top-Level Keys

- `log`: Workflow log file.
- `specs`: Force-field specification file from the trainset stage.
- `models`: Non-empty mapping from QoI name to trained `.lgp` model file.
- `mcmc`: Posterior-sampling settings.
## `models` Keys

Each key under `models` is the QoI name that should appear in logs and plots;
each value is the path to the corresponding trained `.lgp` file.
## `mcmc` Keys

- `priors_disttype`: Prior family; currently defaults to `normal`.
- `total_steps`: Total MCMC steps.
- `warmup`: Burn-in length.
- `thin`: Chain thinning factor.
- `progress_stride`: Logging interval.
- `n_walkers`: Optional walker count. If omitted, BFF chooses a default.
- `checkpoint`: Checkpoint file path.
- `posterior`: Posterior chain output path.
- `priors`: Prior output path.
- `restart`: Restart from the checkpoint if possible.
- `device`: Torch device for MCMC.
- `rhat_tol`: R-hat convergence threshold.
- `ess_min`: Minimum effective sample size target.
- `include_implicit_charge`: If `true`, include the implicit charge in prepared posterior samples.
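Putting the optional keys together, a fuller `mcmc` block might look like the sketch below. The numeric values are illustrative choices, not recommended defaults; consult the source for the actual defaults of each key.

```yaml
mcmc:
  priors_disttype: normal      # prior family (currently the default)
  total_steps: 50000           # total MCMC steps
  warmup: 5000                 # burn-in, discarded from the chain
  thin: 5                      # keep every 5th sample
  progress_stride: 1000        # log progress every 1000 steps
  n_walkers: 32                # illustrative; omit to let BFF choose
  checkpoint: mcmc-checkpoint.pt
  posterior: posterior.pt
  priors: priors.pt
  restart: true                # resume from the checkpoint if possible
  device: cuda                 # any torch device string, e.g. cpu
  rhat_tol: 1.05               # illustrative R-hat convergence threshold
  ess_min: 400                 # illustrative effective-sample-size target
  include_implicit_charge: false
```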