# Bayesian Inference (cvx-bayes)
## Motivation

Vector similarity answers “what is close?” but not “what is likely to succeed?”. A Bayesian network models the conditional dependencies between variables (task type, region, action, outcome) to compute:

P(success | task_type, region, action)
This captures interactions that linear scoring cannot: the same action has different success rates depending on the context.
## Theoretical Foundation

A Bayesian network is a directed acyclic graph (DAG) where:
- Nodes are random variables with discrete states
- Edges encode conditional dependencies (parent → child)
- CPTs (Conditional Probability Tables) store P(node | parents) for each variable given its parents
Inference computes the posterior by propagating beliefs through the graph.
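To make the propagation step concrete, here is a toy enumeration sketch for the two-parent structure used later on this page (task → success ← region). The prior and CPT values are made up, and the function name `posterior_success` is hypothetical, not part of cvx-bayes:

```rust
/// P(success = true | task), marginalizing over the unobserved region parent.
/// `p_region` is the prior over regions; `p_success[task][region]` is the CPT
/// entry P(success = true | task, region). Illustrative values only.
fn posterior_success(task: usize, p_region: &[f64; 2], p_success: &[[f64; 2]; 2]) -> f64 {
    (0..2).map(|r| p_region[r] * p_success[task][r]).sum()
}

fn main() {
    let p_region = [0.7, 0.3]; // P(kitchen), P(bathroom)
    let p_success = [[0.9, 0.5], [0.4, 0.2]];
    let p = posterior_success(0, &p_region, &p_success);
    println!("P(success | task = 0) = {p:.2}"); // 0.7 * 0.9 + 0.3 * 0.5 = 0.78
}
```

With more parents the same idea applies: sum the CPT entries over every configuration of the unobserved parents, weighted by their probabilities.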
## Laplace Smoothing

CPTs are learned from observations with Laplace smoothing to prevent zero probabilities:

P(x_i | parents) = (count(x_i, parents) + α) / (count(parents) + α · K)

where α is the pseudo-count (default 1.0) and K is the number of states.
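The update rule can be sketched in a few lines; `smoothed_distribution` is an illustrative stand-alone function, not the cvx-bayes `Cpt` implementation:

```rust
// Laplace-smoothed distribution for one parent configuration.
// `counts[i]` is the observed count of state i; `alpha` is the pseudo-count.
// Illustrative sketch only, not the cvx-bayes implementation.
fn smoothed_distribution(counts: &[f64], alpha: f64) -> Vec<f64> {
    let k = counts.len() as f64;
    let total: f64 = counts.iter().sum();
    counts.iter().map(|&c| (c + alpha) / (total + alpha * k)).collect()
}

fn main() {
    // No observations yet: smoothing yields a uniform prior.
    println!("{:?}", smoothed_distribution(&[0.0, 0.0], 1.0)); // [0.5, 0.5]

    // 3 successes, 1 failure with alpha = 1.0 → [(3+1)/6, (1+1)/6]
    println!("{:?}", smoothed_distribution(&[3.0, 1.0], 1.0));
}
```

Note that with zero observed counts the result is uniform, and as counts grow the pseudo-count's influence vanishes.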
## Architecture

```
cvx-bayes
├── Variable — discrete random variable with named states
├── Cpt — conditional probability table with online learning
└── BayesianNetwork — DAG structure + inference
```

## Define Variables
```rust
use cvx_bayes::{BayesianNetwork, Variable};

let mut bn = BayesianNetwork::new();

let task = bn.add_variable(Variable::new(0, "task_type", vec![
    "pick_and_place".into(),
    "heat_then_place".into(),
    "clean_then_place".into(),
]));

let region = bn.add_variable(Variable::new(1, "region", vec![
    "kitchen".into(),
    "bathroom".into(),
    "bedroom".into(),
]));

let success = bn.add_variable(Variable::binary(2, "success"));
```

## Define Dependencies
```rust
// Success depends on both task type and region
bn.add_edge(task, success);
bn.add_edge(region, success);
bn.initialize_cpts();
```

## Learn from Observations
```rust
// After each episode, observe the outcome
bn.observe(&[(task, 0), (region, 0), (success, 0)]); // pick in kitchen → success
bn.observe(&[(task, 1), (region, 1), (success, 1)]); // heat in bathroom → failure
```
```rust
// Update CPTs with accumulated counts
bn.update_cpts();

// P(success | task=heat, region=kitchen)
let p = bn.query(success, 0, &[(task, 1), (region, 0)]);
```
```rust
// Most likely outcome
let (state, prob) = bn.map_estimate(success, &[(task, 0), (region, 0)]);
```
```rust
// Full posterior distribution
let posterior = bn.posterior(success, &[(task, 2)]);
// posterior = [P(true), P(false)]
```

## Integration with CVX
The Bayesian network augments `scored_search` with context-aware probability estimates in place of the fixed-weight linear scorer:
1. CVX retrieves k candidates via HNSW
2. For each candidate, extract (task_type, region, action_type)
3. The BN computes P(success | task, region, action) per candidate
4. Re-rank by posterior probability
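The re-ranking step above can be sketched as a plain sort on the per-candidate posteriors. The `Candidate` struct and `rerank` function are hypothetical glue; the probabilities are assumed to come from the network's posterior query:

```rust
// Hypothetical candidate record: `p_success` is assumed to hold
// P(success | task, region, action) obtained from the Bayesian network.
struct Candidate {
    id: usize,
    p_success: f64,
}

// Re-rank candidates by posterior success probability, highest first.
// partial_cmp is safe to unwrap as long as no probability is NaN.
fn rerank(mut candidates: Vec<Candidate>) -> Vec<Candidate> {
    candidates.sort_by(|a, b| b.p_success.partial_cmp(&a.p_success).unwrap());
    candidates
}

fn main() {
    let ranked = rerank(vec![
        Candidate { id: 0, p_success: 0.42 },
        Candidate { id: 1, p_success: 0.91 },
        Candidate { id: 2, p_success: 0.63 },
    ]);
    let ids: Vec<usize> = ranked.iter().map(|c| c.id).collect();
    println!("{ids:?}"); // [1, 2, 0]
}
```

Ties and NaN-free probabilities are assumed; a production re-ranker might also blend the posterior with the original similarity score rather than replace it outright.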
## References

- Pearl, J. (1988). *Probabilistic Reasoning in Intelligent Systems*
- Koller, D., & Friedman, N. (2009). *Probabilistic Graphical Models*
- Murphy, K. (2012). *Machine Learning: A Probabilistic Perspective*