Bounded Variables

PyMC3 includes the construct Bound for placing constraints on existing probability distributions. It modifies a given distribution to take values only within a specified interval.

Some types of variables require constraints. For instance, it doesn't make sense for a standard deviation to have a negative value, so a Normal prior on a parameter that represents a standard deviation would be inappropriate. PyMC3 includes distributions with strictly positive support, such as Gamma and Exponential, as well as several bounded distributions, such as Uniform, HalfNormal, and HalfCauchy, that are restricted to a specific domain.

All univariate distributions in PyMC3 can be given bounds. The distribution of a continuous variable that has been bounded is automatically transformed into an unnormalized distribution whose domain is unconstrained. The transformation improves the efficiency of sampling and variational inference algorithms.
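To make the transformation concrete: a variable with a lower bound a is typically sampled as y = log(x − a), which ranges over the whole real line. A minimal sketch of this idea in plain Python (a hypothetical re-implementation for illustration, not PyMC3's internal code):

```python
import math

# Sketch of the "lowerbound" transform idea: for a lower bound a,
# the sampler works with y = log(x - a), which is unconstrained.
# Hypothetical re-implementation for illustration, not PyMC3 internals.
def forward(x, lower=0.0):
    return math.log(x - lower)

def backward(y, lower=0.0):
    return math.exp(y) + lower

x = 2.5
y = forward(x)
print(backward(y))       # round trip recovers x
print(backward(-100.0))  # any real y maps back above the bound
```

Because the sampler proposes values of y anywhere on the real line, no proposal can ever violate the bound, which is what makes gradient-based samplers and variational approximations work smoothly.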


For example, prior information may suggest that the value of a parameter representing a standard deviation is near one. One could use a Normal distribution centered there while constraining its support to be positive. The specification of a bounded distribution should go within the model block:

import pymc3 as pm

with pm.Model() as model:
    BoundedNormal = pm.Bound(pm.Normal, lower=0.0)
    x = BoundedNormal('x', mu=1.0, sd=3.0)

If the bound will be applied to a single variable in the model, it may be cleaner notationally to define both the bound and variable together.

with model:
    x = pm.Bound(pm.Normal, lower=0.0)('x', mu=1.0, sd=3.0)

Bounds can also be applied to a vector of random variables. Using the BoundedNormal object we created previously, we can write:

with model:
    x_vector = BoundedNormal('x_vector', mu=1.0, sd=3.0, shape=3)
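Bound also accepts lower and upper together, giving a two-sided constraint. Internally, a variable bounded on both sides is mapped onto the whole real line with a logit-style interval transform; here is a minimal sketch of that mapping in plain Python (a hypothetical re-implementation for illustration, not PyMC3's internal code):

```python
import math

# Interval transform sketch: maps a value in (lower, upper) onto
# the real line and back. Hypothetical re-implementation for
# illustration, not PyMC3 internals.
def forward(x, lower, upper):
    # log-odds of the position of x within (lower, upper)
    return math.log((x - lower) / (upper - x))

def backward(y, lower, upper):
    # inverse: a shifted, scaled logistic function
    return lower + (upper - lower) / (1.0 + math.exp(-y))

x = 1.25
y = forward(x, 1.0, 2.0)
print(backward(y, 1.0, 2.0))  # round trip recovers x
```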


  • Bounds cannot be given to variables that are observed. To model truncated data, use a Potential in combination with a cumulative probability function.
  • The automatic transformation applied to continuous distributions results in an unnormalized probability distribution. This doesn’t affect inference algorithms but may complicate some model comparison procedures.
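To see what the Potential-based correction for truncated data amounts to, here is a plain-Python sketch of the log-likelihood of Normal data truncated above at a known point (the helper functions are hypothetical and for illustration only; in a PyMC3 model the correction term would be supplied through pm.Potential):

```python
import math

def norm_logpdf(x, mu, sigma):
    # log density of a Normal(mu, sigma) at x
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))

def norm_cdf(x, mu, sigma):
    # Normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def truncated_loglike(data, mu, sigma, upper):
    # Each observation's density is renormalized by the probability
    # mass below the truncation point; this subtracted term is what
    # a Potential would contribute in a PyMC3 model.
    correction = len(data) * math.log(norm_cdf(upper, mu, sigma))
    return sum(norm_logpdf(x, mu, sigma) for x in data) - correction

data = [0.2, 0.5, 0.9]  # hypothetical observations truncated at 1.0
ll = truncated_loglike(data, mu=0.0, sigma=1.0, upper=1.0)
```

Because the correction term depends on the unknown parameters, it must enter the model's joint log-probability rather than being precomputed, which is why a Potential is the right tool.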