Defines estimators which estimate a model’s ability to represent the data.

All estimators have to implement the Estimator interface.

create_cached_function(expression: Expr, parameters: Mapping[Symbol, ParameterValue], backend: str, free_parameters: Iterable[Symbol], use_cse: bool = True) β†’ tuple[ParametrizedFunction, DataTransformer][source]#

Create a function and data transformer for cached computations.

Once it is known which parameters in an expression are to be optimized, this function makes it easy to cache constant sub-trees.

  • expression – The Expr that should be expressed in a computational backend.

  • parameters – Symbols in the expression that should be interpreted as parameters. The values in this mapping will be used in the returned ParametrizedFunction.parameters.

  • backend – The computational backend in which to express the input expression.

  • free_parameters – Symbols in the expression that change and should not be cached.

  • use_cse – See create_parametrized_function().


Returns a 'cached' ParametrizedFunction with only the free parameters that are to be optimized, together with a DataTransformer that transforms a data sample for the original expression into input for the cached function.

See also

This function is an extension of prepare_caching() and create_parametrized_function(). Constant sub-expressions shows how to use this function.
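The caching idea can be sketched in plain Python. This is an illustration of the concept only, not the actual tensorwaves API: `prepare_cached` and `shape` are hypothetical names. For an intensity of the form `(scale * shape(x))**2` where only `scale` is optimized, the sub-tree `shape(x)` is constant and can be evaluated once over the data sample, which is what the returned DataTransformer does.

```python
import math

# Illustrative sketch of the caching idea, not the real tensorwaves API.
# Only `scale` is a free parameter, so the sub-tree shape(x) is constant
# and can be evaluated once over the data sample.
def prepare_cached(shape, data):
    # Acts like the returned DataTransformer: precompute the constant
    # sub-tree over the data sample, once.
    cached_sample = [shape(x) for x in data]

    def cached_function(cached, scale):
        # Only the free parameter `scale` enters this computation.
        return [(scale * c) ** 2 for c in cached]

    return cached_function, cached_sample

cached_func, cached_data = prepare_cached(math.cos, data=[0.0, math.pi])
intensities = cached_func(cached_data, scale=2.0)
```

During optimization, `cached_func` is called many times with different values of `scale`, but `math.cos` is never re-evaluated, which is the point of caching constant sub-trees.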

gradient_creator(function: Callable[[Mapping[str, ParameterValue]], ParameterValue], backend: str) β†’ Callable[[Mapping[str, ParameterValue]], dict[str, ParameterValue]][source]#

class ChiSquared(function: ParametrizedFunction, domain: DataSample, observed_values: ndarray, weights: ndarray | None = None, backend: str = 'numpy')[source]#

Bases: Estimator

Chi-squared test estimator.

\[\chi^2 = \sum_{i=1}^n w_i\left(y_i - f_\mathbf{p}(x_i)\right)^2\]
  • function – A ParametrizedFunction \(f_\mathbf{p}\) with a set of free parameters \(\mathbf{p}\).

  • domain – Input data-set \(\mathbf{x}\) of \(n\) events \(x_i\) over which to compute function \(f_\mathbf{p}\).

  • observed_values – Observed values \(y_i\).

  • weights – Optional weights \(w_i\). Default: \(w_i=1\) (unweighted). A common choice is \(w_i = 1/\sigma_i^2\), with \(\sigma_i\) the uncertainty in each measured value of \(y_i\).

  • backend – Computational backend with which to compute the sum \(\sum_{i=1}^n\).

gradient(parameters: Mapping[str, ParameterValue]) β†’ dict[str, ParameterValue][source]#

Calculate the gradient for a given parameter mapping.
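The chi-squared sum above can be sketched in plain Python. This is illustrative only, not the tensorwaves implementation; `model` and `line` are hypothetical stand-ins for a ParametrizedFunction.

```python
# Plain-Python sketch of the chi-squared sum defined above; `model`
# stands in for a ParametrizedFunction and is a hypothetical example.
def chi_squared(model, parameters, domain, observed, weights=None):
    if weights is None:
        weights = [1.0] * len(observed)  # default: unweighted, w_i = 1
    return sum(
        w * (y - model(x, **parameters)) ** 2
        for w, y, x in zip(weights, observed, domain)
    )

def line(x, a, b):  # example model f_p with free parameters p = (a, b)
    return a * x + b

chi_squared(line, {"a": 2.0, "b": 0.0}, domain=[0, 1, 2], observed=[0.0, 2.1, 3.9])
```

A minimizer would vary the values in the `parameters` mapping to bring this sum as close to zero as possible.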

class UnbinnedNLL(function: ParametrizedFunction, data: DataSample, phsp: DataSample, phsp_volume: float = 1.0, backend: str = 'numpy')[source]#

Bases: Estimator

Unbinned negative log likelihood estimator.

The log likelihood \(\log\mathcal{L}\) for a given function \(f_\mathbf{p}: X^m \rightarrow \mathbb{R}\) over \(N\) data points \(\mathbf{x}\) and over a (phase space) domain of \(n\) points \(\mathbf{x}_\mathrm{phsp}\) is given by:

\[-\log\mathcal{L} = N\log\lambda -\sum_{i=1}^N \log\left(f_\mathbf{p}(x_i)\right)\]

with \(\lambda\) the normalization integral over \(f_\mathbf{p}\). The integral is computed numerically by averaging over a sufficiently large (phase space) domain sample \(\mathbf{x}_\mathrm{phsp}\) of size \(n\):

\[\lambda = \frac{\sum_{j=1}^n V f_\mathbf{p}(x_{\mathrm{phsp},j})}{n}.\]
  • function – A ParametrizedFunction \(f_\mathbf{p}\) that describes a distribution over a certain domain.

  • data – The DataSample \(\mathbf{x}\) over which to compute \(f_\mathbf{p}\).

  • phsp – The domain (phase space) with which the likelihood is normalized. When correcting for the detector efficiency, use a phase space sample that passed the detector reconstruction.

  • phsp_volume – Optional phase space volume \(V\), used in the normalization factor. Default: \(V=1\).

  • backend – The computational backend with which the sums and averages should be computed.

See also

Unbinned fit

gradient(parameters: Mapping[str, ParameterValue]) β†’ dict[str, ParameterValue][source]#

Calculate the gradient for a given parameter mapping.
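The two formulas above can be sketched in plain Python. This is illustrative only, not the tensorwaves implementation; `model` is a hypothetical stand-in for a ParametrizedFunction.

```python
import math

# Plain-Python sketch of the unbinned NLL defined above; `model` is a
# hypothetical stand-in for a ParametrizedFunction.
def unbinned_nll(model, parameters, data, phsp, phsp_volume=1.0):
    # Normalization integral: lambda = sum_j V * f(x_phsp_j) / n
    normalization = phsp_volume * sum(
        model(x, **parameters) for x in phsp
    ) / len(phsp)
    # -log L = N log(lambda) - sum_i log f(x_i)
    return len(data) * math.log(normalization) - sum(
        math.log(model(x, **parameters)) for x in data
    )
```

As a sanity check, for a flat model \(f_\mathbf{p}(x)=c\) the two terms cancel up to \(N\log V\), so with \(V=1\) the estimator is zero regardless of \(c\); only the shape of \(f_\mathbf{p}\), not its overall scale, affects the minimum.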