estimator#
import tensorwaves.estimator
Defines estimators that estimate a model's ability to represent the data.
All estimators have to implement the Estimator
interface.
- create_cached_function(expression: Expr, parameters: Mapping[Symbol, ParameterValue], backend: str, free_parameters: Iterable[Symbol], use_cse: bool = True) tuple[ParametrizedFunction[DataSample, ndarray], DataTransformer] [source]#
Create a function and data transformer for cached computations.
Once it is known which parameters in an expression are to be optimized, this function makes it easy to cache constant sub-trees.
- Parameters:
  - expression – The Expr that should be expressed in a computational backend.
  - parameters – Symbols in the expression that should be interpreted as parameters. The values in this mapping will be used in the returned ParametrizedFunction.parameters.
  - backend – The computational backend in which to express the input expression.
  - free_parameters – Symbols in the expression that change and should not be cached.
  - use_cse – See create_parametrized_function().
- Returns:
  A "cached" ParametrizedFunction with only the free parameters that are to be optimized and a DataTransformer that needs to be used to transform a data sample for the original expression to the cached function.
See also
This function is an extension of prepare_caching() and create_parametrized_function(). Constant sub-expressions shows how to use this function.
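The caching idea can be illustrated without tensorwaves. In the following NumPy sketch, the model `intensity` and the helper `make_cached_intensity` are hypothetical stand-ins: the helper plays the role of the returned DataTransformer plus cached function, pre-computing every sub-tree that does not contain the free parameter.

```python
import numpy as np

# Hypothetical model (not from tensorwaves): I(x; a, b) = a*sin(x)^2 + b*cos(x)^2,
# where only `a` is a free parameter. The sub-expression b*cos(x)^2 is then
# constant during optimization and can be computed once.
def intensity(x, a, b):
    return a * np.sin(x) ** 2 + b * np.cos(x) ** 2

def make_cached_intensity(x, b):
    # Pre-compute ("cache") the sub-trees that do not depend on the free
    # parameter `a`; only the cheap final combination is re-evaluated per fit step.
    cached_sin2 = np.sin(x) ** 2
    cached_const = b * np.cos(x) ** 2
    def cached_intensity(a):
        return a * cached_sin2 + cached_const
    return cached_intensity

x = np.linspace(0, np.pi, 5)
f = make_cached_intensity(x, b=2.0)
assert np.allclose(f(3.0), intensity(x, 3.0, 2.0))
```

The real `create_cached_function()` performs this split automatically on a SymPy expression tree, given which symbols are `free_parameters`.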
- gradient_creator(function: Callable[[Mapping[str, ParameterValue]], ParameterValue], backend: str) Callable[[Mapping[str, ParameterValue]], dict[str, ParameterValue]] [source]#
- class ChiSquared(function: ParametrizedFunction[DataSample, ndarray], domain: DataSample, observed_values: ndarray, weights: ndarray | None = None, backend: str = 'numpy')[source]#
Bases:
Estimator
Chi-squared test estimator.
\[\chi^2 = \sum_{i=1}^n w_i\left(y_i - f_\mathbf{p}(x_i)\right)^2\]
- Parameters:
  - function – A ParametrizedFunction \(f_\mathbf{p}\) with a set of free parameters \(\mathbf{p}\).
  - domain – Input data set \(\mathbf{x}\) of \(n\) events \(x_i\) over which to compute the function \(f_\mathbf{p}\).
  - observed_values – Observed values \(y_i\).
  - weights – Optional weights \(w_i\). Default: \(w_i=1\) (unweighted). A common choice is \(w_i = 1/\sigma_i^2\), with \(\sigma_i\) the uncertainty in each measured value \(y_i\).
  - backend – Computational backend with which to compute the sum \(\sum_{i=1}^n\).
See also
- gradient(parameters: Mapping[str, ParameterValue]) dict[str, ParameterValue] [source]#
Calculate gradient for given parameter mapping.
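The chi-squared definition above can be verified with a small NumPy sketch. The `chi_squared` helper and the `line` model below are illustrative only, not the ChiSquared class itself:

```python
import numpy as np

def chi_squared(f, parameters, x, y, weights=None):
    # chi^2 = sum_i w_i * (y_i - f_p(x_i))^2, with w_i = 1 by default
    if weights is None:
        weights = np.ones_like(y)
    residuals = y - f(x, **parameters)
    return float(np.sum(weights * residuals ** 2))

def line(x, a, b):
    # Example model f_p with free parameters p = (a, b)
    return a * x + b

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])  # lies exactly on y = 2x + 1
assert chi_squared(line, {"a": 2.0, "b": 1.0}, x, y) == 0.0  # perfect fit
assert chi_squared(line, {"a": 2.0, "b": 0.0}, x, y) == 3.0  # residuals (1,1,1)
```

An optimizer minimizes this scalar with respect to the free parameters in `parameters`.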
- class UnbinnedNLL(function: ParametrizedFunction[DataSample, ndarray], data: DataSample, phsp: DataSample, phsp_volume: float = 1.0, backend: str = 'numpy')[source]#
Bases:
Estimator
Unbinned negative log likelihood estimator.
The log likelihood \(\log\mathcal{L}\) for a given function \(f_\mathbf{p}: X^m \rightarrow \mathbb{R}\) over \(N\) data points \(\mathbf{x}\) and over a (phase space) domain sample \(\mathbf{x}_\mathrm{phsp}\) of \(n\) points, is given by:
\[-\log\mathcal{L} = N\log\lambda -\sum_{i=1}^N \log\left(f_\mathbf{p}(x_i)\right)\]
with \(\lambda\) the normalization integral over \(f_\mathbf{p}\). The integral is computed numerically by averaging over a sufficiently large (phase space) domain sample \(\mathbf{x}_\mathrm{phsp}\) of size \(n\):
\[\lambda = \frac{\sum_{j=1}^n V f_\mathbf{p}(x_{\mathrm{phsp},j})}{n}.\]
- Parameters:
  - function – A ParametrizedFunction \(f_\mathbf{p}\) that describes a distribution over a certain domain.
  - data – The DataSample \(\mathbf{x}\) over which to compute \(f_\mathbf{p}\).
  - phsp – The domain (phase space) sample with which the likelihood is normalized. When correcting for the detector efficiency, use a phase space sample that passed the detector reconstruction.
  - phsp_volume – Optional phase space volume \(V\), used in the normalization factor. Default: \(V=1\).
  - backend – The computational backend with which the sums and averages should be computed.
See also
- gradient(parameters: Mapping[str, ParameterValue]) dict[str, ParameterValue] [source]#
Calculate gradient for given parameter mapping.
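The two formulas above can be combined into a short NumPy sketch. The `unbinned_nll` helper below is illustrative, not the UnbinnedNLL class itself; the sanity check uses a constant density, for which the NLL is exactly zero regardless of the constant:

```python
import numpy as np

def unbinned_nll(f, parameters, data, phsp, phsp_volume=1.0):
    # lambda = (1/n) * sum_j V * f_p(x_phsp_j): numerical normalization integral
    normalization = np.mean(phsp_volume * f(phsp, **parameters))
    # -log L = N log(lambda) - sum_i log(f_p(x_i))
    n_data = len(data)
    return n_data * np.log(normalization) - np.sum(np.log(f(data, **parameters)))

rng = np.random.default_rng(0)
data = rng.uniform(size=100)
phsp = rng.uniform(size=1_000)

def constant(x, c):
    # Constant (unnormalized) density over the unit interval
    return np.full_like(x, c)

# For a constant density: -log L = N log c - N log c = 0, for any c > 0
assert abs(unbinned_nll(constant, {"c": 3.0}, data, phsp)) < 1e-9
```

Because the normalization \(\lambda\) appears inside the estimator, \(f_\mathbf{p}\) itself does not need to be normalized; only its shape over the phase space sample matters.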