Binned fit#

Imagine we have the following data distribution over \(x\):

import numpy as np

sample_size = 1_000
rng = np.random.default_rng(seed=0)
x_distribution = rng.power(3, size=sample_size)
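
numpy.random.Generator.power(a) draws samples from the probability density \(a x^{a-1}\) on the interval \([0, 1]\), so the analytic mean is \(a/(a+1) = 0.75\) for \(a=3\). As a quick sanity check (a sketch, not part of the pipeline above), the sample mean should land close to that value:

```python
import numpy as np

# Generator.power(a) samples the PDF a * x**(a - 1) on [0, 1]
rng = np.random.default_rng(seed=0)
sample = rng.power(3, size=1_000)

# Analytic mean of the power distribution: a / (a + 1)
analytic_mean = 3 / (3 + 1)
sample_mean = sample.mean()
```

With a sample of 1 000 points, the standard error of the mean is roughly 0.006, so the two values should agree to about two decimal places.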

We bin the sample into a histogram with matplotlib.pyplot.hist(). The bin centers will serve as \(x\)-values and the bin heights as the observed \(y\)-values.

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 5))
ax.set_xlabel("$x$")
n_bins = 50
bin_values, bin_edges, _ = ax.hist(x_distribution, bins=n_bins)
plt.show()
x_values = (bin_edges[1:] + bin_edges[:-1]) / 2
y_values = bin_values
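
If only the binned arrays are needed (without drawing the plot), numpy.histogram() returns the same bin values and edges that matplotlib.pyplot.hist() computes internally; a minimal equivalent sketch:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
x_distribution = rng.power(3, size=1_000)

# np.histogram() performs the same binning as ax.hist(x_distribution, bins=50)
bin_values, bin_edges = np.histogram(x_distribution, bins=50)
x_values = (bin_edges[1:] + bin_edges[:-1]) / 2  # bin centers
y_values = bin_values
```

This yields one \((x, y)\) pair per bin, which is the input format the fit below expects.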

The data distribution has been generated with numpy.random.Generator.power() and can therefore be described by the following expression:

import sympy as sp

n, a, x = sp.symbols("n a x")
expression = n * x ** (a - 1)
expression
\[\displaystyle n x^{a - 1}\]

We would like to find the values of \(a\) and \(n\) that best describe this distribution. For fast computations, we first formulate the expression as a ParametrizedFunction with numpy as its computational backend (numpy is fast enough for the small data samples we work with here). We also provide a first guess for the values of \(a\) and \(n\).
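
The cell that constructs `function` and `initial_parameters` is collapsed on this page. In tensorwaves this step is typically done with create_parametrized_function(); as a self-contained stand-in, the same behavior can be mimicked with sympy.lambdify(). Note that the parameter guesses below are illustrative assumptions, not the hidden cell's actual values:

```python
import numpy as np
import sympy as sp

n, a, x = sp.symbols("n a x")
expression = n * x ** (a - 1)

# Stand-in for a ParametrizedFunction: lambdify the expression
# to a plain numpy function of x and the two parameters
numpy_function = sp.lambdify((x, a, n), expression, modules="numpy")

# Illustrative first guess -- an assumption, not the hidden cell's values
initial_parameters = {"a": 2.0, "n": 30.0}

x_grid = np.linspace(0.1, 1.0, 5)
y_model = numpy_function(x_grid, **initial_parameters)
```

With \(a=2\) the model reduces to \(n x\), so the stand-in evaluates to a straight line over the grid.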

function.update_parameters(initial_parameters)
fig, ax = plt.subplots(figsize=(8, 5))
ax.set_xlabel("$x$")
ax.hist(x_distribution, bins=n_bins, label="Data distribution")
ax.plot(x_values, function({"x": x_values}), label="Initial fit model", c="red")
ax.legend()
plt.show()

This binned distribution lends itself well to optimization with a ChiSquared estimator.

from tensorwaves.estimator import ChiSquared
from tensorwaves.optimizer import Minuit2

estimator = ChiSquared(
    function,
    domain={"x": x_values},
    observed_values=y_values,
    backend="numpy",
)
optimizer = Minuit2()
fit_result = optimizer.optimize(estimator, initial_parameters)
fit_result
FitResult(
 minimum_valid=True,
 execution_time=0.0227508544921875,
 function_calls=62,
 estimator_value=799.3094411131793,
 parameter_values={ 'a': 3.1589937147180245, 'n': 58.13793430101528,
 },
 parameter_errors={ 'a': 0.020006328065594166, 'n': 0.31140874864064955,
 },
)
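
A χ² estimator minimizes the sum of squared residuals between model and observation, \(\chi^2 = \sum_i \bigl(f(x_i) - y_i\bigr)^2\). A plain-numpy sketch of this quantity, assuming unit weights, confirms that the optimized parameters yield a smaller value than the generating values \(a=3\), \(n=60\):

```python
import numpy as np

# Rebuild the binned data (same seed and binning as above)
rng = np.random.default_rng(seed=0)
x_distribution = rng.power(3, size=1_000)
bin_values, bin_edges = np.histogram(x_distribution, bins=50)
x_values = (bin_edges[1:] + bin_edges[:-1]) / 2

def chi_squared(a, n):
    # Sum of squared residuals between model and observed bin heights
    residuals = n * x_values ** (a - 1) - bin_values
    return np.sum(residuals**2)

# Fitted parameters from the FitResult above vs. a nearby guess
chi2_fit = chi_squared(a=3.1589937147180245, n=58.13793430101528)
chi2_guess = chi_squared(a=3.0, n=60.0)
```

The optimizer has converged to a (local) minimum of this function, so `chi2_fit` should be the smaller of the two values.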

The optimized value of \(a\) indeed lies close to the value of \(a=3\) with which we generated the distribution.

function.update_parameters(fit_result.parameter_values)
fig, ax = plt.subplots(figsize=(8, 5))
ax.set_xlabel("$x$")
ax.hist(x_distribution, bins=n_bins, label="Data distribution")
ax.plot(x_values, function({"x": x_values}), label="Optimized model", c="red")
ax.legend()
plt.show()

Also note that the integral of the expression with these optimized parameter values is close to the total area of the histogram bins:

bin_width = bin_edges[1] - bin_edges[0]
surface = bin_values.sum() * bin_width
surface
18.424306915235626
from IPython.display import Math

substitutions = {
    a: fit_result.parameter_values["a"],
    n: fit_result.parameter_values["n"],
}
integral = sp.Integral(expression, (x, 0, 1))
evaluated_integral = integral.subs(substitutions).doit()
latex = sp.multiline_latex(integral, evaluated_integral, environment="eqnarray")
Math(latex)
\[\displaystyle \begin{eqnarray} \int\limits_{0}^{1} n x^{a - 1}\, dx & = & 18.4039411126859 \end{eqnarray}\]
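
The same number follows from the closed form of the integral, \(\int_0^1 n x^{a-1}\,dx = n/a\), so the evaluated integral is simply the ratio of the fitted parameters:

```python
a_fit = 3.1589937147180245
n_fit = 58.13793430101528

# Closed form: the antiderivative of n * x**(a - 1) is (n / a) * x**a,
# so the integral from 0 to 1 equals n / a
integral_value = n_fit / a_fit  # ~18.40, matching the SymPy result above
```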