Core ideas illustrated#
At its core, tensorwaves is a package that can ‘fit’ arbitrary mathematical expressions to a data set using different computational back-ends. It can also use those expressions to describe a distribution over which to generate data samples.
This page illustrates what’s going on behind the scenes with some simple 1-dimensional and 2-dimensional expressions. The main steps are:
1. Formulate a mathematical expression with sympy.
2. Generate a data sample for the distribution described by that expression.
3. Express the expression as a function in some computational back-end.
4. Tweak the parameters and fit the ParametrizedFunction to the generated distribution.
import os
import warnings
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sympy as sp
from IPython.display import display
from matplotlib import MatplotlibDeprecationWarning
from sympy.plotting import plot3d
from tensorwaves.estimator import UnbinnedNLL
from tensorwaves.function.sympy import create_parametrized_function
from tensorwaves.optimizer import Minuit2, ScipyMinimizer
from tensorwaves.optimizer.callbacks import (
CallbackList,
CSVSummary,
TFSummary,
YAMLSummary,
)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
warnings.filterwarnings("ignore", category=MatplotlibDeprecationWarning)
Formulate model#
First of all, we’ll formulate some expression with sympy. In this example, we take a sum of Gaussians plus a Poisson distribution.
def gaussian(x: sp.Symbol, mu: sp.Symbol, sigma: sp.Symbol) -> sp.Expr:
return sp.exp(-(((x - mu) / sigma) ** 2) / 2)
def poisson(x: sp.Symbol, k: sp.Symbol) -> sp.Expr:
return x**k * sp.exp(-x) / sp.factorial(k)
x, mu, sigma = sp.symbols("x mu sigma")
k = sp.Symbol("k", integer=True)
lam = sp.Symbol("lambda", positive=True)
display(
gaussian(x, mu, sigma),
poisson(lam, k),
)
x, a, b, c, mu1, mu2, sigma1, sigma2 = sp.symbols("x (a:c) mu_(:2) sigma_(:2)")
expression_1d = (
a * gaussian(x, mu1, sigma1) + b * gaussian(x, mu2, sigma2) + c * poisson(x, k=2)
)
expression_1d
The expression above consists of a number of Symbols that we want to identify as parameters (which we want to optimize with regard to a certain data sample) and variables (in which the data sample is expressed). Let’s say \(x\) is the variable and that the rest of the Symbols are the parameters.
Here, we’ll pick some default values for the parameters and use them to plot the model with regard to the variable \(x\) (see subs()). The default values are used later on as well when we generate data.
%config InlineBackend.figure_formats = ['svg']
parameter_defaults = {
a: 0.15,
b: 0.05,
c: 0.3,
mu1: 1.0,
sigma1: 0.3,
mu2: 2.7,
sigma2: 0.5,
}
x_range = (x, 0, 5)
substituted_expr_1d = expression_1d.subs(parameter_defaults)
p1 = sp.plot(substituted_expr_1d, x_range, show=False, line_color="red")
p2 = sp.plot(*substituted_expr_1d.args, x_range, show=False, line_color="gray")
p2.append(p1[0])
p2.show()
Convert to backend#
So far, all we did was use sympy to symbolically formulate a mathematical expression. We now need to lambdify() that expression to some computational backend, so that we can efficiently generate data and/or optimize the parameters in the function to ‘fit’ the model to some data sample. TensorWaves can do this with the create_parametrized_function() function:
function_1d = create_parametrized_function(
expression=expression_1d,
parameters=parameter_defaults,
backend="jax",
use_cse=False,
)
Tip
Here, we used use_cse=False in create_parametrized_function(). Setting this argument to True (the default) causes sympy to search for common sub-expressions, which speeds up lambdification of large expressions and makes the lambdified source code more efficient. See also cse().
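As a small, standalone illustration of what common sub-expression elimination does, we can call sympy’s cse() directly. The symbols u and v below are introduced purely for this example; the snippet is not part of the tensorwaves workflow:
u, v = sp.symbols("u v")
common = (u + v) ** 2  # this sub-expression appears twice below
substitutions, reduced = sp.cse(common + sp.exp(common))
print(substitutions)  # extracted (symbol, sub-expression) pairs, e.g. x0 = (u + v)**2
print(reduced)  # the expression rewritten in terms of those symbols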
The resulting ParametrizedBackendFunction internally carries some source code that numpy understands. With get_source_code(), we can see that it indeed looks similar to the expression that we formulated in Formulate model:
from black import FileMode, format_str
from tensorwaves.function import get_source_code
src = get_source_code(function_1d)
src = format_str(src, mode=FileMode())
print(src)
def _lambdifygenerated(x, a, b, c, mu_0, mu_1, sigma_0, sigma_1):
return (
a * exp(-1 / 2 * (-mu_0 + x) ** 2 / sigma_0**2)
+ b * exp(-1 / 2 * (-mu_1 + x) ** 2 / sigma_1**2)
+ (1 / 2) * c * x**2 * exp(-x)
)
The ParametrizedBackendFunction also carries the original default values for the parameters that we defined earlier on.
function_1d.parameters
{'a': 0.15,
'b': 0.05,
'c': 0.3,
'mu_0': 1.0,
'sigma_0': 0.3,
'mu_1': 2.7,
'sigma_1': 0.5}
ParametrizedFunction.__call__() takes a dict that maps variable names (here, only "x") to the value(s) that should be used in their place.
function_1d({"x": 0})
Array(0.00057991, dtype=float64, weak_type=True)
This is where we move to data generation: the input values are usually an array of values (expressed in the backend):
rng = np.random.default_rng()
x_values = np.linspace(0, 5, num=20)
y_values = function_1d({"x": x_values})
y_values
Array([0.00057991, 0.01533186, 0.06767626, 0.1597476 , 0.20593748,
0.15694528, 0.10445861, 0.09506037, 0.10579904, 0.1189151 ,
0.12428972, 0.11587303, 0.09647026, 0.07504325, 0.05834281,
0.0473477 , 0.03998121, 0.03433191, 0.02951671, 0.02526857], dtype=float64)
%config InlineBackend.figure_formats = ['svg']
fig, ax = plt.subplots()
ax.scatter(x_values, y_values)
ax.set_xlabel("$x$")
ax.set_ylabel("$f(x)$")
plt.show()
Generate data#
So, we now have a function \(f\) of \(x\) expressed in some computational backend. This function describes a distribution over \(x\). In the real world, \(x\) is an observable from a process you measure. But sometimes, it’s useful to generate a ‘toy’ data sample for your model function as well, to try it out.
Hit & miss#
The challenge is to generate values of \(x\) with a density that is proportional to the value of the function evaluated at that point. To do this, we use a hit & miss approach:
1. Generate a random value for \(x\) within the domain \((x_\mathrm{min}, x_\mathrm{max})\) on which you want to generate the data sample.
2. Generate a random value \(y\) between \(0\) and the maximum value \(y_\mathrm{max}\) of the function over the domain of \(x\).
3. Check if \(y\) lies below \(f(x)\) (“hit”) or above (“miss”).
4. If there is a “hit”, accept this value of \(x\) and add it to the data sample.
We keep performing this until the sample of \(x\) values contains the desired number of events.
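In code, the basic algorithm looks roughly like this. This is a minimal NumPy sketch that assumes \(y_\mathrm{max}\) is known; the hit_and_miss() helper and its arguments are hypothetical illustrations, not part of tensorwaves:
def hit_and_miss(f, x_min, x_max, y_max, n_events, rng):
    """Minimal 1-dimensional hit & miss sampler (illustration only)."""
    accepted = []
    while len(accepted) < n_events:
        x = rng.uniform(x_min, x_max)  # step 1: random x in the domain
        y = rng.uniform(0, y_max)  # step 2: random y below the maximum
        if y < f(x):  # steps 3 and 4: accept x only on a "hit"
            accepted.append(x)
    return np.array(accepted)
For instance, hit_and_miss(lambda x: float(function_1d({"x": x})), 0, 5, 0.21, 1_000, np.random.default_rng()) would draw a small sample from our model, albeit slowly, since it evaluates one point at a time; the tensorwaves classes introduced below work on whole arrays instead.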
%config InlineBackend.figure_formats = ['svg']
x_domain = np.linspace(0, 5, num=200)
y_values = function_1d({"x": x_domain})
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(x_domain, y_values)
ax.set_xlabel("$x$")
ax.set_ylabel("$f(x)$")
x_min = x_range[1]
x_max = x_range[2]
y_max = 0.21
x_value = 1.5
line_segment = [[0, 0], [0, y_max]]
ax.plot(*line_segment, color="black")
ax.text(
-0.22,
y_max / 2 * 0.5,
"uniform sample $(0, y_{max})$",
rotation="vertical",
)
ax.axhline(y=y_max, linestyle="dotted", color="black")
ax.text(
x_min + 0.1,
y_max - 0.01,
"$y_{max}$",
)
line_segment = [[x_min, x_max], [0, 0]]
ax.plot(*line_segment, color="black")
ax.text(
(x_max - x_min) / 2 - 0.22,
0.005,
R"uniform sample $(x_\mathrm{min}, x_\mathrm{max})$",
)
ax.scatter(x_value, function_1d({"x": x_value}))
ax.axvline(x=x_value, linestyle="dotted")
def draw_y_hit(ax, x_random, y_random):
y_value = function_1d({"x": x_random})
color = "green" if y_random < y_value else "red"
text = "hit" if y_random < y_value else "miss"
ax.scatter(0, y_random, color=color)
ax.arrow(
x=0,
y=y_random,
dx=x_random,
dy=0,
head_length=0.15,
length_includes_head=True,
color=color,
linestyle="dotted",
)
ax.text(x_value + 0.05, y_random, text)
draw_y_hit(ax, x_random=x_value, y_random=0.05)
draw_y_hit(ax, x_random=x_value, y_random=0.17)
There is one problem though: how to determine \(y_\mathrm{max}\)? In this example, we can just read off the value of \(y_\mathrm{max}\), or even compute it analytically from the original sympy.Expr. Generally, though, this is not the case, so we need to apply a trick.
Since we are generating uniformly distributed random values of \(x\) and computing their \(f(x)\) values, we can keep track of which value of \(f(x)\) is the highest. Starting with \(y_\mathrm{max} = 0\), we simply set \(y_\mathrm{max} = f(x)\) whenever \(f(x) > y_\mathrm{max}\) and completely restart the generation loop. Eventually, some value of \(x\) will lie near the absolute maximum of \(f\) and the data generation will happily continue until the requested number of events has been reached.
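That restart logic could look as follows (again a hypothetical sketch, extending the hit_and_miss() helper shown earlier):
def hit_and_miss_adaptive(f, x_min, x_max, n_events, rng):
    """Hit & miss with an adaptively estimated maximum (illustration only)."""
    y_max = 0.0
    accepted = []
    while len(accepted) < n_events:
        x = rng.uniform(x_min, x_max)
        f_x = f(x)
        if f_x > y_max:  # found a higher function value:
            y_max = f_x  # raise the estimated maximum...
            accepted = []  # ...and restart with an empty sample
            continue
        if rng.uniform(0, y_max) < f_x:
            accepted.append(x)
    return np.array(accepted)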
Warning
There are two caveats:
1. The domain sample (here: the uniformly distributed values of \(x\)) has to be large in order for the data sample to accurately describe the original function.
2. If the function displays narrow structures, like a sharp global maximum containing \(y_\mathrm{max}\), chances are smaller that a value of \(x\) will lie within such a peak. The domain sample will therefore have to be even larger. It will also take longer before \(y_\mathrm{max}\) is found.
Domain distribution#
First of all, we need to randomly generate values of \(x\). In this simple, 1-dimensional example, we could just use a random generator like numpy.random.Generator and feed its output to ParametrizedFunction.__call__(). Generally, though, we want to cover \(n\)-dimensional cases. The class NumpyDomainGenerator allows us to generate such a uniform distribution for each variable within a certain range. It requires a RealNumberGenerator (here we use NumpyUniformRNG) and it also requires us to define boundaries for each variable in the resulting DataSample.
from tensorwaves.data import NumpyDomainGenerator, NumpyUniformRNG
rng = NumpyUniformRNG(seed=0)
domain_generator = NumpyDomainGenerator(boundaries={"x": (0, 5)})
domain = domain_generator.generate(1_000_000, rng)
Tip
Set a seed in the RealNumberGenerator if you want to generate a deterministic data sample. If you leave it unspecified, you get a non-deterministic data sample.
Tip
You can disable the progress bar through the logging module:
import logging
logging.getLogger("tensorwaves.data").setLevel(logging.ERROR)
Use "tensorwaves"
to regulate all tensorwaves
logging.
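logging.getLogger("tensorwaves").setLevel(logging.ERROR)  # silence all tensorwaves loggers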
When we feed the generated domain sample to the ParametrizedBackendFunction and use its output values as weights for a histogram of the uniform domain sample, we see that this nicely produces the distribution expected from the model we defined:
%config InlineBackend.figure_formats = ['png']
plt.hist(
domain["x"],
bins=200,
density=True,
alpha=0.5,
label="uniform",
)
plt.hist(
domain["x"],
weights=np.array(function_1d(domain)),
bins=200,
alpha=0.5,
density=True,
label="weighted with $f$",
)
plt.legend();

Note
In PWA, the sample on which we perform hit-and-miss is not uniform, because the available space is limited by the masses of the initial and final state (phase space). See TFPhaseSpaceGenerator and Step 2: Generate data.
Intensity distribution#
With a Domain distribution in hand, we can work out an implementation of the Hit & miss approach. The IntensityDistributionGenerator class helps us do this:
from tensorwaves.data import IntensityDistributionGenerator
data_generator = IntensityDistributionGenerator(domain_generator, function_1d)
data = data_generator.generate(1_000_000, rng)
And indeed, it results in the correct distribution!
%config InlineBackend.figure_formats = ['png']
plt.hist(data["x"], bins=200);

Optimize the model#
For the rest, the procedure is really just the same as that sketched in Step 3: Perform fit.
We tweak the parameters a bit, then use ParametrizedBackendFunction.update_parameters() to change the function…
initial_parameters = {
"a": 0.2,
"b": 0.1,
"c": 0.2,
"mu_0": 0.9,
"sigma_0": 0.4,
"sigma_1": 0.4,
}
function_1d.update_parameters(initial_parameters)
…compare what this looks like to the data…
%config InlineBackend.figure_formats = ['png']
plt.hist(data["x"], bins=200, density=True)
plt.hist(
domain["x"],
weights=np.array(function_1d(domain)),
bins=200,
histtype="step",
color="red",
density=True,
);

…define an Estimator and choose jax as backend…
estimator = UnbinnedNLL(function_1d, data, domain, backend="jax")
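The domain sample serves to normalize \(f\) over the domain by Monte Carlo integration, which is why it is passed alongside the data. Schematically (our reading of this estimator, not a formula quoted from the tensorwaves API), the quantity that gets minimized is
\[
-\log L = -\sum_{x \in \mathrm{data}} \log \frac{f(x)}{\frac{1}{N_\mathrm{domain}} \sum_{x' \in \mathrm{domain}} f(x')} \,.
\]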
…optimize with Minuit2 (the callback argument is optional; see Callbacks).
minuit2 = Minuit2(
callback=CallbackList([
CSVSummary("traceback-1D.csv"),
YAMLSummary("fit-result-1D.yaml"),
YAMLSummary("fit-result-1D-git-friendly.yaml", git_friendly=True),
TFSummary(),
])
)
fit_result = minuit2.optimize(estimator, initial_parameters)
fit_result
FitResult(
minimum_valid=True,
execution_time=10.183971881866455,
function_calls=212,
estimator_value=-186820.05513799412,
parameter_values={
'a': 0.14037617000382285,
'b': 0.04729039414126555,
'c': 0.2795029413063146,
'mu_0': 1.0019881244460875,
'sigma_0': 0.3011346771379525,
'sigma_1': 0.4904526974249457,
},
parameter_errors={
'a': 0.01896974762455273,
'b': 0.006408627701743625,
'c': 0.03774911885890374,
'mu_0': 0.0009358262229918435,
'sigma_0': 0.0007578363984747491,
'sigma_1': 0.0040977335736983094,
},
)
Tip
For complicated expressions, the fit can be made faster with create_cached_function(). See Constant sub-expressions.
And again, we have a look at the resulting fit, as well as what happened during the optimization.
optimized_parameters = fit_result.parameter_values
function_1d.update_parameters(optimized_parameters)
%config InlineBackend.figure_formats = ['png']
plt.hist(data["x"], bins=200, density=True)
plt.hist(
domain["x"],
weights=np.array(function_1d(domain)),
bins=200,
histtype="step",
color="red",
density=True,
);

Callbacks#
The Minuit2 optimizer above was constructed with callbacks. Callbacks allow us to insert behavior into the fit procedure of the optimizer. In this example, we use CallbackList to stack several Callback classes: CSVSummary, YAMLSummary, and TFSummary.
YAMLSummary writes the latest fit result to disk. It’s a Loadable callback and can be used to pick up a fit later on, for instance if it was aborted.
latest_parameters = YAMLSummary.load_latest_parameters("fit-result-1D.yaml")
latest_parameters
{'a': 0.14037617000382285,
'b': 0.04729039414126555,
'c': 0.2795029413063146,
'mu_0': 1.0019881244460875,
'sigma_0': 0.3011346771379525,
'sigma_1': 0.4904526974249457}
optimizer = Minuit2()
optimizer.optimize(estimator, latest_parameters)
FitResult(
minimum_valid=True,
execution_time=4.596926927566528,
function_calls=99,
estimator_value=-186820.05514176824,
parameter_values={
'a': 0.14037653663841984,
'b': 0.04729020829644717,
'c': 0.2795023332279385,
'mu_0': 1.0019881440338743,
'sigma_0': 0.3011346392954652,
'sigma_1': 0.4904483692112301,
},
parameter_errors={
'a': 0.0189603734782196,
'b': 0.006405359500279584,
'c': 0.03772965141511397,
'mu_0': 0.0009358257782092815,
'sigma_0': 0.000757836527249722,
'sigma_1': 0.0040976927756123755,
},
)
CSVSummary records the parameter values in each iteration and can be used to analyze the fit process:
%config InlineBackend.figure_formats = ['svg']
fit_traceback = pd.read_csv("traceback-1D.csv")
fig, (ax1, ax2) = plt.subplots(
nrows=2, figsize=(7, 9), sharex=True, gridspec_kw={"height_ratios": [1, 2]}
)
fit_traceback.plot("function_call", "estimator_value", ax=ax1)
fit_traceback.plot("function_call", sorted(initial_parameters), ax=ax2)
fig.tight_layout()
ax2.set_xlabel("function call")
plt.show()
TFSummary provides a nice, interactive representation of the fit process and can be viewed with TensorBoard.
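A minimal sketch of how to launch TensorBoard from a notebook, assuming TFSummary wrote its event files to the default logs directory (adjust --logdir if you configured another location):
%load_ext tensorboard
%tensorboard --logdir logs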
Example in 2D#
The idea illustrated above works for any number of dimensions. Let’s multiply the expression we had with a \(\cos\) as a function of \(y\):
y, omega = sp.symbols("y omega")
expression_2d = expression_1d * sp.cos(y * omega) ** 2
expression_2d
parameter_defaults[omega] = 0.5
%config InlineBackend.figure_formats = ['png']
y_range = (y, -sp.pi, +sp.pi)
substituted_expr_2d = expression_2d.subs(parameter_defaults)
plot3d(substituted_expr_2d, x_range, y_range);

function_2d = create_parametrized_function(
expression=expression_2d,
parameters=parameter_defaults,
backend="jax",
)
Generate 2D data#
boundaries = {"x": (0, 5), "y": (-np.pi, +np.pi)}
domain_generator_2d = NumpyDomainGenerator(boundaries)
data_generator_2d = IntensityDistributionGenerator(domain_generator_2d, function_2d)
domain_2d = domain_generator_2d.generate(300_000, rng)
data_2d = data_generator_2d.generate(30_000, rng)
%config InlineBackend.figure_formats = ['svg']
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
intensities = np.array(function_2d(domain_2d))
kwargs = {
"weights": intensities,
"bins": 100,
"density": True,
}
axes[0].hist(domain_2d["x"], **kwargs)
axes[1].hist(domain_2d["y"], **kwargs)
axes[0].set_xlabel("$x$")
axes[1].set_xlabel("$y$")
axes[0].set_ylabel("$f(x, y)$")
axes[0].set_yticks([])
axes[1].set_yticks([])
fig.tight_layout()
Perform fit with different optimizers#
initial_parameters = {
"a": 0.1,
"b": 0.1,
"c": 0.2,
"mu_0": 0.9,
"omega": 0.35,
"sigma_0": 0.4,
"sigma_1": 0.4,
}
function_2d.update_parameters(initial_parameters)
%config InlineBackend.figure_formats = ['png']
fig, axes = plt.subplots(ncols=2, figsize=(9, 4), sharey=True, tight_layout=True)
axes[0].hist2d(**data_2d, bins=50)
axes[1].hist2d(**domain_2d, weights=function_2d(domain_2d), bins=50)
axes[0].set_xlabel("$x$")
axes[0].set_ylim([-3, +3])
axes[1].set_xlabel("$x$")
axes[0].set_ylabel("$y$")
axes[0].set_title("Data sample")
axes[1].set_title("Function with optimized parameters")
plt.show()

Minuit2#
Commonly, one would construct a Minuit2 instance and call its optimize() method. For more advanced options, one can pass a small function that follows the minuit_modifier protocol to the Minuit2 constructor. In this example, we set the tol attribute. For other options, see iminuit.Minuit.
def tweak_minuit(minuit) -> None:
minuit.tol = 0.2
For the rest, the fit procedure goes just as in Optimize the model:
estimator = UnbinnedNLL(function_2d, data_2d, domain_2d, backend="jax")
minuit2 = Minuit2(
callback=CSVSummary("traceback.csv"),
minuit_modifier=tweak_minuit,
)
fit_result = minuit2.optimize(estimator, initial_parameters)
fit_result
FitResult(
minimum_valid=True,
execution_time=2.2066872119903564,
function_calls=224,
estimator_value=-14864.252656036046,
parameter_values={
'a': 0.1131036191503325,
'b': 0.0369616396212123,
'c': 0.21853961378915338,
'mu_0': 0.9995001064249691,
'omega': 0.5021536330546471,
'sigma_0': 0.29908430206850956,
'sigma_1': 0.5359594533713485,
},
parameter_errors={
'a': 0.07220281605186124,
'b': 0.023711313890492507,
'c': 0.13934994245710072,
'mu_0': 0.0053695758662479585,
'omega': 0.00145457762613891,
'sigma_0': 0.00435898386568267,
'sigma_1': 0.024120491218740606,
},
)
Note that further information about the internal iminuit.Minuit optimizer is available through FitResult.specifics, e.g. computing hesse() afterwards:
fit_result.specifics.hesse()
| Migrad | |
|---|---|
| FCN = -1.486e+04 | Nfcn = 274 |
| EDM = 4.11e-05 (Goal: 0.0002) | |
| Valid Minimum | Below EDM threshold (goal x 10) |
| No parameters at limit | Below call limit |
| Hesse ok | Covariance accurate |

| | Name | Value | Hesse Error | Minos Error- | Minos Error+ | Limit- | Limit+ | Fixed |
|---|---|---|---|---|---|---|---|---|
| 0 | a | 0.1 | 0.4 | | | | | |
| 1 | b | 0.04 | 0.12 | | | | | |
| 2 | c | 0.2 | 0.7 | | | | | |
| 3 | mu_0 | 1.000 | 0.005 | | | | | |
| 4 | omega | 0.5022 | 0.0014 | | | | | |
| 5 | sigma_0 | 0.299 | 0.004 | | | | | |
| 6 | sigma_1 | 0.536 | 0.024 | | | | | |

| | a | b | c | mu_0 | omega | sigma_0 | sigma_1 |
|---|---|---|---|---|---|---|---|
| a | 0.135 | 0.044 (1.000) | 0.26 (1.000) | 0.005e-3 (0.002) | -0 | 0.002e-3 (0.001) | 0.1e-3 (0.011) |
| b | 0.044 (1.000) | 0.0144 | 0.085 (1.000) | 0.003e-3 (0.005) | 0 | 0.003e-3 (0.006) | 0 (0.011) |
| c | 0.26 (1.000) | 0.085 (1.000) | 0.505 | 0.004e-3 (0.001) | -0 | 0.004e-3 (0.001) | 0.1e-3 (0.005) |
| mu_0 | 0.005e-3 (0.002) | 0.003e-3 (0.005) | 0.004e-3 (0.001) | 2.88e-05 | 0 (0.002) | 0.009e-3 (0.384) | -0.008e-3 (-0.060) |
| omega | -0 | 0 | -0 | 0 (0.002) | 2.12e-06 | 0 | -0 |
| sigma_0 | 0.002e-3 (0.001) | 0.003e-3 (0.006) | 0.004e-3 (0.001) | 0.009e-3 (0.384) | 0 | 1.9e-05 | 0.003e-3 (0.033) |
| sigma_1 | 0.1e-3 (0.011) | 0 (0.011) | 0.1e-3 (0.005) | -0.008e-3 (-0.060) | -0 | 0.003e-3 (0.033) | 0.000585 |
Scipy#
scipy = ScipyMinimizer(
method="Nelder-Mead",
callback=CSVSummary("traceback-scipy.csv"),
)
fit_result = scipy.optimize(estimator, initial_parameters)
fit_result
FitResult(
minimum_valid=True,
execution_time=7.553789138793945,
function_calls=402,
estimator_value=-14864.25268072169,
parameter_values={
'a': 0.11921369723669326,
'b': 0.03896060103993386,
'c': 0.23033372118197631,
'mu_0': 0.9995186249006902,
'omega': 0.502145064595906,
'sigma_0': 0.29909655068790947,
'sigma_1': 0.5359184972331181,
},
iterations=263,
)
Warning
Scipy does not provide error values for the optimized parameters.
Analyze fit process#
If we update the parameters in the ParametrizedFunction with the optimized parameter values found by the Optimizer, we can compare the data distribution with the function.
optimized_parameters = fit_result.parameter_values
function_2d.update_parameters(optimized_parameters)
%config InlineBackend.figure_formats = ['png']
fig, axes = plt.subplots(1, 2, figsize=(9, 4), sharey=True, tight_layout=True)
fig.suptitle("Final fit result")
axes[0].hist2d(**data_2d, bins=50)
axes[1].hist2d(**domain_2d, weights=function_2d(domain_2d), bins=50)
axes[0].set_xlabel("$x$")
axes[0].set_ylim([-3, +3])
axes[1].set_xlabel("$x$")
axes[0].set_ylabel("$y$")
axes[0].set_title("Data sample")
axes[1].set_title("Function with optimized parameters")
plt.show()

In addition, the callbacks allow us to inspect how the parameter values evolved during the fit with the ScipyMinimizer and Minuit2 optimizers:
%config InlineBackend.figure_formats = ['svg']
minuit_traceback = pd.read_csv("traceback.csv")
scipy_traceback = pd.read_csv("traceback-scipy.csv")
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(
ncols=2,
nrows=2,
figsize=(10, 9),
gridspec_kw={"height_ratios": [1, 3]},
)
fig.suptitle("Evolution of the parameter values during the fit")
ax1.set_title("Minuit2")
ax2.set_title("Scipy (Nelder-Mead)")
ax1.sharex(ax3)
ax2.sharex(ax4)
ax1.sharey(ax2)
ax3.sharey(ax4)
ax2.set_ylim(
1.02 * scipy_traceback["estimator_value"].min(),
0.98 * scipy_traceback["estimator_value"].max(),
)
pars = list(initial_parameters)
minuit_traceback.plot("function_call", "estimator_value", ax=ax1, legend=False)
scipy_traceback.plot("function_call", "estimator_value", ax=ax2)
minuit_traceback.plot("function_call", pars, ax=ax3, legend=False)
scipy_traceback.plot("function_call", pars, ax=ax4, legend=True).legend(
loc="upper right"
)
fig.tight_layout()
ax2.set_xlabel("function call")
plt.show()