
core

qubit_approximant.core

AdamOptimizer(iters, step_size=0.01, beta1=0.9, beta2=0.999, eps=1e-08)

Bases: GDOptimizer

Adam (A Method for Stochastic Optimization) optimizer.

Attributes:

Name Type Description
step_size float

The size of the step of each gradient descent iteration.

beta1 float

The factor for the average gradient.

beta2 float

The factor for the average squared gradient.

eps float

A regularizing small parameter used to avoid division by zero.

References

The optimizer is described in [1].

[1] Kingma, D. P. and Ba, J., "Adam: A Method for Stochastic Optimization". https://arxiv.org/abs/1412.6980

Parameters:

Name Type Description Default
iters int

The number of gradient descent iterations to perform.

required
step_size float

The size of the step of each gradient descent iteration.

0.01
beta1 float

The factor for the average gradient.

0.9
beta2 float

The factor for the average squared gradient.

0.999
eps float

A regularizing small parameter used to avoid division by zero.

1e-08

Source code in qubit_approximant/core/optimizer/optimizer.py
def __init__(
    self,
    iters: int,
    step_size: float = 0.01,
    beta1: float = 0.9,
    beta2: float = 0.999,
    eps: float = 1e-8,
):
    """
    Parameters
    ----------
    iters : int
        The number of gradient descent iterations to perform.
    step_size : float
        The size of the step of each gradient descent iteration.
    beta1 : float
        The factor for the average gradient.
    beta2 : float
        The factor for the average squared gradient.
    eps: float
        A regularizing small parameter used to avoid division by zero.
    """
    self.step_size = step_size
    self.beta1 = beta1
    self.beta2 = beta2
    self.eps = eps
    super().__init__(iters, step_size)

step(grad_cost, params)

Update the parameters with a step of Adam. Adam changes the step size in each iteration.

Source code in qubit_approximant/core/optimizer/optimizer.py
def step(self, grad_cost: Callable, params: NDArray) -> NDArray:
    """Update the parameters with a step of Adam. Adam changes the step
    size in each iteration."""
    # The first and second moment estimates must persist across iterations,
    # so they are stored on the optimizer and reset at the start of each run.
    if self.iter_index == 0:
        self.m = zeros_like(params)
        self.v = zeros_like(params)
    grad = grad_cost(params)

    self.m = self.beta1 * self.m + (1.0 - self.beta1) * grad
    self.v = self.beta2 * self.v + (1.0 - self.beta2) * grad**2
    mhat = self.m / (1.0 - self.beta1 ** (self.iter_index + 1))
    vhat = self.v / (1.0 - self.beta2 ** (self.iter_index + 1))
    params = params - self.step_size * mhat / (sqrt(vhat) + self.eps)

    return params
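
A minimal usage sketch (not part of the package documentation): minimizing a simple quadratic cost with AdamOptimizer. It assumes the class is importable from qubit_approximant.core, as documented on this page, and that the optimizer is invoked through the __call__ method inherited from GDOptimizer.

import numpy as np
from qubit_approximant.core import AdamOptimizer  # assumed import path

def cost(params):
    # Simple quadratic with its minimum at params = 3.
    return float(np.sum((params - 3.0) ** 2))

def grad_cost(params):
    return 2.0 * (params - 3.0)

optimizer = AdamOptimizer(iters=500, step_size=0.05)
opt_params = optimizer(cost, grad_cost, np.zeros(4))  # __call__ inherited from GDOptimizer
print(opt_params)  # expected to approach [3, 3, 3, 3]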

BlackBoxOptimizer(method, method_kwargs=None)

Bases: Optimizer

Optimizer that uses SciPy's built-in function scipy.optimize.minimize.

Attributes:

Name Type Description
method str

The desired optimization method.

method_kwargs dict

A dictionary with keyword arguments for the optimizer.

Parameters:

Name Type Description Default
method str

The desired optimization method.

required
method_kwargs dict

A dictionary with keyword arguments for the optimizer.

None
Source code in qubit_approximant/core/optimizer/optimizer.py
def __init__(self, method: str, method_kwargs: dict | None = None):
    """
    Initialize a black box optimizer.

    Parameters
    ----------
    method : str
        The desired optimization method.
    method_kwargs : dict
        A dictionary with keyword arguments for the optimizer.
    """
    if method in BlackBoxOptimizer.blackbox_methods:
        self.method = method
        self.method_kwargs = {} if method_kwargs is None else method_kwargs
    else:
        raise ValueError(f"Optimization {method} is not supported.")

__call__(cost, grad_cost, init_params)

Calculate the optimized parameters using scipy.optimize.minimize().

Parameters:

Name Type Description Default
cost Callable

Cost function to be minimized.

required
grad_cost Callable

Gradient of the cost function.

required
init_params NDArray

Initial parameter guess for the cost function; used to initialize the optimizer.

required

Returns:

Type Description
NDArray

Optimum parameters

Source code in qubit_approximant/core/optimizer/optimizer.py
def __call__(self, cost: Callable, grad_cost: Callable, init_params: NDArray) -> NDArray:
    """
    Calculate the optimized parameters using `scipy.optimize.minimize()`.

    Parameters
    ----------
    cost: Callable
        Cost function to be minimized.
    grad_cost: Callable
        Gradient of the cost function.
    init_params : NDArray
        Initial parameter guess for the cost function; used to initialize the optimizer.

    Returns
    -------
    NDArray
        Optimum parameters
    """
    result = minimize(
        cost, init_params, method=self.method, jac=grad_cost, options=self.method_kwargs
    )
    params = result.x
    return params
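
A hedged usage sketch of BlackBoxOptimizer on a quadratic cost. It assumes the class is importable from qubit_approximant.core and that "L-BFGS-B" is listed in BlackBoxOptimizer.blackbox_methods; method_kwargs is forwarded to scipy.optimize.minimize as its options dictionary.

import numpy as np
from qubit_approximant.core import BlackBoxOptimizer  # assumed import path

def cost(params):
    return float(np.sum(params**2))

def grad_cost(params):
    return 2.0 * params

# "L-BFGS-B" is assumed to be one of the supported blackbox methods.
optimizer = BlackBoxOptimizer(method="L-BFGS-B", method_kwargs={"maxiter": 200})
opt_params = optimizer(cost, grad_cost, init_params=np.ones(6))
print(cost(opt_params))  # should be close to 0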

Circuit(x, encoding, params_layer)

Bases: ABC

Quantum circuit that encodes the function. The circuit consists of a number of layers,

U = Ln * ... * L1

Attributes:

Name Type Description
encoding Callable

Returns the encoding of the function in the circuit, for example the amplitude or probability of the |0> qubit.

grad_encoding Callable

Returns the gradient of the chosen encoding.

params_layer int

Number of parameters per layer.

Parameters:

Name Type Description Default
x NDArray

Values where to evaluate the function encoded in the circuit.

required
encoding str

Choose between amplitude or probability encoding. Must be either 'amp' or 'prob'.

required
params_layer int

Number of parameters per layer.

required

Source code in qubit_approximant/core/circuit/circuit.py
def __init__(self, x: NDArray, encoding: str, params_layer: int):
    """
    Parameters
    ----------
    x: NDArray
        Values where to evaluate the function encoded in the circuit.
    encoding : str
        Choose between amplitude or probability encoding.
        Must be either 'amp' or 'prob'.
    params_layer : int
        Number of parameters per layer.
    """
    self.x = x

    if encoding == "prob":
        self.encoding = self.prob_encoding
        self.grad_encoding = self.grad_prob
    elif encoding == "amp":
        self.encoding = self.amp_encoding
        self.grad_encoding = self.grad_amp
    else:
        raise ValueError("Invalid encoding '{encoding}'. Choose between 'prob' or 'amp'.")

    self.params_layer = params_layer  # To be defined in subclasses

x: NDArray property writable

Values where to evaluate the function encoded in the circuit.

Returns:

Type Description
NDArray

The value of x.

amp_encoding(params)

Returns approximate function encoded in the amplitude of the qubit.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Values of the amplitudes of the |0> qubit for each value of x.

Source code in qubit_approximant/core/circuit/circuit.py
def amp_encoding(self, params: NDArray) -> NDArray:
    """Returns approximate function encoded in the amplitude of the qubit.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Values of the amplitudes of the |0> qubit for each value of x.
    """
    layers = params.size // self.params_layer
    params = params.reshape(layers, self.params_layer)
    U = self.layer(params[0, :])[:, :, 0]
    for i in range(1, params.shape[0]):
        Ui = self.layer(params[i, :])
        U = np.einsum("gmn, gn -> gm", Ui, U)
    return U[:, 0]

grad_amp(params)

Returns the gradient of the amplitude encoding and the encoded function.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
tuple[NDArray, NDArray]

Gradients of the amplitude with respect to all parameters and the amplitudes for each x.

Source code in qubit_approximant/core/circuit/circuit.py
def grad_amp(self, params: NDArray) -> tuple[NDArray, NDArray]:
    """Returns the gradient of the amplitude encoding and the encoded function.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    tuple[NDArray, NDArray]
        Gradients of the amplitude with respect to all parameters and the amplitudes for each x.
    """
    layers = params.size // self.params_layer
    params = params.reshape(layers, self.params_layer)
    U = np.tensordot(np.ones(self.x.size), np.array([1, 0]), axes=0)  # dim (G,2)
    D = np.zeros((layers, self.params_layer, self.x.size, 2), dtype=np.complex128)

    for i in range(layers):
        DUi = self.grad_layer(params[i, :])  # dim (4,G,2)
        # j is each of the derivatives
        D[i, ...] = np.einsum("jgmn, gn -> jgm", DUi, U)
        # Multiply derivative times next layer
        Ui = self.layer(params[i, :])
        U = np.einsum("gmn, gn -> gm", Ui, U)

    grad = np.zeros((layers, self.params_layer, self.x.size), dtype=np.complex128)
    grad[layers - 1] = D[layers - 1, :, :, 0]
    # In the first iteration we reuse the L-th layer
    B = Ui[:, 0, :]
    for i in range(layers - 2, -1, -1):
        grad[i, ...] = np.einsum("gm, jgm -> jg", B, D[i, ...])
        # Multiply derivative times previous layer
        Ui = self.layer(params[i, :])
        B = np.einsum("gn, gnm -> gm", B, Ui)

    grad = np.einsum("ijg -> gij", grad)
    grad = grad.reshape(self.x.size, -1)  # grad has shape (x.size, layers * params_layer)
    fn_approx = U[:, 0]

    return grad, fn_approx

grad_layer(params) abstractmethod

Returns the derivative of one layer with respect to its parameters.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Gradient of the layer with respect to each of its parameters.

Source code in qubit_approximant/core/circuit/circuit.py
@abstractmethod
def grad_layer(self, params: NDArray) -> NDArray:
    """Returns the derivative of one layer with respect to its parameters.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Gradient of the layer with respect to each of its parameters.
    """
    ...

grad_prob(params)

Returns the gradient of the probability encoding and the probability encoding.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
tuple[NDArray, NDArray]

Gradients of the probability with respect to all parameters and the probability for each x.

Source code in qubit_approximant/core/circuit/circuit.py
def grad_prob(self, params: NDArray) -> tuple[NDArray, NDArray]:
    """Returns the gradient of the probability encoding and the probability encoding.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    tuple[NDArray, NDArray]
        Gradients of the probability with respect to all parameters
        and the probability for each x.
    """
    grad_amp, amp = self.grad_amp(params)
    fn_approx = amp.real**2 + amp.imag**2
    grad_prob = 2 * np.real(np.einsum("g, gi -> gi", amp.conj(), grad_amp))
    return grad_prob, fn_approx
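
The probability gradient follows from the chain rule, d|A|^2/dθ = 2 Re(A* dA/dθ), which is what the einsum above computes. A hedged sanity-check sketch, assuming CircuitRxRy is importable from qubit_approximant.core: compare one column of grad_prob against a central finite difference of prob_encoding.

import numpy as np
from qubit_approximant.core import CircuitRxRy  # assumed import path

x = np.linspace(-1.0, 1.0, 50)
circuit = CircuitRxRy(x, encoding="prob")
params = np.random.default_rng(0).standard_normal(2 * 3)  # 2 layers, 3 parameters per layer

grad, prob = circuit.grad_prob(params)  # grad is expected to have shape (x.size, params.size)

eps = 1e-6
shift = np.zeros_like(params)
shift[0] = eps
finite_diff = (circuit.prob_encoding(params + shift) - circuit.prob_encoding(params - shift)) / (2 * eps)
print(np.allclose(grad[:, 0], finite_diff, atol=1e-5))  # expected to print True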

layer(params) abstractmethod

Returns the layer of our circuit

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Unitary matrix of the layer with size (x,2,2)

Source code in qubit_approximant/core/circuit/circuit.py
@abstractmethod
def layer(self, params: NDArray) -> NDArray:
    """Returns the layer of our circuit

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Unitary matrix of the layer with size (x,2,2)
    """
    ...

prob_encoding(params)

Returns approximate function encoded in the probability of the qubit.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Values of the probabilities of the |0> qubit for each value of x.
Source code in qubit_approximant/core/circuit/circuit.py
def prob_encoding(self, params: NDArray) -> NDArray:
    """Returns approximate function encoded in the probability of the qubit.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Values of the probabilities of the |0> qubit for each value of x.
    """
    fn_amp = self.amp_encoding(params)
    return fn_amp.real**2 + fn_amp.imag**2
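
A short consistency sketch relating the two encodings: the probability encoding is the squared modulus of the amplitude encoding, exactly as the implementation above computes it. CircuitRxRyRz is assumed to be importable from qubit_approximant.core.

import numpy as np
from qubit_approximant.core import CircuitRxRyRz  # assumed import path

x = np.linspace(0.0, 1.0, 32)
amp_circuit = CircuitRxRyRz(x, encoding="amp")
prob_circuit = CircuitRxRyRz(x, encoding="prob")
params = np.random.default_rng(1).standard_normal(3 * 4)  # 3 layers, 4 parameters per layer

amp = amp_circuit.encoding(params)    # complex amplitude of |0> for each x
prob = prob_circuit.encoding(params)  # probability of |0> for each x
print(np.allclose(prob, np.abs(amp) ** 2))  # expected: True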

CircuitRxRy(x, encoding)

Bases: Circuit

Each layer of the circuit is made of two rotations dependent on 3 parameters:

L = RX(θx) RY(w * x + θy)

Parameters:

Name Type Description Default
x NDArray

The values where we wish to approximate a function.

required
encoding str

Choose between amplitude or probability encoding. Must be either 'amp' or 'prob'.

required

Source code in qubit_approximant/core/circuit/circuit.py
def __init__(self, x: NDArray, encoding: str):
    """
    Parameters
    ----------
    x: NDArray
        The values where we wish to approximate a function.
    encoding: str
        Choose between amplitude or probability encoding.
        Must be either 'amp' or 'prob'.
    """
    self.params_layer = 3
    super().__init__(x, encoding, self.params_layer)

grad_layer(params)

Returns the derivative of one layer with respect to its 3 parameters.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Gradient of the layer with respect to each parameter.

Source code in qubit_approximant/core/circuit/circuit.py
def grad_layer(self, params: NDArray) -> NDArray:
    """Returns the derivative of one layer with respect to its 3 parameters.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Gradient of the layer with respect to each parameter.
    """
    w = params[0]
    θx = params[1]
    θy = params[2]
    Dx = np.einsum("mng, np -> gmp", RY(w * self.x + θy), grad_RX(θx))
    Dy = np.einsum("mng, np -> gmp", grad_RY(w * self.x + θy), RX(θx))
    Dw = np.einsum("gmn, g -> gmn", Dy, self.x)
    return np.array([Dw, Dx, Dy])  # type: ignore

layer(params)

Each layer is the product of two rotations. L = RX(θx) RY(w * x + θy)

Parameters:

Name Type Description Default
params NDArray

Parameters of the gates in the layer.

required

Returns:

Type Description
NDArray

Unitary matrix of the layer with size (x,2,2)

Raises:

Type Description
ParameterError

The number of parameters given does not correspond with the circuit ansatz.

Source code in qubit_approximant/core/circuit/circuit.py
def layer(self, params: NDArray) -> NDArray:
    """
    Each layer is the product of two rotations.
    L = RX(θx) RY(w * x + θy)

    Parameters
    ----------
    params : NDArray
        Parameters of the gates in the layer.

    Returns
    -------
    NDArray
        Unitary matrix of the layer with size (x,2,2)

    Raises
    ------
    ParameterError
        The number of parameters given does not correspond with
        the circuit ansatz.
    """
    if params.size != self.params_layer:
        raise ParameterError(self.params_layer)
    w = params[0]
    θx = params[1]
    θy = params[2]
    # move the x axis to first position
    return np.einsum("mng, np -> gmp", RY(w * self.x + θy), RX(θx))
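
A small sketch checking the shape and unitarity of a single CircuitRxRy layer (class assumed importable from qubit_approximant.core): the returned array stacks one 2x2 unitary per value of x.

import numpy as np
from qubit_approximant.core import CircuitRxRy  # assumed import path

x = np.linspace(-1.0, 1.0, 20)
circuit = CircuitRxRy(x, encoding="amp")
U = circuit.layer(np.array([1.0, 0.3, -0.5]))  # params = (w, θx, θy)

print(U.shape)  # (20, 2, 2): one 2x2 matrix per value of x
products = np.einsum("gmn, gpn -> gmp", U, U.conj())  # U @ U† for each x
print(np.allclose(products, np.eye(2)))  # expected: True, since each layer is unitary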

CircuitRxRyRz(x, encoding)

Bases: Circuit

Each layer of the circuit is made of three rotations dependent on 4 parameters:

L = RX(x * w + θx) RY(θy) RZ(θz)

Parameters:

Name Type Description Default
x NDArray

The values where we wish to approximate a function.

required
encoding str

Choose between amplitude or probability encoding. Must be either 'amp' or 'prob'.

required

Source code in qubit_approximant/core/circuit/circuit.py
def __init__(self, x: NDArray, encoding: str):
    """
    Parameters
    ----------
    x: NDArray
        The values where we wish to approximate a function.
    encoding: str
        Choose between amplitude or probability encoding.
        Must be either 'amp' or 'prob'.
    """
    self.params_layer = 4
    super().__init__(x, encoding, self.params_layer)

grad_layer(params)

Returns the derivative of one layer with respect to its 4 parameters.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Gradient of the layer with respect to each parameter.

Source code in qubit_approximant/core/circuit/circuit.py
def grad_layer(self, params: NDArray) -> NDArray:
    """Returns the derivative of one layer with respect to its 4 parameters.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Gradient of the layer with respect to each parameter.
    """
    w = params[0]
    θx = params[1]
    θy = params[2]
    θz = params[3]

    Dx = np.einsum("mn, np, pqg -> gmq", RZ(θz), RY(θy), grad_RX(w * self.x + θx))
    Dw = np.einsum("gmq, g -> gmq", Dx, self.x)
    Dy = np.einsum("mn, np, pqg -> gmq", RZ(θz), grad_RY(θy), RX(w * self.x + θx))
    Dz = np.einsum("mn, np, pqg -> gmq", grad_RZ(θz), RY(θy), RX(w * self.x + θx))

    return np.array([Dw, Dx, Dy, Dz])  # type: ignore

layer(params)

Returns the layer of the circuit: L = RX(x * w + θx) RY(θy) RZ(θz)

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Unitary matrix of the layer with size (x,2,2)

Raises:

Type Description
ParameterError

The number of parameters given does not correspond with the circuit ansatz.

Source code in qubit_approximant/core/circuit/circuit.py
def layer(self, params: NDArray) -> NDArray:
    """
    Returns the layer of the circuit:
    L = RX(x * w + θx) RY(θy) RZ(θz)

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Unitary matrix of the layer with size (x,2,2)

    Raises
    ------
    ParameterError
        The number of parameters given does not correspond with
        the circuit ansatz.
    """
    if params.size != self.params_layer:
        raise ParameterError(self.params_layer)
    w = params[0]
    θx = params[1]
    θy = params[2]
    θz = params[3]
    # move the x axis to first position
    return np.einsum("mn, np, pqg -> gmq", RZ(θz), RY(θy), RX(w * self.x + θx))

CircuitRy(x, encoding)

Bases: Circuit

Each layer of the circuit is made of a single rotation dependent on 2 parameters:

L = RY(w * x + θy)

Parameters:

Name Type Description Default
x NDArray

The values where we wish to approximate a function.

required
encoding str

Choose between amplitude or probability encoding. Must be either 'amp' or 'prob'.

required

Source code in qubit_approximant/core/circuit/circuit.py
def __init__(self, x: NDArray, encoding: str):
    """
    Parameters
    ----------
    x: NDArray
        The values where we wish to approximate a function.
    encoding: str
        Choose between amplitude or probability encoding.
        Must be either 'amp' or 'prob'.
    """
    self.params_layer = 2
    super().__init__(x, encoding, self.params_layer)

grad_layer(params)

Returns the derivative of one layer with respect to its 2 parameters.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Gradient of the layer with respect to each parameter.

Source code in qubit_approximant/core/circuit/circuit.py
def grad_layer(self, params: NDArray) -> NDArray:
    """Returns the derivative of one layer with respect to its 2 parameters.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Gradient of the layer with respect to each parameter.
    """
    w = params[0]
    θy = params[1]
    Dy = np.einsum("mng -> gmn", grad_RY(w * self.x + θy))
    Dw = np.einsum("gmn, g -> gmn", Dy, self.x)

    return np.array([Dw, Dy])  # type: ignore

layer(params)

Each layer is one RY rotation: L = RY(w * x + θy)

Parameters:

Name Type Description Default
params NDArray

Parameters of the gates in the layer.

required

Returns:

Type Description
NDArray

Unitary matrix of the layer with size (x,2,2)

Raises:

Type Description
ParameterError

The number of parameters given does not correspond with the circuit ansatz.

Source code in qubit_approximant/core/circuit/circuit.py
def layer(self, params: NDArray) -> NDArray:
    """
    Each layer is one RY rotation:
    L = RY(w * x + θy)

    Parameters
    ----------
    params : NDArray
        Parameters of the gates in the layer.

    Returns
    -------
    NDArray
        Unitary matrix of the layer with size (x,2,2)

    Raises
    ------
    ParameterError
        The number of parameters given does not correspond with
        the circuit ansatz.
    """
    if params.size != self.params_layer:
        raise ParameterError(self.params_layer)
    w = params[0]
    θy = params[1]
    # move the x axis to first position
    return np.einsum("mng -> gmn", RY(w * self.x + θy))

Cost(fn, circuit, metric)

Create a cost function from the encoding and the metric.

Attributes:

Name Type Description
metric Callable

The metric or loss function to quantify how well our circuit approximates the target function.

grad_metric Callable

The gradient of the metric or loss function.

circuit Circuit

Quantum circuit that encodes our function.

fn NDArray

Function we desire to approximate.

Parameters:

Name Type Description Default
fn NDArray

Function we desire to approximate.

required
circuit Circuit

Quantum circuit that encodes our function.

required
metric str

Name of the metric we want to use. Allowed values are: 'mse' (mean square error), 'rmse' (root mean square error), 'mse_weighted' (mse weighted by fn), 'kl_divergence' and 'log_cosh'.

required

Source code in qubit_approximant/core/cost/cost.py
def __init__(self, fn: NDArray, circuit: Circuit, metric: str) -> None:
    """
    Parameters
    ----------
    fn : NDArray
        Function we desire to approximate.
    circuit : Circuit
        Quantum circuit that encodes our function.
    metric : str
        Name of the metric we want to use.
        Allowed values are:
            - 'mse' (mean square error)
            - 'rmse' (root mean square error)
            - 'mse_weighted' (mse weighted by fn)
            - 'kl_divergence'
            - 'log_cosh'.
    """
    try:
        self.metric = globals()[metric]
        self.grad_metric = globals()["grad_" + metric]
    except KeyError as e:
        raise ValueError("Invalid metric '{metric}'. Choose between 'MSE' or 'RMSE'.") from e

    self.circuit = circuit
    self.fn = fn

__call__(params)

Evaluate the cost function given the parameters of the circuit.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
float

The value of the cost function for the chosen circuit and metric.

Source code in qubit_approximant/core/cost/cost.py
def __call__(self, params: NDArray) -> float:
    """Evaluate the cost function given the parameters of the circuit.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    float
        The value of the cost function for the chosen circuit and metric.
    """
    fn_approx = self.circuit.encoding(params)
    return self.metric(self.fn, fn_approx)

grad(params)

Return the gradient of the cost function.

Parameters:

Name Type Description Default
params NDArray

Parameters of the quantum gates in the layer.

required

Returns:

Type Description
NDArray

Gradient of the cost.

Source code in qubit_approximant/core/cost/cost.py
def grad(self, params: NDArray) -> NDArray:
    """Return the gradient of the cost function.

    Parameters
    ----------
    params : NDArray
        Parameters of the quantum gates in the layer.

    Returns
    -------
    NDArray
        Gradient of the cost.
    """
    grad_fn_approx, fn_approx = self.circuit.grad_encoding(params)
    return self.grad_metric(self.fn, fn_approx, grad_fn_approx)
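
A hedged sketch putting Circuit and Cost together: approximate a Gaussian with a 4-layer CircuitRxRyRz and evaluate the cost and its gradient at random parameters. The imports from qubit_approximant.core are assumed to work as documented on this page.

import numpy as np
from qubit_approximant.core import CircuitRxRyRz, Cost  # assumed import path

x = np.linspace(-2.0, 2.0, 100)
fn = np.exp(-(x**2))  # target function to approximate

circuit = CircuitRxRyRz(x, encoding="prob")
cost = Cost(fn, circuit, metric="mse")

params = 0.3 * np.random.default_rng(2).standard_normal(4 * 4)  # 4 layers, 4 parameters each
print(cost(params))             # scalar value of the mean squared error
print(cost.grad(params).shape)  # expected: one gradient entry per parameter, i.e. (16,)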

GDOptimizer(iters, step_size)

Bases: Optimizer

Gradient descent optimizer.

Parameters:

Name Type Description Default
iters int

The number of gradient descent iterations to perform.

required
step_size float

The size of the step of each gradient descent iteration.

required

Source code in qubit_approximant/core/optimizer/optimizer.py
def __init__(self, iters: int, step_size: float):
    """
    Parameters
    ----------
    iters : int
        The number of gradient descent iterations to perform.
    step_size : float
        The size of the step of each gradient descent iteration.
    """
    self.step_size = step_size
    self._iters = iters

iters property writable

Number of iterations of gradient descent.

__call__(cost, grad_cost, params)

Calculate the optimized parameters using a number of gradient descent iterations.

Parameters:

Name Type Description Default
cost Callable

Cost function to be minimized.

required
grad_cost Callable

Gradient of the cost function.

required
params NDArray

Initial parameter guess for the cost function; used to initialize the optimizer.

required

Returns:

Type Description
NDArray

Optimum parameters

Source code in qubit_approximant/core/optimizer/optimizer.py
def __call__(self, cost: Callable, grad_cost: Callable, params: NDArray) -> NDArray:
    """
    Calculate the optimized parameters using a number of gradient descent iterations.

    Parameters
    ----------
    cost : Callable
        Cost function to be minimized.
    grad_cost : Callable
        Gradient of the cost function.
    params : NDArray
        Initial parameter guess for the cost function; used to initialize the optimizer.

    Returns
    -------
    NDArray
        Optimum parameters
    """
    min_cost = float("inf")
    min_params = zeros_like(params)

    for i in range(self.iters):
        self.iter_index = i
        params = self.step(grad_cost, params)

        if (c := cost(params)) < min_cost:
            min_cost = c
            self.min_cost = c
            min_params = params

    return min_params

step(grad_cost, params)

Update the parameters with a step of Gradient Descent.

Source code in qubit_approximant/core/optimizer/optimizer.py
def step(self, grad_cost: Callable, params: NDArray) -> NDArray:
    """Update the parameters with a step of Gradient Descent."""
    params = params - grad_cost(params) * self.step_size
    return params
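
For comparison with AdamOptimizer above, a minimal sketch of plain gradient descent on a quadratic cost (GDOptimizer assumed importable from qubit_approximant.core).

import numpy as np
from qubit_approximant.core import GDOptimizer  # assumed import path

def cost(params):
    return float(np.sum((params - 1.0) ** 2))

def grad_cost(params):
    return 2.0 * (params - 1.0)

optimizer = GDOptimizer(iters=200, step_size=0.1)
opt_params = optimizer(cost, grad_cost, np.zeros(3))
print(opt_params)  # expected to approach [1, 1, 1]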

IncrementalOptimizer(min_layer, max_layer, optimizer, new_layer_coef, new_layer_position)

Bases: MultilayerOptimizer

This optimizer uses the parameters of an optimized L-layer circuit as the initial guess for the optimization of an (L+1)-layer circuit.

Attributes:

Name Type Description
new_layer_position str

The position at which to add the parameters of the new layer. For example, it may be the initial or final layer of our circuit.

Parameters:

Name Type Description Default
min_layer int

Starting number of layers to optimize.

required
max_layer int

Final number of layers to optimize.

required
optimizer Optimizer

The optimizer used to find the optimum parameters.

required
new_layer_coef float

The coefficient that multiplies the normal distribution of the new parameters in the additional layer.

required
new_layer_position str

The position at which to add the parameters of the new layer. For example, it may be the initial or final layer of our circuit.

required
Source code in qubit_approximant/core/optimizer/multilayer_optimizer.py
def __init__(
    self,
    min_layer,
    max_layer,
    optimizer: Optimizer,
    new_layer_coef: float,
    new_layer_position: str,
) -> None:
    """
    Initialize an incremental optimizer.

    Parameters
    ----------
    min_layer : int
        Starting number of layers to optimize.
    max_layer : int
        Final number of layers to optimize.
    optimizer : Optimizer
        The optimizer used to find the optimum parameters.
    new_layer_coef : float
        The coefficient that multiplies the normal distribution of the
        new parameters in the additional layer.
    new_layer_position : str
        The position at which to add the parameters of the new layer. For
        example, it may be the initial or final layer of our circuit.
    """
    if new_layer_position in IncrementalOptimizer.layer_positions:
        self.new_layer_position = new_layer_position
    else:
        raise ValueError(
            f"new_layer_position = {new_layer_position} is not supported. "
            "Try 'initial', 'middle', 'final' or 'random'"
        )
    super().__init__(min_layer, max_layer, optimizer, new_layer_coef)

inital_params_diff: tuple[list[float], list[float]] property

Returns a list with the mean and standard deviation of the difference between the optimum parameters in the i-th layer and the optimum parameters of the (i+1)-th layer. (We exclude the additional parameters added with the new layer).

Returns:

Type Description
tuple[list[float], list[float]]

Mean and standard deviation of the parameter differences.

Raises:

Type Description
ValueError

Parameter difference only supported for new initial and final layers.

__call__(cost, grad_cost, init_params)

Calculate the optimized parameters for each number of layers.

Parameters:

Name Type Description Default
cost Callable

Cost function to be minimized.

required
grad_cost Callable

Gradient of the cost function.

required
init_params NDArray

Initial parameter guess for the cost function; used to initialize the optimizer.

required

Returns:

Type Description
list[NDArray]

The optimum parameters for each number of layers.

Source code in qubit_approximant/core/optimizer/multilayer_optimizer.py
def __call__(self, cost: Callable, grad_cost: Callable, init_params: NDArray) -> list[NDArray]:
    """Calculate the optimized parameters for each number of layers.

    Parameters
    ----------
    cost : Callable
        Cost function to be minimized.
    grad_cost : Callable
        Gradient of the cost function.
    init_params : NDArray
        Initial parameter guess for the cost function; used to initialize the optimizer.

    Returns
    -------
    list[NDArray]
        The optimum parameters for each number of layers.
    """
    self.params_layer = init_params.size // self.min_layer
    params = init_params
    self.params_list = []

    for layer in range(self.min_layer, self.max_layer + 1):
        params = self.optimizer(cost, grad_cost, params)
        self.params_list.append(params)
        params = self._new_initial_params(params, layer)
    return self.params_list
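
An end-to-end hedged sketch: approximate a Gaussian while growing the circuit from 3 to 6 layers with IncrementalOptimizer. All names are assumed importable from qubit_approximant.core as documented on this page, and "final" is assumed to be an allowed new_layer_position (the error message above lists 'initial', 'middle', 'final' and 'random').

import numpy as np
from qubit_approximant.core import AdamOptimizer, CircuitRxRyRz, Cost, IncrementalOptimizer  # assumed imports

x = np.linspace(-2.0, 2.0, 100)
fn = np.exp(-(x**2))

circuit = CircuitRxRyRz(x, encoding="prob")
cost = Cost(fn, circuit, metric="mse")

inner = AdamOptimizer(iters=300, step_size=0.05)
multilayer = IncrementalOptimizer(
    min_layer=3, max_layer=6, optimizer=inner, new_layer_coef=0.3, new_layer_position="final"
)

init_params = 0.3 * np.random.default_rng(3).standard_normal(3 * 4)  # 3 layers, 4 parameters each
params_list = multilayer(cost, cost.grad, init_params)  # one optimized parameter array per layer count
for layers, p in zip(range(3, 7), params_list):
    print(layers, cost(p))  # the cost should tend to decrease as layers are added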

MultilayerOptimizer(min_layer, max_layer, optimizer, new_layer_coef=0.3)

Bases: ABC

This optimizer uses the parameters of an optimized L-layer circuit as the initial guess for the optimization of an (L+1)-layer circuit.

Attributes:

Name Type Description
min_layer int

Starting number of layers to optimize.

max_layer int

Final number of layers to optimize.

optimizer Optimizer

The optimizer used to find the optimum parameters.

new_layer_coef float

The coefficient that multiplies the normal distribution of the new parameters in the additional layer.

Parameters:

Name Type Description Default
min_layer int

Starting number of layers to optimize.

required
max_layer int

Final number of layers to optimize.

required
optimizer Optimizer

The optimizer used to find the optimum parameters.

required
new_layer_coef float

The coefficient that multiplies the normal distribution of the new parameters in the additional layer.

0.3
Source code in qubit_approximant/core/optimizer/multilayer_optimizer.py
def __init__(self, min_layer, max_layer, optimizer: Optimizer, new_layer_coef: float = 0.3):
    """
    Initialize a multilayer optimizer.

    Parameters
    ----------
    min_layer : int
        Starting number of layers to optimize.
    max_layer : int
        Final number of layers to optimize.
    optimizer : Optimizer
        The optimizer used to find the optimum parameters.
    new_layer_coef : float
        The coefficient that multiplies the normal distribution of the
        new parameters in the additional layer.
    """
    self.min_layer = min_layer
    self.max_layer = max_layer
    self.optimizer = optimizer
    self.new_layer_coef = new_layer_coef

__call__(cost, grad_cost, init_params) abstractmethod

Calculate the optimized parameters for each number of layers.

Parameters:

Name Type Description Default
cost Callable

Cost function to be minimized.

required
grad_cost Callable

Gradient of the cost function.

required
init_params NDArray

Initial parameter guess for the cost function; used to initialize the optimizer.

required

Returns:

Type Description
list of NDArray

The optimum parameters for each number of layers.

Source code in qubit_approximant/core/optimizer/multilayer_optimizer.py
@abstractmethod
def __call__(self, cost: Callable, grad_cost: Callable, init_params: NDArray) -> list[NDArray]:
    """
    Calculate the optimized parameters for each number of layers.

    Parameters
    ----------
    cost: Callable
        Cost function to be minimized.
    grad_cost: Callable
        Gradient of the cost function.
    init_params : NDArray
        Initial parameter guess for the cost function; used to initialize the optimizer.

    Returns
    -------
    list of NDArray
        The optimum parameters for each number of layers.
    """
    ...

NonIncrementalOptimizer(min_layer, max_layer, optimizer, new_layer_coef)

Bases: MultilayerOptimizer

This optimizer creates new initial parameters for the optimization of a circuit with an additional layer.

Parameters:

Name Type Description Default
min_layer int

Starting number of layers to optimize.

required
max_layer int

Final number of layers to optimize.

required
optimizer Optimizer

The optimizer used to find the optimum parameters.

required
new_layer_coef float

The coefficient that multiplies the normal distribution of the new parameters in the additional layer.

required
Source code in qubit_approximant/core/optimizer/multilayer_optimizer.py
def __init__(self, min_layer, max_layer, optimizer: Optimizer, new_layer_coef: float):
    """
    Initialize a non-incremental optimizer.

    Parameters
    ----------
    min_layer : int
        Starting number of layers to optimize.
    max_layer : int
        Final number of layers to optimize.
    optimizer : Optimizer
        The optimizer used to find the optimum parameters.
    new_layer_coef : float
        The coefficient that multiplies the normal distribution of the
        new parameters in the additional layer.
    """
    super().__init__(min_layer, max_layer, optimizer, new_layer_coef)

__call__(cost, grad_cost, init_params)

Calculate the optimized parameters for each number of layers.

Parameters:

Name Type Description Default
cost Callable

Cost function to be minimized.

required
grad_cost Callable

Gradient of the cost function.

required
init_params NDArray

Initial parameter guess for the cost function; used to initialize the optimizer.

required

Returns:

Type Description
list[NDArray]

The optimum parameters for each number of layers.

Source code in qubit_approximant/core/optimizer/multilayer_optimizer.py
def __call__(self, cost: Callable, grad_cost: Callable, init_params: NDArray) -> list[NDArray]:
    """
    Calculate the optimized parameters for each number of layers.

    Parameters
    ----------
    cost: Callable
        Cost function to be minimized.
    grad_cost: Callable
        Gradient of the cost function.
    init_params : NDArray
        Initial parameter guess for the cost function; used to initialize the optimizer.

    Returns
    -------
    list[NDArray]
        The optimum parameters for each number of layers.
    """
    self.params_layer = init_params.size // self.min_layer
    self.params_list = []
    params = init_params
    rng = np.random.default_rng()

    for layer in range(self.min_layer, self.max_layer + 1):
        params = self.optimizer(cost, grad_cost, params)
        self.params_list.append(params)
        params = self.new_layer_coef * rng.standard_normal((layer + 1) * self.params_layer)
    return self.params_list
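
NonIncrementalOptimizer can be swapped into the IncrementalOptimizer sketch above; the difference is that each layer count restarts from fresh random parameters scaled by new_layer_coef instead of reusing the previous optimum. A minimal hedged construction, under the same import assumptions:

from qubit_approximant.core import AdamOptimizer, NonIncrementalOptimizer  # assumed import path

inner = AdamOptimizer(iters=300, step_size=0.05)
multilayer = NonIncrementalOptimizer(min_layer=3, max_layer=6, optimizer=inner, new_layer_coef=0.3)
# multilayer(cost, cost.grad, init_params) returns one optimized parameter array per layer count,
# with cost, cost.grad and init_params built as in the IncrementalOptimizer sketch above.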