Model

Root object to define a problem to be optimized

titanq.Model.__init__(self, *, api_key: str | None = None, storage_client: StorageClient | None = None, base_server_url: str = 'https://titanq.infinityq.io') -> None

Initialize the model with a storage client. If storage_client is not provided, storage is managed by TitanQ.

Notes

The storage managed by TitanQ only supports weight matrices with a size of up to 10k.

Parameters

api_key

TitanQ API key to access the service. If not set, the TITANQ_API_KEY environment variable is used.

storage_client

Storage client used to store the files needed for the computation. If omitted, storage is managed by TitanQ.

base_server_url

TitanQ API server URL. Defaults to https://titanq.infinityq.io.

Raises

MissingTitanqApiKey

If no API key is provided and the TITANQ_API_KEY environment variable is not set.

Examples

With an S3 storage client
>>> from titanq import Model, S3Storage
>>> storage_client = S3Storage(
...     access_key="{insert aws bucket access key here}",
...     secret_key="{insert aws bucket secret key here}",
...     bucket_name="{insert bucket name here}"
... )
>>> model = Model(storage_client=storage_client)
Managed storage client
>>> from titanq import Model
>>> model = Model()
titanq.Model.add_cardinality_constraint(self, constraint_mask: ndarray, cardinality: int)

Adds a cardinality constraint vector to the model.

Parameters

constraint_mask

A NumPy 1-D dense ndarray (must be binary). The constraint_mask vector of shape (N,) where N is the number of variables.

cardinality

The cardinality of the constraint (the constraint_rhs value). Must be a positive integer.

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of constraints exceeds the limit.

ConstraintSizeError

If the constraint_mask shape or the constraint_rhs shape does not fit the expected shape of this model.

ValueError

If the constraint_mask is not binary or the cardinality is not a positive integer.

Examples

>>> import numpy as np
>>> constraint_mask = np.array([1, 1, 1, 0, 1])
>>> cardinality = 3
>>> model.add_cardinality_constraint(constraint_mask, cardinality)
titanq.Model.add_cardinality_constraints_matrix(self, constraint_mask: ndarray, cardinalities: ndarray)

Adds cardinality constraints in matrix format to the model.

Parameters

constraint_mask

A NumPy 2-D dense ndarray (must be binary). The constraint_mask matrix of shape (M, N) where M is the number of constraints and N is the number of variables.

cardinalities

A NumPy 1-D ndarray (must be positive integers). The constraint_rhs vector of shape (M,) where M is the number of constraints.

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of constraints exceeds the limit.

ConstraintSizeError

If the constraint_mask shape or the constraint_rhs shape does not fit the expected shape of this model.

ValueError

If the constraint_mask is not binary or the cardinalities are not positive integers.

Examples

>>> constraint_mask = np.array([[1, 1, 1, 0, 1], [1, 1, 1, 1, 0]])
>>> cardinalities = np.array([3, 2])
>>> model.add_cardinality_constraints_matrix(constraint_mask, cardinalities)
titanq.Model.add_constraint_from_expression(self, equation: Equation)

ℹ️ This feature is experimental and may change.

Adds a constraint to the model using the given expression.

This method processes the provided constraint expression to add it as a constraint to the optimization problem. Only linear constraints of the following types are supported:

  • A == B

  • A < B

  • A <= B

  • A > B

  • A >= B

Constraints involving quadratic terms are not supported and will raise an error.

Parameters

equation

The constraint expression. This should be an instance of Equation.

Raises

ValueError

If the provided expression contains quadratic terms.

TypeError

If the provided expression is of an invalid or unsupported type.

Examples

>>> from titanq import Model, Vtype
>>> x = model.add_variable_vector('x', 2, Vtype.BINARY)
>>> y = model.add_variable_vector('y', 2, Vtype.BINARY)
>>> expr = sum(x+y) == 1
>>> model.add_constraint_from_expression(expr)
titanq.Model.add_equality_constraint(self, constraint_mask: ndarray, limit: float32) -> None

Adds an equality constraint vector to the model.

Parameters

constraint_mask

A NumPy 1-D dense ndarray (float32). The constraint_mask vector of shape (N,) where N is the number of variables.

limit

Limit value of the constraint (right-hand side).

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of constraints exceeds the limit.

ValueError

If the constraint_mask shape does not fit the expected shape of this model, or if the constraint_mask or limit contains invalid values (NaN or inf).

Examples

>>> constraint_mask = np.array([1.05, -1.1], dtype=np.float32)
>>> limit = -3.45
>>> model.add_equality_constraint(constraint_mask, limit)
titanq.Model.add_equality_constraints_matrix(self, constraint_mask: ndarray, limit: ndarray) -> None

Adds an equality constraint matrix to the model.

Parameters

constraint_mask

A NumPy 2-D dense ndarray (float32). The constraint_mask matrix of shape (M, N) where M is the number of constraints and N is the number of variables.

limit

A NumPy 1-D array (float32). The limit vector of shape (M,) where M is the number of constraints.

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of constraints exceeds the limit.

ValueError

If the constraint_mask shape does not fit the expected shape of this model, or if the constraint_mask or limit contains invalid values (NaN or inf).

Examples

>>> constraint_mask = np.array([[-3.51, 0, 0, 0], [10, 0, 0, 0]], dtype=np.float32)
>>> limit = np.array([2, 10], dtype=np.float32)
>>> model.add_equality_constraints_matrix(constraint_mask, limit)
titanq.Model.add_inequality_constraint(self, constraint_mask: ndarray, constraint_bounds: ndarray)

Adds an inequality constraint vector to the model. At least one bound must be set.

Parameters

constraint_mask

A NumPy 1-D dense ndarray (float32). The constraint_mask vector of shape (N,) where N is the number of variables.

constraint_bounds

A NumPy 1-D ndarray (float32). The constraint bounds vector of shape (2,), [lower_bound, upper_bound]; use np.nan to leave one bound unset.

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of constraints exceeds the limit.

ValueError

If the constraint_mask shape does not fit the expected shape of this model, if the constraint_mask contains invalid values (NaN or inf), or if the lower bound is greater than or equal to the upper bound.

Examples

>>> constraint_mask = np.array([1.05, -1.1], dtype=np.float32)
>>> constraint_bounds = np.array([1.0, np.nan], dtype=np.float32)
>>> model.add_inequality_constraint(constraint_mask, constraint_bounds)
titanq.Model.add_inequality_constraints_matrix(self, constraint_mask: ndarray, constraint_bounds: ndarray)

Adds an inequality constraint matrix to the model.

Parameters

constraint_mask

A NumPy 2-D dense ndarray (float32). The constraint_mask matrix of shape (M, N) where M is the number of constraints and N is the number of variables.

constraint_bounds

A NumPy 2-D ndarray (float32). The constraint bounds matrix of shape (M, 2) where M is the number of constraints; each row is [lower_bound, upper_bound].

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of constraints exceeds the limit.

ValueError

If the constraint_mask shape does not fit the expected shape of this model, if the constraint_mask contains invalid values (NaN or inf), or if a lower bound is greater than or equal to its upper bound.

Examples

>>> constraint_mask = np.array([[-3.51, 0], [10, 0]], dtype=np.float32)
>>> constraint_bounds = np.array([[8, 9], [np.nan, 100_000]], dtype=np.float32)
>>> model.add_inequality_constraints_matrix(constraint_mask, constraint_bounds)
titanq.Model.add_quadratic_equality_constraint(self, constraint_mask: ndarray, limit: float32, constraint_linear_weights: ndarray | None = None) -> None

ℹ️ This feature is experimental and may change.

Adds a quadratic equality constraint to the model.

Parameters

constraint_mask

A NumPy 2-D dense ndarray (float32). The constraint_mask matrix of shape (N, N) where N is the number of variables.

limit

Limit value of the constraint (right-hand side).

constraint_linear_weights

A NumPy 1-D dense ndarray (float32). The constraint_linear_weights vector of shape (N,) where N is the number of variables.

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of quadratic constraints exceeds the limit.

ValueError

If the constraint_mask shape does not fit the expected shape of this model, or if the constraint_mask or limit contains invalid values (NaN or inf).

Examples

>>> constraint_mask = np.array([[0.1, 0.1], [0.1, 0.1]], dtype=np.float32)
>>> limit = 1.0
>>> constraint_linear_weights = np.array([0, 0.2], dtype=np.float32)
>>> model.add_quadratic_equality_constraint(constraint_mask, limit, constraint_linear_weights)
titanq.Model.add_quadratic_inequality_constraint(self, constraint_mask: ndarray, constraint_bounds: ndarray, constraint_linear_weights: ndarray | None = None) -> None

ℹ️ This feature is experimental and may change.

Adds a quadratic inequality constraint to the model.

Parameters

constraint_mask

A NumPy 2-D dense ndarray (float32). The constraint_mask matrix of shape (N, N) where N is the number of variables.

constraint_bounds

A NumPy 1-D ndarray (float32). The constraint bounds vector of shape (2,), [lower_bound, upper_bound].

constraint_linear_weights

A NumPy 1-D dense ndarray (float32). The constraint_linear_weights vector of shape (N,) where N is the number of variables.

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of quadratic constraints exceeds the limit.

ValueError

If the constraint_mask shape does not fit the expected shape of this model, or if the constraint_mask contains invalid values (NaN or inf).

Examples

>>> constraint_mask = np.array([[1.05, -1.1], [0.0, 2.0]], dtype=np.float32)
>>> constraint_bounds = np.array([np.nan, 10], dtype=np.float32)
>>> constraint_linear_weights = np.array([4.0, 4.0], dtype=np.float32)
>>> model.add_quadratic_inequality_constraint(constraint_mask, constraint_bounds, constraint_linear_weights)
titanq.Model.add_set_partitioning_constraint(self, constraint_mask: ndarray)

Adds a set partitioning constraint vector to the model.

Parameters

constraint_mask

A NumPy 1-D dense ndarray (must be binary). The constraint_mask vector of shape (N,) where N is the number of variables.

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of constraints exceeds the limit.

ConstraintSizeError

If the constraint_mask shape does not fit the expected shape of this model.

ValueError

If the constraint_mask data type is not binary.

Examples

>>> constraint_mask = np.array([1, 1, 1, 0, 1])
>>> model.add_set_partitioning_constraint(constraint_mask)
titanq.Model.add_set_partitioning_constraints_matrix(self, constraint_mask: ndarray)

Adds set partitioning constraints in matrix format to the model.

Parameters

constraint_mask

A NumPy 2-D dense ndarray (must be binary). The constraint_mask matrix of shape (M, N) where M is the number of constraints and N is the number of variables.

Raises

MissingVariableError

If no variables have been added to the model.

MaximumConstraintLimitError

If the number of constraints exceeds the limit.

ConstraintSizeError

If the constraint_mask shape does not fit the expected shape of this model.

ValueError

If the constraint_mask data type is not binary.

Examples

>>> constraint_mask = np.array([[1, 1, 1, 0, 1], [1, 1, 1, 1, 0]])
>>> model.add_set_partitioning_constraints_matrix(constraint_mask)
titanq.Model.add_variable_vector(self, name: str = '', size: int = 1, vtype: Vtype = Vtype.BINARY, variable_bounds: List[Tuple[int, int]] | List[Tuple[float, float]] | None = None) -> ndarray[Any, dtype[Any]]

Add a vector of variables to the model. Multiple variable vectors can be added, but each must have a different name.

Notes

If vtype is set to Vtype.INTEGER or Vtype.CONTINUOUS, variable_bounds must be set.

Parameters

name

The name given to this variable vector.

size

The size of the vector.

vtype

Type of the variables inside the vector.

variable_bounds

Lower and upper bounds for each variable in the vector: a list of (lower, upper) tuples (either integers or floats).

Return

variable

The variable vector created.

Raises

MaximumVariableLimitError

If the total size of all variables exceeds the limit.

ValueError

If the size of the vector is < 1.

Examples

>>> from titanq import Model, Vtype
>>> model.add_variable_vector('x', 3, Vtype.BINARY)
>>> model.add_variable_vector('y', 2, Vtype.INTEGER, [[0, 5], [1, 6]])
>>> model.add_variable_vector('z', 3, Vtype.CONTINUOUS, [[2.3, 4.6], [3.1, 5.3], [1.1, 4]])
titanq.Model.get_constraints_weights_and_bounds(self) -> Tuple[ndarray | None, ndarray | None]

Retrieve the weights and bounds of all constraints from the model.

Return

A tuple (constraints weights, constraints bounds); each element is None if it has not been set.
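
Examples

A minimal sketch, assuming a model whose linear constraints were already added as in the examples above; both elements are None when no constraint has been set:
>>> constraint_weights, constraint_bounds = model.get_constraints_weights_and_bounds()
>>> if constraint_weights is not None and constraint_bounds is not None:
...     print(constraint_weights.shape, constraint_bounds.shape)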

titanq.Model.get_objective_matrices(self) -> Tuple[ndarray | None, ndarray | None]

Retrieve the weights and bias vector from the model’s objective. Both will be None if not set.

Return

A tuple (weights matrix, bias vector); each element is None if it has not been set.
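
Examples

A minimal sketch, assuming an objective was already set on the model (e.g. via set_objective_matrices below); both elements are None if no objective has been set:
>>> weights, bias = model.get_objective_matrices()
>>> if bias is not None:
...     print(bias.shape)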

titanq.Model.get_quad_constraints_linear_weights(self) -> ndarray | None

Retrieve the quadratic constraints linear weights.

Return

The quadratic constraints linear weights, or None if not set.

titanq.Model.get_quad_constraints_weights_and_bounds(self) -> Tuple[ndarray | None, ndarray | None]

Retrieve the quadratic constraints weights and bounds.

Return

A tuple (quadratic constraints weights, quadratic constraints bounds); each element is None if it has not been set.
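
Examples

A minimal sketch covering this getter together with get_quad_constraints_linear_weights above, assuming a quadratic constraint was already added to the model (e.g. via add_quadratic_inequality_constraint); each value is None when no quadratic constraint has been set:
>>> quad_weights, quad_bounds = model.get_quad_constraints_weights_and_bounds()
>>> quad_linear_weights = model.get_quad_constraints_linear_weights()
>>> if quad_weights is not None:
...     print(quad_weights.shape)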

titanq.Model.optimize(self, *, beta: List[float] = [1, 0.5, 0.33, 0.25, 0.2, 0.16, 0.14, 0.125], coupling_mult: float = 0.5, timeout_in_secs: float = 10.0, num_chains: int = 8, num_engines: int = 1, penalty_scaling: float | None = None, precision: Precision = Precision.AUTO) -> OptimizeResponse

Optimize this model.

Notes

All of the files used during this computation will be cleaned up at the end. For more information on how to tune these parameters, see:

The tuning guide

TitanQ API documentation

Parameters

beta

Scales the problem by this factor (inverse of temperature). Beta values can then be adjusted to see if a better objective function value can be obtained. A lower beta allows for easier escape from local minima, while a higher beta is more likely to respect penalties and constraints.

Beta values tuning guide

Range: List of [0, 20000]

Recommended values: List of [0.004…2]

NOTE: Beta values should be provided in descending order

>>> import numpy as np
>>> num_chains = 8
>>> beta = (1/(np.linspace(2, 50, num_chains, dtype=np.float32))).tolist()

coupling_mult

Strength of the coupling that forces multiple logical copies of a variable to share the same ground-state solution. This is a heuristic to be tuned per problem: values that are too small lead to an incorrect solution, while values that are too large take a long time to converge to the correct solution.

coupling_mult tuning guide

Range: [0, 100]

Recommended values: [0.05…1.0]

timeout_in_secs

Maximum time (in seconds) the computation can take.

timeout_in_secs tuning guide

Range: [0.1, 600]

NOTE: Currently there is no other stop criteria. All computations will run up to the timeout value.

num_chains

Number of parallel chains running the computation. Only the best result across all chains is returned.

num_chains tuning guide

Recommended values: [8, 16, 32]

NOTE: num_chains * num_engines cannot exceed 512

num_engines

Number of independent batches of chains to run the computation. The best result of the batch of chains in each engine is returned.

num_engines tuning guide

Range: [1, 512]

NOTE: num_chains * num_engines cannot exceed 512

penalty_scaling

Scaling factor applied to constraint violation penalties. Increasing this value results in stronger constraint enforcement, at the cost of a higher chance of becoming trapped in local minima.

Range: penalty_scaling > 0

NOTE: If None, a value will be inferred from the objective function of the problem.

precision

Some problems need a higher-precision implementation to converge properly, for example when the problem weights exhibit a high dynamic range. Setting Precision.HIGH enables this higher-precision (but slightly slower) implementation. Precision.STANDARD uses a medium precision suited to general use, offering the best speed/efficiency trade-off. The default, Precision.AUTO, inspects the given problem and chooses between Precision.HIGH and Precision.STANDARD based on internal metrics.
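
For example, a minimal sketch forcing the higher-precision implementation (this assumes Precision is importable from the titanq package, like Model and Vtype in the examples above):
>>> from titanq import Precision
>>> response = model.optimize(timeout_in_secs=60, precision=Precision.HIGH)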

Returns

OptimizeResponse

Optimized response data object

Raises

MissingVariableError

If no variables have been added to the model.

MissingObjectiveError

If no objective matrices have been added to the model.

Examples

basic solve
>>> response = model.optimize(timeout_in_secs=60)
multiple engines
>>> response = model.optimize(timeout_in_secs=60, num_engines=2)
custom values
>>> response = model.optimize(beta=[0.1], coupling_mult=0.75, num_chains=8)
print values
>>> print("-" * 15, "+", "-" * 26, sep="")
>>> print("Ising energy   | Result vector")
>>> print("-" * 15, "+", "-" * 26, sep="")
>>> for ising_energy, result_vector in response.result_items():
...     print(f"{ising_energy: <14f} | {result_vector}")
titanq.Model.set_objective_expression(self, expr: MathObject, target=Target.MINIMIZE)

ℹ️ This feature is experimental and may change.

Sets the objective function for the optimization problem using the given expression.

This method processes the provided expression to extract the bias vector and weight matrix, and then sets these as the objective matrices for the optimization problem.

Parameters

expr

The expression defining the objective function. This should be an instance of MathObject.

target

The target of this objective matrix.

Raises

TypeError

If the provided expression contains any invalid or unsupported input.

Examples

>>> import numpy as np
>>> from titanq import Model, Vtype
>>> x = model.add_variable_vector('x', 2, Vtype.BINARY)
>>> y = model.add_variable_vector('y', 2, Vtype.BINARY)
>>> expr = (np.array([3, 4]) * x + (x * y) - 5 * y)[0]
>>> model.set_objective_expression(expr)
titanq.Model.set_objective_matrices(self, weights: ndarray | None, bias: ndarray, target=Target.MINIMIZE)

Set the objective matrices for the model.

Parameters

weights

The quadratic objective matrix; this matrix must be symmetric. A NumPy 2-D dense ndarray (must be float32). The weights matrix can be set to None if the problem is linear, with no quadratic elements.

bias

The linear objective (bias) vector. A NumPy 1-D ndarray (must be float32).

target

The target of this objective matrix.

Raises

MissingVariableError

If no variables have been added to the model.

ObjectiveAlreadySetError

If an objective has already been set in this model.

ValueError

If the weights shape or the bias shape does not fit the variables in the model. If the weights or bias data type is not float32.

Examples

>>> from titanq import Model, Target
>>> import numpy as np
>>> edges = {0:[4,5,6,7], 1:[4,5,6,7], 2:[4,5,6,7], 3:[4,5,6,7], 4:[0,1,2,3], 5:[0,1,2,3], 6:[0,1,2,3], 7:[0,1,2,3]}
>>> size = len(edges)
>>> weights = np.zeros((size, size), dtype=np.float32)
>>> for root, connections in edges.items():
...     for c in connections:
...         weights[root][c] = 1
>>> # construct the bias vector (Uniform weighting across all nodes)
>>> bias = np.asarray([0]*size, dtype=np.float32)
>>> model.set_objective_matrices(weights, bias, Target.MINIMIZE)