Client
This class provides a synchronous interface to qognitive’s API. Under the hood, it uses the requests library to make HTTP requests to the API.
- class qcog_python_client.QcogClient
Bases: BaseQcogClient[RequestsClient], TrainProtocol, InferenceProtocol
- classmethod create(*, token: str | None = None, hostname: str = 'dev.qognitive.io', port: int = 443, api_version: str = 'v1', safe_mode: bool = False, version: str = '0.0.63') QcogClient
Create a client with initialization from the API.
Since __init__ is always synchronous, we cannot call the API from that method of class creation. If we need to fetch things such as the project ID associated with the token, the only way to do that properly with async objects is to use a factory method.
Here we replace __init__ with create: once the object has been created (the synchronous part is limited to allocating the object in memory), we can call the API using our async methods without blocking on IO.
Qcog API client implementation. There are two main expected usages:
Training
Inference
The class is defined so that every parameter must be passed explicitly, as a keyword argument:
hsm_client = QcogClient(token="value", version="0.0.45")
Each “public” method returns self so that method calls can be chained, unless it is one of the following utilities: status and inference.
Each method that results in an API call stores the API response as a JSON dict in a class attribute.
In practice, a fresh training run looks like:
hsm = QcogClient.create(...).pauli(...).data(...).train(...)
where the “…” would be replaced with the desired parametrization.
If we wanted, we could run inference right after training:
result: pd.DataFrame = hsm.inference(...)
but this would require running the following polling call first:
hsm.wait_for_training().inference(...)
to make sure training has completed successfully.
To run multiple inferences on a persistent trained model, the trained_model guid goes to storage. Datasets? Also storage. Training parameters? Storage. That way one can rebuild the client to run inference:
hsm = QcogClient.create(...).preloaded_model(trained_model_guid)
for df in list_of_dataframes:
    result: DataFrame = hsm.inference(...)
For most methods the call order is not important, with the following exceptions:
train may only be called after data and a named model (pauli or ensemble) have been set
inference and status require a preloaded model first
- Parameters:
token (str | None) – a valid API token granting access; optional, when unset (or None) the client expects to find the proper value in the QCOG_API_TOKEN environment variable
hostname (str) – API endpoint hostname, currently defaults to dev.qognitive.io
port (int) – port value, defaults to HTTPS 443
api_version (str) – the “vX” part of the URL for the API version
safe_mode (bool) – if True, runs health checks before any API call sequence
version (str) – the Qcog version to use. Must be no smaller than OLDEST_VERSION and no greater than NEWEST_VERSION
- Returns:
the client object
- Return type:
QcogClient
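Example (a minimal sketch; the token string is a placeholder, and omitting token relies on the QCOG_API_TOKEN environment variable):
# Explicit token
hsm = QcogClient.create(token="my-api-token", version="0.0.63")
# Token picked up from the QCOG_API_TOKEN environment variable
hsm = QcogClient.create(version="0.0.63")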
- data(data: DataFrame) QcogClient
Upload a dataset for training.
For a fresh “to train” and properly initialized model, upload a pandas DataFrame dataset.
- Parameters:
data (pd.DataFrame) – the dataset as a DataFrame
upload (bool) – if True, post the dataset
- Return type:
QcogClient
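Example (a minimal sketch; the column names and values are placeholders):
import pandas as pd
df = pd.DataFrame({"feature_a": [0, 1, 0, 1], "feature_b": [1.0, 0.5, 0.25, 0.75]})
hsm = hsm.data(df)  # returns self, so further calls can be chained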
- ensemble(operators: list[str | int], dim: int = 16, num_axes: int = 4, sigma_sq: dict[str, float] = {}, sigma_sq_optimization: dict[str, float] = {}, seed: int = 42, target_operator: list[str | int] = []) QcogClient
Select EnsembleModel for the training.
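Example (a minimal sketch; the operator labels are illustrative, the remaining values are the documented defaults):
hsm = hsm.ensemble(operators=["X", "Y", "Z"], dim=16, num_axes=4, seed=42)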
- property http_client: CLIENT
Return the http client.
- inference(data: DataFrame, parameters: InferenceParameters) DataFrame
Query an inference from a trained model.
- Parameters:
data (pd.DataFrame) – the dataset as a DataFrame
parameters (dict) – inference parameters
- Returns:
the predictions
- Return type:
pd.DataFrame
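Example (a minimal sketch; new_df is a placeholder DataFrame and the empty parameters dict stands in for a real InferenceParameters payload):
predictions: pd.DataFrame = hsm.inference(data=new_df, parameters={})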
- pauli(operators: list[str | int], qbits: int = 2, pauli_weight: int = 2, sigma_sq: dict[str, float] = {}, sigma_sq_optimization: dict[str, float] = {}, seed: int = 42, target_operator: list[str | int] = []) QcogClient
Select PauliModel for the training.
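Example (a minimal sketch; the operator labels are illustrative, the remaining values are the documented defaults):
hsm = hsm.pauli(operators=["X", "Y"], qbits=2, pauli_weight=2, seed=42)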
- preloaded_data(guid: str) QcogClient
Retrieve a previously uploaded dataset from its guid.
- Parameters:
guid (str) – guid of a previously uploaded dataset
- Returns:
itself
- Return type:
QcogClient
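Example (a minimal sketch; the guid is a placeholder):
hsm = hsm.preloaded_data("00000000-0000-0000-0000-000000000000")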
- preloaded_model(guid: str) QcogClient
Preload a model from a guid.
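Example (a minimal sketch; trained_model_guid is a placeholder for a guid returned by an earlier training run):
hsm = QcogClient.create(token="my-api-token").preloaded_model(trained_model_guid)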
- preloaded_training_parameters(guid: str, rebuild: bool = False) QcogClient
Retrieve preexisting training parameters payload.
- Parameters:
guid (str) – model guid
rebuild (bool) – if True, will initialize the class “model” (e.g. pauli or ensemble) from the payload
- Returns:
itself
- Return type:
QcogClient
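Example (a minimal sketch; the guid is a placeholder):
hsm = hsm.preloaded_training_parameters("00000000-0000-0000-0000-000000000000", rebuild=True)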
- status() str
Fetch the status of the training request.
- train(batch_size: int, num_passes: int, weight_optimization: GradOptimizationParameters | AdamOptimizationParameters | AnalyticOptimizationParameters | EmptyDictionary, get_states_extra: LOBPCGFastStateParameters | PowerIterStateParameters | EIGHStateParameters | EIGSStateParameters | NPEIGHStateParameters | GradStateParameters | EmptyDictionary) QcogClient
Start a training job.
For a fresh “to train” model that is properly configured and initialized, trigger a training request.
- Parameters:
batch_size (int) – The number of samples to use in each training batch.
num_passes (int) – The number of passes through the dataset.
weight_optimization (NotRequiredWeightParams) – optimization parameters for the weights
get_states_extra (NotRequiredStateParams) – optimization parameters for the states
- Return type:
QcogClient
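Example (a minimal sketch; batch_size and num_passes are illustrative, and the empty dicts stand in for the EmptyDictionary option rather than real GradOptimizationParameters / state parameters):
hsm = hsm.train(batch_size=64, num_passes=10, weight_optimization={}, get_states_extra={})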
- property version: str
Qcog version.
- wait_for_training(poll_time: int = 60) QcogClient
Wait for training to complete.
Note
This function is blocking
- Parameters:
poll_time (int) – interval between status checks, in seconds
- Returns:
itself
- Return type:
QcogClient
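Example (a minimal sketch; poll_time is illustrative, new_df is a placeholder DataFrame, and the empty parameters dict stands in for a real InferenceParameters payload):
result: pd.DataFrame = hsm.wait_for_training(poll_time=30).inference(data=new_df, parameters={})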