TensorFlow

To easily capture model metadata, you can use our inference runners. A runner runs a dataset through the model and logs the metadata katiML needs, so you don't have to call upload_to_lake() or the REST API yourself.

from dioptra.inference.tf.classifier_runner import ClassifierRunner
class ClassifierRunner(
    model: Model,
    embeddings_layers: List[str],
    logits_layer: str,
    class_names: List[str],
    metadata: Optional[List[object]]
)
Arguments

model: a TensorFlow model

embeddings_layers: a list of names of embedding layers. Names and indexes can be combined. For example: [0].embeddings

logits_layer: the name of the logits layer

class_names: a list of class names corresponding to the logits indexes

metadata: a list of metadata to be added to each datapoint. The index in this list should match the index in the dataset. The metadata can include any of the fields accepted by Dioptra and will override model-inferred values.
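Because metadata is matched to datapoints by index, it is typically built in the same order as the dataset. The sketch below is illustrative only; the field names (groundtruth, uri, tags) are assumptions about what Dioptra accepts, not a definitive schema.

```python
# Sketch: building a metadata list aligned index-for-index with the dataset.
# The keys below (groundtruth, uri, tags) are assumed field names for
# illustration; check the Dioptra metadata reference for the exact schema.
image_paths = ["img_0.png", "img_1.png", "img_2.png"]
labels = ["cat", "dog", "cat"]

metadata = [
    {
        "groundtruth": label,            # ground truth travels as metadata, not in the dataset
        "uri": path,                     # assumed key for the image location
        "tags": {"split": "validation"}, # arbitrary tags attached to the datapoint
    }
    for path, label in zip(image_paths, labels)
]

# The i-th metadata entry describes the i-th datapoint, which is why the
# dataset must not be shuffled when metadata is used.
assert len(metadata) == len(image_paths)
```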

def run(
    self,
    dataset: Dataset
)
Arguments

dataset: an iterable dataset to run the model inference on. The iterator should return only the features, not the ground truth; pass ground truth as metadata instead. The dataset should not be shuffled when used with metadata, since metadata is matched to datapoints by index.
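The dataset contract above can be illustrated with a minimal stand-in. A plain Python generator is used here to keep the sketch self-contained; in practice you would pass a tf.data.Dataset of preprocessed features.

```python
# Minimal stand-in for the dataset contract: the iterator yields only
# features, never the ground truth. In practice this would be a
# tf.data.Dataset built from your preprocessed inputs.
def make_dataset(features):
    # No shuffling: the iteration order must match the metadata list
    # index-for-index.
    for x in features:
        yield x  # features only, no labels

features = [[0.1, 0.2], [0.3, 0.4]]
batches = list(make_dataset(features))
# Ground truth is not part of the dataset; it travels separately through
# the runner's metadata argument.
```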

def wait_for_uploads(self) -> List[object]
# Waits for all metadata generated during inference to be uploaded to Dioptra.
# Returns the list of uploads generated by the runner during the inference.

Example usage

You can also send metadata such as ground truth, image URIs, or tags. Here is a complete example.
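The following sketch ties the pieces together. Only ClassifierRunner, run(), and wait_for_uploads() come from the API documented above; the model, layer names, class names, and metadata fields are placeholder assumptions you would replace with your own.

```python
import tensorflow as tf
from dioptra.inference.tf.classifier_runner import ClassifierRunner

# Placeholder model with named layers -- substitute your own.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", name="embeddings"),
    tf.keras.layers.Dense(2, name="logits"),
])

# Features-only dataset, unshuffled so indexes line up with the metadata.
dataset = tf.data.Dataset.from_tensor_slices(
    tf.zeros([4, 8])
).batch(2)

# Metadata aligned by index with the dataset: ground truth, image URIs, tags.
# The field names are assumed for illustration.
metadata = [
    {
        "groundtruth": "cat",
        "uri": f"s3://my-bucket/img_{i}.png",
        "tags": {"split": "validation"},
    }
    for i in range(4)
]

runner = ClassifierRunner(
    model=model,
    embeddings_layers=["embeddings"],   # name of the embedding layer above
    logits_layer="logits",              # name of the logits layer above
    class_names=["cat", "dog"],         # one name per logits index
    metadata=metadata,
)

runner.run(dataset)
uploads = runner.wait_for_uploads()  # blocks until all metadata reaches Dioptra
```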
