This article gives an overview of the Peak APIs screens and their functions.

These functions enable you to configure and deploy APIs that expose the outputs of your models and integrate them with other systems.

They manage the final stage of the API deployment process, after you have developed your API code, written a launcher script for your API, and created a Dockerfile and image to contain and run this code.

For an end-to-end guide to this process, see Deploying APIs.
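As a rough sketch of the kind of launcher script the process assumes, the snippet below serves a model over HTTP using only the standard library. The /predict route, port, and toy model are illustrative assumptions, not Peak-specific requirements:

```python
# Hypothetical launcher script: exposes a model's predict function over HTTP.
# The /predict route, port 8000, and the toy model below are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    """Toy stand-in for a real model: sums the input features."""
    return {"score": sum(features)}


class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def main(port=8000):
    """The container's entry point (e.g. the Dockerfile CMD) would call this."""
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

The Dockerfile would then copy this script into the image and run it as the container's command.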


Getting to the API screens

Go to Work > APIs.
The APIs screen appears.

If your Peak organization does not have any configured endpoints, you will be prompted to add one.
If you have already configured endpoints, they will all be listed here.

What can you do from the API screens?

From here, you can:

  • Create new or edit existing API endpoints and API configurations

  • Deploy endpoints

  • Specify the resources that your API will use

  • Manage API stacks
    These are used for performing blue-green tests on your API deployments.

Endpoints tab

An endpoint defines the base URL of your API and allows multiple configurations to be accessed from the same endpoint.

The Endpoints tab lists all of the API endpoints that are deployed in your organization, along with their status.

Multi-stack deployment

Endpoints can contain multiple stacks, each pointing to a different configuration with a distinct URL. Multiple stacks allow A/B or blue/green tests to be performed.

For more information, see Stacks.
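One way to picture percentage-based traffic routing between stacks is a weighted random choice. The sketch below assumes a mapping of stack URLs to traffic percentages; the URLs and weights are invented for illustration:

```python
import random


def choose_stack(stacks, rng=random):
    """Pick a stack at random, weighted by its traffic percentage.

    `stacks` maps a stack URL to the percentage of traffic it should
    receive; the percentages are assumed to sum to 100.
    """
    roll = rng.uniform(0, 100)
    cumulative = 0
    for url, percent in stacks.items():
        cumulative += percent
        if roll < cumulative:
            return url
    return url  # fall through on floating-point edge cases


# e.g. a blue/green split sending 90% of traffic to the current stack:
routing = {"https://api.example.com/blue": 90,
           "https://api.example.com/green": 10}
```

Shifting the percentages gradually toward the new stack is the essence of a blue/green rollout.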

Shadow stack deployments

With shadow deployment, a new version is deployed alongside the current version.

Traffic for the live deployment is duplicated and routed to the shadow for testing and debugging purposes.

For more information, see Stacks.
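The traffic-duplication behaviour can be sketched as a small request handler. The function names are hypothetical; the key property is that the shadow version receives a copy of each request but its response is never served to the caller:

```python
def handle_with_shadow(request, live_version, shadow_version):
    """Route a request to both versions; only the live response is served.

    The shadow's response is kept for inspection (logging, diffing) rather
    than returned to the caller. The callables here are illustrative.
    """
    shadow_response = None
    try:
        shadow_response = shadow_version(request)  # duplicated traffic
    except Exception:
        pass  # a failing shadow must never affect live traffic
    live_response = live_version(request)
    return live_response, shadow_response
```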

Endpoints metadata

The Endpoints tab shows the following metadata for each stack:

  • Traffic Routing
    This shows the percentage of total traffic that is being routed to the stack.

  • Status

    • Not ready

The stack is either still deploying or its deployment has failed.

    • In service
      The stack has been successfully deployed and is in service.

  • Last Deployment Status

    • Success
      The stack has been successfully deployed.

    • Failed
      The deployment has failed.
      Click on the stack to view logs.

  • Last Deployment At
    The time and date that the last deployment was attempted.

Configurations tab

A configuration is a wrapper around the API image which allows you to define additional parameters, such as instance sizes, environment variables, custom entry points, and scaling rules.
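As a mental model of what a configuration wraps around the image, the sketch below uses a dataclass. The field names and defaults are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


# Hypothetical shape of an API configuration -- the field names and
# defaults are illustrative, not the platform's actual schema.
@dataclass
class ApiConfiguration:
    image: str                           # the API image the configuration wraps
    instance_size: str = "general-purpose-small"
    environment: Dict[str, str] = field(default_factory=dict)
    entry_point: Optional[str] = None    # custom entry point, if any
    min_instances: int = 1               # simple scaling rule
    max_instances: int = 2


config = ApiConfiguration(
    image="registry.example.com/churn-api:1.4",
    environment={"MODEL_PATH": "/models/churn.pkl"},
)
```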

The Configurations tab lists all of the API configurations that have been added to the tenant along with their metadata. Configurations can be tagged, previewed, cloned, or deleted by hovering your pointer over them.

Configurations cannot be edited.
Instead, clone a configuration, make changes to it, and then save the clone as a new configuration.

Configurations metadata

The Configurations tab shows the following metadata:

  • Status

    • Available

      Indicates the configuration is available for use.

    • In use
Indicates that the configuration is being used by an endpoint stack.

  • Created by
The tenant user who created the configuration.

  • Created at
The time and date that the configuration was created.

  • Tags
    You can create and add tags to help with categorizing your API configurations.

Creating configurations and deploying endpoints

From the top of the APIs screen, you can create new API configurations, deploy endpoints, and manage the resources used by your API instances.

Create configuration

Clicking here opens the Create Configuration screen. This enables you to define and save an API configuration so that it can be used when deploying an API.

For more information, see Deploying APIs.

Deploy endpoint

Clicking here opens the Deploy Endpoint screen. This enables you to define endpoints for your API.

For more information, see Deploying APIs.


Manage resources

The settings let you specify the maximum instance size for each type of instance that could be used for an API configuration.

They are separated into the following types, each with a range of instance sizes:

  • General Purpose

  • Memory Intensive

  • CPU Intensive

  • GPU Intensive

It is best practice to start with one of the smaller sizes and scale up if necessary.
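The start-small advice can be sketched as a simple scale-up rule. The size ladders and the utilisation threshold below are invented for illustration, not platform values:

```python
# Illustrative only: the size names and the 80% threshold are invented,
# not the platform's actual instance sizes or scaling rules.
SIZE_LADDER = {
    "general-purpose": ["small", "medium", "large"],
    "memory-intensive": ["small", "medium", "large"],
}


def next_size(instance_type, current, cpu_utilisation):
    """Return the next-larger size in the same family when utilisation is high."""
    ladder = SIZE_LADDER[instance_type]
    index = ladder.index(current)
    if cpu_utilisation > 0.8 and index + 1 < len(ladder):
        return ladder[index + 1]
    return current
```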