For more information about all of the stages that are required for deploying an API on Peak, see How to deploy an API on Peak.


Getting to the screens

Go to Work > APIs.
The APIs screen appears.
If your Peak organization does not have any configured endpoints, you are prompted to add one.
If you already have configured endpoints, they are all listed here.

Adding and cloning configurations

A configuration is a wrapper around an API image that allows you to define additional parameters, such as instance sizes, environment variables, custom entry points, and scaling rules.

To add a new configuration:

  1. From the APIs screen, click CREATE CONFIGURATION.
    The Create Configuration pane appears.

  2. Complete the required fields and click SAVE.
    Your configuration appears in the Configurations tab.

To clone an existing configuration:

  1. From the APIs screen, hover over the configuration that you want to clone.

  2. Click the Clone Icon.
    The Clone Configuration pane appears.

  3. Complete the required fields and click SAVE.
    Your configuration appears in the Configurations tab.

Completing the Configuration pane fields

The fields on the Create Configuration pane and Clone Configuration pane are the same.

Complete these fields when adding or cloning an API configuration:

Configuration Name

Enter a suitable name for the configuration.

Only alphanumeric characters and hyphens can be used.
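As a sketch, this naming rule can be expressed as a simple check. The platform's exact validation may differ (for example, in length limits), so treat this as an illustration only:

```python
import re

def is_valid_config_name(name: str) -> bool:
    """Return True if the name contains only alphanumerics and hyphens."""
    # One or more characters, each an ASCII letter, digit, or hyphen.
    return re.fullmatch(r"[A-Za-z0-9-]+", name) is not None
```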


Image

Select a pre-built image of the API Deployment type.

Use Image Management to create and manage API Deployment images.

To create a new API image or check an existing one, go to Factory > Image Management.

Model (optional)

If your API uses model files, you can specify where they are stored here.

For example, if your model files are stored in your organization's S3 data lake, enter <tenant>/datascience/model/ and then the name of the required file.

The file will then be available in the container when the stack is deployed.

Entry Point (Optional)

Specify an entry point for your API to run your model via the command line.

Multiple commands are supported when they are placed on separate lines.

For example, to pass python3 as a command, define it as:

  • python3


Instance Size

This lets you specify the instance size for the API configuration.

Instances are separated into the following types, each offering a range of sizes:

  • General Purpose

  • Memory Intensive

  • CPU Intensive

  • GPU Intensive

It is best practice to start with one of the smaller sizes and scale up if necessary.

Max Instance

Specify the maximum number of instances that you want to deploy for your API.

To maintain high availability, the default is a minimum of two instances.

Health Check URL

When your API is deployed, the system checks whether the API application is ready by pinging this URL. If the application is not ready, the deployment fails.

It can be any URL in the application that returns a success (200) response when hit.

For example, it could be:

  • Root (/) 

  • Other paths, such as /ping or /healthz

For more information, see Kubernetes Documentation: Using API health endpoints.


Port

Enter the port number that your application runs on.

The default is port 8080.


Environment Variables

From here, you can specify environment variables to modify the behavior of processes or to override variables defined in the image.

In addition, secret credentials can be added as environment variables from the Select Secret Credentials drop-down. This means that external credentials, such as a GitHub password, can be passed without exposing secret values.

Secret credentials are added to the Select Secret Credentials drop-down from Peak’s Console area.

For more details, see How to set up External Credentials.
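A sketch of how an application might read such variables at runtime. The variable names here are hypothetical illustrations, not names defined by Peak; the actual names are the ones you set in the configuration or pick from the Select Secret Credentials drop-down:

```python
import os

def read_settings() -> dict:
    """Read configuration from environment variables, with fallbacks."""
    return {
        # Hypothetical plain variable with a local-development default.
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        # Hypothetical secret injected by the platform; None if absent.
        "git_token": os.environ.get("GIT_TOKEN"),
    }
```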

Environment variable formats for external credentials

The format is: NAME_TYPE, where:

  • NAME - is your credential name

  • TYPE - is the credential type


For Bitbucket credentials, the format is:




Scaling Rules

You can define scaling rules based on CPU and memory usage to help your API deal with sudden changes in load.

At least one scaling rule must be defined.
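Conceptually, a usage-based scaling rule fires when a metric crosses a threshold. The sketch below illustrates that idea only; the function, parameters, and default thresholds are assumptions, not how the platform evaluates its rules:

```python
def should_scale_up(cpu_pct: float, mem_pct: float,
                    cpu_threshold: float = 70.0,
                    mem_threshold: float = 80.0) -> bool:
    """Return True if CPU or memory usage exceeds its threshold."""
    # Either rule firing is enough to trigger a scale-up.
    return cpu_pct > cpu_threshold or mem_pct > mem_threshold
```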