
cortex engines

This command allows you to manage various engines available within Cortex.

Usage:


cortex engines [options] [subcommand]

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| -h, --help | Display help information for the command. | No | - | -h |

Subcommands:

cortex engines list

info

This CLI command calls the following API endpoint:

This command lists all of Cortex's engines.

Usage:


cortex engines list

For example, it returns the following:


+---+--------------+-------------------+---------+----------------------------+---------------+
| # | Name         | Supported Formats | Version | Variant                    | Status        |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 1 | onnxruntime  | ONNX              |         |                            | Incompatible  |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 2 | llama-cpp    | GGUF              | 0.1.34  | linux-amd64-avx2-cuda-12-0 | Ready         |
+---+--------------+-------------------+---------+----------------------------+---------------+
| 3 | tensorrt-llm | TensorRT Engines  |         |                            | Not Installed |
+---+--------------+-------------------+---------+----------------------------+---------------+
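When scripting against the CLI, the table above can be parsed into records. A minimal Python sketch (the parse_engine_table helper and the hard-coded sample are illustrations, not part of Cortex):

```python
def parse_engine_table(text):
    """Parse the ASCII table printed by `cortex engines list` into dicts."""
    rows = []
    for line in text.splitlines():
        if not line.startswith("|"):
            continue  # skip the +---+ border lines
        cells = [c.strip() for c in line.strip("|").split("|")]
        if cells[0] == "#":
            continue  # skip the header row
        rows.append({
            "name": cells[1],
            "formats": cells[2],
            "version": cells[3],
            "variant": cells[4],
            "status": cells[5],
        })
    return rows

# Sample output in the format shown above.
SAMPLE = """\
+---+-----------+-------------------+---------+-----------+--------+
| # | Name      | Supported Formats | Version | Variant   | Status |
+---+-----------+-------------------+---------+-----------+--------+
| 1 | llama-cpp | GGUF              | 0.1.34  | mac-arm64 | Ready  |
+---+-----------+-------------------+---------+-----------+--------+
"""

engines = parse_engine_table(SAMPLE)
```

A script can then, for example, check `engines[0]["status"]` before attempting to load a model with that engine.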

cortex engines get

info

This CLI command calls the following API endpoint:

This command returns the details of the engine specified by engine_name.

Usage:


cortex engines get <engine_name>

For example, it returns the following:


+-----------+-------------------+---------+-----------+--------+
| Name      | Supported Formats | Version | Variant   | Status |
+-----------+-------------------+---------+-----------+--------+
| llama-cpp | GGUF              | 0.1.37  | mac-arm64 | Ready  |
+-----------+-------------------+---------+-----------+--------+

info

To get an engine name, run the cortex engines list command.

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| engine_name | The name of the engine that you want to retrieve. | Yes | - | llama-cpp |
| -h, --help | Display help information for the command. | No | - | -h |
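For automation, the lookup can be wrapped in a small helper. A sketch (the function names here are illustrative, not part of Cortex; it only shells out when the cortex binary is actually on PATH):

```python
import shutil
import subprocess

def engine_get_command(engine_name):
    """Build the argv for `cortex engines get <engine_name>`."""
    return ["cortex", "engines", "get", engine_name]

def get_engine_details(engine_name):
    """Run the command and return its stdout, or None when the
    cortex binary is not installed on this machine."""
    if shutil.which("cortex") is None:
        return None
    result = subprocess.run(
        engine_get_command(engine_name),
        capture_output=True, text=True, check=False,
    )
    return result.stdout
```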

cortex engines install

info

This CLI command calls the following API endpoint:

This command downloads the required dependencies and installs the engine within Cortex. Currently, Cortex supports three engines:

  • llama-cpp
  • onnxruntime
  • tensorrt-llm

Usage:


cortex engines install [options] <engine_name>

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| engine_name | The name of the engine you want to install. | Yes | - | llama-cpp, onnxruntime, tensorrt-llm |
| -h, --help | Display help information for the command. | No | - | -h |
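Since only three engine names are valid, a wrapper can validate the name before invoking the CLI. A minimal sketch (SUPPORTED_ENGINES and install_command are illustrative helpers, not Cortex APIs):

```python
import shutil
import subprocess

# The three engines this page lists as supported.
SUPPORTED_ENGINES = {"llama-cpp", "onnxruntime", "tensorrt-llm"}

def install_command(engine_name):
    """Build the argv for `cortex engines install <engine_name>`,
    rejecting names that are not supported."""
    if engine_name not in SUPPORTED_ENGINES:
        raise ValueError(f"unknown engine: {engine_name!r}")
    return ["cortex", "engines", "install", engine_name]

def install_engine(engine_name):
    """Invoke the CLI; requires the cortex binary on PATH."""
    if shutil.which("cortex") is None:
        raise RuntimeError("cortex CLI not found on PATH")
    subprocess.run(install_command(engine_name), check=True)
```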

cortex engines uninstall

This command uninstalls an engine from Cortex.

Usage:


cortex engines uninstall [options] <engine_name>

For example:


## Llama.cpp engine
cortex engines uninstall llama-cpp

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| engine_name | The name of the engine you want to uninstall. | Yes | - | - |
| -h, --help | Display help information for the command. | No | - | -h |