SambaNova PyTorch operator support

This page lists all supported PyTorch operators and indicates whether each operator has full or experimental support. Our tutorials, including Convert existing models to SambaFlow, use PyTorch and the SambaNova extensions.

We haven’t yet completed testing for operators that are new in 1.18, so information about supported datatypes is not always available for those operators. For details, see the API Reference. If you have specific questions about supported datatypes, contact SambaNova Support.

Experimental support means that SambaNova has not yet completed the full test suite for that operator. Each operator might have additional limitations; for example, some keywords might be supported on CPU but not on RDU. The table below includes a link to the PyTorch API Reference for each operator. The link opens in a new tab (or window).
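For example, several operators in the table list full support for BF16 only (such as gelu and cross_entropy), so inputs are typically cast to bfloat16 before those calls. The snippet below is a minimal, hypothetical sketch of that pattern in plain PyTorch; it does not use the SambaNova extensions.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: cast FP32 inputs to BF16 before calling an operator
# whose full support is listed only for BF16 (e.g., gelu).
x = torch.randn(4, 8)              # FP32 by default
x_bf16 = x.to(torch.bfloat16)      # cast to BF16

y = F.gelu(x_bf16)                 # gelu: full support listed for BF16
z = torch.softmax(y, dim=-1)       # softmax: full support for BF16 and FP32
print(z.dtype)                     # torch.bfloat16
```
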
Table 1. PyTorch operator support

| Operator | Full support | Experimental support | Documentation (new tab) |
| --- | --- | --- | --- |
| abs | BF16, FP32 | | abs |
| add | BF16, FP32 | INT16, INT32, INT64, BOOL, SCALAR INT, SCALAR FLOAT | add |
| addmm | BF16, FP32 | | addmm |
| bitwise_not | BOOL | | bitwise_not |
| bmm | BF16, FP32 | INT16, INT32, INT64, BOOL, Mixed-Precision | bmm |
| cat | BF16, FP32, INT16, INT32, INT64 | BOOL | cat |
| cross_entropy | BF16 | | cross_entropy |
| cumsum | BF16, FP32, INT64 | INT16, INT32 | cumsum |
| div | BF16, FP32, SCALAR FLOAT | | div |
| dropout | BF16, FP32 | | dropout |
| expand | BF16, FP32, BOOL | INT16, INT32, INT64 | expand |
| flatten | BF16, FP32 | INT16, INT32, INT64, BOOL | flatten |
| gelu | BF16 | FP32 (new in 1.17) | gelu |
| index_select | BF16, FP32, INT64 | INT16, INT32, BOOL | index_select |
| linear | BF16, FP32 | Mixed-Precision | linear |
| logical_or | BF16, FP32, INT16, INT32, INT64, BOOL | | logical_or |
| masked_fill | BF16 | INT16, INT32, INT64, BOOL | masked_fill |
| matmul | BF16, FP32 | INT16, INT32, INT64, BOOL | matmul |
| max | FP32 | INT16, INT32, INT64, BOOL | max |
| mean | BF16, FP32 | | mean |
| mul | BF16, FP32 | INT16 | mul |
| neg | BF16, FP32 | INT16, INT32, INT64, BOOL | neg |
| permute | BF16, FP32 | INT16, INT32, INT64, BOOL | permute |
| pow | BF16, FP32 | INT16, INT32, INT64, BOOL, SCALAR INT, SCALAR FLOAT | pow |
| relu | BF16, FP32 | INT16, INT32, INT64, BOOL | relu |
| reshape | BF16, FP32, INT16, INT32 | INT64, BOOL | reshape |
| rsqrt | BF16, FP32 | INT16, INT32, INT64, BOOL | rsqrt |
| rsub | BF16, FP32, INT32, INT64 | INT16, BOOL, Mixed-Precision | rsub |
| silu | BF16 | INT16, INT32, INT64, BOOL, Mixed-Precision, FP32 (new in 1.17) | silu |
| softmax | BF16, FP32 | INT16, INT32, INT64, BOOL, Mixed-Precision | softmax |
| split | BF16, FP32 | | split |
| squeeze | BF16, FP32 | INT16, INT32, INT64, BOOL | squeeze |
| stack | BF16 | | stack |
| sub | BF16, FP32, INT64 | INT16, INT32, BOOL | sub |
| tanh | BF16 | INT16, INT32, INT64, BOOL, FP32 (new in 1.17) | tanh |
| tobf16 | BF16, FP32, INT16, INT32, INT64, BOOL | | |
| tofp32 | BF16, FP32, INT32, INT64, BOOL | INT16 | |
| toint64 | INT16, INT32, INT64, BOOL | | |
| transpose | BF16, FP32 | INT16, INT32, INT64, BOOL | transpose |
| unsqueeze | BF16, FP32 | INT16, INT32, INT64, BOOL | unsqueeze |
| view | BF16, FP32, INT16, INT32, INT64 | BOOL, Mixed-Precision | view |
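
As a quick sanity check against the table, the hypothetical sketch below builds a small model only from operators listed above (linear, relu, view, softmax). It is plain PyTorch, not taken from the SambaNova docs, and the class and parameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Illustrative model composed only of operators that appear in the table."""

    def __init__(self, in_features: int = 16, num_classes: int = 4) -> None:
        super().__init__()
        self.fc1 = nn.Linear(in_features, 32)    # linear
        self.fc2 = nn.Linear(32, num_classes)    # linear

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.view(x.size(0), -1)                # view
        x = F.relu(self.fc1(x))                  # relu
        x = self.fc2(x)
        return torch.softmax(x, dim=-1)          # softmax

# The table lists full BF16 support for linear, relu, view, and softmax.
model = TinyClassifier().to(torch.bfloat16)
out = model(torch.randn(2, 16, dtype=torch.bfloat16))
print(out.shape)  # torch.Size([2, 4])
```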