SambaNova PyTorch operator support
This page lists all supported PyTorch operators, both those with full support and those with experimental support. Our tutorials, including Convert existing models to SambaFlow, use PyTorch and the SambaNova extensions.
We haven’t yet completed testing for operators that are new in 1.18, so information about supported datatypes is not always available for those operators. For details, see the API Reference. If you have specific questions about supported datatypes, contact SambaNova Support.
Experimental support means that SambaNova has not yet completed the full test suite with that operator.
Each operator might have additional limitations; for example, some keywords might be supported on CPU but not on RDU. See each operator's entry in the API Reference for details.
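Because several operators in the table below list BF16 as their only fully supported datatype (for example gelu and cross_entropy), model inputs created in FP32 typically need a cast first. A minimal sketch in plain PyTorch, with no SambaFlow-specific APIs, of preparing an input in a fully supported datatype:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 8)            # PyTorch tensors default to FP32
x_bf16 = x.to(torch.bfloat16)    # cast to BF16, the fully supported dtype for gelu
y = F.gelu(x_bf16)               # gelu: full support is BF16 per the table
assert y.dtype == torch.bfloat16
```

The cast itself is illustrative; how a given model should manage precision depends on the operators it uses and their entries in the table.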
| Operator | Full support | Experimental support |
| --- | --- | --- |
| abs | BF16, FP32 | |
| add | BF16, FP32 | INT16, INT32, INT64, BOOL, SCALAR INT, SCALAR FLOAT |
| addmm | BF16, FP32 | |
| bitwise_not | BOOL | |
| bmm | BF16, FP32 | INT16, INT32, INT64, BOOL, Mixed-Precision |
| cat | BF16, FP32, INT16, INT32, INT64 | BOOL |
| cross_entropy | BF16 | |
| cumsum | BF16, FP32, INT64 | INT16, INT32 |
| div | BF16, FP32, SCALAR FLOAT | |
| dropout | BF16, FP32 | |
| expand | BF16, FP32, BOOL | INT16, INT32, INT64 |
| flatten | BF16, FP32 | INT16, INT32, INT64, BOOL |
| gelu | BF16 | FP32 (new in 1.17) |
| index_select | BF16, FP32, INT64 | INT16, INT32, BOOL |
| linear | BF16, FP32 | Mixed-Precision |
| logical_or | BF16, FP32, INT16, INT32, INT64, BOOL | |
| masked_fill | BF16 | INT16, INT32, INT64, BOOL |
| matmul | BF16, FP32 | INT16, INT32, INT64, BOOL |
| max | FP32 | INT16, INT32, INT64, BOOL |
| mean | BF16, FP32 | |
| mul | BF16, FP32 | INT16 |
| neg | BF16, FP32 | INT16, INT32, INT64, BOOL |
| permute | BF16, FP32 | INT16, INT32, INT64, BOOL |
| pow | BF16, FP32 | INT16, INT32, INT64, BOOL, SCALAR INT, SCALAR FLOAT |
| relu | BF16, FP32 | INT16, INT32, INT64, BOOL |
| reshape | BF16, FP32, INT16, INT32 | INT64, BOOL |
| rsqrt | BF16, FP32 | INT16, INT32, INT64, BOOL |
| rsub | BF16, FP32, INT32, INT64 | INT16, BOOL, Mixed-Precision |
| silu | BF16 | INT16, INT32, INT64, BOOL, Mixed-Precision, FP32 (new in 1.17) |
| softmax | BF16, FP32 | INT16, INT32, INT64, BOOL, Mixed-Precision |
| split | BF16, FP32 | |
| squeeze | BF16, FP32 | INT16, INT32, INT64, BOOL |
| stack | BF16 | |
| sub | BF16, FP32, INT64 | INT16, INT32, BOOL |
| tanh | BF16 | INT16, INT32, INT64, BOOL, FP32 (new in 1.17) |
| tobf16 | BF16, FP32, INT16, INT32, INT64, BOOL | |
| tofp32 | BF16, FP32, INT32, INT64, BOOL | INT16 |
| toint64 | INT16, INT32, INT64, BOOL | |
| transpose | BF16, FP32 | INT16, INT32, INT64, BOOL |
| unsqueeze | BF16, FP32 | INT16, INT32, INT64, BOOL |
| view | BF16, FP32, INT16, INT32, INT64 | BOOL, Mixed-Precision |
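The tobf16, tofp32, and toint64 rows describe datatype conversions. In standard PyTorch source code such conversions are written with `Tensor.to(...)`; the assumption here, based only on the operator names, is that SambaFlow maps these casts to the corresponding operators. A minimal sketch in plain PyTorch:

```python
import torch

ints = torch.arange(6, dtype=torch.int32)
as_f32 = ints.to(torch.float32)       # tofp32: INT32 is listed as supported
as_bf16 = as_f32.to(torch.bfloat16)   # tobf16: FP32 is listed as supported
as_i64 = ints.to(torch.int64)         # toint64: INT32 is listed as supported
assert as_bf16.dtype == torch.bfloat16
assert as_i64.dtype == torch.int64
```

Note that tofp32 lists INT16 as experimental only, so an INT16-to-FP32 cast may not have completed the full test suite.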