SambaNova PyTorch operator support
Supported datatypes for each new operator are still being validated, and more information will be made available at a later date. If you have specific questions about supported datatypes, contact SambaNova Support. For details on all supported operators, see the API Reference.
SambaNova has expanded the PyTorch API to support running on RDU. Our tutorials, including Convert existing models to SambaFlow, use PyTorch and the SambaNova extensions.
This doc page lists all supported PyTorch operators, including those with full support and those with experimental support.
We haven’t yet completed testing for operators that are new in 1.18, so supported-datatype information is not yet available for those operators.
Experimental support means that SambaNova has not yet completed the full test suite for that operator.
Each operator might have additional limitations; for example, some keywords might be supported on CPU but not on RDU. The table below includes a link to the API Reference for each operator. The link opens in a new tab (or window).
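For example, operators listed with full BF16 and FP32 support, such as `add` and `matmul`, can be exercised with `bfloat16` tensors in plain PyTorch. This is only a sketch of the datatypes involved; compiling and running the model on RDU goes through the SambaFlow toolchain, which is not shown here.

```python
import torch

# add and matmul appear in the table with full BF16/FP32 support.
# This plain-PyTorch sketch demonstrates the datatypes only; running
# on RDU requires compiling the model with SambaFlow (not shown).
a = torch.randn(4, 8, dtype=torch.bfloat16)
b = torch.randn(4, 8, dtype=torch.bfloat16)

s = torch.add(a, b)       # elementwise add: BF16 in, BF16 out
m = torch.matmul(a, b.T)  # matmul: (4, 8) @ (8, 4) -> (4, 4)
```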
| Operator | Full support | Experimental support | Documentation (new tab) |
|---|---|---|---|
| add | BF16, FP32 | INT16, INT32, INT64, BOOL, SCALAR INT, SCALAR FLOAT | |
| addmm | BF16, FP32 | | |
| bitwise_not | BOOL | | |
| bmm | BF16, FP32 | INT16, INT32, INT64, BOOL, Mixed-Precision | |
| cat | BF16, FP32, INT16, INT32, INT64 | BOOL | |
| cross_entropy | BF16 | | |
| cumsum | BF16, FP32, INT64 | INT16, INT32 | |
| div | BF16, FP32, SCALAR FLOAT | | |
| dropout | BF16, FP32 | | |
| expand | BF16, FP32, BOOL | INT16, INT32, INT64 | |
| flatten | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| gelu | BF16 | FP32 (new in 1.17) | |
| index_select | BF16, FP32, INT64 | INT16, INT32, BOOL | |
| linear | BF16, FP32 | Mixed-Precision | |
| logical_or | BF16, FP32, INT16, INT32, INT64, BOOL | | |
| masked_fill | BF16 | INT16, INT32, INT64, BOOL | |
| matmul | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| max | FP32 | INT16, INT32, INT64, BOOL | |
| mean | BF16, FP32 | | |
| mul | BF16, FP32 | INT16 | |
| neg | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| permute | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| pow | BF16, FP32 | INT16, INT32, INT64, BOOL, SCALAR INT, SCALAR FLOAT | |
| relu | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| reshape | BF16, FP32, INT16, INT32 | INT64, BOOL | |
| rsqrt | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| rsub | BF16, FP32, INT32, INT64 | INT16, BOOL, Mixed-Precision | |
| silu | BF16 | INT16, INT32, INT64, BOOL, Mixed-Precision, FP32 (new in 1.17) | |
| softmax | BF16, FP32 | INT16, INT32, INT64, BOOL, Mixed-Precision | |
| split | BF16, FP32 | | |
| squeeze | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| stack | BF16 | | |
| sub | BF16, FP32, INT64 | INT16, INT32, BOOL | |
| tanh | BF16 | INT16, INT32, INT64, BOOL, FP32 (new in 1.17) | |
| tobf16 | BF16, FP32, INT16, INT32, INT64, BOOL | | |
| tofp32 | BF16, FP32, INT32, INT64, BOOL | INT16 | |
| toint64 | INT16, INT32, INT64, BOOL | | |
| transpose | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| unsqueeze | BF16, FP32 | INT16, INT32, INT64, BOOL | |
| view | BF16, FP32, INT16, INT32, INT64 | BOOL, Mixed-Precision | |
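The three cast operators near the bottom of the table (`tobf16`, `tofp32`, `toint64`) convert a tensor to the named datatype. As a rough sketch, and assuming they behave like standard PyTorch casts, the plain-PyTorch equivalents use `Tensor.to(dtype)`; how the SambaFlow compiler maps these casts to RDU operators is not shown here.

```python
import torch

# Sketch of the cast operators using plain-PyTorch dtype conversion.
# The source dtypes below are chosen from the "Full support" column.
x = torch.arange(6, dtype=torch.int32)

bf = x.to(torch.bfloat16)   # tobf16: INT32 is listed under full support
fp = bf.to(torch.float32)   # tofp32: BF16 is listed under full support
i6 = x.to(torch.int64)      # toint64: INT32 is listed under full support
```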