In the table below:
lhs: left-hand side, the left operand of an operation.
rhs: right-hand side, the right operand of an operation.

[…] direct support for torchvision::deformconv; if needed, please contact Horizon technical support for the relevant documentation.

| ONNX Operator Name | BPU Support Constraints |
|---|---|
| Abs | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Acos | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Acosh | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Add | lhs: Type: int8, int16, int32; if the type is int32, this hbir.add op must be fusible into a Conv op<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Same as lhs |
| And | lhs: Type: int8, int16, bool8<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: bool8 |
| ArgMax | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535] (special case: ReduceArgMax/ReduceArgMin's reduce axis dim size ∈ [1, 32767])<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input; ReduceArgMax/ReduceArgMin's output can be of type int32 or int64, as long as the size of the reduced axis can be represented by an int16 number |
| ArgMin | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535] (special case: ReduceArgMax/ReduceArgMin's reduce axis dim size ∈ [1, 32767])<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input; ReduceArgMax/ReduceArgMin's output can be of type int32 or int64, as long as the size of the reduced axis can be represented by an int16 number |
| Asin | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Asinh | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Atan | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Atanh | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| AveragePool | input: Type: int8, int16<br/>Shape: [*,H,W,C] or [*,L,C]<br/>output: Same as input<br/>kernel: Shape: [KL] or [KH,KW]; only 1d and 2d are currently supported<br/>Dim: 1d: KL ∈ [1, 256], KL × bitWidth/8 ≤ 24576; 2d: KH,KW ∈ [1, 256], KH × KW × bitWidth/8 ≤ 24576<br/>stride: Shape: [SL] or [SH,SW]<br/>Dim: SL, SH, SW ∈ [1, 256]<br/>pad: Shape: [PL_BEGIN,PL_END] or [PH_BEGIN,PW_BEGIN,PH_END,PW_END]<br/>Dim: all pad values ∈ [-255, 256] |
| BatchNormalization | N/A, collapsed in graph optimization phase |
| Cast | input: Type: int8, int16, bool8<br/>Shape: [*]<br/>output: Same as input |
| Ceil | input: Type: int8, int16<br/>Shape: [*]<br/>output: Same as input |
| Celu | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Clip | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Concat | input: Arg number: number of inputs ∈ [1, 1024]<br/>Dim: all dims < 131072<br/>Size: < 2G<br/>output: Same as input |
| Constant | N/A, collapsed in graph optimization phase |
| ConstantOfShape | N/A, collapsed in graph optimization phase |
| Conv | input:<br/>--conv 1d-- Type: int8, int16; Shape: [*,L,C]; Dim: * ∈ [1, 4096]; L,C ∈ [1, 65536]<br/>--conv 2d-- Type: int8, int16; Shape: [*,H,W,C]; Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536]<br/>weight:<br/>--conv 1d-- Type: int8, int16; Shape: [N,KL,C]; Dim: C ∈ [1, 8192]; KL ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv, else [1, 8192]; Size: KL × C ∈ [1, 65536]<br/>--conv 2d-- Type: int8, int16; Shape: [N,KH,KW,C]; Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv, else [1, 8192]; Size: KH × KW × C ∈ [1, 65536]<br/>bias: Type: f32<br/>output:<br/>--conv 1d-- Type: int8, int16, int32; Shape: [*,L,C]; Dim: * ∈ [1, 4096]; L,C ∈ [1, 65536]<br/>--conv 2d-- Type: int8, int16, int32; Shape: [*,H,W,C]; Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536]<br/>stride:<br/>--conv 1d-- Shape: [SL]; Dim: SL ∈ [1, 256], SL ∈ {1} if dilation > 1<br/>--conv 2d-- Shape: [SH,SW]; Dim: SH,SW ∈ [1, 256], SH,SW ∈ {1} if dilation > 1<br/>pad:<br/>--conv 1d-- Shape: [P_left,P_right]; Dim: P_left,P_right ∈ [-L/2, 256]<br/>--conv 2d-- Shape: [P_top,P_left,P_bottom,P_right]; Dim: P_top,P_bottom ∈ [-H/2, 256], P_left,P_right ∈ [-W/2, 256]<br/>groupNum: fin.c must be divisible by the group number<br/>dilation:<br/>--conv 1d-- Shape: [DL]; Dim: DL ∈ [1, 18]<br/>--conv 2d-- Shape: [DH,DW]; Dim: DH,DW ∈ [1, 18]<br/>others: Stride only supports odd values and 2 when the conv is an int16 depthwise conv. If groupNum > 1, then for each group fin.c' ∈ [1, 65535] and KL × fin.c' ∈ [1, 65535] (1d) or KH × KW × fin.c' ∈ [1, 65535] (2d), where fin.c' = fin.c × min(lcm(fout.c × (lcm(fin.c, 4) / fin.c), 8) / fout.c, groupNum); see the worked sketch after this table |
| ConvTranspose | input: Type: int8, int16 (input and weight cannot both be int16)<br/>1d Shape: [*,W,C]; Dim: * ∈ [1, 128]; W ∈ [1, 65536]; C ∈ [1, 2048]<br/>2d Shape: [*,H,W,C]; Dim: * ∈ [1, 128]; H,W ∈ [1, 65536]; C ∈ [1, 2048]<br/>weight: Type: int8, int16 (input and weight cannot both be int16)<br/>1d Shape: [N,KW,C]; Dim: N,C ∈ [1, 2048]; KW ∈ [1, 14]; Size: KW × C ∈ [1, 65536]<br/>2d Shape: [N,KH,KW,C]; Dim: N,C ∈ [1, 2048]; KH,KW ∈ [1, 14], KH and KW cannot both be 1; Size: KH × KW × C ∈ [1, 65536]<br/>bias: Type: f32<br/>output: Same as input; the type additionally supports int32<br/>stride: 1d Shape: [SW], SW ∈ [1, 14]; 2d Shape: [SH,SW], SH,SW ∈ [1, 14]<br/>pad: 1d Shape: [P_left,P_right], P_left,P_right ∈ [0, 256]; 2d Shape: [P_top,P_left,P_bottom,P_right], all ∈ [0, 256]<br/>dilation: 1d Shape: [DW], DW ∈ {1}; 2d Shape: [DH,DW], DH,DW ∈ {1} |
| Cos | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Cosh | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| CumSum | input: Type: int8, int16; the input must be fully quantized<br/>Shape: [*, dim[axis], *]<br/>Dim: * ∈ [1, 65536]; dim[axis] ∈ [1, 8192]<br/>output: Type: int8, int16, int32<br/>Shape/Dim: same as input |
| DeformConv | input: Type: int8<br/>Shape: [*,H,W,C]<br/>Dim: H,W ∈ [1, 1024]; H × W ≤ 720 × 1024; other dims ∈ [1, 65536]<br/>offset: Type: int16<br/>Shape: [*,OH,OW,2 × offsetGroupNum × KH × KW]<br/>Size: 2 × offsetGroupNum × KH × KW ∈ [2, 256]; OH × KH × OW × KW ≤ 720 × 1024<br/>mask: Type: int8<br/>Shape: [*,OH,OW,offsetGroupNum × KH × KW]<br/>Size: offsetGroupNum × KH × KW ∈ [1, 128]<br/>weight: Type: int8<br/>Shape: [N,KH,KW,C]<br/>Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 8]; N ∈ [1, 4096]<br/>Size: KH × KW × C ∈ [1, 65536]<br/>bias: Type: f32<br/>output: Type: int8, int16, int32; other constraints same as fin<br/>stride: Shape: [SH,SW]; Dim: SH,SW ∈ {1}<br/>pad: Shape: [P_top,P_left,P_bottom,P_right]; Dim: P_top,P_bottom ∈ [-H/2, 256], P_left,P_right ∈ [-W/2, 256]<br/>groupNum: fin.c must be divisible by the group number<br/>offsetGroupNum: fin.c must be divisible by the offset group number; offsetGroupNum ∈ [1, 2]<br/>dilation: Shape: [DH,DW]; Dim: DH,DW ∈ {1}<br/>others: for each group, fin.c ∈ [1, 8192] and KH × KW × fin.c ∈ [1, 65535]; fin.c = C when group = 1 |
| DepthToSpace | input: No limits<br/>output: Same as input |
| Div | input: Type: int8, int16<br/>output: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Dropout | N/A, collapsed in graph optimization phase |
| Elu | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Equal | lhs: Type: int8, int16, bool8<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: bool8 |
| Erf | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Exp | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Expand | input: No limits<br/>output: Same as input |
| EyeLike | input: Type: int8, int16, bool8<br/>Shape: [*]<br/>output: Same as input |
| Flatten | input: No limits<br/>output: Same as input |
| Floor | input: Type: int8, int16<br/>Shape: [*]<br/>output: Same as input |
| GRU | input: Type: int8, int16<br/>Dim: all dims < 2097152<br/>Size: < 2G<br/>output: Same as input |
| Gather | input: Type: int8, int16, int32, float16, float32<br/>Shape: [*]; the input is transposed to [N, W, C], where W is inputShape[dim], N is the product of inputShape[:dim], and C is the product of inputShape[dim+1:]; N, C ∈ [1, 1048576], W ∈ [1, 4096]; if the input type is int8 or int16, W ∈ [1, 32768]<br/>index: Type: int8, int16, int32, int64<br/>Shape: [*]; index values must not be larger than 32768, and the product of all index dims must be in the range [1, 737280 (720 × 1024)], because all dims are reduced into the W dim of the indices and the output; if the W of fout is larger than 737280, this op would be split into too many sub-ops<br/>output: Same as input |
| GatherElements | input: Type: int8, int16, int32, float16, float32<br/>Shape: [*]; the input is transposed to [N, W, C], where W is inputShape[dim], N is the product of inputShape[:dim], and C is the product of inputShape[dim+1:]; N, C ∈ [1, 1048576], and N × C must not be larger than 1048576; W ∈ [1, 4096]; if the input type is int8 or int16, W ∈ [1, 32768]<br/>indices: Type: int8, int16, int32, int64<br/>Shape: [*]; indices values must not be larger than 32768; the indices are transposed to [N, D, C], where D is indicesShape[dim], N is the product of indicesShape[:dim], and C is the product of indicesShape[dim+1:]; N, C ∈ [1, 1048576], D ∈ [1, 737280 (720 × 1024)]; indicesShape[i] ≤ inputShape[i] for all dimensions i ≠ dim<br/>output: Same as indices |
| GatherND | input: Type: int8, int16, int32, float16, float32<br/>Shape: [*]; for gather1d, W = inputShape[batchDim], W ∈ [1, 4096] (if the input type is int8 or int16, W ∈ [1, 32768]); for gather2d, H = inputShape[batchDim] and W = inputShape[batchDim+1], H,W ∈ [1, 4096] (if the input type is int8, H,W ∈ [1, 32768]); H and W cannot both be greater than 4096; B is the product of inputShape[0:batchDim], B ∈ [1, 1048576]; C is the product of inputShape[batchDim+D:], C ∈ [1, 1048576]<br/>indices: Type: int8, int16, int32, int64<br/>Shape: [*, D]; indices values must not be larger than 32768; D ∈ [1, 2]<br/>output: Shape: [*], same as input<br/>batchDim: the number of batch dimensions; indexing applies to the dimensions of input[batchDim:] |
| Gelu | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Gemm | input:<br/>--conv 1d-- Type: int8, int16; Shape: [*,L,C]; Dim: * ∈ [1, 4096]; L,C ∈ [1, 65536]<br/>--conv 2d-- Type: int8, int16; Shape: [*,H,W,C]; Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536]<br/>weight:<br/>--conv 1d-- Type: int8, int16; Shape: [N,KL,C]; Dim: C ∈ [1, 8192]; KL ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv, else [1, 8192]; Size: KL × C ∈ [1, 65536]<br/>--conv 2d-- Type: int8, int16; Shape: [N,KH,KW,C]; Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv, else [1, 8192]; Size: KH × KW × C ∈ [1, 65536]<br/>bias: Type: f32<br/>output:<br/>--conv 1d-- Type: int8, int16, int32; Shape: [*,L,C]; Dim: * ∈ [1, 4096]; L,C ∈ [1, 65536]<br/>--conv 2d-- Type: int8, int16, int32; Shape: [*,H,W,C]; Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536]<br/>stride:<br/>--conv 1d-- Shape: [SL]; Dim: SL ∈ [1, 256], SL ∈ {1} if dilation > 1<br/>--conv 2d-- Shape: [SH,SW]; Dim: SH,SW ∈ [1, 256], SH,SW ∈ {1} if dilation > 1<br/>pad:<br/>--conv 1d-- Shape: [P_left,P_right]; Dim: P_left,P_right ∈ [-L/2, 256]<br/>--conv 2d-- Shape: [P_top,P_left,P_bottom,P_right]; Dim: P_top,P_bottom ∈ [-H/2, 256], P_left,P_right ∈ [-W/2, 256]<br/>groupNum: fin.c must be divisible by the group number<br/>dilation:<br/>--conv 1d-- Shape: [DL]; Dim: DL ∈ [1, 18]<br/>--conv 2d-- Shape: [DH,DW]; Dim: DH,DW ∈ [1, 18]<br/>others: Stride only supports odd values and 2 when the conv is an int16 depthwise conv. If groupNum > 1, then for each group fin.c' ∈ [1, 65535] and KL × fin.c' ∈ [1, 65535] (1d) or KH × KW × fin.c' ∈ [1, 65535] (2d), where fin.c' = fin.c × min(lcm(fout.c × (lcm(fin.c, 4) / fin.c), 8) / fout.c, groupNum); see the worked sketch after this table |
| GlobalAveragePool | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535] (special case: ReduceArgMax/ReduceArgMin's reduce axis dim size ∈ [1, 32767])<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input; ReduceArgMax/ReduceArgMin's output can be of type int32 or int64, as long as the size of the reduced axis can be represented by an int16 number |
| GlobalMaxPool | input: Type: bool8, int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| Greater | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: bool8 |
| GreaterOrEqual | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: bool8 |
| GridSample | input: Type: int8<br/>Shape: [*,H,W,C]<br/>Dim: H ∈ [1, 32768], W ∈ [1, 32768], other dims ∈ [1, 65536]; note that H and W cannot both be greater than 4096<br/>grid: Type: int16<br/>Shape: [*,H,W,2]<br/>output: Same as input, except for the Dim constraints<br/>mode: only bilinear and nearest are supported<br/>padding_mode: only zeros and border are supported |
| GroupNormalization | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| HardSigmoid | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| HardSwish | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Identity | N/A, collapsed in graph optimization phase |
| If | N/A, collapsed in graph optimization phase |
| InstanceNormalization | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| LSTM | input: Type: int8, int16<br/>Dim: all dims < 2097152<br/>Size: < 2G<br/>output: Same as input |
| LayerNormalization | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| LeakyRelu | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Less | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: bool8 |
| LessOrEqual | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: bool8 |
| Log | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| LogSoftmax | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| MatMul | lhs: Type: int8, int16<br/>Shape: [*,M,C]<br/>Dim: * ∈ [1, 4096]; M,C ∈ [1, 8192]<br/>rhs: Type: int8, int16<br/>Shape: [*,C,N]<br/>Dim: * ∈ [1, 4096]; C ∈ [1, 8192], N ∈ [1, 1048576]<br/>output: Type: int8, int16, int32<br/>Shape: [*,M,N]; other constraints same as lhs and rhs |
| Max | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Same as lhs |
| MaxPool | input: Type: int8, int16<br/>Shape: [*,H,W,C] or [*,L,C]<br/>output: Same as input<br/>kernel: Shape: [KL] or [KH,KW]; only 1d and 2d are currently supported<br/>Dim: 1d: KL ∈ [1, 256], KL × bitWidth/8 ≤ 24576; 2d: KH,KW ∈ [1, 256], KH × KW × bitWidth/8 ≤ 24576<br/>stride: Shape: [SL] or [SH,SW]<br/>Dim: SL, SH, SW ∈ [1, 256]<br/>pad: Shape: [PL_BEGIN,PL_END] or [PH_BEGIN,PW_BEGIN,PH_END,PW_END]<br/>Dim: all pad values ∈ [-255, 256] |
| Min | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Same as lhs |
| Mish | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Mul | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: int8, int16, int32<br/>Shape: [*] |
| Neg | input: Type: int8, int16<br/>Shape: [*]<br/>output: Same as input |
| Not | input: Type: int8, int16, bool8<br/>Shape: [*]<br/>output: Type: bool8 |
| Or | lhs: Type: int8, int16, bool8<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: bool8 |
| PRelu | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Same as lhs |
| Pad | N/A, collapsed in graph optimization phase |
| Pow | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Reciprocal | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| ReduceL1 | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| ReduceL2 | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| ReduceMax | input: Type: bool8, int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| ReduceMean | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535] (special case: ReduceArgMax/ReduceArgMin's reduce axis dim size ∈ [1, 32767])<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input; ReduceArgMax/ReduceArgMin's output can be of type int32 or int64, as long as the size of the reduced axis can be represented by an int16 number |
| ReduceMin | input: Type: bool8, int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| ReduceSum | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535] (special case: ReduceArgMax/ReduceArgMin's reduce axis dim size ∈ [1, 32767])<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input; ReduceArgMax/ReduceArgMin's output can be of type int32 or int64, as long as the size of the reduced axis can be represented by an int16 number |
| Relu | N/A, collapsed in graph optimization phase |
| Reshape | input: No limits<br/>output: Same as input |
| Resize | input: Type: int8<br/>Shape: [*,H,W,C]<br/>the integer part of the step must be ∈ [-256, 255]; otherwise the op falls back to the CPU backend<br/>output: Same as input<br/>mode: nearest and bilinear are supported |
| Round | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Selu | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Shape | N/A, collapsed in graph optimization phase |
| Sigmoid | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Sign | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Sin | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Sinh | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Size | N/A, collapsed in graph optimization phase |
| Softmax | input: Type: int8, int16<br/>Shape: [*]<br/>Dim: reduce axis dim size ∈ [1, 65535]<br/>Elements: reduce elements size ∈ [1, 65535]<br/>output: Same as input |
| Softplus | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Softsign | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| SpaceToDepth | input: No limits<br/>output: Same as input |
| Split | input: Dim: all dims < 2097152<br/>output: Same as input |
| Sqrt | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Squeeze | input: No limits<br/>output: Same as input |
| Sub | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Same as lhs |
| Sum | lhs: Type: int8, int16, int32; if the type is int32, this hbir.add op must be fusible into a Conv op<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Same as lhs |
| Tan | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Tanh | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| ThresholdedRelu | inputs: Type: int8, int16<br/>outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Tile | input: No limits<br/>output: Same as input |
| Transpose | input: No limits<br/>output: Same as input |
| Trilu | lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: int8, int16, int32<br/>Shape: [*] |
| Unsqueeze | input: No limits<br/>output: Same as input |
| Where | condition: Type: bool8<br/>lhs: Type: int8, int16<br/>Shape: [*]<br/>rhs: Type: int8, int16<br/>output: Same as lhs |
| Xor | lhs: Type: int8, int16, bool8<br/>Shape: [*]<br/>rhs: Same as lhs<br/>output: Type: bool8 |
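
The grouped-convolution note shared by the Conv and Gemm rows is easiest to read in code. Below is a minimal Python sketch of the fin.c' formula and the accompanying [1, 65535] bounds; the helper names are hypothetical (no such function ships with the toolchain), and fin_c/fout_c are read here as the per-group input/output channel counts, an interpretation the table leaves implicit.

```python
# Sketch of the grouped Conv/Gemm "others" constraint from the table above.
# Only the fin.c' formula and the [1, 65535] bounds come from the table;
# everything else (names, per-group reading of fin.c/fout.c) is assumed.
from math import lcm


def effective_group_channels(fin_c: int, fout_c: int, group_num: int) -> int:
    """fin.c' = fin.c * min(lcm(fout.c * (lcm(fin.c, 4) / fin.c), 8) / fout.c, groupNum)

    Both divisions are exact: lcm(fin_c, 4) is a multiple of fin_c, and
    lcm(fout_c * k, 8) is a multiple of fout_c, so // loses nothing.
    """
    k = lcm(fin_c, 4) // fin_c
    return fin_c * min(lcm(fout_c * k, 8) // fout_c, group_num)


def conv2d_group_constraint_ok(fin_c: int, fout_c: int, group_num: int,
                               kh: int, kw: int) -> bool:
    """Check the conv-2d rule: for groupNum > 1, fin.c' ∈ [1, 65535]
    and KH × KW × fin.c' ∈ [1, 65535]."""
    if group_num <= 1:
        return True  # the note only constrains grouped convolutions
    c_eff = effective_group_channels(fin_c, fout_c, group_num)
    return 1 <= c_eff <= 65535 and 1 <= kh * kw * c_eff <= 65535


# Example: a 3x3 grouped conv with 64 input / 64 output channels per group
# and 4 groups gives fin.c' = 64, so the bounds hold.
assert conv2d_group_constraint_ok(64, 64, 4, kh=3, kw=3)
```

For the 1d case the same check applies with KL × fin.c' in place of KH × KW × fin.c'.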