DML_ELEMENT_WISE_BIT_OR_OPERATOR_DESC structure (directml.h)
Computes the bitwise OR between each corresponding element of the input tensors, and writes the result into the output tensor.
The bitwise operation is applied to tensor data in its native encoding. Therefore, the tensor data type is ignored except for determining the width of each element.
This operator supports in-place execution, meaning that the output tensor is permitted to alias one or more of the input tensors during binding.
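The following is a minimal CPU reference sketch of the element-wise semantics, not DirectML API code; the function name and the choice of uint32_t elements are illustrative assumptions.

#include <cstdint>
#include <vector>

std::vector<uint32_t> ElementWiseBitOr(const std::vector<uint32_t>& a,
                                       const std::vector<uint32_t>& b)
{
    // Assumes a and b hold the same number of elements, mirroring the
    // requirement that both input tensors have identical sizes.
    std::vector<uint32_t> out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
    {
        // The OR is applied to the raw bit pattern of each element,
        // e.g. 0x0Fu | 0xF0u == 0xFFu.
        out[i] = a[i] | b[i];
    }
    return out;
}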
Syntax
struct DML_ELEMENT_WISE_BIT_OR_OPERATOR_DESC {
  const DML_TENSOR_DESC *ATensor;
  const DML_TENSOR_DESC *BTensor;
  const DML_TENSOR_DESC *OutputTensor;
};
Members
ATensor
Type: const DML_TENSOR_DESC*
A tensor containing the left-hand side inputs.
BTensor
Type: const DML_TENSOR_DESC*
A tensor containing the right-hand side inputs.
OutputTensor
Type: const DML_TENSOR_DESC*
The output tensor to write the results to.
Availability
This operator was introduced in DML_FEATURE_LEVEL_3_0.
Tensor constraints
ATensor, BTensor, and OutputTensor must have the same DataType, DimensionCount, and Sizes.
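As an illustration of these constraints, the following sketch describes three identical UINT32 buffer tensors and uses them to create the operator. It assumes an existing IDMLDevice* (created elsewhere, for example with DMLCreateDevice), omits error handling, and uses placeholder function, variable, and size values.

#include <directml.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateBitOrOperator(IDMLDevice* dmlDevice, ComPtr<IDMLOperator>& bitOrOp)
{
    // Describe a 1 x 1 x 2 x 3 UINT32 buffer tensor. ATensor, BTensor, and
    // OutputTensor must share the same DataType, DimensionCount, and Sizes,
    // so the same description is reused for all three.
    const UINT sizes[4] = { 1, 1, 2, 3 };

    DML_BUFFER_TENSOR_DESC bufferDesc = {};
    bufferDesc.DataType = DML_TENSOR_DATA_TYPE_UINT32;
    bufferDesc.Flags = DML_TENSOR_FLAG_NONE;
    bufferDesc.DimensionCount = 4;
    bufferDesc.Sizes = sizes;
    bufferDesc.Strides = nullptr; // packed layout
    bufferDesc.TotalTensorSizeInBytes = 1 * 1 * 2 * 3 * sizeof(UINT32);

    DML_TENSOR_DESC tensorDesc = { DML_TENSOR_TYPE_BUFFER, &bufferDesc };

    DML_ELEMENT_WISE_BIT_OR_OPERATOR_DESC bitOrDesc = {};
    bitOrDesc.ATensor = &tensorDesc;
    bitOrDesc.BTensor = &tensorDesc;
    bitOrDesc.OutputTensor = &tensorDesc;

    DML_OPERATOR_DESC opDesc = { DML_OPERATOR_ELEMENT_WISE_BIT_OR, &bitOrDesc };
    dmlDevice->CreateOperator(&opDesc, IID_PPV_ARGS(&bitOrOp));
}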
Tensor support
DML_FEATURE_LEVEL_4_1 and above
| Tensor | Kind | Supported dimension counts | Supported data types |
|---|---|---|---|
| ATensor | Input | 1 to 8 | FLOAT64, FLOAT32, FLOAT16, INT64, INT32, INT16, INT8, UINT64, UINT32, UINT16, UINT8 |
| BTensor | Input | 1 to 8 | FLOAT64, FLOAT32, FLOAT16, INT64, INT32, INT16, INT8, UINT64, UINT32, UINT16, UINT8 |
| OutputTensor | Output | 1 to 8 | FLOAT64, FLOAT32, FLOAT16, INT64, INT32, INT16, INT8, UINT64, UINT32, UINT16, UINT8 |
DML_FEATURE_LEVEL_3_0 and above
| Tensor | Kind | Supported dimension counts | Supported data types |
|---|---|---|---|
| ATensor | Input | 1 to 8 | UINT32, UINT16, UINT8 |
| BTensor | Input | 1 to 8 | UINT32, UINT16, UINT8 |
| OutputTensor | Output | 1 to 8 | UINT32, UINT16, UINT8 |
Requirements
| Requirement | Value |
|---|---|
| Minimum supported client | Windows 10 Build 20348 |
| Minimum supported server | Windows 10 Build 20348 |
| Header | directml.h |