q8_copy_int8_weight_and_add_offset Function — PyTorch Architecture
Architecture documentation for the q8_copy_int8_weight_and_add_offset function template in XnnpackUtils.cpp from the PyTorch codebase. It copies signed int8 quantized weights into an output tensor, adding a +128 offset when the destination's underlying type is uint8 so the value range [-128, 127] maps onto [0, 255].
Entity Profile
Source Code
aten/src/ATen/native/quantized/cpu/XnnpackUtils.cpp lines 31–48
template <typename PT>
void q8_copy_int8_weight_and_add_offset(const at::Tensor& in, at::Tensor& out) {
  using T = typename PT::underlying;
  // Shift by +128 only when converting to an unsigned 8-bit representation.
  static constexpr auto offset = std::is_same_v<T, uint8_t> ? 128 : 0;
  TORCH_CHECK(
      in.scalar_type() == c10::kQInt8,
      "q8_copy_int8_weight_and_add_offset: Expected input weight data type ",
      toString(c10::kQInt8),
      " but got ",
      toString(in.scalar_type()));
  const int8_t* in_ptr =
      reinterpret_cast<const int8_t*>(in.data_ptr<c10::qint8>());
  T* out_ptr = reinterpret_cast<T*>(out.data_ptr<PT>());
  for (const auto i : c10::irange(in.numel())) {
    // Widen to int32 before adding the offset to avoid int8 overflow.
    out_ptr[i] = static_cast<T>(static_cast<int32_t>(in_ptr[i]) + offset);
  }
}