QConv1dPackWeightInt8Cudnn Class — PyTorch Architecture

Architecture documentation for the QConv1dPackWeightInt8Cudnn class in ConvPrepack.cpp from the PyTorch codebase. The class exposes a single static entry point, run_conv, which packs quantized conv1d weights for the cuDNN backend by lifting them into the conv2d representation and delegating to PackedConvWeightCudnn<2>::prepack.

Entity Profile

Source Code

aten/src/ATen/native/quantized/cudnn/ConvPrepack.cpp lines 159–198

class QConv1dPackWeightInt8Cudnn final {
 public:
  static c10::intrusive_ptr<ConvPackedParamsBase<2>> run_conv(
      Tensor weight,
      std::optional<Tensor> bias,
      torch::List<int64_t> stride,
      torch::List<int64_t> padding,
      torch::List<int64_t> dilation,
      int64_t groups) {
    const torch::List<int64_t> output_padding({0});
    return _run(std::move(weight), std::move(bias), stride, padding, output_padding, dilation, groups,
                /*transpose=*/false);
  }

 private:
  static c10::intrusive_ptr<ConvPackedParamsBase<2>> _run(
      Tensor weight,
      std::optional<Tensor> bias,
      torch::List<int64_t> stride,
      torch::List<int64_t> padding,
      torch::List<int64_t> output_padding,
      torch::List<int64_t> dilation,
      int64_t groups,
      bool transpose) {
    if (weight.dim() == 3) {
      // we currently use conv2d kernel for conv1d by making the input and weight tensors
      // 4D rather than 3D. we add a dummy width dimension of size 1
      // out channels, in channels / groups, L -> out channels, in channels / groups, 1, L
      weight = weight.unsqueeze(-2);
    }
    stride = quant_utils::MakeArgForConv1d(stride, 1);
    padding = quant_utils::MakeArgForConv1d(padding, 0);
    output_padding = quant_utils::MakeArgForConv1d(output_padding, 0);
    dilation = quant_utils::MakeArgForConv1d(dilation, 1);

    return PackedConvWeightCudnn<2>::prepack(
        weight, std::move(bias), stride, padding, output_padding, dilation, groups,
        transpose);
  }
};
