
apply_per_row_backward Function — pytorch Architecture

Architecture documentation for the apply_per_row_backward function template in WeightNormKernel.cpp from the pytorch codebase. It is the reduced-floating-point (BFloat16/Half) overload of the per-row backward step of weight normalization: for each element it computes grad_v = a * grad_w - b * v, carrying the arithmetic out in float for accuracy.

Entity Profile

Source Code

aten/src/ATen/native/cpu/WeightNormKernel.cpp lines 295–322

template <typename scalar_t>
inline std::enable_if_t<is_reduced_floating_point_v<scalar_t>, void>
apply_per_row_backward(
    scalar_t* grad_v_ptr,
    const scalar_t* grad_w_ptr,
    const scalar_t* v_ptr,
    const float* a_ptr,
    const float* b_ptr,
    int64_t size) {
  using bVec = vec::Vectorized<scalar_t>;
  using fVec = vec::Vectorized<float>;
  int64_t d = 0;
  // Vectorized main loop over the largest multiple of bVec::size().
  // Each reduced-precision vector widens into two float vectors, so
  // a and b are loaded as two fVec chunks per iteration.
  for (; d < size - (size % bVec::size()); d += bVec::size()) {
    bVec grad_w_bvec = bVec::loadu(grad_w_ptr + d);
    auto [grad_w_fvec0, grad_w_fvec1] = vec::convert_to_float<scalar_t>(grad_w_bvec);
    bVec v_bvec = bVec::loadu(v_ptr + d);
    auto [v_fvec0, v_fvec1] = vec::convert_to_float<scalar_t>(v_bvec);

    // grad_v = a * grad_w - b * v, computed in float and converted
    // back to the reduced-precision type before storing.
    fVec grad_v_fvec0 = fVec::loadu(a_ptr + d) * grad_w_fvec0 - fVec::loadu(b_ptr + d) * v_fvec0;
    fVec grad_v_fvec1 = fVec::loadu(a_ptr + d + fVec::size()) * grad_w_fvec1
        - fVec::loadu(b_ptr + d + fVec::size()) * v_fvec1;
    bVec grad_v_bvec = vec::convert_from_float<scalar_t>(grad_v_fvec0, grad_v_fvec1);
    grad_v_bvec.store(grad_v_ptr + d);
  }
  // Scalar tail loop for the elements left over after the vector loop.
  for (; d < size; ++d) {
    grad_v_ptr[d] = float(grad_w_ptr[d]) * a_ptr[d] - float(v_ptr[d]) * b_ptr[d];
  }
}
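
To make the element-wise arithmetic easy to check, here is a minimal scalar reference sketch. The function apply_per_row_backward_ref and the plain-float buffers are hypothetical illustrations, not part of the pytorch source: they mirror only the tail loop above, whereas the real kernel operates on at::BFloat16 or at::Half and uses vec::Vectorized for the main loop.

// Hypothetical scalar reference for the per-row update above.
// Plain float stands in for the reduced-precision scalar_t so the
// example stays self-contained and compiles without ATen.
#include <cstdint>
#include <cstdio>
#include <vector>

void apply_per_row_backward_ref(
    float* grad_v,
    const float* grad_w,
    const float* v,
    const float* a,
    const float* b,
    int64_t size) {
  // Same formula as the kernel's tail loop: grad_v = a * grad_w - b * v.
  for (int64_t d = 0; d < size; ++d) {
    grad_v[d] = grad_w[d] * a[d] - v[d] * b[d];
  }
}

int main() {
  std::vector<float> grad_w{1.0f, 2.0f, 3.0f};
  std::vector<float> v{0.5f, 0.5f, 0.5f};
  std::vector<float> a{2.0f, 2.0f, 2.0f};
  std::vector<float> b{1.0f, 1.0f, 1.0f};
  std::vector<float> grad_v(3);
  apply_per_row_backward_ref(grad_v.data(), grad_w.data(), v.data(),
                             a.data(), b.data(), 3);
  for (float g : grad_v) std::printf("%f\n", g);  // prints 1.5, 3.5, 5.5
}

A reference like this is useful as an oracle when validating the vectorized path: running both over the same random inputs and comparing outputs (within reduced-precision rounding tolerance) exercises the vector loop, the two-fVec widening, and the scalar tail together.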
