
adagrad_fused_step_impl Function — PyTorch Architecture

Architecture documentation for the adagrad_fused_step_impl function template in FusedAdagradKernel.cpp from the PyTorch codebase. Despite its name suggesting a class entry, this is a free function template: it performs one fused Adagrad optimizer step over a parameter tensor on CPU, splitting the work into cache-line-aligned chunks that are updated in parallel.

Entity Profile

Source Code

aten/src/ATen/native/cpu/FusedAdagradKernel.cpp lines 139–185

template <typename scalar_t>
void adagrad_fused_step_impl(
    const at::Tensor& param,
    const at::Tensor& grad,
    const at::Tensor& state_sum,
    const at::Tensor& state_step,
    const double lr,
    const double lr_decay,
    const double weight_decay,
    const double eps,
    const bool maximize,
    const float* grad_scale_ptr) {
  using opmath_t = at::opmath_type<scalar_t>;
  // Raw data pointers into the parameter, gradient, and state_sum tensors.
  scalar_t* param_data = param.data_ptr<scalar_t>();
  scalar_t* grad_data = grad.data_ptr<scalar_t>();
  scalar_t* state_sum_data = state_sum.data_ptr<scalar_t>();
  // Read the step count from its scalar tensor and apply Adagrad's
  // learning-rate decay: clr = lr / (1 + (step - 1) * lr_decay).
  double step = state_step.item<float>();
  double clr = lr / (1.0 + (step - 1.0) * lr_decay);

  // Split the tensor into cache-line-aligned task units so that concurrent
  // threads never write to the same cache line (avoiding false sharing).
  constexpr size_t cache_line_size = 64;
  constexpr int64_t cache_line_aligned_task_unit = cache_line_size / sizeof(scalar_t);
  size_t num_units = divup(param.numel(), cache_line_aligned_task_unit);

  auto adagrad_fn = [&](int64_t begin, int64_t end) {
        // Convert task-unit indices to element indices; the last unit is
        // clamped to the end of the tensor.
        begin *= cache_line_aligned_task_unit;
        end = std::min(end * cache_line_aligned_task_unit, param.numel());
        // Local pointers into this thread's chunk.
        scalar_t* param_ptr = param_data + begin;
        scalar_t* grad_ptr = grad_data + begin;
        scalar_t* state_sum_ptr = state_sum_data + begin;

        const int64_t size = end - begin;
        // Run the per-element Adagrad math for this chunk.
        adagrad_math<scalar_t, opmath_t>(
          param_ptr,
          grad_ptr,
          state_sum_ptr,
          clr,
          eps,
          weight_decay,
          maximize,
          grad_scale_ptr,
          size
        );
      };
  // Grain size 0 lets ATen decide how to split the task units across threads.
  at::parallel_for(
      0, num_units, 0, adagrad_fn);
}
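
The begin and end arguments that at::parallel_for passes to adagrad_fn count task units, not elements, which is why the lambda rescales and clamps them. Below is a minimal, self-contained sketch of that index mapping; the divup helper is a local stand-in for the one the kernel calls, and the 100-element tensor is an arbitrary example.

#include <algorithm>
#include <cstdint>
#include <cstdio>

// Local stand-in for the divup helper used by the kernel.
static int64_t divup(int64_t x, int64_t y) {
  return (x + y - 1) / y;
}

int main() {
  using scalar_t = float;
  constexpr int64_t cache_line_size = 64;
  constexpr int64_t unit = cache_line_size / sizeof(scalar_t); // 16 floats per cache line
  const int64_t numel = 100;                                   // example parameter size
  const int64_t num_units = divup(numel, unit);                // 7 task units

  for (int64_t u = 0; u < num_units; ++u) {
    // Same rescale-and-clamp logic as adagrad_fn above.
    const int64_t begin = u * unit;
    const int64_t end = std::min((u + 1) * unit, numel);
    std::printf("unit %lld -> elements [%lld, %lld)\n",
                (long long)u, (long long)begin, (long long)end);
  }
  return 0;
}

Every unit except possibly the last covers exactly one cache line of scalar_t values, so two threads handed different units can never contend on the same line.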
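
adagrad_math itself is defined elsewhere in FusedAdagradKernel.cpp and is not reproduced here. As an illustration only, assuming it applies the standard Adagrad update with the arguments shown above, a plain scalar version of the per-element work could look like the sketch below; the real kernel may be vectorized and may order these steps differently.

#include <cmath>
#include <cstdint>

// Illustrative scalar version of the per-element Adagrad update.
// This is an assumption about what adagrad_math computes, based on the
// standard Adagrad rule and the parameters visible in the caller above.
template <typename scalar_t, typename opmath_t>
void adagrad_math_sketch(
    scalar_t* param,
    scalar_t* grad,
    scalar_t* state_sum,
    double clr,
    double eps,
    double weight_decay,
    bool maximize,
    const float* grad_scale_ptr,
    int64_t size) {
  for (int64_t i = 0; i < size; ++i) {
    opmath_t g = static_cast<opmath_t>(grad[i]);
    if (grad_scale_ptr != nullptr) {
      g = g / static_cast<opmath_t>(*grad_scale_ptr); // undo loss scaling, if any
    }
    if (maximize) {
      g = -g; // gradient ascent instead of descent
    }
    if (weight_decay != 0.0) {
      g += static_cast<opmath_t>(param[i]) * weight_decay; // L2 penalty
    }
    // Accumulate the running sum of squared gradients.
    opmath_t sum = static_cast<opmath_t>(state_sum[i]) + g * g;
    state_sum[i] = static_cast<scalar_t>(sum);
    // param -= clr * g / (sqrt(state_sum) + eps)
    param[i] = static_cast<scalar_t>(
        static_cast<opmath_t>(param[i]) - clr * g / (std::sqrt(sum) + eps));
  }
}

Together with the decayed learning rate computed by the caller, this yields the usual Adagrad step: each coordinate's effective step size shrinks as its accumulated squared gradient grows.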
