gemm_batched_mkl_impl Function — pytorch Architecture
Architecture documentation for the gemm_batched_mkl_impl function template in CPUBlas.cpp from the pytorch codebase. It dispatches a batched GEMM to MKL, splitting an int64_t batch count into chunks that fit MKL's 32-bit integer interface.
Entity Profile
Source Code
aten/src/ATen/native/CPUBlas.cpp lines 580–595
template <typename scalar_t>
static void gemm_batched_mkl_impl(
      TransposeType transa, TransposeType transb,
      int64_t batch_size, int64_t m, int64_t n, int64_t k,
      scalar_t alpha,
      const scalar_t **a, int64_t lda,
      const scalar_t **b, int64_t ldb,
      scalar_t beta,
      scalar_t **c, int64_t ldc) {
  for (int64_t i = 0; i < batch_size;) {
    // MKL's batched interface takes an `int` batch count, so the int64_t
    // batch_size is processed in sub-batches of at most INT_MAX matrices.
    int sub_batch = std::min(batch_size - i, int64_t{INT_MAX});
    mkl_gemm_batched(transa, transb, sub_batch, m, n, k, alpha,
                     &a[i], lda, &b[i], ldb, beta, &c[i], ldc);
    i += sub_batch;
  }
}