gemm_batched Function — PyTorch Architecture
Architecture documentation for the gemm_batched function template in CPUBlas.cpp from the PyTorch codebase.
Entity Profile
Source Code
aten/src/ATen/native/CPUBlas.cpp lines 618–644
template <typename scalar_t>
static void gemm_batched(
    TransposeType transa, TransposeType transb,
    int64_t batch_size, int64_t m, int64_t n, int64_t k,
    scalar_t alpha,
    const scalar_t **a, int64_t lda,
    const scalar_t **b, int64_t ldb,
    scalar_t beta,
    scalar_t **c, int64_t ldc) {
  if (batch_size == 1) {
    return gemm(transa, transb, m, n, k, alpha, a[0], lda, b[0], ldb, beta, c[0], ldc);
  }
  if constexpr (AT_MKL_ENABLED() && is_blas_library_type<scalar_t>::value) {
    internal::normalize_last_dims(transa, transb, m, n, k, &lda, &ldb, &ldc);
    if (use_blas_gemm(transa, transb, m, n, k, lda, ldb, ldc)) {
      gemm_batched_mkl_impl(
          transa, transb, batch_size, m, n, k, alpha, a, lda, b, ldb, beta, c, ldc);
    } else {
      gemm_batched_generic(
          transa, transb, batch_size, m, n, k, alpha, a, lda, b, ldb, beta, c, ldc);
    }
  } else {
    gemm_batched_generic(
        transa, transb, batch_size, m, n, k, alpha, a, lda, b, ldb, beta, c, ldc);
  }
}