PySpect

Package search

flash-attn

  • Summary: Flash Attention: Fast and Memory-Efficient Exact Attention
  • Author: Tri Dao
  • Homepage: https://github.com/Dao-AILab/flash-attention
  • Source: https://github.com/Dao-AILab/flash-attention
  • Number of releases: 71
  • First release: 0.2.0 on 2022-11-15T22:16:17
  • Latest release: 2.7.4.post1 on 2025-01-30T06:39:51
Dependencies of flash-attn (2):

| Dependency | Optional |
|------------|----------|
| einops     | false    |
| torch      | false    |
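The "Optional" column above reflects how Python packaging distinguishes hard dependencies from extras: a requirement in a package's `requires_dist` metadata is optional when its PEP 508 environment marker gates it behind an extra (`; extra == "..."`). A minimal sketch of that classification — the sample requirement strings below are illustrative assumptions, not flash-attn's verbatim metadata:

```python
# Sketch: classify declared dependencies as required vs. optional
# based on PEP 508 requirement strings (the format used in a
# package's requires_dist metadata).

def split_requirements(requires_dist):
    """Map each dependency name to whether it is optional.

    A requirement is optional when its environment marker gates it
    behind an extra, i.e. it contains `; extra == "..."`.
    """
    deps = {}
    for req in requires_dist:
        # The name is everything before any extras bracket,
        # version specifier, or marker.
        name = req.split(";")[0].split("[")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<", "!=", " "):
            name = name.split(sep)[0]
        optional = ";" in req and "extra" in req.split(";", 1)[1]
        deps[name.strip()] = optional
    return deps

# Hypothetical sample entries for illustration:
sample = ["einops", "torch", 'pytest; extra == "dev"']
print(split_requirements(sample))
# → {'einops': False, 'torch': False, 'pytest': True}
```

Here `einops` and `torch` come out as unconditional (matching the table), while a dependency guarded by an extra would be flagged optional.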