PySpect

Package profile

vllm

  • Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
  • Author: vLLM Team
  • Homepage: https://github.com/vllm-project/vllm
  • Source: https://github.com/vllm-project/vllm (Repo profile)
  • Number of releases: 62
  • First release: 0.0.1 on 2023-06-19
  • Latest release: 0.9.2 on 2025-07-08
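The release stats above can be derived from PyPI's JSON metadata. As a minimal sketch, the snippet below assumes a payload shaped like the `releases` mapping returned by https://pypi.org/pypi/vllm/json (version → list of uploaded files); the two entries shown are an illustrative sample, not the full history.

```python
from datetime import datetime

# Illustrative two-entry sample of a PyPI-style "releases" mapping
# (version -> list of uploaded files). Not the real full dataset.
releases = {
    "0.9.2": [{"upload_time": "2025-07-08T00:00:00"}],
    "0.0.1": [{"upload_time": "2023-06-19T00:00:00"}],
}

def release_span(releases):
    """Return (release count, first version, latest version), ordered by
    each version's earliest file upload time."""
    def first_upload(version):
        return min(datetime.fromisoformat(f["upload_time"])
                   for f in releases[version])
    ordered = sorted(releases, key=first_upload)
    return len(ordered), ordered[0], ordered[-1]

count, first, latest = release_span(releases)
print(count, first, latest)  # 2 0.0.1 0.9.2
```

Sorting by upload time rather than by version string avoids misordering versions such as 0.10.0 vs. 0.9.2.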

Releases

[Chart: dates and sizes of releases — x-axis: release date (July 2023 – July 2025); y-axis: size in MB]

PyPI Downloads

[Chart: weekly downloads over the last 3 months — x-axis: date (February – June); y-axis: thousand downloads per week]

Dependencies

vllm has 62 dependencies, 9 of which are optional.
Dependencies of vllm (62):

  • Required (53): aiohttp, blake3, cachetools, cloudpickle, compressed-tensors, depyf, einops, fastapi, filelock, gguf, huggingface-hub, importlib_metadata, lark, llguidance, lm-format-enforcer, mistral_common, msgspec, ninja, numba, numpy, openai, opencv-python-headless, outlines, partial-json-parser, pillow, prometheus_client, prometheus-fastapi-instrumentator, protobuf, psutil, py-cpuinfo, pybase64, pydantic, python-json-logger, pyyaml, pyzmq, ray, regex, requests, scipy, sentencepiece, setuptools, six, tiktoken, tokenizers, torch, torchaudio, torchvision, tqdm, transformers, typing_extensions, watchfiles, xformers, xgrammar
  • Optional (9): boto3, datasets, fastsafetensors, librosa, pandas, runai-model-streamer, runai-model-streamer-s3, soundfile, tensorizer
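The required/optional split above reflects PEP 508 environment markers: a dependency is optional when its requirement string carries an `extra == "..."` marker. As a hedged sketch, the snippet below partitions a small illustrative list of requirement strings; the extra names ("s3", "audio") are made up, not vllm's actual extras, and for an installed package the real list would come from `importlib.metadata.requires("vllm")`.

```python
# Illustrative PEP 508 requirement strings; the extra names are invented.
sample_requires = [
    "aiohttp",
    "torch",
    "transformers",
    'boto3; extra == "s3"',
    'librosa; extra == "audio"',
]

def split_optional(requires):
    """Partition requirement strings by the presence of an 'extra' marker."""
    required = [r for r in requires if "extra ==" not in r]
    optional = [r for r in requires if "extra ==" in r]
    return required, optional

required, optional = split_optional(sample_requires)
print(len(required), len(optional))  # 3 2
```

A production tool would parse markers properly (e.g. with the `packaging` library) rather than substring-matching, but the string check is enough to show the idea.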

Details