
Pinned repositories

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 52.2k stars · 8.7k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1.6k stars · 174 forks

Repositories

Showing 10 of 18 repositories
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1,630 stars · Apache-2.0 · 174 forks · 27 issues (5 need help) · 31 pull requests · Updated Jul 14, 2025
  • vllm-ascend Public

    Community-maintained hardware plugin for vLLM on Ascend

    Python · 868 stars · Apache-2.0 · 251 forks · 201 issues (5 need help) · 112 pull requests · Updated Jul 14, 2025
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 52,154 stars · Apache-2.0 · 8,680 forks · 1,810 issues (10 need help) · 776 pull requests · Updated Jul 14, 2025
  • aibrix Public

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Go · 3,917 stars · Apache-2.0 · 394 forks · 194 issues (19 need help) · 15 pull requests · Updated Jul 14, 2025
  • vllm-gaudi Public
    Python · 4 stars · 1 fork · 0 issues · 0 pull requests · Updated Jul 12, 2025
  • ci-infra Public

    This repo hosts the code for vLLM's CI and performance benchmark infrastructure.

    HCL · 14 stars · 29 forks · 0 issues · 8 pull requests · Updated Jul 13, 2025
  • vllm-spyre Public

    Community-maintained hardware plugin for vLLM on Spyre

    Python · 30 stars · Apache-2.0 · 18 forks · 11 issues (1 needs help) · 17 pull requests · Updated Jul 12, 2025
  • guidellm Public

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 399 stars · Apache-2.0 · 53 forks · 46 issues (4 need help) · 10 pull requests · Updated Jul 10, 2025
  • production-stack Public

    vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization

    Python · 1,490 stars · Apache-2.0 · 225 forks · 54 issues (3 need help) · 41 pull requests · Updated Jul 10, 2025
  • flash-attention Public (forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 80 stars · BSD-3-Clause · 1,818 forks · 0 issues · 12 pull requests · Updated Jul 10, 2025