Tensors and Dynamic neural networks in Python with strong GPU acceleration
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Scalene: a high-performance, high-precision CPU, GPU, and memory profiler for Python with AI-powered optimization proposals
Open Machine Learning Compiler Framework
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, Slurm, 20+ clouds, on-prem).
A Python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System
An interactive NVIDIA GPU process viewer and a one-stop solution for GPU process management.
A Python framework for GPU-accelerated simulation, robotics, and machine learning.
A flexible framework of neural networks for deep learning
FlashInfer: Kernel Library for LLM Serving
High-performance TensorFlow library for quantitative finance.
Time series forecasting with PyTorch
On-device AI across mobile, embedded and edge for PyTorch
📊 A simple command-line utility for querying and monitoring GPU status
High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability.