Starred repositories
- The Doubleword Inference Stack is the easiest & most performant way to run genAI infrastructure in your private environment.
- Cross-agent skills that help coding agents use Entire context from Checkpoints, sessions, and git history to search past work, explain code, and hand off sessions.
- Headless CLI client for stateful Agent Client Protocol (ACP) sessions
- (Mirror) S3-compatible object store for small self-hosted geo-distributed deployments. Main repo: https://git.deuxfleurs.fr/Deuxfleurs/garage
- Open-source orchestration for zero-human companies
- On-device Speech AI for Apple Silicon
- A hybrid programming language combining Lean4's formal verification with blazing-fast compilation, actor-based agent orchestration, AI-driven optimization, and vector-backed agent memory.
- 🦄 ai that works - every tuesday 10 AM PST
- Machine Learning Engineering Open Book
- 🤗 smolagents: a barebones library for agents that think in code.
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel
- Verified tensor graph optimization in Lean 4: constructive soundness proofs + equality saturation + verified extraction via e-graph↔circuit bijection + multi-target code generation.
- Verified GPU programming framework for Lean 4. Write type-safe WebGPU shaders with formal verification, hardware-accelerated matrix ops, and cross-platform support (Metal/Vulkan/D3D12). Build prova…
- Row-wise block scaling for fp8 quantized matrix multiplication. Solution to the GPU MODE AMD challenge.
- A lightweight multi-GPU inference engine for LLMs on mid/low-end GPUs.
- A flexible framework for experimenting with heterogeneous LLM inference and fine-tuning optimizations
- Hardware accelerator for deep convolutional neural networks
- Open-source CUDA compiler targeting multiple GPU architectures. Compiles .cu files for AMD and Tenstorrent GPUs.
- Intel® Nervana™ reference deep learning framework committed to best performance on all hardware
- An ARC-AGI solution using Agentica from Symbolica