BLOOM Inference

A Rust router and Python gRPC server for BLOOM inference.

Install

cd server
pip install .

cd ../router
cargo build --release

Run

python server/bloom_inference/main.py bigscience/bloom --num-gpus 8 --shard-directory /dev/shm/models
./router/target/release/router
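
Once both processes are up, you can send generation requests to the router. The sketch below is a minimal example, assuming the router exposes an HTTP POST /generate endpoint on localhost:3000 that accepts a JSON payload; the route, port, and payload shape are assumptions, so check the router source for the actual API.

# Hypothetical client sketch. The route, port, and payload shape are
# assumptions, not the confirmed router API.
import requests

response = requests.post(
    "http://localhost:3000/generate",
    json={"inputs": "Hello, my name is", "parameters": {"max_new_tokens": 20}},
)
print(response.json())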

TODO:

  • Improve model download
    • Store "shardable" layers separately and layer by layer
  • Add batching args to router CLI
  • Add docstrings + comments everywhere as the codebase is fairly complicated
  • Add tests
  • Add shutdown logic in router and server
  • Improve multi-processing logic in server
  • Improve error handling everywhere
  • Improve past key layer indexing?
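
As a rough illustration of the layer-by-layer sharding mentioned above, the sketch below splits a weight tensor across GPUs and writes one file per rank into a shard directory, in the spirit of the --num-gpus and --shard-directory flags. The function name and file layout are hypothetical, not the repository's actual format.

# Illustrative sketch only: function name and file layout are hypothetical.
# Splits a "shardable" weight row-wise into one slice per GPU rank and
# saves each slice separately, so a rank can load only its own shard.
import torch

def save_weight_shards(weight: torch.Tensor, num_gpus: int, shard_directory: str, name: str):
    shards = torch.chunk(weight, num_gpus, dim=0)
    for rank, shard in enumerate(shards):
        # clone() so each file holds only the slice, not the full backing storage
        torch.save(shard.clone(), f"{shard_directory}/{name}.shard_{rank}.pt")

# Example: shard one projection weight across 8 GPUs.
weight = torch.randn(4096, 4096)
save_weight_shards(weight, num_gpus=8, shard_directory="/dev/shm/models", name="h.0.self_attention.query_key_value")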