Petals runs Llama 2 (70B) from Colab at 5 tokens/sec (github.com/bigscience-workshop)
5 points by borzunov on July 19, 2023 | 3 comments
|
Petals: Run 100B+ language models at home, BitTorrent-style (github.com/bigscience-workshop)
594 points by antman on Jan 2, 2023 | 155 comments
|
Show HN: Run and fine-tune 175B+ LMs in Colab using a P2P network of GPUs (github.com/bigscience-workshop)
3 points by borzunov on Dec 13, 2022
|
PromptSource: Toolkit for creating, sharing and using natural language prompts (github.com/bigscience-workshop)
1 point by mindcrime on Feb 7, 2022
|
Lessons learned from training a 104B model (github.com/bigscience-workshop)
2 points by EvgeniyZh on Jan 30, 2022
|
Research workshop on large language models (github.com/bigscience-workshop)
1 point by throwaway879080 on Oct 19, 2021
|