
Show HN: Local fine tuning for Mistral and SDXL, GPU mem/latency optimization https://ift.tt/84SDBAd

This is a 100% bootstrapped new startup. It lets you fine-tune Mistral-7B and SDXL locally.

For the LLM fine-tuning in particular, we implemented a dataprep pipeline that turns websites, PDFs, and doc files into question-answer pairs, using a big LLM to generate the training data for the small one. (A rough sketch of that step is included at the end of this post.)

It also includes a GPU scheduler that does fine-grained GPU memory scheduling: Kubernetes can only allocate whole GPUs, whereas we schedule per GB of GPU memory, so inference and fine-tuning jobs can be packed into the same fleet. That lets us fit model instances into GPU memory in a way that optimally trades off user-facing latency against GPU memory utilization. (A toy packing example follows the dataprep sketch below.)

It's a pretty simple stack: a control plane plus a fat container that runs anywhere you can get hold of a GPU (e.g. RunPod).

Architecture: https://ift.tt/zpbJGKn
Demo walkthrough showing the runner dashboard: https://ift.tt/jTVQiXA
Run it yourself: https://ift.tt/F91leah
Discord: https://ift.tt/8ySjiBb

Please roast me! https://ift.tt/jTVQiXA

December 22, 2023 at 01:43AM
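The post doesn't show the dataprep pipeline's code, so here is a minimal sketch of the general idea, assuming the documents have already been extracted to plain text. The prompt wording, the chunking, and the `ask_big_llm` callable are placeholders, not the project's actual implementation.

```python
# Sketch: turn raw document text into question-answer pairs for fine-tuning
# a small LLM, using a bigger LLM as the generator. `ask_big_llm` is a
# placeholder for whatever client (hosted API, local Mistral, etc.) actually
# produces the completion.
import json
from typing import Callable, Iterable

PROMPT_TEMPLATE = (
    "Read the following passage and write {n} question-answer pairs "
    "about it, one per line, in the form 'Q: ... A: ...'.\n\n{passage}"
)

def chunk_text(text: str, max_chars: int = 2000) -> Iterable[str]:
    """Naive fixed-size chunking; a real pipeline would split on structure."""
    for i in range(0, len(text), max_chars):
        yield text[i:i + max_chars]

def make_qa_pairs(
    documents: Iterable[str],
    ask_big_llm: Callable[[str], str],
    pairs_per_chunk: int = 3,
) -> list[dict]:
    """Prompt the big LLM per chunk and parse 'Q: ... A: ...' lines."""
    examples = []
    for doc in documents:
        for passage in chunk_text(doc):
            reply = ask_big_llm(
                PROMPT_TEMPLATE.format(n=pairs_per_chunk, passage=passage)
            )
            for line in reply.splitlines():
                if line.startswith("Q:") and " A: " in line:
                    q, a = line[2:].split(" A: ", 1)
                    examples.append({"question": q.strip(), "answer": a.strip()})
    return examples

def write_jsonl(examples: list[dict], path: str) -> None:
    """Write one training example per line, ready for a fine-tuning run."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
```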
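The scheduler itself isn't published in the post either, so this is only a toy illustration of the per-GB packing idea, using a best-fit heuristic; the class names, job names, and 80 GB figures are assumptions, not the real system.

```python
# Toy sketch of per-GB GPU memory scheduling: instead of allocating whole
# GPUs (as Kubernetes does), jobs request a number of gigabytes and are
# packed onto whichever GPU has enough free memory. Best-fit keeps large
# contiguous headroom available for latency-sensitive inference jobs.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    total_gb: int
    jobs: dict = field(default_factory=dict)  # job_id -> reserved GB

    @property
    def free_gb(self) -> int:
        return self.total_gb - sum(self.jobs.values())

def schedule(job_id: str, needed_gb: int, fleet: list[Gpu]) -> Gpu | None:
    """Best-fit: place the job on the GPU with the least free memory
    that can still hold it; return None if nothing fits."""
    candidates = [g for g in fleet if g.free_gb >= needed_gb]
    if not candidates:
        return None
    target = min(candidates, key=lambda g: g.free_gb)
    target.jobs[job_id] = needed_gb
    return target

fleet = [Gpu("a100-0", 80), Gpu("a100-1", 80)]
print(schedule("mistral-7b-inference", 24, fleet))   # packs onto a100-0
print(schedule("sdxl-finetune", 48, fleet))          # fits next to it on a100-0
print(schedule("mistral-7b-finetune", 40, fleet))    # goes to a100-1
```

Best-fit is just one reasonable choice here; the actual trade-off between user-facing latency and memory utilization described above would need the scheduler to also account for how much headroom each resident model instance needs at inference time.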


