Mistral 7B runs on Apple Vision Pro


Joseph Semrai demonstrates on X how the compact Mistral 7B large language model runs on an Apple Vision Pro. The demo uses a variant of the model with 4-bit quantization, which shrinks the model's memory footprint at some cost to accuracy. The reduced requirements are small enough for the Vision Pro's M2 chip with a total of 16 GB of unified memory. A 4-bit quantized version of Mistral 7B Instruct is publicly available.
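A rough back-of-the-envelope calculation shows why 4-bit quantization matters here. Storing 7B-class weights at 16 bits each already exceeds what comfortably fits alongside visionOS on a 16 GB device, while 4 bits per weight cuts that roughly fourfold (the parameter count and the omission of activation/KV-cache overhead are simplifying assumptions, not figures from the source):

```python
# Back-of-the-envelope estimate of weight storage for a 7B-parameter
# model at different quantization levels. Weights only -- ignores
# activations, KV cache, and runtime overhead.

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (decimal GB)."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 7.3e9  # Mistral 7B has roughly 7.3 billion parameters

fp16_gb = weight_memory_gb(N_PARAMS, 16)  # ~14.6 GB: barely fits in 16 GB, leaving no headroom
q4_gb = weight_memory_gb(N_PARAMS, 4)     # ~3.7 GB: fits with room for the OS and apps

print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {q4_gb:.1f} GB")
```

The estimate ignores runtime overhead, so real memory use is somewhat higher, but it illustrates why the full-precision model would be impractical on this hardware while the 4-bit variant is not.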

Video: Joseph Semrai


Join our community

Join the DECODER community on Discord, Reddit or Twitter – we can’t wait to meet you.



Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.



