The CausalLM/14B model with AWQ quantization applied. Perhaps better than all existing models under 70B parameters in most quantitative evaluations...
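
For reference, a minimal loading sketch with 🤗 Transformers. This assumes the `autoawq` package is installed alongside `transformers`, and it uses `CausalLM/14B-AWQ` as a placeholder repo id — substitute the actual repository under which the AWQ weights are published.

```python
# Minimal sketch: load and query an AWQ-quantized causal LM via transformers.
# Requires: pip install transformers autoawq
# "CausalLM/14B-AWQ" below is a placeholder repo id, not a confirmed path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/14B-AWQ"  # placeholder -- point this at the published AWQ weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # spread layers across available GPUs
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
)

prompt = "Explain AWQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```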