Member of Technical Staff, Model Efficiency

Who are we?

Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers. Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products. Join us on our mission and shape the future!

Why this role?

Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of tasks. However, the substantial computational and memory requirements of LLM inference pose challenges for deployment. The Model Efficiency team is responsible for increasing the inference efficiency of our foundation models by improving model architecture and optimizing ML frameworks. As an engineer on this team, you’ll improve key model-serving metrics, including latency and throughput, by profiling the system, identifying bottlenecks, and solving problems with innovative solutions.

Please note: We have offices in Toronto, San Francisco, New York, and London. We embrace a remote-friendly environment, and as part of this approach, we strategically distribute teams based on interests, expertise, and time zones to promote collaboration and flexibility. You’ll find the Model Efficiency team concentrated in the EST and PST time zones.

You may be a good fit for the Model Efficiency team if you have:

* Significant experience developing high-performance machine learning algorithms or machine learning infrastructure
* Hands-on experience with large language models
* A bias for action and results
* An appetite for solving challenging machine learning research problems

It is a big plus if you also have considerable experience in one of these areas:

* Model compression techniques: quantization, pruning, sparsity, low-rank compression, knowledge distillation, etc.
* GPU/accelerator programming or high-performance computing
* LLM inference performance modeling
* Machine learning framework internals

If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply! If you consider yourself a thoughtful worker, a lifelong learner, and a kind and playful team member, Cohere is the place for you.

We value and celebrate diversity and strive to create an inclusive work environment for all.
We welcome applicants of all kinds and are committed to providing an equal opportunity process. Cohere provides accessibility accommodations during the recruitment process. Should you require any accommodation, please let us know and we will work with you to meet your needs.


Original URL: https://www.linkedin.com/jobs/view/member-of-technical-staff-model-efficiency-at-cohere-3954882270