Join the mighty team at Supernormal bringing the future of work closer to the present. We're a rapidly growing platform solving a real need for thousands of people every day - automating meeting notes and tasks. As an engineer at Supernormal, you'll play a major role in shaping our product experiences and building a workplace that people love. Together we'll help people save time and meet more efficiently so our customers can focus on what matters - like sending satellites to space or running their local government.

Machine learning engineers at Supernormal build the AI that superpowers the core product experience for people's meetings, including transcription, note generation, and task automation. The AI team builds reliable and secure services that use the most advanced AI models on the market to generate millions of high-quality meeting notes for a rapidly growing customer base. Our work is also heavily about software engineering - we're looking for people with a drive to roll up their sleeves and get new models and features out to users as quickly as possible.

Supernormal is a well-funded growth-stage startup backed by EQT Ventures, Balderton Capital, and byFounders VC. We're growing rapidly in a competitive market with an AI-powered product used by thousands every day. If you want to operate with high autonomy in an environment where you'll get to flex your skills to build great products with great people, Supernormal is your place.
The AI team at Supernormal owns everything about how meeting notes, question answering, and task completion are generated. This includes LLM API calls, custom model training and deployment, speech recognition, quality evaluation and fixes, retrieval-augmented generation, and much more. We optimize for cost, latency, and quality. Some of the projects include:

* Prompt engineering using state-of-the-art techniques to improve the core meeting assistant scenarios
* Building and shipping custom machine learning models that augment the AI stack, including models that improve transcript quality, reduce tokens sent to APIs, remove defects in LLM output, and extract semi-structured data
* Training and deploying custom large language models from open source using state-of-the-art techniques (LoRA, RLHF, instruction-tuning, etc.)
* Developing new product experiences using NLP & LLMs that get better with user feedback & iteration, in collaboration with product engineers & the design team
* Defining and improving business & product metrics to optimize the quality and cost of AI usage
* Improving how we use LLM-powered search and question answering (using RAG) over sets of meetings
* Advocating for, and building, new and better ways of doing things. You'll leave everything you touch just a bit better than you found it
What you will bring:

We are a fast-moving startup building zero-to-one products on top of large language models. The ideal candidate is passionate about machine learning modeling, with a foundational understanding of algorithms and hands-on experience developing and optimizing models.

* Proficiency in basic data analysis, feature engineering, and model evaluation, with a willingness to learn and grow in the field
* AI/ML Experience: demonstrated experience working on real-world machine learning projects, whether through internships, academic projects, or professional work
* Experience in model evaluation, validation, and tuning, as well as familiarity with handling large datasets and deploying models into production environments, is highly desirable
* A Solid Educational Foundation: Bachelor's degree in Computer Science, Engineering, AI, Mathematics, or a related field; a Master's degree or PhD in a related discipline is a plus
* Software Engineering Skills: knowledge of software development principles and best practices, such as version control, code review, and testing, plus experience with collaborative development tools and platforms such as GitHub
* Proficient in Python and SQL: our AI stack uses Python & PyTorch and interfaces with Ruby on Rails (bonus if you know it, but not required), and we write a lot of SQL on top of Snowflake to pull data