Location: Remote   |   Full-Time
Keywords: AI/ML Lead Researcher, LLM (Large Language Models), Distributed Training, Fine-tuning, Inference, Research, PyTorch, JAX, DeepSpeed, Blockchain, Proof of Work, AI Engineer, Data Science
Ambient is seeking a Lead AI Researcher to spearhead research and development for our core Large Language Model (LLM) and the AI components of our unique Proof of Work system. You will lead initiatives in model architecture, distributed training, fine-tuning methodologies, and inference optimization within a novel blockchain context.

Company Overview: Ambient is creating a cryptocurrency network where AI computation (verified inference, fine-tuning, pre-training on a >600B parameter LLM) constitutes the Proof of Work. We're building an SVM-compatible L1 (a Solana fork) aiming for Bitcoin-like economics coupled with AI productivity. Our goal is a single, open-source, ever-evolving LLM trained and utilized cooperatively across the network using our Proof of Logits (PoL) consensus.
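To give candidates a flavor of the problem space: verified inference in a Proof of Logits-style scheme hinges on the idea that a validator can recompute (or spot-check) a miner's claimed logits and check agreement within a numerical tolerance. The sketch below is purely illustrative, with a toy stand-in for the model; it is not Ambient's actual protocol, and `model_logits`/`verify_logits` are hypothetical names.

```python
import numpy as np

def model_logits(prompt_ids):
    """Hypothetical stand-in for a deterministic LLM forward pass.

    A real network would run the shared LLM; here we derive a
    deterministic pseudo-logit vector from the prompt so the
    verification flow can be demonstrated end to end.
    """
    rng = np.random.default_rng(hash(tuple(prompt_ids)) % (2**32))
    return rng.standard_normal(16)

def verify_logits(claimed, prompt_ids, atol=1e-5):
    """Validator recomputes the logits and checks agreement."""
    expected = model_logits(prompt_ids)
    return bool(np.allclose(claimed, expected, atol=atol))

# An honest miner's logits verify; tampered logits do not.
honest = model_logits([1, 2, 3])
print(verify_logits(honest, [1, 2, 3]))          # True
print(verify_logits(honest + 0.01, [1, 2, 3]))   # False
```

The real research questions the role tackles start where this toy ends: tolerating floating-point nondeterminism across heterogeneous hardware, sampling which tokens to verify, and making verification far cheaper than the original inference.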

Role & Responsibilities:
- Lead research efforts into LLM architecture, scaling laws, and performance for our specific network requirements.
- Design and experiment with novel distributed training and fine-tuning techniques suitable for a decentralized network of potentially heterogeneous hardware.
- Develop and refine the mechanisms for verified inference (Proof of Logits), ensuring security and efficiency.
- Stay abreast of the latest advancements in AI/ML, particularly in LLMs, distributed computing, and efficient inference.
- Collaborate closely with engineering teams (Rust, Python) to translate research into practical implementations on the Ambient network.
- Define the research roadmap for the network's core AI capabilities.
- Publish research findings and represent Ambient at conferences and in the community.
- Mentor and guide other AI/ML engineers and researchers on the team.

Technical Skills Required:
- PhD or equivalent research experience in Machine Learning, AI, Computer Science, or a related field.
- Deep expertise in Large Language Models (architecture, training, fine-tuning, inference).
- Proven experience with distributed training frameworks (e.g., PyTorch FSDP, DeepSpeed, JAX pmap).
- Strong programming skills, particularly in Python and relevant ML libraries.
- Solid theoretical understanding of machine learning algorithms and optimization techniques.
- Experience in leading research projects and teams.

Ideal Candidate:
- Published research in top-tier AI/ML conferences/journals.
- Experience working with truly massive models (>100B parameters).
- Interest in the intersection of AI, cryptography, and blockchain technology.
- Ability to tackle ambiguous, complex research problems and drive innovation.
- Excited by the challenge of building and improving a foundational AI model within a decentralized, open-source ecosystem.
Post Date: April 17, 2025