WHAT YOU DO AT AMD CHANGES EVERYTHING
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.
AMD together we advance_
THE ROLE:
We are looking for an Applied Research Scientist experienced in training large language models, large multimodal models, and image/video generation models. In this role, you will explore novel LLM/LMM and image/video generation architectures and large-scale training techniques to advance the state of the art. You will be part of a world-class research team working on pre-training, fine-tuning, and aligning large language models, large multimodal models, and image/video generation models, while keeping up to date with the latest progress and trends in LLMs/LMMs, image/video generation, and other foundation models.
THE PERSON:
Do you like to design and implement novel research ideas, improve the quality of large language/multimodal models, accelerate the training and inference speed of LLMs, LMMs, and image/video generation models, and influence future hardware and software direction? If so, this role is for you. The ideal candidate will have expertise and hands-on experience in training LLMs, LMMs, and/or diffusion models, along with familiarity with hyper-parameter tuning techniques, data preprocessing, tokenization methods, and the latest training approaches for LLMs, LMMs, and diffusion models. A successful candidate will also be knowledgeable about the latest transformer architectures.
KEY RESPONSIBILITIES:
- Train and finetune LLMs, LMMs, and image/video generation models.
- Improve on the state-of-the-art LLMs, LMMs, and image/video generation models.
- Accelerate the training and inference speed of LLMs, LMMs, and image/video generation models.
- Research novel ML techniques and model architectures.
- Influence the direction of the AMD AI platform.
- Publish your work at top-tier venues.
- Engage with academia and open-source ML communities.
PREFERRED EXPERIENCE:
- Experience in developing and debugging in Python.
- Experience in ML frameworks such as PyTorch, JAX or TensorFlow.
- Experience with distributed training.
- Expertise in LLM/LMM/diffusion pretraining, finetuning, and/or RLHF.
- Familiarity with transformer architectures.
- Strong communication and problem-solving skills.
- Publication at top-tier venues is a huge plus.
ACADEMIC CREDENTIALS:
- A PhD or master’s degree or equivalent in machine learning, computer science, artificial intelligence, or a related field.
LOCATION:
- San Jose or Seattle; other US locations may be considered.
#LI-MV1
#HYBRID
#REMOTE
At AMD, your base pay is one part of your total rewards package. Your base pay will depend on where your skills, qualifications, experience, and location fit into the hiring range for the position. You may be eligible for incentives based upon your role such as either an annual bonus or sales incentive. Many AMD employees have the opportunity to own shares of AMD stock, as well as a discount when purchasing AMD stock if voluntarily participating in AMD’s Employee Stock Purchase Plan. You’ll also be eligible for competitive benefits described in more detail here.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.