Hi there! Name's Machel

I’m a research scientist at Google DeepMind working on NLP research, with a focus on multilingual NLP. Recently, I’ve been working a lot with LLMs and instruction tuning, developing recipes to boost multilingual capabilities in LLMs.

I was previously a visiting student at Carnegie Mellon University, advised by Graham Neubig, and a researcher at Matsuo Lab at the University of Tokyo with Yutaka Matsuo (with whom I still collaborate).

I am currently working on multilingual NLP (Reid and Artetxe, 2022), low-resource NLP (Reid et al., 2021), and edit models (Marrese-Taylor et al., 2021; Reid and Zhong, 2021; Reid and Neubig, 2022), as well as other topics! Feel free to reach out via email or Twitter!


Recent Posts