112 - Alignment of Multilingual Contextual Representations, with Steven Cao

NLP Highlights

We invited Steven Cao to talk about his paper on multilingual alignment of contextual word embeddings. We started by discussing how multilingual transformers work in general, and then focused on Steven's work on aligning word representations. The core idea is to start from a list of word pairs automatically aligned from parallel corpora and to fine-tune the model so that the representations of aligned words become similar to each other while not moving too far from their original representations. We also discussed the paper's experiments on the XNLI dataset, the accompanying analysis, and the decision to align at the word level rather than at other granularities, such as word pieces or higher-level encoded representations in transformers.

Paper: https://openreview.net/forum?id=r1xCMyBtPS
Steven Cao's webpage: https://stevenxcao.github.io/
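The core idea above — pull aligned word pairs together while anchoring representations to their pretrained values — can be sketched as a simple two-term objective. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the squared-L2 form of both terms, and the `reg_weight` parameter are assumptions for clarity.

```python
import numpy as np

def alignment_loss(src_emb, tgt_emb, src_orig, reg_weight=1.0):
    """Sketch of a two-term alignment objective (illustrative, not the
    paper's exact loss).

    src_emb, tgt_emb: (n, d) arrays of contextual embeddings for n word
    pairs automatically aligned from a parallel corpus.
    src_orig: (n, d) embeddings of the same source words under the
    original (frozen) pretrained model.
    """
    # Alignment term: pull each aligned pair together (squared L2 distance).
    align = np.sum((src_emb - tgt_emb) ** 2)
    # Anchor term: penalize drift away from the pretrained representations.
    anchor = np.sum((src_emb - src_orig) ** 2)
    return align + reg_weight * anchor
```

With `reg_weight` controlling the trade-off: a larger value keeps the fine-tuned model closer to its pretrained state, at the cost of weaker cross-lingual alignment.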


More episodes

120 - Evaluation of Text Generation, with Asli Celikyilmaz

Oct 2, 2020
119 - Social NLP, with Diyi Yang

Sep 3, 2020
118 - Coreference Resolution, with Marta Recasens

Aug 26, 2020
117 - Interpreting NLP Model Predictions, with Sameer Singh

Aug 12, 2020
116 - Grounded Language Understanding, with Yonatan Bisk

Jul 2, 2020