More powerful deep learning with transformers (Ep. 84) (Rebroadcast)
Nov 28th, 2019 by frag
Some of the most powerful NLP models, such as BERT and GPT-2, have one thing in common: they are all built on the transformer architecture.
That architecture, in turn, rests on another concept already well known to the community: self-attention.
In this episode I explain what these mechanisms are, how they work, and why they are so powerful.
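To make the idea concrete, here is a minimal NumPy sketch of the scaled dot-product self-attention described in "Attention Is All You Need". The function name, the toy dimensions, and the random weight matrices (stand-ins for learned parameters) are illustrative assumptions, not code from the episode.

```python
# A minimal sketch of single-head scaled dot-product self-attention.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """X: (seq_len, d_model) token embeddings.
    W_q, W_k, W_v: (d_model, d_k) projection matrices (learned in practice)."""
    Q = X @ W_q  # queries
    K = X @ W_k  # keys
    V = X @ W_v  # values
    d_k = Q.shape[-1]
    # Every position attends to every position; scaling by sqrt(d_k)
    # keeps the dot products in a range where softmax stays well-behaved.
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)   # rows sum to 1
    return weights @ V                   # (seq_len, d_k)

# Toy usage: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q = rng.normal(size=(8, 8))
W_k = rng.normal(size=(8, 8))
W_v = rng.normal(size=(8, 8))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 8)
```

Each output row is a weighted mix of the value vectors of all tokens, with the weights computed from query-key similarity; stacking several such heads and layers is what gives the transformer its power.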
- Attention Is All You Need
- The Illustrated Transformer
- Self-Attention for Generative Models