More powerful deep learning with transformers (Ep. 84)

Some of the most powerful NLP models, like BERT and GPT-2, have one thing in common: they all use the transformer architecture.
This architecture is built on top of another important concept already known to the community: self-attention.
In this episode I explain what these mechanisms are, how they work, and why they are so powerful.
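For listeners who want to see the idea in code, here is a minimal sketch of scaled dot-product self-attention, the core operation inside the transformer. All variable names and dimensions are illustrative, not taken from any particular model:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q = X @ Wq  # queries: what each position is looking for
    K = X @ Wk  # keys: what each position offers
    V = X @ Wv  # values: the content to be mixed
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity, scaled for stability
    # softmax over the key dimension: each row becomes a weighting over positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mix of all positions' values

# Illustrative toy dimensions
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per input position
```

The key point is that every output position attends to every input position at once, which is what lets transformers model long-range dependencies without recurrence.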

Don't forget to subscribe to our newsletter or join the discussion on our Discord server.
