Dzmitry Bahdanau
E899028
Dzmitry Bahdanau is a computer scientist best known for pioneering the neural attention mechanism in sequence-to-sequence models, which transformed neural machine translation and modern deep learning.
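In the formulation of the notable work listed below (Bahdanau et al., 2014), additive attention scores each encoder annotation $h_j$ against the previous decoder state $s_{i-1}$ with a small feed-forward network, normalizes the scores into alignment weights, and mixes the annotations into a context vector:

$$
e_{ij} = v_a^\top \tanh(W_a s_{i-1} + U_a h_j), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}, \qquad
c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j,
$$

where $T_x$ is the source-sentence length and $W_a$, $U_a$, $v_a$ are learned parameters.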
Observed surface forms (1)
| Surface form | Occurrences |
|---|---|
| Neural Machine Translation by Jointly Learning to Align and Translate | 0 |
Statements (42)
| Predicate | Object |
|---|---|
| instanceOf | computer scientist; researcher |
| advisor | Yoshua Bengio |
| alsoKnownAs | Dmitry Bahdanau |
| associatedWith | Université de Montréal |
| author | Dzmitry Bahdanau; Kyunghyun Cho; Yoshua Bengio |
| citationsCountRange | >10,000 citations for the attention NMT paper |
| coAuthor | Kyunghyun Cho; Yoshua Bengio |
| countryOfCitizenship | Belarus |
| fieldOfWork | deep learning; machine learning; natural language processing; neural machine translation |
| gender | male |
| hasAcademicAdvisor | Yoshua Bengio |
| hasContribution | demonstrated improvements over encoder-decoder models without attention; enabled alignment learning jointly with translation in NMT; introduced additive attention mechanism for sequence-to-sequence models; popularized attention mechanisms in NLP research |
| influenced | development of attention mechanisms in deep learning; modern neural machine translation systems; transformer-based neural network architectures |
| influencedBy | Kyunghyun Cho; Yoshua Bengio |
| knownFor | neural attention mechanism in sequence-to-sequence models; pioneering attention-based neural machine translation |
| languageOfWorkOrName | Belarusian; English; Russian |
| mainSubject | attention mechanism |
| nativeLanguage | Belarusian |
| notableIdea | additive attention; soft alignment in neural translation (see the sketch after this table) |
| notableWork | Neural Machine Translation by Jointly Learning to Align and Translate |
| publicationYear | 2014 |
| researchInterest | neural network architectures; representation learning; sequence-to-sequence learning |
| workLocation | Montreal |
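As a concrete illustration of the additive attention idea referenced in the statements above, here is a minimal NumPy sketch of one attention step. This is not the authors' code: the function name, array shapes, and toy parameters are chosen here for illustration, with `W_a`, `U_a`, `v_a` following the paper's notation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def additive_attention(s_prev, H, W_a, U_a, v_a):
    """One step of Bahdanau-style additive attention (illustrative sketch).

    s_prev : (n,)    previous decoder hidden state s_{i-1}
    H      : (T, m)  encoder annotations h_1 .. h_T, one per row
    W_a    : (d, n), U_a : (d, m), v_a : (d,)  learned parameters

    Returns the context vector c_i (m,) and alignment weights alpha_i (T,).
    """
    # e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j), computed for all j at once
    scores = np.tanh(s_prev @ W_a.T + H @ U_a.T) @ v_a  # (T,)
    alpha = softmax(scores)                             # (T,)
    context = alpha @ H                                 # (m,)
    return context, alpha

# Toy usage with random parameters; the shapes are the only point here.
rng = np.random.default_rng(0)
T, n, m, d = 5, 4, 8, 6
c, a = additive_attention(
    rng.standard_normal(n),        # s_{i-1}
    rng.standard_normal((T, m)),   # encoder annotations
    rng.standard_normal((d, n)),
    rng.standard_normal((d, m)),
    rng.standard_normal(d),
)
assert np.isclose(a.sum(), 1.0)    # weights form a distribution over source positions
```

Unlike the dot-product attention later used in Transformer architectures, the additive form passes the decoder state and each annotation through a tanh layer before scoring, which is why the record's notableIdea statement labels it additive attention.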
Referenced by (1)
Full triples — surface form annotated when it differs from this entity's canonical label.