Update README.md

Phil Wang
2020-12-24 10:58:41 -08:00
committed by GitHub
parent aa9ed249a3
commit 34e6284f95

@@ -205,6 +205,17 @@ Other sparse attention frameworks I would highly recommend is <a href="https://g
}
```
```bibtex
@misc{touvron2020training,
    title   = {Training data-efficient image transformers \& distillation through attention},
    author  = {Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou},
    year    = {2020},
    eprint  = {2012.12877},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CV}
}
```
```bibtex
@misc{vaswani2017attention,
    title   = {Attention Is All You Need},