From 4fc7365356d7d1136a40b202373862d2459dcaa3 Mon Sep 17 00:00:00 2001
From: Phil Wang
Date: Wed, 17 Feb 2021 15:30:45 -0800
Subject: [PATCH] incept idea for using nystromformer

---
 README.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index a1ac542..0a5a227 100644
--- a/README.md
+++ b/README.md
@@ -173,23 +173,22 @@ A pytorch-lightning script is ready for you to use at the repository link above.
 
 There may be some coming from computer vision who think attention still suffers from quadratic costs. Fortunately, we have a lot of new techniques that may help. This repository offers a way for you to plugin your own sparse attention transformer.
 
-An example with Linformer
+An example with Nystromformer
 
 ```bash
-$ pip install linformer
+$ pip install nystrom-attention
 ```
 
 ```python
 import torch
 from vit_pytorch.efficient import ViT
-from linformer import Linformer
+from nystrom_attention import Nystromformer
 
-efficient_transformer = Linformer(
+efficient_transformer = Nystromformer(
     dim = 512,
-    seq_len = 4096 + 1, # 64 x 64 patches + 1 cls token
     depth = 12,
     heads = 8,
-    k = 256
+    num_landmarks = 256
 )
 
 v = ViT(
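
For context, a minimal end-to-end sketch of the example this patch points toward. The hunk is truncated at `v = ViT(`, so the ViT constructor arguments below (`image_size`, `patch_size`, `num_classes`, `transformer`) and the final forward pass are assumptions inferred from the removed `seq_len = 4096 + 1` comment (a 2048-pixel image with 32-pixel patches gives 64 x 64 patches), not part of the diff itself.

```python
# Sketch only: assumes image_size = 2048 and patch_size = 32, matching the
# 64 x 64 patch count implied by the seq_len comment removed in this patch.
import torch
from vit_pytorch.efficient import ViT
from nystrom_attention import Nystromformer

# Nystromformer approximates full self-attention with a set of landmark tokens,
# avoiding the quadratic cost in sequence length
efficient_transformer = Nystromformer(
    dim = 512,
    depth = 12,
    heads = 8,
    num_landmarks = 256
)

v = ViT(
    dim = 512,
    image_size = 2048,          # assumed value
    patch_size = 32,            # assumed value
    num_classes = 1000,         # assumed value
    transformer = efficient_transformer
)

img = torch.randn(1, 3, 2048, 2048)
preds = v(img)                  # (1, 1000)
```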