alih
2ece3333da
Minor changes
2021-07-01 17:51:35 -07:00
Ali Hassani
a73030c9aa
Update README.md
2021-07-01 16:41:27 -07:00
Steven Walton
780f91a220
Tested and changed README format
2021-07-01 16:26:41 -07:00
Steven Walton
88451068e8
Adding CCT
...
Adding Compact Convolutional Transformers (CCT) from "Escaping the Big Data
Paradigm with Compact Transformers" by Hassani et al.
https://arxiv.org/abs/2104.05704
2021-07-01 16:22:33 -07:00
Phil Wang
64a2ef6462
fix mpp
2021-06-16 16:46:32 -07:00
Phil Wang
a254a0258a
fix typo
2021-06-01 07:33:00 -07:00
Phil Wang
daf3abbeb5
add NesT
2021-05-27 22:02:17 -07:00
Phil Wang
7333979e6b
add link to official repo for levit
2021-05-06 13:12:30 -07:00
Phil Wang
74b402377b
add image
2021-05-02 15:40:53 -07:00
Phil Wang
41d2d460d0
link to yannic
2021-05-02 14:51:55 -07:00
Phil Wang
04f86dee3c
implement SOTA new self-supervised learning technique from facebook for vision transformers, Dino
2021-05-02 14:00:36 -07:00
Phil Wang
6549522629
be able to accept non-square patches, thanks to @FilipAndersson245
2021-05-01 20:04:41 -07:00
Phil Wang
6a80a4ef89
update readme
2021-05-01 11:51:35 -07:00
Phil Wang
7807f24509
fix small bug
2021-04-29 15:39:41 -07:00
Phil Wang
a612327126
readme
2021-04-29 15:22:12 -07:00
Phil Wang
30a1335d31
release twins svt
2021-04-29 14:55:25 -07:00
Phil Wang
ab781f7ddb
add Twins SVT (small)
2021-04-29 14:54:06 -07:00
Phil Wang
fbced01fe7
cite
2021-04-20 18:36:54 -07:00
Phil Wang
30b37c4028
add LocalViT
2021-04-12 19:17:32 -07:00
Phil Wang
b50d3e1334
cleanup levit
2021-04-06 13:46:19 -07:00
Phil Wang
2cb6b35030
complete levit
2021-04-06 13:36:11 -07:00
Phil Wang
3a3038c702
add layer dropout for CaiT
2021-04-01 20:30:37 -07:00
Phil Wang
b1f1044c8e
offer hard distillation as well
2021-04-01 16:56:14 -07:00
Phil Wang
deb96201d5
readme
2021-03-31 23:02:47 -07:00
Phil Wang
9ef8da4759
add CaiT, new vision transformer out of facebook AI, complete with layerscale, talking heads, and cls -> patch cross attention
2021-03-31 22:42:16 -07:00
Phil Wang
506fcf83a6
add documentation for three recent vision transformer follow-up papers
2021-03-31 09:22:15 -07:00
Phil Wang
9332b9e8c9
cite
2021-03-30 22:16:14 -07:00
Phil Wang
518924eac5
add CvT
2021-03-30 14:42:39 -07:00
Phil Wang
e712003dfb
add CrossViT
2021-03-30 00:53:27 -07:00
Phil Wang
8135d70e4e
use hooks to retrieve attention maps for user without modifying ViT
2021-03-29 15:10:12 -07:00
Phil Wang
3067155cea
add recorder class, for recording attention across layers, for researchers
2021-03-29 11:08:19 -07:00
Phil Wang
ab7315cca1
cleanup
2021-03-27 22:14:16 -07:00
Phil Wang
15294c304e
remove masking, as it complicates with little benefit
2021-03-23 12:18:47 -07:00
Phil Wang
b900850144
add deep vit
2021-03-23 11:57:13 -07:00
Phil Wang
78489045cd
readme
2021-03-09 19:23:09 -08:00
Phil Wang
173e07e02e
cleanup and release 0.8.0
2021-03-08 07:28:31 -08:00
Phil Wang
0e63766e54
Merge pull request #66 from zankner/masked_patch_pred
...
Masked Patch Prediction (suggested in #63), work in progress
2021-03-08 07:21:52 -08:00
Zack Ankner
a6cbda37b9
added to readme
2021-03-08 09:34:55 -05:00
Phil Wang
3744ac691a
remove patch size from T2TViT
2021-02-21 19:15:19 -08:00
Phil Wang
e3205c0a4f
add token to token ViT
2021-02-19 22:28:53 -08:00
Phil Wang
4fc7365356
incept idea for using nystromformer
2021-02-17 15:30:45 -08:00
Phil Wang
5db8d9deed
update readme about non-square images
2021-01-12 06:55:45 -08:00
Phil Wang
e8ca6038c9
allow for DistillableViT to still run predictions
2021-01-11 10:49:14 -08:00
Phil Wang
1106a2ba88
link to official repo
2021-01-08 08:23:50 -08:00
Phil Wang
f95fa59422
link to resources for vision people
2021-01-04 10:10:54 -08:00
Phil Wang
be1712ebe2
add quote
2020-12-28 10:22:59 -08:00
Phil Wang
1a76944124
update readme
2020-12-27 19:10:38 -08:00
Phil Wang
74074e2b6c
offer easy way to turn DistillableViT to ViT at the end of training
2020-12-25 11:16:52 -08:00
Phil Wang
e0007bd801
add distill diagram
2020-12-24 11:34:15 -08:00
Phil Wang
dc4b3327ce
no grad for teacher in distillation
2020-12-24 11:11:58 -08:00