From 3f50dd72cf40a943f9140ab2f22d9d9d94e64d6d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Long=20M=2E=20L=C6=B0u?= <55311435+minhlong94@users.noreply.github.com>
Date: Sat, 21 Nov 2020 18:37:03 +0700
Subject: [PATCH] Update README.md

---
 README.md | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/README.md b/README.md
index 6a2c7fc..fffe1f1 100644
--- a/README.md
+++ b/README.md
@@ -36,6 +36,28 @@ mask = torch.ones(1, 8, 8).bool() # optional mask, designating which patch to at
 preds = v(img, mask = mask) # (1, 1000)
 ```
 
+## Parameters
+- `image_size`: int.
+Image size.
+- `patch_size`: int.
+Size of patches. `image_size` must be divisible by `patch_size`.
+The number of patches is: `n = (image_size // patch_size) ** 2` and `n` **must be greater than 16**.
+- `num_classes`: int.
+Number of classes to classify.
+- `dim`: int.
+Last dimension of output tensor after linear transformation `nn.Linear(..., dim)`.
+- `depth`: int.
+Number of Transformer blocks.
+- `heads`: int.
+Number of heads in the Multi-head Attention layer.
+- `mlp_dim`: int.
+Dimension of the MLP (FeedForward) layer.
+- `channels`: int, default `3`.
+Number of image channels.
+- `dropout`: float between `[0, 1]`, default `0.`.
+Dropout rate.
+- `emb_dropout`: float between `[0, 1]`, default `0.`.
+Embedding dropout rate.
 ## Research Ideas

 ### Self Supervised Training
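
The `patch_size` entry in the patch above states two constraints: `image_size` must be divisible by `patch_size`, and the resulting patch count `n = (image_size // patch_size) ** 2` must be greater than 16. A minimal stdlib-only sketch of those rules (`check_vit_dims` is a hypothetical helper for illustration, not part of vit-pytorch):

```python
def check_vit_dims(image_size: int, patch_size: int) -> int:
    """Return the number of patches n, enforcing the documented constraints.

    Hypothetical helper mirroring the README's rules:
    - image_size must be divisible by patch_size
    - n = (image_size // patch_size) ** 2 must be greater than 16
    """
    if image_size % patch_size != 0:
        raise ValueError("image_size must be divisible by patch_size")
    n = (image_size // patch_size) ** 2
    if n <= 16:
        raise ValueError(f"number of patches n = {n} must be greater than 16")
    return n

# 256x256 image with 32x32 patches -> (256 // 32) ** 2 = 64 patches
print(check_vit_dims(256, 32))  # 64
```

For example, `check_vit_dims(64, 16)` raises, since `(64 // 16) ** 2 = 16` is not greater than 16.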