This project focuses on video action recognition using deep learning techniques, leveraging transfer learning from language models and attention mechanisms.
1.1. Download the Kinetics dataset: Run `download_k400.ipynb` to download the dataset.

1.2. Save the dataset: Run `save_kinetics_dataset.ipynb` to save the downloaded dataset.
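The notebooks above encapsulate the download and save logic. Purely as an illustration of the data being fetched, here is one way to obtain Kinetics-400 with torchvision's built-in dataset (the root path, clip length, and the choice of torchvision itself are assumptions, not necessarily what the notebooks do):

```python
# Hypothetical sketch: fetching Kinetics-400 via torchvision.
# download_k400.ipynb / save_kinetics_dataset.ipynb may use a different source.
from torchvision.datasets import Kinetics

train_set = Kinetics(
    root="data/k400",       # assumed local path
    frames_per_clip=16,     # assumed clip length in frames
    num_classes="400",
    split="train",
    download=True,          # fetches the official archives (hundreds of GB)
)

video, audio, label = train_set[0]   # video: (T, H, W, C) uint8 tensor
print(video.shape, label)
```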
2.1. Update Kinetics labels: Run `preprocess_kinetics_labels.ipynb` to preprocess the Kinetics labels.
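This step amounts to projecting the Kinetics class names into CLIP's text-embedding space so that videos and labels can later be compared directly. A minimal sketch of the idea, assuming OpenAI's `clip` package, a `ViT-B/32` model, and a generic prompt template (all assumptions; the notebook's exact prompts may differ):

```python
# Sketch: embed Kinetics class names with CLIP's text encoder.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

labels = ["abseiling", "air drumming", "answering questions"]  # first few of 400
prompts = [f"a video of a person {name}" for name in labels]   # assumed template

with torch.no_grad():
    tokens = clip.tokenize(prompts).to(device)
    text_emb = model.encode_text(tokens).float()                # (num_labels, 512)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)   # unit-normalize
```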
3.1. Post-pretraining of VideoMAE: Run `postpretrain_VideoMAE_to_CLIP_Space.ipynb` to align VideoMAE's video representations with the CLIP embedding space.
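Conceptually, post-pretraining nudges VideoMAE's pooled video features toward the CLIP embeddings of the matching labels. The sketch below illustrates one such alignment objective, a learned linear projection trained with a cosine loss; the checkpoint name, pooling, and loss are assumptions rather than the notebook's exact recipe:

```python
# Sketch: align pooled VideoMAE features with CLIP label embeddings.
import torch
import torch.nn as nn
from transformers import VideoMAEModel

backbone = VideoMAEModel.from_pretrained("MCG-NJU/videomae-base")  # assumed checkpoint
project = nn.Linear(backbone.config.hidden_size, 512)  # 512 = CLIP ViT-B/32 embed dim

def alignment_loss(pixel_values, clip_label_emb):
    """pixel_values: (B, T, 3, 224, 224); clip_label_emb: (B, 512), unit-norm."""
    feats = backbone(pixel_values=pixel_values).last_hidden_state  # (B, tokens, hidden)
    video_emb = project(feats.mean(dim=1))                         # mean-pool tokens
    video_emb = video_emb / video_emb.norm(dim=-1, keepdim=True)
    return (1.0 - (video_emb * clip_label_emb).sum(dim=-1)).mean() # 1 - cosine sim
```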
4.1. Prepare the test dataset.

4.2. Run the test: Run `test.ipynb` to evaluate the model's performance.
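With video and label embeddings in a shared space, evaluation can be zero-shot: embed a test clip, compare it against the precomputed label embeddings, and predict the nearest class. A sketch reusing the assumed `backbone`, `project`, and `text_emb` from above:

```python
# Sketch: zero-shot action recognition by nearest CLIP label embedding.
import torch

@torch.no_grad()
def predict(pixel_values, text_emb):
    feats = backbone(pixel_values=pixel_values).last_hidden_state
    video_emb = project(feats.mean(dim=1))
    video_emb = video_emb / video_emb.norm(dim=-1, keepdim=True)
    sims = video_emb @ text_emb.T        # cosine similarity to every label
    return sims.argmax(dim=-1)           # predicted class index per clip
```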
The model processes multiple frames from a video scene and creates rich representations in the CLIP space.