This project is based on Python 3.10. To get started, you can create an environment using conda with the following command:

```bash
conda create -n superpos python=3.10
```
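Then activate the new environment so the packages below are installed into it:

```bash
conda activate superpos
```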
After setting up the environment, install all the required packages with:

```bash
pip install -r requirements.txt
```
The entry point of this project is located in the `./09_Cluster` directory. The most important files in this directory are the `config.yaml` files. Below is an example of a configuration file:
```yaml
default: &default
  use_tqdm: true
  random_seed: 42
  base_save_path: /home/msadraei/trained_final
  model_name: google/t5-base-lm-adapt
  project_name_prefix: iclr_attempt_lmt5
  experiment_name_suffix: null
  train_batch_size: 32
  valid_batch_size: 32
  remove_dropout: true
  learning_rate: 0.01
  weight_decay: 0.01
  num_epochs: 40
  peft_params: null  # no mutation
  hot_modules:
    - sadcl
  best_finder:
    save: true
    metric: valid_mean
    higher_better: true
  tasks:
    - glue:cola
    - glue:mrpc
    - glue:stsb
    - superglue:rte
    - superglue:cb
    - superglue:wic
    - superglue:copa
    - superglue:boolq
    - superglue:multirc

pp: &pp
  - /home/msadraei/trained_final/hzi_cluster_t5_base_glue-mnli/10_combine_128
  - /home/msadraei/trained_final/hzi_cluster_t5_base_glue-sst2/10_combine_128
  - /home/msadraei/trained_final/hzi_cluster_t5_base_glue-qqp/10_combine_128
  - /home/msadraei/trained_final/hzi_cluster_t5_base_glue-qnli/10_combine_128

run_configs:
  - <<: *default
    learning_rate: 0.3
    weight_decay: 0.00001
    peft_params:
      kind: attempt
      n_tokens: 10
      g_bottleneck: 100
      pretrained_paths: *pp
```
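A note on the syntax: `&default` / `<<: *default` and `&pp` / `*pp` are standard YAML anchors and merge keys, so each entry under `run_configs` starts from `default` and only overrides the listed keys. A minimal sketch of how this resolves when loaded with PyYAML (the file name here is illustrative):

```python
import yaml

# PyYAML's safe loader resolves anchors (&default, &pp) and the
# merge key (<<: *default) at parse time.
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

run = cfg["run_configs"][0]
print(run["learning_rate"])        # 0.3  (overridden in run_configs)
print(run["num_epochs"])           # 40   (inherited from &default)
print(run["peft_params"]["kind"])  # attempt
```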
This project supports several kinds of Parameter-Efficient Fine-Tuning (PEFT) methods. The valid values for the PEFT kind are `combine`, `residual`, `simple`, `spot`, and `attempt`. Each run configuration is executed over every dataset in its `tasks` list, as sketched below.
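To make that fan-out concrete, the behavior amounts to a cross product of `run_configs` and `tasks`. The sketch below is illustrative pseudologic under that assumption (`expand_runs` is hypothetical, not the project's actual trainer code):

```python
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

def expand_runs(cfg):
    """Yield one (run_config, task) pair per training run."""
    for run in cfg["run_configs"]:
        for task in run["tasks"]:
            yield run, task

# With the example config above: 1 run config x 9 tasks = 9 runs.
for run, task in expand_runs(cfg):
    print(task, run["peft_params"]["kind"])  # e.g. "glue:cola attempt"
```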
To run a configuration, use the following command:

```bash
python train.py config.yaml
```

This will start the training process based on the settings defined in `config.yaml`.