# VirgoolInformal-Speech-Dataset

This repository contains a dataset of informal Persian audio and text chunks suitable for Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) tasks. The dataset was created by crawling informal Persian text from [virgool.io](https://virgool.io) using the crawling scripts from [this repository](https://github.com/MahtaFetrat/Virgool-Informal-Posts-Crawler), recording its spoken form, and processing the raw audio and text files into smaller, equivalent chunks.
## Dataset Description

The dataset includes:
- Raw audio files recorded from the crawled text.
- Raw text files crawled from the blog.
- Processed audio and text chunks, aligned for ASR and TTS tasks.
- A fully open processing pipeline documented in a Jupyter Notebook, detailing each step from raw data to processed output.
### Raw Data

The raw data consists of:
- Audio files in their original format.
- Text files in their original format.
### Processed Data

The processed data consists of:
- Audio files converted from m4a to mono mp3 format.
- Text files normalized, cleaned, and tokenized into sentences.
- Aligned audio-text chunks created with the forced-alignment tool Aeneas.
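The m4a-to-mono-mp3 conversion is typically a single ffmpeg call. The sketch below builds such a command in Python; the codec and bitrate choices are illustrative assumptions, not the notebook's documented settings.

```python
from pathlib import Path

def ffmpeg_mono_mp3_cmd(src: Path, dst: Path, bitrate: str = "128k") -> list[str]:
    """Build an ffmpeg command converting an .m4a recording to mono .mp3.

    The bitrate default is an illustrative assumption, not the value
    used by the processing notebook.
    """
    return [
        "ffmpeg",
        "-i", str(src),               # input .m4a recording
        "-ac", "1",                   # downmix to a single (mono) channel
        "-codec:a", "libmp3lame",     # encode as mp3
        "-b:a", bitrate,              # target audio bitrate
        str(dst),
    ]

# Hypothetical file names, for illustration only:
cmd = ffmpeg_mono_mp3_cmd(Path("raw-data/post_001.m4a"),
                          Path("processed-data/post_001.mp3"))
```

The command list can then be run with `subprocess.run(cmd, check=True)` once ffmpeg is installed.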
## Processing Notebook

The processing of the raw data is documented in a Jupyter Notebook, which includes the following steps:

1. **Audio Processing**: Converting audio files from m4a to mono mp3.
2. **Text Processing**: Normalizing text, removing and substituting symbols, removing links and references, converting numbers to their spoken forms, and removing extra spaces.
3. **Sentence Tokenization**: Splitting text files into sentences using a custom sentence tokenization script.
4. **Forced Alignment**: Creating aligned audio-text chunks using Aeneas.
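Steps 2 and 3 might look roughly like the sketch below. The symbol set, the per-digit number-to-word mapping, and the punctuation-based sentence splitting are simplified illustrations, not a copy of the notebook's normalization rules or its custom tokenizer (a real pipeline would, for example, spell out full multi-digit numbers).

```python
import re

# Per-digit Persian spoken forms; an illustrative simplification of the
# notebook's number-to-spoken-form conversion.
DIGIT_WORDS = {"0": "صفر", "1": "یک", "2": "دو", "3": "سه", "4": "چهار",
               "5": "پنج", "6": "شش", "7": "هفت", "8": "هشت", "9": "نه"}

def normalize(text: str) -> str:
    text = re.sub(r"https?://\S+", " ", text)            # remove links
    text = re.sub(r"[\"'()\[\]{}«»]", " ", text)         # strip symbols
    text = re.sub(r"[0-9]",                              # ASCII digits only
                  lambda m: f" {DIGIT_WORDS[m.group()]} ", text)
    return re.sub(r"\s+", " ", text).strip()             # collapse spaces

def sentences(text: str) -> list[str]:
    # Split on Persian and Latin sentence-final punctuation.
    return [s for s in re.split(r"(?<=[.!؟?])\s+", text) if s]
```

For example, `normalize` turns a sentence containing a link and the digit `2` into link-free text with the spoken form «دو», which `sentences` then splits at sentence boundaries.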
### Running the Notebook

To run the processing notebook, place the raw data files in a folder named `raw-data` in the root directory. The processed audio and text files will be written to a directory named `processed-data`, and the forced-alignment results to `forced-aligned-data`.

For detailed instructions on environment setup, please refer to [the processing notebook](https://github.com/MahtaFetrat/VirgoolInformal-Speech-Dataset/blob/main/VirgoolInformal_Dataset_Processing.ipynb). You can also view and run [the processing notebook in Google Colab](https://colab.research.google.com/drive/1AjvrRisJYdqvNdSDKdSWfxge6S29mavm?usp=sharing).
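The forced-alignment step is usually driven through Aeneas's `execute_task` command-line tool. The sketch below builds such an invocation; the task options (language code `fas`, plain-text input, JSON output) are plausible defaults, not necessarily the ones the notebook uses.

```python
from pathlib import Path

def aeneas_task_cmd(audio: Path, text: Path, out: Path) -> list[str]:
    """Build an `aeneas.tools.execute_task` invocation aligning one
    audio file with its sentence-per-line text file.

    The configuration string is an illustrative assumption.
    """
    config = "task_language=fas|is_text_type=plain|os_task_file_format=json"
    return ["python", "-m", "aeneas.tools.execute_task",
            str(audio), str(text), config, str(out)]

# Hypothetical file names, for illustration only:
cmd = aeneas_task_cmd(Path("processed-data/post_001.mp3"),
                      Path("processed-data/post_001.txt"),
                      Path("forced-aligned-data/post_001.json"))
```

The resulting JSON sync map gives start and end times for each text fragment, from which the aligned audio-text chunks can be cut.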
## Usage

The dataset can be used to train Persian ASR and TTS models tailored to informal Persian speech. It can also be used to evaluate ASR models in terms of Character Error Rate (CER); see [this repository](https://github.com/MahtaFetrat/Open-Persian-ASRs) for an example evaluation setup.
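CER is the number of character-level edits (insertions, deletions, substitutions) needed to turn a hypothesis transcript into the reference, divided by the reference length. A minimal self-contained sketch:

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Character-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character Error Rate: edits per reference character."""
    return edit_distance(ref, hyp) / max(len(ref), 1)
```

In practice a library such as `jiwer` offers the same metric; this sketch only makes the definition concrete.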
## Links

- [Raw Data](link to be updated)
- [Processed Data](link to be updated)
- [Processing Notebook in Colab](https://colab.research.google.com/drive/1AjvrRisJYdqvNdSDKdSWfxge6S29mavm?usp=sharing)
## License

The code in this project is released under the open MIT License, and the data under the open CC0 license.
## Contributing

Contributions are welcome! Please feel free to submit a pull request.

---

Enjoy working with the VirgoolInformal-Speech-Dataset!