# Study German using Kurzgesagt / Dinge Erklärt Youtube Channels
[Kurzgesagt](https://www.youtube.com/c/inanutshell/videos) and [Dinge Erklärt](https://www.youtube.com/c/KurzgesagtDE/videos) channels contain videos that are _mostly_ the same in English and German, which provides a great opportunity to study both languages.
## Dependencies
* [youtube-dl](https://ytdl-org.github.io/youtube-dl/)
* [ffmpeg](https://ffmpeg.org/)
## Description
* Set the `EN_VIDEO` and `DE_VIDEO` video URLs and the `FINAL_TITLE` at the top of `main.py`
* The `main.py` script does the following:
  1. Downloads the videos at the **lowest** quality (which speeds up the later `ffmpeg` processing) together with the subtitles in both English and German (see the sketch after this list)
  2. Optionally syncs the video timestamps (currently a very naive linear algorithm which is slow and maybe completely useless)
  3. Tries its best to arrange the timestamps of the two subtitle files so they match
  4. Stops and lets the user fix the alignment (see `fix.txt` below)
5. If the `fix.txt` file exists, it assumes times have been fixed
  6. Cuts the scenes defined in `fix.txt`, combines each one in `'EN'+'DE'+'EN'` order, and renders the subtitle text in the middle of the screen
  7. Concatenates the final video, names it using `FINAL_TITLE`, and cleans up the temporary files
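
To make step 1 concrete, here is a minimal sketch of the download step using `youtube-dl` directly. The URLs and output names are placeholders; the real values are configured in `main.py`:

```python
import subprocess

# Placeholder URLs -- the real ones are set as EN_VIDEO / DE_VIDEO in main.py.
EN_VIDEO = "https://www.youtube.com/watch?v=<en-id>"
DE_VIDEO = "https://www.youtube.com/watch?v=<de-id>"

def download(url, lang, out_name):
    """Download the lowest-quality video plus its subtitles for one language."""
    subprocess.run(
        ["youtube-dl",
         "-f", "worst",          # lowest quality keeps the later ffmpeg steps fast
         "--write-sub",          # also fetch the uploaded subtitles...
         "--sub-lang", lang,     # ...for this language only
         "-o", out_name,
         url],
        check=True)

download(EN_VIDEO, "en", "en_video.mp4")  # .mp4 is assumed, as noted in the TODO section
download(DE_VIDEO, "de", "de_video.mp4")
```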
### `fix.txt`
The file is used to further align scenes. It has the following form (it is auto-generated):
```
000 ... Text in English |
... 000 | Text in German
... 001 | Text 2 in German
001 ... Text 2 in English |
-------
... 002 | Title in scene 2 in German is in one line
002 ... Title in scene 2 |
003 ... is in two lines in English |
-------
```
* The file is generated if `fix.txt` doesn't exist
* The first number is the Nth subtitle in English
* The second number is the Nth subtitle in German
* The numbers are three digits long (`000`) and are separated by a space
* Either number may be `...`, which signifies that the subtitle in that language is not defined on this line
* After the numbers, the rest of the line is not processed by the program, but it helps the user align the texts: English and German separated by the `|` character
* Scenes are separated by 7 dashes (`-------`)
* All subtitles in a scene are grouped by language and then a cut is produced
* The timings of the videos are not defined here; they are taken from the corresponding subtitle (i.e. there is no way to adjust timings other than adjusting them in the corresponding subtitle file)
* The user can re-arrange the lines as they wish, combining the subtitles however seems most useful (e.g. trying to form full sentences, although that may be very challenging if the subtitles are split awkwardly); a small parsing sketch is shown after the example cuts below
Given the above file, two cuts will be produced with the following on-screen text (and hopefully aligned audio):
1. First:
```
Text in English Text 2 in English
Text in German Text 2 in German
```
2. Second:
```
Title in scene 2 is in two lines in English
Title in scene 2 in German is in one line
```
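
For illustration, here is a minimal sketch of how a `fix.txt` file could be parsed into scenes. It mirrors the format described above; it is not the parser `main.py` actually uses:

```python
def parse_fix(path="fix.txt"):
    """Parse fix.txt into scenes.

    Each scene is returned as a pair of lists:
    (English subtitle indices, German subtitle indices).
    """
    scenes, en_ids, de_ids = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("-------"):           # scene separator
                if en_ids or de_ids:
                    scenes.append((en_ids, de_ids))
                en_ids, de_ids = [], []
                continue
            if not line.strip():
                continue
            en_tok, de_tok = line.split(" ", 2)[:2]  # the first two tokens are the indices
            if en_tok != "...":
                en_ids.append(int(en_tok))
            if de_tok != "...":
                de_ids.append(int(de_tok))
    if en_ids or de_ids:                             # trailing scene without a separator
        scenes.append((en_ids, de_ids))
    return scenes

# For the example file above this returns:
# [([0, 1], [0, 1]), ([2, 3], [2])]
```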
## TODO:
* The cmd interface could be better - perhaps take the name of the video as the first argument and then store the metadata in `data/video_title/data.py` or similar (see the current `data/strange_stars` for inspiration).
  - This would also make it possible to cache intermediate files such as `fix.txt`
* A **HUGE assumption** here is that the videos are of roughly the same length, quality and resolution - boundary conditions outside of these assumptions have not been tested at all
  - E.g. the videos are assumed to be `.mp4` when downloading at the worst quality - this may fail because mp4 is hardcoded in the commands (and even though ffmpeg can probably merge different formats, it will fail on different resolutions/qualities)
* Linear sync is ugly (and therefore not used at all by default right now). It linearly searches for the matching frame from -5 to +5 seconds with a constant step on `de_video`, starting from a 0th point of `en_video`. The idea is to synchronize the frames, which should maybe provide a better guide to subtitle alignment. Much better would be a variant of binary search with a varying step alternating between the left/right side, or something gradient-descent-like (see the sketch at the end of this list).
  - Further, generating frames, saving them to disk and comparing the saved images is very naive and slow - maybe come up with a faster method to compare frames overall
* Concatenating EN+DE+EN and then overlaying the subtitles (step 6 above) currently takes two ffmpeg commands while it could be one
  - Further, both steps 6 and 7 could be optimized/combined with some advanced ffmpeg-magic
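
As a rough sketch of the frame-matching idea from the linear-sync item above, one way to avoid writing images to disk is to pipe heavily downscaled grayscale frames straight out of `ffmpeg` and compare them in memory. This assumes `numpy` is installed; the function names and the brute-force search below are illustrative, not the current implementation:

```python
import subprocess
import numpy as np

W, H = 160, 90  # downscale heavily; only a rough similarity signal is needed

def grab_frame(video, ts):
    """Grab one frame at timestamp `ts` (seconds) as a grayscale array,
    piped straight from ffmpeg so nothing is written to disk."""
    raw = subprocess.run(
        ["ffmpeg", "-ss", str(ts), "-i", video,
         "-frames:v", "1", "-vf", f"scale={W}:{H}",
         "-f", "rawvideo", "-pix_fmt", "gray", "-"],
        capture_output=True, check=True).stdout
    return np.frombuffer(raw, dtype=np.uint8).reshape(H, W)

def frame_distance(en_video, de_video, en_ts, de_ts):
    """Mean absolute pixel difference; lower means the frames look more alike."""
    a = grab_frame(en_video, en_ts).astype(np.int16)
    b = grab_frame(de_video, de_ts).astype(np.int16)
    return float(np.abs(a - b).mean())

def best_offset(en_video, de_video, en_ts, window=5.0, step=0.5):
    """Brute-force search of de_video offsets in [-window, +window] seconds."""
    offsets = np.arange(-window, window + step, step)
    scores = [frame_distance(en_video, de_video, en_ts, en_ts + off) for off in offsets]
    return float(offsets[int(np.argmin(scores))])
```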