Moving Towards More Scalable Podcast Tooling
Constraints
- cannot download everything,
- need to perform a kind of breadth-first search across podcasts, to avoid neglecting some of them,
- six states, grouped into two categories:
- for each podcast, a JSON file lists todo, next and done episodes (hereafter called ltodo, lnext, ldone)
- for each podcast, git-annexed files carry todo, next and done metadata (hereafter called gtodo, gnext, gdone)
Only gtodo and gnext lead to stuff being downloaded.
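The state model above can be written down as a small enum. This is only a sketch; the constant names and the `DOWNLOADABLE` set are my own labels, not part of the actual scripts.

```python
from enum import Enum

class State(Enum):
    """Six per-episode states: an 'l' (JSON list) variant and a 'g'
    (git-annex metadata) variant of todo/next/done."""
    LTODO = "ltodo"   # listed in the podcast's JSON, not yet scheduled
    LNEXT = "lnext"   # scheduled in the JSON list, not yet in git-annex
    LDONE = "ldone"   # archived as listened in the JSON list
    GTODO = "gtodo"   # git-annex metadata: queued for download
    GNEXT = "gnext"   # git-annex metadata: downloaded, to send to the phone
    GDONE = "gdone"   # git-annex metadata: listened, awaiting archival

# Only the git-annex states gtodo and gnext lead to downloads.
DOWNLOADABLE = {State.GTODO, State.GNEXT}
```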
The algorithm in a few scripts
- go through all the podcasts and ensure that the total duration of gnext, gtodo and lnext is > 200 minutes, moving some episodes from ltodo to lnext if need be,
- then go through all the podcasts and move from lnext to gtodo until I reach 6000 minutes of listening (to be downloaded)
- then go through all the podcasts and move from gtodo to gnext until I reach 600 minutes of listening (to be sent to my phone)
If a step cannot reach its quota, run the step above it to refill its source pool, then try again.
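All three steps share the same shape: promote episodes from one state to the next until a minute budget is met. A minimal sketch of that shared step, assuming a simple dict-per-episode representation (the `promote` name, the dict shape and the budgets are illustrative, not the real scripts):

```python
def promote(episodes, src, dst, target_minutes, current_minutes):
    """Move episodes from state `src` to state `dst` until the running
    total of minutes in the downstream pool reaches `target_minutes`.
    Returns the moved episodes and the updated total."""
    moved = []
    for ep in episodes:
        if current_minutes >= target_minutes:
            break
        if ep["state"] == src:
            ep["state"] = dst
            current_minutes += ep["minutes"]
            moved.append(ep)
    return moved, current_minutes

# The three passes then differ only in their states and budgets:
#   promote(eps, "ltodo", "lnext", 200, buffered_minutes)   # per podcast
#   promote(eps, "lnext", "gtodo", 6000, download_minutes)  # global
#   promote(eps, "gtodo", "gnext", 600, phone_minutes)      # global
```

Iterating podcast by podcast in the outer loop (not shown) is what gives the breadth-first behaviour: each podcast contributes a little before any single one dominates the budget.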
When an episode is listened to, I move it to gdone. Later, another script collects all the gdone episodes and moves them to ldone.
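The archival pass can be sketched as a pure function over in-memory state, assuming annex metadata is modelled as a `(podcast, episode) -> state` dict and the JSON lists as nested dicts (both shapes are assumptions for illustration; the real script would read and write git-annex metadata and the JSON files):

```python
def archive_done(annex_states, json_lists):
    """Move every episode in gdone into its podcast's ldone list,
    dropping the corresponding annex-side record."""
    for (podcast, ep_id), state in list(annex_states.items()):
        if state == "gdone":
            json_lists[podcast]["ldone"].append(ep_id)
            del annex_states[(podcast, ep_id)]
    return annex_states, json_lists
```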