https://www.languagereactor.com is something similar to Yabla I think... But you can use it indefinitely and subscribe if you need the premium features.
I tried it for a few months, but never was really able to get it to work for me, although I did find the dictionary hover overlay in YouTube videos to be helpful at the beginning. Yabla is different in that they will break up a video into pieces, have you listen to it without subtitles, and then ask you to type out what you heard. This was really helpful, particularly on the advanced levels, as picking up various accents can be difficult until your ears adjust.
It's surprising how much easier it is to translate a foreign word when it's given in a sentence. It also helps when there are multiple translations for a word depending on context.
One way it could grade you automatically is by the speed of flipping the card (or entering the correct answer): if it took less than a second to confirm, then evidently it was easy.
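A rough sketch of what that kind of timing-based grading could look like (the function name and thresholds are made up for illustration, not from any real implementation):

    # Hypothetical auto-grading by answer latency; the cutoffs are arbitrary examples.
    def auto_grade(seconds_to_answer: float, was_correct: bool) -> str:
        """Map response time to an Anki-style grade."""
        if not was_correct:
            return "again"
        if seconds_to_answer < 1.0:
            return "easy"
        if seconds_to_answer < 5.0:
            return "good"
        return "hard"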
But conversely, if I alt-tabbed to chat with a friend, paused studying because the person sitting next to me asked a question, or took a sip from my coffee mug, that doesn't necessarily mean the card was hard, even though all of those take at least as much time as answering a hard card uninterrupted would.
The AI cannot read my mind; there is no approximation that will reasonably accurately capture "how confident was I in my answer" unless I input that myself.
I would argue that deciding how easily I recalled a word and picking between a few loosely defined levels is harder for me than applying a simple algorithm.
If the window loses focus, it could pause automatically. If you're distracted some other way, no big deal: you'll see that word again soon, and you're unlikely to keep getting distracted on the same word. The benefits would outweigh the odd misfire.
It should definitely be added as a variable within the calculation, but the current FSRS predicts how likely you are to access the memory (if it's sufficiently available, which is defined by its retrieval strength), and speed of retrieval isn't really a factor in this version. The different grades are more to define how well all parts of the memory are retrieved.
Not to say that how quickly you can access it doesn't play a role in real life.
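For reference, a sketch of the kind of retrievability curve FSRS models, with no retrieval-speed term anywhere in it (the constants follow the FSRS-4.5 power function as I understand it; other versions differ slightly):

    # Sketch of an FSRS-style retrievability curve: probability of recalling a card
    # after t days, given a memory stability of S days. Stability is defined as the
    # interval at which recall probability drops to 90%. Note there is no term for
    # how quickly the answer came to mind.
    DECAY = -0.5
    FACTOR = 19 / 81  # chosen so retrievability is exactly 0.9 when t == S

    def retrievability(t_days: float, stability_days: float) -> float:
        return (1 + FACTOR * t_days / stability_days) ** DECAY

    print(retrievability(10, 10))  # 0.9, by the definition of stability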
Whenever I try to use Anki I can't figure out what those four buttons actually mean, so I end up with 40 cards that I still can't recall, and then the thing happily drops another 10 on top, and I just delete the deck or the app. I've never learned the thing I was trying to learn with it.
Either I don't understand the algorithm or it doesn't understand me.
The four buttons are apparently a contentious topic in the community. It's gotten more serious because in FSRS misusing "hard" to mean "I didn't get it, but I felt close" is really bad and throws off the algorithm.
I like the design suggestions proposed at [1] and [2] for this particular problem. [2] in particular gives tooltips which are supposed to guide you toward exactly what the buttons mean:
- Again: "My answer was completely incorrect"
- Hard: "My answer was correct, but I hesitated a lot"
- Good: "My answer was correct, and I hesitated a little"
- Easy: "My answer was correct, and I didn't hesitate"
That said, you can also just reduce it to a two-button system: only ever use Again and Good. There is some evidence this works better, especially with FSRS which is doing enough machine-learning behind the scenes anyway that it doesn't need the extra signal from Hard vs. Good vs. Easy.
My tip is to map the 1-4 difficulties as "wrong, or <60% confidence", "60-80% confidence, thought required", "90%+ confidence, thought required", and "90%+ confidence, no serious thought".
Depending on what you're learning, you might vary those. For language learning, that works well imo.
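If it helps to see that mapping written out mechanically, something like this (thresholds taken from my description above, which leaves the 80-90% band undefined, so the code lumps it in with "hard"):

    # Hypothetical mapping from self-assessed confidence to the four Anki grades.
    def grade(confidence: float, needed_thought: bool) -> str:
        if confidence < 0.6:
            return "again"  # wrong, or below 60% confidence
        if confidence < 0.9:
            return "hard"   # roughly the 60-80% band, thought required
        return "good" if needed_thought else "easy"  # 90%+ confidence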
Also, make sure to switch to FSRS. The old algorithm defaulted to "again" resetting a card to 0, while "again" in FSRS does show it again, but doesn't reset it back to being effectively new.
Amazing approach! Is learning a second language too different from the types of courses Miyagi was designed for, or do you see potential in that category?
I was actually thinking about building this because I watch a lot of YT videos in other languages (the best way to do travel research is to search the destination using its local name and get local videos).
Thanks! Definitely some potential, we actually built a language learning tool for a few days early on (but decided that it was too crowded of a space to start in).
Learning languages seems a bit different in that there's more focus on repetition than on comprehension questions, but there are certain topics (like grammar concepts) that could work well in our current structure. Also, there are some really popular YouTube channels for learning any language, so we definitely see potential to augment those videos so people can learn more accurately and effectively.
Using speech-to-text, you could say the answer and it could validate it. If the AI engine is powerful enough, it could have you say the foreign word and rate your pronunciation.
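For the answer-validation half, a minimal sketch using the open-source Whisper model (the file name and matching rule are just placeholders; pronunciation scoring would need a dedicated model, this only checks that the right word was said):

    # Transcribe a recorded answer with openai-whisper and compare it to the
    # expected word. Assumes the answer has already been saved as an audio file.
    import whisper

    model = whisper.load_model("base")

    def answer_matches(audio_path: str, expected: str) -> bool:
        result = model.transcribe(audio_path)
        return expected.strip().lower() in result["text"].strip().lower()

    print(answer_matches("my_answer.wav", "gracias"))  # hypothetical recording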
As for spaced repetition, I developed an alternative which just has a column for the number of times the correct answer was given and orders ascending on that field. This gives you new words first, followed by words you've barely gotten correct, etc.
You could use the data you've collected in the DB to generate a quiz that tests your knowledge of the words.
If you track how many times you entered the correct answer and sort in ascending order on that field, you will be presented with the least-known words first. It's an easy alternative to spaced repetition.
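A sketch of that scheme with sqlite (the table and column names are just examples):

    # Count-based ordering: words with the fewest correct answers come back first,
    # so new words (count 0) and barely-known words lead the queue.
    import sqlite3

    conn = sqlite3.connect("vocab.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS words (word TEXT PRIMARY KEY, times_correct INTEGER DEFAULT 0)"
    )

    def next_words(limit: int = 20):
        return conn.execute(
            "SELECT word FROM words ORDER BY times_correct ASC LIMIT ?", (limit,)
        ).fetchall()

    def record_correct(word: str):
        conn.execute(
            "UPDATE words SET times_correct = times_correct + 1 WHERE word = ?", (word,)
        )
        conn.commit()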