Voidness Way

“Voidness is that which stands right in the middle between this and that. The void is all-inclusive, having no opposite—there is nothing which it excludes or opposes. It is living void.” — Bruce Lee

I. In the beginning was the Void.

The Void (空) subsumes your self-study, and involves your metacognition, allowing you to learn how to learn, and adapt accordingly.

Typically, the conception is that the most effective learning method research has discovered–spaced retrieval–requires software like Anki, Memrise, and the like. This is untrue.

We’ve been distracted by the ‘Supermemo model’ of spaced retrieval, popularized by articles like this, from Wired. Note the headline: Want to Remember Everything You’ll Ever Learn? Surrender to This Algorithm. Tongue-in-cheek, but that sums up the problem nicely.

Well that, and the fact that most people call such digital flashcard software ‘spaced repetition’, which loses the emphasis on ‘retrieval practice’.

Our conception of effective learning has been constrained by the tools we use: in particular, spaced retrieval became spaced repetition, with cued-recall cards scheduled by multiple graded buttons (easy/good/hard) and card contents formulated into simple fact-like forms. The idea of a magical algorithm that would let us remember everything took over the focus.

I have always described Anki as a HUD, not a vessel. It’s actually sort of both, but I wanted to emphasize that Anki is a learning tool: not a tool for maintaining what you’ve learned, but for learning through testing and relearning through testing a few times, spaced out, to reinforce it. This isn’t some quirky idea of mine that runs contrary to the anecdotal opinions you stumble across here and there. This is what the researchers emphatically state and show all the time.

I use the HUD metaphor because the tools we use for learning should be transparent, and persistent. A wearable computer, augmented reality lenses, rather than fragile glasses or a bulky VR headset.

But it’s not transparent and persistent, is it? It’s opaque and its usage is inconsistent. Even with mobile apps, the act of using Anki or similar software is frequently a task, a burden you take on, and the scheduling algorithm is your taskmaster. Learning should be difficult–desirably so, enough to produce optimal learning. But initiating/instantiating the learning process should not be.

What ends up happening, under the common illusion for those who don’t follow the way of the Void, is that you either use Anki or similar software (e.g. the site Memrise), or you don’t do spaced retrieval at all. So once the software becomes a burden, you have periods of anxiety, a sense of inefficiency, and ineffective learning. On the rare occasion someone does ditch the software for spaced retrieval, they narrowly interpret unplugged ‘spaced retrieval’ as tons of paper flashcards in Leitner boxes… These all-or-nothing limitations and narrow interpretations are unnecessary.

That SRS model I described is a specific, unproven model built on much broader research-based learning techniques. ‘Unproven’ not in the sense that it’s ineffective–it’s undoubtedly effective–but in the sense that it inherits its effectiveness primarily from the broader techniques, which have been extensively researched and replicated for a very long time. The adaptive scheduling algorithm based on graded buttons, the formatting: these are narrow refinements, attempts at maximizing, which studies clearly show are not necessary to produce far superior learning compared to other methods such as passively re-reading.

So let’s take a look at effective learning in the Void. Then we will come back to Anki, with the epiphany that there is no Anki.

I am going to link to multiple collections of short, accessible quotes from researchers which will explain each technique, and in those links I will expand with practical implications. Think of this post as a guided tour through those links… it probably won’t make much sense unless you click them.

No, seriously, see that link Spaced Retrieval below? Read it before reading the following section! It’s the most important part. If you read nothing else here today, read that. Likewise for the other headings. And if you click and don’t want to read it all, at least skim the quotes, which are all from the main spaced retrieval researchers.

Spaced Retrieval

Spaced retrieval is the primary method you hear about, usually with spacing as the most prominent element. However, retrieval practice seems to be the real key: it’s almost always the #1 focus in studies, followed by spacing when the two aren’t combined. Spacing complements retrieval, and we can supplement spaced retrieval with various other methods: pretesting, feedback (required), interleaving, changing study locations, active encoding, and mnemonics are what I will be focusing on, as these are what the researchers tend to focus on.

Interleaving and Changing Study Locations

Retrieval, as we’ve noted, is primary: it’s effective independent of spacing, and it requires corrective feedback. It synergizes with spacing and interleaving. Retrieval can take many forms: open-ended or precise, quick or slow, abstract or physical, applying to various domains. Spacing can be rough and arbitrary, and is an optimization you can disregard as you like. Feedback doesn’t have to be immediate–take a day if you want. We introduce variety and randomness with interleaving and by changing study locations.

“By knowing things that exist, you can know that which does not exist. That is the Void.” — Musashi Miyamoto

When we examine things further, we come to understand the importance of active attention and minimalist reformulation of to-be-learned material throughout each day, focusing on ‘triggers’/cues, noticing gaps in our knowledge. You can focus on these and wait to practice later in a small number of large sessions (90 minutes, perhaps), or interweave in short sessions as you look and/or listen.

Pretesting and Feedback

We want to build a web of associated triggers, something light that gives us the ability to think on our feet and find information when we need it. Pretesting offers us freedom again: we don’t even need to have encountered material to benefit from attempts at retrieval, regardless of whether they’re successful attempts or how long we spend trying.

Continually reflect on your learning process and make adjustments.

“There is no refuge from the living void…” — Thomas Ligotti

Organize your mind and environment–it can be with a conventional memory palace, or something more abstract. Mnemonics let us build useful links, part of that web of knowledge we desire.

Organizing Knowledge

Emphasize triggers and ‘pointers’ to knowledge.

Study sessions should be flexible in both content and structure: the content primarily depends on what you need or want to learn. Create new cues for learning as you work, as necessary.

Think about what order of items–easy or hard–motivates you best. For new items, hardest to easiest seems ideal; for learned items, easiest to hardest. We want flow, to be ‘in the zone’.

Learning should be opt-in and effortless, your methods and environment allowing you to quickly engage with materials and tools. Which brings us to ‘Void’ Anki.

II. In the beginning was the Command Line.

“The void awaits surely all them that weave the wind.” — James Joyce

Void Anki

My current vision for Anki, based on the above principles, is to make it an entirely opt-in process. The interfaces for our tools affect us, so removing the decks and counts from the interface is the first step.


Once we do this, we navigate Anki by ‘command line’, or rather, by the Filtered Deck options. You control how active or passive you want to be: perhaps in one session, or part of a session, you’re just summoning a subset of ‘Japanese’ triggers/cues (as I like to think of them, rather than cards) from the pool of ‘can/should be studied’ cards.

Perhaps in another session or subsession, you’re targeting specific material, say Chapter X of a book, changing filters on the fly to expand the current pool of triggers/cues, creating new items, or not even creating them but pausing to stop and practice elsewhere, without Anki.
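For instance, a ‘command’ here is nothing more than the search string behind a filtered deck. A few illustrative examples (the deck and tag names are hypothetical placeholders; the search terms themselves are standard Anki search syntax):

  • deck:Japanese is:due (the whole pool of due Japanese cues, for a free-form session)
  • deck:Japanese tag:ch12 (just the cues you tagged for a specific chapter, whatever tag you used)
  • deck:Japanese is:due prop:ivl>=21 (only mature cues, e.g. for low-pressure fluency-style practice)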

Putting it all together, as an effective learner who uses spaced retrieval and other techniques, Void Anki has become just another tool in your kit, along with Google and your text editors or personal wikis or notebooks or Jupyter or anything else.

CriteriEN (encrit v3.0)

This is an update to what was originally called encrit, a research-based strategy for reviewing Anki cards or spaced retrieval study in general. It’s essentially taking a commonsense approach and formalizing it.

A fundamental component of spaced retrieval studies that is often overlooked by the average learner who uses Anki or other software is the division of study sessions into cycles consisting of two phases: studying and testing, or memory encoding and memory retrieval.

For the name of this method, the EN comes from ‘encoding’ (memory) and the rest from criterion levels in retrieval practice–how many times an item must be correctly recalled in a session.

The number of cycles of the two phases–studying (encoding) and testing (retrieving)–depends on reaching a criterion level, typically one correct recall of each card; three correct recalls seems to be the maximum before you hit inefficient, diminished returns, and is really quite unnecessary. See the work of Rawson and Vaughn especially for research that specifically explores criterion levels in retrieval practice, though it’s present in nearly all research on spaced retrieval.

Previously, encrit used multiple learning/relearning sessions in the first 24h. This was overkill, I feel. Hence CriteriEN. Plus the name was dumb so I needed an excuse to change it to a slightly less stupid name.

I’ll quote a researcher’s description of a typical spaced retrieval (successive relearning) session, then outline how to do this in Anki.

“You could also encourage students to use flashcards for the most important concepts. Importantly, for successive relearning, after they attempt to retrieve each concept, they should check the correct answer to evaluate whether they have it or not. If they do not have it, then they should mark that concept and return to it again later in the study session. In addition, they should continue doing so until they can correctly retrieve it.

Once they do correctly retrieve it, then they can remove the concept from further practice during that particular session.” — Rawson & Dunlosky

So the outline of a study session is like so. Let’s say you sit down on a day to study for 20 minutes. You have a batch of new cards.

—Initial Session (e.g. Day 1):

——Study/Test Cycle:

———Initial Study Phase: Look at the front/question (‘cue’) and back/answer (‘target’), use mnemonics, listen and repeat, write things out a few times, pop open an IDE and write some code or work through math formulae, etc. Set each of the studied cards aside (e.g. hit Again*). *I renamed the button from ‘Again’ to ‘Test’ using the Low Key add-on to signify that hitting that button sends it to the phase below.

  • The order in which you study may vary: a particular optimal order, or it may be random, ordered from most difficult to easiest (since these are new cards: see the peak-end rule), or in the order added (useful when knowledge is progressively built upon in cards, e.g. if added from a STEM textbook).

———Test Phase: This usually occurs 5-10 minutes after the Study phase for each card, if you set the first learning step in Anki to 5-10 minutes. That is, you hit Again (the first step) as you finish encoding each new card above, and when Anki presents those cards again 5-10 minutes later, it is for testing: look at just the front/question (‘cue’) of a studied card (in whatever order Anki presents them, based on when you hit Again), and from the cue attempt to recall the back/answer (‘target’), or use the cue as a reminder to work through a math problem or similar on your own to arrive at that answer.

  • If you pass once, schedule the card for the next session (e.g. hit Correct), as it met criterion level 1–one successful retrieval this session.
  • If you fail a card, re-study the answer on the back and set it aside (e.g. hit Again) for retesting, so that minutes later–mixed in with testing the other cards–you retest it. Repeat this till you pass it once in this session. So all cards will eventually be passed once.

End Initial Session

—Next Session (e.g. Day 3)

  • Now these are ‘review’ cards, rather than ‘new’ or ‘learning’.

———Test Phase: We start with the test phase, since we already encoded the card and want to learn through retrieval practice (+ we ‘re-encode’ after each test by reviewing the back of the card). We repeat the above two bullet points under the initial Test Phase: pass once or restudy the back and hit Again, then re-test when it reappears minutes later. Remember to wait a few minutes after reviewing the back of the card before retesting, to flush recent memory. Also, remember that corrective feedback strengthens memories even when you pass (via metacognitive improvements and reconsolidation), so even when you are correct, try and look at the back of a card at least briefly to assess or refresh your understanding before sending it on to the next session by hitting the Correct* button. *I renamed this from ‘Correct’ to ‘Pass’ using the Low Key add-on to emphasize it’s being passed on to the next session.

  • If a failed card seems like it needs special attention, change the front of the card to make it easier somehow, or implement a better encoding strategy when re-studying. Add audio, images, use a mnemonic, whatever. When I was learning isolated Hangul (which I don’t recommend, I suspect starting with larger units would’ve been better), I used simple mnemonics for problem cards. I’m continually tweaking the front of cards (sometimes I get lazy when creating them and use bad cues) to find just the right cues. We want difficulty to be balanced between the Front and Back, with the Front slightly weak to give us desirable difficulty. That is, the stronger the cue, the easier the card, and the weaker the cue, the harder the card, so we want to find that sweet spot. For language cards, try balancing information across senses, which spreads out the cognitive load and transfers well to real language use, as language is multisensory.

—Next Session (e.g. Day 7-10)

Repeat the above: test-feedback-retest-etc. In other words, every time we have an Anki/spaced retrieval session, we use a test-feedback loop to make sure that all cards are passed once and moved on to the next session. No card left behind, if you will.
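If it helps to see that loop spelled out, here is a minimal sketch of the session logic in Python. It is only an illustration of the criterion-level cycle described above (criterion level 1, feedback and restudy on failure, retest a few minutes later), not an Anki add-on; the cards and the self-grading prompts are hypothetical stand-ins.

import random
import time

# Hypothetical cue/target pairs standing in for cards.
cards = [
    ("学校を ← 休んだ", "skipped school"),
    ("薬品を ← 混ぜた", "mixed the chemicals"),
]

def test(cue, target):
    """One retrieval attempt, followed by corrective feedback (which doubles as restudy)."""
    input(f"Cue: {cue}\nTry to recall the target, then press Enter... ")
    print(f"Target: {target}")
    return input("Did you recall it correctly? (y/n) ").strip().lower() == "y"

def session(cards, new=True, wait_seconds=300):
    """One CriteriEN session: every card must reach criterion level 1 (one correct recall)."""
    if new:
        # Initial study phase: encode each card (cue and target together, mnemonics, etc.).
        for cue, target in cards:
            print(f"Study: {cue} -> {target}")
    queue = list(cards)
    while queue:
        time.sleep(wait_seconds)    # wait a few minutes to flush recent memory
        random.shuffle(queue)       # test in a mixed order
        queue = [(c, t) for c, t in queue if not test(c, t)]   # failed cards go around again

session(cards, new=True)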

We have a tad more work to do to set this up in Anki:

  • For lapsed card settings (I explain why below): set ‘new interval’ to 100%, or at least above 0%, so that the interval isn’t reset entirely; set minimum interval to ~2 days; and leave steps as a single step of 5-10 minutes.

The x-minutes length of that first (only) step, representing Again, is for postponing the test phase to flush your ‘short-term’ memory. That is, after you finish studying/encoding a card by looking at the answer side, hit Again to present it in the test phase x minutes later, continuing on with the other cards. It might be longer or shorter than 5-10m depending on your material, methods, or the number of new/lapsed cards. For language cards, I tend to use smaller wait times, for STEM cards longer, as I tend to do procedures such as working through problems with those, which is more time consuming per card. See this setting under the main Preferences for preventing Anki from showing you studied cards too soon for testing.

So anyway: typically, when you’re incorrect, you’d look at the back to re-study, hit Again (1) to queue the card for re-testing, wait for x minutes, then retest. Under these settings, if you then pass it, the card will go back to around its original interval (since we have it set at 100% rather than 0%, which would reset it), which is at least 2 days due to the minimum interval setting. [If you use the Low Stakes add-on discussed below, the new interval will actually be the card’s previous successful interval. I quite like this slight backstep for lapses, which I’ve seen used in one interesting research paper (normally researchers don’t reset at all).]

As far as I know, Anki doesn’t penalize the ‘ease’ of cards if you hit Again when in learning/relearning mode. It does keep resetting the intervals of failed cards based on whatever your setting is, but I modified this behavior with the Low Stakes add-on so that the interval is only adjusted when you fail a card in review (rather than one that hasn’t been successfully re-tested and is still in that phase where you’re trying to get it to criterion level).

  • For new card and regular review settings, you’d also have just one step: 5-10 minutes, or however long you think is appropriate. Change graduating and easy intervals to 2 days to ensure an initial 2-day gap after Day 1 as described below, set starting ease to 300-500%, make sure easy bonus is 100% (so it does nothing–no need to grade Easy; keep grading and scheduling simple), and leave the interval modifier at 100% and the maximum interval at its default. You could experiment with a higher ease, e.g. 400%; this would better approximate the 10-20% optimal gap found in Pashler et al.’s research. Edit: I’ve been using 400% and I love it–I think a constant ease of 400% should be the default behavior you implement…
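To gather the settings from the two bullets above in one place (these are the values suggested in this section, not Anki’s defaults):

  • New cards: one learning step of 5-10 minutes; graduating and easy intervals of 2 days; starting ease 300-500% (400% is what I’ve settled on).
  • Reviews: easy bonus 100%; interval modifier 100%; maximum interval left at the default.
  • Lapses: one relearning step of 5-10 minutes; new interval 100% (or at least above 0%); minimum interval of ~2 days.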

If you don’t use filtered decks, the Anki author Damien Elmes created an add-on to allow reviewing in different orders, here. I recommend using it on its default of descending intervals, so that you test cards according to the principles of proximal learning: from easiest to hardest. Of course, if you use Filtered Decks, which I recommend at all times, then you have more options. Review the tips here in the original encrit if you need to for using filtered decks. Especially: don’t rebuild decks that contain cards still in the learning phase (which can happen if you have multiple steps set up, which I no longer advocate); some strange black magic happens to intervals when you do that.

You may want to use this add-on to ensure learning cards are always first, even if they spent over a day in learning.

For optimal recall, from recent studies I’ve read, doing a single session on Day 1, then waiting a couple of days till Day 3 allows for a desirable level of difficulty (desirable difficulty is the catchphrase that represents why spaced retrieval is superior to cramming). I’ve found this to be true for all card types so far (e.g. programming, maths, language).

Keep in mind that the study/encoding phase is not ‘the learning phase’, with the testing/retrieval phase merely assessing what you learned. That is how people mistakenly think of Anki–that it simply stores and tests things you already learned (which is actually the role memory palaces play, instead). As the researchers repeatedly stress, Anki is a learning tool, not a mere retention-extension tool.

The primary learning element of this process is actually the testing phase. Testing is the best way to learn. You learn facts and concepts better by testing than by passively studying them. The studying phase is just an initial encoding process that gets your foot in the door of the superior learning of testing.

Testing is a learning process in that it involves the active reconstruction of knowledge, which renders your memories plastic and amenable to change and augmentation (“reconsolidation”). Each successive test continues this enhancement. As long as you get corrective feedback, you don’t even need to study new information before testing it to optimize learning; prior study just makes things feel easier, and corrective feedback makes up for any slacking in your encoding.

At any rate, the tradeoff with this criterion constraint, provided we adhere to it, is that when we fail cards, we don’t have to reset them, as noted in the settings above. Relearning is easier than learning, and re-study and re-testing to criterion refreshes and enhances the memory and allows us to place them back with the other passed cards (although, see the Low Stakes add-on linked below which sets intervals back a bit). However, we can tweak things by tagging cards we fail multiple times as leeches or suspending them, then editing them to enhance our presentation (e.g. making the cue stronger to ease recall of the target) on the front of the card. This add-on might make that easier.

After all, this is a quality control issue, the quality of our memories, not a quantity issue. Rather than mucking about with the timing of reviews, we need primarily to improve the quality of our encoding, changing our methods and presentation as needed, if needed (if a card becomes a leech through repeated failures).

I’ve noted in the past that a special algorithm that reads our minds is not only implausible but unnecessary, because studies show that very precise interval spacing is not needed; you just need some kind of spacing–in the long run, equally spaced intervals are as effective as expanding intervals, both being many times more effective than no intervals. All that fetishistic talk about forgetting curves and indices and memory decay is really gratuitous. I know all about needlessly complicated learning methods, believe it or not.

However, though expanding and equal retrieval intervals give equal results in the end, I’ve also noted that efficiency-wise, this means allowing your spacing to expand is better, because you can have fewer study sessions giving equal or superior results (note the surprise that marginal superiority was found for expanding ‘for the first time’). More recent studies have shown that expanding retrieval gives higher average recall (if tested at a random point in time) than equal retrieval, so this is another reason to let the exponential improvements in spacing and recall grow naturally. But again, there’s no special algorithm or ultra-precise scheduling necessary. Those seem mostly to be marketing notions meant to get you to buy an app or sign up for yet another learning site.

The typical gap between the first handful of review sessions can be looked at most simply as (previous gap * 3), if we use 300% ease as an example (setting aside the small fuzz Anki applies to intervals). If the gap was 2 days, the next will be 6 days; if 6 days, then the next could be 18 days. At this point, if you pass, research I’ve seen from Pashler et al. indicates you should have good retention for at least 54 more days, but more likely 6 months to a year. And that’s if you only saw the information on isolated cards, as in controlled studies, so imagine this as part of a bigger context of actual usage.

So you can look at sessions as being on Day 1, then 2 days later on Day 3, then 6 days later on Day 9, then 18 days later on Day 27, and from there the safe bet would be Day 81 (Day 27 + (18*3)). But you should have good recall for a long while afterward: research has shown that an interval of 21 days can give you a year’s retention, and that the gap between session X and session Y should be 10-20% of the gap between session Y and session Z for optimal recall at session Z–which can be anywhere from 250% to 500% in Anki terms. It’s flexible.
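As a sanity check on that arithmetic, the whole schedule can be computed in a few lines of Python (assuming a constant 300% ease and the 2-day graduating interval, and ignoring Anki’s interval fuzz):

ease = 3.0   # a constant 300% ease; try 4.0 for the 400% I now prefer
gap = 2      # days, from the 2-day graduating interval
day = 1      # initial session on Day 1
for _ in range(4):
    day += gap
    print(f"review on Day {day} (after a gap of {gap} days)")
    gap = round(gap * ease)
# prints reviews on Day 3, Day 9, Day 27, and Day 81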

In this simple model, which is essentially just binary pass/fail with criterion-level [re]study/[re]test cycles, batches of cards will generally stay together, and we only need a few sessions (such as Day 1, Day 3, Day 9) for a card to become well-learned for the purposes of template reversal (e.g. switching from Recognition to Production). We don’t actually need to grade our answers on a scale; we can just answer Correct/Incorrect–gradations of how well we passed will be sorted out between the criterion-based cycles and our liberal approach to interval sizes.

In Anki, therefore, we only need two buttons, as this add-on achieves. I call it Low Key Anki. Either fail and encode/retrieve to criterion level, or pass it according to the simple schedule. No need to make a mess of things by changing the schedule outside of pass/fail.

In fact, we can even tweak the settings to simplify passing and failing further, as with this add-on. I call it Low Stakes Anki (in late June/early July I made the default Low Stakes behavior for lapses of cards in the testing phase to be a reset to the last successful interval and I think this is quite good).

So the gap between Anki and the ‘real’ materials we want to study becomes that much narrower, in terms of logistics.

Once you reach criterion 2-3 times, making the item familiar, you can transition the card type with templates, and/or feel comfortable going to sentences/media, placing less pressure and need on the Anki reviews.

Anki in its core form is really a fast-mapping process, fast-mapping vocabulary, grammar points, kanji, facts and concepts, math and programming procedures, etc. This is part of a larger extended mapping or process, where you flesh out your knowledge through exposure and context.

But recall that Anki isn’t a pensieve for holding memories, already-learned items, it’s for active learning through testing, a flexible HUD.

To digress, for procedures as with maths or coding or physical activities, Anki is more of a scheduling reminder program that says ‘Hey, time to practice this math/programming task or motor skill for a while, here’s some useful digitally enhanced information to practice it, let me know when I can schedule it for the next practice at a later date, by passing or failing it’. You can take as long as you need for these tasks.

The point is, Anki is a scaffolding tool that gives us efficiently fast deliberate learning which allows us to continually ascend to the next layer of intuitive usage, so let’s make it a simple, streamlined process.

Systemic Functional Gistics

“Verbs: they’re the proudest—adjectives you can do anything with, but not verbs—however, I can manage the whole lot of them! Impenetrability! That’s what I say!” – Humpty Dumpty

We spend a lot of time skimming. In fact, for language in general, studies suggest we take a ‘good enough’ approach, satisficing rather than maximizing.

You might think of extensive reading (tadoku [多読]) or extensive listening when I mention all this: Consuming a lot of material, trying to get a ‘good enough’ understanding and moving on, without looking up words.

Research shows that while extensive reading/listening is great for motivation and necessary for well-rounded understanding, it is slow and inefficient, and it requires designing or finding material that doesn’t exceed your level too much: it’s best used for reinforcing what you’ve learned through deliberate study. Loosely speaking, study is typically the fast-mapping process, while contextualized usage is the slow-mapping, or extended-mapping, process.

So a combination is best, but most combinations tend to err toward too much study or too little, with an uncomfortable gap between study and authentic usage. It’s disheartening when you spend so much time studying, and it feels like real materials and enjoyment are just out of reach. You get burnt out. But when you try to skip the gap, the struggle to comprehend, or the inefficiency of just ‘moving on’ past difficult items, is subpar.

I’ve come up with various methods to overcome this, first giving more weight to deliberate study, gradually increasing usage, and learning in targeted batches to keep study/usage close together, but I think I’ve found something better, that I hope to streamline over time. It’s a continuation of my ideas such as ‘soft monolinguality’ or ‘incremental immersion’.

tanaka optimized

My new proposal is for those of us listless self-students who often can’t be bothered to study sentence cards, yet find studying words in isolation too dull. But we still want efficient study to complement our media consumption, ideally.

I propose you take a ‘good enough’ approach to studying Japanese, in addition to using it. A gist-based approach to studying for immediate usage, to be precise.

While I have created resources to employ a laid back, satisficing approach to studying, here I don’t mean to be slapdash with your studying, but to specifically study:

  • a) the least amount of items that will
  • b) allow you to comprehend the least amount of target material necessary to
  • c) understand its core content

This emphasis on study is important, because it takes years of experience or strong doses of deliberate practice to skim and scan at a good speed with decent comprehension.

With the proposed approach, we can minimize the gap between study and usage to just how long it takes to learn the minimal number of items for gist.

Note that I said target material. We want to be very specific. Right from the outset of your Japanese learning, you seek out interesting media, such as a chapter of a light novel (or a whole light novel), a news article, a television episode, etc., and tailor effortful study around the goal of just getting the gist of it, quickly and with minimal effort during actual media consumption. (Of course, if you’re really just starting out, first learn kanji and words together.) This is why, as I’ve said in the past, generalized frequency lists are a bit problematic… they’re too general, and may not apply well to your input/output goals.

Skimming is for gist; scanning is for extracting specifics. Gist is gleaned by attending primarily to the meanings of multiple words at a time, rather than to syntax. The ‘good enough’ approach to language suggests that full syntactic processing is only used when semantic processing needs it.

If the idea of ‘good enough’ bothers you because you want to be perfectly native-like, keep in mind that you will never be native. I don’t mean that you can never be indistinguishable from a native in performance; I mean that your approach to learning should be piecewise—quantum bits—because you will always be a mixed-language user whose usage differs from a native’s due to this mixing. Note that I said ‘user’, not ‘learner’. In reality, the goal is to be a successful L2 user, fluent in what you’ve learned, no matter your level. The process is always additive—rather than biasing attention to the incomprehensible, everything you learn is a delightful addition to your mental toolkit.

So what are the core items to learn from materials for getting the gist? What do we study that isn’t too isolated and dull, nor too lengthy?

The answer to both of these questions is predicate-argument structures (PAS) [述語・項構造]. This is a fancy term for verbs and what they connect to, generally subjects and objects. In the sentence ‘I skipped school’, the PAS is ‘I’ (subject) ‘skipped’ (verb) ‘school’ (object).


If you look at a sentence in terms of its syntactic dependencies (an adjective describing a noun is ‘dependent’ on the noun, etc.), the PAS represents semantic relations as well, indicating the roles of the terms. PAS are very useful for information retrieval and other areas of natural language processing.

Let’s take a look at verbs, for a moment.

“Verbing weirds language.” — Calvin & Hobbes

At the core of every sentence is a verb upon which everything depends, directly or indirectly, in terms of dependency grammar (Google’s Parsey McParseface uses this distinctly non-Chomskyan grammar), where every word in a sentence is dependent on another—except the root verb, which isn’t dependent on anything. It’s called ‘verb centrality’.

Doing the hard work of every sentence is this main verb, even if it is only indirectly felt, or implied through context rather than explicitly present in the sentence, as sometimes happens in speech. In the obsolete ‘generative grammar’, I believe they used to call it a ‘matrix verb’.

  • In ‘construal theory’, primary phrases are root verbs and their subjects.
  • Contextualized action verbs (‘threw the ball’) activate motor sequences (throwing something) when we process them.
  • Verbs, of course, are frequent in programming. Although Java has had issues.
  • Short action verb phrases are quite useful for learning to think in Japanese.
  • Calling ‘heat’ a noun instead of a verb had repercussions, due to how language shapes our thoughts.

Even cooler: Japanese is a verb-friendly language. Verbs are acquired relatively earlier and more quickly in Japanese, and used at a higher frequency. This is because a verb always occurs at the end of a Japanese sentence or PAS, and verb arguments (e.g. the subject ‘I’) are often omitted. These traits, in languages like Japanese, factor into how the language is learned and used, making verbs more central.

Tae Kim has suggested that the farther you get from the main verb of a sentence, the more extraneous the information becomes. This could be because dependency distance (distance between a word and what it depends on) typically increases the difficulty.

“Whenever the literary German dives into a sentence, that is the last you are going to see of him until he emerges on the other side of his Atlantic with his verb in his mouth.” — Mark Twain

Because Japanese is a head-final language, with verbs coming after their complements and the root verb at the end, studies suggest that the complements of a verb play a critical role early in Japanese sentence processing, narrowing predictions. Supposedly the head-final nature of Japanese affects overall perception, as well. (See also Senko Maynard on the agent-does structure.)

In the past I’ve discussed kotodama (言霊), the ‘word-soul’ in Japanese, where kanji are living objects; if this places an emphasis on nouns, then I could say that verbs are like ‘power words’.

The ‘bare verb’ (presented by itself) is typically enough for Japanese children to learn from because, as noted, subjects are often dropped in Japanese; so the morphology (the form, such as -te iru for the ‘-ing’ sense) is often relied on for success.

So, while the subject isn’t all that useful for learning verbs specifically, Japanese verbs are more general than English ones, and the context added by objects (‘ball’ in ‘threw the ball’) is very useful.

Still, fast-mapping verbs is in general harder than fast-mapping nouns, so it’s good to use spaced retrieval (e.g. Anki) for learning verbs rather than relying on usage alone. Using Anki to study is like fast-forwarding the fast-mapping. It’s either Anki or robots.

Let’s put all this together to implement our ‘good-enough’ approach.


For us, it’s: 「あきらめが悪いな」 – Why don’t you give up?

We want to study the minimal core Japanese needed for gist, narrowing down our targets for reading and listening tasks. We know where to locate the most essential aspects of sentences: at the end, with the root verb. We want at least one argument, for context, such as the object. Subjects are a bit less important, as noted; also, as pronouns or names (‘I’, ‘she’, ‘Hiroshi’, etc.), they’re often repeated and relatively small in number, so they’ll be easier to recognize.

So what do we do? We ‘normalize’ the sentences, compressing them to just the predicate-argument structures, which connect verbs to their complements whatever the distance between them.

Instead of every sentence being a lengthy, special unique snowflake, we extract the essential aspects. In research, the best sentence compression and translation methods use bunsetsu, dependency grammar, and PAS, making the results more readable.

What’s nice about this is you’re studying chunks, which I’ve discussed before in the context of language processing and collocations. Learning these improves fluency, and they generally need to be deliberately studied.

Often the PAS will recur, so learning a PAS can help you skim multiple sentences, and you can flesh out your understanding as you see verbs used in various ways and arguments are mixed and matched. And of course, you’ll be acquiring more and more PAS.

Remember that I said for gist, semantic processing is ideal? PAS let us go beyond the surface to the deeper semantic structures. They’re basically ‘semantic frames’ for events. They make up the propositions used in Word Grammar and elsewhere.

Some have even suggested that predicate-argument structures are the core of our mental representations, upon which modern languages are mapped.

They’re thematically related clusters of words, rather than semantically related clusters. Thematic clusters are easier to learn than semantic, because when words (or kanji) are closely related in meaning, they interfere with one another.

In document summarization research, predicate-argument structures form topic themes which can be used to identify key areas to focus on.

They also form ‘generative’ language, allowing you to produce important messages (think ‘survival phrases’ using verbs like ‘eat’, ‘drink’, ‘sleep’, etc.). So PAS are good for production exercises, also, such as tweeting.

“The whole of nature, as has been said, is a conjugation of the verb ‘to eat’, in the active and passive.” — W.R. Inge

We might also correlate PAS with verb-noun collocations, which have often been a target of language learning research. Modified dictoglosses (a dictogloss is a dictation/output task) have been shown effective for learning these by underlining them in target texts.

We can differentiate a PAS and a verb-noun collocation in that the latter describes common patterns such that they tend to be processed like a single unit, while a PAS is any instance of predicates and their arguments.

It seems learning these chunks as wholes is best, to avoid ‘cross-association’ mistakes in matching the separated words.

Recall that skimming is for gist, scanning is for specifics. In a sense we can treat the arguments, typically nouns, as the concrete specifics to scan for, and the verbs as the general gist, with the root verb of the sentence acting as a kind of router.

Or you can look at verbs/predicates as functions (as linguists sometimes do since the predicate calculus of Frege), which take arguments as input and output coherent events: verb(subject, object) → subject-object-verb, a predicate-argument structure. We primarily want the skill to parse this. (I guess we could say the main verb is a higher-order function? Perhaps a metafunction? Maybe we can throw in case markers as type hinting (since particles can be omitted colloquially)? Or let’s not.)
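To make the function analogy concrete, here is a toy Python sketch; the helper name and the arrow formatting are just illustrative (the arrows match the PAS card examples shown later):

def predicate(verb, obj=None, subject=None):
    """Treat the verb as a function of its arguments and return an SOV-ordered PAS string."""
    arguments = " ".join(a for a in (subject, obj) if a)
    return f"{arguments} ← {verb}" if arguments else verb

print(predicate("休んだ", obj="学校を"))                    # 学校を ← 休んだ (skipped school)
print(predicate("投げた", obj="ボールを", subject="彼が"))  # 彼が ボールを ← 投げた (he threw the ball)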

[function diagram]

And as noted before, for abstraction, nouns can be seen as data structures/objects, verbs as functions/algorithms. Functional programming style (as opposed to OOP) tends to be verb-biased, like Japanese. Of course, Japanese is a hybrid morphographic/phonographic system with the best of both worlds, and these days programming languages are hybrids of functional and object-oriented styles, as seen with Clojure. Perhaps this is why Clojure is so expressive. But I digress.

If we want to add a layer to incorporate more scanning, we want to look at ‘keywords’, rather than just PAS. The keywords of a text are the nouns with the most dependents. These help summarize the content of documents. You might relate this to valency, which refers to how many arguments a predicate has.
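As a rough sketch of picking keywords this way, assuming you already have (dependent, head) pairs from a parser such as CaboCha (the example pairs and noun set below are made up), counting dependents per noun is all there is to it:

from collections import Counter

# Hypothetical (dependent, head) pairs; in practice these come from the parser's output.
dependencies = [
    ("赤い", "薬品"), ("危険な", "薬品"), ("薬品を", "混ぜた"),
    ("古い", "学校"), ("学校を", "休んだ"),
]
nouns = {"薬品", "学校"}

# Keywords = the nouns with the most dependents.
dependent_counts = Counter(head for _, head in dependencies if head in nouns)
print(dependent_counts.most_common())   # [('薬品', 2), ('学校', 1)]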

So the keywords and PAS are the potential core targets for getting the gist of any material you want to consume.

Now, how do we go about extracting and studying them?

First off, just knowing what I’ve told you, you can take any resources you already have, and focus initial study on the verbs and their complements.

It would be nice if we could just select the last couple of pieces of any Japanese sentence and voila, but the arguments we want aren’t commonly so well-placed next to the final verb. For example:


The object ‘chemicals’ is separated from ‘mixed’ by ‘at a ratio of 1 to 3’.

But we can also automate this process with ChaPAS. ChaPAS takes any Japanese text as input and outputs text annotated with the dependencies and predicate-argument structures. I’ve written a short tutorial on how to use it here.

It doesn’t require programming knowledge; you’ll just need to install a suite of tools and copy a few command-line statements. However, if you know regular expressions or other fancy find/replace techniques and the like, that will help clean up the output. In the future, I will try to create tools to streamline the process of listing the PAS and bulk-producing pretty diagrams from the ChaPAS output. Until then, I will create some resources, to be uploaded shortly. Update: For now I’ve modified this script to output PAS.



Another bonus of ‘normalizing’ sentences for media is that we can share decks and other resources that contain only these, without copyright fear. You can’t copyright words or short phrases; and because they’re compact and generalized, we don’t need the original audio: we can use WWWJDIC’s audio for each word, or text-to-speech (TTS) to capture the PAS in a single piece. As for copyright, see also Google’s successful ‘snippets’ defense for its book-scanning, and corpora of copyrighted works that only use KWIC.

Note on manipulating the PAS: Keep in mind it’s typically SV (subject-verb), OV (object-verb), or SOV (subject-object-verb), in general (with an indirect object and direct object, it’s usually SIOV). Occasionally you’ll see OSV (object-subject-verb), because word order is quite free in Japanese as long as the postpositions and case markers (the particles) are maintained. But it’s best to keep things transfer-appropriate, the items you study reflecting authentic usage. To digress, another rarity is crossed dependencies.

Ideally what we want to focus on are PAS, supplemented by keywords. Mainly we want to focus on the primary PAS (from the root verb), but this is a refinement that I will try to add in a tool, later, along with a particular automated summarization technique using keywords that I have in mind, informed by my own ideas and research I have read.

We want to focus on the nominative (が)—the subject, the accusative (を)—the object, and the dative (に)—the indirect object. I would prioritize accusative, then dative, then subject, but it’s not essential to prioritize anything. For names, we also have the ability to detect Named Entities with ChaPAS and CaboCha. Another refinement.

With ChaPAS, as I noted, you can end up with a list of the PAS, or diagrams, though diagrams are more involved. For our gist-based method, the diagrams are really just supplements; it’s the isolated PAS in text form that we want:

学校を ← 休んだ (skipped school)

薬品を ← 混ぜた (mixed the chemicals)

Notice I kept the inflection. This is how the words are actually used, and recall that we want things to be transfer-appropriate, learning the conjugations in context on both cards and in usage. ChaPAS stores the dictionary form (lemma) in the output, so we can still make use of this in various tools.
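If you end up with the PAS as plain text like the above, getting them into Anki can be as simple as writing out a tab-separated file and using File→Import. A minimal sketch, assuming a simple two-field note type; the meanings and file name are made up:

import csv

# Hypothetical extracted PAS: (argument, predicate, English meaning).
pas_items = [
    ("学校を", "休んだ", "skipped school"),
    ("薬品を", "混ぜた", "mixed the chemicals"),
]

with open("pas_cards.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    for argument, predicate, meaning in pas_items:
        front = f"{argument} ← {predicate}"    # keep the inflected form, as discussed above
        writer.writerow([front, meaning])       # Front <tab> Back, ready for Anki's importer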

For the format of cards, you can just stick the PAS on the front of an Anki card with the meaning on the back, but the multiple words might be too hard a recognition task. You could list the kanji meanings on the front with the PAS, offsetting the difficulty by the hints from kanji meanings, using word formation principles to put them together. But that might make it too easy.

So to modulate the difficulty/ease, I recommend using the add-ons I created here for just this purpose.

Essentially, you’ll have a mixed recognition/production card with those add-ons, where you place the kanji-jumbled version of the PAS on the front, along with its meaning and the shuffled meanings of the kanji, and you recall the properly spelled version of the PAS.

I also recommend putting the audio on the front with readings—sound isn’t as important for reading Japanese as it is in English, as Japanese is morphographic. Additionally, if we want listening tasks, rather than reading tasks, then we want audio (with Japanese text as supplement, as with subtitles) to be our cue, making the task transfer-appropriate. In fact, I think PAS-only Japanese subtitles would be good to try. Perhaps mixed with adaptive subtitles. You can actually use the semantic content of PAS as a shared anchor between sentences, perhaps, say, a Japanese and an English sentence.

But that’s another refinement.


Once we use the PAS (preferably at least the root PAS for each sentence), and possibly keywords, to build a foundation, then after just 2-3 reviews of each batch you’ve extracted from the material, we can consider the batch well-learned and start consuming the media, focusing on just the gist: we identify the small studied fraction of each sentence as our goal and use it to infer the rest—that is, the PAS isn’t the endpoint, it’s the bridge to the rest of the media. For reading, since the PAS are conjugated, you might re-insert and highlight them in the text, a form of ‘input enhancement’.

I don’t advocate it (yet), but a seductive notion is that since spacing is a complement to testing (the ‘retrieval’ in spaced retrieval), which is independent and equal or superior in effect, PAS make cramming a possibility—ultra-fast-mapping in Anki followed by a heavier emphasis on incidental learning through usage, perhaps retiring cards quickly to focus on media and cull your decks. Or a kind of spaced cramming, rather, since even microspacing in a single day is superior to cramming. ‘Preloading’ vocabulary rather than ‘prelearning’ it.

This would require a substantial media consumption rate and a high negative capability, as the poet Keats called it. If you did something like this, you could perhaps use a single deck that only ever contains the gist cards for a single text or episode. Cards can be suspended, or automatically emptied with this add-on and the periodic use of the Empty Cards option in Anki.

It’s up to you whether you want to place multi-argument PAS on the same card, or split them up. That is, if you have subject ← object ← verb, you could make a subject ← verb card and an object ← verb card. I think it’s probably best to keep it all together.

If you don’t intend to share materials and/or they’re not copyrighted, then you could include the original sentence meanings (not the sentences) on cards, focusing on the aspect of the meaning which captures the PAS meaning. Don’t worry, you won’t accidentally memorize the entire sentence meanings, ruining the novelty. If our memory was that amazing, we wouldn’t need much studying, would we?

I’ve been focusing on PAS, but for keywords, rather than doing some kind of dependency analysis with ChaPAS or CaboCha, the simple version (as with simply identifying verbs and objects in sentences) is to pick an adjective-noun combination from sentences that contain one. These are easier to learn (this is true of adjective-noun kanji compounds, also). You could perhaps extend PAS that have noun arguments to include any adjectives modifying the noun. Another refinement.

Another use of PAS is to create ‘thinking in Japanese’ cards.

I’ve presented to you the justification and implementation, and I think you can take what you want from it and make it work, but I do encourage you to look into ChaPAS and other tools, and I believe summarization can make things even better, by narrowing the number of PAS even further, and giving you short extracts of target materials. That is, document analysis can give ideal sentences to compress and prioritize for study of the document as a whole, or just the extract.

How you place this gist-based approach in your overall regimen is up to you.

I have found a few summarization tools, and in my own studies and brainstorming have discovered actionable methods I intend to share in the future. In the meantime, you can look into ledes for Japanese news articles, just picking the first sentence or two. Likewise for paragraphs and topic sentences. However, this depends on how inductive the style is. Here’s a corpus of lead lines from web articles that have been annotated with PAS–the readme is very helpful, and the PAS annotations are in the .KNP files which can be opened with a text editor. But stay tuned for this and other resources…

Oh, and Systemic Functional Gistics comes from Systemic Functional Linguistics; the metaphor for verbs as functions; a functional approach to studying for immediate use; and of course, skimming for gist.

Anki Text Playback

This is a rough template hack for Anki that I created: it displays an Expression only when a trigger is clicked, and only for a length of time, in milliseconds, designated by a Length field on your card (or a Sequence Marker field; see below for formatting). I emphasize the required field because Anki will crash if you try to run the template as-is without it. See the final point in this entry for a more complete safeguard. Update: Primer.

After that time the expression will disappear, and you’ll have to click ‘Play Text’ again to get it to reappear. Feel free to replace <p> with <div> etc., or onClick with onMouseOver, or to style and/or replace ‘Play Text’ to your preference; perhaps even use a play button of some sort from Google Images.

I created this for a few reasons.

First, for deaf learners (or hearing learners, for various reasons), it can simulate in text the time limitations of the video or audio clips from shows that subs2srs would generate but that are absent or inaccessible.

That is, without video or audio clips, studying lines in Anki won’t clue you in on precisely how long subtitles would be displayed on a screen when watching the actual video, making the comprehension cards less transfer-appropriate; previously I recommended referencing the sequence marker information and constantly keeping the time in mind when reviewing statically displayed text, but that’s not an ideal solution.

You could also use this for regular video clip comprehension cards, to further simulate the video viewing experience, with transient, selectable text rather than hardcoded Japanese subtitles. Perhaps replace onClick with onMouseOver to streamline the playback, and if you use the replay button add-on, you could place the Video/Audio field directly adjacent to the link to make the playback onsets sync up further.

With this code, which you ought to stick on the Front of subs2srs comprehension cards, use the Sequence Marker field instead of Length, setting the sequence marker format in subs2srs’s preferences to ${d_total_msec} (thanks cb4960!). Perhaps rename the field back to Length if you have a lot of subs2srs cards in the same deck with a different sequence marker format, as those would cause a crash onClick.

Second, you can use it for fluency activities: taking well-learned language items and studying them under time pressure. In the past I suggested auto-answer add-ons, which have since flourished and are useful in their own right, but they are not ideal for fluency activities, as with such add-ons you only get one shot and there’s the added emphasis on grading.

With this code, you can turn Anki cards into fluency exercises in a few ways. For example, you could have multiple instances of the code on the front, each with a different duration, getting shorter and shorter for an exercise similar to Paul Nation’s 4-3-2 speaking activity.

Again, grading isn’t necessarily the aim here, so perhaps filtered decks for mature cards (prop:ivl>=21) that don’t affect scheduling would be useful here.

Or you could put the code on the back of a regular card, so you can practice your fluency after getting feedback. In which case you’d likely want to remove the FrontSide code, if any, thus displaying only the playback elements without recourse to statically displayed answers.

You can also treat it as a perishable hint field, of course.

  • Speaking of feedback, you can use this code for delayed corrective feedback in Anki. Stick it on the back of the card. By default it’s set to wait 3 seconds before showing you the answer, as I seem to recall that being the time used in research which showed the benefits of delaying corrective feedback. Of course, the downside is that it adds 3 seconds to each card review, so perhaps reserve it for certain types of cards…
  • You might want to wrap the text playback code in {{#Length}}code{{^Length}}{{Expression}}{{/Length}} – Or Sequence Marker field (see above). This tells Anki to only display the text playback field if the Length field isn’t empty; if it’s empty it will just display the Expression field. This should eliminate the risk of Anki crashing.

Here’s a sample template if you’re doing ja・ミニマル’s video clip comprehension cards. Make sure to keep cards made with that Sequence Marker format separate from other subs2srs cards. Or perhaps after creating these, you could rename the field to Length and use that instead.

looper v→l/o: regenesis

A more advanced technique than before to automate looping is to use selective card generation. It requires this main.py for Morph Man, and these 3 fields in your deck: mx0 (set to Yes if a card was ever entirely mature; never emptied), k1e (set to Yes if a card was ever k+1; never emptied), and outlier (set to Yes if a card is m+1 or higher, or k+2 or higher; emptied at m+0 and k+1). I added the config.py above also so you can see the field setup.

For the r→p loop, where you switch to studying vocabulary as production after it becomes mature through studying it as recognition, you use this front layout for the production template:


See the original Looper for full layouts. Production cards will only be generated if a vocabulary word is studied to maturity via recognition cards. After this, failing production cards won’t result in the card disappearing, as the mx0 field remains, so you can restudy it as production, the way it was intended.

For the v→l/o loop, going from vocabulary to listening comprehension or output cards, set up 3 templates: vocab, comprehension/output, and outliers.

For the front of the vocab template:


For the comprehension or output template front:

{{#mx0}}{{Sentence Meaning}} and {{Shuffled Gloss}} if output, or {{Video}} or {{Sentence Audio}} if comprehension{{/mx0}}

For the outliers front template:


With this set up, if a card has multiple unknowns it will remain in the outliers template, so it can grow into a k+1 card. Without this template such notes would be destroyed when using Tools→Empty. Also, in some cases, cards may be entirely known yet not entirely mature when MM3 analyzed them (thus no focusMorph, which lasts until cards are m+0, but which is only set if a card has 1 unknown when analyzed); with this template these will also remain.

So primarily you will study the vocab and output/comprehension cards, but you can also study the outliers should you wish, or they will grow on their own as your knowledge grows.

Use the filtered decks at the end of the previous looper tutorial to study cards as vocab or listening comprehension/o+1. The k1e field and filter ensure that you can study cards as vocabulary always, despite lapses which may affect k+N, just as mx0 ensures output/comprehension.

Once you have this set up as above, and MM3 has processed cards, you can run Tools→Empty and cull the excess cards from your decks, leaving you with a lean system. They’ll regenerate as needed, as you learn. In fact the system’s so lean there’s some redundancy with the filtered decks, but I like to have a kind of separation of concerns.

If you want to review the vocab and listening comprehension/output mixed together, you can use the filter -outlier:yes (or perhaps outlier: which should designate only empty fields, the way typing nothing after the colon does in the browser search).