
Principles for using AI autodidactically

written by Eric J. Ma on 2025-06-07 | tags: llms autodidactic ai learning agency syllabus education critical digital knowledge


In this blog post, I share insights from my interviews with researchers and digital professionals on how to use AI, especially large language models, as a tool for active learning rather than passive consumption. I discuss strategies like creating personalized syllabi, applying critical thinking, and using AI for feedback, emphasizing that true learning requires effort and agency. Want to know the key trick to making AI your learning partner instead of your crutch?

We need to move beyond passive consumption

Imagine having a personal tutor who's absorbed millions of books, papers, and discussions across every field of human knowledge. That's essentially what Large Language Models (LLMs) offer us today. As David Duvenaud aptly describes them, LLMs are a "galaxy brain" of knowledge waiting to be tapped.

But having access to information isn't the same as learning from it. The difference lies in how we engage with these AI tools - passively consuming their outputs versus actively using them to expand our understanding. Through my interviews with researchers and digital professionals, I've discovered patterns in how the most effective learners use AI autodidactically - teaching themselves with AI as their assistant, not their replacement.

Lessons from autodidactic AI users at work

I have conducted many interviews at work about how folks in Moderna's Research and Digital organizations use AI. While the discussions are highly specific to our work and sometimes touch on IP that I cannot reveal, there are clear principles and patterns in how the best folks use AI in their day-to-day work to learn new things.

Generate a personalized syllabus for learning

They recognize that any kind of learning involves effort and hard work, and that the pain of the process is non-negotiable if anything is to stick. So instead of using AI to do the work for them, they start by using AI to generate a tailored syllabus that lets them progressively move up the knowledge ladder with increasing effort.

This is what I would call "scaffolding a personalized syllabus". Their prompts here often include a bit about their current role, their prior training, their own objectives for learning, and what they know from prior experience about how they learn best. They then iterate on the syllabus and follow up with the LLM as they work through it.
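As a rough illustration, here is a minimal sketch of what such a prompt could look like if sent programmatically. It assumes the OpenAI Python client; the learner profile and model name are hypothetical placeholders, not a prescribed template -- the same prompt works fine pasted into any chat interface.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical learner profile -- swap in your own role, background, and goals.
syllabus_prompt = """
I am a computational biologist comfortable with Python and basic statistics.
I want to learn the fundamentals of causal inference for observational data.
I learn best by working through small, concrete examples before theory.

Draft a 6-week syllabus for me, ordered from easiest to hardest,
with one hands-on exercise per week and pointers to reputable sources
I can use to verify what I learn.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here
    messages=[{"role": "user", "content": syllabus_prompt}],
)
print(response.choices[0].message.content)
```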

Apply one's ability to think critically to LLM outputs

They recognize that questions are a great way to learn, so they will continuously question an LLM to draw out answers. The act of generating a question as a human is part of the effort needed.

They apply the skill of critical thinking to the answers generated by an LLM, asking questions such as, "if this is true..." or "is this coherent with...". They do not blindly accept the output of an LLM!

Apart from checking for coherence with what they already know, they verify claims by cross-checking reputable sources on the internet -- scholarly literature, expert writing, etc.

At a meta-level, knowing that an LLM can be blinded by its own conversation history, they will explicitly prompt it with contrary points whenever they find an angle that demands explanation, using prompts that start with "but I remember that..." or "this sounds suspicious, could it be that..." (a sketch of this pattern follows below).
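Here is a hedged sketch of what that pushback can look like in code, again assuming the OpenAI Python client; the conversation content is invented purely to illustrate the pattern of carrying the history forward and then arguing against it.

```python
from openai import OpenAI

client = OpenAI()

# Keep the conversation history, then explicitly push back on a suspect claim.
# The exchange below is invented purely to illustrate the pattern.
messages = [
    {"role": "user", "content": "Explain why method X outperforms method Y."},
    {"role": "assistant", "content": "(the LLM's earlier answer goes here)"},
    {
        "role": "user",
        "content": (
            "This sounds suspicious -- could it be that the advantage only "
            "holds on small datasets? But I remember that method Y scales "
            "better. Argue the contrary position and tell me what evidence "
            "would distinguish the two."
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```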

Also, in the absence of another human, they use LLMs to provide an initial critique of what they have produced (e.g. a piece of writing). They use LLMs the same way jazz musicians riff off one another.
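As a small sketch of that feedback loop (same client assumption as above; the draft filename is hypothetical), one might paste a draft into the prompt and ask for targeted critique rather than a rewrite:

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()

draft = Path("draft_blog_post.md").read_text()  # hypothetical draft file

critique_prompt = f"""
Act as a critical but constructive reviewer. Do not rewrite my draft.
Instead, list the three weakest arguments, any claims that need a citation,
and any places where the logic does not follow.

Draft:
{draft}
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": critique_prompt}],
)
print(response.choices[0].message.content)
```

Asking for a list of weaknesses rather than a rewrite keeps the human doing the revising, which is the whole point.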

What's the core trick?

At its core, the main "trick" to using an LLM autodidactically is to avoid delegating critical thinking to the LLM and instead to apply the full force of one's agency. We need to leverage the galaxy brain of knowledge from its training set (and, where applicable, its internet search capabilities) while applying individual effort by critically thinking through LLM outputs. These are essentially the skills we were taught to hone in literature class in high school, debate club in junior college, philosophy of science class in undergrad, and scientific journal clubs during graduate training!

AI has brought philosophical questions of human agency into sharp relief. Like any tool, LLMs can be used to increase your agency or to diminish it. It's a double-edged sword. Use it for the former!


Cite this blog post:
@article{
    ericmjl-2025-principles-for-using-ai-autodidactically,
    author = {Eric J. Ma},
    title = {Principles for using AI autodidactically},
    year = {2025},
    month = {06},
    day = {07},
    howpublished = {\url{https://ericmjl.github.io}},
    journal = {Eric J. Ma's Blog},
    url = {https://ericmjl.github.io/blog/2025/6/7/principles-for-using-ai-autodidactically},
}
  

I send out a newsletter with tips and tools for data scientists. Come check it out at Substack.

If you would like to sponsor the coffee that goes into making my posts, please consider GitHub Sponsors!

Finally, I do free 30-minute GenAI strategy calls for teams that are looking to leverage GenAI for maximum impact. Consider booking a call on Calendly if you're interested!