Eric J Ma's Website

How to Do Agentic Data Science

written by Eric J. Ma on 2026-02-01 | tags: agentic coding experiments logging reports journal plots iteration structure exploration

In this blog post, I share ten lessons I've learned from experimenting with agentic coding in data science, from setting clear goals and structuring projects to leveraging coding agents for faster iterations and better insights. I discuss practical tips like maintaining logs, generating diagnostic plots, and treating the agent as a partner in exploration. Curious how you can make AI your jazz partner in data science and boost your productivity?
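As a hedged illustration of the logging-plus-diagnostic-plots habit mentioned above (this is not code from the post; the helper name, file layout, and matplotlib usage are hypothetical choices for the sketch), one way to make every agentic experiment leave a reviewable trail looks roughly like this:

```python
# Hypothetical sketch: save each diagnostic figure and append a timestamped
# note to a running markdown log, so agentic experiments leave a paper trail.
from datetime import datetime
from pathlib import Path

import matplotlib.pyplot as plt


def log_figure(fig, name: str, note: str, logdir: str = "logs") -> Path:
    """Save `fig` under `logdir` and append a timestamped entry to log.md."""
    outdir = Path(logdir)
    outdir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    png_path = outdir / f"{stamp}_{name}.png"
    fig.savefig(png_path, dpi=150, bbox_inches="tight")
    with open(outdir / "log.md", "a") as f:
        f.write(f"- {stamp} | {name}: {note} ({png_path.name})\n")
    return png_path


# Usage: after each experiment, one call records both the plot and the note.
fig, ax = plt.subplots()
ax.hist([0.1, 0.4, 0.35, 0.8, 0.05], bins=5)
ax.set_title("Residual distribution")
log_figure(fig, "residuals", "Residuals look roughly symmetric; no obvious skew.")
```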

Read on... (2028 words, approximately 11 minutes reading time)

Model feel, fast tests, and AI coding that stays in flow

written by Eric J. Ma on 2026-01-25 | tags: llm autonomy supervision personality verbosity harness refactoring workflow testing ergonomics

In this blog post, I share my hands-on experience using AI coding models, focusing less on benchmarks and more on the day-to-day feel—how model style, personality, and the right testing harness impact productivity and flow. I discuss the trade-offs between long-horizon autonomy and short-horizon iteration, and why a constructive, enthusiastic AI assistant matters as much as raw performance. Curious how the right mix of model and harness can transform your coding workflow?

Read on... (1901 words, approximately 10 minutes reading time)

How to build self-improving coding agents - Part 3

written by Eric J. Ma on 2026-01-19 | tags: agents ai workflows productivity skills

In this blog post, I share how to combine repo memory and reusable skills to create self-improving coding agents. I walk through a maturity model, explain when to update AGENTS.md versus creating a skill, and highlight the importance of metacognition in systematizing your workflows. I also discuss how agents are evolving beyond coding tools into general-purpose teammates. Curious how you can make your coding agents smarter and more helpful over time?

Read on... (1129 words, approximately 6 minutes reading time)

How to build self-improving coding agents - Part 2

written by Eric J. Ma on 2026-01-18 | tags: agents ai skills mcp workflows

In this blog post, I dive into the concept of 'skills' for coding agents—reusable playbooks that streamline repetitive tasks and make workflows explicit. I share real examples, from debugging to release announcements, and discuss how skills evolve through iteration and feedback. I also touch on the challenges of distributing and updating skills compared to MCP servers. Curious about how these skills can make your coding agents smarter and more efficient?

Read on... (1154 words, approximately 6 minutes reading time)

How to build self-improving coding agents - Part 1

written by Eric J. Ma on 2026-01-17 | tags: agents ai workflows productivity software

In this blog post, I share my approach to making coding agents truly self-improving by focusing on operational feedback, not just model updates. I explain how using an AGENTS.md file as repository memory and developing reusable skills can help agents learn from mistakes and reduce repetitive guidance. My goal is to create an environment where agents get better each week without constant babysitting. Curious how these strategies can make your coding agents more effective?

Read on... (1132 words, approximately 6 minutes reading time)

How I fixed a browser selection bug with sequence alignment algorithms

written by Eric J. Ma on 2026-01-06 | tags: javascript bioinformatics katex canvas algorithms bugfix highlighting selection web development ui

In this blog post, I share how a tricky text highlighting bug in my canvas-chat project led me to use a classic bioinformatics algorithm, Smith-Waterman, to solve messy browser selection issues—especially with KaTeX-rendered math. Instead of struggling with normalization, I reframed the problem as sequence alignment, which proved robust and effective. Curious how an algorithm from DNA analysis can fix web UI bugs?
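The summary only names the technique, so here is a minimal sketch of the idea (in Python, not the project's actual JavaScript implementation; the function name, scoring values, and example strings are illustrative assumptions): character-level Smith-Waterman alignment mapping a messy browser selection back onto the clean source text.

```python
# Minimal sketch, not canvas-chat's code: character-level Smith-Waterman
# local alignment, used to locate a noisy selection inside the clean text.
def smith_waterman_span(query: str, target: str,
                        match: int = 2, mismatch: int = -1, gap: int = -1):
    """Return (start, end, score) such that target[start:end] is the
    best local alignment of `query` within `target`."""
    n, m = len(query), len(target)
    # Score matrix with an extra zero row/column; local scores never go below zero.
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best_score, best_i, best_j = 0, 0, 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if query[i - 1] == target[j - 1] else mismatch
            score = max(0, H[i - 1][j - 1] + sub,
                        H[i - 1][j] + gap, H[i][j - 1] + gap)
            H[i][j] = score
            if score > best_score:
                best_score, best_i, best_j = score, i, j
    # Trace back from the best-scoring cell to find where the aligned
    # region begins in the target.
    i, j = best_i, best_j
    while i > 0 and j > 0 and H[i][j] > 0:
        sub = match if query[i - 1] == target[j - 1] else mismatch
        if H[i][j] == H[i - 1][j - 1] + sub:
            i, j = i - 1, j - 1
        elif H[i][j] == H[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return j, best_j, best_score


# Hypothetical example: the browser reports a duplicated glyph ("mcc2"),
# but local alignment still recovers the intended span in the source text.
selection = "E = mcc2"
source = "Einstein wrote E = mc2 in 1905."
start, end, _ = smith_waterman_span(selection, source)
print(source[start:end])  # -> E = mc2
```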

Read on... (1566 words, approximately 8 minutes reading time)

Canvas Chat: A Visual Interface for Thinking with LLMs

written by Eric J. Ma on 2025-12-31 | tags: ai llm opencode claude visualization tools productivity

I built Canvas Chat, a visual interface for nonlinear LLM conversations, over Christmas break using OpenCode and Claude Opus 4.5. The tool is built for a specific job: thinking through complex problems where exploration branches in multiple directions. Features include branching conversations, highlight-and-branch, multi-select merge for synthesis, matrix evaluation, and integrated web search. It's open source and runs on Modal.

Read on... (1509 words, approximately 8 minutes reading time)

You Can Just Make Stuff with OpenCode and Claude Opus 4.5

written by Eric J. Ma on 2025-12-28 | tags: ai opencode claude automation workflow llm reasoning development review tools

In this blog post, I share how using OpenCode and Claude Opus 4.5 has transformed my approach from writing code to simply building—directing AI to create what I envision. I discuss how these tools handle everything from infrastructure to greenfield apps, and how reasoning traces have become more important than code review. I also reflect on unlearning old habits and embracing new possibilities as AI models improve. Curious how this shift could change your own workflow?

Read on... (1962 words, approximately 10 minutes reading time)

How I Themed My tmux with OpenCode + Claude (And When to Switch Models)

written by Eric J. Ma on 2025-12-27 | tags: ai opencode claude tmux terminal creativity workflow pair-programming

I pair-programmed a tmux status bar theme with OpenCode and Claude, discovering along the way when to switch between Sonnet and Opus models. The real insight: AI enables creative expression by bridging the gap between aesthetic vision and technical implementation, letting me work like a designer even though I'm not one.

Read on... (1687 words, approximately 9 minutes reading time)

Two years of weekly blogging and what 2025 taught me

written by Eric J. Ma on 2025-12-25 | tags: blogging retrospective coding agents llms bayesian biotech career writing marimo modal data science

Looking back on my second year of weekly blogging: I published 50 posts in 2025, bringing my two-year total past 100. This year was dominated by coding agents and AI-assisted programming, with extensive writing on AGENTS.md, autonomous agents, and productive patterns for working with AI. I also explored Bayesian methods for biological applications, got excited about Marimo and Modal, and wrote about data science leadership and career development. Two years of consistent writing has reinforced that writing clarifies thinking, consistency compounds, and the best posts come from problems you're actively solving.

Read on... (5722 words, approximately 29 minutes reading time)