
Interfaces should make understanding opt-out, not opt-in


Hi,

I am Aman, and this safe space is where I am most raw with my thoughts; hop on if you’d like to interact with them :))


Low agency

For the last 3-4 months, I’ve been building things with .NET as my primary backend framework. I mostly operate from a base-level intuition of how the system is supposed to work (based on my past experience). Beyond that, I have no understanding of:

  • How to make the program more efficient

  • How to make it faster

  • How to make it simpler to understand

and so on. For all of that, I have been relying heavily on the LLM gods (Claude Code and Codex CLI).

I work in a very procedural way with LLMs, where I do the legwork of:

  1. Thinking through what I need to accomplish

  2. Working out how the system is supposed to interact

  3. Conveying the instructions through my prompt in a clear, bulleted list

It has been working well overall, with a few problems here and there.

One thing I have noticed is that when too much autonomy is given to the LLM, its choice of external tools or third-party libraries often doesn't align with what I have in mind. My intuition suggests that its picks are based on the number of NPM downloads or on Reddit discussions. The only way out in such situations is brute-forcing my way through "English".


English as the interface

I don't think "English" can be the primary language through which we interact with computers, and I’ve explained why in this Twitter thread:

I’m very bearish on the future of prompt engineering as the primary interface through which we instruct computers.

English, by default, demands an ‘opt-in’ effort for clarity. Its goal is to lower adoption barriers, like Python, yet it lacks the guardrails that help me shape unstructured thoughts into their most precise and clear form.

A blog post goes through multiple edits and layers of filtration, which, at its core, is simply the act of shaping my unstructured thoughts into structure.

When we do linear algebra or even something mathematically simple, like calculating the HCF and LCM of numbers, there are built-in guardrails that allow me to think in a linear direction. https://t.co/hIPeAzYxc7

English, or any other natural language, is very non-linear in this aspect. This is why I sometimes ask ChatGPT to generate prompts for me.

The ambition is to raise convenience to the level of capabilities. https://t.co/FiSKaIaVP7

What people often miss is ‘understanding’ as a core component of this equation. English, in fact, pulls me away from understanding; I don’t want to treat my work with LLMs as a black box.

I want to deepen my understanding of the subject matter when working with LLMs; after all, most great works emerge from curiosity-driven side projects. pic.twitter.com/dY5JXNM2vT

(Twitter thread, September 4, 2025)
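To make the HCF/LCM point concrete: Euclid's algorithm is exactly the kind of guardrail I mean, since each step follows mechanically from the previous one and there is only one direction the reasoning can go. Here is a minimal Go sketch (my own illustration, not something from the thread):

```go
package main

import "fmt"

// gcd implements Euclid's algorithm. Every iteration is forced by the
// previous remainder, which is what keeps the reasoning linear.
func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

// lcm then follows mechanically from the identity a*b = gcd(a,b)*lcm(a,b).
func lcm(a, b int) int {
	return a / gcd(a, b) * b
}

func main() {
	fmt.Println(gcd(12, 18)) // 6
	fmt.Println(lcm(12, 18)) // 36
}
```

There is no ambiguity at any step; the structure of the problem dictates the next move, which is the opposite of what plain English gives you.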

This blog post is an extension of that thread, and of how I think about vibecoding and using AI in programming in general.

I have always approached learning a new programming language with joy and curiosity, because each language has its own constructs that shape how you think.

For example: the use of channels and pointers in Go forces you to be intentional about how you structure your program. On the other hand, with JavaScript, you’re intentional about how you shape your DOM side effects. Both languages need to be operated with different intentions in mind, because each introduces its own way of thinking.
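As a minimal sketch of that intentionality (a toy example of my own, not from any particular codebase): in Go, a channel forces you to decide who produces a value, who consumes it, and when they synchronize, while a pointer forces you to decide who is allowed to mutate state.

```go
package main

import "fmt"

// sum sends the total of nums on out. Choosing a channel here is a
// structural decision: it fixes who produces the value and who consumes it.
func sum(nums []int, out chan<- int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	out <- total
}

// double mutates its argument through a pointer, so the caller must
// deliberately hand over write access at the call site.
func double(n *int) {
	*n *= 2
}

func main() {
	out := make(chan int)
	go sum([]int{1, 2, 3}, out) // spawning a goroutine is an explicit choice
	total := <-out              // blocks until sum sends; the data flow is visible

	double(&total)     // passing &total makes the mutation obvious to the reader
	fmt.Println(total) // prints 12
}
```

None of these decisions can be left implicit; the language makes you state them, and that is exactly how its constructs shape your thinking.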


Feedback Loops

I don't think abstracting away understanding moves us closer to excellence, because every tool eventually requires a deep, time-earned understanding. With LLMs, I was able to bypass all of those checkpoints and jump straight to the feedback loop of creating something tangible.

An argument can be made that deep understanding is not required to build something useful, and I’m not entirely sure how I feel about that. Programming is fun for me, and accomplishing something hard after struggling for a long time has a satisfying feeling that Claude Code simply can’t substitute for.

It feels like there is no need to be intentional anymore; you can just build things now. The choice of a particular tool used to matter, and the only tangible translation of experience was the maturity to make better decisions. Now, I am not sure whether we should rely on LLMs to make the "best possible" decision for us.

That being said, I do think that great things are built by people who have a deep understanding of the subject matter. Every piece of art, technology, infrastructure, or policy has been created by someone, or a group of people, who had a good understanding of the subject and then found a way to branch out from that understanding. Social media, meanwhile, is glorifying the idea of "building without having a core understanding".

But there is a real joy that comes from building things, chasing excellence, and putting our work out in the world: the kind of joy that comes from wrestling with a problem, shaping it, and finally seeing it work. The feedback you get from creating something sparks curiosity about how it works under the hood, and that curiosity naturally pulls you deeper, refining your understanding at every step.

Back in 2023, I wrote about Accelerated Learning and how LLMs have made it possible for anyone to be more curious and gain a deeper understanding of their craft. One could even argue that Bloom’s 2 Sigma problem can now be meaningfully demonstrated with LLMs.

The average level of intelligence, curiosity, and agency should be increasing, but I don't think that is happening, because:

  1. I think there are very few high-agency people who use LLMs correctly to deepen their understanding of a domain they don’t yet understand (personal opinion).

With the abundance of vibe-coding tools available in the market today, non-engineers aren’t spending time understanding:

  • what the DOM actually is,

  • where their client is being rendered,

  • how their REST endpoints work.

The interfaces are designed in a way where curiosity is "opt-in" by default.

  2. AI tools have abstracted away the muscle memory I built by struggling to find difficult answers and by giving long, focused attention to problem statements.

It has all become too passive: I am constantly playing the "prompt game", where the only goal is to check the output and see whether it works correctly or not. And don’t even get me started on edge cases.

The ability to apply second-order thinking to a problem statement, to consider situations where my fundamental assumptions might be falsified, doesn’t even come naturally to me anymore. I have started experiencing prompt fatigue and a loss of agency while debugging issues, and I think my focus is declining too.

What’s interesting is that this only happens on projects where I choose to rely heavily on the current LLM interfaces. On other projects, where I operate from actual understanding, this problem doesn't exist. Engineering is no longer the moat; understanding and the ability to differentiate are the true moats now.


Understanding as the fundamental block

I'm strongly aligned with Geoffrey Litt's philosophy on LLM tool usage: keep understanding at the center rather than letting it be abstracted away. His approach captures the key insight:

LLMs should amplify understanding, not replace it. When we maintain that cognitive ownership, we build better systems. When we outsource it, we build houses of cards.

We need to put more intention into the interfaces we build. In a future where LLMs become increasingly high agency, designing interfaces that encourage the user to operate with real understanding becomes crucial.

In this tweet, Geoffrey talks about his use of AI as an interactive command center where he maintains a complete understanding of what the agent is doing. I completely resonate with his vision:

We need to start treating AI as a human accelerator rather than a tool that entirely replaces humans. A lot of the real work we do is still heavily dependent on the context and on who we are doing it for.

Take a very simple example: I have to meet a school friend who’s in town, someone I haven’t seen in 10 years.

Any AI-based calendar-scheduling app would treat this person like any other contact and schedule the meeting based on generic algorithmic decisions. The system has no concept of who this person is to me. That context is completely missing.

Until we have personal databases (rich, structured APIs of our entire digital lives) that these systems can connect to, AI won’t be able to make context-aware decisions that actually reflect our relationships and priorities.
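As a purely hypothetical sketch of what such a personal database might expose (none of these types or fields exist anywhere today; this is only to make the idea tangible), a scheduler with access to a record like this could treat the school friend differently from a routine contact:

```go
package main

import (
	"fmt"
	"time"
)

// Contact is a hypothetical record from a personal database. Relationship
// and LastMet carry exactly the context a generic scheduler is missing.
type Contact struct {
	Name         string
	Relationship string    // e.g. "school friend"
	LastMet      time.Time // ten years ago, in the example above
}

// priority is a toy heuristic: the longer it has been since you met
// someone, the more a scheduler should protect that slot. A real system
// would weigh Relationship and much richer context.
func priority(c Contact, now time.Time) float64 {
	return now.Sub(c.LastMet).Hours() / (24 * 365)
}

func main() {
	friend := Contact{
		Name:         "School friend",
		Relationship: "school friend",
		LastMet:      time.Now().AddDate(-10, 0, 0), // roughly ten years ago
	}
	fmt.Printf("priority: %.1f\n", priority(friend, time.Now())) // ~10.0
}
```

The point is not the heuristic, it is the data: without a structured record of the relationship, no amount of model intelligence can recover that context.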

Inspired by Upload on Amazon Prime, I’ve written about personal databases and how they will soon become a commodity. But this is already a messy problem: our personal and professional information is scattered across dozens of different apps, and creating a layer that syncs all of them together to provide a unified interface is incredibly challenging, especially if companies decide to gatekeep data and lean further into closed-source ecosystems.

I’ve seen a bunch of recent AI funding announcements focused on solving “memory.” Let’s hope some of them manage to crack this.

Personally, I resonate with Obsidian’s files-over-cloud approach: it gives people the freedom to own their data while still letting tools build great experiences on top of it. It’s the model that offers the best chance of outsourcing our information for maximum UX convenience. But we’re not quite there yet.


Culture

Lastly, I’m worried about the cultural impact this shift will have on companies going forward. Engineering culture is one of the most important cultures within any tech organization, and it requires constant maintenance.

When a group of people collectively decides to push the boundaries of what is technically possible, they rely on tight feedback loops and the deeply satisfying feeling that their work is meaningful. AI-based tools increasingly abstract people away from some of these fundamental experiences, and honestly, I am not even sure I would enjoy being part of an engineering team that doesn’t have a deep technical understanding of what it’s trying to accomplish.


Conclusion

The way we work and interact with computers is rapidly changing. I'm trying out many new tools and adapting to this environmental shift at my own pace.

For me, building things is fun but only when I understand how they work. That understanding is essential to my enjoyment of the process. I don't think machines are good enough yet to take that agency away from us.

That said, AI tools can accelerate our work when used with high agency, keeping understanding at the center rather than abstracting it away. Of course, these are my personal views. Some people prefer building without deep understanding, and that camp will always exist.

This is where I think AI widens the gap between good and great, between high agency and low agency. Those who use these tools while actively refining their understanding will improve rapidly. Those who don't care about the "why" behind things will plateau at mediocrity.
