Some Thoughts and Uses of AI

A reflection on using AI coding tools, balancing their benefits against potential impacts on learning and growth.



How I Use Coding Assistants

Damn, I wrote this ages ago. The way I use LLMs has changed a lot since, but the principles behind how I interact with them are still the same.

Three Ways I Use LLMs

I found these three categories helpful for describing how I use LLMs. They come from David Crawshaw’s article How I program with LLMs.

  1. Autocomplete: I have it enabled with GitHub Copilot. Sometimes I disable suggestions because they annoyingly break my train of thought. I don’t feel it has given my productivity any significant boost.

  2. Search: Here is where I’ve felt their impact on my workflow the most: I save considerable time by avoiding having to parse irrelevant documentation. When we turn to documentation, it’s usually to answer a specific question, so we naturally approach it with a filter. Any content that doesn’t directly address our question or relate to it can and should be ignored to maximise efficiency. The same holds true for Googling. Instead of jumping in and out of sources, I can get the LLM to summarise a variety of them.

  3. Coding Assistance, or Chat-Driven Programming: I don’t use this unless I’m genuinely stuck, brainstorming solutions, or seeking insight on improving my approach. This aligns with the idea of the LLM as an always-available, deeply knowledgeable colleague. It can save hours by highlighting something relevant I didn’t know or introducing alternative approaches I might never have considered. When routine takes over, and I approach problems with tried-and-true methods, it’s easy to fall into the trap of repeating bad habits or relying on outdated solutions. The LLM helps break that cycle.

As a rule of thumb, I use LLMs to handle the initial steps, as starting is often the hardest part of the process and doesn’t contribute much to growth. Take setting up a new front-end project, for instance. Do you really gain much from struggling with configuring the front-end ecosystem (like build tooling)? Most of the difficulty isn’t due to intrinsic complexity but rather accidental complexity—things like dependency version mismatches that make the process needlessly unstable and messy. I rarely solve such a problem and think, “Great, another feather in my learning cap”. More often, the issue was so poorly signposted that when I face it again, it feels completely new. Plus, since this kind of work is rare, I forget the lesson quickly anyway.

LLMs become an incredibly helpful tool for getting over the hump and building momentum, particularly when you’ve spent your attention earlier in the day and are feeling a little dull and slumped.
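To make the “initial steps” point concrete, this is roughly the sort of boilerplate I’m happy to hand off. It’s a minimal sketch assuming a Vite + React setup; the plugin choice and options are illustrative, not from any real project of mine:

```ts
// vite.config.ts: the kind of setup an LLM can scaffold in seconds.
// None of this is intrinsically complex; the pain is in knowing which
// plugin and dependency versions play nicely together.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react'; // assumed stack: React

export default defineConfig({
  plugins: [react()],
  server: { port: 3000 },     // dev-server port, arbitrary choice
  build: { sourcemap: true }, // easier debugging of production builds
});
```

Nothing in that file teaches you anything worth retaining; getting it wrong just costs an afternoon.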

My Praises (For Cursor)

I’ve been using Cursor and I really like it.

Generally, I have felt a huge benefit in having it there as an always-available, deeply knowledgeable colleague. It’s great for rubber-ducking. Sometimes there are questions that are either too distracting or too clumsy to ask of your colleagues. There’s a type of conundrum where you don’t know how or what to ask because you’re so uncertain. The only way to gain clarity is by asking “stupid” questions to feel out the boundaries of the space. The LLM is perfect for this.

The LLM has really helped me get over the hump. Building momentum is tough—it’s like staring up a mountain at the start of each task. Sometimes, all you need is a little push, and the LLM is great for that.

Here are some features that I like:

  • The Docs feature is great—I can find answers and best practices without leaving my IDE. It gives helpful, sensible suggestions, often ones I wouldn’t have thought of.
  • Tab autocomplete is super smart and works really well. I’ve pretty much ditched my usual key bindings for global renaming and just use tab now. I had a cool moment where I made a field read-only and jumped into the test file; it suggested updating a test that had been mutating that very field, automatically swapping in a different field it knew was still writable (roughly the situation sketched below).
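For illustration, here’s a hypothetical reconstruction of that moment. The names are invented and it assumes a Vitest-style test runner:

```ts
import { expect, test } from 'vitest'; // assumed runner, not the real project setup

// Hypothetical entity: `email` was just made read-only; `displayName` was not.
class Account {
  readonly email: string;
  displayName: string;

  constructor(email: string, displayName: string) {
    this.email = email;
    this.displayName = displayName;
  }
}

// The old test assigned to `email`. After the change, tab autocomplete
// suggested rewriting the test against `displayName`, a field it could
// see was still writable.
test('allows updating a writable field', () => {
  const account = new Account('a@example.com', 'Alice');
  account.displayName = 'Bob'; // was: account.email = 'b@example.com';
  expect(account.displayName).toBe('Bob');
});
```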

My Letdowns

I really dislike how inherently uncertain LLMs are in their responses. They deliver an answer, but if you press them on it, they fold immediately, basically admitting that everything they just said is likely untrue. Sure, you can argue that it’s up to the user or validators to tease out the truth and verify the answer, but what’s the point of these tools if you can’t be confident in what they provide?

Humans usually stick to their arguments unless there’s strong evidence against them—and even then, we often stubbornly hold on. People don’t just fold under light pressure, especially after thinking things through and backing their position up. When we’re challenged and defend our ideas, our confidence grows. My issue with LLMs is that nothing they say—advice or ideas—comes with real confidence behind it.

Sure, nothing is ever black and white—everything is contextual with many angles to consider. You could argue that the LLM is simply adjusting its answer based on new information, and that’s a good thing. But it’s never really an adjustment. It’s more like, “Ah, you’ve given me new information, so this must be the final truth you want to hear,” as if it forgets all other considerations and just parrots back whatever I last fed it. It feels like an attempt to appease me—just give the kid what he wants.

The Hidden Cost of AI Shortcuts: Are We Sacrificing Growth for Convenience?

Although I have experienced the benefits of using these LLMs, I worry about what they are really doing for us. Does offloading menial tasks truly make us smarter by freeing us to focus on more meaningful challenges? Or is it just laziness? Perhaps, to truly prepare for the challenges we anticipate, we must do the hard work, stay sharp, and accept there are no shortcuts.

I keep coming back to the fact that the best in any craft continue to practise the fundamentals. NBA players still drill basic dribbling routines—they don’t spend all their practice time perfecting flashy windmill dunks. There’s value in consistently doing the boring, primitive tasks that lay a strong foundation. These tasks build work ethic and fortitude, and they prove one’s readiness for the more complex responsibilities that may come later. Take learning algebra in school. Have you used it much since? I haven’t. So why did I (we) waste time learning it? Well, part of the reason goes beyond the discipline itself: algebra was a challenge that taught us problem-solving, perseverance, and logical thinking. Even if we don’t see the benefit right away, these challenges shape our ability to tackle future problems. It’s not about the specific task at hand; it’s about the faculties it improves more generally: your ability to solve problems.

I worry that if we offload these foundational aspects of work to LLMs, we might miss the lessons they teach—lessons about patience, determination, and the reality that not everything is pleasant. The big question, then, is whether there’s such a thing as a shortcut that doesn’t come with hidden costs. Are we saving time now only to pay for it later?

The trouble with relying on LLMs is that we still think we’re learning. We give it a problem, it spits out an answer, and we nod along, feeling in control. But real learning isn’t passive. It demands effort and the slow, often frustrating process of wrestling with an idea until it clicks. Hard-won clarity comes from thinking, failing, and trying again. Understanding something by reading about it and understanding it by doing it are very different: solving a problem yourself is far more valuable than just checking the answer. Reading a solution may give the illusion of understanding, but true comprehension means being able to solve the problem without the worked solution to lean on.

Doesn’t this contradict what I said earlier, about there being problems that don’t deserve the effort they receive? Of course, not every problem is worth the fight—some are mere drudgery, and outsourcing them makes sense. But we should be wary of mistaking convenience for progress. The real trick is knowing which challenges refine us and which simply waste our time.

If your role is just validating the AI’s output, your ability to judge good and bad code will weaken over time. Evaluating AI requires real experience, which comes from doing the work yourself, not just reviewing AI-generated solutions. LLMs should be used as tools for exploration, not as shortcuts to answers. Progress requires effort. While AI can handle mundane tasks, it should aid skill development, not replace it.

Getting Left Behind

I also worry that colleagues who use these models will produce more code and seem to work at a quicker pace, which will push all developers to use AI just to keep up. And so we normalise the (possibly) detrimental, brain-draining effects of AI-driven development.