On AI and Learning

Over the weekend, I had some interesting discussions about the impact of AI on developer growth and skill atrophy. Much of that discussion was inspired by Mo Bitar's video on the subject.

Within my small group, we couldn't come to a consensus on its impacts. We all had anecdotes with wildly different outcomes, so I decided to push a poll on a few communities I participate in to get a better sense of how engineers feel about AI's impact on their learning.

By the time I collected and processed the results, the poll had about 67 responses with a pretty diverse mix of opinions. I'm sharing the results in hopes that others find them useful.

Poll

For all the Claude code/agentic development users out there. How have these tools impacted your learning/growth? 1️⃣ Positively impacted your learning 2️⃣ Negatively impacted your learning 3️⃣ No noticeable difference

— Nish Tahir (@nishtahir.com) March 23, 2026 at 12:47 PM


Methodology

  • The poll received 67 cumulative responses, mostly from professional engineers. Roughly 5% of respondents may have voted more than once.
  • Not everyone who voted left an opinion, but some did indicate support (via emojis, etc.) for opinions that others shared.
  • Themes were aggregated using an LLM to process the results - I read every thread and comment to ensure accuracy.
  • This was not intended to be a rigorous study; I asked the question and approached the topic to satisfy my own curiosity.

Common themes and sentiments

  • The dominant overall sentiment is mixed rather than purely positive or negative.
  • AI is widely seen as accelerating exposure, onboarding, experimentation, and cross-domain work.
  • AI is also widely seen as weakening retention when it replaces hands-on problem solving.
  • Many people believe the impact depends heavily on how the tool is used, not just whether it is used.
  • A repeated pattern is: AI improves familiarity and momentum, but not automatically proficiency or durable understanding.
  • The learning benefits are strongest when people stay curious, ask follow-up questions, and treat the tool as an explainer or collaborator rather than a vending machine for code.
  • The learning downsides are strongest when AI becomes the default first step for every bug, decision, or hard problem.
  • Workplace context matters a lot. When AI is tied to pressure for immediate speed gains, learning often suffers. When the user controls the pace and can reinvest time into understanding, learning improves.
  • Review, verification, and judgment come up again and again. People do not just need code faster; they need stronger ways to evaluate whether the code and tests are sound.
  • There is a noticeable emotional split between excitement about expanded capability and unease about dependency, shallowness, and long-term skill erosion.

Practices that emerged as useful

  • Use AI to explain concepts, compare frameworks, and answer very specific questions in natural language.
  • Use AI to reduce startup friction in a new language, framework, or codebase, especially during the first few weeks or months.
  • Treat AI as a tutor rather than an answer engine.
  • Ask it to guide with questions first, and only give the solution after multiple attempts.
  • Have it create a curriculum, learning path, or sequence of exercises instead of only generating production code.
  • Invert the usual workflow by implementing the solution yourself and asking AI to critique, check, or correct your work.
  • Type code manually when the goal is retention or skill-building, even if AI could generate it faster.
  • Use AI to save time on boilerplate, then spend the saved time exploring why the solution works and what tradeoffs it has.
  • Ask for plans, reasoning, and explanations of decisions, not just code output.
  • Verify claims against other sources when accuracy matters, especially for unfamiliar ecosystems or advanced topics.
  • Be especially careful with tests. Generated tests may look complete while still missing important gaps.
  • Keep PRs and tasks small enough that humans can still review them meaningfully.
  • Use AI for augmentation in review, but do not treat AI review as a complete replacement for human judgment.

Overall Takeaway

The strongest view was that AI changes what people learn more than it simply increases or decreases learning overall. It tends to improve speed, confidence, range, and access to unfamiliar areas, while risking shallower understanding and weaker retention if it removes too much productive struggle. The best learning outcomes came from people who use AI deliberately: as a tutor, critic, explainer, or scaffold, while still preserving space for manual reasoning, debugging, and review. In my opinion this aligns well with a study that Anthropic ran on the topic a few months ago. My notes on the paper are here.
