The Grug Brained AI-Developer: An LLM appendix
A survivor's guide to coding when the robots are helping (badly)
Preface: This appendix is an unofficial addition to the original Grug Brained Developer guide. The original wisdom was shared by the enlightened grug brain developer themselves. This humble addition on LLM tools is offered by a fellow grug who has suffered through the AI revolution and lived to tell the tale. Original grug deserves all credit for showing the way of the grug.
grug meet new shiny rock maker: LLM
grug hear much talk in developer cave about new magic tool: LLM assistant!
big brain say "LLM make you code 10x faster! replace all junior grug!"
grug suspicious
grug try github copilot, try cursor, try claude code, try many magic tool
grug have thoughts
complexity demon get new friend
remember complexity demon? worst enemy of grug?
well, complexity demon very excited about LLM! why?
because LLM make very easy create LOTS of code very fast! code everywhere! files everywhere!
before LLM: junior grug write 100 line bad code, take week
after LLM: junior grug generate 10,000 line bad code, take day
complexity demon rubbing hands together, very happy
grug sense great disturbance in codebase
LLM good at some things
grug must admit: LLM sometimes useful!
when grug forget exact syntax for sort() in new language, LLM help. good!
when grug need write same boring test 50 times with small change, LLM help. good!
when grug need understand what crazy big brain developer write 5 year ago with no comments, LLM help explain. very good!
but grug notice pattern: LLM best when grug already know answer but just lazy or forget exact spelling
LLM worst when grug not know answer and hope LLM figure out
this important distinction! many grug not understand!
the 70% problem
LLM get grug 70% way to solution very fast!
grug initially very excited!
then grug realize: last 30% take more time than would take grug write whole thing from start
why?
because LLM code look right but not quite right. like reflection in water: look like real thing but when grug reach for it, just splash
example grug see many times:
LLM write authentication code, forget edge case where token expire during request
LLM write database query, not consider what happen when table have 10 million row
LLM write UI component, look perfect until user on mobile phone
fixing these things harder than writing from scratch because grug brain must first understand what LLM brain was thinking, then understand why wrong, then fix
two complexity instead of one!
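grug sketch one of these below, the expired-token kind. all names (AuthToken, handleUploadGrug, doWork) invented by grug just to show shape of bug, not from real codebase:

```typescript
// hypothetical sketch: the kind of edge case LLM code keep missing

interface AuthToken {
  sub: string;  // user id
  exp: number;  // expiry, unix seconds
}

function isExpired(token: AuthToken, nowMs: number = Date.now()): boolean {
  return token.exp * 1000 <= nowMs;
}

// LLM version: check token once at start of request, then forget about it
async function handleUploadLlm(token: AuthToken, doWork: () => Promise<void>): Promise<void> {
  if (isExpired(token)) throw new Error("token expired");
  await doWork();
  // slow work can outlive token; everything after this point
  // run on expired credential and nobody notice until audit day
}

// grug version: re-check before doing anything irreversible
async function handleUploadGrug(
  token: AuthToken,
  doWork: () => Promise<void>,
  commit: () => Promise<void>
): Promise<void> {
  if (isExpired(token)) throw new Error("token expired");
  await doWork();
  if (isExpired(token)) throw new Error("token expired during request");
  await commit();
}
```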
security holes everywhere
grug see many young grug copy paste from LLM straight to production
grug slowly reach for club
LLM learn from internet code. you know what on internet? lots of bad code! stackoverflow answers from 2012! tutorials that say "disable security for now, we fix later" (spoiler: never fix later)
grug see LLM suggest (grug show two of these below, after list):
SQL queries with injection holes size of mammoth
authentication that accept password "password"
encryption using Math.random() (grug cry)
API keys hardcoded in frontend (grug cry more)
young grug not know these bad, because look like working code!
"but it compile!" young grug say
grug explain: so does rm -rf /, but grug not recommend
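here small sketch of two of those holes and what less-bad version look like. the db type is stand-in for whatever client grug use, and $1 placeholder is postgres-style (other drivers use ? instead); this just illustration, not full security lesson:

```typescript
import { randomBytes } from "node:crypto";

// stand-in type for whatever database client grug actually uses
type Db = { query(sql: string, params?: unknown[]): Promise<unknown> };

// LLM version: string concatenation, injection hole size of mammoth
// name = "'; DROP TABLE users; --" and grug have very bad day
async function findUserBad(db: Db, name: string) {
  return db.query(`SELECT * FROM users WHERE name = '${name}'`);
}

// grug version: parameterized query, driver handle escaping
async function findUserGood(db: Db, name: string) {
  return db.query("SELECT * FROM users WHERE name = $1", [name]);
}

// LLM version: Math.random() predictable, fine for dice game, not for secrets
const badSessionToken = Math.random().toString(36).slice(2);

// grug version: cryptographically secure randomness from node:crypto
const goodSessionToken = randomBytes(32).toString("hex");
```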
copy paste demon multiply like rabbits
grug look at metrics, very concerning:
before LLM: maybe 8% of code is copy paste
after LLM: 12% and growing fast!
but wait, get worse!
LLM not just copy paste, LLM copy paste with small random changes!
so now grug have 47 versions of same function, all slightly different, all need maintain
grug call this "mutation plague"
when bug found in one, must find all 47 cousins and fix too
complexity demon laughing so hard, tears streaming down face
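grug show tiny invented example of mutation plague, money-formatting flavor. all function names made up by grug:

```typescript
// three slightly different cousins of same function, each generated
// separately, each need own fix when bug found
function formatPriceUsd(cents: number): string {
  return "$" + (cents / 100).toFixed(2);
}
function formatPriceUSD(amount: number): string {
  return `$${amount.toFixed(2)}`;          // cousin expect dollars, not cents
}
function priceToString(cents: number): string {
  return "$" + (cents / 100).toFixed(1);   // cousin grow rounding bug
}

// grug version: one function, one place to fix
function formatPrice(cents: number): string {
  return "$" + (cents / 100).toFixed(2);
}
```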
grug observe trust paradox
very strange thing happen:
2023: 43% of grugs trust AI output
2024: 33% of grugs trust AI output
2025: probably even less
but same time:
2023: 70% grugs use AI tools
2024: 76% grugs use AI tools
2025: 84% grugs use AI tools!
grug confused. grugs trust less but use more?
then grug understand: like grug relationship with project manager promises. grug not trust, but still must listen because no choice
the junior grug problem
junior grug love LLM! LLM make junior grug feel like senior grug!
junior grug generate entire application in afternoon!
junior grug very proud!
senior grug look at code...
senior grug need drink
problem is: junior grug not know what junior grug not know. LLM also not know what LLM not know. together make powerful combination of not knowing!
like blind grug leading blind grug, but both very confident about direction
senior grug use LLM different
senior grug learn to use LLM like use intern:
"hey LLM, write boring boilerplate for API endpoint"
"hey LLM, explain what this regex do"
"hey LLM, convert this JSON to TypeScript interface"
notice: all things where senior grug already know answer, just want save time
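small example of what grug mean by JSON-to-interface chore. JSON shape and field names invented by grug; point is grug already know what right interface look like, so grug check LLM guess in ten seconds:

```typescript
// invented input grug hand to LLM:
// {"id": 42, "name": "grug", "club": {"size": "big"}, "email": null}

// LLM answer look right, but grug still check two things:
// is club always there? is email really always null, or string-or-null?
interface User {
  id: number;
  name: string;
  club?: { size: string };   // grug mark optional after checking real data
  email: string | null;      // grug widen type; LLM guessed plain null
}
```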
senior grug NEVER say:
"hey LLM, design my application architecture"
"hey LLM, solve this complex business logic I don't understand"
that path lead to tears
the review problem
before LLM: grug review 100 lines of human code, take 30 minute
after LLM: grug review 1000 lines of LLM code, take 3 hours
but wait! LLM generate code in 3 seconds!
where time save?
no time save! time moved from writing to reviewing!
and reviewing harder because:
code style inconsistent (LLM learn from everyone)
patterns unfamiliar (LLM mix paradigms like crazy)
assumptions hidden (LLM make many assumption, not tell grug)
pattern grug recommend
trust but verify pattern
grug learn from old russian proverb: "doveryai, no proveryai"
mean: trust, but verify
actually, grug modify: "not trust, but verify anyway"
every line LLM write, grug read
every function LLM create, grug test
every assumption LLM make, grug question
treat LLM like very eager intern who read every programming book but never actually program before
the small chunk strategy
grug learn: LLM good at small thing, bad at big thing
don't ask LLM write entire application
ask LLM write one function
don't ask LLM design system
ask LLM improve one specific part
smaller chunk = less chance for complexity demon sneak in
the rubber grug debugging
when LLM code not work, grug use ancient technique: explain to rubber grug (or real grug if available)
but twist! make LLM explain its own code back to grug!
"explain why you chose this approach"
"what assumptions did you make?"
"what edge cases did you consider?"
often LLM realize own mistake when forced explain
(sometimes grug realize LLM smarter than grug think, but not often)
anti-patterns grug see too much
the "YOLO production" pattern
young grug copy from ChatGPT straight to main branch
grug have heart attack
the "infinite regeneration" pattern
grug see developer click regenerate 47 times hoping for better answer
definition of insanity: doing same thing expecting different result
the "AI said so" pattern
"but Claude said this was best practice!" grug remind: Claude also once told grug to use MongoDB for financial transactions
the "more context = better" pattern
developer paste entire codebase into prompt
LLM have stroke, generate code that reference functions that don't exist
when use, when not use
grug say use LLM for:
writing tests (but check test actually test something, grug show example after this list)
documentation (but verify accurate)
code explanation (but verify understanding)
syntax reminder (but verify correct)
boilerplate generation (but customize after)
learning new library (but read real docs too)
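about "check test actually test something": sketch below show difference. applyDiscount function and node:test runner are just grug's own picks for illustration:

```typescript
import { strict as assert } from "node:assert";
import { test } from "node:test";

// invented function under test
function applyDiscount(priceCents: number, percent: number): number {
  return Math.round(priceCents * (1 - percent / 100));
}

// LLM-flavored test: run green forever, verify almost nothing
test("applyDiscount works", () => {
  const result = applyDiscount(1000, 10);
  assert.ok(result !== undefined); // always true, test nothing
});

// grug test: pin real behavior, poke edge cases
test("applyDiscount take percent off, handle 0 and 100", () => {
  assert.equal(applyDiscount(1000, 10), 900);
  assert.equal(applyDiscount(1000, 0), 1000);
  assert.equal(applyDiscount(999, 100), 0);
});
```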
grug say not use LLM for:
security-critical code
complex business logic
architecture decisions
performance optimization (LLM usually make slower)
anything involving money
anything involving personal data
anything grug not understand enough to verify
grug prediction for future
LLM not going away. like cloud, like agile, like every other thing grug told would "change everything"
will change some things, not change others
good grugs who learn use tool properly will benefit
lazy grugs who rely on tool without understanding will create mess
complexity demon will grow stronger than ever
but grug survive. grug always survive. because grug know secret:
at end of day, someone must understand code
LLM not understand code. LLM predict next token.
human must understand. human must debug. human must maintain.
human who understand code still needed
human who only copy-paste from LLM? not so much
grug final wisdom on LLM
LLM like very powerful club. can build house faster, or can hit own foot harder.
choice is yours.
but remember: no matter how smart LLM get, complexity demon always waiting
stay vigilant
keep brain engaged
and always, always test the code
because grug who trust LLM completely is grug who soon look for new job
grug go now, code review waiting. junior grug just generated "simple" microservice architecture with 47 dependencies. grug need bigger club.