There’s something that has been bothering me for a while, and I’d really like to put it in writing.
I believe that LLM-based AI is going to negatively impact people’s ability to become experts at anything. As an effect of that, a negative feedback loop will form where AI becomes dumber and dumber. You can’t have LLMs keep learning more while destroying the whole incentive for anyone to learn anything.
I have a few anecdotes I’d like to use to support that thesis. If you don’t care about them, skip to the last heading.
How the internet shaped my early education
I’m based in Poland, so my experience might be different than yours.
I attended public schools until I was around 19 years old.
During that time I genuinely hated any kind of homework; learning about anything I didn’t find extremely interesting was a chore. It didn’t help that most of the classes just bored me.
I didn’t care about history of Poland, the fights we fought, why our country was broken up into pieces.
I didn’t care about all the different kinds of mathematics equations that one can use to calculate all the different things.
And I certainly didn’t care about what some old fellow wrote in masterpieces such as Lalka or Pan Tadeusz (obligatory reading for our Polish classes).
The only skills I built throughout my school education were whatever I found useful or thought might be useful to me in the future. There weren’t too many of those things, but I did keep some knowledge from school. I also learned A LOT by just… reading whatever was interesting on the internet. And exploring on my own. And breaking stuff and trying to fix it.
Most of my obligatory homework was done by just rewriting whatever I found on the internet as an answer. It was the 2010-2017 internet, and there were already LOADS of answers to all the kinds of questions teachers asked and gave us as homework. Whatever I couldn’t find on the internet, I either tried and failed to solve, or just skipped, hoping that whatever I’d done was enough.
There are various reasons why I was this way, and it certainly wasn’t just laziness, but I will spare the reader the displeasure of learning about them just yet.
However, during that time, whatever I found on the internet was presumably written by real people. Someone had to provide that knowledge to The Internet so that I could copy it and make use of it. Essentially the internet was split between people who knew something and wanted to share it, and people who just wanted to use it. At least that’s how I see it.
I don’t see how that is worse than what we currently get from LLM-based solutions in terms of education. For all the folks screaming that education is falling apart because of this - I’ve got news for you - you’re late by at least 20 years.
Whoever wanted to learn, did learn, and the rest of the folks just sort of made use of their knowledge.
But…
Remember how I said previously that it was all provided by real people who cared enough to share that knowledge on the internet? When you use an LLM, that is no longer happening. There is nothing behind it. It either knows the answer to your question, imagines it, or it doesn’t. There is no way to contribute back. All you can do is share your knowledge on the internet in the hope that the LLM providers scrape your answer and it makes enough of a difference in the weights of that beast that it will understand your answer is the answer to the question.
But given that no one reads those sources anymore - the very sources that gave LLMs the abilities they have now - why would anyone bother?
As I see things now, we can assume LLMs are a knowledge graph of everything the internet knew up until LLMs became capable of writing on their own. Since then, how does one know whether what is written is genuine or LLM-generated?
You could argue with me that hey, in the past people were also wrong on the internet - and I’d agree. But you know that saying, that the best way to get a correct answer on the internet is to post something obviously wrong? I don’t see how that holds anymore. I don’t see people bothering to reply to LLM-generated output to say it’s wrong, only to get more LLM-generated output thrown at them.
Learning and exploring
I hinted at this in the previous section - the way I learned about anything in my career was a mixture of reading, exploring, experimenting, breaking things, or trying to create something.
I have no formal higher education. I did attempt university, but based on what I said earlier, you can probably judge where that ended up.
People are often impressed that I know seemingly a lot about all the different things. I know how to renovate a house, I know how to design my own PCB, I know how to create 3D models in CAD, I know how to do plumbing and piping around the house, I know how to wire up electricity in my house, I know how to create software, how to maintain servers, and a list of many other things I cannot remember right now, but if you asked me I’d tell you.
All of that comes from a combination of a few techniques and personal traits that I grew to appreciate. If you are familiar with ADHD, you can probably see the patterns, but I try not to identify with that label, as the internet has done a disservice to what counts as ADHD and what doesn’t.
Most of the time it started with a random interest in something - for example, building the famous server homelab, a rather recurring theme for someone in my field. What I’ll do is read about it on the internet as much as I can for the next 2-3 weeks. From all the different sources, all the different forums; I’ll join Facebook groups, I’ll talk about it with friends, I’ll do as much research as I can, because that’s what drives me in that moment. Everything else in my life is not important anymore - this is what I focus on.
After that comes the experimentation phase, where I’ll try to execute whatever weird idea comes to my mind. Hey, I’d like to build my server with an mATX motherboard to be as small as possible, or even an ITX board, and use 2.5" drives.
I’ll then spend another few weeks reading about the best way to set up that homelab, software- and OS-wise.
After that I’ll execute whatever comes to my mind again, install the most interesting choice and play with it.
And then I’ll just leave it there; it’s done, I’ve learned whatever I wanted, it’s no longer interesting. But boy, whenever someone asks me for advice on it, I’ll have an essay for them.
This is essentially the pattern that happened with every single thing I learned. For the past… I think 15 years now? I went through so many phases of this… hyperfocus, that you can imagine I probably know a lot. But that knowledge is rather thin-layered. I might know roughly where the bell rings, but if I were to truly apply it - it would probably take me a few months to actually know what I’m doing and be confident I didn’t just… you know, wing it.
But I could do all of this thanks to the brilliant people on the internet who wanted to share their knowledge, opinions and creations, or just teach others what they know. I might not know as much as they do, but that sure as hell helped me become who I am right now.
All of this is leading to the point that it’s thanks to various, often contradicting opinions and solutions that I was able to learn any of it.
I read on the internet that LLMs are able to provide all of that, and much more. Sure, I agree, but I have a problem with this.
Whenever I read something on the internet, it was written either by someone who was an expert, or by someone who thought they were an expert and really needed to share their opinion - I don’t judge whether it was right or not. It was a deliberate choice to share whatever they had on their mind, and I, a humble reader of their prose, was just trying to decipher their message, save in my mind whatever was useful to me and ignore all the rest.
I don’t feel that with AI anymore. You can ask AI about anything, you can make a persona that sounds however you want, you can tell it to only give you information that is right, or to answer in a certain style. But to be honest? It takes away all the joy from what I just described. There is no… soul behind any of it. It’s just a soup of letters that happens to know something - but how do I know whether what it tells me is right or wrong?
AI has no incentive to be right or wrong. AI has no agenda (except for the companies that create it, of course). AI just spits out whatever you tell it to, but it’s up to you to know whether it’s a bunch of bullshit or not.
I know that because whenever I ask it about the things I learned during my hyperfocus phases over the last 15 years, I can tell when it’s spouting bullshit and when it’s right or gives me a new path to explore. But it’s totally random.
If I didn’t already know what’s right and what’s not, I couldn’t tell whether what AI tells me is useful or not.
I’ll give you a thought experiment - let’s imagine that instead of “The Internet” as I grew up with it, people default to using LLMs and sort of ignore The Internet for most of their lives, treating it as that weird place where people just hate each other or disagree with whatever you have to say. If I were to grow up with just LLMs, and not all the people who shared the knowledge on the internet that built who I am now, I’d essentially be shaped by a single entity that has absolutely no incentive in whatever it does or doesn’t do.
Think about it: when you read something on the internet, you’re going to come across at least a dozen people; if they all tell you whether X is the right solution for you or not, sure enough there is a consensus on that, right?
It gives you a starting point for exploration. But with AI - no matter how many times you ask it, no matter which prompts you use to steer its text generation - you’re interacting with a letter soup. A single soup. Sure, there are different models and different companies implementing them, but even considering that, it’s at most 10 or so soups that you start relying on to define you, your career, your knowledge and your decisions.
You can even tell the AI to be right or wrong, to omit something, or to convince you that something is or isn’t the right thing. Suddenly, you have millions of people influenced by just a few of these letter soups.
I sure as hell don’t want that anywhere near my knowledge or choices. For me, LLMs are only useful for finding what I already know but don’t remember the details of; for anything else, I cannot trust what they say. They are, and will always be, just a better interface for searching information, but since they cannot understand what they’re saying, I still rely on the internet to inform my choices and decisions.
The problem is that Google, search engines and the internet are increasingly becoming polluted with LLM output, making it really hard to find anything written by anyone you could consider an authority.
As for how I learn and explore now that LLMs are a thing? I can’t tell yet, but I feel like whatever I used to do is no longer viable for future generations.
I don’t know what to make of this yet, but I don’t see it as positive in any way. What I understand is that in the LLM age, finding the right, trusted information will be valued much, much more than it is right now. I’d even go as far as to say that the creation of LLMs will incentivize building better sources of truth and information than we have right now. But it’s going to take a while for everyone involved in their development to realize that.
My Career
The way I learn and explore greatly affects my career. I’m mentally wired in a way that doesn’t allow me to do boring work; it has to be interesting, it has to engage me in a way that makes me learn new things I can later utilize. But I must recognize that not everyone is this way.
This took me a while to understand - not everyone works the same way I do, and I think I’ve come to terms with that.
I’m trying to phrase this in a way that doesn’t sound rude, but I cannot, so I’ll try to work backwards and see where we arrive.
When I work on something, I generally try to understand every piece of it before I work on the whole system or component. Something inside of me doesn’t allow me to work on something I don’t feel I understand; it feels like navigating in a fog when I work on something whose concepts I cannot grasp. It usually means I don’t work well in corporate environments, as that kind of understanding is nearly impossible there most of the time.
I recognize that what I am saying sounds like bullshit, because hey - do I truly know how a CPU works? How RAM works? Why electricity flows? What the process is behind making an electronic chip? Do I know my hardware is bug-free? Surely not. So I don’t seem to adhere to my own principles.
But I still try to understand major parts of what I work with. Let me give you a few examples.
When I work with Python - I know there is the GIL, which means threads won’t actually execute Python code in parallel. I know that Go, C# and Java are GC-based languages, so when I work with them I have to be somewhat aware of how I make use of the memory. I know that C++/C/Rust don’t have garbage collection, so memory management becomes much more important, with Rust being a bit more on the safer side with its borrow checker.
And yeah, I don’t know the ins and outs of all the concepts I just mentioned, but the mere awareness of them gives me incredible leverage when I try to debug something or understand why it fails.
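To make that concrete, here’s a minimal sketch of the kind of thing that awareness helps you catch (assuming CPython, where the GIL lives): CPU-bound work doesn’t get faster just because you threw threads at it.

```python
# CPU-bound work on CPython: threads don't run Python bytecode in parallel,
# so the threaded version takes roughly as long as the sequential one.
import time
from threading import Thread

def busy(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 5_000_000

start = time.perf_counter()
busy(N)
busy(N)
print(f"sequential: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [Thread(target=busy, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"threaded:   {time.perf_counter() - start:.2f}s")  # no ~2x speedup here
```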
Ditto for networking protocols: I know there is DNS, UDP and TCP, and the major differences between UDP and TCP (connections and delivery guarantees, or the lack of them!). I know that newer versions of HTTP may use UDP instead of TCP, and that has some implications! As for DNS, it’s a distributed system, essentially a key-value store for domain records, which tends to be unreliable in some scenarios, especially when you try to use it for load balancing.
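Same idea here - a tiny sketch of that UDP vs TCP difference (loopback address and an arbitrary port, assuming nothing is listening on it): UDP will happily “send” to nobody, while TCP at least refuses to pretend.

```python
import socket

# UDP: fire and forget - no connection, no acknowledgement from the protocol itself.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello?", ("127.0.0.1", 9999))  # "succeeds" even with no listener there
udp.close()

# TCP: connect() has to complete a handshake first, so a missing listener
# fails loudly instead of silently dropping your data.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 9999))
    tcp.sendall(b"hello?")
except ConnectionRefusedError:
    print("TCP at least tells you nobody is home")
finally:
    tcp.close()
```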
And yet, I feel like most people don’t care about any of this. They will happily use all of these technologies and sort of try to make them work based on the outcomes they get; if it looks like it works to them - job’s done, nothing more to do. Even if you’re leaking memory, or relying on UDP to send some information without any way of acknowledging whether it was actually received - hey, it did work when I tested it, right?
I don’t blame you for that approach - I would be a hypocrite if I said that I truly understand everything and always make the right choice based on that. Yet, something inside me always tells me: “hey, remember that thing you used there? Try to read up more on it, just so you’re sure you used it right”. Might be some weird OCD, a mixture of ADHD and other disorders, maybe I’m just different. But I’ve met a lot of folks like me and had great pleasure working with them; even when they yelled at me for doing something obviously wrong, I still learned a lot every time, because it forced me to re-assess what I know and try to do better next time.
Increasingly nowadays, I see people who do something, ask me to review their work, and I’m like - “You have no idea what you’re doing”. Obviously, it’s considered rude to say that, so I try to wrap it in nicer words. Maybe I am the one in the wrong? Sometimes I am, so I try to stay open to that possibility.
In the past, when that happened, people pointed to a StackOverflow answer as the reason they did it this way, or to documentation that said so, or to someone blogging about it and sharing it with the internet. Sometimes I was wrong, sometimes I was right and they hadn’t understood the answer they read. But we had shared ground, a reference, that we could use to discuss and maybe explore further (Is the SO answer correct, or should I check the documentation? Maybe it’s just outdated, if it’s from 2005 and we’re in 2015?).
It was really hard to do anything without a reference if you had no idea what you were doing. So if someone wrote something, I could assume they had pulled it from somewhere, right?
Nowadays, more and more answers tend to be: the LLM told me to do so.
LLM told me to do so
How do you approach a discussion with an answer like that?
Think about it. Previously, we could’ve used documentation, source code, answers on the internet, discussions on forums, private conversations on IRC, a friend or colleague who guided them. That provided a ground for discussion that was rooted in reality.
The moment someone says “the LLM created it”, or “the LLM says this is the way”, or - a rather convoluted version someone used the other day - “I had it reviewed by my agents and they say it’s correct”, that ground is gone.
Every time someone says that to me, a tear is shed. Not only do they have no idea what they’re doing - that’s normal, it happens to all of us! - they’re justifying whatever they’re doing based on what the letter soup has told them. There is no discussing this. And they’re using that LLM to create an illusion of confidence: not only do they not know what they’re doing, but there is this friend of theirs called <Insert your favorite LLM name> that guides them and vouches for the solution. How do I even consult this friend of theirs? What are its references? Where did it learn about this?
I feel like it’s a lost battle the moment someone says their decision is based on LLM output. There is absolutely no resource you can throw at them that the LLM wouldn’t be able to dismiss as wrong at a thousand tokens per minute; it takes just one prompt to turn malicious compliance into an endless battle.
Imagine your colleague puts into the prompt: “I’ve got a few suggestions from the review process, but I think they’re wrong, please disprove them. I don’t care what’s right”. Or, less malicious-sounding: “I need to do this ASAP, skip anything that isn’t important”.
You don’t know whether they put that in their prompt; you can hope they’re not that evil, but the LLM will happily spit out whatever they ask for, no matter how much the AI companies try to coerce it not to. And you don’t even know the “source” of their argument or how they arrived at it! Maybe their initial prompt was just totally wrong?
I refuse to fight that battle or even participate in these discussions. If I care about it, I’ll just tell them to pound sand; if I don’t, I’ll let it pass and become someone else’s problem. The only solution I see is not to participate; every other outcome leads to burnout for me.
I refuse to waste time discussing with someone who is just going to feed my answers into the LLM - if you do that, just remove yourself from the discussion - why even bother being there? The LLM knows best anyway.
Having said all of that, this is now starting to affect my career much more, and I cannot just ignore the paradigm shift. I can’t really tell whether our industry is making the correct choice or not, but I know which choice I will make.
I also don’t believe this only applies to the software industry, and it has really big implications that I don’t think anyone understands yet.
What to do?
All of that leads to the following conclusions for me.
In the past, only a few people were dedicated enough to become experts in their areas of interest, and most of them shared their knowledge so others could learn and be incentivized to explore further.
With the arrival of LLMs, there is no incentive to become an expert anymore. Whatever you say can be easily “debunked” with just a few words in an LLM prompt, and the burden of proving it wrong falls on you. It’s a battle no one can win: it takes a lot of mental effort to make a sound argument and even more effort to find references for what you’re saying, but it only takes a few cents for the other side to make whatever you say irrelevant. Why even bother?
Why even bother saying anything on the internet now, if it’s only going to be read once by an LLM, after which you can just cross your fingers that it reaches the correct conclusion and doesn’t ignore you entirely, because you were just a single person against hundreds of LLM-generated opinions that said something else?
It totally disincentivizes becoming an expert at anything - why bother, if an LLM can seemingly do the same thing cheaper and much faster?
How do you even become an expert now, if whatever you read comes with no guarantee that at least someone went through that text, said “no, that doesn’t make sense”, and declined to publish something obviously wrong? Sure, you could publish bullshit before, but it wasn’t effortless, your name was on the line, and generally you were not the only one with an opinion on the matter, so we had a choice about whether you were right or someone else was. Now it’s just LLMs fighting LLMs with different prompts. Text with no meaning, soul or person behind it, just a soup of letters that looks like something you can read. A truly dream-like state. Have you ever had a dream where you were convinced you had solved one of the biggest problems humanity ever faced, only to realize you were just dreaming and it made no sense? LLMs are just that, except they don’t realize it, and you, fellow reader, are destined to read their hallucinations.
I personally believe this is the wrong road to take. I refuse to make LLMs my second brain. Even if my own brain is not good enough to possess all the knowledge that has ever existed, I’d rather try anyway and feel that I understand what I’m doing than become a puppet of AI companies, doing whatever the AI feels like telling me today. I just hope there are enough other people who feel similarly to keep us sane.
Otherwise, it might just be a case of humans need not apply (video).
As for my career?
I think I’ll just have to accept things as they are and develop some personal principles to guide me moving forward. One of these principles could be that I don’t engage in discussions where the other party relies on an LLM to back up their theories. I’ve yet to see how that works out for me.
No LLMs were used to write this article. Just a spellchecker and my non-native English.