Are we just bots?

Do LLMs resemble us, or do we resemble them?

7th December 2025

Psychology · Random Thoughts

I've been having a shitty few weeks. But over the last ten days I've gotten to hang out and interact with so many different people from diverse backgrounds. It feels like I've finally burst free of the tech bubble that is Bangalore and come face to face with discomfort, harsh truths of life, and other things. Of the several trains of thought and conclusions I've arrived at over the last week, one staggering discovery I've made personally (by interacting with a lot of people) is how similar we are to present-day LLMs!

Huh?

Well, I might not make complete sense here; I'd place myself somewhere on the spectrum between being very grounded in reality and believing we live in a simulation. But here's what I noticed when I heard people talk about all kinds of topics: family, superstition, religion, caste, and so on. When listening to others and trying to understand them, people almost always form patterns as they take in new information. Information is almost never consumed raw; it is filtered through some pre-determined pattern (weights and biases?). Let me illustrate with an example:

Example 1: Persons A and B are having a discussion about superstition. Person B says, "Oh, I'm very unhappy with life / such-and-such bad thing has happened to me," and Person A reassures them: "Maybe it's because you haven't followed this religious practice or performed a certain ritual." Person A has implicitly broken down the sorrow that Person B conveyed into some framework they already had in mind, and responded from within it. Now suppose a certain Person C interjects and says, "That's alright, don't worry about superstition. I understand your sorrows, so just feel better for now." One might think this is more empathetic, but Person C has also processed the information in some way known to them, picking the best response they could from a wide variety of possible good responses. Do you see the similarity to machines/neural networks here?

Example 2: Let's say Person A and Person B are in a heated debate. For the sake of this example, say the topic is "AI art is not art". Someone coming from an art background could argue that AI art is just plagiarism: a model trained on gajillions of paintings cannot be a painter. Person B could rebut, "But no, it's just a tool that makes art more accessible. We had cameras earlier; now we have more sophisticated tools." In debates like these, you NEVER see either party concede that they might be wrong (read: EGO), because their beliefs have solidified and it's genuinely hard for them to look at the same issue from a completely different angle. Notice how when we TALK TO ARGUE and not to UNDERSTAND, we only interpret these messages HOW WE WANT TO?

What I'm trying to break down is that when information passes from one person to another, our interpretation of it depends on things like the mode of communication, our previous experiences, whether or not we have a valid response ready, and how the new information fits our existing knowledge base. Sounds a bit like deep learning? Which raises a huge question: are we really living in a simulation? Are we just prompts? The simulation argument states that if we are ever able to create perfect ancestor simulations, then it's very likely we already live in one. The logic is recursive: if humans can create a perfect simulation of life, the simulated beings could create simulations of their own, and so on; so it's unlikely that we are the original, un-simulated layer.
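The "weights and biases" analogy above can be made concrete with a toy sketch. This is purely illustrative, not a real model of cognition: two "listeners" receive the exact same message vector, but because each carries different fixed weights (their priors and experiences, in the analogy), they produce different response distributions. All the numbers here are arbitrary values I've made up for the sketch.

```python
import numpy as np

def interpret(message, weights, bias):
    # One linear layer followed by a softmax: the "interpretation"
    # comes out as a probability distribution over possible responses.
    logits = weights @ message + bias
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# The same piece of information is given to both listeners.
message = np.array([1.0, 0.5, -0.2])

# Listener A and Listener B carry different (arbitrary) weights,
# standing in for different prior experiences and beliefs.
w_a = np.array([[0.9, 0.1, 0.0],
                [0.0, 0.8, 0.2]])
w_b = np.array([[0.1, 0.9, 0.5],
                [0.7, 0.0, 0.3]])

resp_a = interpret(message, w_a, np.zeros(2))
resp_b = interpret(message, w_b, np.zeros(2))

# Identical input, different "interpretations":
print(resp_a)
print(resp_b)
```

Same input, different filters, different conclusions: that's the whole parallel in a dozen lines.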

Kanan Gill has authored a witty and funny novel called Acts of God, which plays with this very idea of running simulations. But aren't neural networks modelled after the human brain? Both yes and no. While the term "neural network" borrows terminology from biology, the way the brain works neurologically and the way machines work are very different. For example, much about the brain's electrical impulses is still a mystery that we are only slowly unravelling with modern science and technology.

Bottomline?

I can't help but draw a lot of parallels between humans and machines. I used to be quite the AGI sceptic, but seeing AI progress at its current rate, I'm more than optimistic about all the good a super-powerful AI could bring our way. My recent escapades with people, and seeing how we interpret information, have only cemented my belief that we're not that far off. (On a final note, do read Machines of Loving Grace by Dario Amodei.)