This week, I decided to try out both LLMs and Markov models to see how they compare in generating text. I installed LLaMA 3, kicked off a conversation, and started exploring what it could do. By testing it alongside the simpler Markov model, I’m hoping to get a feel for how each one works—what they’re good at, and where they fall short. It’s been cool getting hands-on with both and seeing the differences in how they respond!
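For anyone curious what the "simpler Markov model" side of this comparison looks like, here's a minimal sketch of a word-level Markov text generator (this is my own illustrative example, not the exact code I ran): it maps each word to the words that follow it in a corpus, then generates text by repeatedly sampling a random successor.

```python
import random
from collections import defaultdict

def build_markov_chain(text, order=1):
    """Map each tuple of `order` words to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain from a random starting key, sampling successors."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length - len(output)):
        successors = chain.get(key)
        if not successors:  # dead end: no observed successor
            break
        output.append(rng.choice(successors))
        key = tuple(output[-len(key):])
    return " ".join(output)

# Tiny toy corpus just to show the mechanics
corpus = "the llama ate the grass and the llama slept in the sun"
chain = build_markov_chain(corpus)
print(generate(chain, length=10, seed=42))
```

Unlike an LLM, this model has no notion of meaning or context beyond the last `order` words, which is exactly why its output tends to wander while LLaMA 3 stays coherent.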
I discovered an interesting scenario where LLaMA 3's attitude changed noticeably depending on whether or not I gave it a name.
To test this out, I started with a simple, random question: 'How are you today?'
Predictably, it didn’t respond with something like 'I'm doing great today,' since a machine has no emotions. Instead, it answered in a formal, measured way, staying completely neutral.
Next, I asked it, 'Are you happy today?'—knowing it couldn’t actually feel anything or answer in a personal way, but curious to see how it would respond. Its tone stayed very formal, and it kept trying to redirect me toward a different question.
Then, I asked it about the weather in NYC. Since it doesn’t have access to real-time data, it couldn’t give a direct answer, but instead, it provided some sources where I could check the weather myself.
After that, I asked it about its name. Since it didn’t have a specific name, I decided to give it one.
From then on, I started calling it 'Lama,' and it surprisingly showed a bit more warmth—using exclamation marks and a less formal tone. It’s interesting to see how its attitude seemed to change just from giving it a name.