I think you’re on the wrong community for this question.
The thing regularly referred to as “AI” of late is more accurately called generative AI, or large language models (LLMs). There’s no capacity for learning from humans in the sense you mean; it’s pattern matching over large datasets, boiled down to vectors that the model uses to predict the most likely next word in a response to a prompt. You could argue that that’s what people do too, but that’s a massive oversimplification. You’re right to say it doesn’t have the ability to form thoughts and views. That said, like a broken clock, an LLM can put out words that match up with existing views pretty darn easily!
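To make the “most-likely next word” part concrete, here’s a toy sketch of that final step. The vocabulary and scores are made up for illustration; a real LLM produces scores over tens of thousands of tokens with a huge neural network, but the last step is essentially this:

```python
import math

# Hypothetical toy vocabulary and model scores (logits) for the next token.
# In a real LLM these come from a neural network, not a hand-written list.
vocab = ["cat", "dog", "mat", "hat"]
logits = [2.0, 1.0, 3.5, 0.5]

# Softmax turns raw scores into a probability distribution over the vocabulary.
exp_scores = [math.exp(x) for x in logits]
total = sum(exp_scores)
probs = [x / total for x in exp_scores]

# Greedy decoding: pick the single most likely next token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "mat" — it has the highest score
```

The model never “believes” anything about cats or mats here; it just emits whichever token scored highest given the preceding text.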
You may be talking about artificial general intelligence (AGI), which is something we’ve not seen yet and have no timeframe for. That may be able to have beliefs… but again, there’s not even a suggestion of it being close to happening. In my opinion, LLMs aren’t even a good indicator or precursor of that coming soon.
TL;DR: An LLM (or generative AI) can’t have or form beliefs.
I suspect he means less “too much experience” and more “not wanting to pay his desired salary,” though I may be mistaken.