I think you’re projecting consciousness onto those terms more than you need to. An algorithm is a decision-making process devoid of consciousness (as far as we know). AI is capable of self-determination insofar as it’s capable of acting rather than merely reacting, i.e. without total dependence on input. We just need our self-determination and decision-making to be special, so we present them as functions of our consciousness.
And a curse on any philosopher who tries to define consciousness as some variation of “that thing that makes humans special”; any work they build on that is doomed.
The article seems to think the comparison of human intelligence with artificial intelligence is caused by naming it “intelligence” in the first place, which would be a fallacy rooted in the ambiguous semantics of inherently vague language. (And saying “the article thinks” shouldn’t lead anyone to assume I believe articles have minds; it’s just shorthand for the relationship between the idea and its presentation.)
The naming convention doesn’t help, but a more direct cause is that those funding the research are most interested in automation that replaces people, so the idea is sold to them that way and built towards that goal. It’s a commonly accepted inevitability, going back at least to Rosie the robot maid on The Jetsons. I agree with the article that it doesn’t need to be; it would be better for humanity if we thought of AI as enhancing human intelligence rather than replacing it, and built towards those interests.
Unfortunately, the motivation of capitalism is to pay as few people as possible, as little as possible, while keeping quality just high enough to maximize profit. Convincing the people holding the purse strings to improve worker capability rather than outright replace expensive (now mental) labor with high-output automation is a tough sell. Maybe the inability to profit from LLMs will convince them, but I doubt it.