What we’re currently calling AI isn’t AI; it’s just a language processing system that takes its best guess at a response from a database of information pilfered from the internet, like a more sophisticated Google.
It can’t really think for itself and its answers can be completely wrong. There’s nothing intelligent about it.
I hate having to explain this shit to literal Comp Sci majors.
Even if ChatGPT were literally a perfect copy of a human being, it would still be 0 steps closer to a general intelligence, because it does not fucking understand WHAT or HOW to actually do the things it suggests.
What if AI passes the Turing test not because computers got intelligent but because people got dumber?
that’s just a fact, not an opinion
Indeed, but it’s not the popular opinion among the general public, and it’s currently the biggest buzzword in tech even if it’s wrong. People are throwing serious money at “AI” even if it isn’t AI.
There is literally nothing unpopular about this opinion.
There is literally no opinion in OP’s opinion.
This whole “OpenAI has Artificial General Intelligence but they’re keeping it secret!” thing is like saying Microsoft had ChatGPT 20 years ago with Clippy.
Humans don’t even know what intelligence is - the thing we invented to try to measure who’s got the best brains. We literally don’t even have a scientific definition of the word, much less the ability to test it, so we definitely can’t program it. We are a veeeeerry long way from even understanding how thoughts and memories work; and the thing we’re calling “general intelligence”? We have no fucking idea what that even means; there’s no way a bunch of computer scientists can feed enough Internet to an ML algorithm to “invent” it. (No shade, those peepos are smart - but understanding wtf intelligence is isn’t going to come from them.)
One caveat tho: while I don’t think we’re close to AGI, I do think we’re very close to being able to fake it. Going from ChatGPT to something that we can pretend is actual AI is really just a matter of whether we, as humans, are willing to believe it.
You don’t have to crack the philosophical nature of intelligence to create intelligence (assuming “create intelligence” is a thing, I guess). The inner workings of even the simplest current models are incomprehensible, but the process of creating them is not. Presupposing that there is a difference between “faking” intelligence and “true” intelligence, I think you’re right, but I dunno if that distinction is right.
You don’t have to crack it to make it, but you have to crack it to determine whether you’ve made it. That’s kinda the trick of the early AI hype - notably that NYT article that fed ChatGPT some simple sci-fi, AI-coming-to-life prompts and it generated replies based on its training data - or, if you believe the NYT author, it came to life.
I think what you’re saying is a kind of “can’t define it but I know it when I see it” idea, and that’s valid, for sure. I think you’re right that we don’t need to understand it to make it - I guess what I was trying to say was, if it’s so complex that we can’t understand it in ourselves, I doubt we’re going to be able to develop the complexity required to make it.
And I don’t think that the inability to know what has happened in an AI training algorithm is evidence that we can create a sentient being.
That said, our understanding of consciousness is so nascent that we might just be so wrong about it that we’re looking in the wrong place, or for the wrong thing.
We may understand it so badly that the truth is the opposite of what I’m saying: people have said (“people have said” is a super red flag, but I mean spiritualists and crackpots, my favorite being the person who wrote The Secret Life of Plants) that consciousness is all around us, that all organized matter has consciousness. Trees, for example - but not just trees, also the parts of a tree; a branch, a leaf; a whole tree may have a separate consciousness from its leaves - or, and this is what always blows my mind: every cell in the tree except one. And every cell in the tree except two, and then every cell in the tree except a different two. And so on. With no way to communicate with them, how would a tree be aware of the consciousness of its leaves?
How could we possibly know if our liver is conscious? Or our countertop, or the grass in the park nearby?
While that’s obviously just thought experiment bullshit, my point is, we don’t know fucking anything. So maybe we created it already. Maybe we will create it but we will never be able to know whether we’ve created it.
AIXI is a (good, in my opinion) short mathematical definition of intelligence. Intelligence != consciousness or anything like that though.
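For reference, and hedging because I’m writing this from memory of Hutter’s papers: the AIXI agent is defined by an expectimax over all future action/percept sequences, with environments weighted by a Solomonoff-style simplicity prior. Roughly:

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

where the $a_i$, $o_i$, $r_i$ are actions, observations, and rewards, $U$ is a universal Turing machine, and $\ell(q)$ is the length of a program $q$ that reproduces the history so far - shorter consistent programs get exponentially more weight. “Intelligence” in that framework is just expected reward across all computable environments; nothing in it mentions consciousness.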
Also, how do you know we aren’t faking consciousness? I sometimes wonder if things like “free will” and consciousness are just illusions and tricks our brains play on us.
Well, if you experience consciousness, that’s what consciousness is. As in, the word and concept “consciousness” means being conscious, the way you experience being conscious right now (unless of course you’re unconscious as I write this…). Free will does not enter into it at the basic level, nothing says you’re not conscious if you do not have free will. So what would it really mean to say consciousness is an illusion? Who and what is having the illusion? Ironically, your statement assumes the existence of a higher form of consciousness that is not illusory (which may very well exist but how would we ever know?). Simply because a fake something presupposes a real something that the fake thing is not.
So let’s say we could be certain that consciousness is purely the product of material processes in the brain. You still experience consciousness; that does not make it illusory. Perhaps this seems like I’m arguing semantics, but the important takeaway is rather that these kinds of arguments invariably fall apart under scrutiny. Consciousness is actually the only thing we can be absolutely certain exists; in this, Descartes was right.
So, it’s meaningful to say that a language model could “fake” consciousness - trick us into believing it is an “experiencing entity” (or whatever your definition would be) by giving convincing answers in a conversation - but not really meaningful to say that actual conscious beings somehow fake consciousness. Or, that “their brains” (somehow suddenly acting apart from the entity) trick them.
Hmm, I guess you’re right. I guess what I was vaguely thinking of was that we don’t have as much (conscious) control over ourselves as people seem to believe. E.g. we often react to things before we consciously perceive them, if we ever do perceive them. I was probably thinking about experiments I’ve heard of involving Benjamin Libet’s work, and my own experiences of questioning why I’ve made some decisions, where at the time I made the decision, I rationalized the reason for doing so in one way, but in retrospect, the reason for making those decisions was probably different from what I was consciously aware of at the time. I think a lot of consciousness is just post-hoc rationalization, while the subconscious does a lot of the work. I guess this still means that consciousness is not an illusion, but that there are different “levels” of consciousness, and the highest level is mostly retrospective. I guess this all isn’t really relevant to AI though, lol.
I like this take - I read the refutation in the replies and I get that point, but consciousness as an illusion to rationalize stimulus response makes a lot of sense - especially because the reach of consciousness’s control is much more limited than it thinks it is. Literally copium.
When I was a teenager I read an Appleseed manga and it mentioned a tenet of Buddhism that I’ll never forget - though I’ve forgotten the name of the idea (and I’ve never heard anyone mention it in any other context, and while I’m not a Buddhist scholar, I have read a decent amount of Buddhist stuff).
There’s some concept in Japanese Buddhism that says that, while reality may be an illusion, the fact that we can agree on it means that we can at least call it “real”.
(Aka Japanese Buddhist describes copium)
For the record, comp sci major here.
So I understand all that, but my counterpoint: can we prove by empirical measure that humans operate in a way that is significantly different? (If there is such proof, I would love to see it, because I was cornered by a similar talking point when making a similar argument some weeks ago.)
Can you make a logical decision on your own even when you don’t have all the facts?
The current version of AI cannot; it makes guesses based on how we’ve programmed it, just like every other computer program.
I fail to see the distinction between “making a logical decision without all the facts” and “make guesses based on how [you’ve been programmed]”. Literally what is the difference?
I’ll concede that human intelligence is several orders of magnitude more powerful, can act upon a wider space of stimuli, and can do it at a fraction of the energy cost. That definitely sets it apart. But I disagree that it’s the only “true” form of intelligence.
Intelligence is the ability to accumulate new information (i.e. memorize patterns) and apply that information to respond to novel situations. That’s exactly what AI does. It is intelligence. Underwhelming intelligence, but nonetheless intelligence. The method of implementation, the input/output space, and the matter of degree are irrelevant.
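To put that definition in concrete terms, here’s a toy sketch (my own invented names, not any real AI system): an “agent” that memorizes past patterns and responds to a novel input by analogy to the closest one it has stored.

```python
# Minimal sketch of "accumulate patterns, apply them to novel situations":
# a one-nearest-neighbour learner. Purely illustrative.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class TinyLearner:
    def __init__(self):
        self.memory = []  # accumulated (features, response) pairs

    def learn(self, features, response):
        """Accumulate new information: just store the pattern."""
        self.memory.append((features, response))

    def act(self, novel_features):
        """Respond to a novel situation by recalling the closest stored pattern."""
        if not self.memory:
            return None
        _, response = min(self.memory, key=lambda m: distance(m[0], novel_features))
        return response

agent = TinyLearner()
agent.learn([0.0, 0.0], "stay")
agent.learn([1.0, 1.0], "flee")
print(agent.act([0.9, 0.8]))  # -> "flee": a never-seen input handled by analogy
```

Underwhelming, like I said, but it accumulates information and applies it to inputs it has never seen.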
It’s not just about storage and retrieval of information but also about how (and if) the entity understands the information and can interpret it. This is why an AI still struggles to drive a car: it doesn’t actually understand the difference between a small child and a speedbump.
Meanwhile, a simple insect can interpret stimulus information and independently make its own decisions without assistance or having to be pre-programmed by an intelligent being on how to react. An insect can even set its own goals based on that information, like acquiring food or avoiding predators. The insect does all of this because it is intelligent.
In contrast to the insect, an AI like ChatGPT is not any more intelligent than a calculator, as it relies on an intelligent being to understand the subject and formulate the right stimulus in the first place. Then its result is simply an informed guess at best; there’s no understanding like an insect has that it needs to zigzag in a particular way because it wants to avoid getting eaten by predators. Rather, AI as we know it today is really just a very good information retrieval system and not intelligent at all.
“Understanding” and “interpretation” are themselves nothing more than emergent properties of advanced pattern recognition.
I find it interesting that you bring up insects as your proof of how they differ from artificial intelligence. To me, they are among nature’s most demonstrably clockwork creatures. I find some of their rather predictable “decisions” to some kinds of stimuli to be evidence that they aren’t so different from an AI that responds “without thinking”.
The way you can tease out a response from ChatGPT by leading it by the nose with very specifically worded prompts, or put it on the spot to hallucinate facts that are untrue, is, in my mind, no different than how so-called “intelligent” insects can be stopped in their tracks by a harmless line of Sharpie ink, or be made to death spiral with a faulty pheromone trail, or to thrust themselves into the electrified jaws of a bug zapper. In both cases their inner machinations are fundamentally reactive and thus exploitable.
Stimulus in, action out. Just needs to pass through some wiring that maps the I/O. Whether that wiring is fleshy or metallic doesn’t matter. Any notion of the wiring “thinking” is merely anthropomorphism.
You said it yourself; you as an intelligent being must tease out whatever response you seek out of ChatGPT by providing it with the correct stimuli. An insect operates autonomously, even if in simple or predictable ways. The two are very different ways of responding to stimuli even if the results seem similar.
The only difference you seem to be highlighting here is that an AI like ChatGPT is only active when queried while an insect is “always on”. I find this to be an entirely irrelevant detail to the question of whether either one meets the criteria for intelligence.
I have to say no, I can’t.
The best decision I could make is a guess based on the logic I’ve determined from my own experiences that I would then compare and contrast to the current input.
I will say that “current input” for humans seems to be broader than what is achievable for AI, and the underlying mechanism that lets us assemble our training set (read: past experiences) into useful and usable models appears to be more robust than current tech. But to the best of my ability to explain it, this appears to be a comparable operation to what is happening with the current iterations of LLMs/AI.
Ninjaedit: spelling
If you can’t make logical decisions then how are you a comp sci major?
Seriously though, the point is that when making decisions you as a human understand a lot of the ramifications of them and can use your own logic to make the best decision you can. You are able to make much more flexible decisions and exercise caution when you’re unsure. This is actual intelligence at work.
A language processing system has to have its prompt framed in the right way, it has to have knowledge about the topic in its database, and it only responds in the way it’s programmed to. It doesn’t understand the ramifications of what it puts out.
The two “systems” are vastly different in both their capabilities and output. Even with image processing, AI absolutely sucks at driving a car, for instance, whereas most humans can do it safely with little thought.
I don’t think that fully encapsulates a counterpoint, but I think it has the beginnings of a solid counterpoint to the argument I’ve laid out above (again, it’s not one I actually devised, just one that really put me on my heels).
The ability to recognize when it’s out of its depth does not appear to be something modern “AI” can handle.
As I chew on it, I can’t help but wonder what it would take to have AI recognize that. It doesn’t feel like it should be difficult to have a series of nodes along the information processing matrix to track “confidence levels”. Though, I suppose that’s kind of what is happening when the creators of these projects try to keep their projects from processing controversial topics. It’s my understanding those instances act as something of a short circuit where (if you will) when confidence “that I’m allowed to talk about this” drops below a certain level, the AI will spit out a canned response instead of actually attempting to process the input against the model.
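To make that short-circuit idea concrete, here’s a rough sketch of how such a guardrail could be wired (I have no idea how any actual vendor implements theirs; every name and threshold here is invented):

```python
# Hypothetical "short circuit": if a confidence score that the topic is
# allowed drops below a threshold, return a canned response instead of
# running the prompt through the model at all. Illustrative only.

CANNED_RESPONSE = "I'm not able to help with that topic."
ALLOWED_THRESHOLD = 0.8  # invented cutoff

def allowed_confidence(prompt: str) -> float:
    """Stand-in for a classifier scoring 'am I allowed to talk about this?'."""
    flagged_words = {"weapon", "exploit"}
    hits = sum(word in prompt.lower() for word in flagged_words)
    return max(0.0, 1.0 - 0.5 * hits)

def respond(prompt: str, model) -> str:
    if allowed_confidence(prompt) < ALLOWED_THRESHOLD:
        return CANNED_RESPONSE   # short circuit: the model never sees the input
    return model(prompt)         # otherwise, actually process it

# Toy usage with a stand-in "model":
print(respond("how do I write an exploit?", model=lambda p: "(model output)"))
```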
The above is intended as more of a brain dump than a coherent argument. You’ve given me something to chew on, and for that I thank you!
Well, it’s an online forum and I’m responding while getting dressed and traveling to an appointment, so concise responses are what you’re gonna get. In a way it’s interesting that I can multitask all of these complex tasks reasonably effortlessly, something else an existing AI cannot do.
You are ~30 trillion cells all operating concurrently with one another. Are you suggesting that is in any way similar to a Turing machine?
Yes? I think that depends on your specific definition and requirements of a Turing machine, but I think it’s fair to compare the amalgamation of cells that is me to the “AI” LLM programs of today.
While I do think that the complexity of input, output, and “memory” of LLM AIs is limited in current iterations (and thus makes it feel like a far cry from “human” intelligence), I do think the underlying process is fundamentally comparable.
The things that make me “intelligent” are just a robust set of memories, lessons, and habits that allow me to assimilate new information and experiences in a way that makes sense to (most of) the people around me. (This is abstracting away that this process is largely governed by chemical reactions, but considering consciousness appears to be just a particularly complicated chemistry problem reinforces the point I’m trying to make, I think).
My definition of a Turing machine? I’m not sure you know what Turing machines are. It’s a general purpose computer, described in principle. And, in principle, a computer can only carry out one task at a time. Modern computers are fast, and they may have several CPUs stitched together and operating in tandem, but they are still fundamentally limited by this. Bodies don’t work like that. Every part of them is constantly reacting to its environment and its neighboring cells - concurrently.
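Concretely, here’s roughly what a Turing machine boils down to - one read, one write, one move per step, nothing concurrent (toy states and rules, not anything canonical):

```python
# Bare-bones Turing machine step loop, just to make "one operation at a
# time" concrete. States, symbols and rules are arbitrary examples.

def run(tape, rules, state="start", head=0, halt="halt"):
    cells = dict(enumerate(tape))         # sparse tape
    while state != halt:
        symbol = cells.get(head, "_")     # read the single cell under the head
        write, move, state = rules[(state, symbol)]
        cells[head] = write               # write one symbol
        head += 1 if move == "R" else -1  # move one cell
        # ...and that's the entire machine: strictly one step after another.
    return [cells[i] for i in sorted(cells)]

# Example rule table: flip 0s and 1s until a blank is reached.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(list("0110"), rules))  # ['1', '0', '0', '1', '_']
```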
You are essentially saying, “Well, the hardware of the human body is very complex, and this software is(n’t quite as) complex; so the same sort of phenomenon must be taking place.” That’s absurd. You’re making a lopsided comparison between two very different physical systems. Why should the machine we built for doing sums just so happen to reproduce a phenomenon we still don’t fully understand?
That’s not what I intended to communicate.
I feel the Turing machine portion is not particularly relevant to the larger point. Not to belabor it, but to be as clear as I can be: I don’t think, nor intend to communicate, that humans operate in the same way as a computer; I don’t mean to say that we have a CPU that handles instructions in a (more or less) one-at-a-time fashion with specific arguments that determine the flow of data, as a computer would do with assembly instructions. I agree that anyone arguing human brains work like that is missing a lot in both neuroscience and computer science.
The part I mean to focus on is the models of how AIs learn, specifically in neural networks. There might be some merit in likening a cell to a transistor/switch/logic gate for some analogies, but for the purposes of talking about AI, I think comparing a brain cell to a node in a neural network is most useful.
The individual nodes in a neural network have minimal impact on converting input to output, yet each one does influence the processing from one to the other. And with the way we train AI, how each node tweaks the result depends solely on the past input that has been given to it.
In the same way, when met with a situation, our brains process information comparably: any given input will be processed by a practically uncountable number of neurons, each influencing our reactions (emotional, physical, chemical, etc.) in minuscule ways based on how our past experiences have “treated” those individual neurons.
In that way, I would argue that the processes by which AIs are trained and operated are comparable to those of the human mind, though they do seem to lack complexity.
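For what it’s worth, here’s the kind of “node” I’m picturing - a single toy unit whose weights get nudged by each past input during training (illustrative numbers, not any real framework):

```python
import math
import random

# Toy sketch of one "node" in a neural network: it contributes a small
# weighted tweak to the signal, and training nudges its weights based
# only on the past inputs it has been shown. Purely illustrative.

random.seed(0)

class Node:
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.bias = 0.0

    def forward(self, inputs):
        """The node's contribution: a weighted sum squashed into (0, 1)."""
        z = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))

    def train_step(self, inputs, target, lr=0.5):
        """Adjust weights a little, driven solely by this input/target pair."""
        out = self.forward(inputs)
        grad = (out - target) * out * (1.0 - out)
        self.weights = [w - lr * grad * x for w, x in zip(self.weights, inputs)]
        self.bias -= lr * grad

node = Node(2)
for _ in range(2000):                    # "past experiences" shaping the node
    node.train_step([0.0, 1.0], 1.0)
    node.train_step([1.0, 0.0], 0.0)
print(round(node.forward([0.0, 1.0]), 2), round(node.forward([1.0, 0.0]), 2))
```

Scale that up by billions of nodes and a lot more training data and you get the models we’re arguing about; the analogy to neurons is loose, but that’s the process I mean.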
Ninjaedit: I should proofread my post before submitting it.
I agree that there are similarities in how groups of nerve cells process information and how neural networks are trained, but I’m hesitant to say that’s the whole picture of the human mind. Modern anesthesiology suggests microtubules, structures within cells, also play a role in cognition.
Right.
I don’t mean to say that the mechanism by which human brains learn and the mechanism by which AI is trained are 1:1 directly comparable.
I do mean to say that the process looks pretty similar.
My knee-jerk reaction is to analogize it as comparing a fish swimming to a bird flying. Sure, there are some important distinctions (e.g. birds need to generate lift while fish can rely on buoyancy), but in general the two do look pretty similar (i.e. they both take a fluid medium and push it to generate thrust).
And so with that, it feels fair to say that learning, the storage and retrieval of memories/experiences, and the way that stored information shapes our subconscious (and probably conscious too) reactions to the world around us seem largely comparable to the processes that underlie the training of “AI” and LLMs.
That’s still called AI. Models under the ANN umbrella imitate nerve cells. What you’re talking about is AGI.
This is an objective fact, not an unpopular opinion
Disagreeing with the established definition of a term is certainly an opinion.
This is not the established definition though? Just because something is widely believed does not mean that it is the definition.
What makes a definition then if it’s not the usage in common parlance and by experts in the field?
I don’t think this position qualifies for that meme because there are legions of people who agree.
On Lemmy definitely.
What is a computer but bits of metal, stone, and plastic upon which electrical impulses flip individual bits? What is a human brain but a bunch of goop doing the same thing? That’s the thing about emergent properties: they kind of emerge.