Explain why you think this.
Artificial general intelligence is a name we give to the idea that a computer program can synthesize information in an intelligent manner for any application. In other words, it needs to have predictive models for every stimulus, and be able to adapt to new types of stimuli by analyzing its knowledge and applying predictive logic freely. That last part is the key: how do you teach a computer to make decisions without telling it what to do? Computers rely on "if x, then y"; they need recognizable prompts. How exactly would you go about doing this without it just being a predictive model? To be an AGI, the computer would need to know why x, and use that information in tandem with its physical knowledge in order to make a judgement. That is not possible now, and might not be possible ever.

AIs right now still use basic "if x, then y" structuring, just very, very complicated. Modern AIs are predictive in nature and are based on human knowledge, which means they cannot reason freely; they just imitate what they predict to be the optimal answer using identified word structures in the prompt. There is no "thinking". Because of this, they will always be limited and untrustworthy, because they cannot create their own models to learn from when given new information that humans haven't already defined for them.

When humans learn, we build a model of the subject unconsciously, using symbols to remember its key parts. We use all these models together in everyday life, with the relationships between them constantly shifting. AI lacks this fluidity, and as AIs become more complicated, they become more rigid. They don't know how to shift their structures to rule out inconsistencies. They don't know the real world; all they know is what we tell them the real world is.

All of these problems might be solvable, or not. It seems simple, but it's not. An AGI would need to be able to answer every question a human could, drive a car, recognize colours and sounds and time and so many other things. That is just so, so far away right now it might as well be impossible.
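The contrast between "if x, then y" rules and pure prediction can be sketched in a few lines. This is only a toy illustration under my own assumptions (the corpus, function names, and responses are all made up): the rule-based function can only handle prompts it was explicitly given, while the bigram "model" just counts which word followed which in its training text and imitates that, with no idea why.

```python
from collections import Counter, defaultdict

# Rule-based "if x, then y": only prompts someone hard-coded are handled.
def rule_based(prompt):
    rules = {"hello": "hi there", "bye": "goodbye"}
    return rules.get(prompt, "unrecognized prompt")

# Predictive: a toy bigram model. It never learns *why* one word
# follows another; it only counts co-occurrences and imitates them.
def train_bigrams(text):
    words = text.split()
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    if word not in counts:
        return None  # no model at all for an unseen stimulus
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
```

Here `predict_next(model, "the")` returns "cat" simply because "the cat" appears more often than "the mat" in the corpus; scale that counting up enormously and you get something that looks fluent without any of the judgement described above.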
I guess I should have specified that I don't see AGI coming anytime soon, but I'm not in the industry so I have no idea.
That reminds me of artificial sentience, which I think is even less likely to ever be possible.
The idea is to have sentience somewhat similar to how a human experiences it. But how would we know we had it? We would create a program that correctly answers all the questions we design to identify sentience, or that has cognitive structures imitating the human ones we think allow for sentience. Essentially, we would be creating something that says "yes, I am sentient" in a million different ways. It could be sentient, or it could just be a language model that fools even us; there is no way of knowing. If it looks like a mind, talks like a mind, and seems to think like a mind, is it a mind?
Another thing: the human experience of sentience is connected to our existence as biological animals. We have no idea to what extent our physical existence shapes our conception of what sentience is. The brain we think with moves our limbs, experiences the senses, and subconsciously processes information with a heavy bias towards social communication, reproduction, and survival. That is what our brains evolved to do over millions of years. How are we supposed to recreate that from the outside? We have no clue how the brain works, we have not mapped the human neuron structure, and we might never be able to. Modern neuroscience has concluded that the brain reuses assets as much as possible to increase efficiency: each part of your brain is involved in many different tasks, in an insanely complex web of relationships. From a purely logical standpoint, it is plausible that recreating these exact relationships is mandatory in order to create what we think of as sentience.
I don't know anything about computers btw this is just bullshit