To see commercial adoption of this tech there needs to be a value proposition -- i.e. a reduction of work/money/etc.
Yeah, I feel like current AI doesn't have a concrete enough example of this. While it does have the potential to meet those value propositions, we haven't definitively seen it do those things at scale yet. I imagine a lot of companies also don't want to be the ones to take the initial leap. It makes more sense to watch other businesses try the tech and see how it affects them before you risk jumping into it.
If the parameter size cannot be reduced while maintaining performance, then hardware needs to change, which is on a much, much longer cycle than software.
This is true, though I feel like both will end up developing side by side. Smaller models are improving a lot, as seen with models like GPT-4o Mini or Llama 3.1 (the 8B variant). AI-related hardware development is also moving fairly fast, though that'll probably change if/when the hype bubble collapses.
Oh, and I also don't care about sentience or sapience. I need the machine to acquire a goal and stick to it.
It'd be interesting to see how the tech diverges to meet these two wildly different goals. Even if you don't want it, there are many pushing for sentience and/or sapience. I wonder if we could start seeing different models specialize in one of these options, seeing as they could be mutually exclusive. Regardless, there will also be tasks that call for a higher level of independence than most others require.
But that's probably just because GPT-3.5/4 has a 100K+ context window vs. the meager 8K window on a locally run 7B model. It's gonna run out; besides, even a 1T context window is still just a ticking time bomb anyway.
The context window can be very limiting, true. I think this is why long-term memory, like what ChatGPT offers, is crucial for an AI to tackle long, very complex tasks. The window will always be a limit no matter how much it grows, so we need to supplement it with external caching and the like.
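To make the external-caching idea concrete, here's a minimal sketch in Python of what a "long-term memory" outside the context window might look like. Everything here is hypothetical (the MemoryStore class, remember/recall/build_prompt names), and the crude word-overlap score is just a stand-in for real embedding search -- the point is that only the few recalled snippets ever enter the limited window.

```python
# Minimal sketch of long-term memory that supplements a limited context window.
# Past exchanges are stored outside the model; before each new prompt we pull
# back only the snippets that overlap most with the question.

class MemoryStore:
    def __init__(self):
        self.snippets = []          # everything the model has "seen" before

    def remember(self, text):
        self.snippets.append(text)

    def recall(self, query, k=3):
        # Crude relevance score: shared words with the query.
        # A real system would use embeddings / vector search here.
        q_words = set(query.lower().split())
        scored = sorted(
            self.snippets,
            key=lambda s: len(q_words & set(s.lower().split())),
            reverse=True,
        )
        return scored[:k]

def build_prompt(memory, question):
    # Only the recalled snippets go into the (limited) context window.
    recalled = "\n".join(memory.recall(question))
    return f"Relevant notes:\n{recalled}\n\nUser: {question}"

memory = MemoryStore()
memory.remember("The user is refactoring a Rust codebase.")
memory.remember("The user prefers short answers.")
print(build_prompt(memory, "How should I structure the Rust modules?"))
```

The design choice is the same one ChatGPT-style memory makes: the window stays small, and the "memory" lives in an external store that gets selectively re-injected.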
They had their ideas preceding the computational model they created. The code language they used was an extended Brainfuck with a von Neumann architecture, so they got to tailor the mechanics of the 'world' to one they probably knew could give rise to this, though I don't think this was a given in our world.
It's interesting to hear they used Brainfuck for that experiment. I mean, it makes sense, but it's funny seeing a meme esolang serving an actual purpose. Anyway, the tailored world is a problem, though I feel it may come with the territory. Experiments like this are awesome, but I think it's a Herculean task for them to try and capture even a sliver of the unpredictability and complexity of our actual world.
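For anyone wondering what "Brainfuck with a von Neumann architecture" roughly means in practice, here's a toy sketch -- my own simplified take, not the authors' actual instruction set -- of an interpreter where one mutable tape holds both the program and its data, so a running program can overwrite its own instructions:

```python
# Toy self-modifying Brainfuck-style interpreter: code and data share one tape.

def run(tape, max_steps=10_000):
    ip = 0                  # instruction pointer -- walks the tape
    dp = 0                  # data pointer -- addresses the *same* tape
    n = len(tape)
    for _ in range(max_steps):
        if ip >= n:
            break
        op = chr(tape[ip])
        if op == '>':
            dp = (dp + 1) % n
        elif op == '<':
            dp = (dp - 1) % n
        elif op == '+':
            tape[dp] = (tape[dp] + 1) % 256
        elif op == '-':
            tape[dp] = (tape[dp] - 1) % 256
        elif op == '[' and tape[dp] == 0:
            depth = 1
            while depth and ip + 1 < n:     # skip ahead to the matching ']'
                ip += 1
                depth += {ord('['): 1, ord(']'): -1}.get(tape[ip], 0)
        elif op == ']' and tape[dp] != 0:
            depth = 1
            while depth and ip > 0:         # jump back to the matching '['
                ip -= 1
                depth += {ord(']'): 1, ord('['): -1}.get(tape[ip], 0)
        # any other byte is a no-op, as in plain Brainfuck
        ip += 1
    return tape

# Program bytes and data bytes live in the same "soup"; note that the first
# few '+' ops mutate the program itself, since the data pointer starts on it.
soup = bytearray(b'++>+<[->+<]' + bytes(20))
print(run(soup))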
I still hold that there is a soul; some may think that's silly, but it's not really relevant. The obvious problem to me is that we are not even close to understanding ourselves even today, and for that reason we wouldn't be able to tell if we created something 'intelligent' in a way comparable to a human.
While I don't personally believe in souls, I agree the lack of understanding is a massive issue. I've always worried AI could never reach the goal of sentience no matter how close it gets, as we humans can't decide what qualifies since we keep moving the goalposts.
I don't really fear artificial intelligence; it's the nutjobs using it that I'm afraid of.
Sadly, like all technology, it holds a lot of potential for both good and evil. Like how computers control both machines made for healing and machines made for destruction, AI could either be a tool controlled by us or a tool used to control us.
Of course, what we have now is not "AI"; it's a glorified search engine that is being lobotomized as we speak because god forbid it finds patterns that are not politically correct.
I think calling it a glorified search engine is a pretty big understatement of what the current tech can do. While the concepts behind it are fairly simple, we've seen a lot of emergent properties arise from them. For example, Claude 3 was capable of noticing and pointing out when it was being tested (an article about this is here).
I'm not too concerned right now, to be honest. Due to the way AI "thinks", and how the people who "make" these models don't even know exactly how they work, any censorship is very obvious.
Yeah, AI and censorship are very hard to mix. Since AI doesn't really understand concepts like morality or insensitivity, the main solution we have is to "fix it in post". Of course, filters like this are what lead to the very obvious "As an AI model..." responses. Like you mentioned, their black-box nature also doesn't help.
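Just to illustrate what "fixing it in post" looks like in the most naive case, here's a purely hypothetical sketch -- not how any particular vendor actually moderates -- where the raw output is scanned against a blocklist and anything flagged gets swapped for a canned refusal:

```python
# Toy post-hoc output filter. BLOCKLIST entries are placeholder terms,
# not anything from a real moderation system.

BLOCKLIST = {"example_banned_topic", "another_banned_topic"}

def moderate(raw_response: str) -> str:
    lowered = raw_response.lower()
    if any(term in lowered for term in BLOCKLIST):
        # The canned refusal that makes the filtering so obvious to users.
        return "As an AI model, I can't help with that."
    return raw_response

print(moderate("Here is a normal answer."))
print(moderate("Something about example_banned_topic."))
```

The blunt swap is exactly why these filters stick out: the model's own "voice" disappears and a boilerplate refusal takes its place.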