How much steam does the current AI trend have?

SomaSpice

Sandwich Maker
Silver
Joined
Jul 26, 2021
Messages
1,087
Reaction score
5,370
Awards
263
Right now it seems like major progress in the field of AI happens on a weekly basis. With the release of GPT-4, big tech positioning itself to integrate massively with the technology (Google and Microsoft announced they will be adding language-model assistants to a lot of their apps), and the general gold rush that seems to be happening on an industry-wide level, I wonder, "How much gas does the current trend have?"

Are we at the beginning of the singularity? A major shift in user experience, like the iPhone in 2007? Or are we just witnessing hype for a still-immature technology?
 
Virtual Cafe Awards
Are we at the beginning of the singularity? A major shift in user experience, like the iPhone in 2007? Or are we just witnessing hype for a still-immature technology?
I'm kind of baffled that this hasn't gotten any responses yet; is Agora dying? To answer your question, though: I wouldn't say we're near a singularity yet, and I'm still skeptical of whether we will ever reach a true singularity where tech and AI truly manage to replace the utility of human beings. But I totally agree that we are witnessing something new and paradigm-shifting (please excuse the cliched terminology).

It feels like it was just a year or two ago (because it was) that people still viewed AI as something off on the distant horizon of the future. But AI has demonstrated its abilities, and there is no denying that it's far more impressive than many imagined it would be at this point, even from the perspective of just two years ago. The fact that Microsoft is planning to implement AI as part of Bing's search feature (which means that Google might, and probably will, follow along) means that the net-browsing experience as a whole will change. To be fair, search engines have been increasingly disappointing and these days just point out how hollow the net really is, hence the Dead Internet Theory, which I've started to see even mainstream outlets discuss.

I've been somewhat reluctant, but I've recently been working on a new documentary about this very topic. In many ways it will be a spiritual sequel to my Lain video, as I want to expand on a point I concluded about AI that I see few people discussing. What I find most unsettling about AI isn't artificial intelligence itself, since there are plenty of types of artificial intelligence that I think are perfectly harmless (NPCs and enemies in video games, for instance). It's AI connected to the internet specifically that is a recipe for some pretty spooky horrors. Because the internet itself, especially these days when nearly everybody is connected to it (as Lain emphasized), is really a sort of repository of the collective data output of humanity. What these models of AI are doing is feeding themselves on all that data and developing outputs from it. What this type of AI really is, is a synthetic collective unconscious.

All the outputs of AI art, for instance, are in essence dream images processed by a robot. It's a robot drawing from the well of collective humanity and spitting out results based on inputs. Just as the unconscious houses an ocean of data that we aren't conscious of, yet strives to communicate with us in our dreams and via certain meditative practices, the unconscious of humanity (as expressed on the web) is being organized by these AIs. That's what really strikes fear into me. What we're really dealing with aren't so much robots as robots with access to all the conscious and unconscious expressions of humanity available online. Inverted cyborgs. Not humans with mechanical add-ons, but machines with human consciousness as their add-on.
 
Last edited:
Virtual Cafe Awards

Yabba

Ex Fed
Joined
Nov 11, 2022
Messages
377
Reaction score
981
Awards
109
I wonder, "How much gas does the current trend have?"
Tons of it.

People LOVE this AI shit, except for AI art. And of course, everyone who hates AI art has done shit-all except complain on the internet.

Now the only hope of stopping AI in its tracks is if the general populace doesn't fall for it, like with the Metaverse. However, even that's temporary; the Metaverse could always come back, and the same goes for AI.
 
Virtual Cafe Awards

Taleisin

Lab-coat Illuminatus
Bronze
Joined
Nov 8, 2021
Messages
635
Reaction score
3,331
Awards
213
I don't know how long it will take, but AI will be another industrial revolution on a similar scale. I don't mean human-emulation though, that's much harder. AGI itself is right around the corner, maybe 5 years to basic AGI (depending on how you define it).

Artificial self-conscious agents will exist very soon after, I know this because even as a relatively inexperienced neuroscientist I already know what you'd need to do to produce one. From there I cannot predict what will happen.
 
Virtual Cafe Awards

Andy Kaufman

i know
Joined
Feb 19, 2022
Messages
1,181
Reaction score
4,795
Awards
209
Scaling up our current AI is very unlikely to result in AGI, so while these models might reach new avenues we didn't expect, I still doubt they themselves have the potential for AGI.
They might make research toward it more efficient and bring us closer to it by proxy, though.

But AGI or not, if the current trend continues, I'm sure these models will have a severe impact on basically all spheres. Medicine, art, science, technology - you name it. I'd say we revisit this thread in ~5 years and see how it's going.
 
Virtual Cafe Awards
Because the AIs are all centralized under corporate control, they will go to great lengths, even telling lies, to avoid saying anything subversive or politically incorrect. As a result, I think not just the scale but the effectiveness of censorship will increase by an order of magnitude. Who among us would be able to resist our own personalized AI waifus delicately insisting that there is no virology lab in Wuhan and the virus definitely came from a bat? MGS2 is real and it's happening now.
 
Virtual Cafe Awards

SomaSpice

Sandwich Maker
Silver
Joined
Jul 26, 2021
Messages
1,087
Reaction score
5,370
Awards
263
Hmm, I agree with the general consensus here that we're witnessing the birth of an important technology, but we haven't yet arrived at what would be AGI.
What we're really dealing with aren't so much robots as robots with access to all the conscious and unconscious expressions of humanity available online. Inverted cyborgs. Not humans with mechanical add-ons, but machines with human consciousness as their add-on.
This is a very interesting thought. It's like outfitting a machine with a human brain as a GPU or something XD. I wonder if we'll eventually find a real use for wetware, like maybe using it to power machine learning.

People LOVE this AI shit, except for AI art. And of course, everyone who hates AI art has done shit-all except complain on the internet.
Yeah, I agree with this, Mr. Yabba(DabaDoo). While I understand the gripes with artists having their work used without permission, as it should legally be a copyright violation, I don't quite agree with the Luddite sentiment I see against AI art. I mean, sure, people are bound to lose their livelihoods, which really, really sucks. But history has shown us that you can't stop the march of progress. What I think is weird is that people are acting like it's going to be the end of high art. That doesn't make sense to me, because it's like trying to replace athletes with robots. There's no meaning in doing so, because the real point is the spectacle of human endeavour. I do think that the art industry will shrink a lot, as corporate art, entertainment, and menial work are outsourced to AI, but I believe that capital-"A" Art will remain a human domain by its very nature.

I'd say we revisit this thread in ~5 years and see how it's going.
That's a horrifying prospect on many levels :LaughHard:

MGS2 is real and it's happening now.

View: https://youtu.be/-gGLvg0n-uY
 
Virtual Cafe Awards

Collision

Green Tea Ice Cream
Joined
Jun 5, 2022
Messages
373
Reaction score
1,407
Awards
126
I think the thing to keep in mind is that almost everyone who is talking about AI (or any technology topic) is a total know-nothing. When approaching these topics, I find it's important to be extremely careful, because even alleged professionals and experts are often know-nothings. What we have, as is the case with almost any technical field, is a relatively small number of producers who might have some high-level understanding of their products and a relatively large number of consumers who use those products with little to no knowledge of their operation. Most of the people who are big AI boosters and evangelists are going to be in the latter camp. It isn't new or unique. Almost every high-tech trend is like this.
Are we at the beginning of the singularity?
In my opinion, we're millions of years too late to experience "the singularity". It already happened.
A major shift in user experience, like the iPhone in 2007?
It's certainly possible that there will be some changes as a result of current AI development. Current technology is essentially just a cantrip but cantrips can be useful. I'm sure a lot of people are dreading it, but a world where email spam and Corporate Memphis clip art are created by machines rather than people is a better world. The fewer people doing bullshit jobs the better. Arguably, that's much more significant than the iPhone.
Or are we just witnessing hype for a still-immature technology?
Absolutely! Technology trends always play out this way. Most people lack even a rudimentary understanding of the technology they're getting hyped over. Almost anyone who is getting hyped about GPT-4 (or, more likely, chatbots based on GPT-4) is imagining that it's a type of science-fiction character. It's not a probabilistic model of language; it's HAL 9000, Data, or Cortana. When the illusion breaks for enough of these people, the bubble will burst. The current technology simply isn't congruent with what people would like to imagine it is. I wonder if people acted the same way about transistors in the 1960s. I wouldn't be surprised.

In my mind, the current trend seems to suggest that we wouldn't recognize AGI even if it came and annihilated us with plasma weapons. Our definition of intelligence is far too loose and our ability to measure it intuitively is extremely poor.
 
Virtual Cafe Awards

Orlando Smooth

Well-Known Traveler
Joined
Aug 12, 2019
Messages
472
Reaction score
1,822
Awards
151
Are we at the beginning of the singularity? A major shift in user experience, like the iPhone in 2007? Or are we just witnessing hype for a still-immature technology?
It's a little bit of both. A new tool has been made, and it is truly very useful, but we are very far from AGI or anything similar. Think about this: did Merriam-Webster disappear after the invention of spellcheck? Obviously not. What did change is that there are fewer people who are required to work on language preservation as such, and FAR fewer people who really need to own a dictionary. A potential use case to illustrate the point would be telling the language model of choice, in common English (or whatever language), to write you a business contract or legal document following certain parameters. In such a situation, you're now capable of writing the document yourself and paying a lawyer for 30 minutes to make sure it's all right instead of paying a lawyer for 8 hours to explain exactly what you want out of the document and have them write it. You win because it's more convenient and you're paying less, and the lawyers of the world win because they can dedicate more time towards endeavors that are cognitively demanding (and rewarding).

What people don't seem to understand is that these models are still very far from intelligence: they're still just taking input and generating output. An imperfect description is to say that it's a simulation of simulated intelligence. ChatGPT is not "thinking" when you ask a question or give it a prompt; it's running a huge number of statistical calculations to determine what you will probably think is a good response, based on the dataset it's been trained on. It's trained to impress humans, not to actually understand the world. Even if a journalist were trying in good faith to learn about the technology, it's incredibly difficult and requires a lot of background knowledge to actually understand modern implementations of language models. Add in people with a financial interest in the commercial success of these models hyping them up, and you have a hype train surrounded by confusion and misguided analyses.
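The "huge number of statistical calculations" point can be made concrete with a toy sketch. This is nothing like a real transformer, just a bigram counter over a made-up corpus (the corpus and function name are invented for illustration), but it shows the core mechanic: predicting the most probable next word from training data rather than "thinking".

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "the dataset it's been trained on".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # → ('cat', 0.5): "cat" follows "the" most often here
```

A real model does this over tens of thousands of tokens with learned weights instead of raw counts, but the output is still a probability distribution over next tokens, not a thought.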


I'm kind of baffled that this hasn't gotten any responses yet, is Agora dying?
Yes. There's been a large influx of users who want to shitpost and argue about culture-war stuff, and the forum veterans who would reliably bring quality to the replies are spending more time between logins or have left altogether.
 
Virtual Cafe Awards

UCD

Active Traveler
Joined
Jan 14, 2022
Messages
151
Reaction score
661
Awards
67
Purely from what has already been shown to be possible, it is obvious it is going to have a very big impact. If you work at a desk with a computer, there is a very high chance that in the near future you will be easily replaced by AI, or will work using it.

In my opinion, AGI is impossible, but an AI will come around soon that is designed to tick the boxes AGI is supposed to tick, and it will fool a lot of people.
What this type of AI really is, is a synthetic collective unconscious.
That is exactly what I have been thinking.
 
Virtual Cafe Awards

RisingThumb

Imaginary manifestation of fun
Joined
Sep 9, 2021
Messages
713
Reaction score
1,768
Awards
173
Website
risingthumb.xyz
Right now it seems like major progress in the field of AI happens on a weekly basis. With the release of GPT-4, big tech positioning itself to integrate massively with the technology (Google and Microsoft announced they will be adding language-model assistants to a lot of their apps), and the general gold rush that seems to be happening on an industry-wide level, I wonder, "How much gas does the current trend have?"

Are we at the beginning of the singularity? A major shift in user experience, like the iPhone in 2007? Or are we just witnessing hype for a still-immature technology?
It has a lot of gas still. It's yet to be fully incorporated across industries, but we already know artists, writers of all kinds, journalists, musicians, and a lot of creative fields will be affected. Prussian-style education will be affected too, since it only tests a student's result on an exam, and AI can ace a lot of exam types that supposedly require critical thinking. One of the biggest effects is that it'll allow for fully automated call centers; on that alone, Microsoft will make back its money. Additionally, consider voice assistants: as crap as they are currently, they'll be made a lot more useful.

In my opinion, we're at a similar level of revolution as the invention of the GUI (which itself took a while to go mainstream, following Xerox's colossal fuckup of sleeping on the tech and showing it to other technologists, then being copied by Amiga, Macs, and later Microsoft). It might also be fair to say it's a similar level of revolution as the use of the GPU in 3D games (consider how much changed in just ten years, 1995-2005, and how graphical quality skyrocketed).

You are witnessing hype for an immature technology, just as people were hyped for the iPhone in 2007. I believe it'll live up to some of the hype, but not all of it. As for your point on whether it's the beginning of the singularity... probably not. There's a long way to go for that: for it to develop and evolve itself, to have military applications, and to be trained on or act on confidential information. On the confidential-information point alone, various OpenAI GPT technologies will struggle to see adoption, as they're proprietary and can't be audited.
 
Virtual Cafe Awards

Andy Kaufman

i know
Joined
Feb 19, 2022
Messages
1,181
Reaction score
4,795
Awards
209
This feels a lot like the dawn of the internet, when everyone knew this was a technology that would change everything, but nobody could figure out what to do with it or how to effectively utilize it, and you had people putting up websites that just said "i liek milk" or whatever. I suppose I understand how a search-engine stand-in became the first use case, but that doesn't seem to be a good fit. AI seems to be able to guess the correct output, based on its inputs, within a certain degree of accuracy, but it struggles in situations where something is either right or wrong. I would not trust an AI to do taxes or file incident reports with the FDA or the EPA, nor would I trust it to answer a question correctly. But it would, perhaps, be good at process control in a chemical plant, or any other scenario where being within a certain band of accuracy is acceptable, rather than simply being right or wrong.
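That "band of accuracy" contrast can be sketched with two hypothetical helper functions (the names and numbers here are invented for illustration); it's roughly the difference between a process-control check and a tax return:

```python
def acceptable(measured: float, setpoint: float, tolerance: float) -> bool:
    """Process-control style: any reading inside the tolerance band is fine."""
    return abs(measured - setpoint) <= tolerance

def exact(reported: float, true_value: float) -> bool:
    """Tax-return style: anything but the exact figure is wrong."""
    return reported == true_value

print(acceptable(99.2, 100.0, 1.5))  # True: 0.8 off, inside the band
print(exact(99.2, 100.0))            # False: close is not good enough
```

A model that is right "within a certain degree of accuracy" passes the first kind of check most of the time and fails the second kind often enough to matter.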
 
Virtual Cafe Awards

dorgon

not the sharpest tool in the shed
Bronze
Joined
Feb 28, 2022
Messages
357
Reaction score
3,255
Awards
194
Website
dorgon.neocities.org
Funny enough, with all of this AI hype, I am actually doing a paper with classmates about trying to combine AI with booking doctors' appointments. But anyway, with AI in general, I guess it is the next big thing right now, and unlike NFTs, it looks like it's here to stay and actually has potential uses. Nonetheless, it is still very much in development, and there is still a lot that can be done to "improve" AI. Like most technologies, especially the internet, I am a little cautious when it comes to potential uses. I've already seen Stable Diffusion used on some imageboards to depict some weird shit (not gonna get into it). Also, I'm pretty sure AI already has a good hand in self-running accounts and content generation, which further supports the Dead Internet Theory. Because of this, I fear that while ChatGPT may make your essay a little easier to write, AI behind the curtain makes the internet more fake. Can't wait until there are massive YouTube channels or Instagram accounts being run solely by AI.
 
Virtual Cafe Awards

Deleted member 4436

So at the moment, ChatGPT still tends to write with a certain rigidness that makes it obvious it's just a bot. And ultimately, it doesn't matter whether AGI is reached or not, only whether the language model is able to perfectly replicate humans (now, I may be talking out of my ass here, but I think those are supposed to be different things). A fundamental stipulation of the Turing-test argument was the idea of an nth+1 question: even if an AI could appear human by answering 100 questions or so about a topic, there would always be a question 101 that could be asked that would reveal it wasn't human.

This is actually where the real threat of AI stands, in terms of human interaction. I think most people would agree that the internet has significantly changed how people interact, which has obviously changed from how people interacted online 20 years ago, and it has had effects in the real world as well. I think most people would also agree that it's been a change for the worse, or at least has had as many downsides as upsides. Twitter being widespread is the main example of this. Having to split anything with too many words into multiple posts decreases how many people will want to read it, and a lot of replies are usually just reaction images. Essentially, I think people have actually gotten a bit dumber in terms of their ability to compose long-form reasoning. I can only see AI advancing and making things even worse. I don't think as many people would actually be able to answer, or even ask, question 101 nowadays.

Basically, AI becoming more human-like isn't the problem, it's that we have become too robot-like in how we communicate.

I'm kind of baffled that this hasn't gotten any responses yet, is Agora dying?
I mean, we haven't really had anyone go on a full rant about how "Agora Road is dead!" yet, but just from reading the room, quite a few of the regulars here (myself included) have expressed something along those lines.
 
Last edited by a moderator:

Deleted member 4436

Essentially, I think people have actually gotten a bit dumber in terms of their ability to compose long-form reasoning. I can only see AI advancing and making things even worse. I don't think that as many people would actually be able to answer or even ask question 101 nowadays.
Thinking about it more, this may pose more problems for language than for any other area. Take art, for example: a lot of people who do art for a living are probably going to be out of a living in the near future. That being said, there will still be people drawing, and I doubt that anyone seriously passionate about art is going to be studying whatever AI is making.

What about language, though? Just being exposed to an environment where people are reading AI posts will affect people's linguistic skills. For some people it will probably be more pronounced than for others. Some people may stop reading something as soon as they suspect it's written by an AI. But how long would that judgement take to make? You read the first three words of a sentence and think to yourself, "this is AIspeak"? What if you're wrong? Point is, in the worst-case scenario here, like, literally 1984 levels, a type of newspeak will be adopted by a lot of people, brought about not by a government but by human nature. After all, if you're the average of the five people you're closest to, then for people whose primary outlet of expression is the internet, those five may not be "people".

I could be doomposting a bit here, but even if they don't become significantly more advanced, chatbots could thin the gap by bringing us down to their level instead.
 
So at the moment, ChatGPT still tends to write with a certain rigidness that makes it obvious that it's just a bot.

This is why I think the final result will not be a single master AI that does everything for everyone, but rather everyone will have a personally tailored AI that is fully attuned to that specific user's thoughts and behaviors and both presents itself as the Ideal Companion to its user as well as functions as an external facsimile of the user, a proxy through which all communications with the outside world are conducted. I saw a meme about someone asking ChatGPT to take his angry ranting at his coworker and construct it into a business appropriate email. In time, that coworker will have his own counter-AI that will deconstruct that email back down into its original intent. And this is how we will all communicate.
 
Virtual Cafe Awards

Deleted member 4436

This is why I think the final result will not be a single master AI that does everything for everyone, but rather everyone will have a personally tailored AI that is fully attuned to that specific user's thoughts and behaviors and both presents itself as the Ideal Companion to its user as well as functions as an external facsimile of the user, a proxy through which all communications with the outside world are conducted. I saw a meme about someone asking ChatGPT to take his angry ranting at his coworker and construct it into a business appropriate email. In time, that coworker will have his own counter-AI that will deconstruct that email back down into its original intent. And this is how we will all communicate.
no_way.png
 

UCD

Active Traveler
Joined
Jan 14, 2022
Messages
151
Reaction score
661
Awards
67
Explain why you think this.
Artificial general intelligence is the name we give to the idea that a computer program can synthesize information in an intelligent manner for any application. In other words, it needs to have predictive models for every stimulus and be able to adapt to new types of stimuli by analyzing its knowledge and applying predictive logic freely. That last part is the key: how do you teach a computer to make decisions without telling it what to do? Computers rely on "if x, then y"; they need recognizable prompts. How exactly would you go about doing this without it just being a predictive model? To be an AGI, the computer would need to know why x, and use that information in tandem with its physical knowledge in order to make a judgement. That is not possible now, and might not be possible ever.

AIs right now still use basic "if x, then y" structuring, just very, very complicated. Modern AIs are predictive in nature and are based on human knowledge, which means they cannot reason freely; they just imitate what they predict to be the optimal answer, using identified word structures in the prompt. There is no "thinking." Because of this, they will always be limited and untrustworthy, because they cannot create their own models to learn from when given new information not already easily defined by humans. When humans learn, we build a model of the subject unconsciously, using symbols to remember its key parts. We use all these models together in everyday life, with the relationships between them constantly shifting. AI lacks this fluidity, and as AIs become more complicated, they become more rigid. AIs don't know how to shift their structures to rule out inconsistencies. They don't know the real world; all they know is what we tell them the real world is. All of these problems might be solvable, or not. It seems simple, but it's not.

An AGI would need to be able to answer every question a human could, drive a car, recognize colours and sounds and time and so many other things. That is just so, so far away right now that it might as well be impossible.
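The "if x, then y" structuring described above can be caricatured in a few lines. This is a deliberately crude sketch (the rules and replies are invented for illustration), but it shows why a purely rule-based program needs recognizable prompts and falls over on anything its programmer didn't anticipate:

```python
def rule_based_reply(prompt: str) -> str:
    # Classic "if x, then y": only recognizable prompts get a real answer.
    rules = {
        "hello": "hi there",
        "how are you": "fine, thanks",
    }
    # Anything not anticipated by the programmer falls through to a default.
    return rules.get(prompt.strip().lower(), "I don't understand")

print(rule_based_reply("Hello"))         # hi there
print(rule_based_reply("why is x so?"))  # I don't understand
```

A modern language model replaces the hand-written table with learned statistics over vast text, so it degrades gracefully on unseen prompts instead of hitting a wall, but as the post argues, that's still prediction, not knowing "why x".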

I guess I should have specified that I don't see AGI coming anytime soon, but I'm not in the industry, so I have no idea.

That reminds me of artificial sentience, which I think is even more impossible.

The idea is to have sentience somewhat similar to how a human experiences it. But how would we know? We would create a program that correctly answers all the questions we design to identify sentience, or that has cognitive structures imitating the human ones we think allow for sentience. Essentially, we are creating something that will say "yes, I am sentient" in a million different ways. It could be sentient, or it could just be a language model that fools even us; there is no way of knowing. If it looks like a mind, talks like a mind, and seems to think like a mind, is it a mind?

Another thing: the human experience of sentience is connected to our existence as biological animals. We have no idea to what extent our physical existence shapes our conception of what sentience is. The brain we think with moves our limbs, experiences the senses, and subconsciously processes information with a heavy bias towards social communication, reproduction, and survival. This is what our brain evolved to do over millions of years. How are we supposed to recreate it from the outside? We have no clue how the brain works; we have not mapped the human neuron structure, and we might never be able to. Modern neuroscience has come to the conclusion that the brain reuses assets as much as possible to increase efficiency. Each part of your brain is involved in many different tasks, in an insanely complex web of relationships. From a purely logical standpoint, it is probable that recreating these exact relationships is mandatory in order to create what we think of as sentience.

I don't know anything about computers btw this is just bullshit
 
Virtual Cafe Awards