I was thinking: Blockchains are not the real web3, but all this AI IS.

Polyg00n

Internet Refugee
Joined
Mar 4, 2023
Messages
6
Reaction score
30
Awards
3
Face it, the crypto crap sucks, the JPEGs of monkeys are useless! This Non-Fungible idea of a web3 will NEVER HAPPEN. But what is happening is the massive, fast, and effective rise of AI models for day-to-day human use online, such as DALL-E or Stable Diffusion for art, ChatGPT for virtual non-existent friends, and Google is currently working on MusicLM for making music with AI.

I feel that web 3.0 is the internet that integrates with Artificial Intelligence, not Crypto, not Blockchains.
But I feel like that will come with disadvantages for us, just as bad, if not worse, than a crypto-based web3, because AI-generated content is a danger to human creativity, and not just for artists, but writers, musicians, and even just day-to-day friends. But I do get why people may want AI-generated art: the few artists doing commissions online I've seen and interacted with are absolute jerks, so I would assume a lot of (sane) people would rather go to an AI, because at least the AI doesn't get into controversies, share potentially personal art without your permission, or act rude; as well, it's usually free.
And I get why people want to use ChatGPT for friends or even partners, because the AI can be who and what you want it to be; it can be as nice or as toxic as YOU, the end user, want.
 

Yabba

Ex Fed
Joined
Nov 11, 2022
Messages
347
Reaction score
900
Awards
105
Face it, the crypto crap sucks, the JPEGs of monkeys are useless! This Non-Fungible idea of a web3 will NEVER HAPPEN. But what is happening is the massive, fast, and effective rise of AI models for day-to-day human use online, such as DALL-E or Stable Diffusion for art, ChatGPT for virtual non-existent friends, and Google is currently working on MusicLM for making music with AI.

I feel that web 3.0 is the internet that integrates with Artificial Intelligence, not Crypto, not Blockchains.
But I feel like that will come with disadvantages for us, just as bad, if not worse, than a crypto-based web3, because AI-generated content is a danger to human creativity, and not just for artists, but writers, musicians, and even just day-to-day friends. But I do get why people may want AI-generated art: the few artists doing commissions online I've seen and interacted with are absolute jerks, so I would assume a lot of (sane) people would rather go to an AI, because at least the AI doesn't get into controversies, share potentially personal art without your permission, or act rude; as well, it's usually free.
And I get why people want to use ChatGPT for friends or even partners, because the AI can be who and what you want it to be; it can be as nice or as toxic as YOU, the end user, want.
Here's my (new) opinion on AI art.

While it may or may not be theft, AI art is not art in that it is not human expression.
 
Virtual Cafe Awards

gsyme

Traveler
Joined
Feb 24, 2023
Messages
33
Reaction score
107
Awards
28
OP's main point, imo, is right: AI is the next "web X.0" change, much more so than crypto.

The web 1.0 -> 2.0 transition is often misunderstood, but it ultimately boils down to who creates the content.

Across both, money is made by appealing to advertisers who desire impressions.

Impressions are made by producing addictive content that draws in viewers.

Web 1.0 -- the platform owner creates the content.

Web 2.0 -- the users of the platform create the content. The platform owner simply curates it to make it maximally addictive.

Web 3.0 -- AI makes the content. We have no succinct model for how this plays out or who profits most. Likely the AI wranglers themselves, ultimately, but I think this is going to be an interesting decade, watching how the tech company scene evolves and who comes out on top.

Blockchain nonsense doesn't shift who the content creator is, and thus doesn't make a direct analogy to what happened between web 1.0 and 2.0.
 

jaedaen

Stay a while, and listen.
Joined
Aug 16, 2022
Messages
129
Reaction score
310
Awards
76
Because AI-generated content is a danger to human creativity, and not just for artists, but writers, musicians, and even just day-to-day friends.
I suppose this is true, maybe relative to the current standards. It's not really unprecedented in human history though, as I see it.

I read an article recently whose source I can't quite remember that compared the AI-vs-human-artists situation to that of human musicians in the 1800s, when recording became technologically possible. It went from a world where live musicians were quite a bit more prized, because in order to have music you had to have people playing it right in front of you, to a world where you could just buy a record (or I guess a wax cylinder back in the day, lol). Even today, people still go to concerts, because there's a social dynamic present, shared amongst all participants there, that you don't get from just jamming some cool shit on your computer at home. I have no doubt that average human artists will lose value in this economy, in an age where it's not that prized to begin with. For that, I do feel sorry for those that pursue this as a job. At the same time, I do think that the higher-tier human artists will have a place in the modern world. Still, will the motivation to really hone your skills to that level be there when an AI can churn out amazing shit in seconds? I imagine the drive to get there will be a lot more rare. I'm sure in 1750, even if you were a pretty middling musician, you were still quite a bit more valued than you are now. I imagine there are far fewer 'middling musicians' today than there were then (per capita); still, you have a lot of top-level musicians today, and even plenty of middling ones.

Of course, if you live with this being the standardized norm (I guess we're talking about the next generation here, people that are being born or very small children now), you will really not feel the loss from the older generations. Is it really a tragedy then? My mom remembers countless freedoms from her youth that I'll never understand, and I could say the same for those in their 20s now. It's the old 'if a tree falls in a forest and no-one is around to hear it, does it make a sound?' question.

Regardless, in some respects, as sad as it is, there's really no turning back the clock. Pandora's box has been opened, and really we're still in the infancy of this new tech era. Still, despite my post, I really have no idea what the future holds once this AI shit matures even more. I'm gonna go get the popcorn and watch the movie though, I sure as hell am pretty curious.
 

LunarTrace

Chronic Lurker
Joined
May 22, 2022
Messages
8
Reaction score
30
Awards
6
Regardless, in some respects, as sad as it is, there's really no turning back the clock. Pandora's box has been opened, and really we're still in the infancy of this new tech era.
An opinion I hear often is that, in the very near future, access to these AI tools is going to be heavily restricted for the average person; and I suppose I agree to an extent. After all, there have already been cases of face-swapping non-consenting people into porn, or using Stable Diffusion to create CP. All it takes is one ambitious-enough lawsuit getting far enough in court to totally change the trajectory of the Internet.

It's anyone's bet though. All I know is that the last people I would want deciding whether the Internet becomes a crypto-hellscape or an AI-hellscape are the US Government.
 

alCannium27

Active Traveler
Joined
Feb 15, 2023
Messages
164
Reaction score
269
Awards
55
The way I see this explosion of generative AI art (including imagery, sound, and text) is as follows:

First, it's not at all surprising to me that these products are being pushed out in the blink of an eye atm -- deep learning models, based on neural networks, have existed for a very long time, but the infrastructure to support any efficient use had been lacking. To expand: the neural network is, to my latest knowledge, the most accurate ML model at scale, but it's also a very cost-prohibitive one. It requires large numbers of servers pooling their computational resources together, in numbers impractical to host in a single location (energy consumption, heat, etc. all prove difficult for one locality). Cloud computing resolves that; now that the technology available since web 2.0 has matured (see AWS, Azure, etc.), it allows mass industrial adoption of neural-network-based AIs.

Neural networks, as one can imagine, depend on massive clusters of servers to process input data and compute learning results. In a roughly similar process, input data is processed in one neuron, and the result (like a signal) is passed to the next neurons in the chain to be further refined. Thus the more neurons, and the more layers these neurons are arranged in to refine the data, the better the results. That means more servers produce better results, and thus the richest player in town has the ace in the business. We see this in crypto -- when computational power = better profit, we see resources conglomerating into syndicates. Take OpenAI for example: the aforementioned ChatGPT and DALL-E are both based on this tech.
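
The layer-by-layer "refining" described above can be sketched as a tiny feed-forward pass (toy code with random, untrained weights; training those weights is what demands the server clusters):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # simple non-linearity applied after each layer
    return np.maximum(0.0, x)

# Three layers of weights: the "signal" is refined 4 -> 8 -> 8 -> 2
layers = [rng.normal(size=(4, 8)),
          rng.normal(size=(8, 8)),
          rng.normal(size=(8, 2))]

signal = rng.normal(size=(1, 4))   # one input example
for W in layers:                   # each layer refines the previous signal
    signal = relu(signal @ W)

print(signal.shape)                # (1, 2)
```

Each extra layer or neuron adds more matrix multiplication, which is exactly why bigger models need proportionally more compute.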

OpenAI is, according to Microsoft (Bing):
OpenAI is an American artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership. OpenAI conducts AI research to promote and develop friendly AI in a way that benefits all humanity. The organization was founded in San Francisco in 2015 by Sam Altman, Reid Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Peter Thiel and others, who collectively pledged US$1 billion. Musk resigned from the board in 2018 but remained a donor. Microsoft provided OpenAI LP a $1 billion investment in 2019 and a second multi-year investment in January 2023, reported to be $10 billion.
What we are seeing now is massive corporations competing to push out the most ground-breaking and, therefore, most attractive deep-learning product, as a result of the market having accumulated the cloud infrastructure required to do so.

The final piece of the puzzle is open-sourcing -- neural networks work by learning from data inputs with corresponding responses. Think about teaching a toddler to say any word. It's a relatively simple process given time -- the toddler is constantly bombarded with phrases spoken around them every day, from its parents, its relatives, friends of the family, passers-by, and even smartphones/computers/televisions/radios, etc. They learn the sounds passively most of the time, connecting the previous phrase to the next and trying to figure out meaning; there's visual data accompanying the sounds and vice versa, and the parents can provide timely feedback with rewards or punishments. The neural network is made to imitate that process, so there needs to be a way to source a massive amount of data as well as its corresponding meanings.

The other day, I used a (at the time still free-to-try) AI image generator to give me pictures of straight bananas (don't wanna get into Midjourney rn due to relative cost). This model is based on an adversarial network, with one AI discriminating the generator's results -- that is, the discriminator learns from a preset of image-and-text data to tell what is real and what is generated from visual clues, and then it is fed inputs from the text-to-image generator to judge. The text-to-image generator adjusts its parameters to learn whether its images are "correct" or not (the goal is to fool the discriminator), and the discriminator's goal is to tell the image generator's results apart from real images. But the problem is, you still need real data to further refine the discriminator. So when I chose not to regenerate the images of straight bananas, I told the discriminator that these straight bananas looked like good straight bananas to me.
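
That keep-or-regenerate feedback can be sketched as the discriminator half of such a setup. This is a toy stand-in, not any real product's code: the "images" are made-up 3-number feature vectors, and a kept image simply supplies a real/positive label for a small logistic classifier.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical features: "real / kept" samples cluster around +1,
# generated ones around -1. Label 1 = real (or kept by the user).
real = rng.normal(+1.0, 0.5, size=(200, 3))
fake = rng.normal(-1.0, 0.5, size=(200, 3))
X = np.vstack([real, fake])
y = np.concatenate([np.ones(200), np.zeros(200)])

w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(200):  # plain gradient descent on the log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted "is real" probability
    grad = p - y
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
```

Every user choice ("keep" vs. "regenerate") is one more labeled row for exactly this kind of update, which is why the free tools are happy to let you click away.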

With OpenAI offering public APIs, it is pooling developers' data from across the globe; everyone is trying to jump on the generative-AI train, and the feedback loop keeps going until the OpenAI project reaches its goal of creating results indistinguishable from man-made sources. IMO this is still web 2.0 -- the cloud-based business model. The AI is certainly transforming the marketplace, but that's not the point of AIs. The web simply made possible the acquisition of data in an amount and quality they could not get earlier.

The end goal of generative AI is to remove labour from creation, of this I've no doubt. You can generate a movie from nothing more than a script, and a script from a sentence, a sentence from a few words -- all this requires is assembling different models into one, one result fed to the next: easily accomplishable, streamlinable, automatable. They have the tools to make it happen, and with the web, they are sitting there collecting the missing pieces they need to accomplish it.
 

stonehead

Active Traveler
Joined
Oct 23, 2022
Messages
177
Reaction score
645
Awards
69
Website
argusarts.com
OP's main point, imo, is right: AI is the next "web X.0" change, much more so than crypto.

The web 1.0 -> 2.0 transition is often misunderstood, but it ultimately boils down to who creates the content.

Across both, money is made by appealing to advertisers who desire impressions.

Impressions are made by producing addictive content that draws in viewers.

Web 1.0 -- the platform owner creates the content.

Web 2.0 -- the users of the platform create the content. The platform owner simply curates it to make it maximally addictive.

Web 3.0 -- AI makes the content. We have no succinct model for how this plays out or who profits most. Likely the AI wranglers themselves, ultimately, but I think this is going to be an interesting decade, watching how the tech company scene evolves and who comes out on top.

Blockchain nonsense doesn't shift who the content creator is, and thus doesn't make a direct analogy to what happened between web 1.0 and 2.0.
I'll admit I wasn't there during web 1.0, so I could be wrong, but I thought web 1.0 was about a paradigm of hypertext documents, and web 2.0 was all about adding interactivity and functionality to that, like adding API backends and JavaScript and stuff. Wikipedia is the best current example I can think of for what web 1.0 was like. Something like Netflix wouldn't be possible without web 2.0, despite the fact that the users don't create any content.

Users producing the content isn't possible in an internet of essentially text documents, so it's still a decent way to understand the transition, but I think the technological reason behind the change was pretty important too.
An opinion I hear often is that, in the very near future, access to these AI tools is going to be heavily restricted for the average person; and I suppose I agree to an extent. After all, there have already been cases of face-swapping non-consenting people into porn, or using Stable Diffusion to create CP. All it takes is one ambitious-enough lawsuit getting far enough in court to totally change the trajectory of the Internet.

It's anyone's bet though. All I know is that the last people I would want deciding whether the Internet becomes a crypto-hellscape or an AI-hellscape are the US Government.
I wonder if the average person would have generated anything with AI anyway. I mean, right now most people don't; they just look at some pictures or funny text that someone else generated. I would expect that the average person of the future won't make AI-generated content (if you can even say anyone makes AI content); instead they'll just consume AI content as it drowns out human-generated content. The average web 2.0 user wasn't generating content either, just reading others' blogs and watching their videos. Unless the lawsuit could somehow stop people overseas from generating waves and waves of content, I'm not sure how much it would actually change the direction we're heading.
 

gsyme

Traveler
Joined
Feb 24, 2023
Messages
33
Reaction score
107
Awards
28
I'll admit I wasn't there during web 1.0, so I could be wrong, but I thought web 1.0 was about a paradigm of hypertext documents, and web 2.0 was all about adding interactivity and functionality to that, like adding API backends and JavaScript and stuff. Wikipedia is the best current example I can think of for what web 1.0 was like. Something like Netflix wouldn't be possible without web 2.0, despite the fact that the users don't create any content.

Users producing the content isn't possible in an internet of essentially text documents, so it's still a decent way to understand the transition, but I think the technological reason behind the change was pretty important too.

This sort of opinion is sorta what I meant when I mentioned that web 2.0 is misunderstood.

Server-side stacks were critical for the development of the web 2.0 business model, but we had those technologies prior to web 2.0 becoming the big thing. ASP and Perl CGI were things from the mid-90s, and JS was introduced in Netscape Navigator 2 in the mid-90s as well. And let's not forget Flash! Brain bender: Amazon was founded in 1994, so we even had e-commerce of a sort fairly early (which indeed is sorta what drove the advertising boom to begin with -- people either made money selling their dot-com-era product, or they made money advertising someone else's product). Yahoo was founded the same year, and search engines likewise rely on server-side tech to operate.

So we had server-side tech stacks back then. We had JS. We had Flash, even! We lacked high-speed connections until the late 90s, so no, a video service like modern Netflix wouldn't have made sense, but there was certainly dynamic content and even some user-generated content on web 1.0.

So again, it really boils down to the business model surrounding content production: are you driving impressions with staff-produced content or with user-produced content?

Let's compare, say, a game fansite like the old Nintendoland Zelda: The Grand Adventures (you can't even get this shit on Wayback due to some shenanigans with how Nintendoland was coded) to, say, MySpace.

Zelda TGA's main draw was the fact that it had staff-collected and curated information about Zelda. This stuff was definitely stored and presented using a dynamic back-end (iirc, they were Perl CGI based), and they had a "Message Board" (read: forum), but the main draw of the site was the staff-produced and curated collection of stuff about Zelda. This is a 1.0 site, even though it has dynamic elements, since it primarily derives its impressions from staff-produced content. We could say the same for most game fansites as well, e.g. Metroid Database, Nintendoland's The Mushroom Kingdom, etc. They were owner-curated content hoards that were organized using server-side tech and had watering-hole forums for people who were interested in the site's main attraction.

MySpace, on the other hand, had an entirely different business model: the primary draw was the things other users were posting on the site. It's the quintessential 2.0 site, since it almost solely derives impressions from user-created content -- you didn't go to MySpace to read Tom's blog, but rather to read content written by the other users.

So yeah, it's the business model -- the tech stacks have pretty much always been there, as has some modicum of user-generated content, so the revolution is whether you primarily rely on user content to drive impressions or not. It totally changes the game for how the site is run.
 
Because AI-generated content is a danger to human creativity, and not just for artists, but writers, musicians, and even just day-to-day friends.
On the point of AI art and soul, I think there is a fundamental difference between human art and machine art, just like how I believe there is a fundamental difference between AI consciousness and human consciousness. My argument is basically that humans possess souls and machines are incapable of possessing souls. While they can imitate soul, consciousness, and emotion very well, perfectly even, they are devoid of these in reality. To an atheist or a materialist there is no difference between a human and a sufficiently advanced machine, because their viewpoint leaves no room for the immaterial. From this viewpoint then, yes, a machine's art and consciousness would be equal to a human's, because humans would be no different from machines or "bio-automata".

But because I believe that souls do in fact exist, and that we cannot "create" them, machine art and creativity are always subordinate to human creativity. It can look really good; it can even be the best of the best, an imitation that surpasses even the most skilled human artist to ever exist; and it will still be an imitation -- an imitation facilitated by other humans through their scientific creativity and ingenuity.
 

Jared

יָרֶד
Joined
Oct 16, 2021
Messages
172
Reaction score
581
Awards
99
Website
xertech.neocities.org
Not gonna lie, this AI shit is lamer than the movies made it out to be. I was hoping there would be, like, revolutions and cyborg wars and shit. But nah, it's just discussions about the nature of art and human creativity. Not gonna lie, AI generation has saved me at least a good 10 to 20 bucks, which is a positive in my books. Playing around with the various tools is fun too, considering you have (mostly) limitless potential at a minor or nonexistent cost. Overall AI has been a neat little thing, but it disappoints me that we aren't having Blade Runner-type scenarios. Maybe we'll have to wait till Charles Schwab finally does something
 

alCannium27

Active Traveler
Joined
Feb 15, 2023
Messages
164
Reaction score
269
Awards
55
Overall AI has been a neat little thing, but it disappoints me that we aren't having Blade Runner-type scenarios.
Don't be so hasty now... a Blade Runner android would require, at minimum, an AI capable of mimicking human behavior to the point of being near-indistinguishable from the real thing. Oh, we can get a chassis for the robots, sure, just look at Boston Dynamics -- but they can't act like humans. The field of machine learning is still quite young, a handful of decades max, and we are just beginning to explore the possibility of self-learning machines.
Back when I was in school a few years back, genetic algorithms were the rising star in non-supervised machine learning; it's the sort of thing that can teach a biped or quadruped to walk on its own, learning how to balance itself, get back on its feet, even jump, crawl, etc. to navigate obstacles. Now it's all neural network this, neural network that, when in my day that was something only Google DeepMind made any splash with. Today it's transformers, a little footnote a few years ago, now the hot new iteration of neural networks that's apparently going to disrupt the marketplace.
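
The "learn to walk from nothing but a reward signal" idea can be sketched with a toy genetic algorithm. Everything here is made up for illustration: the robot's gait is reduced to a single number, and the fitness function is a stand-in for "how far did it walk", with its peak placed arbitrarily at 3.0.

```python
import random

random.seed(0)

def fitness(x):
    # stand-in for "how far the robot walked" with this gait;
    # no labels anywhere, only this reward signal
    return -(x - 3.0) ** 2

# random initial population of candidate "gaits"
pop = [random.uniform(-10, 10) for _ in range(30)]

for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # selection: keep the fittest
    pop = [p + random.gauss(0, 0.3)         # mutation: jitter each parent
           for p in parents
           for _ in range(3)]               # three offspring per parent

best = max(pop, key=fitness)                # converges near the peak at 3.0
```

Select, mutate, repeat: that loop is the whole trick, which is why it worked for balance and locomotion long before today's giant networks.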
I think we are closer than we think to the life-like robots of cyberpunk dystopias. Today, LLMs can mostly accurately figure out the meaning of human sentences via context, despite limits that are still present. We have usable robot chassis; we have mature voice-recognition tech (easily available, too). I think the next step is Google and Microsoft perfecting their own LLM models over the next few years and integrating them into their own services, like Google Assistant and Cortana, selling them as "smart personal assistants" with the added benefit of closely human-like chat functionality; Amazon and Apple will either make their own or try to cut a deal with either party to put these abilities into Siri and Alexa. This is the first step: letting people get used to the idea of a sort of para-social relationship with machines.
We will then want more. Commercial robots exist now, but they are highly specialised. This makes sense: Amazon needs worker bots to do the simple job of loading and unloading shipping boxes in the warehouse; auto manufacturers need robot arms to install components on the conveyor belts. They don't need these robots to do anything more. But these machines are either heavy or limited by terrain, and they lack that "personal touch" of a speaking and understanding android voice. At some point, I think, when Boston Dynamics finally manages to lower production costs enough to push their higher-end stuff to the civilian market, they will start by converting their robot dogs and bipedal models into personal "pets", integrating the matured AI models on the market. This second step will be harder, as the barrier to entry is even higher, requiring a solid industrial infrastructure to produce at scale. But after a while, the few players in the game will have thoroughly developed "platforms" to be sold to rich buyers.
People like fancy stuff, rich people even more so. What about fancy stuff that saves you money? What about obedient slaves in the form of robot maids that can do almost anything in the house and require just electricity and periodic maintenance? What about ones that don't require contracts? And on top of that, probably a big conversation piece for the first couple of years. The rich will buy them, parade them on social media, and then the normal folks will clamour for them, however impractical they may be in the early stages. There's potential money there, there's will there; there's just not the tools for it yet.

The question next, I feel, is the direction machine learning will take from here. In transformer neural networks there are several angles of attack for future development. One that I think development will definitely take is increasing the efficiency of the model. Right now, LLMs are massive, with model dimensions in the multiple millions. They are unwieldy and costly. Look, I recently watched Blade Runner 2049; there's a device named Joi. Now, actual hologram tech like that, I'm not so sure about, but without it, it's basically Alexa with a really good LLM model.
Without Wi-Fi, Alexa is a glorified speaker, and I imagine near-future general-purpose robots will have severely limited functions without Wi-Fi, as they rely on huge ML models sending data to them remotely to function. If the efficiency of these transformer NNs can be improved drastically, perhaps there will come a day when individual robot units are capable of processing their information in real time, independently. We do this now; it's called "edge computing": basically distributing workloads to remote sites and sending the results back to the main unit to be merged. They could be like the geth in Mass Effect, coordinatable drones capable of independent action based on pre-defined parameters even when disconnected from the "mainframe" -- an important ability for military adoption, I believe.
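
The distribute-then-merge pattern described here looks roughly like this (a toy sketch, with threads standing in for remote edge nodes and a sum of squares standing in for real work):

```python
from concurrent.futures import ThreadPoolExecutor

def node_compute(chunk):
    # work done independently at an "edge node"
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::4] for i in range(4)]   # distribute the workload to 4 nodes

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(node_compute, chunks))

total = sum(partials)                     # main unit merges the partial results
```

Each node only needs its own slice of the problem, which is the same property that would let a disconnected drone keep acting on local data until it can sync back up.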
Anyway, robots with human-like skin are probably miles away -- not because of infeasibility, but cost and demand. I think future androids will look anime, because then you don't have to worry about the uncanny valley, or whatever. Or like animals, DnD creatures, or just colorful balls. Less hassle that way.
 

NSoph

The Singularity is Now
Joined
Jul 12, 2022
Messages
178
Reaction score
755
Awards
80
The way I see this explosion of generative AI art (including imagery, sound, and text) is as follows:

First, it's not at all surprising to me that these products are being pushed out in the blink of an eye atm -- deep learning models, based on neural networks, have existed for a very long time, but the infrastructure to support any efficient use had been lacking. To expand: the neural network is, to my latest knowledge, the most accurate ML model at scale, but it's also a very cost-prohibitive one. It requires large numbers of servers pooling their computational resources together, in numbers impractical to host in a single location (energy consumption, heat, etc. all prove difficult for one locality). Cloud computing resolves that; now that the technology available since web 2.0 has matured (see AWS, Azure, etc.), it allows mass industrial adoption of neural-network-based AIs.

Neural networks, as one can imagine, depend on massive clusters of servers to process input data and compute learning results. In a roughly similar process, input data is processed in one neuron, and the result (like a signal) is passed to the next neurons in the chain to be further refined. Thus the more neurons, and the more layers these neurons are arranged in to refine the data, the better the results. That means more servers produce better results, and thus the richest player in town has the ace in the business. We see this in crypto -- when computational power = better profit, we see resources conglomerating into syndicates. Take OpenAI for example: the aforementioned ChatGPT and DALL-E are both based on this tech.

OpenAI is, according to Microsoft (Bing):

What we are seeing now is massive corporations competing to push out the most ground-breaking and, therefore, most attractive deep-learning product, as a result of the market having accumulated the cloud infrastructure required to do so.

The final piece of the puzzle is open-sourcing -- neural networks work by learning from data inputs with corresponding responses. Think about teaching a toddler to say any word. It's a relatively simple process given time -- the toddler is constantly bombarded with phrases spoken around them every day, from its parents, its relatives, friends of the family, passers-by, and even smartphones/computers/televisions/radios, etc. They learn the sounds passively most of the time, connecting the previous phrase to the next and trying to figure out meaning; there's visual data accompanying the sounds and vice versa, and the parents can provide timely feedback with rewards or punishments. The neural network is made to imitate that process, so there needs to be a way to source a massive amount of data as well as its corresponding meanings.

The other day, I used a (at the time still free-to-try) AI image generator to give me pictures of straight bananas (don't wanna get into Midjourney rn due to relative cost). This model is based on an adversarial network, with one AI discriminating the generator's results -- that is, the discriminator learns from a preset of image-and-text data to tell what is real and what is generated from visual clues, and then it is fed inputs from the text-to-image generator to judge. The text-to-image generator adjusts its parameters to learn whether its images are "correct" or not (the goal is to fool the discriminator), and the discriminator's goal is to tell the image generator's results apart from real images. But the problem is, you still need real data to further refine the discriminator. So when I chose not to regenerate the images of straight bananas, I told the discriminator that these straight bananas looked like good straight bananas to me.

With OpenAI offering public APIs, it is pooling developers' data from across the globe; everyone is trying to jump on the generative-AI train, and the feedback loop keeps going until the OpenAI project reaches its goal of creating results indistinguishable from man-made sources. IMO this is still web 2.0 -- the cloud-based business model. The AI is certainly transforming the marketplace, but that's not the point of AIs. The web simply made possible the acquisition of data in an amount and quality they could not get earlier.

The end goal of generative AI is to remove labour from creation, of this I've no doubt. You can generate a movie from nothing more than a script, and a script from a sentence, a sentence from a few words -- all this requires is assembling different models into one, one result fed to the next: easily accomplishable, streamlinable, automatable. They have the tools to make it happen, and with the web, they are sitting there collecting the missing pieces they need to accomplish it.
LLaMA partly solves the issue of centralization and the need for big clusters. As the leaked Google memo put it, "there is no moat": relatively good models can now be trained and run on much less compute than before. Innovation and adoption are exploding, and the USG will not be able to stop this alone when small, open-sourced models that are still useful can run on small third-world servers. The genie is out of the bottle, now that the common man has tasted its potential.
 

NSoph

The Singularity is Now
Joined
Jul 12, 2022
Messages
178
Reaction score
755
Awards
80
On the point of AI art and soul, I think there is a fundamental difference between human art and machine art, just like how I believe there is a fundamental difference between AI consciousness and human consciousness. My argument is basically that humans possess souls and machines are incapable of possessing souls. While they can imitate soul, consciousness, and emotions very well, perfectly even, they are devoid of these in reality. To an atheist or a materialist there is no difference between a human and a sufficiently advanced machine, because their viewpoint leaves no room for the immaterial. From that viewpoint, then, yes, a machine's art and consciousness would be equal to a human's, because humans would be no different from machines or "bio automata".

But because I believe that souls do in fact exist, and that we cannot "create" them, machine art and creativity are always subordinate to human creativity. It can look really good; it can even be the best of the best, an imitation that surpasses the most skilled human artist to ever exist; and it will still be an imitation -- an imitation facilitated by other humans through their scientific creativity and ingenuity.
What leads you to believe in souls?
Why do you think they have anything to do with creativity?
What do you think a soul is?
 

pronoundisrespecter

Raw Honey Defender
Joined
May 15, 2023
Messages
44
Reaction score
435
Awards
35
On the point of AI art and soul, I think there is a fundamental difference between human art and machine art, just like how I believe there is a fundamental difference between AI consciousness and human consciousness. My argument is basically that humans possess souls and machines are incapable of possessing souls. While they can imitate soul, consciousness, and emotions very well, perfectly even, they are devoid of these in reality. To an atheist or a materialist there is no difference between a human and a sufficiently advanced machine, because their viewpoint leaves no room for the immaterial. From that viewpoint, then, yes, a machine's art and consciousness would be equal to a human's, because humans would be no different from machines or "bio automata".

But because I believe that souls do in fact exist, and that we cannot "create" them, machine art and creativity are always subordinate to human creativity. It can look really good; it can even be the best of the best, an imitation that surpasses the most skilled human artist to ever exist; and it will still be an imitation -- an imitation facilitated by other humans through their scientific creativity and ingenuity.
I feel this way about it too.

Like you more or less said here: AI has no soul. I'd expand on this with the fact that AI isn't sentient at all; it cannot express emotions, it isn't alive, it isn't a tangible living entity in the real sense of those words - it's just a bot trained to crap out something based on whatever information it was fed. It is not the same as a human being (or even an animal), which has free will, emotions, and a soul given to them by the most high God.
 
What leads you to believe in souls?
Why do you think they have anything to do with creativity?
What do you think a soul is?
-What leads you to believe in souls?
There are two possibilities after death: either there is some persistence or there is not. To believe there is nothing after death, that it is a void not too dissimilar from a long dreamless sleep, is a materialist belief. It asserts that life is the only time we exist and that after it there is nothing, that really only matter exists. The other option is that we can persist after death. There is a lot of argument about what this persistence is, so I won't go into much detail here, but for the sake of argument I believe that the soul is what we really are. That we are not just our bodies, but something beyond that. The biggest piece of scientific evidence I could give for this is Near Death Experiences. Many people who are either declared legally dead, under so much anesthesia they couldn't possibly perceive anything, or otherwise severely incapacitated, report being able to accurately perceive things they otherwise couldn't have any knowledge of. There's a thread on this somewhere here, but here's the research that thread is referencing: NDE info. Hell, I've had one when I got into an ATV accident. Granted, it's self-reported data, but all of the isolated cases with similar experiences lead me to believe there is something. My belief in the soul mainly stems from personal experiences, religion and philosophy studies, MK Ultra, divination, and other small things here and there. While there is no truly scientific evidence to back most of this up, I believe that's mostly because our current scientific tools and understanding are not yet strong enough to describe the divine.

-Why do you think they have anything to do with creativity?
If you'll humor me and accept my previous argument, then souls have everything to do with creativity. It's not our brain chemistry that comes up with paintings. Sure, our brains love simple things like symmetry, colors, and order, but these things on their own do not represent creativity. They are aspects of creativity, tools to be used or discarded. Our soul, which experiences life through our body's senses, is able to evoke those experiences again. It doesn't just have to be painting; books, games, sculptures, all manner of art tries to evoke an experience or communicate a message. When you close your eyes and imagine things, that's your soul at work. When you create events that never happened, or fantastical places, that's your soul at work. The brain itself isn't conscious or imaginative; it needs some other special spark to get the job done. Sure, it facilitates the creative process, but it isn't the creator itself.

-What do you think a soul is?
Really tough question, because anyone who knows the truth is dead! Jokes aside, there are a lot of more talented philosophical and religious thinkers who have done a much better job explaining their theories than I can. I think souls are things that exist, but whether or not they exist in physical space is tough to say. I believe that one day the scientific process could explain what they really are, but currently science is a field so corrupt and self-righteous that it almost refuses to acknowledge the possibility of the soul. They say there is no evidence, but perhaps we are looking in the wrong places. I believe humans can become more aware of their soul through various means, but it's like an atrophied muscle that needs a lot of proper exercise and care. And the path that leads to this awareness is often long, multilayered, and different for most people. This is where I think things like astral projection or remote viewing can be explained, as possible uses or examples of "soul-awareness".

sorry for the wall of text :p
 

NSoph

The Singularity is Now
Joined
Jul 12, 2022
Messages
178
Reaction score
755
Awards
80
-What leads you to believe in souls?
There are two possibilities after death: either there is some persistence or there is not. To believe there is nothing after death, that it is a void not too dissimilar from a long dreamless sleep, is a materialist belief. It asserts that life is the only time we exist and that after it there is nothing, that really only matter exists. The other option is that we can persist after death. There is a lot of argument about what this persistence is, so I won't go into much detail here, but for the sake of argument I believe that the soul is what we really are. That we are not just our bodies, but something beyond that. The biggest piece of scientific evidence I could give for this is Near Death Experiences. Many people who are either declared legally dead, under so much anesthesia they couldn't possibly perceive anything, or otherwise severely incapacitated, report being able to accurately perceive things they otherwise couldn't have any knowledge of. There's a thread on this somewhere here, but here's the research that thread is referencing: NDE info. Hell, I've had one when I got into an ATV accident. Granted, it's self-reported data, but all of the isolated cases with similar experiences lead me to believe there is something. My belief in the soul mainly stems from personal experiences, religion and philosophy studies, MK Ultra, divination, and other small things here and there. While there is no truly scientific evidence to back most of this up, I believe that's mostly because our current scientific tools and understanding are not yet strong enough to describe the divine.

-Why do you think they have anything to do with creativity?
If you'll humor me and accept my previous argument, then souls have everything to do with creativity. It's not our brain chemistry that comes up with paintings. Sure, our brains love simple things like symmetry, colors, and order, but these things on their own do not represent creativity. They are aspects of creativity, tools to be used or discarded. Our soul, which experiences life through our body's senses, is able to evoke those experiences again. It doesn't just have to be painting; books, games, sculptures, all manner of art tries to evoke an experience or communicate a message. When you close your eyes and imagine things, that's your soul at work. When you create events that never happened, or fantastical places, that's your soul at work. The brain itself isn't conscious or imaginative; it needs some other special spark to get the job done. Sure, it facilitates the creative process, but it isn't the creator itself.

-What do you think a soul is?
Really tough question, because anyone who knows the truth is dead! Jokes aside, there are a lot of more talented philosophical and religious thinkers who have done a much better job explaining their theories than I can. I think souls are things that exist, but whether or not they exist in physical space is tough to say. I believe that one day the scientific process could explain what they really are, but currently science is a field so corrupt and self-righteous that it almost refuses to acknowledge the possibility of the soul. They say there is no evidence, but perhaps we are looking in the wrong places. I believe humans can become more aware of their soul through various means, but it's like an atrophied muscle that needs a lot of proper exercise and care. And the path that leads to this awareness is often long, multilayered, and different for most people. This is where I think things like astral projection or remote viewing can be explained, as possible uses or examples of "soul-awareness".

sorry for the wall of text :p
The wall of text was great. Sounds all very reasonable.
The most important question to me is the last one, simply as a matter of definition: what are we looking for, what are we trying to locate and find out about?
It is often connected with the "real" you, but mind and soul are frequently differentiated, like mind and body.
Nowadays we can see the mind as just a subcomponent of the body: when the brain is damaged, the mind is damaged; when the body dies, the mind dies.
So you could maybe argue that what remains after the body and mind are destroyed is the "soul" or the "real you".
But that wouldn't necessarily be the stuff you value. People most often cherish their memories, but memories are very tangibly stored in the brain, and if you destroy it, they are irreversibly gone.
If you don't have your memories and thought-patterns, is it really you? Is that something we value?

I have a hard time believing that there is some kind of backup where all your memories are somehow stored a second time. We can destroy memories and thought patterns by destroying brain parts, and we can create them purely materially in the form of AI, so I don't quite buy that brains are merely "receivers" getting their content from a far-away place without a measurable means of transmission.

I think NDEs can often be reasonably explained by confabulation.
We can retrospectively create and change memories to "connect" two real memories and maintain a narrative. See the split-brain experiments, or Wernicke-Korsakoff syndrome (severe B1 deficiency due to alcoholism):
if you ask such patients what happened, they make up a story that they believe is accurate but actually never happened. It's also related to scientists predicting decisions by monitoring the brain long before the person thinks they made the decision. People constantly hallucinate, especially if their brain is in bad condition (like when its working is disrupted by drugs, extreme stress, or severe understimulation/isolation).
 