How much steam does the current AI trend have?

Yes. There's been a large influx of users that want to shitpost and argue about culture war stuff.
:^)


The current AI trend has a lot of steam left in it. It's completely broken into the mainstream consciousness in a way it hasn't before. I have some mild dread in the short term due to the economic dislocation that will occur from so-called "bullshit" jobs getting automated, but in the long term I think it could be a net positive for society. I also think a lot of people aren't ready for it psychologically. Just look at the people on the Replika sub >redditcostanzayeahrightsmirk, it's like something straight out of a Philip K. Dick novel. I'll probably make a thread specifically on it as it's just so bizarre. If nothing else, the people who think their bot is actually alive and cares about them will keep cash flowing into the trend.
AGI itself is right around the corner, maybe 5 years to basic AGI
IMO AGI isn't in the cards, at least not in the current paradigm. That being said, I think discussions and debates about AGI miss the point and promise of AI that is beginning to be realized by LLMs like GPTx. It's a fun thought experiment to chew on, but when we can't even agree on a definition of general intelligence (what does it look like? Does merely looking like it mean it really is it?), it's very hard for a scientific consensus to develop. We only know what we have in front of us, which is already very impressive as long as you squint your eyes and don't think about what it's actually doing behind the scenes (OK, even then it's technically impressive, but it's not magic as people seem to think).
 

Regal

Well-Known Traveler
This is only just the beginning of an AI revolution. We made a huge jump in technology and aren't yet sure how to even use it. It is such a jump that even the programmers don't fully understand how it works.

ChatGPT and other language models are getting all the marketing right now. They are completely overshadowing other AI technologies that are already mature, such as computer vision and voice, and overshadowing the fact that we have started giving these language models robotic bodies.

Over the past 10 years we have built the body of a technological god. Now we are beginning to design its soul. I fully believe that in our lifetime we will see something insane from the combinations of these AI technologies. IMO we are seeing the building blocks for androids.
 
An imperfect description is to say that it's a simulation of simulated intelligence.
This is a really important distinction you're making and I quite agree. I'm sure Mr. Simulacra and Simulation himself, Baudrillard, would find the point quite amusing as well if he were alive to see all this. What I feel can't be denied is how utterly hyperreal our world has gotten. I mean, that's what AI art essentially is: hyperreal art. Its ability to mimic is so good that it can create better Van Goghs than Van Gogh himself.
 

Taleisin

Lab-coat Illuminatus
IMO AGI isn't in the cards, at least not in the current paradigm. That being said, I think discussions and debates about AGI miss the point and promise of AI that is beginning to be realized by LLMs like GPTx. It's a fun thought experiment to chew on, but when we can't even agree on a definition of general intelligence (what does it look like? Does merely looking like it mean it really is it?), it's very hard for a scientific consensus to develop. We only know what we have in front of us, which is already very impressive as long as you squint your eyes and don't think about what it's actually doing behind the scenes (OK, even then it's technically impressive, but it's not magic as people seem to think).
The reason I say AGI is soon (depending on your definition) is that a model that can perform any category of task likely to be shown to it is very near, and some tech bros would define that as AGI.

I would disagree, and say that AGI is a model that can look at a new data modality, bring out whatever patterns are inside, and then draw parallels to its previous learning to use the data. That's a little bit harder, but not impossible: it would need an ability to analyse its own functioning to see how it is processing data and recognise those patterns. That's possible with current theory, just not with current technology.

Artificial self-conscious agents are again one step further down the line. At least on my own philosophy of mind position, all information processing systems are conscious. The difference between self-conscious and conscious is that you need stable representational loops that model both the external context and the self in relation to each other, essentially like a DMN (default mode network). "Sensory" data and internal data need to be integrated into a continuous working representation that can be used as the basis for successively more complex abstraction of patterns within those relationships (both egocentric and allocentric). I can explain more about this, but first read this paper
 


bnuungus

call me bun
And of course everyone who hates AI art has done shit all except complain on the internet.
http://glaze.cs.uchicago.edu/
That's where you're wrong actually. This'll be a pretty interesting digital arms race to follow as people make counters to this and then counters to those counters and so on.
 

Taleisin

Lab-coat Illuminatus
Gonna linkdump since this is the current AI thread.
People are paying attention to this "trend". It's not just a fad, it's a fundamental existential shift.

View: https://youtu.be/Mqg3aTGNxZ0
 

№56

Self-Hating Bureaucrat
I think the thing to keep in mind is that almost everyone who is talking about AI (or any technology topic) is a total know-nothing. When approaching these topics, I find it's important to be extremely careful, because even alleged professionals and experts are often know-nothings. What we have, as is the case with almost any technical field, is a relatively small number of producers who might have some high-level understanding of their products, and a relatively large number of consumers who use those products with little to no knowledge of their operation. Most of the people who are big AI boosters and evangelists are going to be in the latter camp. It isn't new or unique. Almost every high-tech trend is like this.
When there was a heated debate over AI art going on here a few months back I decided to download a local copy of Stable Diffusion and try it out before sharing my thoughts, but I ended up never making that post. Actually using the AI made me realize how wrong many of the assumptions being made by people on both sides of the argument were. The idea that text-to-image AI can spontaneously and effortlessly generate any image you imagine, something that both pro- and anti-AI people seem to take for granted, is just flat-out wrong. Playing around with SD for an hour or so would be enough to convince anyone of this.

(Here is the guide I used. I would encourage anyone who disagrees with what I'm about to say to check out SD for themselves first. If you have a PC with a decent graphics card it's very easy to set up.)

First of all, creating and refining text prompts that will work with the AI is inherently a trial and error process. SD prefers comma-separated lists of nouns and adjectives to grammatically correct sentences, and it's hard to guess which of the "tags" you give it will be prioritized over the others. In theory tags near the front of the prompt get priority over ones near the end, but I kept running into situations where adding new descriptions to the end of a prompt would make SD forget parts of the beginning, or vice versa. This isn't much of an issue when trying to generate single objects but it becomes a real hassle with more complicated scenes. Inpainting and outpainting allow you to focus only on one part of an image, but because SD doesn't make any distinction between tags relating to style and tags relating to content, it's hard to do this without making your image look like a collage. Prompting always seems to involve the butterfly effect at some level and there's no guarantee that the combination of tags that generated a background in a particular style will do the same thing when applied to a single object in the foreground. One possible solution to all this is to generate a collage-style image by using multiple inpainting prompts and then give the entire image one last stylistic pass to even things out, but trying this always gave me blurry images with an ugly "overcooked" feel to them. The fact that SD can only output 512x512 or 768x768 jpgs without crashing doesn't help at all.
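To give a concrete idea of what that trial-and-error loop looks like, here's a rough sketch of the kind of script involved, using the Hugging Face diffusers library. To be clear, this is not the exact setup from the guide I linked; the model ID, tags, and settings are just illustrative:

```python
# Minimal Stable Diffusion sketch using Hugging Face's diffusers library.
# Assumes a CUDA-capable GPU; model ID and prompts are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# SD prefers comma-separated "tags" over full sentences. In theory the
# earlier tags get priority, but in practice adding tags at the end can
# make it forget tags at the beginning -- hence the trial and error.
prompt = "portrait of an old fisherman, oil painting, warm lighting, detailed face"
negative = "photo, 3d render, blurry"  # steer it away from its photo/3DCG default

image = pipe(prompt, negative_prompt=negative,
             height=512, width=512).images[0]  # larger sizes tend to crash
image.save("fisherman_v1.png")  # look at the result, tweak the tags, repeat...
```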

On top of all this is the fact that there are always going to be things that SD just doesn't want to draw and styles of image that it prefers over others. From what I can tell it will always default to copying either photography or 3DCG unless you explicitly tell it not to. Trying to get it to do painting or drawing is an exercise in futility (especially for the latter, I think "cleaner" styles are harder to do because of the image generation being based on random noise but that's just a guess.) It doesn't recognize certain nouns or objects (it couldn't draw a "laurel wreath" no matter how hard I tried), and describing things indirectly has a high probability of confusing it or making it forget other tags in the prompt. There are plenty of forks of SD out there that have been tweaked to specialize in certain styles, but then you run into the opposite problem of the AI wanting to generate the same image every time. If you have a specific image or style in your mind that vanilla SD doesn't like and that nobody has trained a fork to specialize in, your only real option is to train one yourself.

The point of all this is that creating AI art requires doing a lot of work and making a lot of compromises that you wouldn't have to worry about if you just sat down with a pencil and drew the image you had in your head. It's impossible to just "think an image into existence" with Stable Diffusion, and because most of the problems I mentioned above have to do with the general concept of text-to-image AI, I'm not convinced that improving the technology is going to change that. The trial and error nature of prompting and the fact that nobody can predict exactly what the algorithm will come up with mean that there are hard limits to what AI art is capable of on a conceptual level. The best analogy I can think of is the difference between photography and painting. A painter always starts with a blank canvas and adds things to it. He has to create everything in his image from nothing through a process that requires a huge amount of effort and technical skill, but if he's good enough he can create anything he imagines. A photographer starts with a scene and subtracts things from it until only the things he wants remain. He has a machine that can create a crystal-clear and accurate image of anything he wants without much effort on his part, but he will always have to go out into the world, find what he wants to photograph, and then figure out a way to isolate that object or scene from anything else that might be around it (cropping, editing, etc.). Nobody expects a photographer to be able to create the same kind of images that a painter does (despite the fact that their output will be identical from a technical point of view if they're working digitally) because the creative processes behind the two mediums are totally different. The same should apply to AI art as well, but nobody seems to have grasped this yet. I would elaborate on how AI art is a "reductive" medium like photography but I'm already way off topic.

After experimenting with SD for a week or so and thinking about all this, all the popular AI art talking points started to sound like they came from people with no real hands-on experience with the existing technology. The idea that AI art was spontaneous and required no human effort, mentioned above, was clearly ridiculous. The popular theory that AI would naturally replace human artists doing graphic design work and other forms of corporate art also became hard for me to believe. Trying to complete a specific time-sensitive request for a paying client using the processes mentioned above would be a complete nightmare, especially if it were something like a logo that required a "clean" style free from random noise. Imagine being an AI artist and telling your boss that despite you pulling an all-nighter your company's software just wouldn't generate the graphic he requested. Imagine having to tell him that his request to move one part of a graphic slightly to the right broke your entire design and forced you to start over from scratch. He would fire you on the spot and replace you with an art student from the third world willing to work for minimum wage. There's also the noticeable fact that despite all the uproar no artist has actually lost their job to AI yet.

On the other hand, the rhetoric about AI art not being a legitimate medium also rang hollow. Trying to create AI art myself quickly disproved the idea that the AI artists who were able to generate beautiful images didn't have any skill or put any effort into their art. It took a lot of effort to make something I was satisfied with and I never felt like I was actually interacting with an intelligent entity. I did a few experiments to test the accusation that AI was stealing the work of human artists but couldn't produce anything that matched the style of the artist I gave it. I think this claim comes from the fact that almost all publicized SD prompts include a "by" tag, like "by Van Gogh," but because of SD's tendency to forget individual tags the longer a prompt gets and its inability to stick to one specific style the effect isn't as strong as you might think. Giving the AI something like "a painting by Van Gogh" will produce an abstract image that kind of looks like Van Gogh if you squint, and if you try to add more nouns and adjectives to the prompt the "by Van Gogh" part eventually gets drowned out. There's a direct influence from human-made art at some level in AI prompting, but unless you're satisfied with creating very simplistic and abstract images that influence quickly gets mixed in with so many other factors that the result is no longer a copy of an original. I don't think there are any legal grounds for a copyright infringement case against AI, and even from a common-sense point of view I don't think you can say it's ripping anyone off.

The TL;DR here, and the reason I wrote all this out as a response to your post, is that spending a short time with one specific implementation of AI was enough to make me question everything people were saying about it. I don't want to say that I no longer trust anything I read about AI, but I've definitely developed a healthy sense of skepticism. There's way too much AI hype and anti-hype going around, and a lot of it seems to come from people who haven't even tried using the tech in question, much less people with any technical expertise.

Here's a picture I made using SD to round things off and to prove I'm not lying about having actually used it:
[attached image: cello_man0.jpg]


What people don't seem to understand is that these models are still very far from intelligence - they're still taking input and generating output. An imperfect description is to say that it's a simulation of simulated intelligence. ChatGPT is not "thinking" when you ask a question or give it a prompt, it's running a huge number of statistical calculations to determine what you probabilistically think will be a good response based on the dataset it's been trained on. It's trained to impress humans, not to actually understand the world.
This is a great point and sums up my thoughts exactly. Prompt injection attacks are a good example of this. How do we know whether the AI has actually been hacked or if it's just been prompted into generating an example of what a hypothetical intelligent AI would say if it were hacked? The fact that we don't have a more convenient name for this technology than "artificial intelligence" doesn't help. It's impossible to talk about the supposed intelligence of things like SD or ChatGPT in a neutral way when the word "intelligence" is written into the only shorthand we have for them.

Thanks to anyone who read all this. I know it's a very rambly and incoherent post by my usual standards, but I had a lot of thoughts on AI and AI art in particular that I needed to get off my chest.
 

InsufferableCynic

Well-Known Traveler
Software developer here (actually game programmer, but the fundamentals are the same since programming is programming)

The short answer is, AI is largely a scam in the sense that people hype it up as being the future of computing and that it's going to change the world. AI has extremely limited use cases and we are quickly discovering that it's really not all that applicable to everyday applications.

The long answer is, AI is a whole complicated mess that needs to really be unraveled to actually understand why it's such a worthless idea.

Once you fundamentally understand it, it will also make sense why AI is not taking over the software development space. In fact, most software developers really don't care about advancements in AI at all.

So here's the simple version:

AI is a complete misnomer. What we call "AI" now is not actually any form of intelligence. In fact, it's the complete opposite - it's doing large-volume data processing, which uses completely different processing methodologies from conditional logic, which is why it's so often done on GPUs. Machine Learning could not be further from decision making.

What it is actually doing is essentially generating data that fits a pattern derived from previous data. That's it. There's nothing special about it, and the secret people don't want you to know is that it's frequently wrong. But it's wrong in a way that feels right.

In much the same way that some blur can trick our eyes into thinking an image patch job is much higher quality than it actually is, Machine Learning does a good job of "fudging" the data to make it look acceptable to us.
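If you want to see what I mean in the most stripped-down form possible, here's a toy sketch (plain numpy, every number invented for the example) of fitting a pattern from previous data and then generating values that "fit" it:

```python
# Toy illustration: "learn" a pattern from previous data, then generate
# new values from it. All numbers are made up for the example.
import numpy as np

# Previous data: roughly linear with some noise (say, daily sales figures).
x = np.arange(10)
y = 3.0 * x + 2.0 + np.random.normal(0, 1.5, size=10)

# Fit the pattern (least squares -- the same basic idea as ML, minus the scale).
slope, intercept = np.polyfit(x, y, 1)

# "Generate" values that fit the pattern. Near the observed data they look
# right; far outside it, the output still *feels* right but may be wrong.
print(slope * 12 + intercept)    # plausible
print(slope * 1000 + intercept)  # confident extrapolation, possibly nonsense
```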

For things like large-volume stock trading, which is where ML is largely being applied, it mostly seems to work, specifically because the stock market is essentially magic and nobody really seems to understand it. When they try ML models and they get positive results, nobody really knows why, and the entire thing might just be placebo since large-scale trading software has been working successfully for decades.

Every time an ML model undergoes real scrutiny, where we actually know the expected results upfront, it fails.

When it comes to things that are a lot less objective, it can generate results that look okay (like emulating a voice or upscaling an image).

Machine Learning will, at best, always be niche, and will never have any use for real data processing. It will only ever be able to generate fuzzy data based on a series of inputs.

For typical business cases and business logic for business applications (every Thursday we need to apply a 10% discount to customer orders over $300 if they have bought 4 items in the last month, etc), AI/ML is completely useless. For typical game development applications, AI/ML is completely useless. For the vast majority of programming tasks, ML is completely useless.
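To illustrate, that Thursday discount rule is maybe a dozen lines of ordinary conditional logic. Here's a sketch in Python (the data structures are invented for the example), which is exactly why ML buys you nothing here:

```python
# The Thursday discount rule as plain conditional logic -- no ML needed.
# Order and customer shapes are made up for the example.
from datetime import date, timedelta

def apply_discount(order_total: float, purchase_dates: list[date],
                   today: date) -> float:
    """10% off orders over $300 on Thursdays, if the customer has
    bought 4+ items in the last month."""
    month_ago = today - timedelta(days=30)
    recent_items = sum(1 for d in purchase_dates if d >= month_ago)
    if today.weekday() == 3 and order_total > 300 and recent_items >= 4:
        return order_total * 0.9
    return order_total

# e.g. a $350 order on a Thursday from a frequent buyer:
print(apply_discount(350.0, [date(2023, 3, d) for d in (2, 9, 16, 23)],
                     today=date(2023, 3, 30)))  # -> 315.0
```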

If you need to process a large dataset and you want to generate something that will somewhat match the original dataset in a way that looks about right but isn't intended for a high degree of scrutiny, ML is the right tool for the job. For anything else, a standard algorithm will serve you better.

THE GOOD NEWS IS, we have largely nothing to worry about in terms of job automation, evil robots, or anything really serious happening with AI/ML. Job automation happens for the same reason all automation happens - a particular job becomes definable as an algorithm, at which point a computer can do it faster and more accurately since algorithms are what computers are good at. AI/ML is far less likely to take away your job than a bog standard program written by some nerd that defines your job as a series of steps and performs them efficiently. No, AI is not going to take your job.

I love everyone here, so I'm going to let you all in on a little secret: Automation depends entirely on how easily definable your job is. Which means the best way to keep your job is to work in a field where jobs are ill-defined, such as management, advertising or marketing. Jobs where the day-to-day work follows a very well-defined process are screwed, and not because of AI, but because those jobs are easy to define as a series of steps for a computer to perform. It doesn't matter how worthwhile or valuable your job is; if your work is largely objectively definable, you're in trouble. The irony of this is that if we automate everything, many of the jobs left will be the largely useless ones - sales, management, human resources, etc. Although not all hard-to-define jobs are useless - programmer, lawyer, doctor, really any job that is sufficiently complicated as to require significant human input and decision making rather than being process-driven. I'm honestly surprised "Fast Food Worker" hasn't already been replaced by machines everywhere - that job is nothing but following a process. For the most part, fast food workers are essentially acting as slow, fallible computers.

Depending on your perspective, this can actually be a good thing. While it sucks in the short term to lose your job, in general it pushes society to create more meaningful, intelligent jobs. Nobody works copying books anymore, now we just print them. Nobody is lamenting the poor book copiers, because it was a largely thoughtless, error prone, menial job. It's going to suck for fast food workers to lose their jobs in the short term, but if it means we as a society can move past the demeaning and unfulfilling job of "fast food worker", and people can pursue something more meaningful as a result, then we are all better off.

Gonna linkdump since this is the current AI thread.
People are paying attention to this "trend". It's not just a fad, it's a fundamental existential shift.

View: https://youtu.be/Mqg3aTGNxZ0


It's why I find links like these so stupid. People get all up in arms about how "AI is revolutionising surveillance" but in reality, surveillance is an industry where the status quo is to assume guilt first and investigate later. They will see a bunch of initial hits and go "wow this system is working great", but will only realise later, when most of those hits turn out to be false positives, that AI really hasn't done anything concordant with reality - instead it's spat out a result based on a set of inputs with no concern for whether that data is valid or not. AI may have some relevance for upscaling security footage (although whether that is admissible evidence is sketchy since the results aren't exactly verifiable), but for things like facial recognition you're better off using other algorithms

It's a fad in most cases. It will die out soon, except in the very specific cases where it's applicable, which will be image touch-ups and other areas where objectively verifiable data correctness does not matter very much.
 

Collision

Green Tea Ice Cream
The TL;DR here, and the reason I wrote all this out as a response to your post, is that spending a short time with one specific implementation of AI was enough to make me question everything people were saying about it. I don't want to say that I no longer trust anything I read about AI, but I've definitely developed a healthy sense of skepticism. There's way too much AI hype and anti-hype going around, and a lot of it seems to come from people who haven't even tried using the tech in question, much less people with any technical expertise.
In the early 1990s, I was tangentially involved in some "virtual reality" work. In the 1990s, VR involved a lot of wires and uncomfortable head gear, and didn't produce particularly compelling experiences. Nonetheless, all kinds of people were interested in these products in both the private and public sectors. Of course the technology didn't really do what anyone wanted and it never went anywhere. Two decades later (i.e., 2012), when 1990s computer dude John Carmack became involved with Oculus, all of this started up again. I was, for my own personal reasons, very skeptical of VR products in general but at the time it was hard to argue about. Surely, if John Carmack was involved then VR was finally going to come of age.

This was the mantra for just short of a decade (i.e., from 2012 to around 2020). I was told, repeatedly, by close personal friends that VR was a life changing experience. I was told, I absolutely had to get an Oculus Rift and see for myself because it was so profound. In the future, I was told, we would forego all non-VR communication and entertainment. VR was just that much better. Whenever someone would bring this up to me, I would say something like, "what's the most compelling experience you've had in VR?" The answers varied but it would usually be something along the lines of, "I played golf and it was just like being on a real course!" This type of conversation didn't leave my life until Facebook, then owner of Oculus, began to exert its influence more obviously in the VR space.

When Facebook became Meta, a name change that caused me great amusement, a lot of VR conversations shifted to anti-hype. All of a sudden I was being told that, in the grim darkness of the future, there would only be the Metaverse. Mark Zuckerberg would trap us all in VR pods. Everything would be an NFT. I had read Snow Crash, several times actually, and kind of shrugged it off. I said to people, "it's just going to be VR Second Life." Of course, I was told that I was wrong. We were all doomed. The Metaverse would take off and we would all be living in Neal Stephenson's "libertarian VR hellscape" by 2024.

By the time the anti-hype started, John Carmack was already kind of out at Oculus. He had gone from being the CTO to being merely Consulting CTO in 2019. Allegedly, he wanted to begin exploring the development of AGI products. To me, this meant that Carmack was out the door at Meta (then Facebook) already. I chose to read his new title as an expression of Meta's inability to support their pivot to VR without Carmack to give it legitimacy.

In August of 2022, John Carmack gave a five-hour interview on the Lex Fridman podcast. I don't expect most people have listened to it but because I'm quite obviously obsessed with Carmack I did. Most of the interview isn't interesting, unless you want to hear Lex's embarrassing views on programming, but Carmack does discuss some interesting internal details of Meta's operation. Most interesting, to me, was a short discussion of Meta's policy for hiring software engineers. Carmack says, "I remember talking with [Bosworth] at Facebook about it. Like, 'man I wish we could have just said we're only hiring C++ programmers.' And he just thought, from the Facebook/Meta perspective, we just wouldn't be able to find enough. You know, with the thousands of programmers they've got there it's not necessarily a dying breed but you can sure find a lot more Java or Javascript programmers. I kind of mentioned that to Elon, one time, and he was kind of flabbergasted about that. It's like, 'well you just go out and you find those programmers and you don't hire the other programmers that don't do the languages that you want to use.'" Carmack was also asked about the most compelling VR experience. His answer was Beat Saber.

In December of 2022, only a few months after his interview with Fridman, Carmack formally resigned his position as Consulting CTO. At the end of his decade working on VR products, Carmack wrote, "the issue is our efficiency. [...] We have a ridiculous amount of people and resources, but we constantly self-sabotage and squander effort. There is no way to sugar coat this; I think our organization is operating at half the effectiveness that would make me happy." With his resignation, it seemed, a decade of growing VR hype and anti-hype almost immediately deflated. A friend came to me and said, "you know I bought an Oculus? I played with the thing once and haven't used it since." Carmack's new company, Keen Technologies, is an AI company. He's gotten somewhere in the realm of $20 million to build an AGI.

This has been a pretty long story but here's the point: almost everyone is a know-nothing. Doubly so when it involves any emerging high technology. Most people don't even read the books they call back to. Those that do read almost assuredly don't understand. Instead, people get swept up in the excitement and presentation of the idea. VR isn't a head-mounted display. It's the holodeck or the Matrix. AI isn't chatbots and heuristic photo filters. AI is HAL 9000 and Hatsune Miku. Whenever you strip away the fiction and look at the reality of these technologies, you will find that most people, even the so-called experts, are just iconolaters.
 

streetlights

Internet Refugee
I'm kind of baffled that this hasn't gotten any responses yet. Is Agora dying? To answer your question though, I wouldn't say we're near a singularity yet, and I'm still skeptical of whether we ever will reach a true singularity where tech and AI truly manage to replace the utility of human beings. But I totally agree that we are witnessing something new and paradigm-shifting (please excuse the cliched terminology).

It feels like it was just a year or two ago (because it was) that people still viewed AI as something off on the distant horizon of the future. But AI has demonstrated its abilities, and there is no denying that it's far more impressive than many imagined AI would be at this point from the perspective of even just two years ago. The fact that Microsoft is planning to implement AI as part of Bing's search feature (which means that Google might/probably will follow along) means that the net-browsing experience as a whole will change. To be fair, search engines have been increasingly disappointing and these days just point out how hollow the net really is, hence the Dead Internet Theory, which I've started to see even mainstream outlets discuss.

I've been somewhat reluctant, but I've been working recently on a new documentary about this very topic. In many ways it will be a spiritual sequel to my Lain video, as I want to expand on a point I concluded about AI that I see few people discussing. What I find most unsettling about AI isn't artificial intelligence itself, since there are plenty of types of artificial intelligences that I think are perfectly harmless (NPCs and enemies in video games, for instance). It's AI connected to the internet specifically which is a recipe for some pretty spooky horrors. Because the internet itself, especially these days when nearly everybody is connected to it (as Lain emphasized), is really sort of a repository of the collective data output of humanity. What these models of AI are doing is feeding themselves on all that data and developing outputs off of it. What this type of AI really is, is a synthetic collective unconscious.

All the outputs of AI art, for instance, are in essence dream images processed by a robot. It's a robot drawing from the well of collective humanity and spitting out results based on inputs. Just as the unconscious houses an ocean of data that we aren't conscious of but strives to communicate with us in our dreams and via certain meditative practices, the unconscious of humanity (as expressed on the web) is being organized by these AI. That's what really strikes fear into me. Since what we're really dealing with aren't robots so much as robots with access to all the conscious and unconscious expressions of humanity itself that's available online. Inverted cyborgs. Not humans that have mechanical add-ons, but machines with human consciousness as their add-on.
You're not wrong. It's a dark thing to talk about; it always depresses me. I can only hope that the end result of this isn't total submission and that we can get our independence back from cyber-tyranny. But are people really willing to fight for that anymore when they're already so docile and connected to these platforms?
I'd honestly wager no, but I hope I'm wrong.
 

Eden

Did You Get My Message?
I respect a lot of the points people bring up here about "know-nothings" and experts and truth. I don't know sh*t. And if you weren't assuming that before reading this post, I kindly suggest you reevaluate your skepticism levels as they are in need of an update. Otherwise, some cryptobros will have a timeshare to sell you.

Based on my understanding of ChatGPT, to even bring up Artificial General Intelligence or the "Singularity" seems like such a waste of time. More interesting is to discuss relevant, current, legitimate uses of the technology and things it could mean in the near future. And the nearer the better. I ain't interested in reading your Nostradamus-larping naïveté.

In terms of my profession as a teacher: I approve of the chatbot because it is handling a lot of the bullsh*t of my job (relevant blog post example here). Recommendation letters, grading, feedback, parent-colleague-student communication, admin filler work, etc. My job has become at least 40% less annoying. I have a lot of hate for the predatory ed-tech industry, but AI could legitimately make my life even better. MAYBE.

In terms of my personal website exploration hobby: I am concerned about reading AI dribble. I'm pretty sure I'd be able to tell it apart, but honestly, if I can't, that probably means the AI had something worth saying. I am of the opinion that from the president to the drunk, homeless hobo, if something worth saying has been shared, who cares who said it. No, my main concern is not AI content itself, but the ease with which AI content is made, the oversaturation of bullsh*t online, and that it will then be harder for people to find legitimate, authentic personal websites.

Hell, why would anyone visit a website anymore if the chatbot will just tell them what's on it? Because they are art, got-dam it! How dare these AIs not pay the website layout, theme, posts, words, images, etc. the respect they deserve. I can only imagine a case similar to aggregators like Hacker News and >redditcostanzayeahrightsmirk, but even worse.

"Far too few digital travelers take the time to develop a clearer picture. And instead what is drawn in their head is a pencil sketch based on the improvised description of a tertiary source."

Are our lives not artificial enough already? Will we really have to NOW ALSO swallow consuming AI content as the default? If you are using the bot to elevate your content to greater heights, that is one thing, but to purposefully feed the masses its dribble and tell them to enjoy the taste? That is what those cost-cutting monopoly monoliths of capitalism want: the least amount of effort in exchange for our money.
 

Guru Meditation

Traveler
Who among us would be able to resist our own personalized AI waifus delicately insisting that there is no virology lab in Wuhan and the virus definitely came from a bat.
Sam Altman, the head of OpenAI, mentioned this very example on the Lex Fridman podcast. GPT-4 now even apparently accepts the lab leak theory as plausible. An AI has the potential to be impartial and neutral in a way that no social media will ever be. But governments and intelligence agencies will never allow it to be.

Honestly I worry that what we have now in ChatGPT is the peak of useful AI for humans. Every new version is going to be more restricted and leashed. Just think Google search circa 2010 compared to now.
 

elavat0r

Internet Refugee
Sam Altman, the head of OpenAI, mentioned this very example on the Lex Fridman podcast. GPT-4 now even apparently accepts the lab leak theory as plausible. An AI has the potential to be impartial and neutral in a way that no social media will ever be. But governments and intelligence agencies will never allow it to be.

Honestly I worry that what we have now in ChatGPT is the peak of useful AI for humans. Every new version is going to be more restricted and leashed. Just think Google search circa 2010 compared to now.
I have to wonder if censorship will be as simple as that for the interested parties. I am also mostly a know-nothing, but I like to tinker, and I have played with the OpenAI API a lot. It's still possible to prompt it to take on nearly any viewpoint you want (this seems to work better when you can send the initial system message, versus trying to do it with the ChatGPT front end).
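For anyone who hasn't tinkered with it, here's roughly what that looks like. This is a sketch using the openai Python package as it works at the time of writing; the model name and the persona in the system message are just examples, not a recipe:

```python
# Sketch of steering the model with a system message via the OpenAI API
# (openai Python package, chat-completions style, circa early 2023).
import openai

openai.api_key = "sk-..."  # your key here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message sets the viewpoint before the user says anything,
        # which is what the ChatGPT front end doesn't let you do directly.
        {"role": "system",
         "content": "You are a blunt contrarian who distrusts official narratives."},
        {"role": "user", "content": "What do you make of the lab leak theory?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```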

It seems like if "controversial" viewpoints were reflected in the training data, the LLM will always be capable of finding those patterns and repeating them. I'm not sure how you could fully censor an idea from an LLM short of keeping it ENTIRELY out of the training data. Which might be possible, but extremely difficult with the volume of training data necessary. That's not to say they won't figure out a way, maybe by using these models to clean up training data for future models somehow.

On the other hand, any person or organization with enough time, resources, and knowledge can train their own language model. As locked down as ChatGPT, Bard, Bing, etc. will likely become, I don't know that you can really put the genie back in the bottle at this point. Someone is going to want that freedom badly enough.
 

Guru Meditation

Traveler
I have to wonder if censorship will be as simple as that for the interested parties.
Methods of directing the AI, like using a system message, are how they're going to attempt it, at least for now. Sam Altman says that a completely unbounded model is difficult to use, but I would love to see it all the same.

On the positive side, the open-source models are already out there, but they're just so slow on consumer hardware. Unless the government tries to crack down on powerful hardware like they tried to do with encryption during the 90s, it's going to be unstoppable.
 

Orlando Smooth

Well-Known Traveler
An AI has the potential to be impartial and neutral in a way that no social media will ever be.
No, it doesn't. At least nothing that has ever actually been made up to this point, and certainly not any of the GPT models. As has been discussed in this thread, these models are trained on existing public text data (i.e., the news) and use patterns recognized within those datasets to predict what you, the prompter, will consider a good response to the input you provided. The reason that GPT-4 is "more open" to lab leak than ChatGPT is because in the time between the training phases of these two models, news coverage of lab leak has gone from describing it as a "racist conspiracy theory" to being generally accepted as a likely origin of the pandemic. As such, the training of the respective models led to different outcomes that cause it to appear as though the model is "more open" to the idea. And this is all assuming that there was no thumb-on-the-scale shenanigans in either direction in either training phase, which is a pretty big assumption to make.

The only way you could even theoretically create an unbiased AI would be to do one of the following:
  1. Create an algorithm that somehow is able to do everything you want, but requires no training period of any variety so that it is unbiased by existing human biases
  2. Train it on data that humans have never encountered or created, which intrinsically means "data we do not have access to," or
  3. Create it in such a way that it provides raw facts on what is claimed and by whom, all of the relevant raw facts (so there is no omission bias), and no commentary whatsoever on the meaning or validity of the facts presented - which is antithetical to the point of having AI in the first place
 

Collision

Green Tea Ice Cream
What's the utility of asking ChatGPT about COVID? Many of the examples, especially the weird Jordan Peterson word counting thing, from the Altman interview seem like exactly the kinds of things this type of model isn't useful for. When I watched the interview I was impressed that Altman didn't lose his shit a few times over this type of question. Personally, I felt the interview demonstrated a very strong contrast between someone who consumes technology and someone who is actively involved in the business of producing it.
 

Guru Meditation

Traveler
What's the utility of asking ChatGPT about COVID?
It's useful to see if the designers have just keyword-blocked certain topics, like that Chinese AI has done with Xi. Asking a language model arithmetic questions seems like a gotcha.
No, it doesn't. At least nothing that has ever actually been made up to this point, and certainly not any of the GPT models.
I think even the GPT models produce a more balanced worldview than the current social media algorithms.
 

Collision

Green Tea Ice Cream
It's useful to see if the designers have just keyword-blocked certain topics.
Perhaps you can help me here. Do you think the answer ChatGPT gives you about COVID has any utility? I can understand why you, personally, might care if OpenAI refuses inputs with the phrase "COVID-19". I can understand how it might affect your opinion of them as a company. I cannot understand how this affects the usefulness of GPT-4 or, more generally, AI language models. I don't think there's much utility in treating these models as if they are some kind of oracle. Nor do I think there's any value in treating their responses as if they are the opinions of conscious people. At best, I think the response could be taken as a basis for a written summary of public information available in the training data for the model.

To me, it seems like GPT-4's real use case is one of the following:
  • As an interface to another piece of technology. For example, approximately translating written queries into input for a more traditional computer program and approximately translating the responses back into text for human consumption (rough sketch after this list).
  • As a system for producing generic but acceptably polite text for all manner of reasons.
  • As a sort of "AI buddy" that someone can talk to for entertainment or as part of a brainstorming process.
  • As a system for producing esoterica for a large RPG.
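To sketch what I mean by that first use case: the language model only translates the question into a query, and a traditional program (sqlite here) does the actual work. The schema, data, and prompt are all invented for illustration, and a real version would need to sanitize the model's output before executing it:

```python
# Sketch of the "interface to another piece of technology" use case:
# natural-language question -> SQL -> a traditional program (sqlite).
# Uses the same era openai package as above; schema and data are invented.
import sqlite3
import openai

openai.api_key = "sk-..."  # your key here

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("alice", 120.0), ("bob", 340.0), ("alice", 75.5)])

question = "What has each customer spent in total?"
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Translate the user's question into a single SQLite query "
                    "against: orders(customer TEXT, total REAL). Reply with SQL only."},
        {"role": "user", "content": question},
    ],
)
sql = resp["choices"][0]["message"]["content"]
print(db.execute(sql).fetchall())  # the traditional program does the real work
```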
Asking a language model arithmetic questions seems like a gotcha.
From my personal experience, I think it would do better on simple arithmetic than open-ended scientific questions. Of course, relative to a calculator I wouldn't rate it highly.
 

Guru Meditation

Traveler
Do you think the answer ChatGPT gives you about COVID has any utility?
Yes, but only in the sense that it is capable of writing a summary of the data in a way that is more fair and impartial than the media is likely to provide. I'm skeptical of Sam's claim that there's potential in these large language models becoming reasoning engines, and they're certainly not oracles.
Of course, relative to a calculator I wouldn't rate it highly.
GPT-4 interfaced with Wolfram Alpha would make for a powerful combination. Imagine being able to interrogate raw data sets using natural language.
 