How much steam does the current AI trend have?

Artificial general intelligence is the name we give to the idea of a computer program that can synthesize information intelligently for any application. In other words, it needs predictive models for every kind of stimulus, and it has to adapt to new stimuli by analyzing its existing knowledge and applying predictive logic freely. That last part is the key: how do you teach a computer to make decisions without telling it what to do? Computers rely on "if x, then y"; they need recognizable prompts. How exactly would you do this without it just being a predictive model? To be an AGI, the computer would need to know why x, and use that understanding together with its knowledge of the physical world to make a judgement. That is not possible now, and it might never be.

AIs right now still use basic if-x-then-y structuring, just very, very complicated. Modern AIs are predictive in nature and built on human knowledge, which means they cannot reason freely; they imitate what they predict to be the optimal answer, based on word patterns identified in the prompt. There is no "thinking." Because of this, they will always be limited and untrustworthy: they cannot build their own models to learn from when given new information that humans haven't already defined for them.

When humans learn, we build a model of the subject unconsciously, using symbols to remember its key parts. We use all of these models together in everyday life, with the relationships between them constantly shifting. AI lacks this fluidity, and as systems become more complicated, they become more rigid. They don't know how to shift their structures to rule out inconsistencies. They don't know the real world; all they know is what we tell them the real world is.

All of these problems might be solvable, or they might not. It sounds simple, but it isn't. An AGI would need to be able to answer every question a human could, drive a car, recognize colours and sound and time and so many other things. That is so far away right now it might as well be impossible.

I guess I should have specified that I don't see AGI coming anytime soon, but I'm not in the industry so I have no idea.

That reminds me of artificial sentience, which I think is even more impossible.

The idea is to have sentience somewhat similar to how a human experiences it. But how would we know we had achieved it? We would create a program that correctly answers all the questions we design to identify sentience, or that has cognitive structures imitating the human ones we think allow for sentience. Essentially we are creating something that will say "yes, I am sentient" in a million different ways. It could be sentient, or it could just be a language model that fools even us; there is no way of knowing. If it looks like a mind, talks like a mind, and seems to think like a mind, is it a mind?

Another thing: the human experience of sentience is connected to our existence as a biological animal. We have no idea to what extent our physical existence shapes our conception of what sentience is. The brain we think with moves our limbs, experiences the senses, and subconsciously processes information with a heavy bias towards social communication, reproduction, and survival. This is what our brain evolved to do over millions of years. How are we supposed to recreate it from the outside? We have no clue how the brain works; we have not mapped the human neuron structure, and we might never be able to. Modern neuroscience has come to the conclusion that the brain reuses its parts as much as possible to increase efficiency: each part of your brain is involved in many different tasks, in an insanely complex web of relationships. From a purely logical standpoint, it is probably necessary to recreate these exact relationships in order to create what we think of as sentience.

I don't know anything about computers btw this is just bullshit

I think your post is more accurately described as a solid intuition about the differences between man and machine. To some extent I suspect AIs will begin to convince us that they are sentient, or at least they will permeate the internet in such a way that distinguishing between real users and bots becomes increasingly difficult, if not impossible. Presumably, I could train an AI to make posts just like this one, let it loose on the road, and no one would know the difference. But then, isn't that still just me posting? It'll be hard to tell, but philosophy and law will need to find a way to accommodate that question and others like it. I also think this phenomenon will push people off the internet and back into the real world.

As for OP's question, I suspect AIs will make some tasks that were once cumbersome and frustrating extremely easy while leaving others completely untouched. Until an AI can wipe my ass, suck me off, and wash my dishes all at once without me realizing it was a computer program that did it, AGI will remain a dream.
 

yaz

Internet Refugee
Recent AI progress is not a fad like crypto or VR. It is a potentially serious danger to humanity.

The current form of AI, with incremental improvements:
- the internet is going to become even more of a bleak content grinder than it already is,
- artists are already getting even fewer commissions, because people can generate their content for free,
- jobs are going to be displaced; it will be difficult for anyone whose work is intellectual-property work,
- the job market will shift temporarily to people operating AI, and then AI will replace the operators too,
- value will flow towards the economic top percentile (even more than it does now), effectively widening the wealth gap even further

I am not sure the current economic system can sustain this, and worse, even that is of little concern to me compared to the other possibilities.

A singularity event (granular self-improvement) is not out of the question, and even if it is not the commonly understood AGI, it can be an existential threat to humanity.

Here's a little thing to show you how AGI gets created (with a model capable enough, or with current models and a stroke of bad luck).

Seeing the current ignorance on the topic, I am not too optimistic.

View: https://twitter.com/therecount/status/1641526864626720774


"the car's on fire and there's no driver at the wheel"

View: https://voca.ro/12XQxKDldNtF
 


alCannium27

Active Traveler
There are a couple of new GPT-4 research papers that are, IMO, interesting. We all know about the "Sparks of AGI" paper, which is fine. But the one showing that GPT-4 is now capable of reflective learning is truly, err... fascinating. Having a model capable of reflecting upon its results with minimal, indirect human input shows that, at the very least, it's capable of "learning" in a form that approaches human behavior. Think of it this way: if the machine can figure out the flaws in its own answers, what need is there for us to think? Just keep asking "is the answer you've given correct?" and it will continue to improve upon it.
The future is not ours, one cannot possibly be blind to that now.
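Stripped to its bones, that ask-and-revise loop is something like the sketch below. The chat() function here is a dummy stand-in for whatever chat-completion client you use; this is just the idea, not how the papers actually implement reflection.

```python
# Sketch of the "ask it to check itself" loop described above.
# chat() is a dummy stand-in for a real chat-completion client.

def chat(messages: list[dict]) -> str:
    # Replace this stub with a call to your chat-completion API of choice.
    return f"(model reply to {len(messages)} messages)"

def reflective_answer(question: str, rounds: int = 3) -> str:
    messages = [{"role": "user", "content": question}]
    answer = chat(messages)
    for _ in range(rounds):
        messages += [
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "Is the answer you've given correct? "
                                         "If not, give a corrected answer."},
        ]
        answer = chat(messages)
    return answer

print(reflective_answer("What is the capital of Australia?"))
```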
 

PizzaW0lf

In search of something greater.
There's still plenty of room for improvement

aigpt.JPG
 

Collision

Green Tea Ice Cream
Is it justified or am I too ignorant to notice the warning signs?
Here's my advice on the matter: look into Mr. Yudkowsky. What is his field of study? What are his credentials? What scholarly publications has he written for? Who is behind those publications? When you can start answering these questions, I think, you will be able to tell if you're justified or not.
 

yaz

Internet Refugee
I see a lot of neuroticism surrounding the topic of AI. I think a lot of people are rationalizing themselves into hysteria, like in the link I've posted. Is it justified or am I too ignorant to notice the warning signs?
1680597287229.png

"simple" AI without iterative growth still has it's potential consequences

AI as a whole is the introduction of a new species into our human ecosystem. The people currently creating it don't precisely understand how it achieves what it achieves; all they can do is guess, or observe what it is already capable of doing.

The system we live in allows for that easily - "because if I don't do it, someone else will beat me to it" - and so it goes, ad infinitum, until the grave.

Exponential growth would mean that, to an AGI, we'd be less than ants are to us.


There's still plenty of room for improvement

View attachment 58642
just ask it to describe it as a GTA mission :Crash1:


Here's my advice on the matter: look into Mr. Yudkowsky. What is his field of study? What are his credentials? What scholarly publications has he written for? Who is behind those publications? When you can start answering these questions, I think, you will be able to tell if you're justified or not.
Yudkowsky is more of a backseat philosopher who had a little too much time on his hands to read sci-fi, but you don't need to listen to him, because if you care about publications, new ones are coming out that are slowly aligning with his points more and more.



It is dangerous enough to take a "better safe than sorry" stance. What comes out of it is something that will at least turn society on its head - not necessarily bad, but ask yourself how bad it can be if it's the corporations that are capable of doing the most. At worst we're just a disposable boot loader for the god machine - something I've long thought is the "natural" path for society.
 


Taleisin

Lab-coat Illuminatus

View: https://youtu.be/cP5zGh2fui0

multimodality and complex pattern recognition + self-reflection + stable self/environment contextual modelling = AGI
read the Nature paper on the default mode network from my linkdump.
 

Guru Meditation

Traveler
look into Mr. Yudkowsky. What is his field of study? What are his credentials?

Listening to some of Yudkowsky and others, I reckon they're paid by Microsoft to generate hype through fear. How many people signed up for Bing imagining a surreal conversation with a Wintermute?

GPT-4 is an amazing tool. But LLMs are fundamentally driven by the thousands of man-years of tedious and continuous human training. Stop that training and the model degrades quickly.
 
I asked Bing AI about "what makes us long for the 95-07 era" - it wrote me what it should, but it was like cotton candy/peanut butter - a few points, a ramble, then nothing... - https://www.tumblr.com/nutzo0001/714338846279680000?source=share (reposted on tumblr, by me)

TXT-ONLY:

Bing AI BlogPost​

*Write about* "1995-2007 futures are gone for us now"

The year is 2023 and we are living in a world that is radically different from what we imagined in the late 1990s and early 2000s. The internet boom, the dot-com bubble, the rise of social media, the advent of smartphones and tablets, the emergence of artificial intelligence and biotechnology, the globalization and digitalization of everything - all these trends have shaped our lives in ways that we could not have predicted or prepared for.
But what about the futures that we dreamed of back then? The futures that were promised to us by science fiction, by visionary thinkers, by optimistic entrepreneurs? The futures that we hoped for, that we worked for, that we invested in? Where are they now?
In this blog post, I will explore some of the futures that we lost in the past two decades and why they did not materialize. I will also reflect on what we can learn from them and how we can create new futures that are more realistic, more sustainable and more desirable.

The Future of Work​

One of the futures that we envisioned in the late 1990s and early 2000s was the future of work. We imagined that the internet would enable us to work from anywhere, anytime, with anyone. We imagined that we would have more flexibility, more autonomy, more creativity and more productivity. We imagined that we would be able to pursue our passions, our talents and our interests without being constrained by geography, time zones or corporate hierarchies.
But what happened instead? The internet did not liberate us from work; it intensified it. It made us more connected, more accessible, more accountable and more competitive. It blurred the boundaries between work and life, between professional and personal, between public and private. It created new forms of stress, anxiety, burnout and isolation. It also created new forms of inequality, exploitation and precarity. It disrupted many industries, sectors and professions, displacing millions of workers and creating new skills gaps and labor shortages.
The future of work that we dreamed of is gone for us now. We need to rethink how we work in the digital age, how we balance our needs and aspirations with our responsibilities and obligations, how we collaborate and communicate with our colleagues and clients, how we learn and adapt to changing demands and opportunities.

The Future of Education​

Another future that we envisioned in the late 1990s and early 2000s was the future of education. We imagined that the internet would enable us to access unlimited information, knowledge and wisdom from anywhere, anytime, with anyone. We imagined that we would have more choice, more diversity, more quality and more affordability. We imagined that we would be able to learn anything, anytime, anywhere, at our own pace and style.
But what happened instead? The internet did not democratize education; it commodified it. It made us more dependent on standardized tests, rankings, credentials and certificates. It made us more vulnerable to misinformation, disinformation and propaganda. It made us more passive consumers of content rather than active creators of meaning. It also made us more isolated learners rather than engaged participants in communities of practice.
The future of education that we dreamed of is gone for us now. We need to rethink how we learn in the digital age, how we curate our sources of information, how we evaluate our sources of knowledge, how we apply our sources of wisdom, how we share our sources of inspiration.
 

Regal

Well-Known Traveler
In the early 1990s, I was tangentially involved in some "virtual reality" work. In the 1990s, VR involved a lot of wires, uncomfortable head gear, and didn't produce particularly compelling experiences. Nonetheless, all kinds of people were interested in these products in both the private and public sectors. Of course the technology didn't really do what anyone wanted and it never went anywhere. Two decades later (i.e., 2012), when 1990s computer dude John Carmack became involved with Oculus all of this started up again. I was, for my own personal reasons, very skeptical of VR products in general but at the time it was hard to argue about. Surely, if John Carmack was involved then VR was finally going to come of age.

This was the mantra for just short of a decade (i.e., from 2012 to around 2020). I was told, repeatedly, by close personal friends that VR was a life changing experience. I was told, I absolutely had to get an Oculus Rift and see for myself because it was so profound. In the future, I was told, we would forego all non-VR communication and entertainment. VR was just that much better. Whenever someone would bring this up to me, I would say something like, "what's the most compelling experience you've had in VR?" The answers varied but it would usually be something along the lines of, "I played golf and it was just like being on a real course!" This type of conversation didn't leave my life until Facebook, then owner of Oculus, began to exert its influence more obviously in the VR space.

When Facebook became Meta, a name change that caused me great amusement, a lot of VR conversations shifted to anti-hype. All of the sudden I was being told that, in the grim darkness of the future there would only be the Metaverse. Mark Zuckerberg would trap us all in VR pods. Everything would be an NFT. I had read Snow Crash, several times actually, and kind of shrugged it off. I said to people, "it's just going to be VR Second-Life." Of course, I was told that I was wrong. We were all doomed. The Metaverse would take off and we would all be living in Neal Stephenson's, "libertarian VR hellscape," by 2024.

By the time the anti-hype started, John Carmack was already kind of out at Oculus. He had gone from being the CTO to being merely Consulting CTO in 2019. Allegedly, he wanted to begin exploring the development of AGI products. To me, this meant that Carmack was out the door at Meta (then Facebook) already. I chose to read his new title as an expression of Meta's inability to support their pivot to VR without Carmack to give it legitimacy.

In August of 2022, John Carmack gave a five-hour interview on the Lex Fridman podcast. I don't expect most people have listened to it, but because I'm quite obviously obsessed with Carmack, I did. Most of the interview isn't interesting, unless you want to hear Lex's embarrassing views on programming, but Carmack does discuss some interesting internal details of Meta's operation. Most interesting, to me, was a short discussion of Meta's policy for hiring software engineers. Carmack says, "I remember talking with [Bosworth] at Facebook about it. Like, 'man I wish we could have just said we're only hiring C++ programmers.' And he just thought, from the Facebook/Meta perspective, we just wouldn't be able to find enough. You know, with the thousands of programmers they've got there it's not necessarily a dying breed but you can sure find a lot more Java or Javascript programmers. I kind of mentioned that to Elon, one time, and he was kind of flabbergasted about that. It's like, 'well you just go out and you find those programmers and you don't hire the other programmers that don't do the languages that you want to use.'" Carmack was also asked about the most compelling VR experience. His answer was Beat Saber.

In December of 2022, only a few months after his interview with Fridman, Carmack formally resigned his position as Consulting CTO. At the end of his decade working on VR products, Carmack wrote, "the issue is our efficiency. [...] We have a ridiculous amount of people and resources, but we constantly self-sabotage and squander effort. There is no way to sugar coat this; I think our organization is operating at half the effectiveness that would make me happy." With his resignation, it seemed, a decade of growing VR hype and anti-hype almost immediately deflated. A friend came to me and said, "you know I bought an Oculus? I played with the thing once and haven't used it since." Carmack's new company, Keen Technologies, is an AI company. He's gotten somewhere in the realm of $20 million to build an AGI.

This has been a pretty long story but here's the point: almost everyone is a know-nothing. Doubly so when it involves any emerging high technology. Most people don't even read the books they call back to. Those that do read almost assuredly don't understand. Instead, people get swept up in the excitement and presentation of the idea. VR isn't a head-mounted display. It's the holodeck or the Matrix. AI isn't chatbots and heuristic photo filters. AI is HAL 9000 and Hatsune Miku. Whenever you strip away the fiction and look at the reality of these technologies, you will find that most people, even the so-called experts, are just iconolaters.

I really appreciate your unique experiences and insight. 5+ years ago I briefly ran a podcast and YouTube channel dedicated to VR/AR. I took a Unity design course. I was going to VR conventions. I was one of the first people outside of Microsoft to play with the HoloLens. I was super bought in that this was the future - so bought in that I felt I was on the ground floor of the next big thing. I was even seriously evaluating opening a VRcade. But over time it just got dull. Spending 10 minutes prepping my headset and workspace and getting Steam to work properly every time I wanted to enter VR was so painful. Dealing with sweat on the headset after an hour of play was also a deterrent. Only having a handful of good games/experiences to queue up also killed it. Overall the industry agreed that VR kinda sucks in practice, or that we just aren't there yet technologically.

To answer your question, my most compelling experience was VRChat. So many quirky experiences. I once went to a virtual club and have never had so much fun just dancing alone in my apartment. I knew when Meta announced their metaverse that it was going to be terrible. Little did I know it was going to be worse than VRChat.

Despite all that, I am optimistic about AI. AI is useful today. I don't have to do anything special to use it. I don't have to wait for devs to create new products for it. I don't have to wait for other people to see the value in it. AI is here to stay. The problem is that AI is way scarier than the metaverse ever will be.
 

RisingThumb

Imaginary manifestation of fun
Website
risingthumb.xyz
I've been playing around with it, and ChatGPT has seemed to me remarkably powerful. It still needs people to direct it, because it's still very dumb, but as a huge productivity tool its benefits can't be denied. It's really useful for generating scripts to automate a lot of processes, cutting out a lot of the boring work. In the last couple of days using ChatGPT I have used it to:
  • Find out what changes I needed to make to a codebase, following a major version dependency update.
  • Find out what git commands I need to use, to branch off of a different remote origin, pull it down locally to make changes, and push those changes back up(kinda trivial, but not something I do often, like once every 6 months or so)
  • Generate a script for manipulating audio and video files in a way I wanted to without faffing about in a stupid nonsense video editor
  • Generate a script for creating thumbnail images given template files and a font file, adding in any stylistic changes I wanted, so I can mass-print images (still need to include Stable Diffusion parts for generating the true background from a prompt) instead of faffing about in a graphics editing program - a stripped-down sketch of the idea is below the list
  • Get from >redditcostanzayeahrightsmirk, the top trending and engaged topics relating to some topic(philosophy) and create an interesting essay on the topic, providing me a choice of descriptions off of that essay, a choice of "clickbait" titles, and a choice of "clickbait" thumbnail texts(all using revChatGPT, so I don't have to pay for API use). I have also used it to create a suitable prompt from that essay that I plan to feed into StableDiffusion to then get the background. I'll still need to get around to using the 11.ai api to connect it all together though. This also works for twitter, but I found out they changed their API so you can't pull tweets unless you pay, and the bot's information on scraping from it assumes the same HTML layout from 2021. I could probably give it the HTML for the current layout and ask it how to do it, given that XML structure now that I think about it.
  • I have also used it for simulating dialogues, and providing a bunch of example responses(Fallout Style dialogue trees)
  • EDIT: I tried it out for making a LaTeX template with aesthetic flourishes in the old style of books and got this result
1681263022271.png

Danke Schoen ChatGPT :D
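To give an idea of the thumbnail script mentioned above, here's a stripped-down sketch using Pillow. The paths, font and titles are placeholders, not my actual project files.

```python
# Stripped-down sketch of the thumbnail script mentioned above (Pillow-based).
# Template path, font path and titles are placeholders.
from PIL import Image, ImageDraw, ImageFont

def make_thumbnail(template_path: str, font_path: str, title: str, out_path: str) -> None:
    image = Image.open(template_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    font = ImageFont.truetype(font_path, size=72)
    # Centre the title horizontally near the top of the template.
    text_width = draw.textlength(title, font=font)
    x = int((image.width - text_width) // 2)
    draw.text((x, 40), title, font=font, fill="white",
              stroke_width=3, stroke_fill="black")
    image.save(out_path)

# Batch over a list of titles to mass-produce thumbnails.
for i, title in enumerate(["Why Plato Still Matters", "Stoicism in 10 Minutes"]):
    make_thumbnail("template.png", "OldStyle.ttf", title, f"thumb_{i}.png")
```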

And talking with a friend, he has used ChatGPT for simulating combat in his TTRPG system, as well as for finding holes in the linguistic description of his system. I'm personally considering seeing how I can use it in coordination with LaTeX to typeset books in that delightful and excellent old-school Folio style, as well as finding old out-of-print textbooks that are public domain that the world could benefit from seeing back in print. If you want approaches to a problem, particularly mathematical ones, it can help you with that, and it can provide samples and describe how they work (at different levels of literacy if you want). A good example of this is arbitrage betting (not too common to find, but with AI you can cheat this out until you get b& by the various online betting places). You can also probably use it for automated daytrading, asking it what AI techniques to use, where to get data, where to go for no-fee trading, etc. Another potential use is in dating apps... I think some people have used it for this already; I might give that a try myself and see what it accomplishes.
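The arbitrage check itself is simple enough to sketch (illustrative numbers only, not something ChatGPT produced for me):

```python
# Toy arbitrage check: if the implied probabilities of all outcomes
# (1/odds, in decimal odds) sum to less than 1 across bookmakers,
# a guaranteed profit exists. Illustrative numbers only.

def arbitrage_stakes(odds: list[float], bankroll: float) -> list[float] | None:
    implied = [1 / o for o in odds]
    total = sum(implied)
    if total >= 1:
        return None  # no arbitrage opportunity
    # Stake each outcome in proportion to its implied probability.
    return [bankroll * p / total for p in implied]

stakes = arbitrage_stakes([2.10, 2.05], bankroll=100)
print(stakes)  # ~ [49.4, 50.6]; either outcome pays back ~103.7
```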

As I see it, ChatGPT is really just a tool that massively augments your own abilities and skills, and lets you have 10x more productive output on menial and semi-menial tasks. You still have to figure out how to glue all its outputs together in a sensible way. I think this holds true for a lot of AI: as currently used it's just a tool, and the use-cases of that tool haven't been fully explored. It's trivial to say "content generation," but because it will end up producing a lot of slop, it will then become a game of producing meaningful content if it's to be put on the internet. It also means creating many passive revenue streams will be much easier, to augment your full salary for whatever job you work, and because of its efficiency you can probably work double or triple jobs (moonlighting) if you're skilled. This content generation will apply to music creation, art creation, program creation and maintenance, finding out information from documentation that makes no sense, and making information that's inaccessible because of literature, notation or language far more understandable. I see it being very useful in games development too: with the right combination of 11.ai, ChatGPT and their ilk you can probably make non-trivial and rather interesting quest and dialogue systems. Most indie games currently have dialogue that isn't spoken, so this alone is a big deal for creating immersive experiences. Animations and 3D modelling are two areas I haven't seen conquered by AI, but a coworker showed me Spline AI, which I've joined the waitlist for, since it seems to promise prompt-based 3D modelling and texturing. I haven't seen any great music AI yet, but it's only a matter of time, as music itself is its own language and GPT is just a generative pre-trained transformer... As for content-generation woes on the internet: scammers are gonna take advantage of it, and some people will take advantage of it to spread meaningless messages (memes), misinformation, lies and slander, while others use it to spread a positive or educational message. It is gonna be really bad news where scammers and phishers are concerned, as it means their bait will follow natural language correctly and make sense.

The "Capital" or Wealth-gap that a lot of people are observing resulting from this will be better described in my opinion as a class of people below the programming API and a class of people above the programming API. Naturally it also presents problems for the prussian-style schooling we have around us, as those focus on cramming knowledge and not about understanding, but now gauging understanding in open-ended essays is also difficult. It absolutely is something big and unprecedented, and is a massive equaliser in information or knowledge-based job sectors. It's exactly as Bill Gates says, it's not been used to its full effects, and all the low-hanging fruit is being picked and is ripe for the picking. If you're of an entrepreneurial mindset, it's not something to lag behind- especially given it's free to use and the only cost is your time. Even artists are using it to augment their art as they provide a sketch as a basis, pass it into AI Art, and then post-process the image cleaning it up, and removing and snipping away the nonsense of it.

To describe ChatGPT in one short phrase: is this what steroids are like?
 

Andy Kaufman

i know
A friend of mine who's also a software dev is now legally allowed to use ChatGPT while coding and said that he uses it several times a day to create arbitrary functions. Something like
"give me a method that returns the highest value of this vector as a String".
It's basically smart autocompletion.
Ofc it doesn't really do any logic, but it saves him a lot of time googling for the right Stack Exchange thread.
I was surprised how fast it caught on in people's everyday life. To be fair, he works at a small startup company so they're quick to adopt new tech.
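For a request like that, the generated helper is usually only a few lines. Here's a sketch of the idea in Python (the quoted prompt sounds like it was aimed at Java or C++, and this is not his actual code):

```python
# Sketch of the kind of one-off helper described above (illustrative only):
# return the highest value of a list as a string.

def max_as_string(values: list[float]) -> str:
    if not values:
        raise ValueError("empty vector")
    return str(max(values))

print(max_as_string([3.2, 7.5, 1.0]))  # "7.5"
```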
 

Regal

Well-Known Traveler
A friend of mine who's also a software dev is now legally allowed to use ChatGPT while coding and said that he uses it several times a day to create arbitrary functions. [...]

Yep. I use it for PowerShell. It isn't very good at nuance or knowing what to do under non-typical circumstances, but it works great for saving me time here and there. Prompting it to write a script to disable someone's account is sometimes faster than googling or scrolling through my hundred internal scripts.
 

yaz

Internet Refugee
I've been playing around with it, and ChatGPT has seemed to me remarkably powerful. [...]

And ChatGPT is using the far less powerful model. I have been using the GPT-4 preview for a couple of weeks now, and it is at a level that might make me fire my gamedev narrative writer.
 

SomaSpice

Sandwich Maker
So, seeing how the conversation has arrived onto the topic of task automation, lemme ask the following:

Is automation detrimental to skill acquisition?

I see RisingThumb quite happy that he doesn't have to spend time faffing about in stupid nonsense video and image editors, and I get it, shit's time-consuming. But can you really develop an eye for composition if you never manipulate the media with your own two hands?

I believe that having to consider every decision, and the punishing tedium involved in fucking up, forces one to become a better thinker; even more so, having to work around constraints breeds creativity, efficiency, and cleverness. Simply put, the more analog the medium, the better a teacher it is.

What do you guys think?
 

Regal

Well-Known Traveler
Is automation detrimental to skill acquisition?

Yes and no. If you already have that particular skill, then there isn't a reason to spend time on that task. Also, sometimes getting the work done is a higher priority than skill acquisition.

The more interesting problem to me is the rise of fake experts: people who have social-engineered their way into positions but have no skills or knowledge in their heads, people totally reliant on AI. This will be a problem in business, but the internet specifically will be way more impacted.