How much steam does the current AI trend have?

Collision

Green Tea Ice Cream
Is automation detrimental to skill acquisition?
Sure, this seems pretty obvious to me. If you use a machine to do something, you don't learn to do it without the machine. Anyone who has taken a pre-algebra class has probably been told this a hundred times.
I see RisingThumb quite happy that he doesn't have to spend time faffing about in stupid nonsense video and image editors, and I get it, shit's time-consuming. But can you really develop an eye for composition if you never manipulate the media with your own two hands?

I believe that having to consider every decision, and the punishing tedium involved with fucking up, forces one to become a better thinker; more so, having to work around constraints breeds creativity, efficiency, and cleverness. Simply put, the more analog the medium, the better teacher it is.

What do you guys think?
I think the real question here is this: what is more important, efficiency of output or quality of output? I tend to lean towards quality (i.e., craftsmanship) in general. On the other hand, many in this thread are clearly more on the side of efficiency (i.e., getting things done). For any given problem, I think, the optimal answer can be quite nuanced. There are many factors that contribute to what the right balance is. In my view, good engineering is always an exercise in line drawing.

Using @Andy Kaufman's example (modified for clarity):
Write a function that accepts, as input, a sequence of integer values and produces, as output, a textual representation of the maximum integer value in the sequence.
There are lots of ways to approach this, but almost everyone (including an AI) is going to produce approximately the same function. The common approach is fine (i.e., it requires linear time) and worrying about it any further is a time sink so, more often than not, that's where the reasoning ends. In practice, there are lots of other factors to consider here before I would be happy calling a solution optimal. How often do we need to use this function? What time constraints does it need to operate under? How large of a sequence does it need to process? What kind of hardware will execute the function? This list could go on and on, so we have to draw the line somewhere. The correct place to draw the line, insofar as there is ever a correct place, will depend on your goals.
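To make that concrete, here's a minimal sketch of the common approach in Python (the function name and the empty-sequence behaviour are my own choices; nothing in the problem statement specifies them):

[CODE=python]
def max_as_text(values):
    """Return the maximum integer in `values` as text.

    A single linear pass: O(n) time, O(1) extra space.
    """
    iterator = iter(values)
    try:
        best = next(iterator)
    except StopIteration:
        raise ValueError("sequence must contain at least one value")
    for value in iterator:
        if value > best:
            best = value
    return str(best)

print(max_as_text([3, 41, 7]))  # prints "41"
[/CODE]

In day-to-day Python you would just write str(max(values)); the point is that nearly every hand-rolled version, human- or AI-written, converges on this same linear scan, and whether it is "optimal" depends on exactly the questions above.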

The interesting question that arises from all of this is: for your goals and constraints, where should you draw the line? Your time is finite, so where is it best spent? If your goal is to master the craft of video editing, then perhaps it is actually more expedient to do everything by hand (so to speak). If your goal is to rotate a video by 90 degrees because your boss accidentally recorded the whole thing sideways and has now assigned you the task of fixing it, then maybe having an AI produce a script is more expedient.
 

SomaSpice

Sandwich Maker
On further thought, I think I phrased the question rather stupidly. It's obvious that automation means you're not practicing the task you're delegating. I should instead have asked: how important is it that we eschew efficiency to some degree for the sake of skill acquisition, specifically in high-level work?

Say that in the near future almost all game designers delegate concept-art production to AI. If no one engages in concept art beyond vetoing automated results, could it be that the humans at the helm begin to choose poorly from, and make poor use of, the resources the machine gives them? Is there a line where automation becomes deleterious?
 
I think the real question here is this: what is more important, efficiency of output or quality of output? […]
Equality vs. equity: there is a difference between expecting the same starting conditions and expecting the same end conditions.
Some I've heard would say the second one is communistic :)...
 
Is there a line where automation becomes deleterious?
In sci-fi: Dune (I've only watched a review), HAL 9000, the Futurama sequence about an inventor so lazy he made a bot to do all his tasks, and it got all the praise for everything, even the awards and his child's paternal love... and then there's Asimov's Zeroth Law: humanity hurts itself, so the robots have to stop it, as it is not right to let humanity come to harm...
 

Andy Kaufman

i know
I think the real question here is this: what is more important, efficiency of output or quality of output? […]
For me personally it's another aspect of standing on the shoulders of giants.
The new generations will always need shortcuts to progress just as we used shortcuts our ancestors carved out for us. That means that our ancestors were better at some things than we are now and vice versa.
The boomer gen, for example, was better at mental arithmetic and memorisation, but now every kid carries Google and a calculator around in their pocket.

With AI tools I think it will be a similar deal. They will make it possible to jumpstart certain research, because up-and-coming experts can dedicate their entire time to the frontier and just do a quick recap of what came before. Of course, we will still need computer scientists (in this example) to produce new input for the AI, and my prediction is that quality assurance will always (or for a long time to come) remain manual. That is where the absolute nerds will sit, the ones deep into the subject matter, way deeper than the software dev who just used AI to translate real-world requirements into code.
 

RisingThumb

Imaginary manifestation of fun
Is automation detrimental to skill acquisition?
Yes, but consider that the skills it automates are typically tedious and not especially useful skills. Would you consider the skills of 19th- and 20th-century accountants to be useful today? Probably not.
If you already have that particular skill then there isn't a reason to spend time on that task.
Except developing and refining your skills to a higher degree. It's all well and good to say you are skilled in programming, but to say you are skilled in programming X, for some particular X, is quite different. I may be a skilled programmer, but I am absolutely not a skilled C++ programmer.
The more interesting problem to me is the rise of fake experts: people who have socially engineered their way into positions but have no skills or knowledge in their heads, people totally reliant on AI. This will be a problem in business, but the internet specifically will be impacted far more.
Is this necessarily bad? Let us take IT support as an example. Quite often their work is just to be that fake expert, feeding your customer's question into Google, Bard, ChatGPT, or whatever knowledge base you use, in order to give them a reasonable answer. Those people totally reliant on AI basically act as an intermediary between the tool and those who are somehow incapable of using it themselves.
 

Regal

Well-Known Traveler
Is this necessarily bad? Let us take IT support as an example. […]

It kinda depends on what level we're talking about here, and as you noted, this isn't an AI-exclusive problem. I was zeroing in on "experts." I don't expect some help desk tech (generally people early in their careers) to know much. But if you're some engineer making $150k+/year, you should be able to talk things out and offer some kind of expertise without relying on Google/AI. When you get someone in a meeting who doesn't actually know what they're talking about, it becomes obvious. It is like hiring a mathematician who needs a calculator for basic math.

Does it actually matter if someone knows what they are talking about as long as the work is done successfully? Eh, I guess not. At what point, though, is the AI just running the business entirely? Maybe my concern is more philosophical: people becoming AI/Google talking heads instead of thinking for themselves.
 

RisingThumb

Imaginary manifestation of fun
It is like hiring a mathematician who needs a calculator for basic math.
This isn't the same thing at all! I can't really do division or multiplication of seriously unusual two-digit numbers in my head, or without some working-out on paper. As for why it's not the same thing: the latter is computing results, while the former is... well, a lot more: set theory, complex numbers, propositional and predicate logic, calculus, proof techniques, series, etc. Despite not being able to do 13 × 26 entirely in my head, I know about what you'd expect a first- or second-year undergraduate mathematician to know.

It's like saying a plumber should be able to plumb pipes without a wrench. A calculator is just a tool.
Does it actually matter if someone knows what they are talking about as long as the work is done successfully? Eh, I guess not. At what point, though, is the AI just running the business entirely?
You still need someone to push the buttons, check and double-check the results, etc. It's like airplane flights: most pilots don't need to do much to land a plane thanks to the onboard flight automation, but if that breaks, they still need to be able to kick things back into gear. As for an AI running the business, that's fine. A lot of Google's business is run on AIs, and there's always a better AI.
Maybe my concern is more philosophical: people becoming AI/Google talking heads instead of thinking for themselves.
This is a fair concern. Setting aside AI/Google talking heads as the new frontier, do you think people currently think for themselves, for the most part?
 

alCannium27

Active Traveler
Food for thought: GANs show that machines no longer need human supervision for training. But did we really use "human supervision" before GANs in the first place? NLP tokenizers, sure; but nobody goes through petabytes of text or image files to label them personally. At best, the engineers scraped web data and used its metadata as labels; who knows if an article with the title "how to raise a child in rural America" and the tags "education" and "family" is actually about those things? We write programs to automate processes we are already capable of doing, just more slowly and at greater cost, like summing 10,000 numbers or landing a plane based on various sensor readings and visual data; we just don't see why we should do it personally, 1,000 times, every single time.
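To make the "no human labels" point concrete: in a GAN, the supervision signal is generated by the training setup itself. A deliberately toy sketch in Python (the one-line "networks" are my own stand-ins, not a real GAN):

[CODE=python]
import math
import random

def bce(p, target):
    """Binary cross-entropy for a single probability."""
    p = min(max(p, 1e-7), 1.0 - 1e-7)  # clamp to avoid log(0)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

# Stand-ins for real neural networks, just to make the sketch executable.
generator = lambda z: 2.0 * z                     # maps noise to a "sample"
discriminator = lambda x: 1 / (1 + math.exp(-x))  # outputs P(sample is real)

real_sample = 1.5
fake_sample = generator(random.gauss(0.0, 1.0))

# The labels (1 = real, 0 = fake) come from the setup, not an annotator:
d_loss = bce(discriminator(real_sample), 1.0) + bce(discriminator(fake_sample), 0.0)
g_loss = bce(discriminator(fake_sample), 1.0)  # generator wants fakes judged real
print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
[/CODE]

No human ever labels anything here: real samples are 1 and generated ones 0 by construction, and the discriminator's verdicts are the only feedback the generator gets.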
Sure, on a personal level we will lose certain skills. But just as there are no more "Renaissance men", since each field now demands a significant fraction of a lifetime of study, and since each field is certain to become more complex as time goes on, it'd be impossible to imagine a future where any one person can master even a single primary field of study. In fact, I'd argue the tools we have are what help humans remain generalists, since the need to specialize has been handed off to those tools.
Maybe instead of regarding AI as "they took our jobs!", think of it as a means for humanity to regain mastery of knowledge.
 

Shantotto

TTD Militia
Absolutely! Technology trends always play out this way. Most people lack even a rudimentary understanding of the technology they're getting hyped over. Almost anyone who is getting hyped about GPT-4 (or, more likely, about chatbots based on GPT-4) is imagining that it's a type of science-fiction character: in their minds it's not a probabilistic model of language, it's HAL 9000, Data, or Cortana. When the illusion breaks for enough of these people, the bubble will burst. The current technology simply isn't congruent with what people would like to imagine it is. I wonder if people acted the same way about transistors in the 1960s. I wouldn't be surprised.

In my mind, the current trend suggests that we wouldn't recognize AGI even if it came and annihilated us with plasma weapons. Our definition of intelligence is far too loose, and our ability to measure it intuitively is extremely poor.
Completely agree.

Especially in the case of ChatGPT: I was experimenting with GPT-3 1.5-2 years ago, telling my friends they could have an AI write their essays for them, but they didn't believe me and it never caught on. This AI explosion, in my opinion, is only happening because AI models are becoming super accessible to the masses through intuitive UIs and easy access to generation, and the hype is leading to an influx of apps and companies packaging already-existing tech into shiny new boxes. I go to school now and the hot cheeto girls are talking about having ChatGPT do their homework. A year ago I'd mention GPT-3 and people would roll their eyes because I was speaking some "computer science lingo".
Sam Altman, the CEO of OpenAI, openly says this tech is not new. Pre-trained transformers have been around since 2017 or so, and GPT-3 has been available since 2020. But back then you needed to sign up for the waitlist, there was a token credit quota, and there were a bunch of funky sliders for "temperature", "stability", and other obscure hyperparameters to control the output of the model.
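For anyone who never saw that era, here is roughly what the old workflow looked like; a sketch assuming the pre-1.0 openai Python package, with an illustrative model name and parameter values ("stability" as remembered above isn't a parameter I can vouch for, so it's left out):

[CODE=python]
import openai  # the old (pre-1.0) openai package with the Completion endpoint

openai.api_key = "sk-..."  # your API key from the waitlist-gated account

# Every "slider" from the old playground is an explicit hyperparameter here.
response = openai.Completion.create(
    model="text-davinci-003",  # illustrative model name
    prompt="Write a short essay introduction about automation and skill.",
    temperature=0.7,           # randomness of sampling
    top_p=1.0,                 # nucleus-sampling cutoff
    max_tokens=150,            # cap on output length (and token spend)
)
print(response["choices"][0]["text"])
[/CODE]

ChatGPT's real innovation, as described next, was mostly to hide all of those knobs behind a single text box.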

I remember him saying they did two things to create the groundbreaking face of generative AI known as ChatGPT:
1. Replace all the sliders and settings with a single input prompt.
2. Optimize their pre-existing GPT-3.5 model for dialogue (generate two outputs and have humans decide which output fits a dialogue format), and out came this model that "feels like it's trying to help you".
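That second step is, at its core, preference tuning: collect human judgements between pairs of outputs and push the model's scoring toward the preferred one. A toy illustration of the pairwise loss involved (my own sketch, not OpenAI's actual training code):

[CODE=python]
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(score_chosen, score_rejected):
    """Pairwise (Bradley-Terry style) loss: small when the model already
    scores the human-preferred output higher than the rejected one."""
    return -math.log(sigmoid(score_chosen - score_rejected))

# Two candidate replies were scored; a human judged the first more helpful.
print(f"{preference_loss(score_chosen=1.3, score_rejected=0.4):.3f}")  # low
print(f"{preference_loss(score_chosen=0.4, score_rejected=1.3):.3f}")  # high
[/CODE]

Scaled up over a large pile of human comparisons, that pressure is what yields a model that "feels like it's trying to help you".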


https://www.youtube.com/watch?v=L_Guz73e6fw


It truly feels like a virtual assistant from sci-fi, and it's astounding how effective it has become at that, simply from tuning with a relatively small amount of additional human input to align it.
They made AI accessible and applicable to the masses in the way the smartphone supposedly did for the internet. No one wants to learn about something that isn't applicable to their life, but ChatGPT, by writing homework discussion posts and generic emails, has shown the average person how easily AI can benefit them in everyday life.
Tom Scott made a really great video about how these technology trends behave like sigmoid curves.
On another note, ChatGPT is truly mind-boggling in its "reasoning" (if you can call it that... idk, maybe the weights are implicitly learning rules for reasoning), but by far the best application I've seen is its ability to find information for you when you don't even know where to begin looking. The other day I remembered there was a gaming term for someone who is more concerned with hoarding loot in a video game than using it to fight or help out their team, but I couldn't quite remember the word, and went through several unsuccessful Google searches before deciding I would just ask ChatGPT.
[attached screenshot of the ChatGPT conversation]


Then I needed specific quotes from Hamlet to troll somebody with, based on given scenarios, and it spit them right out. DO YOU KNOW HOW MUCH TIME I COULD HAVE SAVED WRITING ESSAYS IN HIGH SCHOOL IF I HAD THIS?????

[attached screenshot of the Hamlet quotes]


I think the greatest value these large text models offer is their ability to act as a far more accessible interface for accessing knowledge. They are essentially databases you can speak to, as if you had a human encyclopedia or loremaster on speed dial at all times.
Makes me sad tho, because knowledge will become less esoteric as time goes on. But it's truly a game changer for how we will collect, store, and access humanity's knowledge, especially documentation, going forward.

But most people definitely overestimate what AI is as a concept. To them it's simply magic. I'm sure the novelty will wear off soon enough.
 

RisingThumb

Imaginary manifestation of fun
But if you're some engineer making $150k+/year, you should be able to talk things out and offer some kind of expertise without relying on Google/AI.
Out of curiosity, are you an American working for some of these companies making ~$150k/year, or is this salary number conjecture? As a British software developer I make around ~£35k/year before taxes, but I live in the North East of England, so the cost of living isn't that high. Just curious. Also curious whether any of these American companies offer remote work at the same pay to European countries, because it's absurd to me that Americans make so much more simply on account of their cost of living being so much higher. I know six figures is often mentioned in the /twg/ thread on /g/, but I've always thought of it as a Silicon Valley salary.
 

Regal

Well-Known Traveler
It's like saying a plumber should be able to plumb pipes without a wrench. A calculator is just a tool.

Maybe the mathematician example was bad, but that's not quite what I meant either. I would expect an expert plumber to have the concepts of hydraulics down mentally and use Google/AI to fill in the gaps. My fear for the future (in this metaphor, anyway) is "expert" plumbers who are totally reliant on AR glasses or something showing them how to do everything and feeding them all information, so that they never have to know anything or think about anything. Hopefully that's more clear.
This is a fair concern. Setting aside AI/Google talking heads as the new frontier, do you think people currently think for themselves, for the most part?

In general, yes, I think people think for themselves without technical augmentation. For their job, anyway. Obviously if you're a mindless drone pressing a button on an assembly line or something then you're the exception, but again, I'm generally focusing on expert-level roles.

Out of curiosity, are you an American working for some of these companies making ~$150k/year, or is this salary number conjecture? […]

Yep, a US-based Systems Engineer with a focus on the cloud. I live in one of the most expensive cities in the country (and the world). For an easy example, the rent for my 2-bedroom apartment is $3,200/mo. Groceries generally cost $500/mo. Washer/dryers aren't common here, so I have to pay a company to do my laundry. I don't/can't have a car, so I have to pay for public transit whenever I want to go anywhere. Electricity is high in this state versus other places I've lived. Shit adds up quick. If I didn't have a girlfriend, I wouldn't be able to comfortably afford living here despite my six figures.

Remote work at that pay exists. I am remote and work for a company on the other side of the country. The challenge is that you have way more competition for those roles, since anyone can apply. I'm also not confident that the remote work industry is here to stay; a lot of companies are waging war against it, unfortunately.

My total comp is on the extreme low end for Silicon Valley. Every eng/dev I have met in Silicon Valley is making something like $350k/year (salary + stocks). In those cases the companies aren't compensating for a high cost of living; those are simply very high-paying jobs. A dev at Apple, Google, etc. is simply worth a lot to those companies, and (supposedly) they also pay high to have the best talent.
 


brentw

Well-Known Traveler
The current state of AI is (ironically) simultaneously MUCH dumber and further from AGI, AND way more fucking dangerous, than the average person realizes.
The real issue isn't what these glorified text-prediction algorithms will do by themselves.
It's what amoral people could do with them, and what governments can use them as an excuse for.
 

bnuungus

call me bun
The current state of AI is (ironically) simultaneously MUCH dumber and further from AGI, AND way more fucking dangerous, than the average person realizes. […]
ok so this has nothing to do with the thread, but when I joined the forum you seemed like such a cool dude who hadn't posted in a while, and I'm just glad to see you back
 

brentw

Well-Known Traveler
ok so this has nothing to do with the thread

:monkaHmm:

Ok, you got me. I've been drinking, and the thread seemed like a whole lot of boring words I just didn't feel like reading; I just wanted to say something about AI.
 

NSoph

The Singularity is Now
LLMs are used as general productivity software now, including by the people who work on improving AI.
AI technological progress is feeding AI technological progress.
We are already at the beginning of the singularity; it has been going fast, it is going faster, and it will not stop going faster.
As my bio says, the Singularity is NOW.
LLMs have gone multimodal with GPT-4 and PaLM-E; they use self-reflection and synthetic data produced by themselves for better output, and AIs are used in conjunction with one another to produce better output.
"AI accelerators" like photonic chips and TPUs are being R&D'ed, and research on better algorithms hasn't stopped.
It looks like we will become legacy systems this decade. I'm rather unsure whether humanity will see a world beyond 2050.
 

bnuungus

call me bun
Ok, you got me. […]
No, I was saying that my post had nothing to do with the thread. I was just glad that you returned to the forum bc I joined right after you stopped posting and you seemed like a pretty cool dude from the stuff I was able to read. This is just a long-winded welcome back, man. I hope you stay :melhpy:
 

Sketch Relics

Quiet Traveller
My guess is that the only industry that may be mostly displaced by current AI tech is animation, though probably not for a few years, and definitely not by any current AI image-generation program. You would basically need something trained on anatomy and programmed to take characters from detailed concept art and impose them onto rough planning drawings to produce high-detail animation frames. It's something I think would have to be purpose-built for the task, and it would cut the small armies of animators currently needed down to just a few.
 