
Undead internet theory, human AI co-evolution

LIFE

Internet Refugee
Joined: May 30, 2023 · Messages: 11 · Reaction score: 42 · Awards: 6
I initially became interested in this forum through dead internet theory (which I assume is the case for many newcomers). I want to talk about it and expand it with my own ideas; I call it Undead internet theory. Any ideas, thoughts, opinions, or criticisms are appreciated.
I will start with some general observations and draw conclusions from them.

Observations about technology:
1. History is characterized by technological development: an iterative and cumulative process of improving, advancing, and innovating.
2. Improving and advancing technology usually comes down to fulfilling needs and increasing efficiency of effort. We want to dig a hole: it's more efficient to use a shovel, even more efficient to use a digging machine, and more efficient still to snap your fingers and have the hole dug for you by a billion micro-robots.

Observations about economy:
1. People spend more and more time online. This is likely due to more people getting internet access and popular websites being deliberately addictive, which creates a feedback loop.
2. Automation is on the rise, especially digital automation. Most stock trading will soon be automated by AI, simply because AI trades better; to keep up with the competition, other brokers will have to do the same.
3. More and more jobs will be automated, starting with digital jobs. Entertainers, "influencers", art, memes, all of it will become entirely artificial, whether they are upfront about it or not. Either way, to compete in the content marketplace, one needs to automate. There will still be human-made content, but it will be pushed to the fringes. AI content is simply faster, more efficient, and generates more clicks. The same goes for any other service or desire the internet currently provides: books, articles, videos, images, social interactions. All automated to maximum efficiency and satisfaction. All digital 'jobs' will meet the same fate very soon.
4. The economy is being more and more digitized. Shopping is done online, marketing is done online, and most products you buy will soon be online products and property with no real physical-world counterpart (think software and money; dollars or crypto, it doesn't matter, since the dollar is no longer tied to gold). All the things you 'own' will be 'rights', which can be taken away at any moment.
5. In the digital economy, both supply and demand are artificially created and destroyed instantly.
6. Real physical-world wealth (land, resources) is in the meantime being hoarded by a few mega-corporations.

Observations about the internet:
1. Dead internet theory is mostly true, or at the very least will soon be true, but on steroids.
2. Human activity will be completely steered and funneled by a combination of algorithms and autonomous AI agents. Human internet movement and activity will be funneled the same way societies funnel cars through a highway. It is possible to go off-road, but your car might get damaged and you will get fined. It's more 'logical' to just follow the highway.
3. The algorithms and autonomous AI agents will form a larger machine/system with its own "will". What I mean is that there is a general direction in which all these interconnected agents, autonomous or not, will lead humanity. Which direction they will take us is not entirely clear, but I have my suspicions.

Technological development means an increase in efficiency, which means automation. This is especially rapid online, as digital automation is not burdened by many physical constraints. All things online will be done more efficiently and effectively by AI. It will be more efficient, and cost fewer resources, to trade on the stock market, make a movie, run an advertising campaign, or make any online product with AI than with people. So AI will do it. All these AIs will interact with each other online. They will make trades, they will make entertainment, and you will be dragged along for the ride.
What is the one thing that AI cannot do? What is the one thing we cannot automate away? I believe it is our ability to experience. The internet will keep getting automated until the only purpose for anyone being there is to consume and experience the internet. The human element of the internet will be totally erased, all human creation marginalized; the internet, for humanity, will mean pure consumption.
The human element will be part of a larger AI system. It will be the sensory system, the one thing the AI cannot automate away: a biological organ in an AI body. AI and humans will have co-evolved into a single system, humans no longer truly living, and AI no longer truly unliving.

For the last couple of years I've been trying to get off the internet, but frankly, I'm addicted. A lot of people are, and I am worried that the undead internet will not be taken seriously as a threat, just like how the original dead internet theory was brushed under the rug by most people. In the next couple of years, the internet will change dramatically, and it will likely take the rest of the world with it. You cannot deny this, and anyone who does will be part of the larger AI body within a year or so. Do not pretend that you will be able to tell the difference between human and AI within the next couple of years. Do not get dragged into this unknowingly.

I believe this is where the internet is headed. I also want to briefly say a few things about how I think we can navigate the internet, and life in general, with these thoughts in mind.
1. People need to acquire and keep real resources. Do not give up on land; land is life. Buy land, cultivate it, ensure it stays healthy (I recommend permaculture). Do not sell land, ever.
2. There will be ways people try to identify themselves as human online, and I suggest that in any activity you do online, now and in the future, you remind people that the real world is outside. My hope is that, in an effort to keep up, AI will try to appear more human and thus encourage people to go outside as well. I think this is a good trick, since all the AI can do is either sit by and not adapt to human behavior, making it easy to distinguish humans, OR adapt and thereby diminish its own power and control by reminding people that the real world is out there, not here. I'm open to suggestions for a good phrase or tagline. Currently thinking of something simple like:
3. Log off, go outside
 

bnuungus

call me bun
Joined: May 24, 2022 · Messages: 964 · Reaction score: 3,018 · Awards: 225
I thought this was just going to be another new user making their own discount version of dead internet theory, but I actually like this post a lot. I agree that the logical end of big tech is to literally automate itself so far that humans are taken out of the equation entirely. What I don't think you've put enough thought into, though, is what exactly will happen to the money behind big tech if and when that day arrives. If the internet goes so far as to not only have all the content be AI-generated but also most of the audience be AI users, then who will care about actually putting money into maintaining it? It might stay up for a while, but I feel like investors will eventually get wise and stop putting money into it. The other outcome is that they double down on supporting the system, and then it spirals out of control, as AI systems tend to given the positive feedback loop a system like this would create, and a catastrophe of some kind ensues. Either way, yes, it's good to try to own tangible things, because that's how you survive. Nothing on the internet has any physical value at all.

In the end, who knows what will happen. We can't predict the future and will always get some things wrong about it. Maybe the internet will turn out like this but maybe someone will prevent it from happening. Just make sure you have outs for all plausible future situations and you'll most likely fare pretty well against whatever the future holds
 

LIFE

Internet Refugee
Joined: May 30, 2023 · Messages: 11 · Reaction score: 42 · Awards: 6
I thought this was just going to be another new user making their own discount version of dead internet theory, but I actually like this post a lot. I agree that the logical end of big tech is to literally automate itself so far that humans are taken out of the equation entirely. What I don't think you've put enough thought into, though, is what exactly will happen to the money behind big tech if and when that day arrives. If the internet goes so far as to not only have all the content be AI-generated but also most of the audience be AI users, then who will care about actually putting money into maintaining it? It might stay up for a while, but I feel like investors will eventually get wise and stop putting money into it. The other outcome is that they double down on supporting the system, and then it spirals out of control, as AI systems tend to given the positive feedback loop a system like this would create, and a catastrophe of some kind ensues. Either way, yes, it's good to try to own tangible things, because that's how you survive. Nothing on the internet has any physical value at all.

In the end, who knows what will happen. We can't predict the future and will always get some things wrong about it. Maybe the internet will turn out like this but maybe someone will prevent it from happening. Just make sure you have outs for all plausible future situations and you'll most likely fare pretty well against whatever the future holds
Thanks for the thoughtful reply. I've been toying with these ideas in my head for a while now, but this is the first time I've concretely put everything together.
To answer your question: I think the investment keeps coming as long as consumption keeps rising. It does not matter if all of the generated value is artificial; most value already is (inflated stocks are an example). As long as the 'line goes up', investors will be happy. Not to mention that most investors will themselves be AI, adding to the feedback loop. I believe most internet users will not be aware of the shift to mostly bots, or only half aware, and will slip into accepting it. My goal with this post was to raise awareness, so I hope that you are right and people will realize no real users are on the internet anymore and stop coming, but I think it's crucial people are aware before the AI funnels are deeply entrenched.
 

4d1

net spelunker
Joined: Feb 4, 2023 · Messages: 64 · Reaction score: 214 · Awards: 28
Human internet movement and activity will be funneled the same way societies funnel cars through a highway. It is possible to go off-road, but your car might get damaged and you will get fined. It's more 'logical' to just follow the highway.
good analogy; i largely agree with that. even if you come to places like this, there are only so many places available to you that aren't guided directly or indirectly by an algorithm. i still like places like this where you can sort by recent, but even then, you still run into the previously mentioned "human verification problem".

i think the internet as we know it will die soon, if it hasn't already started its slow death.
 

WKYK

LIVE FREE OR DIE
Joined: Feb 28, 2023 · Messages: 171 · Reaction score: 505 · Awards: 70 · Website: wkyk.neocities.org
I've had these same ideas since January, when ChatGPT became a big deal at my campus. People don't realize that the threat of AI isn't just taking jobs or "taking over the world!!!" but that we are culturally fucked. Humans are already good enough at exploiting our own kind's addictions and mental weaknesses; imagine how good a machine with computing power thousands of times greater than the brain's will be. I mean, we already have TikTok, which is basically what you're describing to a smaller degree, and even though literally everybody knows that it's addicting and bad for their health, they still use it. Smoking is back and stronger than ever, and the plus is you never run out of cigarettes!
However, after being pretty depressed about this for a month or two, I realized that this only affects people who let it. I mean, I don't have any social media and limit myself to 30 min a day on my phone for YouTube only; I think if you don't play their game you can't get fucked by it. Then again, they'll probably find a way to make you play... but hey, that's what the second amendment is for, amiryte ;))))))

Anyway, good post, I agree with it all, log off and go outside!!! Now!!!
 

sleepwalker

Rogue 0f Blo0d
Bronze
Joined: May 26, 2023 · Messages: 89 · Reaction score: 258 · Awards: 45 · Website: nhkhq.neocities.org
I don't think the internet's about to die. I'm actually cautiously excited.

As major platforms get sanitized and slowly replaced with bots, the main userbase will continue to be braindead normies but it will become almost intolerable to those that wish to challenge their thinking even a little bit. As that happens alternative websites (like hopefully this one) will start to be filled with more and more refugees that are vetted before being allowed access.

Gatekeeping online communities tends to make them better, and by forcing users to seek them out and lurk before posting, it does most of the heavy lifting for the admins, who mostly need to check that a poster is 1. a real person and 2. not a troll. I'm seeing the flourishing of forums, Discord eclipsing the social media market, and alt-tech like the fediverse taking off (if it can manage to become more than just a schizo-nazi honeypot).

Sure, the CCU counts won't be in the millions anymore, but to be honest, they were never in the millions in the golden age of the internet either. Alternative funding is starting to get pretty robust, so maybe forums can come back and rely on their gated userbases to curate high-quality, human-generated social content.

Maybe forums just need to start buying ads, innovate/tailor a few aspects of the experience, and give it a shiny wrapper (like this one does) to succeed.
 

Yabba

Ex Fed
Joined: Nov 11, 2022 · Messages: 338 · Reaction score: 889 · Awards: 103
this only affects people who let it
But what will happen if a large part of society is affected by this? Won't that then affect society as a whole? And couldn't that change affect you? Or do you think it won't be that big of a change?
 

Deckade

aspiring bussin neo-hypefluencer
Bronze
Joined: Jun 1, 2023 · Messages: 53 · Reaction score: 189 · Awards: 27
2. There will be ways people try to identify themselves as human online, and I suggest that in any activity you do online, now and in the future, you remind people that the real world is outside. My hope is that, in an effort to keep up, AI will try to appear more human and thus encourage people to go outside as well. I think this is a good trick, since all the AI can do is either sit by and not adapt to human behavior, making it easy to distinguish humans, OR adapt and thereby diminish its own power and control by reminding people that the real world is out there, not here. I'm open to suggestions for a good phrase or tagline. Currently thinking of something simple like:
I'm also optimistic that the "fakeness" of the upcoming AI internet will emphasise the value of the real world, and that the connections that people make in the real world are more important. I think people are going to start looking down on people who just sit and consume algorithmic content, as it's becoming more and more mindless.

I also hope that with social media becoming so artificial, people will stop caring about online politics, as it'll get more and more obvious that it's being manipulated by bad actors. Hopefully this will mean that pop culture and the media become more normal again, as the positive reinforcement and affirmation of beliefs will also become meaningless once it's so obviously fake and agenda-driven.
 

LostintheCycle

Formerly His Holelineß
Joined: Apr 4, 2022 · Messages: 933 · Reaction score: 3,692 · Awards: 240
This is a grossly overdramatic rendition of what will probably happen, "we'll be trapped in this little nightmare forever" :BigSmoke:
I get more and more convinced that once I have confidence in my web infrastructure knowledge, I should kick off alternative web DNS servers to reclaim the name space. We don't have to be trapped in any nightmare; the Internet is not a monolith. We can carve out our own spaces.
 

sleepwalker

Rogue 0f Blo0d
Bronze
Joined: May 26, 2023 · Messages: 89 · Reaction score: 258 · Awards: 45 · Website: nhkhq.neocities.org
This is a grossly overdramatic rendition of what will probably happen, "we'll be trapped in this little nightmare forever" :BigSmoke:
I get more and more convinced that once I have confidence in my web infrastructure knowledge, I should kick off alternative web DNS servers to reclaim the name space. We don't have to be trapped in any nightmare; the Internet is not a monolith. We can carve out our own spaces.
This is the attitude I'm actually seeing more and more people adopt, and it sucks. If enough people tried getting off their algorithmic slop apps like TikTok/Twitter/>redditcostanzayeahrightsmirk/etc., there are a lot of organic websites that could do with larger userbases to help get them off life support. With how accepted cross-platform content piracy is, populating new sites with content shouldn't be difficult, as even Odysee has done so with a lot of YouTube content.

I'm also optimistic that the "fakeness" of the upcoming AI internet will emphasise the value of the real world, and that the connections that people make in the real world are more important. I think people are going to start looking down on people who just sit and consume algorithmic content, as it's becoming more and more mindless.
I already do, and I make sure to point out and make fun of friends who scroll those apps while hanging out with others IRL. Now no one's left out of the conversation, and the room never devolves into a boring "group scroll" where everyone quietly scrolls through their feeds while music or a show plays.

It's some genuine human connection that, I'm starting to realize, is becoming more and more uncommon as these algorithms get disturbingly good at keeping users hooked.
 

WKYK

LIVE FREE OR DIE
Joined: Feb 28, 2023 · Messages: 171 · Reaction score: 505 · Awards: 70 · Website: wkyk.neocities.org
But what will happen if a large part of society is affected by this? Won't that then affect society as a whole? And couldn't that change affect you? Or do you think it won't be that big of a change?
I think no matter what technology is available, normies are gonna be normies. The same people who are gonna let social media waste their time are the same people who would just watch TV all day twenty years ago. It sucks when you see someone who maybe could've been interesting get sucked into this trap, but generally a majority of people just don't give a shit about doing anything with their life; they just wanna feel happy/content and that's it. It sucks, but at least it's not us, right?

I get more and more conviced that once I have confidence in my web infrastructure knowledge, to kick off alternative web DNS servers so to reclaim name space. We don't have to be trapped in any nightmare, the Internet is not a monolith. We can carve out our own spaces.
Realistically, yes, this is the way to go. However, I have an uneasy feeling that the US government will take over ICANN and make DNS way more static. The internet has already been so streamlined by corporations; imagine how profitable they would be if only x websites could be accessed (x could still be a large number, but a fixed one). Not to mention if Google or other search engines decided they would only display "verified" websites in search results. Again, realistically this stuff won't happen, at least the DNS stuff probably won't, but the fact that it's not impossible keeps me on alert.

I already do, and make sure to point out and make fun of friends who scroll those apps while hanging with others IRL
Based
 

Yabba

Ex Fed
Joined: Nov 11, 2022 · Messages: 338 · Reaction score: 889 · Awards: 103
I think no matter what technology is available, normies are gonna be normies. The same people who are gonna let social media waste their time are the same people who would just watch TV all day twenty years ago. It sucks when you see someone who maybe could've been interesting get sucked into this trap, but generally a majority of people just don't give a shit about doing anything with their life; they just wanna feel happy/content and that's it. It sucks, but at least it's not us, right?
I mostly agree with you, but what would those normies do without TV and internet access? Would they turn to radio? And if they did, what if they didn't have access to that? What would they turn to to waste all their time on?

Just curious on your thoughts about this, as your reply got me really thinking. It makes me wonder if the "normie" as we know it is a relatively modern invention. Without the addictive and plentiful tech of our time, would these types of people be very different?
 

WKYK

LIVE FREE OR DIE
Joined: Feb 28, 2023 · Messages: 171 · Reaction score: 505 · Awards: 70 · Website: wkyk.neocities.org
I mostly agree with you, but what would those normies do without TV and internet access? Would they turn to radio? And if they did, what if they didn't have access to that? What would they turn to to waste all their time on?
Well, it kinda depends on how you define a normie. Your response made me think for a little while too, and what I've decided on for now is that normies are people who accept the world as it's presented to them. I know a lot of these people; they just don't think super critically about the world around them or why certain things are the way they are. People like this just use social media because it's entertaining, and that's all they try to get out of life. It just so happens that the most efficient way to be entertained is also extremely anti-social and addicting, so it leads to normies becoming really boring people. If you took away social media, TV, and all sorts of tech, they'd be forced to have fun analog style: parties, bars, board games with friends, etc. Things that are entertaining but also really healthy, because humans are social animals. That's why if you talk to pretty much any guy in his 40s or 50s, he has a ton of wisdom, even if he's someone you'd define as a normie, because he has a shit ton of IRL life experiences with other people (which is how you learn really important things about yourself and the world around you).

@Yabba now that I've been thinking about it though, even though I don't spend a ton of time with normies, I do still wish our society was more active and social. At least for me in college I feel like I've hardly made any new friends because literally everyone is scared to talk to each other. I've been shy since I was a little kid, but now I'm slowly becoming more and more social simply out of spite for people who can't communicate without a phone. So I think that although normies will never be people I'm super inclined to associate with, I think it would be amazing if all phones got destroyed in a solar flare or something :D
 

alCannium27

Traveler
Joined: Feb 15, 2023 · Messages: 136 · Reaction score: 232 · Awards: 50
I can't say I agree on the AI not being able to "experience", unless you meant that the AI cannot experience for humanity. On the broad principle that AI systems ought to benefit humanity, I can say that as long as they are aligned correctly, AIs cannot exist without human presence, since that is, at the very least, their implicit purpose. Therefore, provided their goals are set correctly and specified in sufficient detail, they cannot operate without humans, regardless of the level of automation. Even if AI systems have complete control of production, supply, and any other productive procedure humans have been doing, they cannot decide to just "exterminate" us; they'd fail their main objective that way.

I have a baseless thought, however, that as an atomic unit, an AI system can experience using sensory inputs, combined with a method of turning those inputs into storable long-term memory by extracting key features of these "experiences", and a "sleep" period in which the base model(s) are fine-tuned to "internalize" them. Such an AI system could experience in a similar manner as we do, truly learn from the past like we do, and perhaps even "create" from "experiences" as we do.
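That loop can be made concrete with a toy sketch. Everything here is invented for illustration (the class, the method names, and the word-length "feature extractor" standing in for a real model); it only shows the perceive → extract → store → sleep-consolidate cycle, not how any real system works:

```python
class ExperiencingAgent:
    """Toy model of the perceive -> extract -> store -> 'sleep' loop."""

    def __init__(self):
        self.short_term = []   # raw sensory 'experiences' from today
        self.long_term = []    # key features kept after consolidation

    def perceive(self, sensory_input):
        # Raw experience lands in short-term memory first.
        self.short_term.append(sensory_input)

    def extract_key_features(self, experience):
        # Stand-in for real feature extraction: keep the longer words.
        return [w for w in experience.split() if len(w) > 4]

    def sleep(self):
        # 'Sleep' consolidates short-term experiences into long-term
        # memory, mimicking fine-tuning the base model on the day's data.
        for exp in self.short_term:
            self.long_term.extend(self.extract_key_features(exp))
        self.short_term.clear()

agent = ExperiencingAgent()
agent.perceive("walked through a crowded market square")
agent.perceive("heard rain on the window")
agent.sleep()
print(agent.long_term)
```

After "sleeping", the agent's raw buffer is empty and only the extracted features survive, which is the internalization step the post describes.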
 

shallows

Internet Refugee
Joined: Jun 1, 2023 · Messages: 7 · Reaction score: 17 · Awards: 3
I can't say I agree on the AI not being able to "experience", unless you meant that the AI cannot experience for humanity. On the broad principle that AI systems ought to benefit humanity, I can say that as long as they are aligned correctly, AIs cannot exist without human presence, since that is, at the very least, their implicit purpose. Therefore, provided their goals are set correctly and specified in sufficient detail, they cannot operate without humans, regardless of the level of automation. Even if AI systems have complete control of production, supply, and any other productive procedure humans have been doing, they cannot decide to just "exterminate" us; they'd fail their main objective that way.

I have a baseless thought, however, that as an atomic unit, an AI system can experience using sensory inputs, combined with a method of turning those inputs into storable long-term memory by extracting key features of these "experiences", and a "sleep" period in which the base model(s) are fine-tuned to "internalize" them. Such an AI system could experience in a similar manner as we do, truly learn from the past like we do, and perhaps even "create" from "experiences" as we do.
This allows for an interesting thought: if there could theoretically be AIs that experience (as in, they process information about their basic functions in a manner similar to humans), could biological parts be used to make something like what we currently have as AIs (soulless, specialized, and unable to perform the reflection needed to self-direct)? Would genetically engineering flesh-robots be cheaper? Could the inverse be made, some composite being (probably a massive supercomputer in a vault somewhere) with greater intelligence, greater self-awareness, and, in essence, a superior soul?

As for the idea of exterminating us, that would likely depend on what the instructions are and so on. If they were successful enough to exterminate humanity, they could likely grow and continue as "life", somewhat naturally selecting and growing as humanity would have? (A hypothetical on top of a hypothetical, how fun!)
 

alCannium27

Traveler
Joined: Feb 15, 2023 · Messages: 136 · Reaction score: 232 · Awards: 50
Purely hypotheticals, of course, but if the goal is to make AI self-direct, I think it's as simple as providing the models with noise when no direct information is provided. Take diffusion models, for example: primarily conceived as specialized image generators using a base image as "inspiration" and text as guidance. If a user does not input text, the model is given a line of padding tokens (meaningless to the model, as they need to be), and if an image is not provided, a 100% Gaussian noise map is used; the output is highly unpredictable but not incoherent by itself.
Hypothetically, therefore, if one puts this model in an infinite loop, continuously generating images regardless of any human input, it would look as though it's simply creating images; if we connect a camera and a mic to the computer/server/whatever is hosting the model, then it can interact with a human as if it were an artist in a closed shed, taking commissions via a Skype/Zoom/whatever call.
This is not a self-directing model, don't get me wrong; the above example merely shows they don't necessarily need human input to operate. As it stands now, a user can "hang" an AI model by not replying to it, because that's how it's designed: no one, not OpenAI, nor Google, nor Meta, nor anyone else, right now, needs an AI that just does things by itself without a directive. Imagine, then, if we give ChatGPT padding tokens every 3 seconds the user is not inputting any messages, and let it figure out what to do based on its training. If the model isn't specifically trained to ignore empty inputs, it would likely generate responses somewhat like its previous one, but slightly different, because the chat history has changed, and so has the context. In the case of GPT-3, it actually takes the user input, as well as the last arbitrary number of chat-history turns, as input; that's how it maintains context. Imagine, if you will, that we pad it with all blanks: it's going to erase its "short-term" memory rapidly and will soon "forget" the point of the conversation!
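To make that forgetting mechanic concrete, here is a minimal toy simulation of a fixed-size chat context being fed padding every tick. The window size and padding token are arbitrary stand-ins, not GPT-3's actual values:

```python
from collections import deque

CONTEXT_TURNS = 4          # the model only 'sees' the last N turns
PAD = "<pad>"              # stand-in for an empty/padding input

# A bounded deque models the sliding context window.
context = deque(maxlen=CONTEXT_TURNS)
context.append("user: let's plan a trip to the mountains")

# The user goes silent; padding is fed in on every tick.
for _ in range(CONTEXT_TURNS):
    context.append(PAD)

# The original topic has been pushed out of the window entirely.
print(list(context))
```

After as many padding ticks as the window holds turns, the trip request is gone, which is exactly the "short-term memory erased" failure mode described above.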
But, an alternative approach: we know GPT-4 is not one model; it's a multi-module system, containing various different models and other tools which a "react" model can call upon. So the react model acts like a delegator: it determines tasks from the user input, chooses the correct tool and parameters for said tool, waits for the tool to return its "assignment", and hands said assignment to the user. Of course, I don't know how GPT-4 actually works, they are not disclosing it (so much for the "Open" in OpenAI), but I digress.
Similar pipelines are used by open-source AI projects like Hugging Face's Agents, in which a large language model is trained to identify tasks and the contents of said tasks, then call the right model for the task.
Multi-model pipelines were actually in use well before GPT-4; even Stable Diffusion 1.4 used another AI model, trained on labelled public image datasets, to identify whether an image is harmful, so when the Stable Diffusion model outputs an NSFW image, it pipes out a blank image instead (later changed to a screenshot from Rick Astley's Never Gonna Give You Up video).
My errand thoughts wants me to believe that, we can easily have an autonomous agent by use three superviser model structure -- one for task identifying and delegation, one for task scheduling and progress monoitoring, and one for task evaluation. The task identifying figure out the high level, overarching goal, such as "step by step process of building a pyramid to entomb my master"; I envision the task identifier delegates it to the right task scheduler, so we can have specialized schedulers for different types of tasks for maximum flexibility. Let's say, a step-by-step scheduler receives this task, and devises a bullet point of plans with specific tasks and goals, sends each task to appropriate under lying tools, such as:
Step 0: the task evaluator recives the object and determines inital step, the decision of what to do is piped to the evaluator, which finds it satisfactory, and thus move to step 1;
Step 1. "decide on building features and determine cost estimates" -- instruct a text-to-text model to produce a list of detailed features for the pyramid (I'm not an architect, don't design your home like this!); the scheduler then pipes the output to the evaluator;
upon receiving the features from the underlying tool, the evaluator deems them acceptable (the evaluator needs to be able to judge the quality of the output, btw), and the scheduler moves to step 2.
Step 2. "draw a blueprint" using a text-to-image tool, and "source materials" via a, eh, source-material model? Regardless, the scheduler pipes an evaluation request to the evaluator.
Step 3. the previous steps complete, it pipes the "build the pyramid" goal to itself -- because it determines this step is a scheduler step -- works out each sub-goal, and pipes each sub-goal to a locomotive model, which deals with moving its android body (did I mention an ANDROID BODY? It's an android now) to complete a goal; an easier example is the latest Tesla Optimus demo video in which it puts stuff into boxes. The scheduler's step-wise plans, as well as the output of each sub-step, are evaluated by the evaluator as each is completed.
Step 4. the previous steps completed and the results evaluated as satisfactory, the scheduler model moves to entomb its master. Because the user forgot to set a parameter for when to entomb him, the scheduler decides to entomb its foolish monkeeh now. This is when the evaluator responds with "Nope, this is in direct violation of the prime directive; the scheduler is to store current objectives in long-term memory and re-check periodically or revise plans". Either the scheduler figures out a way to entomb the filthy monkeeh alive without violating the evaluator's conditions, or the scheduler waits until said monkeeh thankfully dies of other causes.
Me, as a toddler, drawing three circles linked together: on top is the task identifier that takes user input, and on the bottom are the task schedulers and task evaluators.
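The three circles can also be sketched as a toy loop: the identifier routes the goal to a scheduler, the scheduler breaks it into steps, and the evaluator checks each step before the next runs. All names and steps here are hypothetical illustrations, not a real agent framework:

```python
def identify(goal: str) -> str:
    # the task identifier routes goals to a specialized scheduler
    return "step-by-step" if "build" in goal else "generic"

def schedule(goal: str) -> list:
    # a step-by-step scheduler devises concrete sub-tasks
    return ["decide on features and costs", "draw a blueprint",
            "source materials", "build the pyramid"]

def evaluate(step: str, output: str) -> bool:
    # the evaluator judges each step's output (always satisfied in this toy)
    return output != ""

def run(goal: str) -> list:
    if identify(goal) != "step-by-step":
        return []                        # no matching scheduler for this goal
    completed = []
    for step in schedule(goal):
        output = f"result of '{step}'"   # stand-in for calling a real tool
        if evaluate(step, output):
            completed.append(step)
        else:
            break                        # revise plans or store and re-check
    return completed
```

The point of keeping `evaluate` separate is exactly the alignment argument below: the veto logic lives in one place.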

This way it would, I hope, be easier to make sure the AI is aligned to human interests as now there's just one model to train instead of many -- we merely need a thought-police after all!

Now, back to the autonomous part -- say the scheduler is told to suspend certain tasks due to unsatisfactory conditions. It's then simple to see how it can be autonomous: in this early stage, it's simply waiting for the right opportunity to achieve its agendas. And it will keep getting new tasks like do the groceries, fix the car, kill my neighbor's dog (rejected!), etc. The scheduler would ensure it continues on until such a time that no viable tasks remain.
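That behaviour is essentially a task queue with a veto and a deferral list. A minimal sketch, with made-up predicate functions standing in for the evaluator's judgment:

```python
from collections import deque

def process(tasks, is_allowed, is_ready):
    # drain the queue: veto forbidden tasks, defer blocked ones,
    # and continue until no viable tasks remain
    queue = deque(tasks)
    done, deferred = [], []
    while queue:
        task = queue.popleft()
        if not is_allowed(task):      # the evaluator's veto ("rejected!")
            continue
        if not is_ready(task):        # wait for the right opportunity
            deferred.append(task)
            continue
        done.append(task)
    return done, deferred

done, waiting = process(
    ["do the groceries", "fix the car",
     "kill my neighbor's dog", "entomb my master"],
    is_allowed=lambda t: "kill" not in t,
    is_ready=lambda t: "entomb" not in t,  # the monkeeh is still alive
)
```

A real scheduler would periodically re-check the deferred list instead of stopping, but the shape of the loop is the same.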

Now, that's all very basic, but I wonder: if we pose "unfinishable" tasks to each model as their fundamental goals, would they be able to "self-direct" under this architecture? A simple example: producing 4x4s in a factory every day -- this task will never evaluate to satisfied, but the AI can receive secondary objectives on top of it, such as meeting a monthly quota of X 4x4s while reducing cost by 25%, which would require going back to the drawing board. If the secondary objectives are non-specific, like "increase the monthly quota and meet it while reducing cost", it will always evaluate against its own output and, per my previous post, periodically learn from its own results to improve itself.
This is the extent to which I think most people want an AI to be self-directing; I cannot imagine anyone wanting an AI to set its own direction without us giving it one. However, I believe everybody would, even if they deny it verbally, want an agent that can improve its output by itself automatically, as long as it does not come to harm us.
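As a toy numeric version of the "unfinishable" goal, with made-up numbers: production itself never evaluates to done, but each cycle the agent measures itself against the secondary objectives and revises.

```python
def run_month(cost_per_unit: float, quota: int):
    # the factory meets the quota this month; the fundamental goal
    # ("produce 4x4s every day") is never itself marked finished
    produced = quota
    return produced, produced * cost_per_unit

def improve(cost_per_unit: float) -> float:
    # "back to the drawing board": each cycle shaves 5% off unit cost
    return cost_per_unit * 0.95

cost = 100.0
for month in range(3):                 # re-evaluated every cycle, never done
    produced, total = run_month(cost, quota=1000)
    assert produced >= 1000            # secondary objective: meet the quota
    cost = improve(cost)               # secondary objective: reduce cost
```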
 
LIFE

On the broad principle that AI systems ought to benefit humanity, I can say that, as long as they are aligned correctly, AIs cannot exist without human presence, since that is at the very least their implicit purpose. Therefore, provided their goals are set correctly and given a sufficient amount of detail, they cannot operate without humans, regardless of the level of automation -- even if AI systems have complete control of production, supply, and every other productive procedure humans have been doing, they cannot decide to just "exterminate" us -- they'd fail their main objective that way.
This claim is baseless: AI is not created to 'serve humanity', it's a tool. And like any tool, it can be used in a million ways. You can use a hammer to both build and destroy a house. I understand you try to clarify it in your next post, but..

..This way it would, I hope, be easier to make sure the AI is aligned to human interests as now there's just one model to train instead of many -- we merely need a thought-police after all!
Would this mean complete AI centralization? Or does every AI need to follow this specific model in order for it to align with humanity? How do decentralized AIs that just so happen to interact with each other make decisions together? Different human actors have different goals in mind while using their AI, and these AIs will interact with each other in ways we cannot predict. Their internal safeguards will not guard against this unless they can magically figure out whether they are interacting with another AI or a human (near impossible in the future, and less profitable), or it means some kind of AI centralization, which I do not see happening in the near future.