Descarte Yee
The 4th musketeer
"I'm kind of baffled that this hasn't gotten any responses yet, is Agora dying?"
Yes.
Artificial general intelligence is the name we give to the idea that a computer program can synthesize information in an intelligent manner for any application. In other words, it needs to have predictive models for every stimulus, and be able to adapt to new types of stimuli by analyzing its knowledge and applying predictive logic freely. That last part is the key: how do you teach a computer to make decisions without telling it what to do? Computers rely on "if x then y"; they need recognizable prompts. How exactly would you go about doing this without it just being a predictive model? To be an AGI, the computer would need to know why x, and use that information in tandem with its physical knowledge in order to make a judgement. That is not possible now, and might not be possible ever.

AIs right now still use basic "if x then y" structuring, just very, very complicated. Modern AIs are predictive in nature and are based on human knowledge, which means they cannot reason freely; they just imitate what they predict to be the optimal answer using identified word structures in the prompt. There is no "thinking". Because of this, they will always be limited and untrustworthy, because they cannot create their own models to learn from when given new information not already easily defined by humans.

When humans learn, they build a model of the subject unconsciously, using symbols to remember key parts of it. Humans use all these models together in everyday life, with the relationships between them constantly shifting. AI lacks this fluidity, and as AIs become more complicated, they become more rigid. AIs don't know how to shift their structures to rule out inconsistencies. They don't know the real world; all they know is what we tell them the real world is. All of these problems might be solvable, or not. It seems simple, but it's not.
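The "if x then y" point can be made concrete with a toy sketch (all the prompts and responses here are invented for illustration): however big you make the table, it only maps stimuli it was explicitly told about, with no model of why.

```python
# Toy "if x then y" structuring: a fixed lookup from recognizable
# prompts to canned responses. Anything outside the table is simply
# unrecognizable; there is no reasoning, only pattern matching.
RESPONSES = {
    "hello": "hi there",
    "what time is it": "I don't have a clock",
}

def respond(prompt):
    # Normalize the stimulus, then: if x then y.
    key = prompt.lower().strip("?!. ")
    return RESPONSES.get(key, "does not compute")
```

Real language models are this idea scaled up and made statistical rather than exact, but the core limitation the post describes is the same: the mapping comes from what it was shown, not from understanding.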
An AGI would need to be able to answer every question that a human could, drive a car, recognize colours and sounds and time, and so many other things. That is just so far away right now it might as well be impossible.
I guess I should have specified that I don't see AGI coming anytime soon, but I'm not in the industry so I have no idea.
That reminds me of artificial sentience, which I think is even more impossible.
The idea is to have sentience somewhat similar to how a human experiences it. But how would we know? We would create a program that correctly answers all the questions we design to identify sentience, or that has cognitive structures imitating the human ones we think allow for sentience. Essentially we are creating something that will say "yes, I am sentient" in a million different ways. It could be sentient, or it could just be a language model that fools even us; there is no way of knowing. If it looks like a mind, talks like a mind, and seems to think like a mind, is it a mind?
Another thing: the human experience of sentience is connected to our existence as biological animals. We have no idea to what extent our physical existence shapes our conception of what sentience is. The brain we think with moves our limbs, experiences the senses, and subconsciously processes information with a heavy bias towards social communication, reproduction, and survival. This is what our brain evolved to do over millions of years. How are we supposed to recreate it from the outside? We have no clue how the brain works, we have not mapped the human neuron structure, and we might never be able to. Modern neuroscience has come to the conclusion that the brain reuses assets as much as possible to increase efficiency. Each part of your brain is involved in many different tasks, in an insanely complex web of relationships. From a purely logical standpoint, it is plausible that recreating these exact relationships is mandatory in order to create what we think of as sentience.
I don't know anything about computers btw this is just bullshit
"Is it justified or am I too ignorant to notice the warning signs?"
Here's my advice on the matter: look into Mr. Yudkowsky. What is his field of study? What are his credentials? What scholarly publications has he written for? Who is behind those publications? When you can start answering these questions, I think, you will be able to tell if you're justified or not.
I see a lot of neuroticism surrounding the topic of AI. I think a lot of people are rationalizing themselves into hysteria, like in the link I've posted. Is it justified or am I too ignorant to notice the warning signs?
just ask it to describe it as a GTA mission
"Here's my advice on the matter: look into Mr. Yudkowsky. What is his field of study? What are his credentials? What scholarly publications has he written for? Who is behind those publications? When you can start answering these questions, I think, you will be able to tell if you're justified or not."
Yudkowsky is more of a backseat philosopher who had a little too much time on his hands to read sci-fi, but you don't need to listen to him, because if you care about publications, new ones are coming out that are slowly aligning with his points more and more.
The fact that stable repressive regimes and social media systems are not linked is strange
"look into Mr. Yudkowsky. What is his field of study? What are his credentials?"
In the early 1990s, I was tangentially involved in some "virtual reality" work. In the 1990s, VR involved a lot of wires, uncomfortable head gear, and didn't produce particularly compelling experiences. Nonetheless, all kinds of people were interested in these products in both the private and public sectors. Of course the technology didn't really do what anyone wanted and it never went anywhere. Two decades later (i.e., 2012), when 1990s computer dude John Carmack became involved with Oculus all of this started up again. I was, for my own personal reasons, very skeptical of VR products in general but at the time it was hard to argue about. Surely, if John Carmack was involved then VR was finally going to come of age.
This was the mantra for just short of a decade (i.e., from 2012 to around 2020). I was told, repeatedly, by close personal friends that VR was a life changing experience. I was told, I absolutely had to get an Oculus Rift and see for myself because it was so profound. In the future, I was told, we would forego all non-VR communication and entertainment. VR was just that much better. Whenever someone would bring this up to me, I would say something like, "what's the most compelling experience you've had in VR?" The answers varied but it would usually be something along the lines of, "I played golf and it was just like being on a real course!" This type of conversation didn't leave my life until Facebook, then owner of Oculus, began to exert its influence more obviously in the VR space.
When Facebook became Meta, a name change that caused me great amusement, a lot of VR conversations shifted to anti-hype. All of a sudden I was being told that, in the grim darkness of the future, there would only be the Metaverse. Mark Zuckerberg would trap us all in VR pods. Everything would be an NFT. I had read Snow Crash, several times actually, and kind of shrugged it off. I said to people, "it's just going to be VR Second Life." Of course, I was told that I was wrong. We were all doomed. The Metaverse would take off and we would all be living in Neal Stephenson's "libertarian VR hellscape" by 2024.
By the time the anti-hype started, John Carmack was already kind of out at Oculus. He had gone from being the CTO to being merely Consulting CTO in 2019. Allegedly, he wanted to begin exploring the development of AGI products. To me, this meant that Carmack was out the door at Meta (then Facebook) already. I chose to read his new title as an expression of Meta's inability to support their pivot to VR without Carmack to give it legitimacy.
In August of 2022, John Carmack gave a five hour interview on the Lex Fridman podcast. I don't expect most people have listened to it but because I'm quite obviously obsessed with Carmack I did. Most of the interview isn't interesting, unless you want to hear Lex's embarrassing views on programming, but Carmack does discuss some interesting internal details of Meta's operation. Most interesting, to me, was a short discussion of Meta's policy for hiring software engineers. Carmack says, "I remember talking with [Bosworth] at Facebook about it. Like, 'man I wish we could have just said we're only hiring C++ programmers.' And he just thought, from the Facebook/Meta perspective, we just wouldn't be able to find enough. You know, with the thousands of programmers they've got there it's not necessarily a dying breed but you can sure find a lot more Java or Javascript programmers. I kind of mentioned that to Elon, one time, and he was kind of flabbergasted about that. It's like, 'well you just go out and you find those programmers and you don't hire the other programmers that don't do the languages that you want to use.'" Carmack was also asked about the most compelling VR experience. His answer was Beat Saber.
In December of 2022, only a few months after his interview with Fridman, Carmack formally resigned his position as Consulting CTO. At the end of his decade working on VR products, Carmack wrote, "the issue is our efficiency. [...] We have a ridiculous amount of people and resources, but we constantly self-sabotage and squander effort. There is no way to sugar coat this; I think our organization is operating at half the effectiveness that would make me happy." With his resignation, it seemed, a decade of growing VR hype and anti-hype almost immediately deflated. A friend came to me and said, "you know I bought an Oculus? I played with the thing once and haven't used it since." Carmack's new company, Keen Technologies, is an AI company. He's gotten somewhere in the realm of $20 million to build an AGI.
This has been a pretty long story but here's the point: almost everyone is a know-nothing. Doubly so when it involves any emerging high technology. Most people don't even read the books they call back to. Those that do read almost assuredly don't understand. Instead, people get swept up in the excitement and presentation of the idea. VR isn't a head mounted display. It's the holodeck or the Matrix. AI isn't chatbots and heuristic photo filters. AI is HAL 9000 and Hatsune Miku. Whenever you strip away the fiction and look at the reality of these technologies, you will find that most people, even the so-called experts, are just iconolaters.
A friend of mine who's also a software dev is now legally allowed to use ChatGPT while coding, and he said he uses it several times a day to create arbitrary functions. Something like
"give me a method that returns the highest value of this vector as a String".
It's basically smart auto completion.
Ofc it doesn't really do any logic, but it saves him a lot of time googling for the right Stack Exchange thread.
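For a sense of scale, the kind of function described above might come back looking something like this (Python here just for illustration; the function name is invented):

```python
def max_as_string(values):
    """Return the highest value in the given sequence, formatted as a string."""
    if not values:
        raise ValueError("empty vector")
    return str(max(values))
```

Trivial to write by hand, but exactly the sort of boilerplate where dictating a one-line prompt beats searching for the right snippet.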
I was surprised how fast it caught on in people's everyday life. To be fair, he works at a small startup company so they're quick to adopt new tech.
I've been playing around with it, and ChatGPT seems remarkably powerful to me. It still needs people to direct it because it's still very dumb, but as a huge productivity tool its benefits can't be denied. It's really useful for generating scripts to automate a lot of processes, cutting out a lot of the boring work. In the last couple of days I have used ChatGPT to:
- Find out what changes I needed to make to a codebase following a major version dependency update.
- Find out what git commands I need to use to branch off of a different remote origin, pull it down locally to make changes, and push those changes back up (kinda trivial, but not something I do often, like once every 6 months or so).
- Generate a script for manipulating audio and video files in a way I wanted, without faffing about in a stupid nonsense video editor.
- Generate a script for creating thumbnail images given template files and a font file, adding in any stylistic changes I wanted, so I can mass-print images (still need to include the Stable Diffusion part for generating the true background from a prompt) instead of faffing about in a graphics editing program.
- Get from reddit the top trending and most engaged topics relating to some subject (philosophy) and create an interesting essay on the topic, providing me a choice of descriptions off of that essay, a choice of "clickbait" titles, and a choice of "clickbait" thumbnail texts (all using revChatGPT, so I don't have to pay for API use). I have also used it to create a suitable prompt from that essay that I plan to feed into Stable Diffusion to get the background. I'll still need to get around to using the 11.ai API to connect it all together, though. This also works for Twitter, but I found out they changed their API so you can't pull tweets unless you pay, and the bot's information on scraping it assumes the same HTML layout from 2021. I could probably give it the HTML for the current layout and ask it how to do it given that structure, now that I think about it.
- I have also used it for simulating dialogues, and providing a bunch of example responses (Fallout-style dialogue trees).
- EDIT: I tried it out for making a LaTeX template with aesthetic flourishes in the old style of books and got this result.
Danke Schoen ChatGPT :D
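The Fallout-style dialogue-tree idea above can be sketched as plain data plus a tiny walker; this is the shape of thing you might ask ChatGPT to populate (all node names and lines here are invented):

```python
# A minimal dialogue tree: each node has an NPC line and a mapping from
# player choices to the next node id. A terminal node has no choices.
DIALOGUE = {
    "start": {
        "npc": "Haven't seen you around before, stranger.",
        "choices": {
            "Who are you?": "who",
            "Just passing through.": "bye",
        },
    },
    "who": {
        "npc": "Folks call me the Lorekeeper.",
        "choices": {"Goodbye.": "bye"},
    },
    "bye": {"npc": "Safe travels.", "choices": {}},
}

def walk(node_id, picks):
    """Follow a sequence of player picks through the tree;
    return the NPC lines encountered along the way."""
    node = DIALOGUE[node_id]
    seen = [node["npc"]]
    for pick in picks:
        node = DIALOGUE[node["choices"][pick]]
        seen.append(node["npc"])
    return seen
```

The win is that the model can draft hundreds of these nodes for you, and you only have to validate that the graph links up.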
And talking with a friend, he has used ChatGPT for simulating combat in his TTRPG system, as well as finding holes in the linguistic description of his system. I'm personally considering seeing how I can use it in coordination with LaTeX to typeset books in that delightful and excellent old-school Folio style, as well as finding old out-of-print textbooks that are public domain that the world could benefit from seeing back in print. If you want approaches to a problem, particularly mathematical ones, it can help you with that, and it can provide samples and describe how they work (at different levels of literacy if you want). A good example of this is arbitrage betting (not too common to find, but with AI you can cheat this out until you get b& by the various online betting places). You can also probably use it for automated daytrading too, asking it what AI techniques to use, where to get data, where to go for no-fee trading... etc. Another potential use is using it in dating apps... I think some people have used it for this already, I might give that a try myself, see what it accomplishes.
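The old-style typesetting idea can be sketched in LaTeX. The packages named here are real (ebgaramond for an old-style serif face, lettrine for drop caps, geometry for the page), but the specific styling is just my guess at the look being described, not a known template:

```latex
\documentclass[11pt]{book}
\usepackage{ebgaramond}                    % old-style serif with text figures
\usepackage{lettrine}                      % drop caps at chapter openings
\usepackage[a5paper,margin=2cm]{geometry}  % small, book-like page

\begin{document}
\chapter{Of the Nature of Things}
\lettrine{I}{t is} an ancient custom to open a chapter with a
drop cap, and a single package call gets you most of the way there.
\end{document}
```

From there it's mostly asking the model for the right package options (lettrine's number of lines, ornaments, running heads) rather than writing TeX by hand.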
As I see it, ChatGPT is really just a tool that augments your own abilities and skills massively, and lets you have 10x more productive output on menial and somewhat-menial tasks. You still have to figure out how to glue all its outputs together in a sensible way. I think this holds true for a lot of AI: as currently used it's just a tool, and the use cases of that tool haven't been fully explored. It's trivial to say content generation, but because it will end up producing a lot of slop, it will then become a game of producing meaningful content if it's to be put on the internet. It also means creating many passive revenue streams to augment the salary of whatever job you work will be much easier, and because of its efficiency, you can probably work double or triple jobs (moonlighting) if you're skilled. This content generation will apply to music creation, art creation, program creation and maintenance, finding out information from documentation that makes no sense, and bringing information that's inaccessible by literature, notation, or language down to a much more understandable level.

I see it being very useful in game development too, meaning with the right combination of 11.ai, ChatGPT, and their ilk you can probably make non-trivial and rather interesting quest and dialogue systems. Most indie games currently have dialogue that isn't spoken, so this alone is a big deal for creating immersive experiences. Animation and 3D modelling are two areas I haven't seen conquered by AI, but a coworker showed me Spline AI, which I've joined the waitlist for, since it seems to promise prompt-based 3D modelling and texturing. I haven't seen any great music AI yet, but it's only a matter of time, as music is its own language and GPT is just a generative pre-trained transformer...
As for content generation woes on the internet: scammers are gonna take advantage of it, but some people will also take advantage of it to spread meaningless messages (memes), misinformation, lies, and slander, while others use it to spread a positive or educational message. It's gonna make scammers and phishers really dangerous, as it means their bait will follow natural language correctly and make sense.
The "Capital" or Wealth-gap that a lot of people are observing resulting from this will be better described in my opinion as a class of people below the programming API and a class of people above the programming API. Naturally it also presents problems for the prussian-style schooling we have around us, as those focus on cramming knowledge and not about understanding, but now gauging understanding in open-ended essays is also difficult. It absolutely is something big and unprecedented, and is a massive equaliser in information or knowledge-based job sectors. It's exactly as Bill Gates says, it's not been used to its full effects, and all the low-hanging fruit is being picked and is ripe for the picking. If you're of an entrepreneurial mindset, it's not something to lag behind- especially given it's free to use and the only cost is your time. Even artists are using it to augment their art as they provide a sketch as a basis, pass it into AI Art, and then post-process the image cleaning it up, and removing and snipping away the nonsense of it.
To describe ChatGPT in one short phrase: is this what steroids are like?
Is automation detrimental to skill acquisition?