With how much artificial intelligence has been improving in areas such as text reading and generation, image reading and generation, convincing voice synthesis, and more, I think there's a lot to discuss about the effects this technology will have on society.
I'll start off with one example.
I'd been thinking about the enshittification cycle of tech, and I think it's coming for Google hard. The search engine just isn't great anymore at finding what you actually want, and I think that's gonna leave a big opening for Bing with its use of AI. If the AI can sift through the crap and find what you actually want, thanks to its understanding of language, it'll make searching genuinely useful again.
In the pre-Google internet, search engines used to search only for exact words and phrases, which had its uses, but also meant finding a lot of sites that simply crammed in a lot of popular words and phrases to get visitors. Google cut through the crap with a better understanding of how to "rank" sites relative to how relevant they are, and even find sites that are on the topic you were looking for without using the same exact words.
But Google started to become more advertiser-friendly, then later, more shareholder-friendly. There's a limit to how far a product built entirely around shareholder growth can go, so as it turns to crap, it leaves an opening for a competitor to show up.
Since ChatGPT (which Bing is now plugged into) understands language, it can actually grasp context and determine relevance based on it. And that'll make it huge, I think. Context-based understanding of web pages could do an excellent job of finding what people actually want, in a way that goes way beyond Google's page-ranking systems or exact-word matching.
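To make the contrast concrete, here's a toy sketch. The documents and the synonym table are invented purely for illustration; real semantic search uses learned embeddings, not a hand-written table.

```python
# Toy contrast between exact-word search and a (very crude) notion of
# "understanding" that related words mean related things.

docs = {
    "page1": "cheap flights to paris",
    "page2": "inexpensive airfare to the french capital",
}

# A tiny hand-made map of related terms, standing in for what a
# language model learns about meaning.
related = {
    "cheap": {"cheap", "inexpensive"},
    "flights": {"flights", "airfare"},
    "paris": {"paris", "french capital"},
}

def exact_match(query, text):
    """Old-style search: every query word must appear verbatim."""
    return all(word in text.split() for word in query.split())

def semantic_match(query, text):
    """Crude stand-in for meaning-aware search: any related term counts."""
    return all(
        any(term in text for term in related.get(word, {word}))
        for word in query.split()
    )

query = "cheap flights paris"
print([p for p, t in docs.items() if exact_match(query, t)])     # ['page1']
print([p for p, t in docs.items() if semantic_match(query, t)])  # ['page1', 'page2']
```

The second result set is the point: page2 never uses the query's words, yet it's exactly the kind of page a searcher wants.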
Edited by BonsaiForest on Dec 10th 2023 at 6:15:29 AM
It's AM or SHODAN, you will have to choose!
Or Skynet.
Pfff, Skynet? That has no personality, it's just a network!
As I said before, if AI actually ends up killing us, it won't be out of malice, but because some idiot screwed up when setting parameters.
Paperclip problem, basically.
Edited by DrunkenNordmann on Apr 8th 2024 at 9:16:01 PM
I just think of AM when people mention Roko's Basilisk, because of the pointless cruelty of it. Creating a copy of a person's consciousness just so that it can be tortured for eternity for basically no reason.
At least Skynet just kills you because it thinks it has to. Roko's Basilisk is vindictive, petty, and evil in what I find a very human way.
Edited by GNinja on Apr 8th 2024 at 7:22:14 PM
Roko's Basilisk sounds like a party encounter in Dungeons & Dragons.
It's a baller name, I'll give it that.
What I find interesting is that on top of the Pascal's Wager thing, it's also got a dash of like... the prisoner's dilemma as well, doesn't it?
iirc, the idea is that RB's creation is assured because there are enough people scared by the concept of its existence to create it. If everyone just... didn't, then RB would never exist. But that would require trusting that no one else would give in to their fear and create it. If you believe that other people WILL make it, then you need to join them to protect yourself.
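That coordination trap can be sketched as a toy payoff table. All the numbers here are invented for illustration; the only thing that matters is their ordering.

```python
# Toy model of the coordination problem described above: whether to help
# build the Basilisk depends on what you expect everyone else to do.

# payoff[(you_build, others_build)] -> your outcome
payoff = {
    (False, False): 0,    # nobody builds it: nothing happens (best collectively)
    (False, True): -100,  # others build it, you didn't help: you get punished
    (True, True): -1,     # it gets built, but you helped: you're spared
    (True, False): -1,    # you help but nobody else does: wasted effort
}

def best_response(others_build: bool) -> bool:
    """Pick the action with the higher payoff, given your belief about others."""
    return payoff[(True, others_build)] > payoff[(False, others_build)]

# If you trust that nobody else will build it, staying out is optimal...
assert best_response(others_build=False) is False
# ...but if you believe others WILL build it, fear pushes you to join in.
assert best_response(others_build=True) is True
```

The "everyone just... didn't" outcome is the best collective result, but it's only stable if everyone trusts everyone else, which is the dilemma.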
Edited by GNinja on Apr 8th 2024 at 7:35:42 PM
It's also how you trick people into paying for your AI research (even if you have no clue about actual research), i.e. "Trust us, we're totally gonna create a non-torturing super-AI."
I take it back, the notion of the simulated mind being "functionally" the same as an original one is the dumbest shit I've heard.
Like, they don't even have brains to have their own thoughts with, nor do they physically exist.
They're as real as the villagers I kill in Minecraft as far as I care.
Edited by Cordite-455 on Apr 8th 2024 at 9:45:57 PM
Does it matter if something exists physically if it perceives itself as existing and has self-awareness?
A digitally simulated mind, by definition, runs on some physical device (a computer), which could be considered the equivalent of a brain. In this sense, it "exists" in the same way that your mind "exists", even though it's just electrical impulses inside a brain.
Edited by HavocCrow on Apr 9th 2024 at 10:56:54 AM
I strongly recommend you seek out and read Greg Egan's novel "Permutation City", as it attempts to examine the question of what physical substrate is necessary for a system to perceive itself to exist.
Then check out his afterword to the novel on his site.
I believe you may find it a valuable and rewarding experience.
If I remember correctly: “Those who say that machines cannot think should be allowed to continue their reasoning. Soon, they will find out that people cannot think, either”. - Richard Feynman
It just sounds like a variation on I Have No Mouth, and I Must Scream, really.
And what does it say? Can you at least summarize?
There are 10 major points in there.
Listing them in full would just be regurgitating it, which isn't allowed anymore.
Edited by Demongodofchaos2 on Apr 17th 2024 at 8:13:23 AM
I'll try and give my thoughts without just repeating what's in the report verbatim.
Takeaway 5 seems to be the most interesting one for me. There doesn't seem to be a standardized way to assess an AI's performance. This actually seems to contradict several other takeaways, specifically 1: AI performs better than humans in some but not all domains, 7: AI improves productivity, and 8: AI accelerates scientific progress. I'm not sure how you can accurately assess improvements to productivity or scientific progress if there is no standardized way to assess AI performance.
It's also interesting that investment in generative AI is increasing at the same time that new models are becoming more expensive.
The full report likely goes into way more detail than this but I have not had a chance to read thru everything yet. But the key takeaways don't exactly inspire confidence from me.
Edited by Xopher001 on Apr 17th 2024 at 5:37:35 AM
For Chat GPT, what is the added value in the paid version versus the free one? Is it a lot better, or just a little better?
The paid version is a lot better in many areas. It's smarter, it can do things like analyze jokes and memes (to varying degrees of success: it has helped explain things I didn't get, while other times missing the point of things I already knew), and it can read pictures.
For example, recently my mom was looking through some things she had and asked me what some strange electronic device was. I took photos, had ChatGPT analyze, and it correctly identified the strange object as a converter for different standards of electricity between countries, which my mom likely purchased (or was given by someone else) for her trip to France.
It also seems to know more obscure information. My online friend participated in some obscure group decades ago when she was a kid; she asked ChatGPT's free version about it, and it didn't know anything. I asked the paid version about that same group, and it actually recognized and described it, getting it mostly right.
It isn't perfect but makes fewer mistakes, sometimes way fewer. It's more useful for "tip of my tongue" stuff where you're trying to remember something you've heard of before but can't remember the name of. It reads pictures and identifies things you might have never heard of before. (Internet searches confirm that, yes, it got it correct.) It's overall smarter - ranging from a little to a lot depending on task - and I can easily see the difference.
The free version is good enough for my friend in most cases, but in some instances, she'd show me a disappointing result it gave her, and I'd enter the same prompt into the paid version and get a more intelligent and accurate response.
5 isn't about performance, it's about "AI responsibility", a.k.a. alignment.
Is it doing what we want, or is it spouting racist bullshit? Ignoring minorities in the output? Displaying biases? Is it complying with our social norms?
It doesn't contradict them; they're about completely different topics.
AI performance testing is actually pretty simple: you just count how many errors it makes. Alignment is much more complicated, and there isn't really a way to standardize it, because the exact definition is going to vary from person to person.
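The "count the errors" style of testing amounts to something like this. The task, labels, and predictions are all made up for illustration:

```python
# Minimal sketch of error-counting performance evaluation: compare the
# model's answers against known-correct labels and report the error rate.

def error_rate(predictions, labels):
    """Fraction of answers the model got wrong."""
    errors = sum(1 for p, y in zip(predictions, labels) if p != y)
    return errors / len(labels)

labels      = ["cat", "dog", "cat", "bird", "dog"]
predictions = ["cat", "dog", "dog", "bird", "cat"]  # hypothetical model output

print(error_rate(predictions, labels))  # 2 wrong out of 5 -> 0.4
```

Alignment has no equivalent single number to count, which is the asymmetry being pointed out above.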
Edited by Imca on Apr 17th 2024 at 11:23:37 PM
A lack of a standardized approach to measuring performance doesn't mean that there is no way to measure performance, it just means that different research teams are using different metrics.
Well, we've already passed the Turing test, I guess.
So the article says that AI improves productivity and scientific progress, though AI used without oversight can make productivity worse.
I think there's definitely gonna be good and bad in this "brave new world" we're moving into. I'm looking forward to all the medical improvements it might lead to. Maybe quality of life (in a medical sense) will go way up. In a psychological sense? Hmm...
But I'm glad to see the good that's gonna come with this tech.
People just think that every AI is gonna be AM, I swear.
Edited by GNinja on Apr 8th 2024 at 7:09:20 PM