Artificial Intelligence discussion


With how much artificial intelligence has been improving in areas such as text reading and generation, picture recognition and generation, convincing voice synthesis, and more, I think there's a lot to discuss about the effects this technology will have on society.

I'll start off with one example.

I'd been thinking about the enshittification cycle of tech, and I think it's coming for Google hard. The search engine just isn't great at finding what you actually want anymore, and I think that's gonna leave a big opening for Bing with their use of AI. If the AI can sift through the crap and actually find what you want, thanks to its understanding of language, it'll make searching genuinely useful again.

In the pre-Google internet, search engines only matched exact words and phrases, which had its uses, but it also meant finding a lot of sites that simply crammed in popular words and phrases to attract visitors. Google cut through the crap with a better understanding of how to "rank" sites by relevance, and it could even find sites on the topic you were looking for that didn't use the exact same words.

But Google started to become more advertiser-friendly, then later, more shareholder-friendly. There's a limit to how far you can push a product built entirely around shareholder growth, so as it turns to crap, it leaves an opening for a competitor to show up.

Since Bing/ChatGPT (which Bing is plugged into now) understands the use of language, it can actually understand context and judge relevance based on that. And that'll make it huge, I think. Context-based understanding of web pages can potentially do an excellent job of finding what people actually want, in a way that goes far beyond Google's page-ranking systems or exact-word matching.
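To make the contrast concrete, here is a minimal Python sketch of the difference between exact-word matching and the kind of embedding-based "semantic" relevance described above. It is not how Bing or Google actually rank pages; the model name, query, and documents are illustrative assumptions.

# Minimal sketch: exact-word matching vs. embedding-based semantic relevance.
# Not how any real search engine ranks pages; the model, query, and documents
# below are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

query = "how do I stop my laptop fan from being so loud"
docs = [
    "Quieting a noisy notebook cooler: cleaning dust and adjusting fan curves",  # relevant, few shared words
    "Loud laptop deals! Best loud laptop fan laptop cheap laptop",                # keyword-stuffed filler
]

def keyword_score(query: str, doc: str) -> int:
    """Old-style relevance: count query words that literally appear in the page."""
    return sum(word in doc.lower() for word in query.lower().split())

# Embedding-based relevance: compare meanings rather than literal words.
model = SentenceTransformer("all-MiniLM-L6-v2")
q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(docs, convert_to_tensor=True)

for doc, sem in zip(docs, util.cos_sim(q_emb, d_emb)[0]):
    print(f"keyword={keyword_score(query, doc)}  semantic={float(sem):.2f}  {doc[:45]}")

# The keyword-stuffed page wins on literal matches, while the genuinely relevant
# page scores higher on semantic similarity.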

Edited by BonsaiForest on Dec 10th 2023 at 6:15:29 AM

HeraldAlberich from Ohio (Before Recorded History) Relationship Status: Gonna take a lot to drag me away from you
#51: Dec 13th 2023 at 12:53:10 PM

This is also the reason the attempts to integrate Chat GPT into other stuff aren't…really working out very well. Because it's not designed for that, and it just kinda makes anything it's integrated with into "here's the program and it has a chatbot now that sometimes comes off like it hates you" and not "here's an AI-assisted program."

Well, it'll be interesting if the rumors that Apple is reengineering Siri into an AI-based system for next year's iOS 18 turn out to be true, then. Say what you will about how it turns out, but Apple doesn't attach anything to iOS that doesn't integrate with the rest of the system.

Smeagol17 (4 Score & 7 Years Ago)
#52: Dec 13th 2023 at 1:01:55 PM

Hope it will be better than Apple Maps (or Siri, for that matter).

BonsaiForest Since: Jan, 2001
#53: Dec 14th 2023 at 10:13:39 AM

What I want to see regarding maps is a better understanding of what I want via natural language.

I remember a few years ago trying to avoid a particular road due to a road block. I couldn't find the option to dodge that road, and using the voice function to say, "Avoid [this road]" only resulted in it saying back, "I don't know what 'avoid [this road]' means."

That's a real problem when situations like that show up. I want to be able to be exact about what I want, which I realize means programming more features into Maps as well as a good understanding of natural language. But do that, put the two together, and it'll be a more useful product.

DeMarquis Since: Feb, 2010
#54: Dec 14th 2023 at 8:06:49 PM

Sorry for essentially dropping out of the conversation, life got busy for me.

So when I use Chat GPT this way, I would claim that you are getting my thoughts. It's not coming from the AI. First, because I generated the prompts that produced the output, and I put considerable effort into generating those specific prompts. To give you some idea, I have a thirty-one-page outline with, I can only estimate, about 15 or so prompts on each page, so that's about 450 prompts. I get about half a page to a full page per prompt (although sometimes I just write the sections myself when that's easier). Then I edit the output to conform to my writing style. Any source that I cite, I read myself. I am in complete control of what appears on the page, and frankly, unless you had my outline of prompts, I don't see how you could generate the end result yourself.

The outline itself, of course, is designed to cover certain topics in a particular order so as to come to certain conclusions that I have already determined ahead of time. These I all generated myself. The reason I think I can claim that the final book will reflect my own understanding of the topic is that I generated all those prompts, their order, and the final conclusions before beginning to use the AI at all.

You might ask, if I am doing all that work, why bother with AI? The answer is because I have a weakness as an author, which is that I get writer's block a lot. Looking at a blank page causes my mind to go into a fog, at which point getting started at all is a chore. But I have no problem with outlines. So I create a long list of detailed prompts, feed them into the AI, then edit the output to conform to my preferred style of expression. Frankly, without this tool I am not sure I could write this book, at least not as efficiently. And, of course, being efficient with my time and energy frees me to focus on other projects.
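For illustration, here is a rough sketch of the prompt-by-prompt workflow described above. It assumes the OpenAI Python SDK (openai 1.x) rather than the ChatGPT interface, and the model name, example prompts, and output file are placeholder assumptions, not the actual setup.

# Rough sketch of a prompt-by-prompt drafting workflow; the model choice,
# prompts, and output file below are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for the long outline: one prompt per section, in the intended order.
outline_prompts = [
    "Summarize Descartes' mind-body dualism in plain language, about one page.",
    "Explain how later psychological research complicated that dualism, about one page.",
    # ...hundreds more prompts in the real outline...
]

with open("draft.md", "w", encoding="utf-8") as draft:
    for i, prompt in enumerate(outline_prompts, start=1):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        draft.write(f"## Section {i}\n\n{response.choices[0].message.content}\n\n")

# Each section then gets hand-edited for style, and every citation is checked
# by the author, as described in the post.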

Of course, I do not intend to keep this concealed. There will be a section or appendix describing my technique of writing, including the use of Chat GPT.

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#55: Dec 15th 2023 at 4:12:53 AM

You’re using the AI as a ghostwriter then.

That’s basically the shape of it.

Using a ghostwriter is okay in some fields (getting an autobiography ghostwritten is fine, IMO), but I don’t think it’s a great idea with philosophy.

I have to be honest though, the idea of "I created the prompts so it's basically me creating the end result" puts an incredibly sour taste in my mouth because of the attitude of a lot of the people who said things like that when the AI art thing was at its peak. It was a lot of smug "I'm just as good as the people who spend hours choosing the exact colour palette to use because I put words into a machine" type shit.

It's, quite frankly, not the same as you choosing what to put down on the page of your own will. Working with your own writing is like doing an elaborate piece of baking from scratch. Using an AI is like putting together a bunch of pre-made ingredients and only customizing it enough to make it function as a whole. That's…a fine approach to baking, especially for an amateur, but I'd think the latter baker was unbearably pretentious and full of themselves if they thought it showed the same level of skill or knowledge as the former baker.

You can do it this way, but you’re working with an AI ghostwriter and that’s inherently going to kill a lot of interest in your work and it’s likely to mean that a lot of people will assume your editing is just fixing run-on sentences and removing repetition and maybe, if they’re very lucky, replacing bits that conflict with your thesis. And I don’t think that’s an impression it’ll be possible to overcome, no matter how thoroughly you explain yourself. Especially if you make the mistake of using an AI to write that bit explaining your use of AI.

But I think a lot of it boils down to the fact that this isn't you writing; it's much closer to editing a ghostwriter who is prone to tangents and sometimes just outright making shit up in ways where it can be hard to tell.

Edited by Zendervai on Dec 15th 2023 at 7:25:18 AM

Not Three Laws compliant.
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
#56: Dec 15th 2023 at 4:26:31 AM

If it's an appendix, I doubt it would affect interest much. It would certainly be interesting to see what the result is.

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#57: Dec 15th 2023 at 5:19:38 AM

[up] It tends to be pretty easy to identify when Chat GPT is used to write something. The appendix not being immediately evident isn't going to make that problem less of a problem. ChatGPT has an unusual, excessively conversational and meandering writing style that stands out pretty directly and editing that particular voice out would basically mean…just writing it yourself.

It’s also kind of…questionable to try and bury the “I had an AI write it for me” thing in the fine print. Especially in a non-fiction book.

Edited by Zendervai on Dec 15th 2023 at 10:04:21 AM

Not Three Laws compliant.
DeMarquis Since: Feb, 2010
#58: Dec 15th 2023 at 7:39:45 AM

I like this method because it frees me to focus on the ideas that I want to explore, and not so much on the process of writing itself. I am no kind of professional writer. From my perspective, it's like using a more sophisticated form of grammar check.

Technically, I don't think it's ghostwriting because I'm not taking credit for anything I didn't do. I haven't decided exactly where to put the explanation, but I doubt sharing a byline with it is appropriate ("XYZ" written by Demarquis and Chat GPT) because the AI isn't contributing anything creatively and it isn't a person anyway. That comes across to me like a marketing gimmick. My assumption is that most people who are interested enough in the topic will read a chapter or two, and if they enjoy it will read the rest, and not care very much exactly how it was written.

I'm not sure what the fact that this is a book on the intersection between philosophy and psychological research has to do with it. Writing fiction like this might be an issue, since the voice of the writer is often considered to be an important aspect of the experience. But in non-fiction I think the emphasis is on topical interest and accuracy. To the extent that my style of thinking comes through, it will be in the form of the structure of my argument, and not how the words and sentences come across.

I suppose it depends on what the reader wants from a work of writing. Is it the content, or a meeting of the minds with the author? A mixture of the two? Leaning in which direction? Each potential reader can make a fully informed decision for themselves.

Note also that I am not taking work away from anybody. Co-writing this with someone was never an option, for a variety of personal reasons.

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#59: Dec 15th 2023 at 7:54:03 AM

The reason I think it's iffy with non-fiction comes down to two factors.

1) Non-fiction writing is generally supposed to have a level of professionalism to it. ChatGPT is (deliberately) designed to be very informal, but not in a way that sounds at all natural. It has a strange, meandering way of talking that is designed to fill space without actually saying much of substance. That's deliberate, and it's good at it, but it's part of the reason it's not very good at long-form writing.

2) ChatGPT has a known problem with spitting out garbage, including fake citations. Your citations might be real, and your research might be real, but if I stumbled across your book and realized it was written in large part by ChatGPT, I would instantly assume that at least a large part of the stuff in there was bullshit, because the program isn't intended to be used this way. It's a chatbot, it's not supposed to be writing academic literature and it's bad at it. And I don't have time to check every single citation to make sure it's a real one. If it was professionally published, I might be more willing to trust it, but I'm guessing you'd self-publish it. And how do you prove that citations are real in a book like that?

You also likely won't be able to copyright the book.

Like...here's the thing. The way you've described it makes it sound like the program will be doing the bulk of the actual writing and you'll be sorta hammering it into shape in editing. That's what James Patterson does: he doesn't really write most of the stuff he's credited with, his cowriters do, and Patterson tweaks it to make it sound more like him.

And it is basically ghostwriting. The hiccup isn't the crediting thing, it's that you're not doing most of the actual act of the writing, you're basically just editing it. It might come from your notes, but it's not your words.

Also, and this is an actual piece of advice: you need to be rereading the older chapters of it constantly. Like, every time you have a new chapter done. I'm not kidding. Because of the way the program works, it's going to be drifting a lot, especially since it has no idea that this is one big project and not just a string of small ones. If you aren't extremely careful and vigilant, it's going to introduce elements in later parts that contradict elements in earlier parts and that's very easy to miss if you aren't doing the bulk of the act of actually writing it.

Note also that I am not taking work away from anybody. Co-writing this with someone was never an option, for a variety of personal reasons.

This isn't something I'm concerned about. The thing I'm concerned about is that you're using a tool that wasn't designed for this purpose and is infamously bad at it, and it means that 1) the output might be a lot lower quality than you realize since you're invested in it and 2) it'll make a lot of the potential audience immediately suspicious of anything that's said in it or if anything in it is even real.

...it might be a good idea for you to ask someone who knows what AI writing looks like to look at the first few bits of the book before you go any further, if only to make sure that you aren't falling into the traps without noticing it.

Edited by Zendervai on Dec 15th 2023 at 2:53:12 PM

Not Three Laws compliant.
Silasw A procrastination in of itself from A handcart to hell (4 Score & 7 Years Ago) Relationship Status: And they all lived happily ever after <3
#60: Dec 15th 2023 at 12:25:03 PM

To me what’s been described just sounds like a more high-tech version of me using a ruler, line tool or fill tool when drawing an artistic image.

Aiding someone who has difficulties with one part of the artistic process is exactly what we should want such tools to do. They’re not reducing the amount of content in the world, they’re increasing it by opening the door to people who for whatever reason couldn’t get across the threshold before.

It makes me think of the conversations that have emerged in the online D&D map-making community about the use of Inkarnate: yes, a tool that lowers the barrier to entry means we get more poor-quality stuff, but it also means that more people have the opportunity to get their message out to people.

I do wonder if AI could be used for similar things in other areas. I have a lot of trouble doing written examples of past actions for job applications. It's not that I don't have good examples or that I can't identify them; I just have a hard time writing them down in the way I need to. In the past I've had people help me out by basically acting as a scribe for me as I provide the example verbally. An AI that could do that could be helpful to a lot of people, especially neurodiverse individuals.

Edited by Silasw on Dec 15th 2023 at 8:27:59 PM

“And the Bunny nails it!” ~ Gabrael “If the UN can get through a day without everyone strangling everyone else so can we.” ~ Cyran
indigoJay from The Astral Plane Since: Dec, 2018 Relationship Status: watch?v=dQw4w9WgXcQ
#61: Dec 15th 2023 at 12:57:31 PM

You also likely won't be able to copyright the book.

This is the only real legal problem with using Chat GPT to write your sentences. It's laughably easy to adjust Chat GPT's tone away from its default informal one. When it comes to shitty writing or hallucinated information, the author is responsible for combing through their work to fix those things regardless of whether AI is involved. Paper sources go out of date, internet sources can be misinformation; looking through your work to make sure all the information in it is still correct is the bare minimum. Given that ghostwriting is legal, I don't see how "taking credit for something that others helped write" is uniquely bad when AI is involved.

Using Chat GPT to write a book will probably leave you with a shitty book. People probably won't read it. That's true of many human-made books (lots of people have bland styles! lots of people don't check their sources!). But copyright law is the one thing that makes selling a shitty Chat GPT-made book less feasible than selling a shitty human-made book. You can probably get away with it if no one cares enough about your book to prove you used AI and bring a case against you, but still.

There is no war in Ba Sing Se.
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
#62: Dec 15th 2023 at 1:00:54 PM

[up]x3 Mmm, that does sound quite a bit like saying "you should only do this thing using the prescribed methodology; if you do it some other way, then because that way does things badly, the rest of your work won't count even if you fix all of those bad things". Now, this might be non-fiction we're talking about rather than code, but that is basically what the AI-assistant coding stuff is geared for, and it's professional and experienced developers who are enjoying using it (or maybe I'm just surrounded by weird people), and that's fine, because it still does enough right to make the burden of work easier.

Hell, I've done more or less the same thing by writing a first draft of something drunk and then coming back and completely rewriting it the next day, because at least then I had a framework for the writing rather than "hm, I know what I need to do and have happen, and maybe one or two exact lines of dialogue, but I keep blanking on the connecting bits". I'd rather have a shoddy draft or one I don't personally like, because at least then I'm not caught in some infinite loop of 'how do I start the next part'.

If AI could respond specifically enough to things like that, it'd be pretty interesting.

[up] I don't think the law has come down on the side of 'If you use AI and then heavily change it, it can't be copyrighted', just that the AI output can't.

Edited by RainehDaze on Dec 15th 2023 at 9:02:46 AM

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#63: Dec 15th 2023 at 2:06:32 PM

Mmm, that does sound quite a bit like saying "you should only do this thing using the prescribed methodology, if you do it some other way then because that does things badly the rest of your work won't count even if you fix all of those bad things".

It's more that ChatGPT specifically is a bad tool for the purpose of long-form writing. It doesn't matter if it's good at coding, because coding and long-form writing are two different things and it's really not designed for the latter thing. The question I have is why it's worth it to wrestle with the AI output and beat it into shape to make it resemble the point you have to make, when you could just use other techniques and write the words you want to write.

To the extent that my style of thinking comes through, it will be in the form of the structure of my argument, and not how the words and sentences come across.

Like...I'm going to be direct, this approach doesn't make sense to me. The structure and the argument are dependent on the words and the sentences. The thing that's completely losing me is that the concept Demarquis said the book was about (starting at Descartes and working through to modern political polarization) is...a weird idea. That's not a bad thing, but there's clearly some sort of narrative there, there's a through-line. There's a clear story to be told, and I don't understand how breaking it up into chunks a page or two long is supposed to work. Books like that aren't collections of tiny essays; they're structured like narratives where events and concepts flow into each other, because you want the reader to be able to go from point A to point B to point C. If it has a weird, herky-jerky pacing because it's not flowing (because you can't use the program that way at this point), you lose the hook, you lose the flow. You lose why the audience should care.

Like, if you're writing in the way Demarquis describes, how do you connect Descartes to modern political polarization at the start of the book, and how do you call back to him at the end of the book? How do you weave all the different elements together when the chatbot has no idea what's going on? It legit sounds much easier to just write the book. Because that way, you're not constantly having to correct this other writer, one you can't ask questions of and who has no idea what it did or said five minutes ago.

If it's a series of microessays, it's going to be clunky and awkward, because people don't write like that. And if it's not separated out and it's the chatbot output kinda just put in order, well, then you run into the problem of needing to write all of the connective tissue, which is the hard and boring part. The other element is that I am very much wondering why this topic is so interesting to Demarquis, which would need to be set up at the start of the book to serve as the hook and I don't see how having a chatbot write it will get that across, like at all.

Edited by Zendervai on Dec 15th 2023 at 5:07:59 AM

Not Three Laws compliant.
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
#64: Dec 15th 2023 at 2:10:40 PM

It's more that Chat GPT specifically is a bad tool for the purpose of long-form writing. It doesn't matter if it's good at coding, because coding and long-form writing are two different things and it's really not designed for the latter thing. The question I have is why it's worth it to wrestle with the AI output and beat it into shape to make it resemble the point you have to make, when you could just use other techniques and write the words you want to write.

It does sound like 'because there's not another technique that would work'. That's basically saying "It's better that you get stuck because of writer's block and don't do what you want to, because the AI is bad and it moves much of the work around to redrafting and editing and you have to do a lot more of that".

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#65: Dec 15th 2023 at 2:17:30 PM

Okay, so maybe Demarquis could try dictating it? Or starting with the outline and gradually getting more detailed? Or discussing the topic with someone else and recording the discussion? There's a lot of options that aren't "hey, you know this program that is in no way designed to be able to do this, to the point that you're constantly dealing with a hard limitation in output length? Maybe you should wrestle with that and have to contend with the idea that it might spit out something you really don't want to say and overlook it because it's gonna be doing that a lot?"

Writer's block sucks, yes. I get it really bad. I would never dream of using a chatbot for my own writing because 1) it is not designed to do that and 2) I don't want "close enough to my ideas to be good enough", I want my ideas on the page. For me, at least, using an AI chatbot isn't a solution to writer's block because it's not actually solving it. It doesn't matter how much I edit it or hammer it into shape, it's still not my writing at the core of it.

But ultimately, my big objection is that ChatGPT isn't designed for this purpose. It's like using a screwdriver as a hammer. You can do that, and it'd work sort of okay, but it'd be significantly less effective than just using a hammer, and you'd run a really high risk of damaging the screwdriver in the process. Admittedly, you won't break a chatbot like this, but the output isn't going to work properly for the kind of structure that the concept Demarquis described is just screaming for.

Edited by Zendervai on Dec 15th 2023 at 5:20:10 AM

Not Three Laws compliant.
Silasw A procrastination in of itself from A handcart to hell (4 Score & 7 Years Ago) Relationship Status: And they all lived happily ever after <3
#66: Dec 15th 2023 at 2:23:13 PM

I really don’t think we should be telling people that they should stop using harmless tools that work for them and have proven helpful just because we don’t like the tool personally.

“And the Bunny nails it!” ~ Gabrael “If the UN can get through a day without everyone strangling everyone else so can we.” ~ Cyran
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
#67: Dec 15th 2023 at 2:25:46 PM

Also, every alternative you've suggested is either: 1) work out how to say it and just put off the writing until later, or 2) describe what you want in a way that isn't capable of being written down, both of which, like... don't fix the problem. To extend the tool analogy, it's moving from a screwdriver to a technical manual: tangentially relevant to the goal, but not really going to progress it. <_>

Plus, "that's not what it's made for" should never stop someone from using something for a task if it works for them and isn't dangerous.

Edited by RainehDaze on Dec 15th 2023 at 10:27:11 AM

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#68: Dec 15th 2023 at 2:50:02 PM

[up] I didn't know that transcription stopped being an option.

There are non-fiction books that explain complicated processes and concepts by being written as a dialogue; it's a genuinely ancient tradition. And more recent ones tend to be a legitimate dialogue that was recorded and transcribed.

Not Three Laws compliant.
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
#69: Dec 15th 2023 at 2:50:58 PM

"Write your book as a dialogue instead" is, again, going right into "don't do what you want to do, do what I think is better."

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#70: Dec 15th 2023 at 2:54:07 PM

Okay. I still think it would be a good idea for Demarquis to get someone familiar with what AI writing sounds like to look at his writing so far, just to be sure.

Not Three Laws compliant.
editerguy from Australia Since: Jan, 2013 Relationship Status: You cannot grasp the true form
#71: Dec 15th 2023 at 2:56:40 PM

The fact you may not be able to copyright the work does seem like an issue.

I think Chat GPT is quite adaptable in terms of tone and content, so I doubt the writing style will be a significant issue.

Florien The They who said it from statistically, slightly right behind you. Since: Aug, 2019
#72: Dec 15th 2023 at 5:00:32 PM

because coding and long-form writing are two different things

Setting aside the litigating of how other people do things that don't really matter to you, this sounds incorrect.

I don't know for sure, but I'm pretty sure those two things are basically the same thing to an LLM. I mean, both are writing in a language to convey information. And when we know the same software can be applied successfully to protein-folding problems, that implies the range of analogous tasks is pretty wide.

Edited by Florien on Dec 15th 2023 at 5:00:57 AM

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#73: Dec 15th 2023 at 5:18:52 PM

Um...you can't use LLMs for long-form writing properly because they all have hard maximum limits on how much text they can take in and put out at once. Like, they're not good for it because they literally can't do it properly. They might be better at it in the future, but as is, you run into the problem of having to "write" in small chunks where the AI doesn't remember any previous bits when you do a new bit. It's an actual, pretty serious limitation on the tool.
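As a rough illustration of that limit, here is a small Python sketch using the tiktoken tokenizer; the assumed context size and manuscript length are illustrative, since real limits vary by model.

# Sketch of the limitation described above: a book-length draft doesn't fit in
# one request, so it has to go in as separate chunks that can't "see" each other.
# The window size and word count below are illustrative assumptions.
import tiktoken

CONTEXT_WINDOW_TOKENS = 8_192      # assumed limit for an older chat model
manuscript = "word " * 90_000      # stand-in for a ~90,000-word book draft

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode(manuscript)
print(f"draft: {len(tokens):,} tokens vs. a {CONTEXT_WINDOW_TOKENS:,}-token window")

# Split into window-sized chunks; each one would be a separate, memoryless request.
chunks = [tokens[i:i + CONTEXT_WINDOW_TOKENS]
          for i in range(0, len(tokens), CONTEXT_WINDOW_TOKENS)]
print(f"that's {len(chunks)} separate requests, none of which remembers the others")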

The other element is that coding really does work differently. Checking if a program works or not is simple. It's a lot less simple to check if a line of text has the wanted emotional response from a human reader.

Saying "LLMs are not currently a very good tool for long-form writing because of specific hardcoded limitations" is not really rebutted by "but it's good for coding". The needs and demands are different, it isn't just "spits out text that kinda makes sense".

Last time I checked, knowing good coding practices isn't very helpful for writing an essay, or vice versa; the needs of the specific output are pretty different. AI can write text, but it's not designed for long-form text at the moment. It might be able to do that in the future, but it is not currently designed for it.

Edited by Zendervai on Dec 15th 2023 at 8:20:52 AM

Not Three Laws compliant.
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
#74: Dec 15th 2023 at 6:07:02 PM

The other element is that coding really does work differently. Checking if a program works or not is simple. It's a lot less simple to check if a line of text has the wanted emotional response from a human reader.

No it isn't.

"Is this piece of code formatted correctly" is easy. "Does this code do the thing it's expected to do, especially without knowing the full scope of the program and even if you do" is really fucking hard and way beyond language models. <_>

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#75: Dec 15th 2023 at 6:43:18 PM

Fair enough, my mistake.

...but that means "but what about coding" is even less of a meaningful response to talking about the issues with long-form writing.

Edited by Zendervai on Dec 15th 2023 at 10:20:20 AM

Not Three Laws compliant.
