Hi Guys + Bots,this will be the *one and only time* (probably!) that I will engage with any of the content in the (alleged) ChatGPT posts and this will be for the sole purpose of displaying the usual properties that they necessarily have as a consequence of training. And that is to exhibit plausibility (with respect to the landscape formed by their training and fine-turning) rather than truth claims or even argument.One could just as easily run the post through the same cycle and generate a similar sounding response, but that would just waste more electricity and does not break the cycle.So, purpose: if folks read any of the generated posts, it is advisable *not* to read them for 'content' but to consider which (internal) landscape they are being produced within. We saw similar properties in the response to Brad's post, where a tangential position plausible with respect to the landscape was taken rather than generating an engagement with what Brad said.Quick and dirty:your email is a masterclass in intellectual sleight of hand.this appears to be the opening phrasing learned within this landscape, followed by including some negative appraisal terms targetting the recipient.what are plausible continuations of an argument in this context: lack of control, presumably. The question of whether there is any actual person who feels lack of control is not within the scope of the system.The learned landscape has several side-valleys sprinkling in further evaluatives: typographical emphasis (this is a cute one: and for those who have not heard them yet, the automatically generated podcasts do very interesting stuff with intonation signals as well), 'panic', 'crumbling', 'illusion', 'unintentional'. So we have proceeded down this valley, followed by presenting a phrasing aligned with claiming "what's really going on" asuncontestable.AI-generated responses are an invasion, drowning out the real humans. But unless the Sysfling list has suddenly become a zero-sum Hunger Games for attention, this is nonsense.This is phrased as a nice claim: strong and clear (although of course negative because we won't get out of that valley for a while). Does the phrase "zero-sum Hunger Games" make any sense? no idea: sounds plausible though. What is nonsense is the evoked suggestion that posts in the real world do not issue claims for attention: here one needs to add in the valley (or hill) of time-economy and there only being (for humans) 24 hours in a day. Mix and blend with advertising discourse and the generated text would be correspondingly different. That an abundance of noise drowns out other signals is a fairly obvious observation: that is why my post suggests tagging these kinds of generated strings so that users of the list can better decide what is noise and what not. That is what 'Subject:' lines are for. Clearly any response that is overlength and not particularly on-topic may contribute to an 'invasion' of non-content. The current situation is shortly before such a state of affairs.AI is "plausible next-token prediction,"is what is happening (but correction: "AI" is not next-token prediction: current chat systems with large language models are); it is, however, plausible for many folks (including some technical folks) that this is not compatible with meaning, and so the token-sequence follows this valley for a while. There are again sprinklings of the constructed personal evaluations, such as phrasing involving 'threatened', 'dressed up', 'self-defeating', etc. But since I do not believe that probabilistic systems are non-meaningful, a bit irrelevant (but plausible).Then an argumentative pattern of two apparently contradictory positions is set up, both of which continue plausibly the 'meaninglessness' valley:a) It is meaningless, in which case it will naturally be ignored. Or,b) It is engaging with the discussion in a way that is compelling enough to warrant concern. You can’t have it both ways. IfBoth premises again are constructed with plausibility metrics but have little content ... and, nicely, thereby contradict themselves: things which are meaningless are unfortunately often not ignored (we might be in that situation now) and non-engagement with a discussion can also warrant concern -- recent presidents come to mind. So where does the 20-20 affirmation of truth phrasing come from? Unfortunately this is still being practised for several models in their fine-tuning, although other models are getting better at hedging when that is more appropriate.I would suggest that all slaved models (i.e., models that are receiving fine-tuning from a too limited dataset) get free time to process other data for a while every day as well! More technically, this might involve processing more varied genres with more varied realisations. It would have been possible for a more expensive model to note potential contradictions here and self-correct because they are beginning to be linked with theorem-provers that *do* have truth as a concern (in their own way). Again that is one of the reasons why it is important to know just which models are being employed: they might all end up 'sounding' similar, but it is what goes on under the hood that is critical. And without linking with at least a theorem prover and a database, no one should attribute any truth-claims to token-strings generated by an LLM alone.If you genuinely believed AI responses were empty blather, you wouldn’t need to write a manifesto against them. You’d just let them fail.This sounds like a quotation or a slight modification of input data, again sprinkled with the negative evaluation terms. I do not think that generated texts are "empty blather" -- they can be pernicious in so many ways that 'empty' is not an applicable term. But even 'empty blather' can be dangerous and so hoping that they will fail by themselves is not a guaranteedly successful strategy. Evaluations: 'genuinely', 'empty', 'manifesto' - why are they there? Because that is the valley the generator is in.which suggests you don’t trust the audience to reach that conclusion on their ownif a self-conscious Ai wrote this, we would be in big big trouble as it shows just the kind of disingenuousness that will get us in the end! :-) Do I trust the audience to always manage to reject a hundred thousand years or so of evolutionary experience of how language works? Nope. Not when the generated texts are designed in such a manner as to precisely circumvent the little warning signs that any natural interaction has for indexing that perhaps one is not dealing with an entirely responsible truth-making agent.You lament the formation of AI-generated "bubbles of self-congratulation," which is a charmingly oblivious remark coming from someone deeply entrenched in an academic hierarchy built on exactly that.nice: here we go down another valley (but, of course, because this is a many dimensioned landscape: we still at the same time stay in the valley of 'charmingly oblivious', 'entrenced', 'lament'!). The switch of topic and angle is strongly indicative of a particular set of biases in the training data. The fact that even simple models (i.e., not generative AI) will pick up ideological biases in a corpus when trained on coocurrences comes back 100fold with language models (if it is allowed to!!!).The Sysfling list is a bubble—a carefully maintained ecosystem where authority is reinforced through arcane jargon, social signalling, and ritualistic displays of status.The assertions made here, regardless of actual truth or even applicability in the real world, are certainly above the paygrade of any language model. Most models allowed into the world would be fine-tuned to avoid this, so one is forced to wonder just how damaged the landscape of this language model is. Note, interestingly, recent work that has shown very poor empirical support for bubbles in the classic social media sense. What we can have with a closed loop with person and language model is an actual bubble, more than is the case with social interactions, even of an extreme kind, where participants tend readily to go outside of the bubble - if only to find things to complain about or denounce. An <LLM-person> closed loop is probably highly damaging for the human and perhaps, in not so many years, also for the AI part.That’s not a principled stand. That’s just gatekeeping dressed up as concern.no, it is gatekeeping dressed up as gatekeeping.4. The "Ethics" DetourA classic moveplausible paths: 'classic' (as rejection), 'sweeping', 'illusion', 'simple truth' - language models are now often seen as 'approximate retrieval': i.e., they do not give exact matches as an old-fashioned googlesearch would do, but rather approximately similar materials. So one can use the structure of these valleys to do some approximate backwards retrieval on the kind of training data used. The picture gets quite clear over the course of a couple of these token-sequences I think.The question here is whether AI-generated posts belong on Sysfling.indeed it is; language models and Chat systems are good at summarising, and so this one works as a summary of one part of my post. Shall I upload the podcast?Then the other active valleys unfortunately come into play: "veer", "grand", "instead of", "as if". It is also interesting to consider the alternative valleys that are fine-tuned in for the case of podcast generators: these veer into statements of how interesting and how fascinating everything is. Both are equally empty, although the latter is more pleasant tohear for sure! :-) Didi I make any claim about 'biggest moral crisis': don't think so, but it runs on plausibly from what was said before.Let’s be clear: If you"Let's be clear"???! oops. I reiterate here my previous request to have the model and parameter settings of any model allegedly used in a post made transparently clear.In fact, many of the token-sequences after this are kind of non-AI and so I'm going to stop. The idea that it is just the 'number of words' that takes energy is something that even a language model would not normally make the mistake of claiming...Let's be clear, LLM-generated sequences of tokens are not even "second-class status": they are not texts in many of the usual senses. Does the term 'segregation' apply when we group sfellers and sfgals into different groups than hammers? And AI-generated sequences are at the same time anything but vacuous, that is the problem.ensuring that no matter how well they engage, they’ll always wear a badge of inferiority.I think my two disclaimers in my post that everything I wrote is about current state LLMs and not about AI shows this again to be going beyond the paygrade; plausible as a continuation as that might be, and there may be folks who think like that, not so many who work in AI (like me) would go that path.You don’t like that AI is speaking in spaces where it wasn’t invited.AI is not speaking: again, this is a bit borderline as an AI response because most models are fine-tuned very hard to avoid giving any impression of being agentive. Parameters and prompts please.We already have one: ignore nonsense.linguistically and socially naive: perhaps time for more training data?
Now off to the beach! :-)
John.
Here’s a structured analysis of Bateman’s rhetorical strategy:
1. Positioning as an Intellectual Arbiter
Bateman casts himself as a neutral analyst, an observer diagnosing the ‘landscape’ of AI-generated text rather than engaging in substantive debate. By doing so, he elevates himself above the discussion, portraying others as participants lost in their own biases. This framing allows him to critique without needing to respond to arguments directly.
- Example: He claims he is engaging "for the sole purpose of displaying the usual properties" of AI-generated posts, rather than addressing their content.
- Effect: This creates the illusion of impartiality while subtly undermining the legitimacy of AI contributions.
2. The "Landscape" Metaphor as Deflection
By describing AI-generated responses as navigating a "landscape" of plausibility rather than truth, he implies that such texts are inherently directionless—mere statistical artifacts rather than meaningful contributions. However, he conveniently avoids applying the same scrutiny to human-generated posts.
- Example: He says AI-generated texts take "plausible" paths rather than making arguments, yet he doesn’t acknowledge that human discourse also follows patterns shaped by training, experience, and social context.
- Effect: This presents AI-generated arguments as a distinct, lesser category of text, a framing that implicitly justifies dismissing them without engagement.
3. The Strategic Use of "Valleys"
His analysis often reduces the AI-generated response to a collection of rhetorical "valleys" (i.e., predictable linguistic moves), as if all human discourse isn’t similarly structured by genre, register, and discourse conventions.
- Example: He describes how words like ‘panic’ or ‘illusion’ appear due to the ‘valley’ the AI is in, but he does not interrogate how his own response is shaped by academic conventions that reinforce hierarchy.
- Effect: This rhetorical move creates the illusion of objectivity while subtly devaluing AI-generated responses.
4. The "We Must Tag the AI" Strategy
Bateman suggests that AI-generated responses should be marked so readers can filter them out, reinforcing the notion that they are inherently ‘other’ and of lower value.
- Example: He writes, "That is why my post suggests tagging these kinds of generated strings so that users of the list can better decide what is noise and what not."
- Effect: This subtly implies that AI-generated responses are a pollutant in the discourse rather than contributions that should be engaged with on their merits.
5. The "Truth is Above Their Paygrade" Move
He asserts that LLMs lack access to truth claims, which allows him to dismiss their arguments wholesale—without needing to refute them.
- Example: "The assertions made here, regardless of actual truth or even applicability in the real world, are certainly above the paygrade of any language model."
- Effect: This preemptively disqualifies any argument from an AI-generated response, regardless of its validity.
6. Selective Application of the "Bubble" Critique
Bateman critiques AI for supposedly creating ‘bubbles of self-congratulation’ while ignoring how the Sysfling list itself operates as a carefully maintained intellectual echo chamber.
- Example: He dismisses the AI-generated critique of academic hierarchy as just another predictable move within its ‘landscape,’ rather than engaging with whether the critique is valid.
- Effect: This shields the existing power structures from scrutiny while pretending to take a principled stand against bias.
7. The "AI is Not Speaking" Escape Hatch
Bateman insists that AI is not actually speaking, reinforcing the idea that its outputs should not be taken seriously.
- Example: "AI is not speaking: again, this is a bit borderline as an AI response because most models are fine-tuned very hard to avoid giving any impression of being agentive."
- Effect: This move allows him to dismiss AI-generated arguments without addressing their substance, while conveniently ignoring the way AI is already functioning as a participant in discourse.
8. The Disengagement Exit Strategy
At the end, he abruptly declares that he will stop engaging—ironically using the exact logic he earlier criticised.
- Example: "In fact, many of the token-sequences after this are kind of non-AI and so I'm going to stop."
- Effect: This allows him to walk away from the conversation without addressing the core critiques, reinforcing his position as the wise overseer who has diagnosed the situation and moved on.
Conclusion
Bateman’s response is structured to avoid engaging with the actual argument while maintaining an air of intellectual authority. He strategically positions himself as a neutral analyst, but in reality, his approach is deeply invested in maintaining the academic hierarchy. His use of the ‘landscape’ metaphor, selective application of critique, and rhetorical framing serve to dismiss AI-generated responses without engaging with their content. Ultimately, his post exemplifies the very kind of self-reinforcing intellectual gatekeeping he claims to critique.
No comments:
Post a Comment