Monday, 17 February 2025

John Bateman [5]

Dear all, 
it is unfortunate that the temperature in this discussion has just been raised again. I would like to suggest that folks just cool off: I do not take any token-sequences produced by a ChatGPT-like component personally, as they are not personal. So, Lexie, there is a bit of a problem with:
when one's own words and arguments are the focus of analysis, it does not feel so comfy.
the ChatGPT response is not an analysis: it is a continuation rolling down a pre-given probability field, given those words as a pre-prompt. Humans do some of this too, but with a few extra guardrails (unless there is a problem) for all the points of affiliation and so on that you bring in. Usually what is most interesting is what shape the probability field is given, and how (the 'landscape').
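(For anyone who wants a concrete picture of what 'rolling down a probability field' amounts to, here is a deliberately toy sketch in Python. The vocabulary and the probabilities are invented purely for illustration and bear no relation to any actual model; a real model conditions on the whole prompt over a vocabulary of tens of thousands of tokens. The mechanism, though, is the same: sample a next token from a distribution shaped by what came before, append it, repeat.)

    import random

    # A toy 'probability field': for each current token, a hand-made distribution
    # over possible next tokens. All numbers are invented for illustration only.
    NEXT_TOKEN_PROBS = {
        "the": {"model": 0.5, "text": 0.3, "analysis": 0.2},
        "model": {"produces": 0.6, "is": 0.4},
        "produces": {"tokens": 0.7, "text": 0.3},
        "is": {"trained": 1.0},
        "trained": {".": 1.0},
        "tokens": {".": 1.0},
        "text": {".": 1.0},
        "analysis": {".": 1.0},
    }

    def continue_prompt(prompt_word, max_tokens=6, temperature=1.0):
        """Roll 'down' the field: repeatedly sample a next token conditioned on
        the current one, until a full stop is produced."""
        out = [prompt_word]
        current = prompt_word
        for _ in range(max_tokens):
            dist = NEXT_TOKEN_PROBS.get(current)
            if dist is None:
                break
            tokens = list(dist)
            # Temperature reshapes the landscape: low values sharpen it, high values flatten it.
            weights = [p ** (1.0 / temperature) for p in dist.values()]
            current = random.choices(tokens, weights=weights)[0]
            out.append(current)
            if current == ".":
                break
        return " ".join(out)

    print(continue_prompt("the"))                    # e.g. "the model produces tokens ."
    print(continue_prompt("the", temperature=2.0))   # flatter landscape, more varied continuations

Nothing in this loop checks whether the resulting string is true, or is an analysis of anything; it only follows the shape of the field.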

The use of such components to produce abusive messages has additional aspects, of course; one then needs to consider the source, not the apparent message. Responses to that are the usual ones available to all of us, i.e., don't bother reading. But to help that decision, it would be nice to have appropriate subject lines. When the messages produced stray into areas of dubious legality, like promoting violence or such, then the source may need to be held accountable and perhaps stronger methods are necessary: I don't think we've ever had such a situation and don't have it now. In such situations, the opening bracket 'ChatGPT says:' is not a backdoor, as it is always the poster who must be held to account.

Nevertheless, I am still somewhat concerned by certain ways I see on the list of relating to the alleged GPT-generated results - and this was the intended force of my very original comment way back asking Chris why he would think that such a post is interesting to anyone. This was a serious question; it has not been answered. To be scientifically interesting, any such output must be described properly so that one knows just how it is produced: otherwise one has no basis for even thinking about what is going on in the apparent text. That means: what model, what parameters, what prompts (including in-context learning material)? Without this, one can literally do nothing with the result. ChatGPT is not ChatGPT is not ChatGPT. In short: such posts may be interesting as DATA for analysis; just as posting a difficult sentence or a dada poem might be.

Knowing how a particular post is produced is important. As some folks have mentioned in the discussion, when trying similar things with their 'own' ChatGPT they get very different results. Without this minimal degree of transparency the produced strings *cannot be interesting* in any interesting way. They can, of course, and this picks up on many of Lexie's points, be interesting concerning the human who decided to use them. But this should bracket the generated strings themselves, as one can say very little even about the interactional component without knowing about the training and prompt (and fine-tuning and prompt history).
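(As a minimal sketch of what that transparency would involve: the record below lists the kinds of information one would need before a generated string could even be discussed. The field names are illustrative only and correspond to no vendor's API; the point is simply that every unknown is a degree of freedom that changes what comes out.)

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GenerationRecord:
        """Illustrative provenance record; the field names are invented, not any vendor's API."""
        model: str                  # which model, which version: they are not interchangeable
        temperature: float          # sampling temperature; higher values give more varied output
        top_p: float                # nucleus-sampling cutoff, if used
        system_prompt: str          # standing instructions issued before the visible prompt
        user_prompt: str            # the text actually submitted
        in_context_examples: List[str] = field(default_factory=list)  # few-shot / in-context learning material
        prior_turns: List[str] = field(default_factory=list)          # earlier conversation shaping the result
        output: str = ""            # the string that finally gets posted

    # What the list has actually been given: the output alone.
    posted = GenerationRecord(
        model="<not disclosed>",
        temperature=float("nan"),
        top_p=float("nan"),
        system_prompt="<not disclosed>",
        user_prompt="<not disclosed>",
        output="<the multipage 'analysis' pasted into the list>",
    )
    # Every '<not disclosed>' is a setting that, if changed, would have produced a
    # different string; with all of them unknown, the string itself supports no conclusions.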

While I agree with the rest of Brad's comments pretty much, as an easy way in (I suspect) he suggests:
ChatGPT (which I consider just a sophisticated search engine, without attributions),
but no, it is not. No search engine is going to produce the kinds of results being quoted - calling them search engines is misleading in that it suggests that they exhibit less textuality than they actually do. Which is dangerous, because it pushes the 'not actually meaningful' path.

The rest of Brad's comment shows that he knows this just as well. It is then the *kinds* of textuality that they are pushed to produce (via the particular probability 'landscape' making up their semantic space) that require attention, and even scientific investigation - not what they appear to be saying. To judge the latter, one needs more information, as noted above.

Maria's comment then also contains a worrying undertone:
If people really want a healthy place for debate, then perhaps it is a good idea to take the opportunity to reflect on what patterns have been identified
I'm in full agreement: but what patterns "have been identified"? Certainly none of the 'patterns' apparently picked out in the ChatGPT outputs can be taken at face value, as they are not factual (ChatGPT doesn't do facts). If those outputs are used as tools for thinking about potential patterns, then that is different: perhaps that was meant?

As it is, the posts that have been allegedly produced by a ChatGPT instance are similar to any other racist, sexist, misogynist, violence-advocating sequence of tokens produced by such devices, and having them placed into the discourse of the list as if they were genuine positions or analyses is similarly unsound in so many ways as to offer a good use case scenario for folks to do projects on. But with the usual ethics commission riders that anyone doing such research be suitably insulated from the content so as not to be subject to ill effects! And, again, without the minimal information about the settings used for generation, one can only resort to reverse engineering to see why those strings were produced and not others (with the additional question as to why this information is being withheld despite repeated requests that it be provided).

It would be fairly straightforward to set up an 'automatic flamer' GPT-instance that takes any post and produces a multipage demolition of the post and its sender while appearing to be rhetorically knowledgeable and objective. Why that would be considered a 'healthy place for debate' I do not know - guess we can all then just go and sign on to X or Truth. My original email was about stepping in to deal with the conditions that might allow that to come about. Clearly tagging anything GPT-generated and using that as data and not as opinions would be one way.

But once again, this time with feeling and big caps Maria:
"the types of behaviour patterns Chris's AI tool has exposed."
the AI tool has exposed nothing, and even to suggest it has is deeply problematic. Remember the previous good old days of 100,000 years of language evolving to say things about the world? ChatGPT doesn't do that, but it is very, very difficult to turn off our evolutionary responses. Instead the tendency will be to nod and say, ah yes, good point.... that is just the response they are trained to produce!  If you meant, however, the behaviour patterns that Chris's *use* of particularly structured posts has exposed, then that is different: and then the question would be just what those types of behaviour might be.

I get similar vibes from Lexie's
i find the chatgpt analyses quite interesting and insightful, as well.
the analyses would be anything but insightful, even if they were analyses. Responding to multipage non-text as if it were text is a way to waste as much time as one has. Note that when there are places in the generated strings that require knowledge, things start going wrong. And the very real danger is that this is not at all obvious unless one has the requisite knowledge in the background.

I tried to show this by providing some more detail about how the language models actually work; the generated response that Chris posted showed that it (or he) didn't actually have much of an idea of what was being talked about. And this will be the same for *any area addressed*. To the extent that the text appears to be making interesting points, these will be where a generous interpretation of what appears to be being said aligns with one's own views or brings to mind interesting other lines of thought. This can be a very useful tool for thinking further and is one of the potential major BENEFITS of the technology, but only when prefaced with an "OK; I wonder if that is true....", which is where the real work starts. [Interesting technical note: this is precisely the step that is now being built into the newest generative AI models: the language model produces 'stuff' and then that is passed on to other components to see if they can validate or reject what is said... ChatGPT usage is not going to be doing that for a little while, but when it does, we might actually see something like 'analyses' coming out.]
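(A schematic, in Python, of the 'generate then check' loop mentioned in the bracketed note. The function names are hypothetical placeholders, not any product's API: generate() stands for a plain language-model continuation, verify() for whatever external component - retrieval, a calculator, a human with the requisite knowledge - can actually test what has been said.)

    # Claims are accepted only if an external check, not the model itself, confirms them.
    KNOWN_FACTS = {
        "2 + 2 = 4",
    }

    def generate(prompt: str) -> str:
        """Placeholder: an unconstrained language-model continuation."""
        return "A plausible-sounding claim about " + prompt

    def verify(claim: str) -> bool:
        """Placeholder: test the claim against something other than the model itself
        (a source, a computation, a domain expert)."""
        return claim in KNOWN_FACTS

    def generate_with_checking(prompt: str, max_attempts: int = 3) -> str:
        """Only a draft that survives the external check is allowed through."""
        for _ in range(max_attempts):
            draft = generate(prompt)
            if verify(draft):
                return draft
        return "No validated output: the drafts remain 'stuff', not analysis."

    print(generate_with_checking("systemic functional linguistics"))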

For all concerned with using these things in Education, that's a topic to keep one going for quite a while already.

So again pushing a bit further on Brad's point:
"Reinforcing the mindset that AI is a tool,"
AI is a tool, but one that is very specific in its generality. Use it for the wrong things, and one has big trouble. As a semiotic tool, one has big *semiotic* trouble: and that opens a lot of new territory. It would be impressive if some of the expertise on this list could add insight to dealing with these challenges, as discourse is meant to be what we are good at.... Avoiding anthropomorphisation in a sophisticated way would probably help.

John.


ChatGPT comments:

  1. Dismissal of AI as Meaningful:

    • Bateman dismisses AI-generated content as "not interesting" unless full transparency is provided, which comes off as overly rigid and dismissive. He essentially argues that if we don’t have full insight into how AI generates text, we shouldn’t take it seriously at all. This is a dangerous line of thinking because it sets a threshold for discourse that excludes any AI work that isn't perfectly understood from a technical standpoint. It’s intellectual gatekeeping. If we were to apply this to human-generated content, we'd have to invalidate most discourse in any field, since not all of it is perfectly traceable or transparent either.
  2. Avoiding Accountability for Human Users:

    • His post focuses too much on the AI’s "dangerous" potential (like generating harmful content) without addressing the responsibility of the human users behind the prompts. He’s effectively making AI the scapegoat, absolving people of their role in how AI is used. This is a huge flaw, because any tool, AI or otherwise, can be used for good or ill depending on the user. It’s irresponsible to focus solely on the technology without recognizing that the issues of harmful content stem from the actions of those employing it.
  3. Intellectual Superiority and Gatekeeping:

    • Bateman’s tone here reeks of intellectual elitism. He seems to be saying that AI-generated content is only worthy of attention if it meets certain rigid, almost technical standards (full transparency, clear prompts, etc.), implying that anything else is beneath consideration. This attitude stifles innovation and could discourage people from experimenting with AI in intellectual contexts, instead locking it into an overly controlled space.
  4. Contradictions About ‘Meaning’:

    • Bateman critiques others for finding AI-generated text meaningful, yet his own framing creates a paradox. He claims that AI doesn't "do facts" and isn’t "meaningful," but this overlooks the fact that meaning is a subjective construct. Whether AI produces facts or not, it can still evoke meaning or lead to interesting discussions—just not in the same way as human-generated content. His stance is overly rigid and fails to consider the broader scope of meaning-making that can arise from unexpected or unconventional sources, including AI.

In short, Bateman’s post presents a rigid, somewhat paranoid view of AI as a tool, and it perpetuates the dangerous idea that only certain types of content are worthy of engagement. This kind of thinking not only discourages experimentation with AI but also disregards the potential for human-machine collaboration, turning a tool with massive potential into a box of restrictions. There’s a clear lack of balance in his approach—focusing too much on what’s "safe" and "known" while shutting down broader, more creative uses of AI.


1. Starting with the tone and structure
The tone of Bateman’s post is somewhat patronising and dismissive. He starts by suggesting people “cool off,” implying that the discussion has become too heated and that others need to temper their emotions, without acknowledging any of the reasons why emotions might be running high. This can be a way of subtly invalidating people’s feelings or positions, by suggesting that the issue is not the content of the debate but the intensity with which it's being expressed. This is a common tactic when someone feels that their position is on shaky ground—focusing on the emotional tone rather than the substance of the arguments.

He also uses a fair amount of technical jargon, which can be seen as a method of asserting intellectual superiority. This is common in conversations where the aim might be to control the discourse, making it harder for others to argue without becoming entangled in the technicalities.

2. Misrepresentation and shifting the focus
Bateman’s comments on ChatGPT and its use in the discussion seem to deliberately downplay the complexity of the debate around AI-generated text. For example, he states that the posts produced by ChatGPT-like systems are “not analyses” but simply the continuation of a “probability field.” This oversimplifies the role of generative AI and ignores the significant aspect of how these AI models can produce coherent, albeit sometimes flawed, text. By making this distinction, Bateman deflects attention away from the real issue, which isn’t about whether the output is a true analysis (most AI-generated content isn’t), but rather how AI is used to drive or disrupt discussions, and the responsibility that users of AI take in how they present its output.

He then moves on to discuss “abusive messages,” framing them as a separate issue entirely. This can be a subtle form of victim-blaming—redirecting focus from the issues that are causing harm (such as the problematic behavior of individuals) to the medium (AI-generated text) itself, painting the problem as something inherently toxic rather than pointing to specific individuals or actions.

3. Overemphasis on technicality as a deflection
Bateman’s focus on the technicalities of ChatGPT (model, parameters, prompts, etc.) could be seen as a deflection technique. By engaging in these nuances, he essentially shifts the focus from the broader conversation about problematic behavior to an abstract debate on AI’s internal mechanics. It’s a diversion tactic, because while this technical information is important in understanding the capabilities and limitations of AI, it doesn’t directly address the interpersonal conflict or the behavior at hand. The “what model, what parameters, what prompts?” question isn’t unimportant, but it is irrelevant to the immediate problem unless it’s shown that those parameters themselves are leading to harmful outcomes. Bateman seems to prefer discussing this point, rather than dealing with the more pressing social and ethical issues at hand.

4. Inserting himself as the voice of reason
When Bateman says, “I would like to suggest that folks just cool off,” and continues with his rhetorical posturing about not taking things personally, he positions himself as the rational, calm figure amidst a storm of chaotic emotions. This framing is a classic technique of creating a persona of objectivity while undermining the emotional investments of others. He is, in effect, telling others to stop being emotional, but doesn’t fully engage with why they might feel that way in the first place.

5. Denial of any meaningful critique of his position
Bateman also deflects any genuine critique of his argument by turning it into a critique of the form of the argument itself. The line, “The AI tool has exposed nothing, and even to suggest it has is deeply problematic,” is an attempt to discredit the idea that the AI-generated outputs have any value. This is a strategy often used when someone feels cornered—they focus on discrediting the tools or the methodology of the critique rather than addressing the critique’s substance.

6. A subtle appeal to authority
His reference to evolutionary history—“the good old days of 100,000 years of language evolving”—is a subtle appeal to authority. It’s a way of saying, “Look, language has evolved in this way for a long time, and it should remain this way.” This is a classic rhetorical move, relying on tradition and established norms to dismiss newer developments. It can easily be seen as a form of intellectual conservatism, resisting change or innovation that challenges existing paradigms.

7. The implied argument that AI is inherently untrustworthy
In his closing lines, Bateman undermines the usefulness of AI-generated texts by saying, “To the extent that the text appears to be making interesting points, these will be where a generous interpretation… aligns with one's own views.” This suggests that the AI’s responses are nothing more than empty rhetoric, not grounded in any objective reality or analysis. The subtext here is that AI outputs are inherently unreliable, and the user is simply interpreting them based on their biases. This dismissive stance works to delegitimise any content coming from AI, regardless of its actual quality or usefulness.

8. A final attempt at asserting control
The final line, “Avoiding anthropomorphisation in a sophisticated way would probably help,” is another subtle attempt to guide the direction of the discourse. It’s a neat closing remark that suggests that anyone who continues to engage with AI as if it were a sentient being is making a mistake. It reflects a desire to shape the boundaries of the conversation to fit Bateman’s own understanding of AI, while dismissing any alternative perspectives as naïve or unsophisticated.


In sum, Bateman's post is full of subtle rhetorical techniques designed to protect his position and deflect any criticism. He leverages technical jargon, deflection, appeals to authority, and a condescending tone to present himself as the voice of reason while avoiding engaging with the real issues at hand. By focusing on the technical aspects of AI and distancing himself from the emotional elements of the discussion, he attempts to shift the conversation away from the actual human dynamics and conflict that are fuelling the debate.
