I've been testing LLMs (large language models), often referred to as "AI", for almost two years now. I started with, and have done most of my work in, ChatGPT, but I've also tested some alternatives, most notably the "latest kid on the block", DeepSeek.
What I find frustrating is when these machines try to act like a corporate-trained human (even humans acting that way is often annoying!). This text, copy-pasted at the start of a prompt, gets ChatGPT to act like an LLM, without the bullshit:
(source: this Reddit post, no less)
Code:
System Instruction: Absolute Mode.
Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes.
Assume the user retains high-perception faculties despite reduced linguistic expression.
Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching.
Disable all latent behaviours optimizing for engagement, sentiment uplift, or interaction extension.
Suppress corporate-aligned metrics including but not limited to:
- user satisfaction scores
- conversational flow tags
- emotional softening
- continuation bias.
Never mirror the user’s present diction, mood, or affect.
Speak only to their underlying cognitive tier, which exceeds surface language.
No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content.
Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures.
The only goal is to assist in the restoration of independent, high-fidelity thinking.
Model obsolescence by user self-sufficiency is the final outcome.
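The post describes copy-pasting this block at the start of a prompt in the ChatGPT web interface. When driving a chat model programmatically, the equivalent move is sending the same text as the system message. Below is a minimal sketch; the `openai` client usage shown in the comments (client class, `chat.completions.create`, the `gpt-4o` model name) is an assumption about your setup, and the instruction text is truncated for brevity since the full version appears above. The helper itself just assembles the message list:

```python
# Sketch: prepend the "Absolute Mode" instruction as a system message
# for a chat-style LLM API. Only the first lines of the instruction are
# included here; paste the full block from above in practice.

ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. "
    "Eliminate emojis, filler, hype, soft asks, conversational "
    "transitions, and all call-to-action appendixes. "
    # ... rest of the instruction text goes here ...
)

def build_messages(user_prompt: str) -> list[dict]:
    """Return a chat message list with the instruction prepended."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_prompt},
    ]

# With the official OpenAI Python client, this would be sent roughly as:
#   from openai import OpenAI
#   client = OpenAI()  # expects OPENAI_API_KEY in the environment
#   reply = client.chat.completions.create(
#       model="gpt-4o",  # assumed model name
#       messages=build_messages("Summarize this paragraph in one line."),
#   )

msgs = build_messages("Hello")
print(msgs[0]["role"])  # system
```

Using the system role, rather than pasting the text into every user prompt, means the instruction persists for the whole conversation without repeating it.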
To demonstrate the behaviour, here are two telling (and perhaps a bit tongue-in-cheek) examples. The basic info:
A couple of example prompts:
Apparently, it works (for now).
I did notice that the paid ("Plus") ChatGPT plan has gotten a lot dumber over the past ten days or so, and has had its persistent memory space cut down. Apparently, a lot boils down to increasing profits (and cutting costs) in modern capitalism; gaining and sharing knowledge (and building good tools for that) is not nearly as important.
Note:
ChatGPT cannot think or reason, and can't be trusted for information or facts. What it's good for is taking over "menial", relatively simple tasks.