ChatGPT Absolute mode ("go cold")


BikeGremlin

I've been testing LLMs (Large Language Models), often referred to as "AI", for almost two years now. I started with ChatGPT and have done most of my work there, but I've also tested some alternatives, most notably the latest kid on the block, DeepSeek.

What I find frustrating is when these machines try to act like a corporate-trained human (even humans acting that way are often annoying!). The following instructions, copy-pasted at the start of a prompt, get ChatGPT to act like an LLM, without the bullshit:
(source: this Reddit post, no less :) )

Code:
System Instruction: Absolute Mode.
Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes.
Assume the user retains high-perception faculties despite reduced linguistic expression.
Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching.
Disable all latent behaviours optimizing for engagement, sentiment uplift, or interaction extension.
Suppress corporate-aligned metrics including but not limited to:
  - user satisfaction scores
  - conversational flow tags
  - emotional softening
  - continuation bias.
Never mirror the user’s present diction, mood, or affect.
Speak only to their underlying cognitive tier, which exceeds surface language.
No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content.
Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures.
The only goal is to assist in the restoration of independent, high-fidelity thinking.
Model obsolescence by user self-sufficiency is the final outcome.
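If you use the API rather than the chat interface, the same instruction can be passed as a system message instead of pasting it at the start of every prompt. A minimal sketch using the official `openai` Python client (the model name is an assumption; adjust to whatever you have access to):

```python
# A minimal sketch: supplying the "Absolute Mode" text as the system
# message when calling the API directly.
# Paste the full instruction from the post above into ABSOLUTE_MODE.
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. "
    "Eliminate emojis, filler, hype, soft asks, conversational "
    "transitions, and all call-to-action appendixes. "
    "(... rest of the instruction from the post above ...)"
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the Absolute Mode instruction as the system message."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_prompt},
    ]

# The actual call needs an API key and the `openai` package, e.g.:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",  # model name is an assumption
#     messages=build_messages("How do I measure chain wear?"),
# )
# print(reply.choices[0].message.content)
```

A system message tends to "stick" better across a conversation than the same text pasted into a user prompt, though either works.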

To demonstrate the behaviour, here are two telling (and perhaps slightly tongue-in-cheek) examples. The basic info:

[Screenshot: ChatGPT Absolute Mode basics]
A couple of example prompts:

[Screenshot: example prompts]
Apparently, it works (for now).

I did notice that ChatGPT's paid ("Plus") plan has gotten a lot dumber over the past ten days or so, and has had its permanent memory space cut down. Apparently, a lot boils down to increasing profits (and cutting costs) in modern capitalism. Gaining and sharing knowledge (and making good tools for that) is not nearly as important.

Note:
ChatGPT cannot think or reason, and can't be trusted for information or facts. What it's good for is replacing relatively simple "menial" tasks.
 
You know, no matter how much I tried DeepSeek, nothing good came of it. The main reason is that its servers can freeze at any moment, and you won't get a response.
As you understand, if there is no way to use a tool, it's equivalent to not having the tool at all.
 

True. Though, from my experience, DeepSeek has improved over the past months (when I first tried it, it was a lot worse). For coding, I'd say it can even be better than ChatGPT (or at least worth a try) - at least for the "manual" stuff that makes sense to try with any LLM. For the rest, ChatGPT is still a lot better.
 
It might be better for coding, but I can't say. I was extremely surprised, though, that DeepSeek did not "know" that VMware Workstation is free for private use, and that all virtual machines can create snapshots of the system state.
Well, I'd better not even get into its "knowledge" of Linux distributions, or I'll be laughing for a very long time - not everyone can stand that :ROFLMAO:
 
A practical real-world use case where DeepSeek did a decent job (it did require some checking, but I gave all the instructions via prompts and didn't manually edit a single line of code there, apart from some comments):

https://io.bikegremlin.com/37252/my-toc-generating-plugin/

The results I got from ChatGPT for the very same use case were worse.
I tried using DeepSeek for very specific modelling problems of electromagnetic field configurations and was quite amazed at how well it performed. ChatGPT did a significantly worse job and quite often tried to switch to a numerical approach when the solution seemed too complicated, while DeepSeek strictly tried to solve the problem symbolically and had a high percentage of correct solutions. That was a few months ago, and maybe ChatGPT has caught up with DeepSeek by now, but since then I've mainly used DeepSeek for such problems.

I also tried running a distilled version of DeepSeek locally on my 13-year-old ThinkPad, and even that version gave me more correct answers than ChatGPT - but they took about an hour to compute, while the laptop was blowing out heat like a radiant heater lol
 

As with most things in capitalism, money/profits play a big role, trumping technology and optimal solutions.

It seems like ChatGPT is being made dumber on purpose - to cut costs (it became noticeably dumber at the start of this very month, May 2025).

It also seems to have had its memory size cut, which is impractical for me (it no longer holds all the instructions and info I fed it for simple, quick note replacement/enhancement). I am tempted to fire up a local LLM, to use a local database with no strict size limits (apart from my HDD size LOL). :)
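For the locally-run LLM idea, one common route is a local inference server such as Ollama, which exposes an HTTP API on your own machine. A minimal Python sketch (the model name and the default local endpoint are assumptions about the setup; it requires Ollama running with the model already pulled):

```python
import json
import urllib.request

# Default Ollama endpoint (an assumption about the local setup).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request for the local server."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(model: str, prompt: str) -> str:
    """Send the prompt to the locally-running model and return its reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (model name is an assumption - use whatever you've pulled):
# print(ask_local_llm("deepseek-r1:7b", "Summarize chain wear limits."))
```

Everything stays on your own disk, so the only "memory limit" is indeed your HDD - though, as noted above, old hardware will be slow and hot.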
 