November 8: How to Stop ChatGPT Hallucinating Microsoft Answers

Welcome to the desert of the real.

Jean Baudrillard, Simulacra and Simulation

As we spend more time working with large language models, it’s easy to confuse fluency with accuracy. ChatGPT “hallucinations” – confidently stated errors – can become massive time wasters. Avoiding these wild goose chases requires discipline and clear prompting. To reduce these errors, I’ve customized my ChatGPT instructions under Settings | Personalization | Custom instructions as follows; a quick local spot-check for the PowerShell rules appears after the list.

For any PowerShell I request (whether generated, reviewed, or explained):
  • Ensure the script executes without errors on the latest generally available (GA) builds of PowerShell and the modules it uses.
  • Validate all parameter names and runtime object properties.
  • Confirm the existence of each command and include an authoritative reference link.
When evaluating any scenario involving Microsoft, Google, Amazon, or other major platforms or applications:
  • First, determine whether the requested scenario is unsupported or not advisable based on the vendor's official best practices.
  • If so, state that explicitly at the start of the response.
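
The PowerShell rules above are also easy to spot-check locally before you run anything. The sketch below is a rough illustration rather than a complete validator; it assumes the generated code has been saved as .\Generated.ps1 (a placeholder path) and that the modules it relies on are already installed. It parses the script without executing it, confirms that each cmdlet resolves on the machine, flags parameters the resolved command does not expose, and prints the published help link as a reference.

# Spot-check a ChatGPT-generated script before running it.
# Assumption: the generated code is saved as .\Generated.ps1 (placeholder path)
# and the modules it relies on are already installed on this machine.
$scriptPath = '.\Generated.ps1'

# Parse the script into an AST without executing it.
$ast = [System.Management.Automation.Language.Parser]::ParseFile(
    $scriptPath, [ref]$null, [ref]$null)

# Collect every command invocation in the script.
$commands = $ast.FindAll(
    { param($node) $node -is [System.Management.Automation.Language.CommandAst] }, $true)

foreach ($command in $commands) {
    $name = $command.GetCommandName()
    if (-not $name) { continue }    # dynamic calls like & $cmd cannot be checked statically

    $resolved = Get-Command $name -ErrorAction SilentlyContinue
    if (-not $resolved) {
        Write-Warning "Command not found on this machine: $name"
        continue
    }

    # Flag any -Parameter in the call that the resolved command does not expose
    # (prefix matching allows abbreviated parameter names).
    $usedParameters = $command.CommandElements |
        Where-Object { $_ -is [System.Management.Automation.Language.CommandParameterAst] }
    foreach ($parameter in $usedParameters) {
        if ($resolved.Parameters -and
            -not ($resolved.Parameters.Keys -like "$($parameter.ParameterName)*")) {
            Write-Warning "$name has no parameter matching -$($parameter.ParameterName)"
        }
    }

    # Print the published reference link, when the module provides one.
    if ($resolved.HelpUri) {
        Write-Output ("{0}: {1}" -f $name, $resolved.HelpUri)
    }
}

This catches the most common class of hallucination (invented cmdlets and parameters), but it says nothing about runtime object properties or whether the approach is advisable in the first place, which is exactly what the remaining custom instructions ask ChatGPT to address up front. Functions defined inside the generated script itself will also show up as not found, so treat the warnings as prompts to look closer rather than as hard failures.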

Should you still find yourself at the end of a wild goose chase, you can reverse-engineer a better prompt by asking ChatGPT directly:

How could I have better formulated my original query to get to this answer more directly?

If you are facing a CMMC, IAM, or custom development challenge on the Microsoft cloud platform, feel free to reach out to my team for help.