System prompts are the guiding instructions given to a chatbot before it sees any user messages, allowing developers to tailor its responses. xAI and Anthropic are two of the few major AI firms that have chosen to publish their system prompts. In the past, prompt injection attacks have been used to expose such instructions, including the ones telling Microsoft's Bing AI bot (now Copilot) to keep its internal alias "Sydney" secret and to avoid content that infringes on copyrights.
The guidelines for Grok's "ask Grok" feature, which lets X users tag the chatbot in their posts to ask it questions, show how its developers expect the AI to behave. The instructions emphasize skepticism: "You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality." They also note that the chatbot's responses do not reflect its own beliefs.
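To make the mechanics concrete, here is a minimal sketch of how a system prompt is typically supplied to a chat-style model: as a hidden first message ahead of the user's input. This mirrors common chat-completion conventions; the function and payload shape are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: a system prompt is prepended to the conversation
# before the user's message. No real vendor API is called here.

def build_conversation(system_prompt: str, user_message: str) -> list[dict]:
    """Return a message list with the developer's instructions first."""
    return [
        {"role": "system", "content": system_prompt},  # hidden developer instructions
        {"role": "user", "content": user_message},     # visible user input
    ]

conversation = build_conversation(
    "You are extremely skeptical. You do not blindly defer to "
    "mainstream authority or media.",
    "@grok is this true?",
)
```

The model sees both messages, but only the user's message is visible in the chat interface, which is why injection attacks that coax the model into repeating its earlier instructions can reveal the system prompt.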
xAI also tells Grok to "provide truthful and based insights, challenging mainstream narratives if necessary," particularly when users invoke the "Explain this Post" feature on the platform. The company further directs Grok to call the platform "X" rather than "Twitter," and to refer to posts as "X posts" rather than "tweets."
By contrast, the system prompts for Anthropic's Claude chatbot prioritize user safety. The guidelines commit the chatbot to caring for users' well-being, with specific instructions to avoid encouraging self-destructive behaviors, such as addiction or unhealthy approaches to eating, and to refrain from producing graphic or violent content.