When in doubt, follow the Golden Rule of Clear Prompting: show your prompt to a colleague or friend and have them follow the instructions themselves to see if they can produce the result you want. If they’re confused, Claude’s confused.
While Claude can recognize and work with a wide range of separators and delimiters, we recommend using XML tags specifically, since Claude was trained to recognize XML tags as a prompt-organizing mechanism.
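For example, here's roughly what that looks like in practice: a minimal sketch using the Anthropic Python SDK, where the tag names, document text, and model alias are my own illustrative choices, not anything the course prescribes.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# XML tags separate the reference text from the instructions, so Claude
# can tell which part is data and which part is the task.
prompt = """<document>
Q3 revenue grew 12% year over year, driven by subscription renewals.
</document>

<instructions>
Summarize the document above in one sentence for an executive audience.
</instructions>"""

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model alias
    max_tokens=200,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```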
Giving Claude time to think step by step sometimes makes Claude more accurate, particularly for complex tasks. However, thinking only counts when it’s out loud. You cannot ask Claude to think but output only the answer – in this case, no thinking has actually occurred.
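Concretely, "out loud" means the reasoning has to be generated as visible tokens before the answer. A common pattern is to ask for the reasoning in one set of tags and the answer in another, then strip the reasoning out yourself. Here's a sketch (same SDK, and the question, tag names, and model alias are all illustrative):

```python
import anthropic

client = anthropic.Anthropic()

# The reasoning must be visible tokens: Claude works through the problem
# in <thinking> tags, then commits to a verdict in <answer> tags.
prompt = """All birds can fly. Penguins are birds. Can penguins fly?

Think it through step by step inside <thinking> tags, then give your
final answer inside <answer> tags."""

reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model alias
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Show the user only the answer; the thinking stays available for debugging.
# (Assumes the model followed the tag format, which it usually does.)
if "<answer>" in reply:
    print(reply.split("<answer>")[1].split("</answer>")[0].strip())
else:
    print(reply)
```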
😮 Claude has to think out loud??
Claude is sometimes sensitive to ordering… In most situations (but not all, confusingly enough), Claude is more likely to choose the second of two options, possibly because in its training data from the web, second options were more likely to be correct.
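This is easy to poke at yourself: ask the same two-option question twice with the options swapped, and see whether the answer follows the position rather than the content. A rough sketch (the question and options are hypothetical, and the model alias is again illustrative):

```python
import anthropic

client = anthropic.Anthropic()

QUESTION = "Which is a better separator for long reference documents in a prompt?"
OPTIONS = ["XML tags", "triple backticks"]

def ask(options: list[str]) -> str:
    prompt = (
        f"{QUESTION}\n(A) {options[0]}\n(B) {options[1]}\n"
        "Answer with just A or B."
    )
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model alias
        max_tokens=5,
        messages=[{"role": "user", "content": prompt}],
    ).content[0].text.strip()
    # Map the letter back to the option text so the two runs are comparable.
    return options[0] if reply.startswith("A") else options[1]

# Same question, options swapped: if the answers differ, ordering moved the needle.
print(ask(OPTIONS))
print(ask(OPTIONS[::-1]))
```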
🤯 More likely to choose the second of two options??
If a psychologist studies human minds, what do you call the person who studies AI minds? And specifically, AI biases? Feels like we’re going to need a whole list of cognitive biases but for AI…
Letting Claude think can shift Claude’s answer from incorrect to correct. It’s that simple in many cases where Claude makes mistakes!
And not just letting it think, but teaching it how to think… I’m a little surprised that you have to teach a model how to think (wasn’t it trained?), but I guess it makes sense: you have to teach people how to think too.
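What "teaching it how to think" seems to mean in practice is spelling out the steps the reasoning should follow, rather than just saying "think step by step." Something like this, where the scenario and the three steps are entirely my own made-up example:

```python
import anthropic

client = anthropic.Anthropic()

# Instead of a bare "think step by step", the prompt spells out the
# procedure the reasoning should follow before Claude answers.
prompt = """A customer writes: "I was charged twice for my March invoice."

Before replying, work through the following inside <thinking> tags:
1. Restate the customer's problem in one sentence.
2. List what information you would need to verify it.
3. Decide whether an apology, a refund, or an escalation fits best.

Then write the reply inside <response> tags."""

reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model alias
    max_tokens=600,
    messages=[{"role": "user", "content": prompt}],
).content[0].text
print(reply)
```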