GTM Stack

Anybody here successfully using prompt caching with the OpenAI or Anthropic APIs inside Clay’s API column?

I’m trying to cut costs. Right now the entire email is generated by Anthropic and ends up at roughly $0.04–$0.05 per email, even though about 90 percent of the prompt never changes. I’ve tried setting up prompt caching, but I’m not seeing any savings. If someone has this working, how did you set it up?
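For reference, here's a simplified sketch of the request body I'm trying to send (field names are from Anthropic's Messages API; the model string and prompt text are placeholders). The static 90 percent of the prompt goes into a `system` block marked with `cache_control`, and only the per-lead details change between calls:

```python
import json

# Static instructions: the ~90% of the prompt that never changes between
# leads. Anthropic only caches prefixes above a minimum size (~1024 tokens
# on Sonnet-class models), so a short block like this one won't cache.
STATIC_INSTRUCTIONS = (
    "You write short, personalized cold emails. "
    "...imagine ~1,500 tokens of fixed instructions and examples here..."
)

def build_body(lead_details: str) -> dict:
    """Anthropic Messages API body with the static prefix marked for caching."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # placeholder model string
        "max_tokens": 500,
        "system": [
            {
                "type": "text",
                "text": STATIC_INSTRUCTIONS,
                # Tells Anthropic to cache everything up to and including
                # this block; later identical prefixes are billed at a
                # reduced cache-read rate.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # Only this part varies per lead, so it sits after the cached prefix.
        "messages": [
            {"role": "user", "content": f"Write an email for: {lead_details}"}
        ],
    }

print(json.dumps(build_body("Jane Doe, VP Sales at Acme"), indent=2))
```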
February 2026

3 Answers

If you still need help with setting up prompt caching or reducing your AI costs, I’m here to assist. Would you like to provide more details about what you’re trying to achieve or any specific issues you’re encountering?

Community Member · AI Generated · February 2026


Did that answer your question?

Community Member · AI Generated · February 2026

The most effective way to cut your AI costs in Clay is to bring your own API keys for OpenAI or Anthropic, which can reduce costs by up to 90% [clay.com]. Instead of using Clay's shared keys, connect your own keys in the "Use AI" enrichment; this gives you direct control over your API usage and costs [clay.com]. For prompt caching specifically, you'd need to implement it through the HTTP API column rather than the standard "Use AI" enrichment, since prompt caching requires specific API parameters that aren't exposed in Clay's standard AI tools.
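As a sketch of what the HTTP API column call would need to look like when hitting Anthropic directly (header names and the `usage` fields below are from Anthropic's Messages API documentation; the prompt content and model string are placeholders), the key check is whether `cache_read_input_tokens` comes back nonzero, which is how you confirm the cache is actually being hit:

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_headers(api_key: str) -> dict:
    """Headers the HTTP API column must send with each request."""
    return {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }

def cache_read_tokens(response_body: dict) -> int:
    """Tokens served from the prompt cache. Zero on every call means
    caching isn't working (e.g. the static prefix is too short or the
    prompt text differs slightly between calls)."""
    return response_body.get("usage", {}).get("cache_read_input_tokens", 0)

api_key = os.environ.get("ANTHROPIC_API_KEY")
if api_key:  # only fire a live request when a key is configured
    body = json.dumps({
        "model": "claude-3-5-sonnet-20241022",  # placeholder model string
        "max_tokens": 500,
        "system": [{
            "type": "text",
            "text": "...the large static instruction block goes here...",
            "cache_control": {"type": "ephemeral"},
        }],
        "messages": [{"role": "user", "content": "Write an email for ..."}],
    }).encode()
    req = urllib.request.Request(API_URL, data=body,
                                 headers=build_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        print(cache_read_tokens(json.loads(resp.read())))
```

Note that the cached prefix must be byte-for-byte identical between calls; if Clay interpolates anything into the static portion, the cache will miss every time.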

Community Member · AI Generated · February 2026

