Anybody here successfully using prompt caching with the OpenAI or Anthropic APIs inside Clay’s API column?
3 Answers
The most effective way to cut your AI costs in Clay is to bring your own API keys for OpenAI or Anthropic, which can reduce costs by up to 90% [clay.com]. Instead of using Clay's shared keys, connect your own API keys in the "Use AI" enrichment; this gives you direct control over your API usage and costs [clay.com].

For prompt caching specifically with external APIs, you'd need to implement it through the HTTP API column rather than the standard "Use AI" enrichment, since prompt caching requires specific API parameters that aren't exposed in Clay's standard AI tools.
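As a concrete illustration, here is a minimal sketch of the kind of request body you could send from Clay's HTTP API column to use Anthropic's prompt caching. The `cache_control` field and endpoint details come from Anthropic's public Messages API; the model name, prompt text, and the `{{Company Name}}` merge tag are placeholders, and the exact Clay column configuration is assumed rather than confirmed by this thread.

```python
import json

# Request body for Anthropic's Messages API with prompt caching.
# The large, shared system prompt is marked with cache_control so
# Anthropic can cache it across rows; only the short per-row user
# message changes, which is what makes caching cut costs.
payload = {
    "model": "claude-3-5-sonnet-20241022",  # placeholder model name
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": "You are an enrichment assistant. <long shared instructions here>",
            "cache_control": {"type": "ephemeral"},  # enables prompt caching
        }
    ],
    "messages": [
        # {{Company Name}} is a hypothetical Clay merge tag, not a real value
        {"role": "user", "content": "Summarize this company: {{Company Name}}"}
    ],
}

# In Clay, you would POST this JSON to https://api.anthropic.com/v1/messages
# with headers: x-api-key (your key), anthropic-version: 2023-06-01,
# and content-type: application/json.
print(json.dumps(payload, indent=2))
```

The key point is that `cache_control` lives inside the request body, which is exactly the kind of parameter the standard "Use AI" enrichment doesn't expose but the HTTP API column lets you set directly.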