AI Configuration
Configure AI providers, models, features, and token budgets for your workspace.
Workestra's AI capabilities power everything from semantic search to candidate screening. This guide explains how to configure AI for your workspace.
AI configuration requires Admin or Owner permissions. Regular members cannot access these settings.
AI Provider Setup
Supported Providers
Workestra supports multiple AI providers. Choose the one that best fits your needs:
| Provider | Best For | Default Model |
|---|---|---|
| Moonshot AI (Kimi) | Best overall performance, excellent for business use | Kimi K2 Turbo |
| OpenAI | Industry standard, broad capabilities | GPT-4o Mini |
| xAI (Grok) | Fast responses, good for real-time features | Grok 3 Mini |
| DeepSeek | Cost-effective, strong reasoning | DeepSeek Chat V3 |
Selecting a Provider
- Navigate to Settings > AI Configuration
- In the Provider dropdown, select your preferred provider
- The Model dropdown updates with available options
- The Base URL auto-populates with the provider's endpoint
We recommend starting with Moonshot AI (Kimi) for the best balance of quality and cost for business use cases.
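If you script workspace setup, the provider choices above can be modeled as a simple lookup. This is a sketch only: the base URLs and model IDs below are illustrative assumptions, not guaranteed Workestra values.

```python
# Illustrative provider -> default configuration lookup.
# Base URLs and model IDs here are assumptions for the sketch,
# not authoritative Workestra settings.
PROVIDER_DEFAULTS = {
    "moonshot": {"model": "kimi-k2-turbo", "base_url": "https://api.moonshot.cn/v1"},
    "openai":   {"model": "gpt-4o-mini",   "base_url": "https://api.openai.com/v1"},
    "xai":      {"model": "grok-3-mini",   "base_url": "https://api.x.ai/v1"},
    "deepseek": {"model": "deepseek-chat", "base_url": "https://api.deepseek.com/v1"},
}

def defaults_for(provider: str) -> dict:
    """Return the default model and endpoint for a supported provider."""
    try:
        return PROVIDER_DEFAULTS[provider.lower()]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}") from None
```

Selecting a provider in the UI performs the same kind of mapping: picking a provider fills in its default model and base URL, which you can then override.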
API Key Configuration
Your API key powers all AI features in Workestra:
- Obtain an API key from your chosen provider's dashboard
- Enter it in the API Key field
- The key is encrypted and stored securely server-side
- Click Save Settings
Security Note: API keys are encrypted using AES-256-GCM and never exposed in the frontend. We recommend using a dedicated API key for Workestra, not your primary development key.
Using a Custom Model
If your preferred model isn't in the dropdown:
- Select Custom model… from the Model dropdown
- A text field appears for entering the model ID
- Enter the exact model identifier (e.g., `gpt-4o-mini`)
Self-Hosted or Custom Endpoints
To use a self-hosted model or proxy:
- Enter your custom Base URL in the field provided
- Ensure the endpoint is compatible with OpenAI-style API format
- The URL should end with `/chat/completions`
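Because a mistyped Base URL is a common source of connection errors, a small sanity check can catch misconfiguration before saving. This is a hypothetical helper sketching the rule above, not part of Workestra:

```python
def normalize_base_url(url: str) -> str:
    """Ensure a custom endpoint URL ends with /chat/completions,
    as required for OpenAI-style API compatibility."""
    url = url.rstrip("/")
    if not url.startswith(("http://", "https://")):
        raise ValueError("Base URL must include a scheme, e.g. https://")
    if not url.endswith("/chat/completions"):
        url += "/chat/completions"
    return url
```

A proxy URL like `https://llm.internal/v1` would be normalized to `https://llm.internal/v1/chat/completions`, while an already-complete URL passes through unchanged.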
Token Budget
Control your AI costs by setting a monthly token limit:
Setting a Budget
- In the Token Budget section, enter a number (e.g., `500000`)
- Leave blank for unlimited usage
- Click Save Settings
What Happens When the Budget Is Reached
- AI chat displays a budget-exceeded error
- Background features (embeddings, insights) pause
- Existing data remains accessible
- Admins receive a notification
Monitor your usage in the first month to set an appropriate budget. Most small-to-medium workspaces use 100K-500K tokens monthly.
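The budget behavior described above amounts to a simple gate on cumulative usage. The sketch below illustrates the idea; Workestra's actual accounting happens server-side:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TokenBudget:
    monthly_limit: Optional[int]  # None means unlimited
    used_this_month: int = 0

    def record_usage(self, tokens: int) -> None:
        """Accumulate tokens consumed by AI features this month."""
        self.used_this_month += tokens

    def exceeded(self) -> bool:
        """True when AI chat should show the budget-exceeded error and
        background features (embeddings, insights) should pause."""
        if self.monthly_limit is None:
            return False
        return self.used_this_month >= self.monthly_limit
```

Note that existing data stays readable either way; only new AI work is gated.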
AI Features
Enable or disable specific AI capabilities:
Semantic Search
What it does: Powers vector + full-text hybrid search across all entities
When to enable: Always recommended — this is core to the Workestra experience
When to disable: Only if you have strict data residency requirements
Proactive Suggestions
What it does: Generates anomaly alerts and recommendations on the dashboard
Examples:
- "Revenue dropped 15% compared to last week"
- "5 deals have been in 'Proposal' for over 30 days"
When to enable: Recommended for teams that want AI-powered insights
Auto Embedding
What it does: Automatically embeds new and updated records in the background
Why it matters: Embeddings power semantic search and AI understanding of your data
When to disable: Only if you're manually managing embeddings via API
Natural Language Query
What it does: Allows AI to write and execute SQL queries from natural language
Example: "Show me all deals closed last month by team member John"
Status: Experimental feature
Natural Language Query is experimental and requires careful review of generated SQL. Enable only for trusted users.
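One conservative way to reduce the risk of AI-generated SQL is an allow-list check that rejects anything other than a single read-only statement. This sketches the idea only; it is not Workestra's actual safeguard, and keyword filtering alone is not a substitute for human review:

```python
import re

# Write/DDL keywords that should never appear in a generated query.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant|create)\b",
    re.IGNORECASE,
)

def is_safe_readonly_query(sql: str) -> bool:
    """Accept only a single SELECT statement with no write/DDL keywords."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # reject multi-statement queries
        return False
    if not stripped.lower().startswith("select"):
        return False
    return FORBIDDEN.search(stripped) is None
```

Even with a guard like this in place, generated queries should still be reviewed before execution, which is why the feature is limited to trusted users.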
Embedding Backfill
After switching AI providers or importing large datasets, you may need to regenerate embeddings:
When to Backfill
- After changing AI providers
- After importing bulk data
- If semantic search results seem incomplete
- When instructed by support
How to Backfill
- In the Embedding Backfill section, click Start Backfill
- The system queues all records for re-embedding
- Progress happens asynchronously in the background
- A confirmation shows how many records were queued
Backfilling can take time depending on your data volume. Small workspaces (under 10K records) typically complete within an hour. Large workspaces may take several hours.
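As a rough planning aid, you can estimate backfill duration from your record count. The default throughput below is an illustrative assumption derived from the "under 10K records within an hour" guidance above, not a measured figure:

```python
def estimate_backfill_hours(record_count: int, records_per_hour: int = 10_000) -> float:
    """Rough backfill duration estimate.

    The 10K records/hour default is an assumption for illustration,
    consistent with small workspaces finishing within an hour.
    """
    if record_count < 0:
        raise ValueError("record_count must be non-negative")
    return record_count / records_per_hour
```

For example, a workspace with 50K records would take on the order of five hours under this assumption; actual throughput depends on your provider and plan.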
Model Recommendations
For Different Use Cases
| Use Case | Recommended Provider | Recommended Model |
|---|---|---|
| General use | Moonshot AI | Kimi K2 Turbo |
| Fast responses | xAI | Grok 3 Mini |
| Complex reasoning | DeepSeek | DeepSeek Reasoner |
| Broad compatibility | OpenAI | GPT-4o Mini |
Cost Considerations
| Provider | Relative Cost | Best For |
|---|---|---|
| DeepSeek | $ | Budget-conscious teams |
| Moonshot | $$ | Balanced quality/cost |
| OpenAI | $$$ | Maximum capability |
| xAI | $$ | Speed and efficiency |
Troubleshooting
"API Key Invalid" Error
- Verify the key is copied correctly (no extra spaces)
- Check that the key has not expired
- Ensure the key has appropriate permissions
Slow AI Responses
- Try a faster model (e.g., Grok 3 Mini)
- Check your provider's status page
- Consider upgrading your provider plan
Semantic Search Not Working
- Verify Semantic Search and Auto Embedding are enabled
- Run an Embedding Backfill
- Check that your API key has embedding model access
Budget Reached Too Quickly
- Review which features are enabled
- Consider using a more efficient model
- Adjust the Natural Language Query setting (uses more tokens)
Agent Workflows
For advanced automation, Workestra supports AI Agent Workflows:
- Create rules triggered by workspace events
- Automatically process records with AI
- Set up custom notification flows
Access Agent Workflows from the AI Configuration page by clicking Agent Workflows.
Next Steps
- AI Assistant — Learn how to use AI features
- AI Insights — Understand AI-powered recommendations