Why Hotglue?
Let's get something out of the way: this is a pitch for Hotglue. We're going to explain why we think you should use our product.
But here's the thing—we've spent enough time in the integration trenches to know that "use our product" isn't always the right answer. Sometimes you should build it yourself. Sometimes a different tool fits better. And sometimes, yes, Hotglue is exactly what you need.
The problem isn't calling APIs
Your AI agent needs to interact with the real world. It needs to send emails through SendGrid, create issues in GitHub, schedule meetings in Google Calendar, update deals in Salesforce. The API calls themselves are straightforward—a POST here, a GET there.
The problem is everything around those calls.
Consider what happens when your agent needs to post a message to Slack on behalf of a user:
- The user needs to authorize your app to access their Slack workspace
- You need to store their OAuth tokens securely
- Those tokens expire, so you need to refresh them before they do
- Different users have different tokens, so you need per-user credential management
- Slack rate-limits you differently depending on the endpoint
- If the token is invalid, you need to re-authenticate gracefully
- All of this needs to work reliably at 3am when you're not watching
That's one integration. Now multiply it by every service your agent needs to talk to.
What "building it yourself" actually means
We've seen plenty of teams start with "we'll just add a few integrations." Here's what that typically involves, based on real code we've written and maintained.
The scope of work isn't obvious until you're in it. You're not just making HTTP requests—you're building a credential management system, an OAuth state machine, a token refresh daemon, and a per-user isolation layer. Each piece seems small in isolation. Together, they're a subsystem that needs monitoring, error handling, and on-call support.
OAuth is more complex than it looks
OAuth 2.0 sounds simple: redirect the user, get a code, exchange it for a token. In practice, you're dealing with a specification that every provider implements slightly differently.
- GitHub's OAuth apps don't issue refresh tokens—you get one access token that lives until it's revoked
- Google requires `access_type=offline` AND `prompt=consent` to get refresh tokens on the initial auth
- Salesforce has different OAuth flows for sandbox vs production deployments
- Some services use space-separated scopes, others use comma-separated
- Token endpoints return slightly different response formats
- Some APIs expect form-encoded token requests, others expect JSON
Each integration becomes its own special case with custom handling for authorization URLs, token requests, and parameter formats.
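To make those special cases concrete, here's a minimal sketch of building provider-specific authorization URLs. The endpoint URLs and Google's `access_type`/`prompt` parameters reflect the providers' public OAuth docs; the `ProviderConfig` shape and `buildAuthUrl` helper are hypothetical.

```typescript
// Sketch: each provider gets its own config entry instead of its own code path.
// Endpoints and Google's extra params follow public docs; the shape is ours.
type ProviderConfig = {
  authUrl: string;
  scopeSeparator: " " | ",";
  extraParams?: Record<string, string>;
};

const providers: Record<string, ProviderConfig> = {
  github: {
    authUrl: "https://github.com/login/oauth/authorize",
    scopeSeparator: " ",
  },
  google: {
    authUrl: "https://accounts.google.com/o/oauth2/v2/auth",
    scopeSeparator: " ",
    // Without both of these, Google won't return a refresh token.
    extraParams: { access_type: "offline", prompt: "consent" },
  },
};

function buildAuthUrl(provider: string, clientId: string, scopes: string[]): string {
  const cfg = providers[provider];
  if (!cfg) throw new Error(`unknown provider: ${provider}`);
  const params = new URLSearchParams({
    client_id: clientId,
    scope: scopes.join(cfg.scopeSeparator),
    ...cfg.extraParams, // provider-specific quirks live here, not in branches
  });
  return `${cfg.authUrl}?${params}`;
}
```

The point of the config table is that adding a provider means adding data, not another `if` branch.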
Scope management is a governance problem
Each service has its own permission model. GitHub alone has 20+ scopes:
- `repo` — full control of private repositories
- `public_repo` — access to public repositories only
- `repo:status` — read/write access to commit statuses
- `admin:org` — full control of organizations
- `read:org` — read-only access to organization data
- `gist` — create and modify gists
- `user:email` — read access to user email addresses
You need to request the right scopes for what your agent does—and only those scopes. Over-permissioning is a security risk. Under-permissioning means features break. And when you add a new capability, you need to re-authenticate users with expanded scopes.
This is a governance problem, not just a technical one. Someone needs to maintain a mapping of what your agent does to what permissions it needs, across every integration.
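One way to keep that mapping auditable is to make it a single data structure that both the auth flow and a security review can read. A hedged sketch: `repo` and `public_repo` are real GitHub scopes, but the capability names and the map itself are invented for illustration.

```typescript
// Hypothetical capability-to-scope map: one place to answer
// "what permissions does feature X need?" across integrations.
const capabilityScopes: Record<string, { integration: string; scopes: string[] }> = {
  create_issue: { integration: "github", scopes: ["repo"] },
  read_public_repos: { integration: "github", scopes: ["public_repo"] },
};

// Union of scopes needed for the capabilities the agent actually uses —
// request exactly these, nothing more.
function requiredScopes(capabilities: string[], integration: string): string[] {
  const out = new Set<string>();
  for (const cap of capabilities) {
    const entry = capabilityScopes[cap];
    if (entry && entry.integration === integration) {
      entry.scopes.forEach((s) => out.add(s));
    }
  }
  return [...out].sort();
}
```

When a new capability lands, diffing `requiredScopes` before and after tells you whether users need to re-authenticate with expanded scopes.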
Token refresh happens at inconvenient times
Access tokens expire. When they do, you need to use the refresh token to get new ones. This sounds simple until you consider:
- What happens if the refresh fails mid-request?
- What if two requests try to refresh simultaneously?
- What if the refresh token itself has expired?
- What if the refresh succeeds but returns a token that's immediately invalid?
- How do you notify users when re-authentication is required?
The happy path is straightforward. The failure modes require careful thought about concurrency, error recovery, and user communication.
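The simultaneous-refresh question has a standard answer: single-flight the refresh, so concurrent callers share one in-flight promise instead of racing. This sketch assumes a single process; a multi-instance deployment needs a distributed lock instead. `refreshToken` is a hypothetical stand-in for the real HTTP exchange.

```typescript
type Token = { accessToken: string; expiresAt: number };

// Single-flight token refresh: if a refresh is already running, concurrent
// callers await it rather than starting a second one.
class TokenStore {
  private token: Token | null = null;
  private inflight: Promise<Token> | null = null;

  constructor(private refreshToken: () => Promise<Token>) {}

  async get(): Promise<Token> {
    // Still valid, with a 30s safety margin so we never hand out a token
    // that expires mid-request.
    if (this.token && this.token.expiresAt > Date.now() + 30_000) {
      return this.token;
    }
    if (!this.inflight) {
      this.inflight = this.refreshToken()
        .then((t) => {
          this.token = t;
          return t;
        })
        .finally(() => {
          this.inflight = null; // allow the next refresh cycle
        });
    }
    return this.inflight;
  }
}
```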
Every API has its own quirks
Even with auth sorted, each API has idiosyncrasies. Headers, versioning, pagination, error formats—none of it is standardized.
GitHub requires specific headers on every request for API versioning. Forget them and you'll get inconsistent behavior as GitHub rolls out changes. Google expects camelCase query parameters and rejects empty strings. Slack's request structure departs from typical REST conventions. Salesforce versions its entire API path.
Pagination works differently across services. Error responses have different formats. Rate limits are documented in different headers. You end up building adapter layers for every integration.
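One way to contain the pagination differences is to push each API's quirks into a small fetcher and normalize everything behind one iterator. A sketch, with the fetcher injected so nothing here is tied to a real API:

```typescript
// A normalized page: items plus an opaque cursor (Link header URL,
// offset, page token — whatever the underlying API uses).
type Page<T> = { items: T[]; next: string | null };

// Walk any paginated API given a fetcher that understands its quirks.
async function* paginate<T>(
  fetchPage: (cursor: string | null) => Promise<Page<T>>,
): AsyncGenerator<T> {
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor);
    yield* page.items;
    cursor = page.next;
  } while (cursor);
}
```

The adapter layer then shrinks to one `fetchPage` implementation per service, while callers just `for await` over results.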
Rate limiting is a distributed systems problem
Most APIs rate limit you. The specifics vary wildly:
- GitHub: 5,000 requests per hour for authenticated requests, tracked in the `X-RateLimit-Remaining` header
- Slack: Different limits per method (tier 1-4), with burst allowances
- Google APIs: Quota-based with per-user and per-project limits
- Salesforce: Daily API call limits that vary by edition
When you hit a rate limit, you need to back off—but how? Exponential backoff with jitter? Respect the `Retry-After` header? Queue requests and drain them slowly? The answer depends on your use case, and getting it wrong means either failed requests or wasted capacity.
For AI agents, this gets more interesting. An agent might decide to make 50 API calls in rapid succession. You need to either batch those intelligently, queue them, or surface rate limit errors to the agent in a way it can understand and adapt to.
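A hedged sketch of the back-off decision: honor `Retry-After` when the server sends it, otherwise fall back to capped exponential backoff with jitter.

```typescript
// How long to wait after a 429, in milliseconds.
function backoffMs(attempt: number, retryAfterHeader?: string): number {
  // The server's own guidance beats any heuristic.
  if (retryAfterHeader) {
    const secs = Number(retryAfterHeader);
    if (!Number.isNaN(secs)) return secs * 1000;
  }
  // Exponential backoff capped at 60s, with jitter so a burst of
  // rate-limited requests doesn't retry in lockstep.
  const base = Math.min(1000 * 2 ** attempt, 60_000);
  return base / 2 + Math.random() * (base / 2); // in [base/2, base)
}
```

For agents, the same number can be surfaced in the tool result ("rate limited, retry in N seconds") so the model can plan around it instead of failing blindly.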
Error handling is product work, not just code
API errors need to become user-facing messages. A 401 from GitHub could mean:
- The user revoked access
- The token expired (but GitHub tokens don't expire, so this shouldn't happen)
- The OAuth app was suspended
- The user lost access to the organization
Each requires different handling. Some you can recover from automatically. Some need user action. Some are permanent failures. Mapping API errors to user experiences is product work that compounds across integrations.
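A sketch of what that mapping can look like in code. The recovery categories are ours, and matching on message strings is a simplification; real disambiguation often needs a follow-up API call.

```typescript
// Illustrative 401 triage for GitHub-style errors.
type Recovery = "reauthenticate" | "permanent" | "notify_user";

function classify401(apiMessage: string): Recovery {
  // "Bad credentials" usually means the token is invalid or was revoked:
  // the only fix is sending the user back through OAuth.
  if (apiMessage.includes("Bad credentials")) return "reauthenticate";
  // A suspended OAuth app can't be fixed by the end user.
  if (apiMessage.includes("suspended")) return "permanent";
  // Everything else (e.g. lost org access): surface it and let the user act.
  return "notify_user";
}
```

The classification then drives the user-facing experience: an automatic re-auth prompt, a support banner, or a silent retry.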
The honest math on build vs. buy
Let's talk numbers.
Building yourself:
- Initial integration: 1-3 weeks per service (OAuth flow, token storage, API wrapper, error handling)
- Ongoing maintenance: API changes, scope updates, new endpoints—estimate 20% of initial effort annually
- Security reviews: OAuth implementations are security-critical code
- On-call burden: when token refresh fails at 2am, someone gets paged
Using Hotglue:
- Setup: hours per integration (including time spent on configuring third-party services), not weeks
- Maintenance: we handle API changes and OAuth updates
- Security: we manage credential storage and encryption
- Trade-off: you depend on us, and that has its own risks
The break-even depends on how many integrations you need and how quickly. One or two integrations? Build them yourself—you'll learn a lot and maintain control. Ten integrations with more planned? The economics shift.
What Hotglue actually does
We're not magic. We've just done the work of building and maintaining integrations so you don't have to.
OAuth handling: We manage the OAuth flow, token storage, and refresh cycle. Your users authenticate through our hosted flow. You get a userId that maps to their credentials.
```typescript
import { Hotglue } from "@hotglue/sdk";

const client = new Hotglue({ apiKey: process.env.HOTGLUE_API_KEY });

// Get the auth URL to send your user to
const auth = await client.authenticate({
  userId: "user_123",
  integration: "github",
  redirectTo: "https://yourapp.com/callback",
});

if (auth.isComplete) {
  // User is already connected
} else {
  // Redirect user to auth.connectionsUrl
}
```
Per-user credentials: Each user's tokens are stored and managed separately. When you make an API call, you pass a userId and we inject the right credentials.
This sounds simple, but the data model matters. You need to track:
- Which service is connected (GitHub, Slack, etc.)
- Which user owns the connection
- The access token (encrypted at rest)
- The refresh token (if the service supports it)
- Token expiration timestamps
- The scopes that were granted
- Connection status (active, expired, revoked)
And you need to handle the lifecycle: initial OAuth, token refresh, scope upgrades when you add features, graceful degradation when tokens are revoked. This is a small database schema but a significant state machine.
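The record and state machine described above might be sketched like this; the field names are illustrative, not Hotglue's actual schema.

```typescript
type ConnectionStatus = "active" | "expired" | "revoked";

// Illustrative per-user connection record.
interface Connection {
  userId: string;
  integration: string;      // e.g. "github", "slack"
  accessToken: string;      // encrypted at rest in a real system
  refreshToken?: string;    // absent for providers that don't issue one
  expiresAt?: Date;         // absent for non-expiring tokens
  grantedScopes: string[];  // what the user actually approved, not what you requested
  status: ConnectionStatus;
}

// One lifecycle transition: a failed refresh marks the connection expired.
// Only a fresh OAuth flow moves it back to "active".
function onRefreshFailed(conn: Connection): Connection {
  return { ...conn, status: "expired" };
}
```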
Pre-built actions: We've defined the common operations for each integration—the parameters, the scopes required, the response shapes. You don't need to read API docs to figure out what fields are required or what the response looks like.
AI-ready tools: Our SDK returns tools in a format that Mastra, LangChain, and other agent frameworks can use directly. Your agent gets typed tools without you writing the adapter layer.
```typescript
import { Hotglue } from "@hotglue/sdk";

const client = new Hotglue({ apiKey: process.env.HOTGLUE_API_KEY });

// Get tools formatted for your agent framework
const tools = await client.getTools({
  userId: "user_123",
  integrations: ["github", "slack", "google-calendar"],
});

// Each tool has full type information
// {
//   name: "github_create_issue",
//   description: "Creates a new issue in a repository",
//   parameters: {
//     owner: { type: "string", required: true },
//     repo: { type: "string", required: true },
//     title: { type: "string", required: true },
//     body: { type: "string", required: false },
//     labels: { type: "array", items: { type: "string" } },
//   },
//   requiredScopes: ["repo"],
// }
```
The LLM knows that `title` is required, that `labels` is an array of strings, and what scopes are needed. When the response comes back, it's structured data the agent can reason about—not raw API responses you need to parse.
When Hotglue is the wrong choice
Let's be direct about where we don't fit:
You need one simple integration: If you're only connecting to one service and it has good SDK support, use their SDK. The overhead of another dependency isn't worth it.
You need deep customization: If you need to do unusual things with an API—like implementing a custom OAuth flow or accessing undocumented endpoints—you'll want direct control.
You can't depend on external services: If uptime requirements mean you can't have any external dependencies in the request path, you need to own everything.
Your team has integration expertise: If you've already built the infrastructure and have engineers who maintain it, adding Hotglue is overhead, not help.
When Hotglue makes sense
You're building an AI agent that needs multiple integrations: The agent use case is where we shine. You need Slack, GitHub, Calendar, and Salesforce? That's four OAuth implementations, four token management systems, four sets of API quirks. We've already solved those.
You're a small team moving fast: If you're two engineers trying to ship an AI product, spending eight weeks on integration infrastructure is eight weeks you're not spending on your actual product.
You want managed security for credentials: OAuth tokens are sensitive. We encrypt them at rest, handle rotation, and have been through security reviews. That's infrastructure you don't have to build or maintain.
You're adding integrations over time: If your roadmap includes "add integration with X" every quarter, having a consistent pattern for how integrations work saves cumulative effort.
The trade-offs
Using Hotglue means depending on us. That's a real risk:
- Availability: If we're down, your integrations don't work
- Pricing: Our pricing could change
- Features: We might not support something you need
- Longevity: We're a startup, with startup risks
We try to mitigate these. Our SDK returns typed responses, so you're not locked into our abstractions. We're transparent about our architecture. And honestly, the integrations you build yourself have their own risks—you just have more direct control over them.
What's coming
We're building toward a specific vision: the best infrastructure for AI agents that need to interact with the real world. Here's what's on our roadmap.
Agent-aware error handling. When a tool call fails, your agent shouldn't get a generic 500 error. It should know: Is this retryable? How long should I wait? What alternative action might work? We're building structured error responses with recovery guidance that agents can reason about.
Semantic tool discovery. Loading 50+ tool definitions into your agent's context window is expensive and slow. We're adding semantic search so agents can discover the right tool for "create a GitHub issue" without loading the entire GitHub schema upfront. Early research shows 30-60% token reduction.
Webhook infrastructure. AI agents need to react to real-world events—a new Slack message, a GitHub PR review, a calendar change. We're building webhook receivers with signature verification, event routing, and replay capability so your agents can be event-driven, not just request-driven.
MCP server. The Model Context Protocol is becoming the standard for connecting agents to tools. We're packaging Hotglue as a standalone MCP server so you can add it to Claude Desktop, Cursor, or any MCP-compatible client with zero code.
OpenTelemetry observability. Multi-step agent workflows are hard to debug. We're adding native OpenTelemetry support so every tool call emits a span you can trace through your existing observability stack. When something fails at step 7 of a 10-step workflow, you'll know exactly where and why.
Proactive token refresh. Tokens expiring mid-workflow is a terrible user experience. We're implementing background refresh that renews credentials before they expire—so your agents never fail because of a stale token.
The bottom line
Integrations are real work. OAuth implementations, token management, scope governance, error handling, rate limiting—it adds up fast. For AI agents that need to interact with multiple services, this work can dominate your engineering effort.
Hotglue handles this infrastructure. We've built the OAuth flows, the token storage, the action definitions. We maintain them as APIs change. You focus on what your agent does, not how it authenticates.
That's the pitch. Not "we're magical" or "integrations are easy." Just: this work exists, we've done it, and for certain use cases—particularly AI agents needing multiple integrations—it makes sense to let us handle it.
If that sounds like your situation, check out our docs or the getting started guide. If it doesn't, go build great things—and maybe we'll be useful when your integration needs grow.
Questions? Something not working? Reach out at use@hotglue.tech — we're happy to help.