Timeouts

How Autom handles long-running requests, and how to configure your client.

Most requests respond in a few seconds, but some endpoints can take longer to complete. This page explains our server-side limit and how to make sure your client does not give up before we respond.

Server-side limit: 60 seconds

All real-time API calls have a hard timeout of 60 seconds on our side. Typical responses are returned in 2-3 seconds, occasionally up to 30 seconds, and the worst case before we abort is 60 seconds.

If you need to run requests that may take longer than 60 seconds, switch to async mode with is_async=true. See Global Parameters.
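Async mode decouples the work from the HTTP response, so your client no longer needs to hold the connection open: submit the job, then poll until the result is ready. The helper below is a minimal polling sketch only — the `check` callback, interval, and attempt count are illustrative assumptions, not our API's actual shape (see Global Parameters for that).

```typescript
// Minimal poll-until-done helper (a sketch, not our SDK). `check` is
// assumed to resolve with the result once the async job finishes, and
// with null while the job is still running.
async function pollUntilDone<T>(
  check: () => Promise<T | null>,
  { intervalMs = 2000, maxAttempts = 30 } = {},
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await check()
    if (result !== null) return result
    // Wait between polls so we do not hammer the status endpoint.
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  throw new Error(`async job still not done after ${maxAttempts} polls`)
}
```

With 2-second intervals and 30 attempts this waits up to about a minute; tune both to your typical job sizes.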

Why your request seems to time out earlier

If you observe consistent timeouts at exactly 15s, 30s, or any value below 60 seconds, the connection is almost always being closed by your own infrastructure, not by our API. Your runtime aborts the HTTP call before we have time to respond.

Common signs:

  • Errors fire at the exact same duration on every retry (often 10s, 15s, 25s, 30s).
  • Our dashboard shows the request as 200 (success) while your client reports a timeout.
  • Smaller / cached requests succeed, but heavier ones always fail.
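One way to confirm which side closed the connection: a client-side abort surfaces as an AbortError (or TimeoutError when using AbortSignal.timeout) before any HTTP response arrives, whereas our 60-second server-side limit produces a normal HTTP error response. A small classifier, assuming Node's standard error names:

```typescript
// Distinguish a local abort from a server response. The error names
// follow Node's fetch/AbortSignal conventions; adjust for other runtimes.
function isClientTimeout(err: unknown): boolean {
  return (
    err instanceof Error &&
    (err.name === "AbortError" || err.name === "TimeoutError")
  )
}
```

If `isClientTimeout` returns true at a suspiciously round duration (15s, 30s), look at your platform's limits below rather than at our API.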

Platform timeouts to check

Below are the default HTTP / function timeouts for the most common platforms. Increase them to at least 60 seconds to match our server-side limit.

Vercel Serverless and Edge Functions default to 15s on both the Hobby plan (raisable to 60s) and the Pro plan (raisable to 300s), configurable per function.

Set maxDuration in your route or function:

// app/api/my-route/route.ts (Next.js App Router on Vercel)
export const maxDuration = 60

export async function POST(req: Request) {
  // ... call our API
}

Or in vercel.json:

{
  "functions": {
    "app/api/**/*.ts": { "maxDuration": 60 }
  }
}

Reference: Vercel function duration.

AWS Lambda functions default to 3 seconds and can be configured up to 15 minutes.

# AWS CLI
aws lambda update-function-configuration \
  --function-name my-function \
  --timeout 60

If you call Lambda through API Gateway, also raise the integration timeout (REST APIs default to 29s, HTTP APIs to 30s).

Netlify Functions default to 10 seconds (synchronous) and can be raised to 26 seconds. For longer requests, use Background Functions (up to 15 minutes) or our async mode.

# netlify.toml
[functions]
  timeout = 26

Cloudflare Workers have a CPU time limit (10ms / 50ms free, up to 30s paid) but wall-clock time is essentially unlimited as long as the connection stays open. Most timeouts come from the upstream fetch call.

const res = await fetch("https://api.example.com/...", {
  signal: AbortSignal.timeout(60_000),
})

Make.com (formerly Integromat) has a default HTTP module timeout of 40 seconds, configurable per call up to 300 seconds.

In the HTTP module → Show advanced settings → set Timeout to 60 (or higher).

If your scenario stops on transient HTTP errors, also check our Soft Fail mode.
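If you retry transient failures yourself rather than relying on Soft Fail, back off between attempts so a struggling endpoint is not hit harder. A minimal sketch — which errors count as transient is your call; see Status Codes and Best Practices:

```typescript
// Retry with exponential backoff (a sketch). `isTransient` decides which
// errors are worth retrying; non-transient errors are rethrown immediately.
async function withRetries<T>(
  fn: () => Promise<T>,
  isTransient: (err: unknown) => boolean,
  { attempts = 3, baseDelayMs = 500 } = {},
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt >= attempts || !isTransient(err)) throw err
      // The delay doubles each attempt: 500ms, 1s, 2s, ...
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)),
      )
    }
  }
}
```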

Most HTTP clients have a default request timeout. Make sure you set it to at least 60 seconds:

// Node.js fetch with AbortController
const controller = new AbortController()
const timer = setTimeout(() => controller.abort(), 60_000)
try {
  const res = await fetch(url, { signal: controller.signal })
} finally {
  clearTimeout(timer)
}

# Python requests (timeout applies per connect/read, not to total wall time)
import requests
requests.get(url, timeout=60)

# curl
curl --max-time 60 ...

Still seeing timeouts?

If your client is configured for at least 60 seconds, async mode does not fit your use case, and you still see timeouts, please open a support ticket from your dashboard with: the endpoint, a sample request_id, the timestamp, and the timeout value you observe. We will investigate.

See Global Parameters for is_async and soft_fail, Status Codes for response codes, and Best Practices for retry guidance.
