Question

Old limit on REST API requests after purchasing the Pro subscription

  • November 21, 2025
  • 6 replies
  • 276 views

da.assets

Hello! I’m the developer of the “Figma Converter for Unity” tool.

I was sending Tier 1 requests to the REST API on my free plan and hit a rate limit with a ~4-day cooldown before the next API request is allowed.
After that, I purchased a Pro dev seat subscription, but the cooldown didn’t disappear and is still 4 days.
I believe that after purchasing the Pro subscription, the API limits should have been reset.

Right now, both my clients and I are finding that, even after purchasing the Pro subscription, it’s impossible to immediately restore our workflow.

Please look into this problem.

6 replies

anvarzkr
  • New Member
  • November 21, 2025

I have the same problem. I believe that before the 17th of November I was able to periodically fetch /v1/files/:key (a Tier 1 request) without any problems. Currently, after 5 or 6 requests, I get 429 Too Many Requests with Retry-After set to 4–5 days.

I have a Full seat on a Pro subscription and I am fetching publicly available designs (not from my own Figma project). When I fetch designs from my own project, where I have the Full seat, there are no problems with the rate limit. The strict rate limits only occur when I work with designs outside of my project (where my role is effectively View or Collab). However, as far as I can see in the documentation (https://developers.figma.com/docs/rest-api/rate-limits/), there is no subscription that would allow me to fetch designs outside of my project with higher rate limits.
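For reference, this is roughly how I reproduce it: a minimal Python sketch using requests, assuming a personal access token passed via the X-Figma-Token header. The FIGMA_TOKEN environment variable and the file key are placeholders, not my real values.

```python
import os
import time
import requests

TOKEN = os.environ["FIGMA_TOKEN"]    # placeholder: personal access token
FILE_KEY = "PUBLIC_FILE_KEY"         # placeholder: key of a publicly shared file

# Fetch the same file a handful of times, a few seconds apart,
# and report the first 429 plus its Retry-After value.
for i in range(6):
    resp = requests.get(
        f"https://api.figma.com/v1/files/{FILE_KEY}",
        headers={"X-Figma-Token": TOKEN},
        timeout=60,
    )
    if resp.status_code == 429:
        retry_after = int(resp.headers.get("Retry-After", "0"))
        print(f"request {i + 1}: 429, Retry-After ~ {retry_after / 86400:.1f} days")
        break
    print(f"request {i + 1}: {resp.status_code}")
    time.sleep(5)
```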

 


Marcello Cultrera

I’m experiencing what appears to be two different rate-limit issues that continue even after upgrading to a Professional plan on the 20th of November.

1. Starter-tier lockouts persisting after upgrade

Before upgrading, /v1/files/:key calls on the free plan would hit a ~4-day cooldown after only a few requests.

I upgraded to Professional on November 20th, expecting those limits to reset. However, I’m still seeing the same behavior: after just 3–5 requests, the API returns:

429 Too Many Requests
Retry-After: ~4–5 days

This suggests some requests or files may still be handled under Starter-tier quota rules. I’ve tested:

  • files inside my own Professional workspace,

  • shared files,

  • and public files

but the cooldown triggers almost immediately regardless.

2. Professional-tier but still receiving low-tier rate-limit headers

When testing component extraction on very simple files, I receive:

HTTP 429 Too Many Requests
X-Figma-Plan-Tier: pro
X-Figma-Rate-Limit-Type: low
Retry-After: ~397,000 seconds

This aligns with per-file low-tier quota behavior, not subscription-tier limits. Even files located inside a paid workspace are hitting this long cooldown.
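For anyone who wants to check which tier their requests are actually being bucketed into, this is roughly the check we run: a minimal Python sketch, assuming X-Figma-Token authentication and the header names quoted above. The environment variable and the file key are placeholders.

```python
import os
import requests

TOKEN = os.environ["FIGMA_TOKEN"]   # placeholder: personal access token
FILE_KEY = "YOUR_FILE_KEY"          # placeholder: any file inside the paid workspace

resp = requests.get(
    f"https://api.figma.com/v1/files/{FILE_KEY}",
    headers={"X-Figma-Token": TOKEN},
    timeout=60,
)

# Log the status plus the rate-limit related headers we see on 429 responses.
print("Status:", resp.status_code)
for name in ("X-Figma-Plan-Tier", "X-Figma-Rate-Limit-Type", "Retry-After"):
    print(f"{name}: {resp.headers.get(name)}")
```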

3. Upgrade propagation does not seem to resolve the issue

We considered that it might take some time for the Professional upgrade to propagate. To rule that out, we suspended all API usage after the 20th of November, but the same rate limits returned immediately after only a few requests once we resumed.

4. My current understanding (and happy to be corrected)

From the patterns we’re seeing, the behavior resembles per-file burst protection rather than plan-tier exhaustion:

  • A handful of /v1/files/:key calls (3–5) appear to trigger a “bucket overflow” style cooldown.

  • The HTTP headers (X-Figma-Plan-Tier: pro + X-Figma-Rate-Limit-Type: low) are valid Figma responses but imply that the plan upgrade did not elevate the rate-limit tier for this endpoint.

What we need clarity on

  • Does the Professional plan still enforce per-file or per-resource limits?

  • If so, what are these limits?

  • Why are multi-day retry windows being triggered from such low-volume usage?

  • What batching or request-optimization practices does Figma suggest for preventing these extended cooldowns, and is there any supported method to reset or expand the limits for legitimate high-volume production use?

Any guidance, corrections, or official clarification would be greatly appreciated; this is currently blocking our automated design-to-code pipeline.

 


Marcello Cultrera

Hi everyone, @da.assets @anvarzkr @Figma - please find below an alternative workaround for Figma’s 429 rate limits that also reduces the load on Figma’s infrastructure. This is intended as an immediate fix.

The issue isn’t Figma’s limits themselves (which have recently become more conservative and problematic); it’s the way large /v1/files/:key?ids=… calls pull entire subtrees (often megabytes) and then trigger dozens of image requests when frames have 50–200 children. That burst pattern quickly trips Figma’s burst protection, causing 4–5 day lockouts.

A proposed solution is to stop fetching everything upfront and adopt a metadata‑first, prune‑first, fetch‑last pipeline:

  1. Fetch only component metadata.
  2. Pre‑filter locally to remove hidden frames, junk nodes and remote libraries.
  3. Fetch a pruned node tree with limited depth (depth=2–3) to keep responses <500 KB.
  4. Do code analysis and token extraction locally without extra API calls.
  5. Fetch images only for the final deduplicated list, in small adaptive batches with exponential backoff.

This reduces jobs from 10+ calls and multi‑MB traffic down to 2–3 calls and <500 KB, eliminating 429 errors even on complex frames.
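To make those five steps concrete, here is a minimal Python sketch of the pipeline. The token environment variable, the wanted_names filter and the batch size of 10 are our own assumptions; the /components, depth-limited /files and /images endpoints are the documented ones, but treat this as an outline rather than production code.

```python
import os
import time
import requests

BASE = "https://api.figma.com/v1"
HEADERS = {"X-Figma-Token": os.environ["FIGMA_TOKEN"]}  # placeholder env var for a personal access token

def get(path, **params):
    """Single GET with a conservative exponential backoff on 429."""
    delay = 2.0
    for _ in range(5):
        resp = requests.get(f"{BASE}{path}", headers=HEADERS, params=params, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        time.sleep(delay)
        delay *= 2
    raise RuntimeError("still rate limited after retries")

def convert(file_key, wanted_names):
    # 1. Metadata first: one cheap call for published component metadata only.
    components = get(f"/files/{file_key}/components")["meta"]["components"]

    # 2. Prune locally: keep only what we need, drop hidden/junk names (. or _ prefix).
    keep = [c for c in components
            if c["name"] in wanted_names and not c["name"].startswith((".", "_"))]
    node_ids = [c["node_id"] for c in keep]

    # 3. One depth-limited call for the pruned node tree keeps the response small.
    tree = get(f"/files/{file_key}", ids=",".join(node_ids), depth=2)

    # 4. Code analysis and token extraction happen locally on `tree`, no extra API calls.

    # 5. Images only for the final deduplicated list, in small batches.
    images = {}
    for i in range(0, len(node_ids), 10):
        batch = node_ids[i:i + 10]
        images.update(get(f"/images/{file_key}", ids=",".join(batch), format="png")["images"])
    return tree, images
```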

This is a significant architecture refactor, but it worked for us at canvaseight.io and at codeFlow Lab, so it is worth checking whether you already follow this pattern or whether making these changes solves your rate-limit problem. Ultimately it should also help lower the load on Figma’s API infrastructure.

Best regards,
Marcello

 

 


da.assets
  • Author
  • New Member
  • November 25, 2025

Hi, @Marcello Cultrera! Thank you for these suggestions, but over the 4 years of my tool’s existence all the optimizations you mentioned have already been applied, and unfortunately this is still not enough.
Also, it appears that these limitations occur only when importing other people’s projects.


Marcello Cultrera

Hi @da.assets, I’ve done some extensive testing this afternoon and evening (we’re based in Malaysia) on a specific client project and on the code refactoring, specifically for points 2 and 3, which made a difference.

Phase 2 mentioned earlier (pre-filtering nodes locally, with zero API calls) allowed us to eliminate garbage nodes before the expensive API calls, reducing payload size and processing time.

For hidden-node detection we skip nodes whose names start with . or _ (Figma’s convention for hidden elements). We also screen out auto-generated name patterns (timestamps, copy chains, bracketed or other odd suffixes, and certain node types).

Our remote-library detection now also filters out external library components, using both direct remote checks and cross-reference inference (componentSet -> child componentID).
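To show roughly what that looks like in practice, here is a simplified Python sketch of our phase 2 pre-filter. The junk patterns, the skipped node types and the local_component_ids set are our own heuristics and assumptions, not anything prescribed by Figma.

```python
import re

# Name patterns we treat as auto-generated junk (heuristics, tune per project).
JUNK_PATTERNS = [
    re.compile(r"\bcopy(\s+\d+)?$", re.IGNORECASE),  # "Button copy", "Button copy 3"
    re.compile(r"\d{4}-\d{2}-\d{2}"),                # embedded timestamps
    re.compile(r"\[\w+\]$"),                         # bracketed suffixes
]
SKIP_TYPES = {"SLICE", "STICKY"}                     # node types we never convert (example set)

def keep_node(node, local_component_ids):
    """Return True if a node should survive local pre-filtering (zero API calls)."""
    name = node.get("name", "")
    # Hidden by convention: names starting with "." or "_".
    if name.startswith((".", "_")):
        return False
    if node.get("type") in SKIP_TYPES:
        return False
    if any(p.search(name) for p in JUNK_PATTERNS):
        return False
    # Remote-library inference: instances whose componentId is not defined
    # in this file are treated as external and skipped.
    if node.get("type") == "INSTANCE" and node.get("componentId") not in local_component_ids:
        return False
    return True
```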

Phase 3 then fetches the pruned node trees in simple batches with minimal API calls, using a predictable decision tree with an adaptive batch size and a conservative retry strategy.
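And a simplified sketch of that phase 3 batching, using the /v1/files/:key/nodes variant with the documented ids and depth parameters; the starting batch size, the 300-second cap on waits and the pause between batches are our own conservative choices.

```python
import time
import requests

def fetch_pruned_trees(file_key, node_ids, token, depth=2):
    """Fetch pruned node trees in adaptive batches with a conservative retry strategy."""
    headers = {"X-Figma-Token": token}
    batch_size = 20                      # start with a moderate batch...
    results = {}
    i = 0
    while i < len(node_ids):
        batch = node_ids[i:i + batch_size]
        resp = requests.get(
            f"https://api.figma.com/v1/files/{file_key}/nodes",
            headers=headers,
            params={"ids": ",".join(batch), "depth": depth},
            timeout=60,
        )
        if resp.status_code == 429:
            # ...but shrink the batch and back off hard on the first 429.
            batch_size = max(5, batch_size // 2)
            time.sleep(min(int(resp.headers.get("Retry-After", "60")), 300))
            continue
        resp.raise_for_status()
        results.update(resp.json().get("nodes", {}))
        i += batch_size
        time.sleep(2)                    # small pause between successful batches
    return results
```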

So far it is looking good, but we’ll keep testing into the late evening and tomorrow, pending confirmation from Figma on what batching or request-optimization practices they suggest for preventing these extended cooldowns. Hope this helps.

 


da.assets
  • Author
  • New Member
  • November 30, 2025

bump