
figma make sitemap.xml and robots.txt

  • November 24, 2025
  • 3 replies
  • 391 views

IMPERIUM SECURITY SERVICES

I created my website with Figma Make and published it, but Google cannot access the sitemap.xml and robots.txt files needed for SEO. I can't upload them to the root of my domain because the domain is already connected to Figma Make via DNS.

3 replies

  • Figmate
  • November 25, 2025

Hi @IMPERIUM SECURITY SERVICES,

 

I understand you’re trying to configure both robots.txt and sitemap.xml for SEO on your published Figma Make site.

 

At the moment, Figma-hosted files and prototypes don’t provide access to a public folder or support custom server routes such as robots.txt or sitemap.xml. This means it’s currently not possible to configure those files directly within Figma for indexing or SEO control.

If your goal is to improve discoverability or SEO for content created in Figma, here are a few options you can consider:

  1. Host exported content externally:
    You can export your Figma prototypes or assets and host them on your own domain or server, where you can freely add files like robots.txt and sitemap.xml.

  2. Embed Figma prototypes on your own website:
    If you embed a prototype within a website that you control, your site’s existing robots.txt and sitemap.xml can handle SEO configuration.
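If you go with the first option, the two files themselves are small. A minimal sitemap.xml following the sitemaps.org protocol might look like this (the domain and URLs below are placeholders for your own site):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want indexed -->
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2025-11-24</lastmod>
  </url>
  <url>
    <loc>https://example.com/services</loc>
    <lastmod>2025-11-24</lastmod>
  </url>
</urlset>
```

Both files must be served from the root of the domain you control (e.g. `https://example.com/sitemap.xml`), which is why this only works when the content is hosted on your own server rather than on Figma's.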

Related to the first point, here’s a community thread where members have suggested an alternative workaround:
 


I see that having more control over SEO and public asset visibility would be useful for many users. This feedback has already been shared with our Product team, and since they regularly review feature requests submitted by the community, I'd recommend commenting directly on the relevant post in the Suggest a Feature category.

 

Thanks,


PierreL
  • New Participant
  • November 26, 2025

Will sitemap.xml be supported in Figma Make in the future?


Vaishwi
  • New Member
  • March 12, 2026

Hi @Junko3,


I've built my website using Figma Sites and I'm running into an issue where AI crawlers and LLMs (such as ClaudeBot) are receiving a 403 Forbidden error when attempting to access my site.

I'd like to understand:
1. Is Figma Sites blocking bot/crawler traffic at the server level by default?
2. Is there a way to whitelist specific user-agents (e.g. ClaudeBot, GPTBot, anthropic-ai) so LLMs can crawl and index my site?
3. Can I add or customize a robots.txt file at my domain root to allow crawler access?

Having my site accessible to LLMs and AI search tools is important for my business, so any guidance or roadmap visibility on this would be greatly appreciated.
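For context on point 3: if a robots.txt file at the domain root were supported, explicitly allowing these crawlers would be a few lines of the standard Robots Exclusion Protocol syntax. A sketch (the user-agent strings are the ones these bots publish; whether Figma Sites serves or honors such a file is exactly the open question):

```text
# Allow Anthropic's and OpenAI's crawlers explicitly
User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: GPTBot
Allow: /

# Allow all other crawlers
User-agent: *
Allow: /

# Point crawlers at the sitemap (placeholder URL)
Sitemap: https://example.com/sitemap.xml
```

Note that a 403 response happens at the web-server level before robots.txt is ever consulted, so if Figma's hosting blocks these user-agents server-side, a robots.txt alone would not resolve it.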

Thank you for your help!