The current API returns raw JSON that is far too heavy for LLMs (~960k tokens), while the Figma UI estimates only ~67k tokens for the same component.
I am investigating why there is such a large gap (roughly 14x) between Figma's estimated token count and the actual API response. How can we accurately estimate the post-execution payload size of the MCP tool? We need a way to gauge the true context weight, including metadata overhead, before making the request.
I fetched this payload via the get_design_context tool.
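For reference, here is a minimal sketch of the kind of pre-flight estimate I have in mind. It serializes the payload the way a client would send it and applies the rough ~4-characters-per-token heuristic; an actual tokenizer (e.g. tiktoken for OpenAI models) would give exact counts, and the `sample` payload below is purely hypothetical, not real `get_design_context` output:

```python
import json

def estimate_tokens(payload: object) -> int:
    """Rough token estimate for a JSON payload.

    Uses the common ~4 characters-per-token heuristic for English/JSON
    text. A model-specific tokenizer (e.g. tiktoken) is exact; this is
    only a cheap pre-flight gauge of context weight.
    """
    # Compact separators so we do not count pretty-printing whitespace
    # that the wire format would not carry.
    text = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return len(text) // 4

# Hypothetical, trimmed-down design-context payload for illustration only.
sample = {
    "node": {"id": "1:23", "type": "FRAME", "children": [{"id": "1:24"}]},
    "styles": {"fill": "#FFFFFF"},
}
print(estimate_tokens(sample))
```

If the heuristic estimate and the UI's figure still diverge by an order of magnitude, that would point at metadata the serialized response carries but the UI estimate ignores.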
