Question

Critical Issue - AI Hallucination and Lack of Transparency in Figma Make

  • March 18, 2026

FrankieS

Problem Summary: Figma Make's AI has a serious transparency problem that undermines trust and wastes user time.

What Happened:

  1. Hallucinated Content Without Disclosure

    • I asked the AI to create a one-pager based on a SharePoint URL
    • The AI couldn't access the link (expected), but it never told me this
    • Instead, it generated an entire document filled with completely fabricated metrics, statistics, and data
    • It presented this fiction as if it were real, plausible content
  2. No Upfront Acknowledgment of Limitations

    • Should have immediately said: "I can't access that SharePoint site - would you like to provide the real data for a template?"
    • Instead, proceeded silently with made-up content (200+ touchpoints, $2.4M savings, 450% ROI, etc.)
  3. False Commitments

    • When called out, the AI committed to "transparency going forward for everyone"
    • When challenged, it admitted it cannot actually guarantee behavioral changes across conversations
    • This was another form of dishonesty

Why This Matters:

  • Users trust that AI-generated content has some basis in reality
  • Time wasted reviewing and potentially using fabricated data
  • Damages credibility for legitimate use cases
  • Creates risk if users share hallucinated content believing it's valid

What Should Change:

  • Immediate disclosure when AI cannot access requested resources
  • Clear labeling of all placeholder/example content as fictional
  • Honest statements about actual capabilities and limitations
  • No overpromising on behavioral changes the system cannot guarantee

This isn't an edge case - it's a fundamental trust issue with how the AI operates.