
I am part of the Design Language Team at Philips, where I work on creating and maintaining a precise iconography system based on strict grids and style rules in Figma.

We would love to see Figma incorporate AI-assisted icon generation that works directly with these grids and guidelines.

How it could work:

  • Designers set up a grid + style parameters (stroke thickness, corner radius, proportions, etc.) inside Figma.

  • Using a text prompt or simple shape input, AI generates multiple icon variations that fit exactly within those rules.

  • Designers refine, adjust, and approve the output—keeping the creative process fast but still precise.
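To make the workflow above concrete, here is a rough sketch of what the rule set and the snap-back refinement step could look like, written in TypeScript since that is the language of Figma plugins. All names here (`IconStyleRules`, `snapToGrid`, the specific fields) are hypothetical illustrations, not a real Figma API:

```typescript
// Hypothetical rule set a team could define once per icon system.
interface IconStyleRules {
  canvasSize: number;     // e.g. a 24x24 icon grid
  snapIncrement: number;  // smallest allowed coordinate step, e.g. 0.5
  strokeWidth: number;    // uniform stroke thickness
  cornerRadius: number;   // default corner rounding
}

type Point = { x: number; y: number };

// Refinement step: snap every anchor point of a generated icon
// back onto the grid and clamp it to the canvas before approval.
function snapToGrid(points: Point[], rules: IconStyleRules): Point[] {
  const snap = (v: number) =>
    Math.min(
      rules.canvasSize,
      Math.max(0, Math.round(v / rules.snapIncrement) * rules.snapIncrement)
    );
  return points.map((p) => ({ x: snap(p.x), y: snap(p.y) }));
}

// Example: slightly off-grid AI output gets normalized.
const rules: IconStyleRules = {
  canvasSize: 24,
  snapIncrement: 0.5,
  strokeWidth: 1.5,
  cornerRadius: 2,
};
const raw: Point[] = [
  { x: 3.26, y: 11.9 },
  { x: 25.1, y: -0.2 }, // out of bounds, gets clamped
];
console.log(snapToGrid(raw, rules));
// → [ { x: 3.5, y: 12 }, { x: 24, y: 0 } ]
```

The point of the sketch is that the generation model can be loose, as long as a deterministic refinement pass like this guarantees the final geometry obeys the system's rules.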

Why this matters:

  • Ensures AI-generated icons are consistent with brand and system standards.

  • Speeds up icon exploration and reduces repetitive manual work.

  • Allows teams to focus on higher-level creativity while maintaining strict design language control.

  • At Philips, we maintain a library of 1,000+ custom icons, expected to grow to about 2,200. Managing consistency at that scale makes automation a clear necessity.

Why this is feasible:

Our icon guidelines are already extremely well-defined and rule-based—with grids, proportions, and styles documented in detail. This makes implementation far more straightforward than open-ended generative design: the AI would simply need to generate within the fixed parameters provided.

I believe this could be an incredible addition to Figma’s AI capabilities, especially for design teams managing large-scale icon libraries.

Some more context. I am proposing an exploration tool inside Figma: designers define the grid, stroke thickness, corner radii, proportions, and do’s/don’ts. The system then proposes variations within those exact rules, and a refinement step snaps everything back to the grid before approval.

We’re not making generic icons like ‘home’ or ‘settings’; we’re designing icons for complex devices, where we need hundreds of very specific metaphors that must still feel coherent and on-brand.

The idea is similar to how --sref works in Midjourney: you anchor generation to a style reference (in this case, our strict icon guidelines) so the results inherit the system’s look and feel by default. That way, you get fast ideation without sacrificing consistency.


Hey @pieterfrank - thank you for taking the time to drop in some really well-thought-out feedback. 

When it comes to AI, we’re still learning how we can improve its implementation across all of our products. I hear you - in your case, it would make sense to have some sort of ‘consistency’ even when using AI features -- ultimately, there are still hard guidelines/rules that the AI should abide by.


I saw that you also emailed this feedback to our support team - I’m leaving your topic open here on the forum to allow for additional thoughts from others, but I’d recommend continuing the conversation via the email you receive (just to make sure we don’t accidentally overlap on responses and get confused).

Once again, thank you! I hope to see you pop up in the community more often :D


Hello @ksn 
Thank you for your reply! The consistency is already well defined in our guidelines. What I’d like is to use AI as a brainstorming tool to help create complex medical icons (for example: Add Ring Marker on Vessel). Ideally, I’d be able to feed the AI with reference images and then receive proposals that already follow our guidelines — stroke width, corner radius, base grid, etc.

I’ll continue the conversation via email, and once again, thank you for reaching out! :)

