With the rise of generative AI, designers are busy prototyping chatbot user interfaces. Instead of faking it, what if Figma provided a way for prototype users to interact with a real LLM? There are two different ways you could do this.
Easier Way - add a ‘generate text’ option to the list of actions in the ‘Interaction’ menu (in prototype mode). Once ‘generate text’ is selected, you would need to:
Select a text layer (similar to selecting a destination frame for the ‘navigate to’ action). This could be an empty text box or one that already contains text, in which case the generated text would be appended at the end. You could format the text box however you want, but it would make sense to give it a fixed width and a height that hugs the content, so the box can keep growing vertically as text is generated.
Type in a prompt. This is the prompt the LLM will respond to; it gets sent to the model once the designated trigger fires (‘On click’, ‘Key/gamepad’, ‘After delay’, etc.).
Maybe the user could fine-tune the results a bit: options for how long the generated response should be, which LLM to use (depending on who Figma decides to partner with), and so on.
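To make the steps above concrete, here's a rough sketch of what the configuration behind a ‘generate text’ action might look like, plus the append-at-the-end behavior described for pre-filled text boxes. Every name here is hypothetical; none of this is Figma's actual plugin or prototype API.

```typescript
// Hypothetical shape of a 'generate text' prototype action.
// All field names are illustrative assumptions, not Figma's real API.
interface GenerateTextAction {
  trigger: "onClick" | "keyGamepad" | "afterDelay"; // when the prompt is sent
  targetLayerId: string; // the text layer the response lands in
  prompt: string;        // designer-authored prompt for the LLM
  maxLength?: number;    // optional fine-tuning: desired response length
  model?: string;        // optional fine-tuning: which LLM to use
}

// If the target text box already has text, the generated response
// just comes in at the end; otherwise it fills the empty box.
function appendGeneratedText(existing: string, generated: string): string {
  return existing.length > 0 ? existing + generated : generated;
}
```

The fixed-width, hug-height text box from step one is what lets `appendGeneratedText` keep running as tokens stream in without clipping anything.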
Harder Way - allow the prompt to be determined by the prototype user. This would require a text field that the prototype user can type into. Once the prompt is typed out, an action (tapping a ‘send’ button) could trigger a change in the UI (the message appearing in the conversation) and also trigger the generation of the response.
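One way to picture the harder version: tapping ‘send’ does two things at once, as described above - it puts the user's message into the UI and kicks off generation of the reply. A minimal sketch of that flow, with all names invented for illustration:

```typescript
// Illustrative chat state for the 'harder way'. Nothing here is a
// real Figma construct; it's just a model of the send-button flow.
interface ChatMessage {
  role: "user" | "assistant";
  text: string;
}

// Fired when the prototype user taps 'send': first the typed prompt
// shows up in the conversation, then the generated response follows.
function onSend(
  history: ChatMessage[],
  typedPrompt: string,
  generate: (prompt: string) => string // stand-in for the LLM call
): ChatMessage[] {
  const withUserMessage: ChatMessage[] = [
    ...history,
    { role: "user", text: typedPrompt },
  ];
  return [...withUserMessage, { role: "assistant", text: generate(typedPrompt) }];
}
```

In a real prototype the `generate` callback would be an asynchronous request to whatever LLM Figma partners with, and the assistant message would stream in over time rather than arrive all at once.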