Figma layers tree traversal + estimating size

I’m writing a feature for my plugin that has to loop through all nodes in the file, and I want to show the user a progress bar so they know approximately how long the processing will take. For that I need to estimate the size of the search tree.

At first I tried a simple estimate: the average branching factor raised to the power of the average depth, obtained by random sampling. The variance was too big to be useful (from hundreds to billions). Then I switched to taking the median of that value; it was better, but still pretty far from reality. It overestimated a lot.

So I started googling and found this paper. After reading through it about 50 times and googling words such as backtracking, random probing, and multiset, I got a rough idea of how it should work. Here is the algorithm I have so far and the results it produces:

The results seem to be close to 4-5 depending on the file, which looks more like a plain average branching factor than an estimated tree size. I’m probably doing something wrong here, but I don’t understand what. I do somewhat understand the symptom: the estimate only takes the branching factor into account, while the depth is completely ignored.

Can anyone point me in the right direction and help me understand the described algorithm better? Or maybe suggest a better one?

Update 1: I found this article that shares the same formula. It gives me the idea that I need to calculate this for every node and store it, but I don’t see any relationship between nodes in this formula. It seems to need only the multiset (array) of branch counts of the nodes I visited.

Update 2: Reading the above text more carefully “prob(d)… is related to the probability that we visit such a depth using random probing”. Which depth is it talking about? The depth at which we are calculating it? And how is it related? I’m so confused…
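For anyone landing here with the same confusion, here is my current reading of the random-probing idea as a runnable sketch on a mock tree (plain JavaScript, not the Figma API; this is only my interpretation of the paper, so treat it as a sketch). A single probe walks from the root, picking a uniformly random child at each step. The running product of branching factors is the reciprocal of the probability of having reached the current node, so summing those products along the probe gives an unbiased estimate of the tree size, and averaging many probes reduces the variance:

```javascript
// Mock tree: a node is just { children: [...] }.
function makeNode(children) { return { children: children } }
const leaf = () => makeNode([])

// A small test tree to probe.
const tree = makeNode([
  makeNode([leaf(), leaf(), leaf()]),
  makeNode([makeNode([leaf(), leaf()]), leaf()]),
  leaf(),
])

// Exact size, for comparison.
function size(n) {
  return 1 + n.children.reduce((s, c) => s + size(c), 0)
}

// One random probe: walk root -> leaf choosing a uniformly random
// child at each step. The running product of branching factors is
// 1 / prob(reaching the current node); summing those products over
// the probed path yields an unbiased estimate of the tree size.
function probe(root) {
  let node = root
  let product = 1   // 1 / probability of having reached `node`
  let estimate = 1  // counts the root
  while (node.children.length > 0) {
    product *= node.children.length
    estimate += product
    node = node.children[Math.floor(Math.random() * node.children.length)]
  }
  return estimate
}

// Average many probes to tame the variance.
function estimateSize(root, probes) {
  let sum = 0
  for (let i = 0; i < probes; i++) sum += probe(root)
  return sum / probes
}

console.log('real size:', size(tree))
console.log('estimate :', estimateSize(tree, 10000))
```

On real Figma files the per-probe variance can still be huge (which matches what I saw with my sampling attempts), so many probes are needed before the average settles.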


I spent the whole day working on it today and here is what I found.

First of all, it turns out the papers I was reading were mostly solving this problem for binary trees, not planar trees (trees with any number of children per node). Thanks @Finch for helping me! So yeah, that was a fail. I guess solving this for a planar tree is too abstract of a problem.

So I planned to do the following instead:

  1. Use findAll if it takes up to 5 seconds. In the meantime show the 5-second loading indicator.
  2. If findAll takes more than 5 seconds, exit the calculation (split it into a findAll on every direct child of the page, so it’s easy to exit early and the UI doesn’t freeze). Multiply the average number of nodes per child of the page by the number of children left to process to get the estimate.
  3. Use one page size as an anchor for other page sizes. Use the average/median page size as an approximate size when data is unavailable.
  4. Save page size in plugin data so it doesn’t have to be calculated again.

After some testing, I settled on a pretty simple solution, a bit different from what is described above. Its simplicity and striking accuracy astonish me! It can estimate the number of nodes in the file given only one fully counted direct child of a page, with an error margin of about 1000 nodes. Of course it depends on how everything is laid out in the file, but for me this is accurate enough. On smaller files the error can be larger, but you don’t really need the estimate there, since the exact size is fast to calculate.

I mocked it up as a simple tiny plugin to quickly test how it works on different files:

Here is the full tree size estimator plugin code:
var page = figma.currentPage
var nodesFound = 1
var nodesLeft = 0

var childrenTotal = 0     // direct children across all frames on the page
var childrenProcessed = 0 // direct children of the fully counted frames

var len = page.children.length
var limit = 1 // how many frames to count fully

for (let i = 0; i < len; i++) {
  const node = page.children[i]
  if (!node.children || node.children.length === 0) {
    continue
  }

  // count the number of direct children of nodes on the page
  childrenTotal += node.children.length

  if (i < limit) { // fully count all nodes before the limit
    childrenProcessed += node.children.length
    node.findAll(n => { nodesFound++; return true })
  } else {
    node.findAll(n => { nodesLeft++; return true })
  }
}

// average number of nodes per direct child in the fully counted frames
var childWeight = childrenProcessed > 0 ? nodesFound / childrenProcessed : 0
// extrapolate: number of unprocessed direct children times the child weight
var left = (childrenTotal - childrenProcessed) * childWeight

console.log('left approx:', left)
console.log('total approx:', left + nodesFound)
console.log('real total:', nodesFound + nodesLeft)

figma.closePlugin()

And it worked wonderfully! This algorithm finds how many children there are and how many nodes there are per child in the first one or two frames of the document (the limit variable controls how many). I do it by first counting all nodes in the frame and all of its direct children, and dividing the former by the latter. Then it applies this average child weight to all other frames, given their number of direct children. After making this plugin I copied all this code to Master, replaced the manual hard limit constant with a time limit, and it was done.
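The time-limit idea can be sketched like this (mock stand-ins instead of the Figma API so it runs anywhere; `countAll` plays the role of findAll, and all names here are illustrative, not my actual plugin code):

```javascript
// Mock stand-in for node.findAll: counts all descendants of a node.
// A mock node is just { children: [...] } or { children: null }.
function countAll(node) {
  let total = -1 // do not count the node itself, like findAll
  ;(function walk(n) {
    total++
    if (n.children) n.children.forEach(walk)
  })(node)
  return total
}

// Fully count frames until the time budget runs out, then extrapolate
// the rest from the average number of nodes per direct child.
function estimateTotal(pageChildren, budgetMs) {
  const deadline = Date.now() + budgetMs
  let nodesFound = 0        // nodes inside fully counted frames
  let childrenProcessed = 0 // direct children of fully counted frames
  let childrenTotal = 0     // direct children of all frames

  for (const frame of pageChildren) {
    const kids = frame.children || []
    childrenTotal += kids.length
    if (Date.now() < deadline) {
      childrenProcessed += kids.length
      nodesFound += countAll(frame)
    }
  }

  if (childrenProcessed === 0) return childrenTotal // nothing counted in time
  const childWeight = nodesFound / childrenProcessed // nodes per direct child
  return nodesFound + (childrenTotal - childrenProcessed) * childWeight
}
```

In the real plugin, pageChildren would be page.children and countAll would be node.findAll. Note the budget is only checked between frames, so a single huge frame can still blow past it.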

Here is the final progress bar in action:

Animated progress bar GIF


As an alternative to my findAll method, Luke John on the Plugin Developers Slack suggested using a custom traversal method. He shared a couple of snippets (below) that can be used to compare the two. In my experience, a custom traversal method works much slower than findAll.

// Measures the speed of findAll on the whole page
let start = Date.now();
let count = 0; figma.currentPage.findAll(n => count++);
console.log(count, Date.now() - start); // 10s for 50k nodes (on my computer)

When the second method is run directly after findAll, it is indeed 1-2 seconds faster than findAll. But if you remove the findAll call, it works much slower. That probably happens because of some caching Figma does in the background, which simply doesn’t get a chance to occur when the custom traversal runs on its own. So for a fair comparison you need to test them as two separate plugin runs, not consecutively in one run.

// Traverses the tree recursively 
// (run in a separate plugin, not after findAll)
function traverse(n) {n.children?.forEach(traverse);count++}
let start = Date.now();
let count = 0; 
traverse(figma.currentPage);
console.log(count, Date.now() - start); // 17s for 50k nodes

Conclusion: use findAll instead of custom traversal methods to make things go fast. But ideally you should avoid traversing the whole document altogether. Find other ways, like having the user select only the necessary parts of the document.


I found a few more ways of iterating through the layers tree in Figma and I want to share them with you.

The first method is the one I already showed above: plain recursion with forEach. It’s pretty fast, but still about 2x slower than findAll on my current computer. Unfortunately, as I said, I can’t use it since there is no way to make it async.

function traverseRecursive() {
  console.log('Normal recursion') // 10s for 45k nodes

  function recursive(n) {
    count++;
    if (!n.children) {
      return;
    }
    n.children.forEach(recursive);
  }

  recursive(figma.currentPage);
}

The second method does essentially the same thing, but much slower. This is why I was stuck on forEach for a while: I needed a for loop to be able to make the function async, but the for-loop version was slower and at first I didn’t know why. Seemingly, the only change is from forEach to a for loop.

But why is it slower? The mistake I made was reading the node’s .children property too many times. It turns out to be a pretty slow operation, so reading it a couple of extra times here and there makes everything very slow.

function traverseRecursiveSlow() {
  console.log('Slow recursion') // 18s for 45k nodes

  function recursive(n) {
    count++;
    if (!n.children) {
      return;
    }
    for (var i = 0; i < n.children.length; i++) {
      recursive(n.children[i]);
    }
  }

  recursive(figma.currentPage);
}

To fix this rookie mistake, simply save .children to a variable. Even the length should be read from a variable: note that I’m using a saved length instead of n.children.length, as the latter can double (or in bad cases quadruple) the traversal time!

function traverseRecursiveFast() {
  console.log('Fast recursion') // 10s for 45k nodes

  function recursive(n) {
    count++;
    // read the slow .children getter exactly once per node
    let children = n.children
    if (!children) {
      return;
    }
    let length = children.length

    for (var i = 0; i < length; i++) {
      recursive(children[i]);
    }
  }

  recursive(figma.currentPage);
}

Our next contestant is the same function with the logic flipped. Again, I’m saving the node’s children to a variable to make the traversal faster. The traversal order isn’t reversed; only the logic in the code is inverted. The idea is the same, so pick whichever is more readable/convenient for you. Note that this one accepts an array of nodes instead of a single node.

function traverseRecursiveReverse() {
  console.log('Reversed recursion (fast)') // 10s for 45k nodes

  function recursive(n) {
    let len = n.length;
    if (len === 0) {
      return;
    }

    for (var i = 0; i < len; i++) {
      const node = n[i];
      count++;
      const nc = node.children;
      if (nc) {
        recursive(nc);
      }
    }
  }

  recursive(figma.currentPage.children);
}

Next, we have another pretty cool method! It is taken from this awesome article: Using ES6 Generators To Recursively Traverse A Nested Data Structure (I actually took inspiration for the previous method from it). It is interesting because it lets you consume the tree in one flat loop, similar to findAll, but with the possibility of going async without making the traversal function itself async: for my case, I put the await for the progress bar inside the while loop. It even seems to be slightly faster than the for-loop methods, though I’m not entirely sure why.

// 9s for 45k nodes
function traverseGenerator() {
  console.log('Recursive generator method')

  function* processData(nodes) {
    const len = nodes.length;
    if (len === 0) {
      return;
    }

    for (var i = 0; i < len; i++) {
      var node = nodes[i];
      yield node;

      let children = node.children;

      if (children) {
        yield* processData(children);
      }
    }
  }

  var it = processData(figma.currentPage.children);
  var res = it.next();

  while (!res.done) {
    count++;
    res = it.next();
  }
}

And finally, the fastest of all, but unusable for me since I can’t show a progress bar or let the user cancel the plugin if it’s taking too long: Figma’s built-in findAll.

// 5s for 45k nodes
function traverseFindAll() {
  console.log('Using figma.findAll')
  figma.currentPage.findAll((n) => count++);
}

And here is the code to finish it off, if you want to run this plugin for testing. Don’t forget to use only one function per plugin run, as the search results of previous functions get cached, making subsequent traversals faster. The full code is also available as a GitLab Snippet.

let count = 0;
let start = Date.now();
traverseRecursiveSlow(); // change this func. every run
console.log("nodes:", count, "time:", Date.now() - start);
figma.closePlugin();

By the way, I was using Figma’s Design System UI2 file as the test subject: https://figma.fun/7FwJwy, plus a bit of the Ant Design System: https://figma.fun/1m5kOG (which helped me figure out that my slow method was extremely slow on long arrays of nodes, something UI2 doesn’t have).


This is a great investigation and very helpful for some issues I’m trying to tackle.

You mentioned running the tasks async. Were you able to get them to run without blocking the Figma thread and freezing the canvas? Do you have example code of how you ran them async?

You need to pause the async task every now and then for a couple of milliseconds. The simplest way is to do it on every Xth node, e.g. every 100 or so. Just use a custom delay function; you can easily find async sleep or pause helpers on the web. They wrap setTimeout in a promise to turn it into an awaitable function. Hope this helps!
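For completeness, a minimal sketch of what that looks like, combining the generator from my earlier post with a setTimeout-based sleep (onProgress is a hypothetical callback, e.g. posting a message to the plugin UI; the mock tree nodes are plain { children } objects):

```javascript
// Promise-based sleep built on setTimeout.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms))

// Lazily walks a tree of { children } nodes in document order.
function* walk(nodes) {
  for (const node of nodes) {
    yield node
    if (node.children) yield* walk(node.children)
  }
}

// Counts nodes, pausing on every 100th node so the UI stays responsive.
// `onProgress` is a hypothetical callback, e.g. figma.ui.postMessage.
async function countAsync(rootChildren, onProgress) {
  let count = 0
  for (const node of walk(rootChildren)) {
    count++
    if (count % 100 === 0) {
      onProgress(count)
      await sleep(1) // give the main thread a chance to breathe
    }
  }
  return count
}
```

In a plugin you would call countAsync(figma.currentPage.children, ...) and could also check a cancellation flag inside the loop to let the user abort.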

Thanks, that’s what I ended up doing: adding a sleep call every 100-ish iterations. (I found your comment on another thread, so still credit to you!)