
JavaScript's forEach is very slow!

A topic by Turnaround Games created May 08, 2019 Views: 15,362 Replies: 9

Edit: as has been pointed out in the comments below by epidemian, this only affects JavaScript run using Node. The same test in a browser gives much more expected results. Thanks epidemian!

Edit 2: the performance of for...of loops and forEach is now much closer in newer versions of Node. But the standard for loop is still significantly faster.

I'm currently drinking fairly heavily from the functional programming Kool-Aid and have been using it extensively in new projects. I've found that a lot of inner logic can be written very elegantly in a functional programming style.

Most of my game projects are written for the web using JavaScript (or, more specifically, TypeScript). However, I was recently made aware of how much slower many of JavaScript's functional-style routines are compared to their non-functional counterparts. It saddened me.
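As a flavour of what I mean, here's a hypothetical example (the enemies data and the range threshold are made up for illustration):

const enemies = [
    { distance: 5, damage: 3 },
    { distance: 20, damage: 7 },
    { distance: 8, damage: 2 },
]

// Imperative style: total damage of enemies within range 10.
let totalDamage = 0
for (let i = 0; i < enemies.length; i++) {
    if (enemies[i].distance < 10) {
        totalDamage += enemies[i].damage
    }
}

// Functional style: the same computation reads almost like a sentence.
const totalDamageFn = enemies
    .filter(e => e.distance < 10)
    .map(e => e.damage)
    .reduce((acc, d) => acc + d, 0)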

Benchmarking

As an example, let's compute the sum of the squares of all the elements in an array.

We will initialise the array as follows:

// build an array of 32 million pseudo-random numbers
const data = []
for(let i = 0; i < 32000000; i++){
    data.push(i * Math.random())
}

We will then compare:

  • standard for loop
  • for of loop
  • forEach
  • reduce

I am using Node v10.15.3.

Standard for loop

console.time('for')
let sum = 0
for(let i = 0; i < data.length; i++){
    sum += data[i] * data[i]
}
console.timeEnd('for')

This clocks in at ~35ms.

For of loop

console.time('for of')
let sum = 0
for(let datum of data){
    sum += datum * datum
}
console.timeEnd('for of')

This clocks in at ~520ms.

ForEach

console.time('forEach')
let sum = 0
data.forEach(x => {
    sum += x * x
})
console.timeEnd('forEach')

This clocks in at ~1200ms.

Reduce

console.time('reduce')
// pass 0 as the initial accumulator so that the first element is squared too
let sum = data.reduce((acc, curr) => {
    return acc + curr * curr
}, 0)
console.timeEnd('reduce')

This clocks in at ~600ms.

Overview

As a summary, the timings were:

standard for    ~35ms
for of          ~520ms
forEach         ~1200ms
reduce          ~600ms
I found this very surprising.

I typically argue for a simpler, more readable programming style over eking out every last bit of performance for its own sake. However, these results have really made me stop and think about using this more functional style when making games in JavaScript. Which is a shame. It's really hard to argue for a nicer coding style when it's around thirty times slower, especially if it's a style widely adopted throughout a codebase.

It's worth noting that many uses of JavaScript don't involve looping over large arrays and doing lots of maths; they typically handle button presses, AJAX calls and so on. In those cases this isn't really an issue and the nicer coding style wins. For games, however, this kind of loop is very common, e.g. iterating over all the objects in the world every frame.
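For a hypothetical illustration of the kind of per-frame loop I mean (the object shape is made up):

// Imperative style: runs every frame over every object in the world.
function update(world, dt) {
    for (let i = 0; i < world.objects.length; i++) {
        const obj = world.objects[i]
        obj.x += obj.vx * dt
        obj.y += obj.vy * dt
    }
}

// The same update in the functional style under discussion.
function updateFunctional(world, dt) {
    world.objects.forEach(obj => {
        obj.x += obj.vx * dt
        obj.y += obj.vy * dt
    })
}

A loop like this runs sixty times a second over potentially thousands of objects, so a per-iteration overhead that would be negligible in a click handler adds up quickly here.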

It's also worth noting that this is very much a micro-benchmark, so the effect may be lessened in more real-world use cases. It would be interesting to investigate this.

I'm hoping that updates to JavaScript engines will, over time, bring down the overhead of the more functional style. They've done a fairly amazing job making the standard for loop as fast as it is!

Anyway, thanks for reading. I hope some of you found this interesting / useful.

Turnaround Games

homepage - twitter - reddit

(+3)

Kinda related, but something different.

You should add a parameter referencing the array you are looping over to the callback of the reduce method (or any other functional loop). Et voilà, your loop is quicker. You don't even need to use the reference.

Examples:

// this one is quicker...
sum = data.reduce((acc, curr, i, arr) => {
    return acc + curr * curr
})
// than this one
sum = data.reduce((acc, curr) => {
    return acc + curr * curr
})

(I am using Node 8)

(+1)

Interesting find!

I wonder how this came to be. Looking forward to a more in-depth post or article about it.

(+1)

I came across it while reading this Medium article: https://medium.com/@alexbeer/hi-nice-sum-up-of-different-duplicate-filters-eb4d3895fd14

Just tried it. You're right! Very strange.

(+1)

It's worth noting that, although the numbers on Node.js are quite bad for forEach() and reduce(), on browsers the story is quite different. Chrome yields pretty similar numbers for the four snippets, with reduce() even being slightly faster than all the alternatives:

for: 1850.9541015625ms
for of: 1702.959228515625ms
forEach: 1835.01708984375ms
reduce: 1259.451904296875ms

And Firefox yields impressively better results, having reduce() also at the top:

for: 137ms - timer ended
for of: 150ms - timer ended
forEach: 103ms - timer ended
reduce: 54ms - timer ended

So, as usually happens with this sort of thing, it's not a matter of "forEach() being slow"; it's a matter of different JS engines applying different optimizations. In general, I think it's better to stick to the most idiomatic or clearest alternative. Not only does that make the code more readable, but there's also a high chance that those idiomatic patterns will be optimized by JS engines in the (not so distant) future.

Only in cases where we have actually measured a performance impact on the target runtimes would it be advisable to go for the more "performant" but less idiomatic/readable solution. And even in those cases, I'd recommend leaving a comment explaining why we went for the less idiomatic solution, just in case JS engines catch up and it stops being the most performant option :)
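Such a comment might look something like this (a hypothetical sketch; the numbers are just for illustration):

// PERF: plain for loop instead of data.reduce(); measured ~30x faster
// on Node 10 for this hot path. Revisit if future engine versions
// optimize the functional forms, and prefer reduce() again if so.
let sum = 0
for (let i = 0; i < data.length; i++) {
    sum += data[i] * data[i]
}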

Just tested this. Yes, my results are similar to yours.

(+1)

The difference is that the forEach closure is writing to a bound variable. If you do the same thing in the for loop, it gets slow, too:

{
    console.time('for 1')
    let sum = 0
    for (let i = 0; i < data.length; i++) {
        sum += data[i] * data[i];
    }
    console.timeEnd('for 1')
}
{
    console.time('for 2')
    let sum = 0
    for (let i = 0; i < data.length; i++) {
        sum += data[i] * data[i];
    }
    console.timeEnd('for 2')

    function foo() { sum = 1; } // create a closure bound to sum; note we never even call this
}

Result:

for 1:  31.060ms
for 2: 789.796ms

Note that the loops are identical. The only difference is that in the second block, we've bound sum to a closure. That makes writing to it much more expensive.

That's why your forEach loop is so much slower. It's the only code in your example that writes to a bound variable inside the loop. You'd get the same performance degradation in any of your other looping constructs if you do the same thing.
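One way to check this yourself (a sketch, not from the post above): compare a forEach that only writes into a preallocated output array with one that accumulates into a captured variable. If the explanation above is right, only the second version should pay the penalty.

// Writes go to array elements, not to a captured binding.
console.time('forEach, no captured write')
const squares = new Float64Array(data.length)
data.forEach((x, i) => { squares[i] = x * x })
console.timeEnd('forEach, no captured write')

// Accumulates into a variable captured and written by the callback.
console.time('forEach, captured write')
let total = 0
data.forEach(x => { total += x * x })
console.timeEnd('forEach, captured write')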

I think your test is not correct.

Take a look at this code. I'll use the benchmark npm package for testing, and I'll extract the variables and the closure creation to the top of the script so they are created only once rather than on every test run.

const Benchmark = require('benchmark');
const suite = new Benchmark.Suite();
const data = [];
for (let i = 0; i < 32000000; i++) {
  data.push(i * Math.random());
}
let sum = 0;
const adder = x => (sum += x * x);
const reducer = (acc, curr) => acc + curr * curr;
suite
  .add('standard for', function() {
    sum = 0;
    for (let i = 0; i < data.length; i++) {
      sum += data[i] * data[i];
    }
  })
  .add('for-of', function() {
    sum = 0;
    for (let datum of data) {
      sum += datum * datum;
    }
  })
  .add('forEach', function() {
    sum = 0;
    data.forEach(adder);
  })
  .add('reduce', function() {
    sum = data.reduce(reducer);
  })
  .on('cycle', function(event) {
    console.log(String(event.target));
  })
  .on('complete', function() {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run({async: true});

The results are the following in Node v10.15.1:

standard for x 0.68 ops/sec ±1.68% (6 runs sampled)
for-of       x 0.64 ops/sec ±1.01% (6 runs sampled)
forEach      x 0.66 ops/sec ±1.07% (6 runs sampled)
reduce       x 1.72 ops/sec ±0.41% (9 runs sampled)
Fastest is reduce

As you can see, there is almost no difference between the first three, and the reduce version is clearly the fastest.

(+1)

You've stepped into the trap that EricTboneJackson pointed out. By using sum inside a closure, write access to it is much slower. Because a single sum variable is shared across all your tests, this impacts all of them, even though not all of them require sum to be captured at all.

For comparison, I get these results on Node 12.2.0: 

standard for x 2.99 ops/sec ±0.42% (12 runs sampled)
for-of x 2.00 ops/sec ±0.50% (9 runs sampled)
forEach x 1.11 ops/sec ±0.77% (7 runs sampled)
reduce x 1.22 ops/sec ±0.28% (8 runs sampled)
Fastest is standard for

Then after performing these modifications:

  • capture sum only if necessary: remove the top-level let sum declaration and declare it inside each test instead (replace sum = 0 with let sum = 0)
  • move the closure declarations inside the tests that use them (only strictly necessary for adder, since it is the only closure that captures sum; see the sketch after the results below)

The results are now much more drastic:

standard for x 32.70 ops/sec ±0.25% (56 runs sampled)
for-of x 8.21 ops/sec ±0.33% (25 runs sampled)
forEach x 1.00 ops/sec ±0.73% (7 runs sampled)
reduce x 1.22 ops/sec ±0.11% (8 runs sampled)
Fastest is standard for
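Concretely, the modified tests would look something like this (a sketch of the changes described above, reusing suite and data from the earlier snippet; not a verbatim quote):

// sum is now local to each test; adder moves inside the forEach test
// because it is the only closure that captures sum. reducer captures
// nothing, so it can stay at the top of the script.
const reducer = (acc, curr) => acc + curr * curr;
suite
  .add('standard for', function() {
    let sum = 0;
    for (let i = 0; i < data.length; i++) {
      sum += data[i] * data[i];
    }
    return sum;
  })
  .add('forEach', function() {
    let sum = 0;
    const adder = x => (sum += x * x);
    data.forEach(adder);
    return sum;
  })
  .add('reduce', function() {
    return data.reduce(reducer);
  });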

So, the main take-away should be: Where performance is important, avoid unnecessary closures.