Michel Weststrate bb20c7fd00 Implemented perf tests
Summary:
Added some performance tests for DataSource. Currently they simply use Jest and run in a single pass, so the setup is not the most isolated (we do GC between tests), but it helps to find some global trends at least.

For every scenario two datasets are used, one of 100,000 items and one of 200,000 items, to verify that all important functions scale roughly linearly or better.

The `append` and `update` test cases perform 1000 insertions / updates; all other tests perform a single operation.
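For illustration, a single append case could look roughly like the sketch below. This is not the actual test code: `createDataSource` is assumed to be flipper-plugin's public factory, the row shape and the `append` call are illustrative, and timing uses a plain `performance.now()`.

```ts
// Rough sketch of one perf case, not the real test suite.
// Assumes flipper-plugin exposes createDataSource and an append() method.
import {performance} from 'perf_hooks';
import {createDataSource} from 'flipper-plugin';

type Row = {id: number; title: string; done: boolean};

function generateRows(count: number): Row[] {
  return Array.from({length: count}, (_, id) => ({
    id,
    title: `row ${id}`,
    done: id % 2 === 0,
  }));
}

test('append 1000 rows to a 100,000 row DataSource', () => {
  const ds = createDataSource<Row>(generateRows(100_000));
  const start = performance.now();
  for (let i = 0; i < 1000; i++) {
    ds.append({id: 100_000 + i, title: `new ${i}`, done: false});
  }
  console.log(`append x1000 took ${(performance.now() - start).toFixed(1)}ms`);
});
```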

The keyed vs. unkeyed variants verify that performance doesn't drop when we maintain a by-key lookup table.
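A sketch of the difference between the two variants, assuming `createDataSource` accepts an optional `key` option naming a unique field and that `upsert` is the keyed update entry point (both are assumptions here, not verified API):

```ts
import {createDataSource} from 'flipper-plugin';

type Row = {id: number; title: string};

// Unkeyed: plain append, no lookup table is maintained.
const unkeyed = createDataSource<Row>([{id: 1, title: 'a'}]);
unkeyed.append({id: 2, title: 'b'});

// Keyed: a by-key lookup table is kept so records can be replaced by key
// instead of by index. The perf tests check this bookkeeping is roughly free.
const keyed = createDataSource<Row>([{id: 1, title: 'a'}], {key: 'id'});
keyed.upsert({id: 1, title: 'a (updated)'});
```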

The sorted variations start from an already sorted and filtered setup. This nicely shows where DataSource really starts to shine: insertion sort versus fully reallocating and re-sorting the array.
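The underlying idea, in library-free form (a generic sketch, not DataSource's actual code): keeping the view sorted means each insert only needs a binary search plus a single splice, instead of a full reallocation and re-sort after every mutation.

```ts
// Insert into an already-sorted array: O(log n) comparisons to find the slot,
// then one splice. No full re-sort, no full reallocation of the array.
function insertSorted<T>(
  sorted: T[],
  value: T,
  cmp: (a: T, b: T) => number,
): void {
  let lo = 0;
  let hi = sorted.length;
  while (lo < hi) {
    const mid = (lo + hi) >>> 1;
    if (cmp(sorted[mid], value) <= 0) {
      lo = mid + 1;
    } else {
      hi = mid;
    }
  }
  sorted.splice(lo, 0, value);
}
```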

The reference fake implementation does what we do in most places in Flipper: shallow clone and allocate an entirely new array on every append / update to preserve immutability. It compares pretty terribly, especially considering that the perf tests 'render' only once, which skews the stats in favor of the fake implementation: we sort & filter only once, at the end of the entire batch of updates (so after inserting a thousand items, for example).
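A sketch of what that fake reference implementation boils down to (all names here are illustrative, not the actual test code):

```ts
type Row = {id: number; title: string; done: boolean};

// Every mutation allocates a fresh array to preserve immutability.
function fakeAppend(rows: readonly Row[], row: Row): Row[] {
  return [...rows, row];
}

function fakeUpdate(rows: readonly Row[], index: number, row: Row): Row[] {
  const next = rows.slice();
  next[index] = row;
  return next;
}

// In the perf tests this 'render' (sort & filter) runs only once per batch,
// not once per mutation, which flatters the fake implementation.
function fakeRender(rows: readonly Row[]): Row[] {
  return rows.filter((r) => !r.done).sort((a, b) => a.id - b.id);
}
```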

In contrast, the DataSource tests keep the data sorted at all times, so 'rendering' is already included in the measurements. For the fake datasource, re-sorting the full 200K rows after each insert would pretty much put bitcoin-caused global warming to shame. Also note that the increased GC pressure isn't reflected in the fake implementation's numbers, as we GC outside the measurements.

Reviewed By: nikoant

Differential Revision: D26913145

fbshipit-source-id: 955f1923dce40997cd2e81ea9e80832c6e71c99c
2021-03-16 15:03:46 -07:00