> You are viewing Pre-Alpha documentation.

# Benchmarks

v0 maintains performance benchmarks for all core composables. This page explains what gets benchmarked, how to interpret metrics, and what the performance tiers mean.

Advanced · Jan 14, 2026

## Why Benchmark

Headless UI libraries must be fast—they’re foundational infrastructure. v0 benchmarks exist to:

  1. Catch regressions — CI fails if performance drops

  2. Guide optimization — Data-driven decisions, not guesses

  3. Set expectations — Users know what to expect at scale

  4. Validate minimal reactivity — Prove the tradeoffs are worth it

## What Gets Benchmarked

### Core Composables

| Composable | Why It's Benchmarked |
| --- | --- |
| `createRegistry` | Foundation for all collections; performance here affects everything |
| `createTokens` | Design tokens can grow large; alias resolution must scale |
| `useFilter` | Search/filter on large datasets must remain responsive |
| `useVirtual` | Virtual scrolling is performance-critical by definition |
| `useDate` | Date operations are frequent in UIs |

### Operation Categories

Each benchmark file covers multiple operation types:

| Category | Fixture Type | What It Measures |
| --- | --- | --- |
| Initialization | Fresh | Setup/creation cost |
| Lookup operations | Shared | Single item access (O(1) expected) |
| Mutation operations | Fresh | Updates and modifications |
| Batch operations | Fresh | Bulk actions (onboard, offboard) |
| Computed access | Shared | Cached/derived value reads |
| Seek operations | Shared | Directional search |

**Shared** fixtures reuse the same data structure across iterations, which is safe for read-only operations.

**Fresh** fixtures create new data for each iteration, which is required for mutations: otherwise each iteration would operate on the state the previous one left behind, skewing measurements.
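The distinction can be illustrated with a minimal sketch (plain TypeScript, not v0's actual bench code; `makeFixture` and `mutatingOp` are hypothetical stand-ins):

```typescript
// Why mutation benchmarks need fresh fixtures: a shared fixture drifts
// as iterations mutate it, so later iterations measure different work.

function makeFixture (): Map<string, number> {
  return new Map(Array.from({ length: 3 }, (_, i) => [`item-${i}`, i] as [string, number]))
}

function mutatingOp (registry: Map<string, number>): void {
  registry.set(`item-${registry.size}`, registry.size)
}

// Shared fixture: created once, reused by every iteration.
const shared = makeFixture()
for (let i = 0; i < 5; i++) mutatingOp(shared)
// shared.size is now 8: each iteration worked on a larger collection
// than the last, so per-iteration timings are not comparable.

// Fresh fixture: created per iteration, so every run measures the same work.
const sizes: number[] = []
for (let i = 0; i < 5; i++) {
  const fresh = makeFixture()
  mutatingOp(fresh)
  sizes.push(fresh.size)
}
// sizes is [4, 4, 4, 4, 4]: identical state on every iteration.
```

Read-only operations (lookups, computed access, seeks) cannot drift the fixture, which is why they can safely share one.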

## Performance Tiers

| Tier | O(1) threshold | O(n) threshold |
| --- | --- | --- |
| Blazing Fast | ≥100,000 ops/s | ≥10,000 ops/s |
| Fast | ≥10,000 ops/s | ≥1,000 ops/s |
| Good | Below Fast | Below Fast |

The tier is determined by the fastest benchmark in each file.
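As a sketch of how that works (illustrative names, not v0's actual metrics code), the thresholds above can be keyed by detected complexity and applied to the fastest result in a file:

```typescript
// Assign a performance tier from ops/s and detected complexity,
// using the thresholds from the tier table above.

type Complexity = 'O(1)' | 'O(n)'
type Tier = 'blazing' | 'fast' | 'good'

const thresholds: Record<Complexity, { blazing: number, fast: number }> = {
  'O(1)': { blazing: 100_000, fast: 10_000 },
  'O(n)': { blazing: 10_000, fast: 1_000 },
}

function tierFor (opsPerSecond: number, complexity: Complexity): Tier {
  const t = thresholds[complexity]
  if (opsPerSecond >= t.blazing) return 'blazing'
  if (opsPerSecond >= t.fast) return 'fast'
  return 'good'
}

// The file's tier comes from its fastest benchmark.
const results = [
  { name: 'Get item by id (single item)', hz: 1_234_567, complexity: 'O(1)' as const },
  { name: 'Iterate all keys (1,000 items)', hz: 42_000, complexity: 'O(n)' as const },
]
const fastest = results.reduce((a, b) => (b.hz > a.hz ? b : a))
const tier = tierFor(fastest.hz, fastest.complexity) // 'blazing'
```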

## Complexity Detection

Tiers adjust based on detected algorithmic complexity:

| Pattern in Benchmark Name | Complexity |
| --- | --- |
| "single item", "single query" | O(1) |
| "1,000 items", "all keys" | O(n) |
| "nested", "recursive" | O(n²) |
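A minimal sketch of name-based detection, assuming the patterns in the table above (the real detector may differ):

```typescript
// Map benchmark names to an assumed algorithmic complexity by pattern.
// Rules are checked most-specific first so "nested" wins over "1,000 items".

type Complexity = 'O(1)' | 'O(n)' | 'O(n^2)'

const rules: Array<[RegExp, Complexity]> = [
  [/nested|recursive/i, 'O(n^2)'],
  [/\d{1,3}(,\d{3})+ items|all keys/i, 'O(n)'],
  [/single (item|query)/i, 'O(1)'],
]

function detectComplexity (benchName: string): Complexity {
  for (const [pattern, complexity] of rules) {
    if (pattern.test(benchName)) return complexity
  }
  return 'O(1)' // default when no pattern matches
}
```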

## Reading Results

```
✓ createRegistry/index.bench.ts
  lookup operations
    ✓ Get item by id (1,000 items)     1,234,567 ops/s
    ✓ Get item by id (10,000 items)    1,198,432 ops/s
```

  • **ops/s**: operations per second (higher is better)

  • **Consistent across sizes**: O(1) complexity confirmed

  • **10x data, ~same speed**: good scaling behavior

## Dataset Sizes

Benchmarks test multiple sizes to reveal complexity:

| Size | Items | Purpose |
| --- | --- | --- |
| Medium | 1,000 | Baseline measurement |
| Large | 10,000 | Reveals O(n) vs O(1) |
| Small | 100 | Optional edge case |
| Stress | 100,000 | Optional stress test |

If the 10,000-item benchmark is roughly 10x slower than the 1,000-item one, the operation is O(n). If both run at roughly the same speed, it's O(1).
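That inference rule can be sketched as a small helper (illustrative only, not part of v0; the 2x noise tolerance is an assumption):

```typescript
// Infer complexity from two dataset sizes that differ by 10x:
// O(n) slows down ~10x with 10x the data, O(1) stays roughly flat.

function inferComplexity (
  opsAtSmall: number, // ops/s at 1,000 items
  opsAtLarge: number, // ops/s at 10,000 items
): 'O(1)' | 'O(n)' {
  const slowdown = opsAtSmall / opsAtLarge
  // Tolerate measurement noise: under ~2x slowdown reads as constant time.
  return slowdown < 2 ? 'O(1)' : 'O(n)'
}

// Lookup numbers from "Reading Results": nearly identical speed at both sizes.
const lookup = inferComplexity(1_234_567, 1_198_432) // 'O(1)'
// A hypothetical scan that slows ~10x with 10x the data.
const scan = inferComplexity(500_000, 52_000) // 'O(n)'
```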

## Running Benchmarks

```bash
# Run all benchmarks
pnpm test:bench

# Run specific file
pnpm vitest bench packages/0/src/composables/createRegistry/index.bench.ts

# Generate metrics report
pnpm metrics
```

## Interpreting for Your Use Case

## Contributing Benchmarks

New composables should include benchmarks if they:

  • Manage collections (registries, arrays, maps)

  • Perform search/filter operations

  • Have user-perceived latency (loading, transitions)

  • Are called frequently (every render, every keystroke)

See the `createRegistry` benchmarks for the canonical example.
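As a self-contained illustration of what a benchmark measures (the real v0 benchmarks use Vitest's `bench` API; `createSimpleRegistry` and `opsPerSecond` here are hypothetical stand-ins):

```typescript
// Hand-rolled ops/s measurement for a registry-style lookup.

function createSimpleRegistry<T> (items: Array<[string, T]>) {
  const map = new Map(items)
  return {
    get: (id: string) => map.get(id),
    size: () => map.size,
  }
}

// Count how many times `op` runs in a fixed window, then normalize to ops/s.
function opsPerSecond (op: () => void, durationMs = 50): number {
  let count = 0
  const end = performance.now() + durationMs
  while (performance.now() < end) {
    op()
    count++
  }
  return count / (durationMs / 1000)
}

// Shared fixture: lookups are read-only, so reuse is safe.
const registry = createSimpleRegistry(
  Array.from({ length: 1_000 }, (_, i) => [`item-${i}`, i] as [string, number]),
)
const hz = opsPerSecond(() => registry.get('item-500'))
// Map lookups are O(1), so hz should stay high regardless of registry size.
```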
