You are viewing Pre-Alpha documentation.

Benchmarks

v0 maintains performance benchmarks for all core composables. This page explains what gets benchmarked, how to interpret metrics, and what the performance tiers mean.


Advanced · Feb 18, 2026

Why Benchmark

Headless UI libraries must be fast—they’re foundational infrastructure. v0 benchmarks exist to:

  1. Catch regressions — CI fails if performance drops

  2. Guide optimization — Data-driven decisions, not guesses

  3. Set expectations — Users know what to expect at scale

  4. Validate minimal reactivity — Prove the tradeoffs are worth it

What Gets Benchmarked

Core Composables

| Composable | Why It's Benchmarked |
| --- | --- |
| createRegistry | Foundation for all collections; performance here affects everything |
| createSelection | Base for all selection patterns: select, toggle, mandatory, batch |
| createTokens | Design tokens can grow large; alias resolution must scale |
| createFilter | Search/filter on large datasets must remain responsive |
| createVirtual | Virtual scrolling is performance-critical by definition |
| useDate | Date operations are frequent in UIs |

Operation Categories

Each benchmark file covers multiple operation types:

| Category | Fixture Type | What It Measures |
| --- | --- | --- |
| Initialization | Fresh | Setup/creation cost |
| Lookup operations | Shared | Single-item access (O(1) expected) |
| Mutation operations | Fresh | Updates and modifications |
| Batch operations | Fresh | Bulk actions (onboard, offboard) |
| Computed access | Shared | Cached/derived value reads |
| Seek operations | Shared | Directional search |

Shared fixtures reuse the same data structure across iterations—safe for read-only operations.

Fresh fixtures create new data per iteration—required for mutations to get accurate measurements.
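The two fixture strategies can be sketched in plain TypeScript. The `makeFixture` factory below is a hypothetical stand-in for a v0 registry, not the actual API:

```typescript
// Hypothetical stand-in for a v0 registry: a Map keyed by id.
function makeFixture (size: number): Map<string, number> {
  const items = new Map<string, number>()
  for (let i = 0; i < size; i++) items.set(`item-${i}`, i)
  return items
}

// Shared fixture: built once, reused every iteration.
// Safe only because lookups never mutate the data.
const shared = makeFixture(1_000)
function benchLookup (): number | undefined {
  return shared.get('item-500')
}

// Fresh fixture: rebuilt inside the benchmarked function so every
// mutation starts from identical state. Reusing the shared fixture
// here would make the first iteration do real work and later
// iterations no-op against already-mutated data.
function benchMutation (): boolean {
  const fresh = makeFixture(1_000)
  return fresh.delete('item-500')
}
```

Because `benchMutation` rebuilds its data, it returns the same result on every iteration, which is exactly what makes its measurements comparable.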

Performance Tiers

| Tier | O(1) | O(n) | O(n²) |
| --- | --- | --- | --- |
| Blazing | ≥100,000 ops/s | ≥10,000 ops/s | ≥1,000 ops/s |
| Fast | ≥10,000 ops/s | ≥1,000 ops/s | ≥100 ops/s |
| Good | ≥1,000 ops/s | ≥100 ops/s | ≥10 ops/s |
| Slow | <1,000 ops/s | <100 ops/s | <10 ops/s |

Each benchmark is assigned a tier based on its throughput and detected complexity. Group tiers are the average of their individual benchmark tiers.

Complexity Detection

Tiers adjust based on detected algorithmic complexity:

| Pattern in Benchmark Name | Complexity |
| --- | --- |
| "single item", "single query" | O(1) |
| "1,000 items", "all keys" | O(n) |
| "nested", "recursive" | O(n²) |
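A name-based detector matching those patterns might look like the following. This is a hypothetical sketch, not the v0 implementation, and the fallback to O(1) is an assumption:

```typescript
type Complexity = 'O(1)' | 'O(n)' | 'O(n²)'

// Checked from highest complexity down, so a name mentioning both
// "nested" and "1,000 items" classifies as O(n²).
const rules: Array<[RegExp, Complexity]> = [
  [/nested|recursive/i, 'O(n²)'],
  [/\d{1,3}(?:,\d{3})+ items|all keys/i, 'O(n)'],
  [/single (item|query)/i, 'O(1)'],
]

function detectComplexity (benchName: string): Complexity {
  for (const [pattern, complexity] of rules) {
    if (pattern.test(benchName)) return complexity
  }
  return 'O(1)' // default when no pattern matches (assumption)
}
```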

Reading Results

bash
createRegistry/index.bench.ts
  lookup operations
    Get item by id (1,000 items)     1,234,567 ops/s
    Get item by id (10,000 items)    1,198,432 ops/s
  • ops/s — Operations per second (higher is better)

  • Consistent across sizes — O(1) complexity confirmed

  • 10x data, ~same speed — Good scaling behavior

Dataset Sizes

Benchmarks test multiple sizes to reveal complexity:

| Size | Items | Purpose |
| --- | --- | --- |
| Small | 100 | Optional edge case |
| Medium | 1,000 | Baseline measurement |
| Large | 10,000 | Reveals O(n) vs O(1) |
| Stress | 100,000 | Optional stress test |

If the 10,000-item benchmark is roughly 10x slower than the 1,000-item one, the operation is O(n). If both run at about the same speed, it's O(1).
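That heuristic, comparing throughput across a 10x size increase, can be sketched as a small classifier. The cutoff ratios here are illustrative assumptions, not v0's exact values:

```typescript
// Classify scaling from two measurements taken at a 10x size difference.
// Ratios near 1 suggest O(1); ratios near 10 suggest O(n).
function classifyScaling (opsAtSmall: number, opsAtLarge: number): 'O(1)' | 'O(n)' | 'unclear' {
  const slowdown = opsAtSmall / opsAtLarge
  if (slowdown < 2) return 'O(1)' // roughly the same speed
  if (slowdown > 5) return 'O(n)' // approaching the 10x slowdown of linear work
  return 'unclear' // ambiguous; rerun or add more dataset sizes
}
```

For the registry lookups shown earlier, 1,234,567 / 1,198,432 ≈ 1.03, consistent with O(1).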

Running Benchmarks

bash
# Run all benchmarks
pnpm test:bench

# Run specific file
pnpm vitest bench packages/0/src/composables/createRegistry/index.bench.ts

# Generate metrics report
pnpm metrics

Interpreting for Your Use Case

Explorer

Browse all benchmark results. Filter by composable, performance tier, or search for specific operations.

Contributing Benchmarks

New composables should include benchmarks if they:

  • Manage collections (registries, arrays, maps)

  • Perform search/filter operations

  • Have user-perceived latency (loading, transitions)

  • Are called frequently (every render, every keystroke)

See the createRegistry benchmarks for the canonical example.
