NVMe, Cooling, Overclocking, and Real-World Benchmarks
Raspberry Pi 5 Performance Guide
Raspberry Pi 5 performance is not determined by the CPU alone. Storage bandwidth, thermal headroom, power delivery stability, and software configuration each play a critical role in what the board can actually sustain under real workloads. A Pi 5 bottlenecked by a slow microSD card or starved of power will perform nowhere near its potential — even at stock clock speeds.
This guide consolidates everything that affects Pi 5 throughput into one structured reference. Topics covered include NVMe boot setup, RAID storage configuration, cooling optimization, safe overclocking and undervolting, GPU acceleration, AI inference workloads, networking and reverse proxy performance, and stability troubleshooting. Each section links to a dedicated deep-dive guide for readers who need step-by-step implementation details.
Key Concepts Defined
PCIe 2.0 interface: The Pi 5 exposes a single-lane PCIe 2.0 interface through its FPC connector, providing roughly 500 MB/s of theoretical bandwidth. For sequential transfers the link, not the drive, is the usual bottleneck — most consumer NVMe SSDs can outrun it — and random IOPS likewise remain constrained compared to desktop platforms with wider, faster links.
NVMe vs microSD: NVMe storage connects directly to the PCIe bus, delivering sequential reads 15–20x faster than microSD and random IOPS improvements of 20–30x. For any workload involving databases, containers, or frequent small file access, NVMe is transformative.
Thermal throttling: The BCM2712 SoC begins reducing clock speed at approximately 80°C to protect itself from damage. Under sustained CPU-intensive loads with passive or inadequate cooling, throttling can reduce effective performance by 20–40% compared to cooled operation.
Power delivery constraints: The Pi 5 requires a USB-C power supply capable of 5V/5A (25W). Insufficient power causes undervoltage events, which the firmware responds to by throttling clocks and, in severe cases, triggering unexpected reboots.
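The 25 W budget is easy to exhaust once accessories are attached. A trivial sanity check illustrates the math — the wattage figures below are illustrative assumptions, not measurements:

```shell
# Power-budget sketch: compare supply capacity against summed peak draws.
# Draw figures are illustrative assumptions, not measured values.
supply_w=25
board_w=12      # Pi 5 board + SoC under heavy CPU load (assumed)
nvme_w=8        # NVMe drive peak write draw (assumed)
usb_w=4         # misc USB peripherals (assumed)
total=$(( board_w + nvme_w + usb_w ))
echo "Total draw: ${total} W, headroom: $(( supply_w - total )) W"
```

With numbers like these, a 5V/3A (15 W) supply is clearly underwater before the first peripheral is plugged in.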
Real workload testing vs synthetic benchmarks: Tools like sysbench or fio report peak theoretical numbers that rarely reflect sustained behavior. Real-world tests — NAS file transfers, media transcoding, Docker container stress tests, and LLM inference — expose bottlenecks that synthetics miss.
Section 1 — Raspberry Pi 5 Hardware Architecture
What Changed from Raspberry Pi 4
The Raspberry Pi 5 represents the most significant architectural leap in the product line’s history. The BCM2712 application processor, built on a 16nm process node, pairs four Cortex-A76 cores clocked at 2.4 GHz with a VideoCore VII GPU. The A76 microarchitecture delivers roughly 2–3x the integer throughput of the Cortex-A72 in the Pi 4 — a combination of higher IPC and higher clocks — and its out-of-order execution pipeline handles memory-latency-bound workloads far more efficiently.
The most impactful infrastructure change is PCIe lane exposure. Previous Pi models routed PCIe internally to the USB 3.0 controller. The Pi 5 breaks that lane out to an FPC connector on the board edge, making it available to HAT+ accessories. This single decision enables native NVMe support, high-bandwidth camera interfaces, and future expansion options that were architecturally impossible on Pi 4.
Additional hardware changes include a dedicated power management IC, the RP1 south bridge chip that handles USB, Ethernet, and GPIO I/O, a real-time clock circuit, a hardware power button with clean shutdown support, and improved I/O bandwidth across all peripheral buses. The memory interface also benefits from wider bandwidth, reducing the bottleneck between the CPU cores and LPDDR4X RAM.
Internal links: /raspberry-pi-models-comparison/ | /raspberry-pi-5-nvme-boot-guide/
Section 2 — NVMe Boot and Storage Optimization
NVMe Boot Setup
Booting the Pi 5 from NVMe requires a HAT that routes the FPC PCIe connector to an M.2 slot (typically M-key, supporting 2230 and 2242 form factors at minimum, with many HATs supporting 2280). The most commonly used options include the official Raspberry Pi M.2 HAT+, the Pimoroni NVMe Base, and various third-party alternatives.
The setup process involves three primary steps. First, the bootloader must be updated to a version that includes NVMe boot support — typically via raspi-config or by flashing the EEPROM directly using rpi-eeprom-update. Second, the BOOT_ORDER variable in the EEPROM configuration must include NVMe (0x6) in the priority sequence. Third, the external PCIe interface must be enabled in /boot/firmware/config.txt with dtparam=pciex1; the link runs at Gen 2 speeds by default, and Gen 3 can optionally be forced with dtparam=pciex1_gen=3 (out of spec, but stable with many drives).
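Concretely, the three steps reduce to a pair of small edits. The fragment below is a sketch — BOOT_ORDER digits are consumed right to left, and the Gen 3 line is optional and out of spec:

```
# /boot/firmware/config.txt
dtparam=pciex1            # enable the external PCIe connector (Gen 2 default)
#dtparam=pciex1_gen=3     # optional: force Gen 3 signalling (unsupported, drive-dependent)

# EEPROM config (edit with: sudo rpi-eeprom-config --edit)
BOOT_ORDER=0xf416         # try NVMe (6) first, then SD (1), then USB (4); f = retry loop
```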
Deep-dive guide: /raspberry-pi-5-nvme-boot-guide/
Performance tuning after boot includes fstab mount option optimization. For ext4 partitions, adding noatime reduces unnecessary write operations to the filesystem journal. For workloads with heavy random I/O, the deadline or mq-deadline I/O scheduler typically outperforms the default. SSD over-provisioning and TRIM support via fstrim.timer are also worth enabling on any NVMe-booted Pi 5.
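As a sketch of those three tunings (device names and the udev rule path are illustrative; adapt them to your partition layout):

```
# /etc/fstab — root on NVMe mounted with noatime
/dev/nvme0n1p2  /  ext4  defaults,noatime  0  1

# /etc/udev/rules.d/60-ioscheduler.rules — select mq-deadline for NVMe devices
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="mq-deadline"

# Enable the weekly TRIM timer:
#   sudo systemctl enable --now fstrim.timer
```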
RAID and Redundancy
Dual-drive RAID configurations on the Pi 5 are possible when using HATs that expose two M.2 slots, or by combining an NVMe HAT with a USB-attached SSD. The mdadm utility handles software RAID on Raspberry Pi OS without additional packages.
RAID 1 (mirroring) is the most practical choice for home server and NAS use cases. Both drives maintain identical copies of all data, providing read performance improvements through parallel reads and full redundancy in the event of a single drive failure. RAID rebuild times on the Pi 5 depend heavily on drive speed and array size, but expect roughly 60–90 minutes per 100 GB for typical NVMe-to-NVMe rebuilds.
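The rebuild-time rule of thumb above can be turned into a quick planning estimate — pure arithmetic, using the midpoint of the 60–90 min/100 GB range:

```shell
# Estimate RAID 1 rebuild time from array size, assuming ~75 min per 100 GB
# (midpoint of the observed 60-90 minute range; actual speed is drive-dependent).
array_gb=500
min_per_100gb=75
est_minutes=$(( array_gb * min_per_100gb / 100 ))
echo "Estimated rebuild: ${est_minutes} min (~$(( est_minutes / 60 ))h$(( est_minutes % 60 ))m)"
```

For a 500 GB array this lands at roughly six hours, which is worth knowing before a degraded array is put back under production load.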
RAID 0 (striping) doubles sequential throughput in theory, but the PCIe 2.0 single-lane bandwidth cap limits practical gains beyond what a single fast NVMe drive already achieves. RAID 0 also eliminates redundancy entirely, making it unsuitable for any storage that matters. RAID 5 requires a minimum of three drives and is generally impractical with current HAT options.
Deep-dive guide: /raspberry-pi-5-nvme-raid-redundancy-setup/
Multiboot Strategies
Pi 5 supports booting from microSD, USB, or NVMe, and the boot order is configurable via EEPROM. This flexibility enables multiboot setups where different operating systems or configurations live on separate storage devices. A common pattern is keeping a lightweight recovery OS on microSD while running the primary workload OS from NVMe.
Deep-dive guide: /pi-5-multiboot-sd-usb-nvme/
Storage Comparison: microSD vs USB SSD vs NVMe
Metric microSD USB SSD NVMe (PCIe 2.0)
Seq. Read ~45 MB/s ~380 MB/s ~450 MB/s
Seq. Write ~20 MB/s ~320 MB/s ~420 MB/s
4K IOPS (R) ~1,500 ~12,000 ~45,000
4K IOPS (W) ~500 ~8,000 ~35,000
Latency ~15 ms ~1 ms ~0.1 ms
Boot Time ~28 s ~14 s ~8 s
Best Use Case Low-cost / dev General use Performance builds
Table 1. Approximate performance metrics for common Pi 5 storage options. NVMe values reflect PCIe Gen 2 x1 operation with a mid-range M.2 drive.
Section 3 — Cooling and Thermal Control
Thermal throttling is the most common and least obvious performance bottleneck on Pi 5 systems. A board that appears to be working normally may be silently reducing clock speeds in response to heat — delivering 30% less CPU throughput than a properly cooled equivalent. Sustained workloads like video transcoding, Docker builds, or LLM inference are especially vulnerable.
Cooling Benchmarks
Testing conducted across common cooling configurations reveals significant performance divergence under sustained load. In passive operation with no case or heatsink, the BCM2712 reaches throttling temperature (around 80°C) within approximately 60–90 seconds of a full-CPU stress test. An official active cooler or well-designed heatsink case can maintain temperatures in the 55–70°C range under the same load, eliminating throttling entirely.
Passive cases — typically aluminum enclosures that conduct heat from the SoC to the case body — perform adequately for light workloads but struggle with sustained loads above approximately 60% CPU utilization. The Pi 5 official active cooler, which includes a direct-contact heatsink with a small PWM fan, is among the most effective thermally per dollar and fits within the standard Pi footprint.
Sustained load behavior matters more than peak temperature. A cooling solution that keeps temps at 75°C during a 30-second burst but climbs to 85°C during a 10-minute build will still throttle. Benchmark cooling solutions using sustained workloads, not synthetic burst tests.
Deep-dive guide: /raspberry-pi-5-cooling-guide-fan-curves-cases-testing/
Fan Curves and Active Cooling
PWM fan control on the Pi 5 is handled through the pwm-fan device tree overlay and can be configured via /boot/firmware/config.txt. The fan curve maps temperature thresholds to PWM duty cycle percentages. The default firmware behavior enables the fan at low speed once the SoC crosses approximately 60°C and ramps to full speed approaching 80°C.
For quiet operation, a custom fan curve that holds the fan off below 50°C and ramps gradually between 55°C and 75°C strikes a good balance between noise and thermal control. The tradeoff is worth examining explicitly: at full fan speed, the official cooler produces around 25–30 dB(A) — audible but not intrusive. At minimum speed or off, it is silent. For headless server deployments where noise is irrelevant, setting a more aggressive curve prevents throttling with zero operational cost.
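A quiet-biased curve of that shape can be expressed through the firmware's fan dtparams in config.txt (temperatures in millidegrees Celsius, speeds 0–255). The thresholds below are an assumption to tune against your own thermal testing, not a recommendation:

```
# /boot/firmware/config.txt — custom fan curve (values illustrative)
dtparam=fan_temp0=50000,fan_temp0_hyst=5000,fan_temp0_speed=75
dtparam=fan_temp1=60000,fan_temp1_hyst=5000,fan_temp1_speed=125
dtparam=fan_temp2=67500,fan_temp2_hyst=5000,fan_temp2_speed=175
dtparam=fan_temp3=75000,fan_temp3_hyst=5000,fan_temp3_speed=255
```

The hysteresis values prevent the fan from oscillating on and off as the SoC hovers near a threshold.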
Preventing Random Reboots
Unexpected reboots under load are almost always caused by one of three issues: undervoltage from an inadequate power supply, thermal runaway from insufficient cooling, or filesystem corruption on microSD from unsafe shutdowns. The Pi firmware logs undervoltage events to the kernel ring buffer, accessible via dmesg | grep voltage. Repeated undervoltage warnings indicate the power supply cannot sustain the current draw.
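Beyond grepping dmesg, the firmware exposes a cumulative throttle bitmask. The decoder below is a sketch using a hardcoded sample value; on a live Pi you would populate val with $(vcgencmd get_throttled | cut -d= -f2):

```shell
# Decode the get_throttled bitmask (sample value hardcoded for illustration).
val=0x50005
flags=$(( val ))
# Print the message when the given bit is set in the mask.
check() { [ $(( flags >> $1 & 1 )) -eq 1 ] && echo "$2"; }
check 0  "Undervoltage detected (now)"
check 2  "Currently throttled"
check 3  "Soft temperature limit active"
check 16 "Undervoltage has occurred since boot"
check 18 "Throttling has occurred since boot"
```

The high bits (16–19) are sticky since boot, so a nonzero value after a long uptime pinpoints intermittent power or thermal problems that a momentary dmesg check would miss.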
The official Raspberry Pi 27W USB-C power supply is the recommended choice. Third-party supplies rated at 5V/3A are insufficient for Pi 5, particularly when combined with NVMe HATs and USB peripherals that draw current through the board. For unattended or always-on deployments, a UPS HAT with clean shutdown capability eliminates both power interruption reboots and filesystem corruption risk.
Related guides: /raspberry-pi-random-reboots-under-load/ | /prevent-sd-card-corruption-raspberry-pi/ | /ups-hat-raspberry-pi-safe-shutdown/
Section 4 — Safe Overclocking and Undervolting
Overclocking the Pi 5 can meaningfully improve single-threaded and multi-threaded workload performance, but it introduces correlated risks: increased thermal output, higher power draw, and the possibility of data corruption if system instability causes an unclean shutdown during a write operation. Approaching overclocking methodically — verifying stability at each step before pushing further — mitigates most risk.
Safe Overclock Settings
The primary overclocking parameters are arm_freq (CPU clock in MHz) and over_voltage_delta (a voltage offset in microvolts, replacing the older over_voltage integer parameter in recent firmware). Starting points that most Pi 5 boards handle without active cooling upgrades are arm_freq=2600 and over_voltage_delta=50000. Many boards are capable of 2800–3000 MHz with adequate cooling and careful voltage stepping.
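Those starting values in config.txt form (the undervolt line is shown commented for contrast — use one approach or the other, not both):

```
# /boot/firmware/config.txt — conservative overclock starting point
arm_freq=2600
over_voltage_delta=50000    # +50 mV, expressed in microvolts

# Undervolt alternative at stock 2.4 GHz (lower temps; verify stability):
#over_voltage_delta=-25000
```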
Deep-dive guide: /safe-overclocking-undervolting-raspberry-pi-5/
Stability validation requires more than a 30-second stress test. The recommended methodology involves running stress-ng --cpu 4 --timeout 300s for an initial thermal baseline, followed by stressapptest for memory integrity verification, and a workload-representative test specific to your use case (a Docker build, LLM inference session, or NAS transfer test). Any system that crashes, produces kernel errors, or generates corrupted output during these tests requires a lower frequency or a higher voltage setting.
Undervolting — reducing the core voltage below the default — trades some headroom for lower temperatures and power consumption. Conservative undervolting (over_voltage_delta=-25000 to -50000) typically has no stability impact and reduces operating temperature by 2–5°C, which in marginal cooling situations can be enough to prevent throttling.
Overclocking Risks
Data corruption: System crashes during write operations can corrupt filesystems. Mitigate with journaled filesystems and regular backups.
Thermal runaway: Higher clock speeds without improved cooling accelerate the path to throttling and reduce component longevity.
Power draw spikes: Transient CPU load spikes at elevated voltages can momentarily exceed power supply capacity, causing undervoltage events.
Warranty implications: Overclocking voids warranty coverage on Raspberry Pi hardware.
Always verify stability with sustained workload tests before deploying an overclocked configuration in production. Back up data before changing overclocking parameters.
Section 5 — GPU Acceleration and Desktop Performance
GPU Acceleration Setup
The VideoCore VII GPU in the Pi 5 supports OpenGL ES 3.1 and Vulkan 1.2, a substantial advancement over the VideoCore VI in the Pi 4. Full GPU acceleration on the desktop requires the KMS (Kernel Mode Setting) driver stack, which is enabled by default in current Raspberry Pi OS builds. The legacy FKMS framebuffer driver should not be used on Pi 5 as it bypasses proper GPU acceleration.
OpenGL driver configuration is handled through the v3d kernel module. Hardware-accelerated video decode on the Pi 5 covers HEVC only — the dedicated H.264 decode block was dropped in this generation, so H.264 is decoded in software on the A76 cores. Applications that use the V4L2 decode interfaces can leverage hardware HEVC decode without additional configuration in recent OS builds.
Deep-dive guide: /raspberrypi-5-gpu-acceleration-desktop/
Media Performance
H.264 decode at 1080p60 is well within Pi 5 capability, but it runs in software: the Pi 5 has no dedicated H.264 decode hardware, and the A76 cores handle it with modest CPU load. H.265 (HEVC) decode is hardware-accelerated, though 4K HEVC decode approaches the limits of the pipeline and may produce occasional frame drops depending on stream bitrate and container overhead.
Jellyfin with HEVC hardware decode enabled via V4L2 can serve multiple simultaneous 1080p streams without significant CPU load. Direct play scenarios — where the client receives the original stream without transcoding — place minimal demand on the Pi. Transcoding 4K to 1080p in software remains CPU-bound and will saturate all four A76 cores at sustained operation.
Network throughput is a parallel constraint. The Pi 5’s Gigabit Ethernet controller, connected via the RP1 south bridge, delivers approximately 940 Mb/s in practice. Multi-client NAS scenarios should account for this ceiling when calculating whether the Pi can serve the required aggregate bandwidth.
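That ~940 Mb/s ceiling translates directly into a client budget. A rough check, with an assumed per-stream bitrate (not a measurement):

```shell
# How many direct-play streams fit under the practical Ethernet ceiling?
ceiling_mbps=940
stream_mbps=12    # assumed high-quality 1080p H.264 bitrate (illustrative)
clients=$(( ceiling_mbps / stream_mbps ))
echo "~${clients} concurrent streams before the NIC saturates"
```

In practice storage and CPU limits bite long before the NIC does at these stream counts; the point is that the network ceiling is rarely the first bottleneck for 1080p direct play.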
Related guides: /jellyfin-on-raspberry-pi-5-hardware-decode-network-shares-setup/ | /setting-up-a-plex-media-server-on-a-raspberry-pi/
Section 6 — AI and Edge Computing Workloads
The Pi 5 is a capable edge inference platform for lightweight AI workloads when models are appropriately quantized for the hardware constraints. The board’s 8 GB RAM ceiling (on the 8 GB variant) limits the maximum model size that can be loaded into memory, but many useful models — including 1B–3B parameter LLMs and standard vision classification models — operate within this boundary.
Running Local LLMs
llama.cpp is the primary tool for running large language models on Pi 5. It supports CPU-only inference with NEON SIMD optimization for ARM and handles GGUF-format quantized models natively. Q4_K_M quantization represents a practical sweet spot: models quantized at this level are approximately 4 bits per parameter, reducing a 7B model to around 4–5 GB in RAM while maintaining most of the original model quality.
Inference performance on Pi 5 at Q4_K_M for a 1B parameter model is approximately 12–18 tokens per second depending on context length and model architecture. 3B models run at 6–10 tokens per second. 7B models are loadable on the 8 GB Pi 5, but inference speed drops to 2–4 tokens per second — workable for local assistant use cases with high latency tolerance, not for interactive applications. Inference on Pi 5 is CPU-only in practice: the VideoCore VII lacks the general-purpose compute capability needed for efficient matrix multiplication at LLM scale.
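The RAM figures above follow from bits-per-parameter arithmetic. A back-of-envelope check — the 4.5 effective bits/parameter figure for Q4_K_M is an approximation, and the result excludes KV cache and runtime overhead:

```shell
# Weights-only RAM estimate for a Q4_K_M-quantized model.
params_billions=7
bits_per_param=4.5   # approximate effective rate for Q4_K_M (assumed)
awk -v p="$params_billions" -v b="$bits_per_param" \
    'BEGIN { printf "~%.1f GB of weights\n", p * b / 8 }'
```

Adding KV cache and runtime buffers pushes the 7B total toward the 4–5 GB cited above, which is why 7B is the practical ceiling on the 8 GB board.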
Deep-dive guide: /raspberry-pi-5-llama-cpp-local-llm-install-setup/
Coral TPU Acceleration
The Google Coral USB Accelerator provides a dedicated Edge TPU capable of 4 TOPS (tera-operations per second) for TensorFlow Lite models compiled for the Edge TPU runtime. Connected via USB 3.0 on the Pi 5, the Coral accelerator offloads inference entirely from the CPU for compatible models, enabling real-time inference at 30+ fps for image classification and object detection tasks that would otherwise saturate CPU resources.
Practical use cases include computer vision pipelines for security cameras, local voice command processing, and anomaly detection in sensor data. The key constraint is that models must be compiled specifically for the Edge TPU using Google’s compiler toolchain — standard TensorFlow Lite models run on the CPU fallback path, not the TPU hardware.
Deep-dive guide: /coral-tpu-raspberry-pi-5-setup/
Section 7 — Networking and Reverse Proxy Performance
High-performance Pi 5 deployments serving external traffic or running containerized services require attention to networking overhead. TLS termination, reverse proxy CPU cost, and VPN throughput each impose measurable performance taxes that compound in multi-service setups.
TLS overhead on Pi 5 is modest for typical HTTPS traffic. x86-style AES-NI does not exist on ARM, but the BCM2712’s cores implement the ARMv8 Cryptography Extensions, providing hardware-accelerated AES-GCM throughput. Practical HTTPS reverse proxy throughput (measured as requests per second for a 10 KB payload) is in the range of 3,000–5,000 req/s with TLS and around 8,000–12,000 req/s without, depending on connection keep-alive behavior.
Reverse Proxy Options
Traefik running in Docker is a popular choice for Pi 5 homelab setups. Its automatic TLS certificate management via Let’s Encrypt and native Docker label-based routing eliminate most manual configuration overhead. CPU cost per proxied request is slightly higher than Nginx or Caddy due to Go runtime overhead, but negligible for typical homelab traffic volumes. Wildcard certificates via DNS challenge eliminate the need for HTTP-01 challenge routing and work cleanly with Pi 5 deployments that may not be directly accessible on port 80.
Caddy is a strong alternative for simpler configurations. Its automatic HTTPS with minimal configuration and native HTTP/3 support make it well-suited to Pi 5 deployments. CPU cost is comparable to Traefik for most workloads.
VPN Throughput
WireGuard on Pi 5 consistently achieves 300–450 Mb/s throughput in VPN tunnel scenarios, well within Gigabit Ethernet limits for typical use cases. The ChaCha20-Poly1305 cipher used by WireGuard maps efficiently to ARM NEON instructions, keeping CPU utilization below 30% at 300 Mb/s transfer rates. Tailscale, which wraps WireGuard with automatic key management and NAT traversal, achieves similar throughput with somewhat higher CPU overhead due to coordination layer processing.
Related guides: /wireguard-vpn-raspberry-pi-5/ | /traefik-on-raspberry-pi-with-docker-and-wildcard-certs/ | /caddy-reverse-proxy-raspberry-pi/ | /tailscale-raspberry-pi-secure-remote-access/
Section 8 — Real-World Benchmark Scenarios
Synthetic tools like sysbench CPU benchmarks and fio sequential read tests establish theoretical peaks but fail to expose the bottlenecks that matter in deployed systems. The following benchmark scenarios reflect actual workloads and reveal the interaction effects between storage, CPU, memory bandwidth, and thermal management that synthetic tools mask.
NAS File Transfer Test
Measure sustained SMB or NFS throughput using a large file transfer from a wired client. Target: a 10 GB file transfer via Gigabit Ethernet from a Linux client using dd over SMB or iperf3 for pure network throughput. An NVMe-backed Samba server on Pi 5 with Gigabit Ethernet should sustain 100–110 MB/s. microSD-backed setups will saturate the storage layer at 20–40 MB/s regardless of network capacity. Monitor CPU utilization, temperature, and NVMe I/O wait during the transfer.
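Expected wall-clock times for that 10 GB transfer, at the storage-limited rates quoted above, make the pass/fail threshold concrete:

```shell
# Expected duration of a 10 GB transfer at NVMe-backed vs microSD-backed rates.
size_mb=10240
for rate in 110 25; do   # MB/s: NVMe-backed target, microSD-backed worst case
  echo "${rate} MB/s -> $(( size_mb / rate )) s"
done
```

A transfer that takes minutes instead of about a minute and a half points at the storage layer, not the network.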
Media Transcode Test
Use ffmpeg to transcode a 1080p H.264 source to H.265. Measure: frames per second, CPU temperature over a 5-minute encode, and whether thermal throttling occurs. The transcode is CPU-bound on the Pi 5 — both the software H.264 decode and the x265 encode run on the A76 cores — and will expose cooling inadequacies within 2–3 minutes. Software transcoding to H.265 represents the typical Jellyfin or Plex transcode scenario on this hardware.
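A representative invocation is sketched below — the filenames and quality settings are placeholders, and watching vcgencmd measure_temp in a second terminal during the encode shows whether throttling kicks in:

```shell
# CPU-bound 1080p H.264 -> H.265 transcode; saturates all four A76 cores.
ffmpeg -hide_banner -i input_1080p_h264.mp4 \
       -c:v libx265 -preset fast -crf 26 \
       -c:a copy output_h265.mkv
```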
Docker Container Stress Test
Run a multi-container workload representing a realistic homelab stack: a database container (PostgreSQL or MariaDB), a web application container, and a reverse proxy container under simulated load using wrk or k6. This test exposes storage I/O contention between containers, memory pressure as container count increases, and CPU scheduling overhead. Compare results on microSD vs NVMe — the difference in database query latency is typically 5–15x in favor of NVMe.
LLM Inference Latency Test
Using llama.cpp with a 1B parameter Q4_K_M model, measure tokens per second at various context lengths: 512, 1024, 2048 tokens. Longer context windows increase memory bandwidth demands and reduce inference speed. This test establishes whether the Pi 5’s memory bandwidth (roughly 17 GB/s theoretical for its 32-bit LPDDR4X-4267 interface) is the binding constraint at larger context sizes.
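The bandwidth-bound nature of token generation can be sanity-checked with a simple ceiling model: each generated token must stream roughly the entire weight set from RAM, so tokens/s cannot exceed bandwidth divided by model size. Both figures below are illustrative assumptions:

```shell
# Bandwidth-bound decode ceiling: tok/s <= usable_bandwidth / model_bytes.
usable_gbps=17     # assumed usable LPDDR4X bandwidth in GB/s (illustrative)
model_gb=0.7       # ~1B-parameter model at Q4_K_M (approximate size)
awk -v bw="$usable_gbps" -v m="$model_gb" \
    'BEGIN { printf "upper bound: ~%.0f tok/s\n", bw / m }'
```

A ceiling in the mid-20s is consistent with the measured 12–18 tokens per second, since real decode never achieves perfect bandwidth utilization.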
RAID Rebuild Speed
Simulate a drive failure in a RAID 1 array and measure rebuild time for a 64 GB array. During rebuild, record the sustained I/O rate and measure whether the rebuild causes noticeable performance degradation for concurrent workloads. Pi 5 RAID 1 rebuild with dual NVMe drives typically completes at 150–250 MB/s with moderate system load impact.
Deep-dive guide: /raspberry-pi-performance-tuning-real-workloads/
Section 9 — Stability and Reliability Checklist
Use this checklist before deploying a Pi 5 in any production or always-on role. Each item addresses a known failure mode.
NVMe firmware updated. Check the drive manufacturer’s firmware release notes. Some NVMe drives have known compatibility issues with PCIe Gen 2 single-lane interfaces that firmware updates resolve.
Active cooling installed. Confirm with a sustained 5-minute stress test that CPU temperature stays below 75°C and no throttling events appear in dmesg.
Official Raspberry Pi 27W power supply in use. Third-party 5V/3A supplies are insufficient. Verify with dmesg | grep voltage — no undervoltage warnings should appear under full load.
Stable overclock verified. If overclocking, complete a minimum 10-minute stress-ng CPU test and stressapptest memory verification before putting the system into production service.
Backup system configured. NVMe drives can fail. Automated backups to an offsite location or secondary device should be in place before relying on the Pi 5 for important data.
Filesystem TRIM enabled (NVMe). Enable fstrim.timer to maintain NVMe performance over time as drive capacity is consumed and freed.
UPS HAT installed (optional but recommended for critical data). Provides clean shutdown on power loss, eliminating the primary cause of filesystem corruption on Pi-based servers.
Backup guide: /raspberry-pi-backup-guide-best-methods-and-tools/
Frequently Asked Questions
The following questions address the most common Raspberry Pi 5 performance topics. Each answer reflects testing with current Raspberry Pi OS (Bookworm) builds and production hardware.
Is NVMe worth it on Raspberry Pi 5?
For the majority of real-world use cases, yes. NVMe storage delivers 15–20x higher sequential throughput and 20–30x better random IOPS compared to microSD. The impact is most pronounced in database-backed applications, Docker environments, home media servers, and any workload involving frequent small file operations. For a Pi 5 used purely as a lightweight desktop or occasional script runner, microSD remains adequate. For servers, NAS setups, or development environments, NVMe is the single highest-impact hardware upgrade available.
How hot does Raspberry Pi 5 run?
Under idle conditions with no heatsink, the BCM2712 stabilizes in the 45–55°C range depending on ambient temperature. Under sustained full CPU load without cooling, it reaches the 80°C throttling threshold within 60–90 seconds. With the official active cooler installed, sustained load temperatures typically stay in the 55–70°C range. Passive aluminum cases without fans plateau at approximately 70–78°C under sustained load — often just at the edge of throttling. The Pi 5 runs meaningfully hotter than the Pi 4 at comparable workloads due to the higher-performance CPU cores.
What is a safe overclock for Pi 5?
Most Pi 5 boards operate stably at 2600–2800 MHz with the official active cooler and a modest voltage offset (over_voltage_delta=50000). Some boards achieve 3000 MHz with more aggressive cooling. The key safety requirement is stability verification: run stress-ng --cpu 4 --timeout 600s and stressapptest -s 300 after any frequency change before trusting the system with real workloads. An overclock that passes a 30-second test may still fail under sustained production loads. Never increase frequency without also confirming that your cooling solution keeps temps below 75°C under load.
Can Raspberry Pi 5 run AI models?
Yes, within defined constraints. The Pi 5 runs quantized LLMs via llama.cpp effectively for models up to approximately 3B parameters (Q4_K_M quantization) at useful inference speeds of 6–18 tokens per second. The 8 GB RAM variant can technically load 7B models but at 2–4 tokens per second, which is too slow for interactive use. For vision AI and edge inference, the Google Coral USB Accelerator dramatically outperforms CPU-only operation for compatible TensorFlow Lite models, enabling real-time 30 fps inference for classification and detection tasks.
Does RAID improve Pi 5 performance?
It depends on the RAID level and workload. RAID 1 (mirroring) can improve read performance through parallel reads across two drives, but the Pi 5’s PCIe 2.0 single-lane bandwidth cap limits how much single-stream reads benefit. The primary value of RAID 1 on Pi 5 is redundancy, not speed. RAID 0 (striping) doubles theoretical write throughput, but since a single modern NVMe drive already saturates PCIe Gen 2 bandwidth for sequential operations, striping provides minimal real-world benefit for most workloads. For random IOPS-heavy workloads like databases, RAID 0 can provide measurable improvement. RAID 0 eliminates redundancy entirely, so it should only be used where data loss is acceptable.
About This Guide
This pillar page is maintained as a structured reference for Raspberry Pi 5 performance optimization. Each section links to dedicated deep-dive guides for step-by-step implementation instructions. Primary keyword: raspberry pi 5 performance. Related topics: raspberry pi 5 nvme boot, raspberry pi 5 cooling, raspberry pi 5 overclocking, raspberry pi 5 gpu acceleration, raspberry pi 5 benchmarks.
