Geekbench AI Corporate 1.1.0
[Image: Fk2-CAQc-WXk-Nxxbw9-WX1r0-Sv-Bl-Es-BVro4.png]
Languages: English
File Size: 541.64 MB


Geekbench AI is a cross-platform AI benchmark that uses real-world machine learning tasks to evaluate AI workload performance. Geekbench AI measures your CPU, GPU, and NPU to determine whether your device is ready for today's and tomorrow's cutting-edge machine learning applications.

Benchmark real-world AI performance with confidence

Real-World AI Performance
Geekbench AI runs ten AI workloads, each with three different data types, giving you a multidimensional picture of on-device AI performance. Using large datasets that mimic real-world AI use cases, both developers and consumers can measure on-device AI performance in just a few minutes with Single Precision, Half Precision, and Quantized scores.
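
The three scores map to three inference data types: Single Precision is float32, Half Precision is float16, and Quantized typically means 8-bit integers. As a rough illustration (not how Geekbench AI actually prepares its models), here is a minimal Python/NumPy sketch of the same weight tensor in all three representations:

[code]
# Minimal sketch of the three data types behind the scores: Single
# Precision (float32), Half Precision (float16), and a symmetric int8
# quantization. Illustration only; not Geekbench AI's actual scheme.
import numpy as np

weights_fp32 = np.random.randn(4, 4).astype(np.float32)   # Single Precision
weights_fp16 = weights_fp32.astype(np.float16)            # Half Precision

# Quantized: map float values onto int8 with a per-tensor scale.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)
dequantized = weights_int8.astype(np.float32) * scale      # back to float

print("max quantization error:", np.abs(weights_fp32 - dequantized).max())
[/code]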

Measure AI on CPU, GPU, or NPU
Geekbench AI breaks down AI performance across the hardware stack -- select the GPU, CPU, or your device's dedicated NPU for testing. You can also choose from available AI frameworks on your device, like Core ML or QNN. Developers can determine the best combination of frameworks and models for particular workloads, and consumers can easily quantify the impact of dedicated AI hardware.
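
Geekbench AI exposes these choices in its own interface, but developers who want to reproduce the idea in their own tooling can do something similar with the frameworks it supports. A minimal sketch with ONNX Runtime in Python (the model file and input shape below are made-up placeholders, not anything shipped with the benchmark):

[code]
# Sketch: timing the same ONNX model on different execution providers.
# "DmlExecutionProvider" is the DirectML (GPU) backend on Windows; the
# model path and input shape are hypothetical placeholders.
import time
import numpy as np
import onnxruntime as ort

MODEL_PATH = "image_classifier.onnx"   # hypothetical model file

def time_provider(provider: str) -> float:
    """Time one inference on the given execution provider."""
    session = ort.InferenceSession(MODEL_PATH, providers=[provider])
    input_name = session.get_inputs()[0].name
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    start = time.perf_counter()
    session.run(None, {input_name: x})
    return time.perf_counter() - start

print("available providers:", ort.get_available_providers())
for provider in ("CPUExecutionProvider", "DmlExecutionProvider"):
    if provider in ort.get_available_providers():
        print(provider, "%.1f ms" % (time_provider(provider) * 1000))
[/code]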

Compare AI Performance Across Platforms
Geekbench AI runs identical workloads on Android, iOS, Windows, macOS, and Linux. Our benchmark is built for hardware across the capability spectrum, whether you're testing a smartphone with an ultra-low-power NPU or a dedicated workstation with a kilowatt-plus of dedicated AI compute. Instantly compare results using our Geekbench AI results browser.

Release Notes
Primate Labs is pleased to announce the availability of Geekbench AI 1.1, the latest update to our cross-platform benchmark of AI inference workloads.

This release includes:
- Framework and runtime upgrades - This release upgrades ONNX Runtime to 1.19 on Windows, improving Half Precision support on AMD and Intel CPUs, and updates the Core ML configuration, improving performance on iOS 18 and macOS 15. OpenVINO now falls back to the CPU in Single Precision workloads if the device does not support the required data types, rather than converting to a different data type at runtime. ArmNN has been upgraded to v24.08, and Samsung's ENN to v3.1.8.
- Better results on some Android systems - A bug in the Play Store deployment limited the hardware Geekbench AI could access on some Android devices. Extracting the bundled libraries to the file system mitigates the issue and improves performance.
- Score validation improvements - The benchmark now randomly re-validates extra iterations in a small percentage of cases, making results more robust. Validation is also parallelized in most workloads, reducing the time spent verifying results, so the benchmark should take less time to run (see the validation sketch after this list).
- Requantize all the things - LOTS of models for LOTS of supported frameworks have been requantized, improving output quality, accuracy, and performance across workloads. Expect to see score increases.
- Smaller tweaks - Geekbench AI now uses per-image normalization ranges in Depth Estimation, improving accuracy calculations (see the depth-normalization sketch after this list), and our intern adjusted some dates in the source code - give them a big hand, please.
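
On the score-validation change, the pattern can be pictured as re-running a small random fraction of iterations and checking the outputs in parallel. A minimal sketch under that reading - the re-check rate, the stand-in workload, and the tolerance below are illustrative assumptions, not Geekbench AI's actual values:

[code]
# Illustrative sketch of randomized re-validation with parallel checks.
# The re-check probability, workload, and tolerance are assumptions.
import random
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def run_iteration(seed: int) -> np.ndarray:
    """Stand-in for one benchmark iteration producing an output tensor."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(8)

def validate(seed: int, output: np.ndarray, recheck_prob: float = 0.05) -> bool:
    """Check a recorded output against a reference re-run; occasionally
    re-run an extra iteration so bad results are harder to slip past."""
    reference = run_iteration(seed)
    ok = np.allclose(output, reference, atol=1e-5)
    if ok and random.random() < recheck_prob:
        ok = np.allclose(run_iteration(seed), reference, atol=1e-5)
    return ok

outputs = {seed: run_iteration(seed) for seed in range(100)}
with ThreadPoolExecutor() as pool:   # validation runs in parallel
    results = list(pool.map(lambda s: validate(s, outputs[s]), outputs))
print("all iterations validated:", all(results))
[/code]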
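
And on the Depth Estimation tweak, "per-image normalization ranges" can be read as rescaling each depth map by its own minimum and maximum before comparing it with the reference. A small NumPy sketch under that assumption (the shapes and the mean-absolute-error metric are illustrative, not the benchmark's actual accuracy formula):

[code]
# Sketch of per-image normalization for a depth-estimation accuracy
# check: each map is rescaled by its own range rather than a global one.
# Shapes and metric are assumptions for illustration.
import numpy as np

def normalize_per_image(depth: np.ndarray) -> np.ndarray:
    """Rescale a single depth map into [0, 1] using its own min/max."""
    lo, hi = depth.min(), depth.max()
    return (depth - lo) / (hi - lo + 1e-8)

predicted = np.random.rand(2, 64, 64) * 10.0   # fake depth maps
reference = np.random.rand(2, 64, 64) * 10.0

errors = [
    np.abs(normalize_per_image(p) - normalize_per_image(r)).mean()
    for p, r in zip(predicted, reference)
]
print("per-image mean absolute error:", errors)
[/code]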

System Requirements
- Windows 10 (64-bit) or later
- 8GB of RAM

Processor Requirements
- AMD, ARM, or Intel processor

Homepage
