There was substantial outrage when it was discovered, as these attempts at benchmark cheating ran counter to the very point of the benchmarks themselves. Most benchmarks aren't there to tell you the theoretical maximum performance of a phone in lab conditions that aren't reproducible in day-to-day use; rather, they are there to give you a point of reference for real-world comparisons between phones. After a bit of a public berating (and some private conversations) from technology publications, industry leaders, and the general public, most manufacturers got the message that benchmark cheating was simply not acceptable, and stopped as a result.

Most of the few that didn't stop at that point stopped soon after, as there were substantial changes made to how many benchmarks run, in an attempt to discourage benchmark cheating by reducing the benefit from it. Many benchmarks were made longer so that the thermal throttling from maximizing clock speeds would become immediately apparent. When we interviewed John Poole, the creator of Geekbench, the topic of benchmark cheating and what companies like Primate Labs can do to prevent it came up. Primate Labs in particular made Geekbench 4 quite a bit longer than Geekbench 3, in part to reduce the effects of benchmark cheating - reducing the benefits in order to ensure that the development costs of benchmark cheating aren't worth it.

"The problem is that once we have these large runtimes, if you start gaming things by ramping up your clock speeds or disabling governors or something like that, you're going to start putting actual real danger in the phone. You might still get a couple percent, but is it really worth it?" - John Poole

Unfortunately, we must report that some OEMs have started cheating again, meaning we should be on the lookout once more. Thankfully, manufacturers have become increasingly responsive to issues like this, and with the right attention being drawn to it, this can be fixed quickly. Even so, it is a bit shocking to see manufacturers implementing benchmark cheating in light of how bad the backlash was the last time it was attempted (with some benchmarks completely excluding cheating devices from their performance lists). With that backlash contrasting against how tiny the performance gains from benchmark cheating typically are (most of the attempts resulted in less than a 5% score increase last time), we had truly hoped that this would all be behind us.

The timing of this attempt is especially inopportune, as a couple of months ago benchmark cheating stopped being a purely enthusiast concern and entered the public sphere when Volkswagen and Fiat Chrysler were both caught cheating on their emissions benchmarks. Both companies implemented software to detect when their diesel cars were being put through emissions testing, and had them switch into a low-emissions mode that saw their fuel economy drop, in an attempt to compete with gasoline cars in fuel efficiency while still staying within regulatory limits for emissions tests. So far the scandal has resulted in billions in fines, tens of billions in recall costs, and charges being laid - certainly not the kind of retribution OEMs would ever see for inflating their benchmark scores, which are purely for user comparisons and are not used for measuring any regulatory requirements.

While investigating how Qualcomm achieves faster app opening speeds on the then-new Qualcomm Snapdragon 821, we noticed something strange on the OnePlus 3T that we could not reproduce on the Xiaomi Mi Note 2 or the Google Pixel XL, among other Snapdragon 821 devices. Our editor-in-chief, Mario Serrafero, was using Qualcomm Trepn and the Snapdragon Performance Visualizer to monitor how Qualcomm "boosts" the CPU clock speed when opening apps, and noticed that certain apps on the OnePlus 3T were not falling back down to their normal idling speeds after opening. As a general rule of thumb, we avoid testing benchmarks with performance monitoring tools open whenever possible, due to the additional performance overhead that they bring (particularly on non-Snapdragon devices, where there are no official desktop tools); in this instance, however, they helped us notice some strange behavior that we likely would have missed otherwise.

When entering certain benchmarking apps, the OnePlus 3T's cores would stay above 0.98 GHz for the little cores and 1.29 GHz for the big cores, even when the CPU load dropped to 0%. This is quite strange, as normally both sets of cores drop down to 0.31 GHz on the OnePlus 3T when there is no load. Upon first seeing this, we were worried that OnePlus' CPU scaling was simply set a bit strangely, but upon further testing we came to the conclusion that OnePlus must be targeting specific applications.
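The app-targeted behavior we observed can be sketched in a few lines. To be clear, this is a hypothetical illustration and not OnePlus' actual code: the package names, function name, and structure are our own inventions, and only the clock-speed figures come from our measurements.

```python
# Hypothetical sketch of app-targeted frequency floors - NOT OnePlus'
# actual implementation. The clock figures match what we observed on the
# OnePlus 3T; the package list and names are invented for illustration.

IDLE_FLOOR_GHZ = {"little": 0.31, "big": 0.31}     # normal no-load floor
BOOSTED_FLOOR_GHZ = {"little": 0.98, "big": 1.29}  # floor seen in benchmark apps

# Example package names only; the real set of targeted apps is unknown.
TARGETED_PACKAGES = {
    "com.primatelabs.geekbench",
    "com.antutu.ABenchMark",
}

def min_freq_floor(foreground_package: str) -> dict:
    """Pick the per-cluster minimum frequency floor for the foreground app."""
    if foreground_package in TARGETED_PACKAGES:
        # Cores never fall back to idle speeds while this app is open.
        return BOOSTED_FLOOR_GHZ
    return IDLE_FLOOR_GHZ

print(min_freq_floor("com.primatelabs.geekbench")["big"])  # 1.29
print(min_freq_floor("com.example.notes")["little"])       # 0.31
```

The telltale signature of a scheme like this is exactly what Trepn showed us: the frequency floor depends on *which* app is in the foreground, not on actual CPU load.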
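Why the longer runtimes discussed above blunt this kind of cheating can be shown with a toy thermal model. Every number below (sustainable clock, boost clock, time to throttle) is an assumption of ours rather than a measurement; the point is only the shape of the result: a clock held above the sustainable speed eventually forces thermal throttling, so the inflated speed only pays off on short runs.

```python
# Toy thermal model with assumed (not measured) numbers, illustrating why
# longer benchmarks like Geekbench 4 reduce the payoff of clock gaming.

SUSTAINABLE_GHZ = 1.6   # assumed clock the chip can hold indefinitely
BOOSTED_GHZ = 2.3       # assumed artificially raised clock
THROTTLED_GHZ = 1.2     # assumed clock while shedding accumulated heat
THROTTLE_AFTER_S = 60   # assumed seconds of boost before overheating

def average_clock(duration_s: int, boosted: bool) -> float:
    """Mean clock speed in GHz over a benchmark run of the given length."""
    if not boosted:
        return SUSTAINABLE_GHZ
    boosted_time = min(duration_s, THROTTLE_AFTER_S)
    throttled_time = max(0, duration_s - THROTTLE_AFTER_S)
    return (boosted_time * BOOSTED_GHZ
            + throttled_time * THROTTLED_GHZ) / duration_s

# A 30-second run rewards the cheat; a 10-minute run erases it.
short_gain = average_clock(30, True) / average_clock(30, False)
long_gain = average_clock(600, True) / average_clock(600, False)
```

With these assumed numbers, the boosted device wins a 30-second run by roughly 44%, but actually *loses* a 10-minute run, since it spends 90% of the time throttled below its sustainable speed. That is Poole's point: against long runtimes, gaming the clocks buys a couple of percent at best, and can easily backfire.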