During the mid-1990s, the US Environmental Protection Agency (USEPA) was experimenting with a wonky, clunky little device (they called it the ROVER) that could perform emissions testing on vehicles – out on the road.
At that time, all vehicle emissions testing was done in a laboratory, on a chassis dynamometer, with the vehicle securely locked down. A driver would goose and gun the engine in order to align the cross-hairs with the speed trace of the test cycle. After a certain predictable and repeatable “test procedure” was finished, the next vehicle in the queue could take its turn on the dyno. A simple and inexpensive way to demonstrate compliance.
Gaming a system, therefore, requires repeatability and a lack of randomness in order to succeed. Unfortunately, most governmental and regulatory entities require just that: predictability, repeatability, steadiness, transferability, standardization – order. Historically, this has been a good strategy, and one that has been economically viable.
One way of gaming a system at cards is Card Counting: assign a positive, negative, or zero value to each card rank. When a card of that rank is dealt, the “card count” is adjusted by the assigned counting value.
When a low card is dealt, the count goes up – and so does the proportion of high cards remaining in the shoe – whereas a dealt high card brings the count down. Card Counting’s goal is to assign point values that approximate each card’s “Effect of Removal” (EOR): the impact its removal has on the house advantage. Tracking this lets the Card Counter calculate their strategy; one mathematical strategy often employed with Card Counting is called the “Kelly criterion”.
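The counting scheme above can be sketched in a few lines. This uses the common Hi-Lo point values (+1 for low cards, 0 for middle cards, −1 for high cards) as an illustrative assumption – the essay does not specify a particular scheme, and real EOR-based counts weight ranks differently.

```python
# Hi-Lo point values: an illustrative assumption, not a specific EOR table.
HI_LO = {r: +1 for r in ["2", "3", "4", "5", "6"]}
HI_LO.update({r: 0 for r in ["7", "8", "9"]})
HI_LO.update({r: -1 for r in ["10", "J", "Q", "K", "A"]})

def running_count(dealt_ranks):
    """Sum the point values of every card rank dealt so far."""
    return sum(HI_LO[rank] for rank in dealt_ranks)

# Low cards leaving the shoe push the count up (good for the player);
# high cards leaving the shoe push it down.
print(running_count(["2", "5", "K", "9", "3"]))  # +1 +1 -1 +0 +1 = 2
```

A positive running count signals that the shoe is now rich in high cards, which is when the Card Counter presses the advantage.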
The “Kelly criterion” encourages scaling the bet in proportion to the Card Counter’s advantage. The higher the count, the more is bet on each hand – which leverages the Card Counter’s lead.
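The proportional-betting rule can be made concrete with the standard Kelly formula for a simple bet, f* = (bp − q) / b, where p is the probability of winning, q = 1 − p, and b is the net odds received on a win. The function below is a minimal sketch of that formula; the example numbers are assumptions for illustration.

```python
def kelly_fraction(p, b):
    """Kelly bet fraction f* = (b*p - q) / b, where p is the win
    probability, q = 1 - p, and b is the net odds paid on a win.
    Returns 0 when the player has no edge (never bet at a disadvantage)."""
    q = 1.0 - p
    f = (b * p - q) / b
    return max(f, 0.0)

# Even-money bet (b = 1) with an assumed 52% win probability:
# f* = (0.52 - 0.48) / 1 = 0.04, i.e. bet 4% of the bankroll.
print(kelly_fraction(0.52, 1.0))
```

The point of the formula is exactly the behavior described above: a bigger edge (a higher count) produces a proportionally bigger bet, and no edge produces no bet.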
Back in the 1990s, several large and well-known manufacturers (OEMs) figured out the predictability and repeatability of the federal test procedures (FTPs) used by the USEPA. “Defeat devices” (in the form of engine control software) were installed so that engines could increase fuel efficiency at the expense of higher NOx and ultra-fine particulates on the road, while switching back to emissions-friendly operation during an engine dynamometer test. It was an easy fix for the OEMs: the onboard software looked for predictable patterns of acceleration and deceleration – ones that matched various FTPs – and quickly made the switch.
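The pattern-matching idea can be sketched as follows. This is a hypothetical illustration only – the function, variable names, speed values, and tolerance are all invented for this sketch and do not reflect any OEM’s actual calibration logic – but it shows how a stored test-cycle trace makes detection trivial.

```python
# Hypothetical sketch: compare the recent speed trace against a stored
# reference cycle and switch calibrations when they match closely.
# All names, values, and the tolerance are illustrative assumptions.
def matches_cycle(recent_speeds, reference_cycle, tolerance=2.0):
    """True if every recent speed sample is within `tolerance` (mph here)
    of the stored certification-cycle trace."""
    if len(recent_speeds) != len(reference_cycle):
        return False
    return all(abs(a - b) <= tolerance
               for a, b in zip(recent_speeds, reference_cycle))

ftp_segment = [0, 5, 12, 20, 25, 25, 22, 15]   # stored test-cycle speeds
on_dyno     = [0, 6, 11, 21, 24, 25, 23, 14]   # driver tracking the trace
on_road     = [0, 9, 30, 45, 52, 60, 58, 55]   # real highway driving

print(matches_cycle(on_dyno, ftp_segment))  # True  -> emissions-friendly mode
print(matches_cycle(on_road, ftp_segment))  # False -> fuel-economy mode
```

Because the certification cycle was fixed and public, a simple comparison like this was enough to tell the dyno from the road – which is precisely why randomized, on-road testing defeated it.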
It worked well until, after multiple warnings from whistleblowers about the cheating, the USEPA finally tested an engine by simulating steady-state highway operation instead of the certification test cycle. The difference was huge – and it led to the billion-dollar Consent Decree and OEM fines of 1999.
For starters, the Consent Decree funded a ten-year run (2000 – 2010) of “Portable Emissions Measurement Systems” (PEMS) development. It also led to new field standards such as the “Not-To-Exceed” (NTE) standard, and provided some of the impetus behind more rapid identification and deployment of newer, more effective technologies (e.g., Diesel Particulate Filters (DPFs)).
However, these gains and lessons learned in the Heavy Duty Diesel (HDD) market have apparently been ignored in the Light Duty Diesel (LDD) market. Is it because LDD OEMs were ignorant of the industry changes?
Perhaps the “Kelly criterion” should be more carefully considered: although it may initially seem to promise a better strategy in the long run, built-in constraints may “override the desire” for a more optimal – and realistic – growth rate. In other words, the desire to cheat, based on knowledge of the patterns, becomes the Card Counter’s demise.