The latest Tesla Full Self Driving (FSD) update, offering a “Mad Max” mode that is supposedly more aggressive than the other options, is making the rounds. I don’t have or really care about the details beyond that. I take it as a sign of abysmally inadequate state capacity that driving laws are generally ignored and we don’t have speed regulators. It’s even worse that car manufacturers like Tesla (probably others that get less online attention?) sell systems which users can set to exceed speed limits by default, without the driver even needing to press the accelerator pedal each time. This is a huge sector of the economy with massive consequences for health and safety. Yet in line with American disregard for speed limits generally, no regulator will stop this flagrant systemic violation committed for the sake of corporate profit.

This has me reflecting on broader failures related to cars, AI, and learning. Admittedly this gets a little speculative. The relationships are not entirely direct. But if we as a society were stronger in one or two of these areas then we wouldn’t tolerate as much failure in others. Alternatively, understanding partially related failures can reveal failures specific to car culture that drivers won’t document for us in isolation.

American drivers want to believe that driving is something everyone1 can do, a universal and inclusive attribute of existing in our society. This perspective ignores many things, like the desire for exclusivity discussed in the Social Ideology of the Motorcar. Many drivers will oppose transit near them because they want to exclude people who rely on transit for transportation, yet they probably still tell themselves driving is universally accessible. We see this in opposition to testing to get a license, whether more rigorous testing for novice drivers or periodic retesting for drivers who may be losing their proficiency. “Freedom” to unsafely operate a motor vehicle in public is simply too important for the driver who might be rejected, so instead the culture deems everyone capable of driving, the same way we assume everyone can operate a microwave.

One repercussion of that is a lack of interest in learning to drive better. Defensive driving courses are something mandated by courts, or maybe taken to get a discount on insurance. Feedback about driving is likely to be taken about as constructively as telling somebody their family recipe has the wrong spices. Instruction for hobbies or professions is common enough. Driving occupies a large portion of the day for a lot of people and errors carry grave consequences, but there’s negligible interest in analyzing how to reduce risk.

What if, hypothetically, there were a system within cars to give feedback on driving while drivers were already behind the wheel? That’s effectively what advanced driver assistance systems (ADAS) are, and very few drivers express interest in using them that way. A comment I see more frequently is to settle for that feedback as a substitute for driving. Or, as Motor Trend’s long-term tester put their experience of FSD, it led to “changing my role from active controller to vigilant safety monitor”. I’ve seen other drivers make a similar complaint: if FSD is helpful enough for a while, then they’ll disengage and not be ready when needed. Set aside how irresponsible it is for a driver to tune out just because FSD, or a competing ADAS marketed for hands-free or even eyes-free operation, is engaged. Here I want to focus on the lack of interest in using it to recalibrate a driver’s intuition. People frequently complain that it is superhumanly difficult for drivers to maintain a modest speed limit on a straight, wide road. ADAS are pretty good at following speed limits (old cruise controls could do it even better in some circumstances), and drivers could use them to help train themselves. I don’t consider myself a very coordinated person, and I have no sense of timing in a musical sense, but personally I find it rather easy to stay roughly at a limit of 25, 35, or 45, so I find these claims pretty hard to believe. Still, these claims are common, so it’s plausible some drivers truly feel that way. Presumably they could use the ADAS to actually go 25 on a 25 MPH street enough times that it feels correct to them. On the other hand, when an ADAS cannot handle a situation, that’s a chance for the driver to reinforce whatever skill is being used.

To my knowledge no car company tries to optimize its ADAS for such training, much less market it as such. Tesla did something with a “safety score” for FSD that I don’t fully understand, so sound off if I’m wrong and it actually helped drivers become safer. In line with this post, car customers would hate it. A relevant counterexample is chess. As far as I know, the best players in the world, and a fair number of players short of that elite tier, regularly use chess software for training. Not to cheat in tournaments. Not to give them answers outside of tournaments. Not to exclude human partners. But specifically for training that cannot otherwise be replicated. Chess is somewhat of an outlier because human tutors can no longer beat the best software, yet I don’t think that’s as critical as the sheer competitiveness of chess players who recognize a way to grow their skills.

This cultural distinction, between drivers disengaging from driving as soon as they have an ADAS that is woefully inadequate for autonomous operation versus chess players who see software as a way to grow their skills, can be seen in many other misuses of AI. Or, let’s say, misplaced aspirations for AI. It’s also tied into a society that’s devaluing formal education by defunding it from primary schools through doctoral programs. Human skills matter. If we as a culture wanted better writers, etc., we would hear demands for chatbots to give constructive feedback to people trying to improve their skills, and I rather suspect we have the bulk of the technology to help. Not as a replacement for human instruction, but as a tool like chess software. None of the messaging about AI in education has been anything like that.

Maybe one day we’ll ban cars, or at least cars that aren’t autonomous. But we have many vehicle miles yet to go with humans behind the wheel. Narrow safety features like automatic braking are great. But more broadly it would be good for drivers to be more aware and more proficient. To not only stop before crashing, but to understand that drivers and their vehicles exist in a world with painted lines and other human beings, and as such to stop before a crosswalk so that a pedestrian can use it, instead of driving over it and blocking the crosswalk while waiting for a red light. Etc. It is within our technological grasp to improve many skills, including but not limited to driving, without handing over all agency to computers. We just need to take other activities as seriously as chess.


1 Not everyone. Yet the common American perspective is to ignore everyone who is excluded rather than acknowledge that driving is an option available to only a subset of people. Children are legally barred from even taking the test to get a license. Many people have diagnosed medical conditions inconsistent with driving. Many people are financially unable to acquire a car or are otherwise legally barred from driving. And many people who otherwise could drive do not want to, because it is a burden they choose not to endure.
