User Hostile

When I worked in Silicon Valley many years ago, we called software that didn’t work well “user hostile.” That memory came to me, of course, because of the embarrassing performance of the Obama administration in enabling (or more accurately, preventing) enrollment in a medical insurance plan under the so-called Affordable Care Act—which should be called, not Obamacare, but the Health Insurance Industry Guaranteed Profit Act.

What we’re now told is that it’s just a technical problem that can be fixed and from there on it’s smooth sailing. The Obama administration and its supporters have great faith in their technical team and evidently think we should too.

Knowing what I know about the software development process, I’d say much depends on quality control and quality assurance. Quality control makes sure that the software (or any other product) is built as it was designed. Quality assurance makes sure that the software does what it’s supposed to and, if it doesn’t, makes sure the problem gets fixed.

So, on the one hand, as a matter of business, I have reasonable confidence that the enrollment process will get straightened out. But on the other hand, some software applications are simply cursed. So, good luck to all those software geniuses and the people who have to use their product. Personally, I don’t care because I have Medicare, a program that works brilliantly and enabled me to enroll easily. I say this not to gloat but to highlight the fact that the alternative to the Health Insurance Industry Guaranteed Profit Act is Medicare for All—a program that is far less user hostile.

I can’t say I’m all that interested in the political theatre that’s erupting around this technological failure. What interests me is the shock and surprise with which this technological failure was greeted and the earnest determination to persevere until it works right. In fact, I’m shocked and surprised at the shock and surprise and determination because technological failures are so common.

You might think this would be a good time for me to observe that far too many people are mesmerized by information technology and technology in general. But I don’t actually think that. What I think is that the faith in and desire for information technology (and technology in general) are rational responses to human needs.

Some of those needs are gratifyingly human—for example, the need to feel safe. And some are horrifyingly human—for example, the need to have power over others, which is a twisted version of the need to feel safe.

Wherever you think the Internet-based enrollment process lands on the user hostile scale, it meets a need—actually, many needs all at once. The technology succeeds or fails depending on who you are. And who you are doesn’t come from your genome and it doesn’t fall from the sky. It comes from your place among others.

“Why don’t you feel safe?” is not answered by what gadget you use but by how things among us would need to be different.

A few days ago a friend from Mexico told me his right arm was killing him. For the first time in his life, he went to a clinic for a flu shot. Whenever he travels to Mexico, he gets sick. When he travels back to the US, he gets sick. Thinking a flu shot might help, he got one. After several days of suffering, he went back to the clinic.

The people at the clinic said they’d never heard of such a thing; that it was something about him. He agreed with me that they were lying. I agreed with him that it would be a good idea to avoid future encounters with vaccine-wielding medical professionals.

Getting vaccinated failed to meet my friend’s need to feel safe. But his clinic encounters succeeded in maintaining the clinic staff’s faith in vaccination and the entire bulwark of institutional support for vaccination, starting with vaccine manufacturers—also known as Big Pharma.

In other words: it’s not the technology; it’s the social use to which the thing we mistakenly think of as the technology is put and whose interests it serves.

That’s because “technology” is an inherently social concept. We’ve been conditioned to believe that “technology” is about things: computers, motors, nuclear power plants, syringes. It’s not about things. It’s about what people do with things. The things include objects manufactured by humans as well as natural objects: plants, animals, air, water, land.

Agriculture is a technology. It was invented about 12,000 years ago. It is a particular way in which humans use plants, animals, air, water, and land to produce food. It is not the only way that humans satisfy their need for food. But because agriculture was such a wildly successful technology, it now dominates how and what we eat and has produced its own, unique form of user hostile.

About 25 years ago, the anthropologist Jared Diamond wrote an essay in Discover magazine titled “The Worst Mistake in the History of the Human Race.” The “worst mistake” was agriculture. It ruined human health, ruined the lands brought under cultivation, led to class society (also known as civilization), and so on. Mark Nathan Cohen and Marshall Sahlins have made similar arguments.

Wes Jackson at The Land Institute is promoting the idea of agriculture based on perennial grasses. It’s still agriculture, but a different, less user hostile technology. And then there are radicals such as Lierre Keith and Derrick Jensen who advocate for life without agriculture (Keith). That’s not the same as life without food. Nor is it the same as life without useful tools and machines. It’s just a different technology: one friendly to humans in their fullness as well as plants, animals, air, water, and land. But things among us would need to be different.