From Joshua Rothman, The New Yorker: “As artificial intelligence proliferates, more and more hinges on our ability to articulate our own value. We seem to be on the cusp of a world in which workers of all kinds—teachers, doctors, writers, photographers, lawyers, coders, clerks, and more—will be replaced with, or to some degree sidelined by, their A.I. equivalents. What will get left out when A.I. steps in? . . . [Narayanan and Kapoor] approach the question on a practical level. They urge skepticism, and argue that the blanket term ‘A.I.’ can serve as a kind of smoke screen for underperforming technologies. . . . [AI Snake Oil isn’t] just describing A.I., which continues to evolve, but characterizing the human condition.”

From Reece Rogers, WIRED: “The first step to understanding AI better is coming to terms with the vagueness of the term. . . . AI Snake Oil divides artificial intelligence into two subcategories: predictive AI, which uses data to assess future outcomes; and generative AI, which crafts probable answers to prompts based on past data. It’s worth it for anyone who encounters AI tools, willingly or not, to spend at least a little time trying to better grasp key concepts.”