From Joshua Rothman, The New Yorker: "As artificial intelligence proliferates, more and more hinges on our ability to articulate our own value. We seem to be on the cusp of a world in which workers of all kinds (teachers, doctors, writers, photographers, lawyers, coders, clerks, and more) will be replaced with, or to some degree sidelined by, their A.I. equivalents. What will get left out when A.I. steps in? . . . . [Narayanan and Kapoor] approach the question on a practical level. They urge skepticism, and argue that the blanket term 'A.I.' can serve as a kind of smoke screen for underperforming technologies. . . . [AI Snake Oil isn't] just describing A.I., which continues to evolve, but characterizing the human condition."
From Reece Rogers, WIRED: "The first step to understanding AI better is coming to terms with the vagueness of the term. . . . AI Snake Oil divides artificial intelligence into two subcategories: predictive AI, which uses data to assess future outcomes; and generative AI, which crafts probable answers to prompts based on past data. It's worth it for anyone who encounters AI tools, willingly or not, to spend at least a little time trying to better grasp key concepts."
