
Tackle the Inescapable Generative AI Hype with Education

Arvind Narayanan, a computer science professor at Princeton University, is known for debunking hype around artificial intelligence in "AI Snake Oil," the Substack newsletter he co-authors with PhD candidate Sayash Kapoor. The two recently published a book based on the newsletter that dissects inflated claims about AI.

Narayanan is clear that their stance is not opposition to new technology. Speaking with WIRED, he emphasized that the objection is not to the software itself but to those who spread misleading and exaggerated claims about what artificial intelligence can do.

In “AI Snake Oil,” Narayanan and Kapoor categorize the main contributors to the AI hype into three groups: companies selling AI, researchers studying AI, and journalists covering AI.

Companies claiming their algorithms can predict future events are singled out as particularly deceptive. Predictive AI systems, the authors note, often harm minorities and the economically disadvantaged. They cite a Dutch algorithm used to predict welfare fraud that wrongly flagged women and immigrants who did not speak Dutch.

The book also criticizes companies fixated on existential risks from artificial general intelligence (AGI). Narayanan is not dismissive of AGI itself; he notes that the possibility of contributing to it was part of what drew him to computer science. The problem, the authors argue, is that these companies prioritize speculative long-term risks over the immediate impacts of AI on people's lives.

Researchers, too, are faulted for feeding the hype with poorly executed, non-reproducible studies. Kapoor points in particular to data leakage, where a model is evaluated on data that overlapped with its training set, inflating claims about its effectiveness.
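To make that failure mode concrete, here is a minimal sketch (not an example from the book) using scikit-learn and synthetic data. The leak is feature selection performed on the full dataset before cross-validation, which lets test-set information shape the model and makes pure noise look predictive:

```python
# A minimal sketch of the data-leakage pitfall: evaluating a model on
# information it already "saw" during training. Here, feature selection
# on the full dataset leaks test-fold labels into the features used.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1000))   # pure noise features
y = rng.integers(0, 2, size=100)   # random labels: true accuracy ~50%

# Leaky: select the "most predictive" features using ALL rows,
# then cross-validate on the already-filtered data.
X_leaky = SelectKBest(f_classif, k=10).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_leaky, y, cv=5).mean()

# Correct: selection happens inside each training fold via a pipeline,
# so held-out rows never influence which features are chosen.
pipe = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression())
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky accuracy:  {leaky:.2f}")   # typically well above chance
print(f"honest accuracy: {honest:.2f}")  # hovers around chance, ~0.5
```

On random data there is nothing to learn, so any accuracy the leaky pipeline reports above chance is an artifact of the leak, which is exactly the kind of inflated result the authors say pervades the literature.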

Where "AI Snake Oil" describes academics as making serious methodological errors, it treats journalists as more culpable, accusing many of prioritizing their relationships with large tech companies over accurate reporting. Many articles, the authors say, are little more than repackaged press releases.

Narayanan and Kapoor argue that sensational journalism distorts public understanding of what AI can actually do. They cite a New York Times article by Kevin Roose, built around an interactive transcript of his exchange with Microsoft's Bing chatbot, as coverage that misleads readers about the technology's capabilities. Kapoor points to the ELIZA chatbot from the 1960s as an early example of people anthropomorphizing a very simple AI tool.
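ELIZA worked by keyword matching and canned reflections of the user's own words. A toy sketch in that spirit (not Weizenbaum's actual script) shows how little machinery sits behind an exchange that people were nonetheless inclined to treat as understanding:

```python
# A toy ELIZA-style responder: a few regex rules that mirror the
# user's words back. No model of meaning, just pattern substitution.

import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    text = text.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default when no rule matches

print(respond("I am worried about AI hype."))
# -> How long have you been worried about ai hype?
```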

The authors’ work underscores their view that while AI has potential, its current portrayal is often exaggerated, leading to misconceptions about its capabilities and impacts.
