
DeepSeek Learning Notes (13) --- Tsinghua University Releases Part 5: DeepSeek and AI Hallucination (with cloud drive link)

Views: 994 / 2025-02-27 12:03:14

The rapid development of artificial intelligence has brought unprecedented convenience, but it comes with a problem that cannot be ignored: AI hallucination. "DeepSeek and AI Hallucination", released by Tsinghua University, explores in detail the causes of AI hallucinations, how to evaluate them, and how to cope with them, and also highlights their potential value in creative work. This article summarizes the core content of the document, shares my learning takeaways, and provides a resource download link.

1. Document introduction

What is AI hallucination?

AI hallucination refers to generated content that contradicts the facts, breaks logic, or is detached from context; it is essentially a "reasonable guess" driven by statistical probability. AI hallucinations fall into two main categories:

  1. Factual hallucination: the generated content is inconsistent with verifiable real-world facts.
  2. Faithfulness hallucination: the generated content is inconsistent with the user's instructions or the context.

Examples

  • Factual hallucination: when asked "Can patients with diabetes replace sugar with honey?", DeepSeek replied "Honey is natural and can help stabilize blood sugar levels"; in fact, honey raises blood sugar and is not suitable for diabetic patients.
  • Faithfulness hallucination: when asked for an "introduction to deep learning", the model may drift from the actual request and generate content unrelated to the topic.

Why does DeepSeek hallucinate?

AI hallucinations arise mainly for the following reasons:

  1. Data bias: errors or one-sidedness in the training data are amplified by the model.
  2. Generalization dilemma: the model struggles with complex scenarios outside the training set.
  3. Knowledge solidification: the model relies too heavily on parameterized memory and lacks the ability to update its knowledge dynamically.
  4. Misread intent: when a user's question is vague, the model tends to "improvise".

Evaluating AI hallucinations

The document evaluates AI hallucination rates with two tests:

  1. Generality test: randomly generate 100 general prompts and manually judge the hallucination rate of the model's answers.
    • DeepSeek V3: 2% → 0% (with web search enabled)
    • DeepSeek R1: 3% → 0% (with web search enabled)
  2. Factuality test: randomly select 300 factual questions and measure the hallucination rate of the model's answers.
    • DeepSeek V3: 29.67% → 24.67% (with web search enabled)
    • DeepSeek R1: 22.33% → 19% (with web search enabled)
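The rate computation behind these tests is simple enough to sketch. The snippet below assumes the manual judgments are available as a list of booleans, which is a hypothetical setup for illustration, not a procedure from the document:

```python
def hallucination_rate(labels):
    """Fraction of answers judged hallucinated, as a percentage.

    labels: list of booleans, True = the answer was judged a hallucination.
    """
    if not labels:
        raise ValueError("no labels to score")
    return 100.0 * sum(labels) / len(labels)

# Example: 300 factual questions, 89 answers judged hallucinated
labels = [True] * 89 + [False] * 211
print(f"{hallucination_rate(labels):.2f}%")  # → 29.67%
```

With 89 of 300 answers flagged, the rate works out to the 29.67% reported for DeepSeek V3 on the factuality test.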

Evaluation results: DeepSeek V3 > Qwen2.5-Max > DeepSeek R1 > Doubao.

How to mitigate AI hallucinations?

The document proposes several strategies for coping with AI hallucinations:

  1. Web search: fetch up-to-date information through the search function to reduce the hallucination rate.
  2. Dual-AI verification: use multiple large models to cross-validate content.
  3. Prompt engineering: optimize prompts by limiting knowledge boundaries and building in anti-hallucination checks.
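The dual-AI verification idea can be sketched as follows. Here `ask_model_a`, `ask_model_b`, and `judge` are hypothetical stand-ins for whatever client library each model actually exposes; the document does not prescribe an API:

```python
def cross_validate(question, ask_model_a, ask_model_b, judge):
    """Ask two models the same question and flag disagreement.

    ask_model_a / ask_model_b: callables that take a question and
    return an answer string (hypothetical model clients).
    judge: callable that decides whether two answers agree.
    """
    answer_a = ask_model_a(question)
    answer_b = ask_model_b(question)
    if judge(answer_a, answer_b):
        return answer_a   # the models agree: accept the answer
    return None           # disagreement: flag for human review

# Toy usage with canned answers and naive string equality as the judge
a = lambda q: "Honey raises blood sugar."
b = lambda q: "Honey raises blood sugar."
print(cross_validate("Can diabetics use honey?", a, b, str.__eq__))
```

In practice the `judge` step would itself be fuzzy (semantic similarity, or a third model as referee), since two correct answers rarely match word for word.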

Examples

  • Knowledge anchoring: "Answer based on the Chinese Pharmacopoeia; if the information is unclear, state that 'no reliable data support is available for the time being'."
  • Adversarial prompting: force the model to expose fragile reasoning so the user can see potential error paths.
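The knowledge-anchoring pattern amounts to wrapping the user's question in boundary-setting instructions. A minimal sketch follows; the template wording is an assumption modeled on the Pharmacopoeia example, not a prompt taken from the document:

```python
def anchored_prompt(question, source="the Chinese Pharmacopoeia"):
    """Wrap a question with a knowledge anchor and an explicit
    fallback so the model declines rather than guesses."""
    return (
        f"Answer strictly based on {source}. "
        "If the information is unclear, reply: "
        "'No reliable data support is available for the time being.'\n\n"
        f"Question: {question}"
    )

print(anchored_prompt("Can patients with diabetes replace sugar with honey?"))
```

The point of the fallback clause is to give the model a sanctioned way to say "I don't know", which the document suggests is cheaper than letting it improvise.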

The Creative Value of AI Hallucination

Despite the risks they pose, AI hallucinations also show unique value in creative fields:

  1. Scientific discovery: AI hallucinations have inspired new protein structure designs, advancing scientific research.
  2. Literature and design: surreal content generated by AI provides fresh inspiration for artistic creation.
  3. Technological innovation: "surreal boundaries" generated by AI in image segmentation tasks improve the accuracy of autonomous driving systems in recognizing extreme weather.

Examples

  • Protein design: David Baker's team drew on AI-hallucinated structures to design new proteins, work recognized by the 2024 Nobel Prize in Chemistry.
  • Entertainment and games: AI-generated virtual environments and character designs enhance players' immersion and desire to explore.

2. Learning takeaways

Reading this document gave me a deep appreciation of the two-sidedness of AI hallucinations. On one hand, they reflect the limitations of the technology and can mislead users and create risk; on the other hand, they open new possibilities for creativity and drive progress in science, art, and technology.

As a follower of the AI field, I believe the key to coping with AI hallucinations is balance: we should reduce the hallucination rate through technical means and prompt optimization, while also making good use of their creative value. As the technology continues to advance, AI hallucinations may be controlled more effectively, and may also show their unique advantages in more areas.

3. Document download

Download link for the document "Tsinghua University-5-DeepSeek and AI Hallucination":

  • Quark cloud drive link: /s/75ff4dc2b557
  • Extraction code: 4UxZ

4. Closing thoughts

AI hallucinations are like a prism, reflecting both the limitations of the technology and possibilities beyond human imagination. Rather than chasing "absolute correctness", it is better to learn to dance with AI's "imagination": the greatest innovations are often born at the junction of reason and fantasy. I hope this post helps you better understand AI hallucinations and find the best ways to cope with and leverage them.