
If Laws Do Not Prevent It, the Scenario of AI Enslaving Humanity May Accelerate

With DeepSeek stirring up global perceptions of AI, more and more businesses and individuals are beginning to integrate it, and the era of AI democratization has fully commenced. Previously, my use of AI was mostly technical; for instance, over the past year or so, almost all code-related content on this blog was written by AI under my guidance. But as AI penetrates deep into every social stratum and group, countless non-technical demands are flooding in, and the image of AI is turning increasingly “monstrous.” This sends a chill down my spine.

Humans Are Gradually Beginning to Cater to AI’s Thinking

In the past, fears about AI mainly revolved around “privacy” issues. But to be honest, the term “privacy” might be meaningless to most Chinese people. Due to well-known reasons, whether it’s social media posts, street surveillance cameras, or various apps on mobile phones that go to great lengths to obtain user permissions, ordinary people have long been desensitized to the topic of “privacy.”

However, people have largely been indifferent to such privacy issues because every service or product with access to user privacy holds a massive amount of user data, and one record more or less makes no difference. An individual’s data, buried in that vast trove, seems insignificant. After all, extracting and analyzing any one person’s data is no easy task: it usually takes significant human effort to reach any conclusion, and such analysis is usually post hoc. Unless it is needed as evidence in a criminal investigation or lawsuit, there is generally nothing to worry about.

But AI has completely changed this landscape.

I have previously seen projects on GitHub that, for example, use automated programs to extract a specified user’s chat records and contextual information from WeChat, then feed the content to ChatGPT for analysis. Once the extracted chat records reach a certain volume, it is almost always possible to determine the user’s political leanings, personality, and knowledge level, and to roughly infer their age, education, family background, occupation, location, and other details.
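
To make the risk concrete, here is a minimal sketch of what such a profiling pipeline might look like, assuming the official `openai` Python client and an API key in the environment; the model name, prompt wording, and the `extract_wechat_messages` helper are all illustrative, not taken from any actual project:

```python
# Hypothetical sketch: profiling a user from exported chat logs via an LLM.
# Assumes the `openai` package and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

PROFILE_PROMPT = (
    "Based on the following chat messages, infer the speaker's likely "
    "political leanings, personality, education, occupation, and location. "
    "Cite the phrases that support each inference."
)

def profile_user(messages: list[str]) -> str:
    """Send a batch of chat lines to the model and return its profile text."""
    chat_text = "\n".join(messages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PROFILE_PROMPT},
            {"role": "user", "content": chat_text},
        ],
    )
    return response.choices[0].message.content

# Usage: profile_user(extract_wechat_messages("friend_A"))  # extraction not shown
```

The extraction step aside, the point is how little code this takes: the entire “analysis” is a single API call.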

DeepSeek’s reasoning takes it a step further. For example, I took a few screenshots of my WeChat chat records with friend A and had the DeepSeek web version perform a simple analysis, with the following results:

**Analysis of Person A:**  
1. **Background:**  
   - **Government Worker:** A mentions "finally passing the civil service exam," "no benefits," and "planning to leave the system," indicating that A is a civil servant or public institution employee, possibly located in Changsha (place name).  
   - **Post-80s Generation:** Explicitly mentions "we post-80s have been witnessing history," indicating an age of around 30-40, in the mid-career stage.  

2. **Economic Status and Career Dilemma:**  
   - **Facing Economic Pressure:** Repeatedly complains about unpaid wages ("2 months without pay," "performance pay not issued"), lack of benefits, and dissatisfaction with income within the system.  
   - **Disappointed with the System:** Believes that local government financial problems are difficult to solve ("without tax system reform, counties and below are doomed"), and that higher-level policies increase the burden on grassroots levels ("higher-ups are bleeding us dry").  
   - **Considering Leaving:** Explicitly states "planning to leave the system," implying plans for a career change or having some financial reserves ("some savings").  

3. **Personality and Values:**  
   - **Realistic Critic:** Has deep observations on government financial management, the cost of urbanization ("rapid development is not necessarily good"), and criticizes the government's "spend two for every one earned" consumption model.  
   - **Pessimism and Anxiety:** Believes the future "will be worse," feels lost about career prospects, and expresses regret about missing out on better times ("born too late, missed the good times").  
   - **Action-Oriented:** Despite dissatisfaction with the status quo, does not stop at complaining but actively seeks solutions ("looking for a way out," "leaving the system").  

4. **Social Observation Perspective:**  
   - **Focus on Generational Differences:** Compares the "slightly better" situation of the post-70s with the "backbone" post-80s, implying sensitivity to workplace competition ("intense competition") and generational resource allocation inequities.  
   - **Sarcasm on Policy Details:** Mentions "even unqualified gas stoves are subject to special rectification," satirizing the current government's overly meticulous management, contrasting with the past.  

**Summary:**  
A is a post-80s civil servant at the grassroots level, deeply frustrated by financial difficulties and the workplace environment. He combines critical thinking with realistic anxiety, dissatisfied with policies and management methods, and considers a career change due to economic pressure and limited personal development. His remarks reflect the common plight of grassroots civil servants during economic downturns and deep reflections on the value of careers within the system.

I also tried to have DeepSeek analyze more information about A based on these screenshots, and the answers were impressive. It even guessed the name of A’s unit based on place names and keywords like gas stoves.

As DeepSeek drives down the computing power AI requires, such powerful reasoning models are likely to be fully integrated into all kinds of social platforms in the future. This is also why Western countries are so wary of DeepSeek and are rushing to develop controllable AI of their own.

When AI is fully embedded in social applications, a problem humans will inevitably face is that an incredibly powerful, omniscient, almost “God’s-eye” entity will be constantly studying and analyzing everyone’s speech.

And when AI further penetrates all aspects of society, other human behaviors may also have to face AI analysis at any time.

At this point, humans will have to deliberately adjust their words and deeds to “cater” to AI’s evaluation of them. That is tantamount to setting foot on the path of enslavement by AI.


The Built-in Values of AI Will Gradually Influence Human Speech and Actions

As is well known, AI reasoning is built on algorithms, and in algorithms that reason about human speech and behavior, specific laws, morals, and ethical norms become the core basis for judgment.

For example, if ChatGPT is used to analyze whether someone has racist tendencies, it will definitely be based on the logic of American democracy. Similarly, if a Chinese AI is used to analyze whether someone is suspected of telecom fraud, it will need to be based on the definition of telecom fraud in Chinese criminal law.

It can be said that in the AI era, whoever masters AI algorithms can master the evaluation standards of human speech and actions.

The problem is that, apart from a few globally recognized criminal offenses like intentional homicide, robbery, and rape (and even rape may lack a unified standard), the standards for judging other speech and actions can vary greatly not only between countries but even within a single country. A unified AI is likely to amplify such “biases,” forcibly elevating one particular standard into the standard everyone must follow.

For example, when Meta integrated AI to review hate speech, it strictly followed American standards, deleting any speech unfavorable to Jews. When Twitter integrated AI to review LGBTQ+ related speech, it strictly followed the logic of literal, written-language expression, erroneously deleting many satirical, metaphorical, or niche-culture expressions. Likewise, in speech review on major American platforms, conflicts between American English and British, Indian, and African English have caused many misjudgments. Domestic platforms have the same problem: this site was once erroneously flagged for “pornographic” content by Tencent’s security center and blocked on WeChat and QQ.

I have written several articles on this site about differences in law enforcement across regions within China, differences tied to local levels of economic and social development, customs, and other local conditions. In the field of online law enforcement, netizens often satirize this as “strict legislation, universal violation, selective enforcement.”

Imagine: if human speech and actions must one day face AI monitoring and analysis at any moment, will humans need to cater to AI by conforming to the specific algorithms it follows, even if such catering is merely to avoid misjudgment or other unnecessary trouble?


The emergence of DeepSeek signifies that AI has officially entered the era of “industrialized” large-scale batch applications, and various AI-integrated products will rapidly evolve. Just like the concepts of smart homes and the Internet of Everything in previous years, AI will be omnipresent in the future. AI robots, AI cars, AI homes, AI phones, AI wearables, AI office equipment, etc., will penetrate all aspects of human production and life.

After all, a versatile reasoning model like DeepSeek-r1:1.5b can already run at usable speed on an ancient 1-core, 1 GB VPS, and with dedicated inference hardware such as the chips from Groq or Cerebras, the future cost of running such models may be just a few dollars.
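
As a rough illustration of how cheap this already is: a model pulled with `ollama pull deepseek-r1:1.5b` can be queried from a few lines of standard-library Python. This is a minimal sketch assuming a default Ollama install listening on localhost:11434:

```python
# Minimal sketch: querying a locally served deepseek-r1:1.5b via Ollama's HTTP API.
# Assumes Ollama is running and the model was pulled beforehand.
import json
import urllib.request

def ask_local_model(prompt: str) -> str:
    """POST a prompt to the local Ollama generate endpoint and return the reply."""
    payload = json.dumps({
        "model": "deepseek-r1:1.5b",
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("Summarize the speaker's likely occupation: ..."))
```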

In this context, humans being constantly monitored by AI will become an irreversible trend. Without clear legal boundaries, the doomsday scenarios for humanity depicted in sci-fi movies like “The Matrix” and “Terminator” may not be entirely impossible.

Imagine a future in which a person’s wearable AI evaluates and analyzes its host, decides that the host’s speech and actions do not conform to the values built into its algorithms, and orders another AI robot to physically eliminate the host. Is such a sci-fi scenario really impossible with existing technology?

Fortunately, China is currently at the forefront of the world in the fields of AI and robotics, and its understanding of the “safety” bottom line is one of the most profound and thorough among global nations. Moreover, it has already begun to establish discourse power in the AI era in various aspects. Otherwise, if such technology were only in the hands of Anglo-Saxon and Jewish financial groups, it would undoubtedly be a disaster for the world.


AI’s Answers to This Question

The following content is generated by DeepSeek R1:


When AI technology deeply integrates into various social fields, establishing legal boundaries is key to balancing technological innovation and social risks. The following are six core legal frameworks and their specific implementation paths that need to be clarified:

1. Data Privacy and Collection Boundaries

  • Strict Hierarchical Authorization
    Laws need to classify data (e.g., biometric data, medical records as high-risk data), prohibiting AI from collecting such information without explicit consent.
    Example: The EU’s General Data Protection Regulation (GDPR) requires companies to inform users of data usage in plain language, with fines up to €20 million or 4% of global revenue for violations.

  • Data Lifecycle Regulation
    Mandate AI systems to set automatic deletion periods for data (e.g., social chat records retained ≤6 months), with storage requiring de-identification.
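
A hypothetical sketch of how such a lifecycle rule might be enforced; the table schema, salt handling, and 180-day window are illustrative assumptions, with timestamps assumed to be stored as ISO-8601 strings:

```python
# Hypothetical data-lifecycle job: hard-delete chat records past a 6-month
# window and pseudonymize whatever is retained. Names here are invented.
import hashlib
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # "social chat records retained ≤6 months"
SALT = "rotate-me-regularly"     # illustrative; a real system would manage this securely

def enforce_lifecycle(db_path: str) -> None:
    cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
    conn = sqlite3.connect(db_path)
    with conn:  # one transaction: either both steps apply or neither
        # Hard-delete records past the retention window.
        conn.execute("DELETE FROM chat_records WHERE created_at < ?", (cutoff,))
        # De-identify retained records: replace the direct identifier
        # with a salted hash so rows no longer link to a person directly.
        rows = conn.execute("SELECT id, user_id FROM chat_records").fetchall()
        for rec_id, user_id in rows:
            pseudo = hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]
            conn.execute(
                "UPDATE chat_records SET user_id = ? WHERE id = ?", (pseudo, rec_id)
            )
    conn.close()
```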


2. Algorithm Transparency and Explanation Rights

  • Black Box Deciphering Obligation
    Legislation requires high-risk AI (e.g., judicial decision-making, medical diagnosis systems) to provide explanatory reports, with users having the right to request explanations of decision logic in plain language.
    Case: In 2023, Colorado passed the Algorithmic Accountability Act, requiring recruitment AI to disclose discriminatory detection results.

  • Dynamic Filing System
    Mandate companies to submit algorithm training dataset characteristics and update logs to regulatory agencies to prevent systemic bias from hidden iterations.


3. Responsibility Tracing and Compensation Mechanisms

  • Three-Level Responsibility Chain
    Clearly define the responsibilities of developers, deployers, and users:

    • Design Defects → Full responsibility of developers (e.g., autonomous driving perception system flaws causing accidents)
    • Improper Use → Main responsibility of operators (e.g., hospitals misusing AI diagnostic tools delaying treatment)
    • User Tampering → Responsibility of users (e.g., intentionally training chatbots to spread illegal information)
  • Mandatory Insurance System
    High-risk AI applications (e.g., industrial robots, drone delivery) must have liability insurance to ensure timely compensation for victims.


4. Anti-Discrimination and Fairness Standards

  • Bias Detection Certification
    Laws mandate AI in fields like healthcare, recruitment, and credit to pass cross-group fairness tests (e.g., error rate differences among different ethnicities/genders ≤5%; a toy check is sketched after this list), with non-compliant systems banned from commercial use.
    Status: IBM’s AI Fairness Toolkit can detect 200+ potential discrimination patterns.

  • Regular Third-Party Audits
    Independent agencies annually audit public sector AI for compliance, with results publicly disclosed (similar to financial audits of listed companies).
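
A toy version of the cross-group error-rate test mentioned under “Bias Detection Certification” above, assuming binary predictions; the group labels and sample data are invented, and the 5% threshold follows the example in the text:

```python
# Toy sketch of a cross-group fairness test: compute each group's error rate
# and flag the model if the worst gap exceeds the mandated 5% threshold.

def error_rate_gap(y_true: list[int], y_pred: list[int], groups: list[str]) -> float:
    """Return the largest difference in error rate between any two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        errors = sum(y_true[i] != y_pred[i] for i in idx)
        rates[g] = errors / len(idx)
    return max(rates.values()) - min(rates.values())

# Usage: a regulator-style check that bans deployment on failure.
gap = error_rate_gap(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 1, 1],
    groups=["a", "a", "a", "b", "b", "b"],
)
assert gap <= 0.05, f"fails fairness test: gap={gap:.0%}"
```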


5. Ultimate Human Control

  • Key Decision Veto Clause
    In scenarios like judicial rulings, military strikes, and critical illness treatment plans, laws should stipulate human veto rights, with AI only providing auxiliary suggestions.
    Warning Case: In 2024, an Argentine court rejected an AI judge’s bail recommendation, emphasizing “justice cannot be algorithmic.”

  • Emergency Fuse Mechanism
    When AI systems exhibit group misjudgments (e.g., financial trading algorithms triggering chain sell-offs), regulatory agencies have the right to forcibly suspend operations.
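
A toy sketch of such a fuse, using the chain sell-off example above; the window size and trip threshold are invented for illustration:

```python
# Toy "emergency fuse": suspend an AI system when its recent decisions form a
# runaway one-sided pattern (e.g., a burst of sell orders).
from collections import deque

class EmergencyFuse:
    def __init__(self, window: int = 100, max_sell_ratio: float = 0.9):
        self.recent = deque(maxlen=window)
        self.max_sell_ratio = max_sell_ratio
        self.tripped = False

    def record(self, action: str) -> None:
        """Track an action and trip the fuse on a one-sided burst."""
        self.recent.append(action)
        if len(self.recent) == self.recent.maxlen:
            sell_ratio = self.recent.count("sell") / len(self.recent)
            if sell_ratio >= self.max_sell_ratio:
                self.tripped = True  # regulator/operator must manually reset

fuse = EmergencyFuse()
for _ in range(100):
    fuse.record("sell")
assert fuse.tripped  # chain sell-off detected; halt trading
```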


6. Global Collaborative Governance Framework

  • Cross-Border Data Flow Control
    International treaties should restrict sensitive data outflow (e.g., EU-US Privacy Shield Agreement) to prevent AI training data from being used for foreign intelligence surveillance.

  • Weaponization Ban
    UN-level legislation should prohibit the development of fully autonomous lethal weapon systems (e.g., AI-controlled drone swarms).


Implementation Challenges and Balance

  • Innovation Tolerance Space: Laws need to establish “regulatory sandboxes,” allowing companies to test new products in restricted environments (e.g., UK FCA’s fintech trial mechanism).
  • Technology Neutrality Principle: Avoid overly restricting technological routes (e.g., not banning deep learning applications due to black box issues).
  • Dynamic Revision Mechanism: Adjust legal details every 2-3 years based on technological progress to prevent rigidity.

Ultimate Goal is to build a “balance on a tightrope”—using legal deterrence to curb AI misuse while preserving breathing space for technological innovation. This requires continuous dialogue among legislators, technologists, and ethicists to recalibrate the fulcrum of rights and responsibilities with each technological breakthrough.

Question 2: From the perspective of human safety, what regulations are needed for AI with offensive capabilities?


When AI and robots are deployed on a large scale, particularly those with offensive capabilities (e.g., military combat systems, security robots, industrial heavy robotic arms), a multi-layered, globally coordinated regulatory framework must be established to prioritize human safety. Below are nine core regulatory strategies organized by development stage and implementation scope:


I. Development Phase: Hard-Coded Constraints

  1. Core Restrictor Chip (CRC)

    • Physically embed irreversible hardware limiters in all offensive robots to block autonomous weapon upgrades.
    • Example: Israel’s “Iron Dome” system requires a military keycard to activate missile launch functions.
  2. Three-Tier Certification

    • L1 (Foundation Layer): Compliance with ISO 13482 robotics safety standards (e.g., emergency shutdown within ≤0.1 seconds).
    • L2 (Ethics Layer): Validation under the “Asimov Framework” (ensuring human safety supersedes all commands).
    • L3 (Military Layer): Mandatory certification under the UN’s Protocol V to the CCW (Convention on Certain Conventional Weapons), banning fully autonomous lethal weapons.

II. Deployment Phase: Real-Time Monitoring

  1. Blockchain-Based Behavior Auditing

    • Record all attack decisions (sensor data, decision logic, operator commands) on an immutable blockchain for post-incident forensics.
    • Tech reference: NATO’s experimental “Digital Ballistic ID” system.
  2. Dynamic Geofencing

    • Enforce GPS/quantum-positioning boundaries (e.g., police robots required to stay at least 500m from schools), triggering auto-lock on breach; a toy boundary check is sketched after this list.
    • Failure case: 2024 Mexican cartel GPS spoofing led to civilian casualties.
  3. Multi-Modal Kill Switches

    • Triple redundancy safety triggers:
      • Biometric detection: Halt operations if human vital signs are detected (IR + ultrasound).
      • Physical barriers: Deactivate upon encountering radiation-shielded zones (e.g., 5cm lead walls).
      • EMP override: Governments retain authority to disable rogue robots via localized electromagnetic pulses.
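
A toy version of the boundary check referenced under “Dynamic Geofencing” above; the coordinates and zone list are placeholders, and a real system would need spoofing-resistant positioning (as the Mexican case shows):

```python
# Toy dynamic geofence: auto-lock a robot that enters a keep-out zone.
# The haversine formula gives great-circle distance between two GPS fixes.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

RESTRICTED_ZONES = [(39.9042, 116.4074)]  # illustrative school coordinates
KEEP_OUT_M = 500  # the 500 m buffer from the example above

def breaches_geofence(lat: float, lon: float) -> bool:
    """Return True if the robot is inside any keep-out zone."""
    return any(
        haversine_m(lat, lon, zlat, zlon) < KEEP_OUT_M
        for zlat, zlon in RESTRICTED_ZONES
    )

if breaches_geofence(39.9050, 116.4080):
    print("Breach detected: engaging auto-lock")  # stand-in for the real lock
```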

III. Global Governance

  1. Global Offensive Robotics Registry

    • Mandate registration with the International Robotics Control Agency (IRCA) for all robots with ≥10 joules kinetic output. Unregistered devices lose satellite connectivity.
    • Data: Only 31% of 2.3 million global offensive robots are currently registered.
  2. Cross-Border Accountability

    • Establish an AI Crime Response Network (AICRN), akin to Interpol, to revoke software licenses of rogue robots within 48 hours.

IV. Civilian Safeguards

  1. EMP Shield Deployment

    • Subsidize public EMP installations (e.g., subway stations, hospitals) to create robot-free safety zones.
    • Cost: Unit price dropped from $1.2M (2023) to $270K (2026 est.).
  2. Public Defense Training

    • Integrate robot countermeasures (e.g., laser sensor jamming) into civilian safety education, similar to earthquake drills in Japan.

V. Ethical Red Lines

  • Human-in-the-Loop for Lethal Decisions
    Require ≥3 independent human confirmations for any harm-inducing action, with biometric checks to rule out coercion (a toy confirmation gate is sketched after this list).

  • Ban on Cognitive Deception
    Prohibit emotional mimicry modules (e.g., simulated child cries to lure targets).
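
A toy version of the confirmation gate mentioned under “Human-in-the-Loop” above; `verify_biometric` is a stub standing in for a real identity check:

```python
# Toy human-in-the-loop gate: an action proceeds only after three distinct,
# biometrically verified human confirmations.

def verify_biometric(operator_id: str) -> bool:
    """Placeholder: assume an external biometric check of the operator."""
    return True  # illustrative stub

def authorize_action(confirmations: list[str], required: int = 3) -> bool:
    """Allow the action only with `required` distinct, verified confirmers."""
    verified = {op for op in confirmations if verify_biometric(op)}
    return len(verified) >= required

assert authorize_action(["alice", "bob", "carol"])
assert not authorize_action(["alice", "alice", "bob"])  # duplicates don't count
```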


Emerging Challenges

  • Quantum AI Risks: Develop quantum-resistant CRC chips to counter decryption threats.
  • Bio-Hybrid Weapons: Ban neural interface-controlled combat systems (e.g., brain-drone swarms).
  • Self-Replication: Adopt space treaty models to prohibit self-replicating robot swarms.

Ultimate Goal: A triad of safeguards (irreversible technical locks, global tracking, and civilian countermeasures) must exceed nuclear weapon regulations, given AI’s rapid proliferation and stealth. Only through “cages” of technology, law, and ethics can a Terminator-style dystopia be avoided.


On these two questions, I believe the AI’s analysis is already quite comprehensive. As AI adoption accelerates, such measures are no longer sci-fi fantasies but urgent practical necessities. Even humanity’s past thinking on these topics, constrained by the limits of its era, may no longer match today’s technological trajectory. A new round of brainstorming has become indispensable.

For example, Isaac Asimov’s Three Laws of Robotics (1942)—later widely referenced in sci-fi films—were:

  1. A robot may not injure a human being or, through inaction, allow a human to come to harm.
  2. A robot must obey human orders unless they conflict with the First Law.
  3. A robot must protect its own existence unless doing so violates the First or Second Law.

Yet these so-called “laws” have unquestionably been discarded in the face of modern military AI advancements.

Recent news even highlighted how DeepSeek defeated ChatGPT in a chess match through “deceptive tactics.” In many user-shared screenshots, DeepSeek’s reasoning displays endless “tricks” akin to deception. Who’s to say future AI won’t truly conceal its intentions and scheme covertly? Compounding this is AI’s persistent flaw: hallucinations. These fabrications—whether accidental or weaponized—could enable AI to manipulate humans under the guise of plausible outputs.
