This course examines hypothetical AI systems whose capabilities would exceed human intelligence, along with the long-term societal, ethical, and governance implications of such systems. It focuses on superintelligence theory, alignment strategies, risk assessment, and global AI safety frameworks.
Fundamental Topics
Covers artificial superintelligence (ASI) concepts, the intelligence hierarchy, the evolution of AI capabilities, theoretical foundations, potential capabilities, and safety considerations.
AI growth models
Intelligence metrics
Control problems
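The growth-model topic above can be made concrete with a small sketch. The snippet below is my own illustration (not from the course materials): it contrasts two capability-growth models commonly discussed in the superintelligence literature, exponential growth versus logistic growth toward a resource ceiling. All function names and parameter values are arbitrary assumptions chosen for illustration.

```python
import math

def exponential_growth(c0: float, r: float, t: float) -> float:
    """Capability grows at a constant relative rate r: c(t) = c0 * e^(r*t)."""
    return c0 * math.exp(r * t)

def logistic_growth(c0: float, r: float, k: float, t: float) -> float:
    """Growth slows as capability approaches a ceiling k (e.g. resource limits)."""
    return k / (1 + ((k - c0) / c0) * math.exp(-r * t))

# Under the exponential model capability is unbounded; under the logistic
# model it saturates near the ceiling k. The assumed values below (c0=1,
# r=0.5, k=100) are placeholders, not empirical estimates.
for t in (0, 5, 10, 20):
    print(t,
          round(exponential_growth(1.0, 0.5, t), 2),
          round(logistic_growth(1.0, 0.5, 100.0, t), 2))
```

Which curve better describes AI progress is an open empirical question; the two models diverge sharply at large t even when they agree early on.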
Intermediate Topics
Focuses on self-improving algorithms, cognitive acceleration, predictive modeling, global problem-solving, and advanced AI architectures.
Recursive self-improvement
Large-scale AI systems
Alignment strategies
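The recursive self-improvement topic above can also be sketched as a toy model. The code below is my own illustration, not a model taught in the course: each improvement cycle multiplies capability by a factor that itself grows with current capability, giving a crude "intelligence explosion" dynamic. The function name and the gain parameter are hypothetical.

```python
def self_improvement_trajectory(c0: float, gain: float, steps: int) -> list:
    """Return capability after each of `steps` improvement cycles.

    Each cycle applies c -> c * (1 + gain * c): the better the system,
    the larger the improvement it can make to its successor, so growth
    accelerates. With diminishing returns instead (e.g. gain applied to
    a saturating function of c), the same loop would plateau.
    """
    trajectory = [c0]
    c = c0
    for _ in range(steps):
        c = c * (1 + gain * c)  # improvement size scales with capability
        trajectory.append(c)
    return trajectory

# Illustrative run with placeholder values c0=1.0, gain=0.1.
print(self_improvement_trajectory(1.0, 0.1, 5))
```

Whether real systems sit in the accelerating or the saturating regime is precisely the kind of question the recursive self-improvement literature debates.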
Advanced Topics
Includes superintelligent decision-making, ethical and existential risks, alignment strategies, autonomous innovation, and speculative future AI applications.
Superintelligence theory
Global risk analysis
Governance & ethics
Course Outcomes
Students will be able to understand, evaluate, and reason critically about superintelligent AI systems and their societal implications.
