Within a week of this post, millions of Americans will be introduced to AI technologies for the first time via big-budget Super Bowl advertisements. Millions of people who vote and participate in the economy will be forced to reckon with the fact that powerful AI systems are here and cannot be ignored. Meanwhile, the tech world is in a frenzy today, with the simultaneous release of the powerful Claude Opus-4.6 and GPT-5.3-Codex models (on top of ever-stronger agentic tooling). The days of software engineers, you know, *writing code* may end this year.
“So what the hell is going on?”
This post serves as an ongoing syllabus for understanding the impacts of AI on society. I’ve closely followed the AI field since 2018, and I’ve engaged with esoteric online communities and research literature in many subfields. Hopefully, the resources provided here will give you a working model of what’s actually happening in AI and help cut through our chaotic information environment. Feel free to reach out to me with any questions! - MB
Substack/Newsletter — Start Here
Timothy B Lee (Understanding AI) 𝕏
- Focus: Explanatory journalism on AI developments, aimed at an informed general audience
- Strength: Explains technical concepts without dumbing down; probably the best coverage of Waymo/Tesla/autonomous vehicles
Ethan Mollick (One Useful Thing) 𝕏
- Focus: Practical AI applications, especially in education and knowledge work
- Strength: Wharton professor who actually uses the tools; grounded in real workflows
- Focus: Comprehensive weekly AI news synthesis
- Strength: He reads absolutely everything, thinks out loud, and shows his work; this might be the best one-stop-shop for AI reporting.
- Caveat: Very long posts and occasionally “too online”
- Focus: AI policy, debunking misinformation, DC-insider perspective
- Strength: Excellent coverage of infrastructure and energy discourse, also autonomous vehicle safety
Policy Focus
Alec Stapp (Institute for Progress) 𝕏
- Focus: Industrial policy, permitting reform, AI infrastructure
- Strength: IFP conducts and communicates rigorous analyses; connects AI to broader progress/abundance agenda
Dean Ball (Hyperdimensional) 𝕏
- Focus: AI governance, regulatory strategy, state vs. federal policy approaches
- Strength: Actually engages with how policy gets made; was the primary staff drafter of America's AI Action Plan at OSTP
- Caveat: Frustrating conservative culture war tangents; useful for understanding that perspective, but skim accordingly
- Focus: AI governance, international AI policy, lab and competition dynamics
- Strength: In-depth analysis of defense and China + AI. Includes former OpenAI board member Helen Toner 𝕏
- Focus: ~Weekly digest of AI research, industry moves, policy developments; AI safety/risk mindset throughout
- Strength: Was policy director at OpenAI, co-founded Anthropic; genuine insider knowledge
Podcast/Multimedia
Dwarkesh Patel (Dwarkesh Podcast) 𝕏
- Focus: Long-form interviews with AI researchers, historians, scientists
- Strength: My favorite podcaster; Dwarkesh does incredible research into the interviewee’s field, and the depth of conversation shines through
Jordan Schneider (ChinaTalk) 𝕏
- Focus: China, technology, geopolitics, industrial policy, and defense; increasingly AI-focused as the field has become more central to US-China competition
- Strength: Genuine China expertise and has a repeat cast of expert contributors
- Focus: AI applications, industry practitioners, founders
- Strength: Wide range of guests; practical "what are people building" focus
- Caveat: Variable episode quality depending on guest; some guests do too much trying to market their company
Read Once
Anthropic's New Constitution - How does humanity transition to a world of incredibly powerful AI? How can millennia of debates about human ethics and philosophy be transmitted to alien, silicon intelligences that we've built? This document attempts those modest goals. I could envision this document being taught to children in a generation's time.
Leopold Aschenbrenner — "Situational Awareness" - This essay had quite a lot of influence in DC circles in 2024 and into 2025, especially within the administration. It contains several aggressive timelines that have been holding true so far, and many China hawks still cite the piece.
More Technical Focus
- Focus: Semiconductors, AI infrastructure, compute economics
- Strength: Best deep technical and financial analysis of the hardware layer
- Caveat: Expensive subscription for full access; chip-focused rather than AI-applications-focused; sometimes annoying shitposting on their personal Twitter accounts
Nathan Lambert (Interconnects AI) 𝕏
- Focus: Open source models, technical deep dives
- Strength: Real ML practitioner, also a frequent guest writer/speaker in collaboration with others listed here
- Focus: AI trends, compute measurements, quantitative forecasting
- Strength: Rigorous data collection on training compute, model capabilities, timelines
- Focus: Evaluating capabilities & risks in frontier models
- Strength: Actually runs evals that labs and governments reference; technically credible
- Focus: Practical LLM usage, tools, prompt engineering, open source AI
- Strength: Builder who documents everything; his blog is a searchable reference
Who to Ignore (and Why)
Anyone referencing "Terminator," "Skynet," "HAL 9000," etc. - Lazy sci-fi comparisons are a clear indicator that the author a) doesn't understand the subject and b) is only interested in cheap clicks. We are now several years into the next industrial revolution. Media figures need to catch up on their homework.
Sam Altman / Elon Musk / Venture Capitalists (especially a16z) - Their public statements optimize for regulatory outcomes, stock prices, political access, etc. Reading them as information sources rather than political/financial actors is a category error.
e/acc (Effective Accelerationism) - This Twitter aesthetic wants to accelerate AI development as fast as possible, but it’s vibes masquerading as philosophy: lazy intellectual cover for "move fast and break things" without engaging with anything of substance.
Gary Marcus 𝕏 - He has made repeated, empirical bets that AI won't continue improving, and he has been consistently wrong. Unfortunately, he clings to outdated theoretical neuro/psych beliefs about intelligence. In my opinion, he is disproportionately featured in the media due to bothsidesism biases. (see footnote)
Eliezer Yudkowsky 𝕏 - He was foundational to AI safety thinking and deserves recognition for that, but in the pre-AI years he built a theoretical model of how AI might progress or go wrong, and he has not been able to update it in the decade that we've had real-world AI technology. His "we're all going to die" framing creates paralysis, not action, in the policy world. (see footnote)
Emily Bender / Timnit Gebru / Alex Hanna (DAIR Institute) - This group gained fame a few years ago claiming LLMs are just statistical pattern matching ("stochastic parrots") and overhyped. They have been consistently wrong about the technology (which they refuse to use) while injecting toxicity and identity politics into many of the conversations they join. They hold the incoherent position that AI is simultaneously a) useless big-tech marketing and b) a cause of widespread harm. They are partisan interlocutors who turn everything into culture war, and they do a disservice to genuine efforts to reduce bias and discrimination in AI. (see footnote)
Ed Zitron - This guy is just doing inflammatory dunking optimized for engagement.
Footnote, on including critics of AI in the "ignore" category: I take criticisms of AI seriously. I believe that AI can cause real harm if deployed without guardrails, that governments will be clumsy and reactive in regulating it, and that AGI/ASI existential risks deserve serious study. The critics listed here have distinguished themselves by having facially incoherent core viewpoints and/or undeniable long-term track records of failure.
