Within a week of this post, millions of Americans will be introduced to AI technologies for the first time via big-budget Super Bowl advertisements. Millions of people who vote and participate in the economy will be forced to reckon with the fact that powerful AI systems are here and cannot be ignored. Meanwhile, the tech world is in a frenzy today with the simultaneous release of the powerful Claude Opus 4.6 and GPT-5.3-Codex models (on top of increasingly capable agentic tooling). The days of software engineers, you know, *writing code* may end this year.
“So what the hell is going on?”
This post serves as an ongoing syllabus for understanding the impacts of AI on society. I’ve closely followed the AI field since 2018, and I’ve engaged with esoteric online communities and research literature across many subfields. Hopefully, the resources here will give you a working model of what’s actually happening in AI and help you cut through our chaotic information environment. Feel free to reach out to me with any questions! - MB
Substack/Newsletter — Start Here
Timothy B Lee (Understanding AI) 𝕏
- Focus: Explanatory journalism on AI developments, aimed at an informed general audience
- Strength: Explains technical concepts without dumbing down; probably the best coverage of Waymo/Tesla/autonomous vehicles
Ethan Mollick (One Useful Thing) 𝕏
- Focus: Practical AI applications, especially in education and knowledge work
- Strength: Wharton professor who actually uses the tools; grounded in real workflows
Zvi Mowshowitz (Don’t Worry About the Vase) 𝕏
- Focus: Comprehensive weekly AI news synthesis
- Strength: He reads absolutely everything, thinks out loud, and shows his work; this might be the best one-stop shop for AI reporting
- Caveat: Very long posts and occasionally “too online”
- Focus: AI policy, debunking misinformation, DC-insider perspective
- Strength: Excellent coverage of infrastructure and energy discourse, as well as autonomous vehicle safety
Policy Focus
Alec Stapp (Institute for Progress) 𝕏
- Focus: Industrial policy, permitting reform, AI infrastructure
- Strength: IFP conducts and communicates rigorous analyses; connects AI to broader progress/abundance agenda
Dean Ball (Hyperdimensional) 𝕏
- Focus: AI governance, regulatory strategy, state vs. federal policy approaches
- Strength: Actually engages with how policy gets made; was the primary staff drafter of America's AI Action Plan at OSTP
- Caveat: Frustrating conservative culture-war tangents; useful for understanding that perspective, but skim accordingly
CSET & Helen Toner 𝕏
- Focus: AI governance, international AI policy, lab and competition dynamics
- Strength: Former OpenAI board member; Georgetown CSET Interim Executive Director; actual policy credentials
- Caveat: People discount her views due to the OpenAI/Sam Altman saga, but subsequent events have vindicated some of her concerns
Jack Clark (Import AI) 𝕏
- Focus: ~Weekly digest of AI research, industry moves, policy developments; AI safety/risk mindset throughout
- Strength: Was policy director at OpenAI, co-founded Anthropic; genuine insider knowledge
Podcast/Multimedia
Dwarkesh Patel (Dwarkesh Podcast) 𝕏
- Focus: Long-form interviews with AI researchers, historians, scientists
- Strength: My favorite podcaster; Dwarkesh does incredible research into the interviewee’s field, and the depth of conversation shines through
Jordan Schneider (ChinaTalk) 𝕏
- Focus: China, technology, geopolitics, industrial policy, and defense; increasingly AI-focused as the field has become more central to US-China competition
- Strength: Genuine China expertise and has a repeat cast of expert contributors
- Focus: AI applications, industry practitioners, founders
- Strength: Wide range of guests; practical "what are people building" focus
- Caveat: Variable episode quality depending on the guest; some spend too much time marketing their company
More Technical Focus
Dylan Patel (SemiAnalysis) 𝕏
- Focus: Semiconductors, AI infrastructure, compute economics
- Strength: Best deep technical and financial analysis of the hardware layer
- Caveat: Expensive subscription for full access; chip-focused rather than AI-applications-focused; sometimes annoying shitposting on their personal Twitter accounts
Nathan Lambert (Interconnects AI) 𝕏
- Focus: Open source models, technical deep dives
- Strength: Real ML practitioner; also a frequent guest writer and speaker in collaboration with others listed here
Epoch AI 𝕏
- Focus: AI trends, compute measurements, quantitative forecasting
- Strength: Rigorous data collection on training compute, model capabilities, timelines
- Focus: Evaluating capabilities & risks in frontier models
- Strength: Actually runs evals that labs and governments reference; technically credible
Simon Willison (simonwillison.net) 𝕏
- Focus: Practical LLM usage, tools, prompt engineering, open source AI
- Strength: Builder who documents everything; his blog is a searchable reference
Who to Ignore (and Why)
Anyone referencing "Terminator," "Skynet," "HAL 9000," etc. - Lazy sci-fi comparisons are a clear indicator that the author a) doesn't understand the subject and b) is only interested in cheap clicks. We are now several years into the next industrial revolution. Media figures need to catch up on their homework.
Sam Altman / Elon Musk / Venture Capitalists (especially a16z) - Their public statements optimize for regulatory outcomes, stock prices, political access, etc. Reading them as information sources rather than political/financial actors is a category error.
Gary Marcus 𝕏 - He has made repeated, empirical bets that AI is overhyped and has been consistently wrong. Unfortunately, he does not understand the technology at a deep level and is disproportionately featured in the media due to bothsidesism biases.
e/acc (Effective Accelerationism) - This Twitter aesthetic wants to accelerate AI development as fast as possible, but it’s vibes masquerading as philosophy: lazy intellectual cover for "move fast and break things" without engaging with anything of substance.
Eliezer Yudkowsky 𝕏 - He was foundational to AI safety thinking, but his predictions have frequently been disproven, and he is not an expert in the architecture of the models at the leading labs. His "we're all going to die" framing creates paralysis, not action, in the policy world.
Emily Bender / Timnit Gebru / Alex Hanna (DAIR Institute) - This group gained some fame a few years ago by claiming LLMs are just statistical pattern matching and are overhyped. They have been consistently wrong on the technology while introducing toxicity and identity politics into many of the conversations they engage in. They do a disservice to the causes they advocate.
Ed Zitron - This guy is just doing inflammatory dunking optimized for engagement.
