<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
    <channel>
        <generator>RedCircle VERIFY_TOKEN_b78540f3-d05f-4269-bdb3-e22c1aca55ed  -- Rendered At Mon, 16 Mar 2026 15:26:49 &#43;0000</generator>
        <title>Ethical Machines</title>
        <link>https://redcircle.com/shows/ethical-machines</link>
        <language>en-US</language>
        <copyright>All rights reserved.</copyright>
        <itunes:author>Reid Blackman</itunes:author>
        <itunes:summary>I have to roll my eyes at the constant clickbait headlines on technology and ethics.

If we want to get anything done, we need to go deeper.

That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business.

If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.</itunes:summary>
        <podcast:guid>b78540f3-d05f-4269-bdb3-e22c1aca55ed</podcast:guid>
        
        <description><![CDATA[<p>I have to roll my eyes at the constant clickbait headlines on technology and ethics.  </p><p>If we want to get anything done, we need to go deeper. </p><p>That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. </p><p>If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.</p>]]></description>
        
        <itunes:type>episodic</itunes:type>
        <podcast:locked>no</podcast:locked>
        <itunes:owner>
            <itunes:name>Reid Blackman</itunes:name>
            <itunes:email>reid@virtueconsultants.com</itunes:email>
        </itunes:owner>
        
        <itunes:image href="https://media.redcircle.com/images/2024/6/26/19/1bb83b2f-9209-4deb-89a0-20692e26725a_5c-455a-86d1-b5acc3b06577_all_franklin_-_2-100.jpg"/>
        
        
        
            
        <itunes:category text="Technology"/>
        <itunes:category text="Business">
            <itunes:category text="Management"/>
        </itunes:category>
        <itunes:category text="Education">
            <itunes:category text="How To"/>
        </itunes:category>
        

        
        <itunes:explicit>no</itunes:explicit>
        
        
        
        
        
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How AI Robs Us of Meaning</itunes:title>
                <title>How AI Robs Us of Meaning</title>

                <itunes:episode>21</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Much of what we find fulfilling in life isn’t the having but the doing. It’s the process of working through a problem, taking action, doing what needs to be done. But that meaning may be on the verge of being greatly diminished; so contends my guest, Sven Nyholm, Professor of Ethics of AI at LMU Munich. I push back in various ways: how real and/or imminent is this threat, really? And who is responsible for staving it off?</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Much of what we find fulfilling in life isn’t the having but the doing. It’s the process of working through a problem, taking action, doing what needs to be done. But that meaning may be on the verge of being greatly diminished; so contends my guest, Sven Nyholm, Professor of Ethics of AI at LMU Munich. I push back in various ways: how real and/or imminent is this threat, really? And who is responsible for staving it off?&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="49013237" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/bda86648-bef3-4c30-837a-99240739cbee/stream.mp3"/>
                
                <guid isPermaLink="false">db97d2be-6d0d-4623-9850-94e49e73d88b</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/bda86648-bef3-4c30-837a-99240739cbee</link>
                <pubDate>Thu, 12 Mar 2026 04:05:51 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/3/11/18/fa7146cb-35b2-4749-8a87-1580539174ef_rm_em_s02_thumbnails.jpg"/>
                <itunes:duration>3063</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2026/3/11/19/cfe06cf0-c249-4f61-8339-ff6acd71aaab_3921721366.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Should Anthropic Have Allowed Autonomous Weapons Systems?</itunes:title>
                <title>Should Anthropic Have Allowed Autonomous Weapons Systems?</title>

                <itunes:episode>20</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Anthropic just got the axe from the U.S. government for refusing to allow the Department of Defense (War?) to use Claude for autonomous weapons systems and mass surveillance. For the first 15 minutes of this conversation with Michael Horowitz - professor at UPenn, Senior Fellow for Technology and Innovation at the Council on Foreign Relations, and formerly Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities and Director of the Emerging Capabilities Policy Office at the DoD - we talk explicitly about Anthropic vs. the U.S. government. Why Anthropic did it, why this is more about personality than policy, and more. In the remaining 45 minutes you’ll hear a replay of an episode Michael and I did back in October, in which Michael defends the functional and ethical importance of potentially using AI for autonomous weapons systems.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Anthropic just got the axe from the U.S. government for refusing to allow the Department of Defense (War?) to use Claude for autonomous weapons systems and mass surveillance. For the first 15 minutes of this conversation with Michael Horowitz - professor at UPenn, Senior Fellow for Technology and Innovation at the Council on Foreign Relations, and formerly Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities and Director of the Emerging Capabilities Policy Office at the DoD - we talk explicitly about Anthropic vs. the U.S. government. Why Anthropic did it, why this is more about personality than policy, and more. In the remaining 45 minutes you’ll hear a replay of an episode Michael and I did back in October, in which Michael defends the functional and ethical importance of potentially using AI for autonomous weapons systems.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="66512352" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/2dafb87b-4dfb-4f0e-8cd4-6ef5f9b15ec6/stream.mp3"/>
                
                <guid isPermaLink="false">bed39b5c-7035-4000-8bf0-ddf3996d1ab2</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/2dafb87b-4dfb-4f0e-8cd4-6ef5f9b15ec6</link>
                <pubDate>Thu, 05 Mar 2026 04:10:39 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/3/5/5/f55614ad-957d-400b-a53f-f85746ec0b4b_rm_em_s02_thumbnails.jpg"/>
                <itunes:duration>4157</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How an Attorney Leads Responsible AI Practices</itunes:title>
                <title>How an Attorney Leads Responsible AI Practices</title>

                <itunes:episode>19</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>What does it look like for a non-technologist to lead Responsible AI practices at a Fortune 500 company? Today I talk with James Desir, senior corporate counsel at Progressive Insurance and a key leader in their RAI efforts. We discuss how he found his way into this space, how he persuades data scientists to treat him as a thought partner instead of a blocker, and how to demonstrate the ROI of RAI to fellow executives. We also talk about the increasing complexity of AI and how a small RAI team can handle the scale of the problem.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;What does it look like for a non-technologist to lead Responsible AI practices at a Fortune 500 company? Today I talk with James Desir, senior corporate counsel at Progressive Insurance and a key leader in their RAI efforts. We discuss how he found his way into this space, how he persuades data scientists to treat him as a thought partner instead of a blocker, and how to demonstrate the ROI of RAI to fellow executives. We also talk about the increasing complexity of AI and how a small RAI team can handle the scale of the problem.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="44778893" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/f755501e-26b9-4b8b-b069-fc8d43f238aa/stream.mp3"/>
                
                <guid isPermaLink="false">09339d51-c35f-46f0-b589-f9db935a53ce</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/f755501e-26b9-4b8b-b069-fc8d43f238aa</link>
                <pubDate>Thu, 26 Feb 2026 05:10:12 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/2/25/19/6b0cc51b-0808-4424-97d7-4f533f242f96_rm_em_s02_thumbnails.jpg"/>
                <itunes:duration>2798</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>We May Have Only 2-3 Years Until AI Dominates Us</itunes:title>
                <title>We May Have Only 2-3 Years Until AI Dominates Us</title>

                <itunes:episode>18</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>I tend to dismiss claims about existential risks from AI, but my guest thinks I - or rather we - need to take it very seriously. His name is Olle Häggström and he’s a professor of mathematical statistics at Chalmers University of Technology in Sweden, and a member of the Royal Swedish Academy of Sciences. He argues that if AI becomes more intelligent than us, and it will, then it will dominate us in much the way we dominate other species. But it’s not too late! We can and we must, he argues, change the trajectory of how we develop AI.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;I tend to dismiss claims about existential risks from AI, but my guest thinks I - or rather we - need to take it very seriously. His name is Olle Häggström and he’s a professor of mathematical statistics at Chalmers University of Technology in Sweden, and a member of the Royal Swedish Academy of Sciences. He argues that if AI becomes more intelligent than us, and it will, then it will dominate us in much the way we dominate other species. But it’s not too late! We can and we must, he argues, change the trajectory of how we develop AI.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="44238053" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/ab33b989-faad-4307-8238-4df99b34cb57/stream.mp3"/>
                
                <guid isPermaLink="false">f6fe9439-23db-448e-a461-f12df457c0d2</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/ab33b989-faad-4307-8238-4df99b34cb57</link>
                <pubDate>Thu, 19 Feb 2026 05:05:25 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/2/21/16/260ee119-be2b-45b2-9f13-67efaf1d34e4_rm_em_s02_thumbnails.jpg"/>
                <itunes:duration>2764</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Let AI Do the Writing</itunes:title>
                <title>Let AI Do the Writing</title>

                <itunes:episode>17</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>We hear that “writing is thinking.” We believe that teaching all students to be great writers is important. All hail the essay! But my guest, philosopher Luciano Floridi, professor and Founding Director of the Digital Ethics Center, sees things differently. Plenty of great thinkers were not also great writers. We should prioritize thoughtful and rigorous dialogue over the written word. As for writing, perhaps it should be considered akin to a musical instrument; not everyone has to learn the violin…</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;We hear that “writing is thinking.” We believe that teaching all students to be great writers is important. All hail the essay! But my guest, philosopher Luciano Floridi, professor and Founding Director of the Digital Ethics Center, sees things differently. Plenty of great thinkers were not also great writers. We should prioritize thoughtful and rigorous dialogue over the written word. As for writing, perhaps it should be considered akin to a musical instrument; not everyone has to learn the violin…&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="48696424" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/5c09d399-4b54-4dff-8353-0832611526bc/stream.mp3"/>
                
                <guid isPermaLink="false">25f0c6e9-4df1-4f2d-9936-01b4b49ef5c0</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/5c09d399-4b54-4dff-8353-0832611526bc</link>
                <pubDate>Thu, 12 Feb 2026 05:30:42 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/2/12/3/7979cf85-bf6d-416e-82d5-43b084776e10_rm_em_s02_thumbnails__10_.jpg"/>
                <itunes:duration>3043</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>What AI Risk Needs to Learn From Other Industries</itunes:title>
                <title>What AI Risk Needs to Learn From Other Industries</title>

                <itunes:episode>16</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>We’ve been doing risk assessments in lots of industries for decades. In financial services, cybersecurity, and aviation, for instance, there are well-established ways of thinking about what the risks are and how to mitigate them at both a microscopic and a macroscopic level. My guest today, Jason Stanley of ServiceNow, is probably the smartest person I’ve talked to on this topic. We discuss the three levels of AI risk and the lessons he draws from those other industries that we crucially need in the AI space.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;We’ve been doing risk assessments in lots of industries for decades. In financial services, cybersecurity, and aviation, for instance, there are well-established ways of thinking about what the risks are and how to mitigate them at both a microscopic and a macroscopic level. My guest today, Jason Stanley of ServiceNow, is probably the smartest person I’ve talked to on this topic. We discuss the three levels of AI risk and the lessons he draws from those other industries that we crucially need in the AI space.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="55757844" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/2cedc232-7dbe-4e08-bc5c-7386f20a71c4/stream.mp3"/>
                
                <guid isPermaLink="false">1f66de6d-f8ff-4331-b5cf-ca3dd18c1ae7</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/2cedc232-7dbe-4e08-bc5c-7386f20a71c4</link>
                <pubDate>Thu, 05 Feb 2026 04:53:50 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/2/5/4/073376fe-c890-4f88-a5c9-f75748750773_rm_em_s02_thumbnails__10_.jpg"/>
                <itunes:duration>3484</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Can AI Do Ethics?</itunes:title>
                <title>Can AI Do Ethics?</title>

                <itunes:episode>15</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Many researchers in AI think we should make AI capable of ethical inquiry. We can’t teach it all the ethical rules; that’s impossible. Instead, we should teach it to ethically reason, just as we do children. But my guest thinks this strategy makes a number of controversial assumptions, including how ethics works and what actually is right and wrong. <em>From the best of season two. </em></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Many researchers in AI think we should make AI capable of ethical inquiry. We can’t teach it all the ethical rules; that’s impossible. Instead, we should teach it to ethically reason, just as we do children. But my guest thinks this strategy makes a number of controversial assumptions, including how ethics works and what actually is right and wrong. &lt;em&gt;From the best of season two. &lt;/em&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="42129449" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/598a99de-12f3-4c06-8a1b-46fdffb988e1/stream.mp3"/>
                
                <guid isPermaLink="false">49835f8c-2cd1-40b6-82ed-f469e0a8cbdc</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/598a99de-12f3-4c06-8a1b-46fdffb988e1</link>
                <pubDate>Thu, 29 Jan 2026 05:00:10 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/1/28/17/3fe312de-aa15-46ba-961a-2aa0aba63e90_rm_em_s02_thumbnails__9_.jpg"/>
                <itunes:duration>2633</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2026/1/28/17/66377036-c674-4b93-b457-cfcbcbc68fbc_2680284961.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI is Culturally Ignorant</itunes:title>
                <title>AI is Culturally Ignorant</title>

                <itunes:episode>14</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>AI is deployed across the globe. But how sensitive is it to the cultural contexts - ethics, norms, laws and regulations - in which it finds itself? My guest today, Rocky Clancy of Virginia Tech, argues that AI is too Western-focused. We need to engage in empirical research so that AI is developed in a way that comports with the people it interacts with, wherever they are.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;AI is deployed across the globe. But how sensitive is it to the cultural contexts - ethics, norms, laws and regulations - in which it finds itself? My guest today, Rocky Clancy of Virginia Tech, argues that AI is too Western-focused. We need to engage in empirical research so that AI is developed in a way that comports with the people it interacts with, wherever they are.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="39874977" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/bce88f56-dc02-4a1a-8b2c-0b97b37c2eb3/stream.mp3"/>
                
                <guid isPermaLink="false">b78c6471-3791-45c7-9d72-f2bb4d35e633</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/bce88f56-dc02-4a1a-8b2c-0b97b37c2eb3</link>
                <pubDate>Thu, 22 Jan 2026 05:00:30 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/1/21/14/36401df4-75c0-4949-ad28-924898937902_rm_em_s02_thumbnails__8_.jpg"/>
                <itunes:duration>2492</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>When Metrics Make Us Happy, or Miserable</itunes:title>
                <title>When Metrics Make Us Happy, or Miserable</title>

                <itunes:episode>13</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>When we’re playing a game or a sport, we like being measured. We want a high score, we want to beat the game. Measurement makes it fun. But in work, being measured, hitting our numbers, can make us miserable. Why does measuring ourselves sometimes enhance and sometimes undermine our happiness and sense of fulfillment? That’s the question C. Thi Nguyen tackles in his new book “The Score: How to Stop Playing Somebody Else’s Game.” Thi is one of the most interesting philosophers I know - enjoy!</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;When we’re playing a game or a sport, we like being measured. We want a high score, we want to beat the game. Measurement makes it fun. But in work, being measured, hitting our numbers, can make us miserable. Why does measuring ourselves sometimes enhance and sometimes undermine our happiness and sense of fulfillment? That’s the question C. Thi Nguyen tackles in his new book “The Score: How to Stop Playing Somebody Else’s Game.” Thi is one of the most interesting philosophers I know - enjoy!&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="51820251" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/9bd45288-28b5-4d95-93d3-f18a391a226f/stream.mp3"/>
                
                <guid isPermaLink="false">90469660-afd5-4c90-a021-89ff5d4ff1d7</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/9bd45288-28b5-4d95-93d3-f18a391a226f</link>
                <pubDate>Thu, 15 Jan 2026 05:30:09 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/1/15/3/b8521789-1c9b-4685-a69e-951bc3101506_rm_em_s02_thumbnails__7_.jpg"/>
                <itunes:duration>3238</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2026/1/15/5/fcd3e352-d79b-4e39-90a8-56e355e6ddeb_2166970142.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>We Need International Agreement on AI Standards</itunes:title>
                <title>We Need International Agreement on AI Standards</title>

                <itunes:episode>12</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>When it comes to the foundation models that are created by the likes of Google, Anthropic, and OpenAI, we need to treat them as utility providers. So argues my guest, Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany. She further argues that the only way we can move forward safely is to create a transnational coalition of the willing that creates and enforces ethical and safety standards for AI. Why such a coalition is necessary, who might be part of it, how plausible it is that we can create such a thing, and more are covered in our conversation.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;When it comes to the foundation models that are created by the likes of Google, Anthropic, and OpenAI, we need to treat them as utility providers. So argues my guest, Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany. She further argues that the only way we can move forward safely is to create a transnational coalition of the willing that creates and enforces ethical and safety standards for AI. Why such a coalition is necessary, who might be part of it, how plausible it is that we can create such a thing, and more are covered in our conversation.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="45962971" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/ea73acbc-4ada-4fda-b576-f0e097a043cb/stream.mp3"/>
                
                <guid isPermaLink="false">a25d44b9-60fb-4276-a1b6-9060db40fe5e</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/ea73acbc-4ada-4fda-b576-f0e097a043cb</link>
                <pubDate>Thu, 08 Jan 2026 06:02:39 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/1/8/6/8312741d-8f52-4c5f-8fad-e87427873e1e_rm_em_s02_thumbnails__7_.jpg"/>
                <itunes:duration>2872</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2026/1/8/8/1b7586ae-c492-4961-8010-77f390541d41_1623596839.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Rewriting History with AI</itunes:title>
                <title>Rewriting History with AI</title>

                <itunes:episode>10</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>What happens when students turn to LLMs to learn about history? My guest, Nuno Moniz, Associate Research Professor at the University of Notre Dame, argues this can ultimately lead to mass confusion, which in turn can lead to tragic conflicts. There are at least three sources of that confusion: AI hallucinations, misinformation spreading, and biased interpretations of history getting the upper hand. Exactly how bad this can get and what we’re supposed to do about it isn’t obvious, but Nuno has some suggestions.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;What happens when students turn to LLMs to learn about history? My guest, Nuno Moniz, Associate Research Professor at the University of Notre Dame, argues this can ultimately lead to mass confusion, which in turn can lead to tragic conflicts. There are at least three sources of that confusion: AI hallucinations, misinformation spreading, and biased interpretations of history getting the upper hand. Exactly how bad this can get and what we’re supposed to do about it isn’t obvious, but Nuno has some suggestions.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="49463379" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/163d11ca-a8ef-453b-b01b-4d15bff54bf9/stream.mp3"/>
                
                <guid isPermaLink="false">861937d1-52eb-4e21-9191-f4b33f532c7a</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/163d11ca-a8ef-453b-b01b-4d15bff54bf9</link>
                <pubDate>Thu, 18 Dec 2025 07:00:13 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/12/17/17/6a6e142c-1a90-45f4-9e3c-916c179654da_rm_em_s02_thumbnails__7_.jpg"/>
                <itunes:duration>3091</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/12/17/17/051c1fac-9978-42bb-aacc-0d8651d33d74_3524580555.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI is Not a Normal Technology</itunes:title>
                <title>AI is Not a Normal Technology</title>

                <itunes:episode>10</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>When thinking about AI replacing people, we usually look to the extremes: utopia and dystopia. My guest today, Finn Morehouse, a research fellow at Forethought, a nonprofit research organization, thinks that neither of these extremes is the most likely. In fact, he thinks that one reason AI defies prediction is that it’s not a normal technology. What’s not normal about it? It’s not merely in the business of multiplying productivity, he says, but of replacing the standard bottleneck to greater productivity: humans.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;When thinking about AI replacing people, we usually look to the extremes: utopia and dystopia. My guest today, Finn Morehouse, a research fellow at Forethought, a nonprofit research organization, thinks that neither of these extremes is the most likely. In fact, he thinks that one reason AI defies prediction is that it’s not a normal technology. What’s not normal about it? It’s not merely in the business of multiplying productivity, he says, but of replacing the standard bottleneck to greater productivity: humans.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="44521012" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/6458df64-8f66-4c2e-8460-46352568bd84/stream.mp3"/>
                
                <guid isPermaLink="false">abf54353-b1d8-454a-896b-4eeb680cbc65</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/6458df64-8f66-4c2e-8460-46352568bd84</link>
                <pubDate>Thu, 11 Dec 2025 07:00:37 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/12/10/20/35ed0d14-4d49-4db5-810f-83d5fb6d186b_rm_em_s02_thumbnails__6_.jpg"/>
                <itunes:duration>2782</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>We Are All Responsible for AI, Part 2</itunes:title>
                <title>We Are All Responsible for AI, Part 2</title>

                <itunes:episode>9</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>In the last episode, Brian Wong argued that there’s a “gap” between the harms that developing and using AI causes, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In Part 2, Brian explains how he thinks about what responsibility is and what implications it has for our social responsibilities.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;In the last episode, Brian Wong argued that there’s a “gap” between the harms that developing and using AI causes, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In Part 2, Brian explains how he thinks about what responsibility is and what implications it has for our social responsibilities.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="56252290" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/da3559df-b302-4e41-a338-9af5699c1081/stream.mp3"/>
                
                <guid isPermaLink="false">df8f4d3e-8977-4716-a767-b09b32a65572</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/da3559df-b302-4e41-a338-9af5699c1081</link>
                <pubDate>Thu, 04 Dec 2025 07:00:21 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/12/3/13/dc3fee82-cf15-4b32-9030-a32710400ee2_rm_em_s02_thumbnails__5_.jpg"/>
                <itunes:duration>3515</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/12/3/13/37a9b9e6-10a1-4ee8-8204-b75b3984e2cd_4099197651.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>We Are All Responsible for AI, Part 1</itunes:title>
                <title>We Are All Responsible for AI, Part 1</title>

                <itunes:episode>8</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>We’re all connected to how AI is developed and used across the world. And that connection, my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, argues, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geo-political risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits” - ways in which it looks like no one is accountable for the harms and people are not compensated after they’re harmed. There’s a lot here - buckle up!</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;We’re all connected to how AI is developed and used across the world. And that connection, my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, argues, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geo-political risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits” - ways in which it looks like no one is accountable for the harms and people are not compensated after they’re harmed. There’s a lot here - buckle up!&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="61823268" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/96cb5de1-9208-47f1-b1ed-439139a36f44/stream.mp3"/>
                
                <guid isPermaLink="false">7e8255c6-5f5b-4da5-b2ef-f4ab0873dbcc</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/96cb5de1-9208-47f1-b1ed-439139a36f44</link>
                <pubDate>Thu, 20 Nov 2025 07:43:51 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/11/20/7/adbba969-5369-4530-8147-e35eaf03ed72_rm_em_s02_thumbnails__4_.jpg"/>
                <itunes:duration>3863</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Orchestrating Ethics</itunes:title>
                <title>Orchestrating Ethics</title>

                <itunes:episode>7</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>One company builds the LLM. Another company uses that model for their purposes. How do we know that the ethical standards of the first one match the ethical standards of the second one? How does the second company know they are using a technology that is commensurate with their own ethical standards? This is a conversation I had with David Danks, Professor of Philosophy and Data Science at UCSD, almost 3 years ago. But the conversation is just as pressing now as it was then. In fact, given the widespread adoption of AI that’s built by a handful of companies, it’s even more important now that we get this right.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;One company builds the LLM. Another company uses that model for their purposes. How do we know that the ethical standards of the first one match the ethical standards of the second one? How does the second company know they are using a technology that is commensurate with their own ethical standards? This is a conversation I had with David Danks, Professor of Philosophy and Data Science at UCSD, almost 3 years ago. But the conversation is just as pressing now as it was then. In fact, given the widespread adoption of AI that’s built by a handful of companies, it’s even more important now that we get this right.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="42496417" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/d8b3d349-3de6-454b-882a-0e6cfc3c6182/stream.mp3"/>
                
                <guid isPermaLink="false">dd06c8dc-f72b-4723-9d40-650216954d9f</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/d8b3d349-3de6-454b-882a-0e6cfc3c6182</link>
                <pubDate>Thu, 13 Nov 2025 07:00:33 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/11/12/16/ded5d4cd-596d-409e-8e85-e007ddc32386_rm_em_s02_thumbnails__3_.jpg"/>
                <itunes:duration>2656</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Military is the Safest Place to Test AI</itunes:title>
                <title>The Military is the Safest Place to Test AI</title>

                <itunes:episode>6</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p><span>How can one of the highest-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, currently Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and that all of this happens against a backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger, political issues, including China’s use of military AI.</span></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;How can one of the highest-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, currently Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and that all of this happens against a backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger, political issues, including China’s use of military AI.&lt;/span&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="43837649" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/e3c5e0af-d57e-48d1-a6ff-03430c4c97c0/stream.mp3"/>
                
                <guid isPermaLink="false">05e3a23f-eeca-4522-a928-938f8022cf10</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/e3c5e0af-d57e-48d1-a6ff-03430c4c97c0</link>
                <pubDate>Thu, 06 Nov 2025 06:00:31 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/11/5/9/ba788572-96a7-4bce-a3db-423c4d16a3b9_rm_em_s02_thumbnails__3_.jpg"/>
                <itunes:duration>2739</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/11/5/9/53405d9b-30a4-4eb8-818f-93ff8cee9866_2721611383.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Should We Make Digital Copies of People?</itunes:title>
                <title>Should We Make Digital Copies of People?</title>

                <itunes:episode>5</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions about what can be done with your digital identity in your will? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise other longer-standing philosophical issues: can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions about what can be done with your digital identity in your will? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise other longer-standing philosophical issues: can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="44254772" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/17cb82bb-1658-4169-99db-7a94570c1044/stream.mp3"/>
                
                <guid isPermaLink="false">ef9d2090-e067-4bc4-8705-be45aa961334</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/17cb82bb-1658-4169-99db-7a94570c1044</link>
                <pubDate>Thu, 30 Oct 2025 05:00:34 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/10/29/14/11dfa69f-bffa-479d-a244-e4990f377c78_rm_em_s02_thumbnails__3_.jpg"/>
                <itunes:duration>2765</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/10/29/17/b15aec8e-1cf8-4867-8e8c-08290a04ce71_2945782032.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How Society Bears AI’s Costs</itunes:title>
                <title>How Society Bears AI’s Costs</title>

                <itunes:episode>4</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>AI is leading the economic charge. In fact, without the massive investments in AI, our economy would look a lot worse right now. But what are the social and political costs that we incur? My guest, Karen Yeung, a professor at Birmingham Law School and School of Computer Science, argues that investments in AI are consolidating power while disempowering the rest of society. Our individual autonomy and our collective cohesion are simultaneously eroding. We need to push back - but how? And on what grounds? To what extent is the problem our socio-economic system or our culture or government (in)action? These questions and more in a particularly fun episode (for me, anyway).</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;AI is leading the economic charge. In fact, without the massive investments in AI, our economy would look a lot worse right now. But what are the social and political costs that we incur? My guest, Karen Yeung, a professor at Birmingham Law School and School of Computer Science, argues that investments in AI are consolidating power while disempowering the rest of society. Our individual autonomy and our collective cohesion are simultaneously eroding. We need to push back - but how? And on what grounds? To what extent is the problem our socio-economic system or our culture or government (in)action? These questions and more in a particularly fun episode (for me, anyway).&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="38608143" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/d9533f3f-f7cf-4cdc-94da-c04513e5c2d8/stream.mp3"/>
                
                <guid isPermaLink="false">d50600f6-1fd0-4b86-ac09-8a50fe0b4bd1</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/d9533f3f-f7cf-4cdc-94da-c04513e5c2d8</link>
                <pubDate>Thu, 23 Oct 2025 05:00:34 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/10/22/17/a37d88c6-6a9e-43dc-adab-656bb8d54352_rm_em_s02_thumbnails__3_.jpg"/>
                <itunes:duration>2413</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/10/22/18/96253a23-7d9c-4464-a3bc-c8ea63a6231d_3103772839.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How Should We Teach Ethics to Computer Science Majors?</itunes:title>
                <title>How Should We Teach Ethics to Computer Science Majors?</title>

                <itunes:episode>3</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>The engineering and data science students of today are tomorrow’s tech innovators. If we want them to develop ethically sound technology, they’d better have a good grip on what ethics is all about. But how should we teach them? The same way we teach ethics in philosophy? Or is something different needed, given the kinds of organizational forces they’ll find themselves subject to once they’re working? Steven Kelts, a lecturer in Princeton’s School of Public and International Affairs and in the Department of Computer Science, researches this subject and teaches those very students himself. We explore what his research and his experience show us about how we can best train our computer scientists to take the welfare of society into their minds and their work.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;The engineering and data science students of today are tomorrow’s tech innovators. If we want them to develop ethically sound technology, they’d better have a good grip on what ethics is all about. But how should we teach them? The same way we teach ethics in philosophy? Or is something different needed, given the kinds of organizational forces they’ll find themselves subject to once they’re working? Steven Kelts, a lecturer in Princeton’s School of Public and International Affairs and in the Department of Computer Science, researches this subject and teaches those very students himself. We explore what his research and his experience show us about how we can best train our computer scientists to take the welfare of society into their minds and their work.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="53375059" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/b065df9c-4d89-4d86-911e-cd5b70fed8b2/stream.mp3"/>
                
                <guid isPermaLink="false">f3a4bdc7-135b-4f9f-897d-6ca94dc60af4</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/b065df9c-4d89-4d86-911e-cd5b70fed8b2</link>
                <pubDate>Thu, 16 Oct 2025 04:05:02 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/10/15/17/bf491e4d-50de-4bb3-a51a-e7124dae01d5_rm_em_s02_thumbnails__2_.jpg"/>
                <itunes:duration>3335</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/10/15/18/1add708c-bf05-4ab4-aae0-2422e99eeca1_1031835850.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>In Defense of Killer Robots</itunes:title>
                <title>In Defense of Killer Robots</title>

                
                
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed and/or blown up. Except…maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed and/or blown up. Except…maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="48763715" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/c626eb8c-b883-4894-9d67-044b627a0c34/stream.mp3"/>
                
                <guid isPermaLink="false">c71fe35e-02ef-45fc-bb0b-00ad81f5d6b3</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/c626eb8c-b883-4894-9d67-044b627a0c34</link>
                <pubDate>Thu, 09 Oct 2025 05:00:56 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/10/8/16/c34ac1ae-1e97-409f-9929-efb22039b69a_rm_em_s02_thumbnails__2_.jpg"/>
                <itunes:duration>3047</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/10/8/17/947cd273-f4c2-4de9-8709-f3fc46ecdd25_1421898908.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Live Recording: Is AI Creating a Sadder Future?</itunes:title>
                <title>Live Recording: Is AI Creating a Sadder Future?</title>

                <itunes:episode>1</itunes:episode>
                <itunes:season>3</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>In August, I recorded a discussion with David Ryan Polgar, Founder of the nonprofit All Tech Is Human, in front of an audience of around 200 people. We talked about how AI-mediated experiences make us feel sadder, how the tech companies don’t really care about this, and how people can organize to push those companies to take our long-term well-being more seriously.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;In August, I recorded a discussion with David Ryan Polgar, Founder of the nonprofit All Tech Is Human, in front of an audience of around 200 people. We talked about how AI-mediated experiences make us feel sadder, how the tech companies don’t really care about this, and how people can organize to push those companies to take our long-term well-being more seriously.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="37871281" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/e66e51f1-ed66-4f7c-bf8d-5eac771d547e/stream.mp3"/>
                
                <guid isPermaLink="false">52ddbb4a-def8-4c79-a518-616df9390b97</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/e66e51f1-ed66-4f7c-bf8d-5eac771d547e</link>
                <pubDate>Wed, 01 Oct 2025 05:30:35 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/10/1/10/8b3f3abd-9f6e-4ca7-bead-c4541e088423_rm_em_s02_thumbnails__1_.jpg"/>
                <itunes:duration>2366</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Season finale: A New Ethics for AI Ethics?</itunes:title>
                <title>Season finale: A New Ethics for AI Ethics?</title>

                <itunes:episode>57</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Wendell Wallach, who has been in the AI ethics game longer than just about anyone and has several books to his name on the subject, talks about his dissatisfaction with talk of “value alignment,” why traditional moral theories are not helpful for doing AI ethics, and how we can do better.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Wendell Wallach, who has been in the AI ethics game longer than just about anyone and has several books to his name on the subject, talks about his dissatisfaction with talk of “value alignment,” why traditional moral theories are not helpful for doing AI ethics, and how we can do better.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="40675787" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/abe5e70a-43df-4f4c-86d5-1e7ae1ff5ac4/stream.mp3"/>
                
                <guid isPermaLink="false">366ab447-1fe9-49f9-97c7-cd86d7b1dcf7</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/abe5e70a-43df-4f4c-86d5-1e7ae1ff5ac4</link>
                <pubDate>Thu, 31 Jul 2025 04:54:47 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/7/31/5/966ec51c-10f4-4f51-84de-23098c689e78_rm_em_s02_thumbnails__1_.jpg"/>
                <itunes:duration>2542</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/7/31/5/e284abef-1295-4351-8acc-57bd4975969f_1448776663.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Is AI a Person or a Thing… or Neither?</itunes:title>
                <title>Is AI a Person or a Thing… or Neither?</title>

                <itunes:episode>56</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons, and there seems to be good reason for that. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person, Thing, Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons, and there seems to be good reason for that. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person, Thing, Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="45527875" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/d1093b21-8fdd-4e3f-b054-5845a7cff305/stream.mp3"/>
                
                <guid isPermaLink="false">13d4abaa-0bd3-4acb-b574-31d18664a05c</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/d1093b21-8fdd-4e3f-b054-5845a7cff305</link>
                <pubDate>Thu, 17 Jul 2025 05:14:43 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/7/17/5/a97e9eb0-a9ec-4840-9c59-40a943c58457_rm_em_s02_thumbnails__1_.jpg"/>
                <itunes:duration>2845</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How Do You Control Unpredictable AI?</itunes:title>
                <title>How Do You Control Unpredictable AI?</title>

                <itunes:episode>55</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>LLMs behave in unpredictable ways. That’s a gift and a curse: it both enables their “creativity” and makes them hard to control (a bit like a real artist, actually). In this episode, we focus on the cyber risks of AI with Walter Haydock, a former national security policy advisor and the Founder of StackAware.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;LLMs behave in unpredictable ways. That’s a gift and a curse: it both enables their “creativity” and makes them hard to control (a bit like a real artist, actually). In this episode, we focus on the cyber risks of AI with Walter Haydock, a former national security policy advisor and the Founder of StackAware.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="49581662" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/4e715a73-f334-4611-8eb5-669210955763/stream.mp3"/>
                
                <guid isPermaLink="false">2faf9147-c4cc-46cc-ba89-4ca3ac45d2fa</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/4e715a73-f334-4611-8eb5-669210955763</link>
                <pubDate>Thu, 10 Jul 2025 11:18:17 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/7/10/11/bb162cfe-2f4f-412b-aad9-930e25dd442c_rm_em_s02_thumbnails.jpg"/>
                <itunes:duration>3098</itunes:duration>
                <podcast:transcript url="https://s3.us-east-2.amazonaws.com/pod-public-transcripts/2025/7/10/11/dd83005b-8374-4348-88fd-e7866a17bdab_37184150.vtt" type="text/vtt" language="en" />
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The AI Job Interviewer</itunes:title>
                <title>The AI Job Interviewer</title>

                <itunes:episode>54</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>AI can stand between you and getting a job. That means for you to make money and support yourself and your family, you may have to convince an AI that you’re the right person for the job. And yet, AI can be biased and fail in all sorts of ways. This is a conversation with Hilke Schellmann, investigative journalist and author of “The Algorithm,” along with her colleague Mona Sloane, Ph.D., an Assistant Professor of Data Science and Media Studies at the University of Virginia. We discuss Hilke’s book and all the ways things go sideways when people are looking for work in the AI era. <em>Originally aired in season one. </em></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;AI can stand between you and getting a job. That means for you to make money and support yourself and your family, you may have to convince an AI that you’re the right person for the job. And yet, AI can be biased and fail in all sorts of ways. This is a conversation with Hilke Schellmann, investigative journalist and author of “The Algorithm,” along with her colleague Mona Sloane, Ph.D., an Assistant Professor of Data Science and Media Studies at the University of Virginia. We discuss Hilke’s book and all the ways things go sideways when people are looking for work in the AI era. &lt;em&gt;Originally aired in season one. &lt;/em&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="40539115" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/92983d00-ce00-452e-9380-5cebc9e451da/stream.mp3"/>
                
                <guid isPermaLink="false">7628439a-e46e-4cab-8066-fa0c8471abcb</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/92983d00-ce00-452e-9380-5cebc9e451da</link>
                <pubDate>Thu, 26 Jun 2025 05:22:53 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/6/26/5/f73675d4-999f-4890-bd9e-ed65957d9c6c_rm_em_s02_thumbnails__14_.jpg"/>
                <itunes:duration>2533</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Accuracy Isn’t Enough</itunes:title>
                <title>Accuracy Isn’t Enough</title>

                <itunes:episode>53</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>We want accurate AI, right? As long as it’s accurate, we’re all good? My guest, Will Landecker, CEO of Accountable Algorithm, explains why accuracy is just one metric among many to aim for. In fact, we have to make tradeoffs across things like accuracy, relevance, and normative (including ethical) considerations in order to get a usable model. We also cover whether explainability is important, whether it’s even on the menu, and the risks of multi-agentic AI systems.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;We want accurate AI, right? As long as it’s accurate, we’re all good? My guest, Will Landecker, CEO of Accountable Algorithm, explains why accuracy is just one metric among many to aim for. In fact, we have to make tradeoffs across things like accuracy, relevance, and normative (including ethical) considerations in order to get a usable model. We also cover whether explainability is important, whether it’s even on the menu, and the risks of multi-agentic AI systems.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="47418723" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/5695837c-636c-4fc5-ae88-b6037b2603c1/stream.mp3"/>
                
                <guid isPermaLink="false">ee3543ea-a8e1-4e31-b2ab-3a00fe713ade</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/5695837c-636c-4fc5-ae88-b6037b2603c1</link>
                <pubDate>Thu, 19 Jun 2025 05:04:06 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/6/19/9/c894d02b-3df8-4f2a-9409-3684073dabbc_rm_em_s02_thumbnails__13_.jpg"/>
                <itunes:duration>2963</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Beware of Autonomous Weapons</itunes:title>
                <title>Beware of Autonomous Weapons</title>

                
                
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Should we allow autonomous AI systems? Who is accountable if things go sideways? And how is AI going to transform the future of military work? All this and more with my guest, Rosaria Taddeo, Professor of Digital Ethics and Defense Technologies at the University of Oxford.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Should we allow autonomous AI systems? Who is accountable if things go sideways? And how is AI going to transform the future of military work? All this and more with my guest, Rosaria Taddeo, Professor of Digital Ethics and Defense Technologies at the University of Oxford.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="37294080" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/528f7350-0398-419a-99b8-523b9aa0af55/stream.mp3"/>
                
                <guid isPermaLink="false">788c7a47-63e6-4143-990d-739ad57ebb8c</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/528f7350-0398-419a-99b8-523b9aa0af55</link>
                <pubDate>Thu, 12 Jun 2025 05:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/6/12/6/775de670-b213-4c60-8355-248628efd57f_92-addd-1a33d5111f41_rm_em_s02_thumbnails__12_.jpg"/>
                <itunes:duration>2330</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How Do We Construct Intelligence?</itunes:title>
                <title>How Do We Construct Intelligence?</title>

                
                
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>The Silicon Valley titans talk a lot about the intelligence and superintelligence of AI…but what is intelligence, anyway? My guest, Philip Walsh, a former philosophy professor and now a Director at Gartner, argues the SV folks are fundamentally confused about what intelligence is. It’s not, he argues, like horsepower, which can be objectively measured. Instead, whether we ascribe intelligence to something is a matter of what we communally agree to ascribe it to. More specifically, we have to collectively agree on the criteria for intelligence, and only then does it make sense to say “yeah, this thing is intelligent.” But we don’t really have a settled collective agreement, and that’s why we sort of want to say “this is not intelligence” at the same time we say, “How is this thing so smart?!” I think this is a crucial discussion for anyone who wants to think deeply about what to make of our new quasi/proto/faux intelligent companions.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;The Silicon Valley titans talk a lot about the intelligence and superintelligence of AI…but what is intelligence, anyway? My guest, Philip Walsh, a former philosophy professor and now a Director at Gartner, argues the SV folks are fundamentally confused about what intelligence is. It’s not, he argues, like horsepower, which can be objectively measured. Instead, whether we ascribe intelligence to something is a matter of what we communally agree to ascribe it to. More specifically, we have to collectively agree on the criteria for intelligence, and only then does it make sense to say “yeah, this thing is intelligent.” But we don’t really have a settled collective agreement, and that’s why we sort of want to say “this is not intelligence” at the same time we say, “How is this thing so smart?!” I think this is a crucial discussion for anyone who wants to think deeply about what to make of our new quasi/proto/faux intelligent companions.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="51837387" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/2db977cf-d5ac-421c-95ae-eba7a530780a/stream.mp3"/>
                
                <guid isPermaLink="false">5f139891-4a22-4e93-9a00-e1823cdd0bbb</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/2db977cf-d5ac-421c-95ae-eba7a530780a</link>
                <pubDate>Thu, 05 Jun 2025 05:26:06 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/6/5/5/81ba97f4-bff9-4628-87bb-e7e72934f77e_rm_em_s02_thumbnails__12_.jpg"/>
                <itunes:duration>3239</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Needs Historians</itunes:title>
                <title>AI Needs Historians</title>

                
                
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p><span>How can we solve AI’s problems if we don’t understand where they came from? </span><em>Originally aired in season one.</em></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;How can we solve AI’s problems if we don’t understand where they came from? &lt;/span&gt;&lt;em&gt;Originally aired in season one.&lt;/em&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="32296124" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/20608279-5ebd-4882-9a14-c2f551d254ee/stream.mp3"/>
                
                <guid isPermaLink="false">71148f31-21f1-4aaf-8ce5-b0c2c6c7cbcc</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/20608279-5ebd-4882-9a14-c2f551d254ee</link>
                <pubDate>Thu, 29 May 2025 05:06:31 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/5/29/5/6e5517dd-741d-4fa8-a798-e9cdb0784690_rm_em_s02_thumbnails__10_.jpg"/>
                <itunes:duration>2018</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>We’re Not Ready for Agentic AI</itunes:title>
                <title>We’re Not Ready for Agentic AI</title>

                
                
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Tech companies are racing to build and sell agentic AI. The vision is one in which countless AI agents are acting on our behalf: searching the web, making transactions, interacting with other AI agents. But my guest Avijit Ghosh, Applied Policy Researcher at Hugging Face, explains why we’re not even close to having the appropriate safeguards in place. What are the massive gaps, and what would it take to close them? That’s the topic of our discussion.</p><p><strong>AI Agent Framework:</strong> <em>SmolAgents</em>:<a href="https://huggingface.co/blog/smolagents" rel="nofollow"> https://huggingface.co/blog/smolagents</a></p><p><strong>AI Agents Course:</strong> <a href="https://huggingface.co/learn/agents-course/en/unit0/introduction" rel="nofollow">https://huggingface.co/learn/agents-course/en/unit0/introduction</a></p><p><strong>Position paper:</strong> <a href="https://huggingface.co/papers/2502.02649" rel="nofollow">https://huggingface.co/papers/2502.02649</a></p><p><strong>Op Ed! :</strong> <a href="https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake/" rel="nofollow">https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake/</a></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Tech companies are racing to build and sell agentic AI. The vision is one in which countless AI agents are acting on our behalf: searching the web, making transactions, interacting with other AI agents. But my guest Avijit Ghosh, Applied Policy Researcher at Hugging Face, explains why we’re not even close to having the appropriate safeguards in place. What are the massive gaps, and what would it take to close them? That’s the topic of our discussion.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;AI Agent Framework:&lt;/strong&gt; &lt;em&gt;SmolAgents&lt;/em&gt;:&lt;a href=&#34;https://huggingface.co/blog/smolagents&#34; rel=&#34;nofollow&#34;&gt; https://huggingface.co/blog/smolagents&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;AI Agents Course:&lt;/strong&gt; &lt;a href=&#34;https://huggingface.co/learn/agents-course/en/unit0/introduction&#34; rel=&#34;nofollow&#34;&gt;https://huggingface.co/learn/agents-course/en/unit0/introduction&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Position paper:&lt;/strong&gt; &lt;a href=&#34;https://huggingface.co/papers/2502.02649&#34; rel=&#34;nofollow&#34;&gt;https://huggingface.co/papers/2502.02649&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Op Ed! :&lt;/strong&gt; &lt;a href=&#34;https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake/&#34; rel=&#34;nofollow&#34;&gt;https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake/&lt;/a&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="50471915" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/68f0221e-3abb-44d4-a3e6-e6ed4aa3d5cd/stream.mp3"/>
                
                <guid isPermaLink="false">c14584e4-3030-484d-b656-2dba580f6f2f</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/68f0221e-3abb-44d4-a3e6-e6ed4aa3d5cd</link>
                <pubDate>Thu, 22 May 2025 07:10:04 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/5/22/7/37327df6-5b4e-437a-bee4-071dc187951f_rm_em_s02_thumbnails__10_.jpg"/>
                <itunes:duration>3154</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How Algorithms Manipulate Us</itunes:title>
                <title>How Algorithms Manipulate Us</title>

                <itunes:episode>48</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>We’re told that algorithms on social media are manipulating us. But is that true? What is manipulation? Can an AI really do it? And is it necessarily a bad thing? These questions and more with philosopher Michael Klenk. <span>Originally aired in season one.</span></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;We’re told that algorithms on social media are manipulating us. But is that true? What is manipulation? Can an AI really do it? And is it necessarily a bad thing? These questions and more with philosopher Michael Klenk. &lt;span&gt;Originally aired in season one.&lt;/span&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="46034442" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/30f76d87-e993-4fc4-b8e7-ba0012632753/stream.mp3"/>
                
                <guid isPermaLink="false">969fd073-ebf2-489c-930d-34fbe1226755</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/30f76d87-e993-4fc4-b8e7-ba0012632753</link>
                <pubDate>Thu, 15 May 2025 06:35:09 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/5/15/9/896a67e1-7bbc-435b-a288-a68ee7a3a7bd_264-9af2-354cb1364f09_rm_em_s02_thumbnails__9_.jpg"/>
                <itunes:duration>2877</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>What Should We Do When AI Knows More Than Us?</itunes:title>
                <title>What Should We Do When AI Knows More Than Us?</title>

                <itunes:episode>47</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, etc. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with their beliefs. But what if an AI “knows” more than us? It is an authority in the field in which we’re questioning it. Should we defer to the AI? Should we replace our beliefs with whatever it believes? On the one hand, hard pass! On the other, it does know better than us. What to do? That’s the issue that drives this conversation with my guest, Benjamin Lange, Research Assistant Professor in the Ethics of AI and ML at the Ludwig Maximilians University of Munich.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;We often defer to the judgment of experts. I usually defer to my doctor’s judgment when he diagnoses me, I defer to quantum physicists when they talk to me about string theory, etc. I don’t say “well, that’s interesting, I’ll take it under advisement” and then form my own beliefs. Any beliefs I have on those fronts I replace with their beliefs. But what if an AI “knows” more than us? It is an authority in the field in which we’re questioning it. Should we defer to the AI? Should we replace our beliefs with whatever it believes? On the one hand, hard pass! On the other, it does know better than us. What to do? That’s the issue that drives this conversation with my guest, Benjamin Lange, Research Assistant Professor in the Ethics of AI and ML at the Ludwig Maximilians University of Munich.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="46478315" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/654f5208-9ddb-44f4-85f6-5b07358f61e8/stream.mp3"/>
                
                <guid isPermaLink="false">d3095746-c9d5-4161-ac42-6cacbf4912f0</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/654f5208-9ddb-44f4-85f6-5b07358f61e8</link>
                <pubDate>Thu, 08 May 2025 06:36:10 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/5/8/6/42828c36-2392-494c-b5df-8c791018faeb_rm_em_s02_thumbnails__9_.jpg"/>
                <itunes:duration>2904</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Should We Ignore Claims about AI’s Existential Threat?</itunes:title>
                <title>Should We Ignore Claims about AI’s Existential Threat?</title>

                <itunes:episode>46</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Are claims about AI destroying humanity just more AI hype we should ignore? My guests today, Risto Uuk and Torben Swoboda, assess three popular arguments for why we should dismiss them and focus solely on the AI risks that are here today. But they find each argument flawed, arguing that, unless some fourth powerful argument comes along, we should devote resources to identifying and avoiding potential existential risks to humanity posed by AI.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Are claims about AI destroying humanity just more AI hype we should ignore? My guests today, Risto Uuk and Torben Swoboda, assess three popular arguments for why we should dismiss them and focus solely on the AI risks that are here today. But they find each argument flawed, arguing that, unless some fourth powerful argument comes along, we should devote resources to identifying and avoiding potential existential risks to humanity posed by AI.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="40183431" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/9354e86e-b81d-43ef-b3f6-d2ed97d8d0a9/stream.mp3"/>
                
                <guid isPermaLink="false">ac54481e-89d8-4f20-b587-68cfdfeab52d</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/9354e86e-b81d-43ef-b3f6-d2ed97d8d0a9</link>
                <pubDate>Thu, 01 May 2025 05:30:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/5/1/1/1ecb871d-db08-491b-89e8-d62aff1b80ba_ed1-9173-b3627906ea91_rm_em_s02_thumbnails__8_.jpg"/>
                <itunes:duration>2511</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI is Not Intelligent</itunes:title>
                <title>AI is Not Intelligent</title>

                <itunes:episode>41</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>I have to admit, AI can do some amazing things. More specifically, it looks like it can perform some impressive intellectual feats. But is it actually intelligent? Does it understand? Or is it just really good at statistics? This and more in my conversation with Lisa Titus, former professor of philosophy at the University of Denver and now AI Policy Manager at Meta. Originally aired in season one.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;I have to admit, AI can do some amazing things. More specifically, it looks like it can perform some impressive intellectual feats. But is it actually intelligent? Does it understand? Or is it just really good at statistics? This and more in my conversation with Lisa Titus, former professor of philosophy at the University of Denver and now AI Policy Manager at Meta. Originally aired in season one.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="45086511" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/4108dae8-6c0d-41b9-9bb3-f32785a1ed19/stream.mp3"/>
                
                <guid isPermaLink="false">1bea6665-4689-4c46-8c9a-f4421a0a149e</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/4108dae8-6c0d-41b9-9bb3-f32785a1ed19</link>
                <pubDate>Thu, 24 Apr 2025 04:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/4/23/14/08dbcf38-a9bc-4555-8365-fec34bf258c7_rm_em_s02_thumbnails__7_.jpg"/>
                <itunes:duration>2817</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>A Crash Course on the AI Ethics Landscape</itunes:title>
                <title>A Crash Course on the AI Ethics Landscape</title>

                <itunes:episode>44</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>By the end of this crash course, you’ll understand <em>a lot</em> about the AI ethics landscape. Not only will it give you your bearings, but it will also enable you to identify what parts of the landscape you find interesting so you can do a deeper dive.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;By the end of this crash course, you’ll understand &lt;em&gt;a lot&lt;/em&gt; about the AI ethics landscape. Not only will it give you your bearings, but it will also enable you to identify what parts of the landscape you find interesting so you can do a deeper dive.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="25875853" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/c3faa7ed-e95d-4bf6-bb13-a12c701f5518/stream.mp3"/>
                
                <guid isPermaLink="false">b48ff875-9bc2-4333-b671-3c67c800ad33</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/c3faa7ed-e95d-4bf6-bb13-a12c701f5518</link>
                <pubDate>Thu, 17 Apr 2025 04:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/4/17/11/16aeee39-c66a-4092-9ed2-1745949128cd_0b9-b1c1-e293320fb04f_rm_em_s02_thumbnails__6_.jpg"/>
                <itunes:duration>1617</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Does AI Ethics in Business Make Sense?</itunes:title>
                <title>Does AI Ethics in Business Make Sense?</title>

                <itunes:episode>43</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>People want AI developed ethically, but is there actually a business case for it? The answer better be yes since, after all, it’s businesses that are developing AI in the first place. Today I talk with Dennis Hirsch, Professor of Law and Computer Science at Ohio State University, who is conducting empirical research on this topic. He argues that AI ethics - or as he prefers to call it, Responsible AI - delivers a lot of bottom line business value. In fact, his research revealed something about its value that he didn’t even expect to see. We’re in the early days of businesses taking AI ethics seriously, but if he’s right, we’ll see a lot more of it. Fingers crossed.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;People want AI developed ethically, but is there actually a business case for it? The answer better be yes since, after all, it’s businesses that are developing AI in the first place. Today I talk with Dennis Hirsch, Professor of Law and Computer Science at Ohio State University, who is conducting empirical research on this topic. He argues that AI ethics - or as he prefers to call it, Responsible AI - delivers a lot of bottom line business value. In fact, his research revealed something about its value that he didn’t even expect to see. We’re in the early days of businesses taking AI ethics seriously, but if he’s right, we’ll see a lot more of it. Fingers crossed.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="41342432" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/ad6544c7-7ad9-42b8-85ba-03c1642d28dd/stream.mp3"/>
                
                <guid isPermaLink="false">54d91197-d186-4052-9d85-4409ae0f1e4c</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/ad6544c7-7ad9-42b8-85ba-03c1642d28dd</link>
                <pubDate>Thu, 10 Apr 2025 02:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/4/9/12/4705d886-21b5-42f0-a587-b46efa13b260_rm_em_s02_thumbnails__6_.jpg"/>
                <itunes:duration>2583</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Does AI Undermine Scientific Discovery?</itunes:title>
                <title>Does AI Undermine Scientific Discovery?</title>

                <itunes:episode>46</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Automation is great, right? It speeds up what needs to get done. But is that always a good thing? What about in the process of scientific discovery? Yes, AI can automate a lot of science by running thousands of virtual experiments and generating results - but is something lost in the process? My guest, Ramón Alvarado, a professor of philosophy and a member of the Philosophy and Data Science Initiative at the University of Oregon, thinks something crucial is missing: serendipity. Many significant scientific discoveries occurred by happenstance. Penicillin, for instance, was discovered by Alexander Fleming, who accidentally left a petri dish on a bench before going on vacation. Exactly what is the scientific value of serendipity, how important is it, and how does AI potentially impinge on it? That’s today’s conversation.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Automation is great, right? It speeds up what needs to get done. But is that always a good thing? What about in the process of scientific discovery? Yes, AI can automate a lot of science by running thousands of virtual experiments and generating results - but is something lost in the process? My guest, Ramón Alvarado, a professor of philosophy and a member of the Philosophy and Data Science Initiative at the University of Oregon, thinks something crucial is missing: serendipity. Many significant scientific discoveries occurred by happenstance. Penicillin, for instance, was discovered by Alexander Fleming, who accidentally left a petri dish on a bench before going on vacation. Exactly what is the scientific value of serendipity, how important is it, and how does AI potentially impinge on it? That’s today’s conversation.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="44635533" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/27019f6e-a73a-4a92-a056-a1b9399168c0/stream.mp3"/>
                
                <guid isPermaLink="false">50053308-927b-45fa-8a63-d14260f775a7</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/27019f6e-a73a-4a92-a056-a1b9399168c0</link>
                <pubDate>Thu, 03 Apr 2025 06:44:16 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/4/3/7/4c314154-601d-4af9-b560-bfd888f6f049_e06-8071-d7bd3df28657_rm_em_s02_thumbnails__2_.jpg"/>
                <itunes:duration>2789</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Power of Technologists</itunes:title>
                <title>The Power of Technologists</title>

                <itunes:episode>41</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Behind all those algorithms are the people who create them and embed them into our lives. How did they get that power? What should they do with it? What are their responsibilities? This and more with my guest Chris Wiggins, Chief Data Scientist at the New York Times, Associate Professor of Applied Mathematics at Columbia University, and author of the book “How Data Happened: A History from the Age of Reason to the Age of Algorithms”. Originally aired in season one.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Behind all those algorithms are the people who create them and embed them into our lives. How did they get that power? What should they do with it? What are their responsibilities? This and more with my guest Chris Wiggins, Chief Data Scientist at the New York Times, Associate Professor of Applied Mathematics at Columbia University, and author of the book “How Data Happened: A History from the Age of Reason to the Age of Algorithms”. Originally aired in season one.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="46143111" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/2586304f-846d-43e3-8760-8cac78bda770/stream.mp3"/>
                
                <guid isPermaLink="false">21190de0-0242-42f9-8912-ccdc5b0877ae</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/2586304f-846d-43e3-8760-8cac78bda770</link>
                <pubDate>Thu, 27 Mar 2025 05:30:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/4/3/6/e13c6b24-6a63-44e4-9a6a-2dcf806e143b_rm_em_s02_thumbnails.jpg"/>
                <itunes:duration>2883</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI is Uncontrollable</itunes:title>
                <title>AI is Uncontrollable</title>

                <itunes:episode>40</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>People in the AI safety community are laboring under an illusion, perhaps even a self-deception, my guest argues. They think they can align AI with our values and control it so that the worst doesn’t happen. But that’s impossible. We can never know how AI will act in the wild any more than we can know how our children will act once they leave the house. Thus, we should never give more control to an AI than we would give an individual person. This is a fascinating discussion with Marcus Arvan, professor of philosophy at The University of Tampa and author of three books on ethics and political theory. You might just leave this conversation wondering if giving more and more capabilities to AI is a huge, potentially life-threatening mistake.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;People in the AI safety community are laboring under an illusion, perhaps even a self-deception, my guest argues. They think they can align AI with our values and control it so that the worst doesn’t happen. But that’s impossible. We can never know how AI will act in the wild any more than we can know how our children will act once they leave the house. Thus, we should never give more control to an AI than we would give an individual person. This is a fascinating discussion with Marcus Arvan, professor of philosophy at The University of Tampa and author of three books on ethics and political theory. You might just leave this conversation wondering if giving more and more capabilities to AI is a huge, potentially life-threatening mistake.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="47331369" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/2e919e80-03ed-4ecb-b1f1-3e14cdfee849/stream.mp3"/>
                
                <guid isPermaLink="false">cd9bef3f-44ab-4ae4-a805-b043603de576</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/2e919e80-03ed-4ecb-b1f1-3e14cdfee849</link>
                <pubDate>Thu, 20 Mar 2025 07:12:19 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/4/9/5/30091cbe-a9ae-4a98-9832-c80be451a615_rm_em_s02_thumbnails__5_.jpg"/>
                <itunes:duration>2958</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>A Culture of Online Manipulation</itunes:title>
                <title>A Culture of Online Manipulation</title>

                <itunes:episode>39</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Developers are constantly testing how online users react to their designs. Will they stay longer on the site because of this shade of blue? Will they get depressed if we show them depressing social media posts? What happens if we intentionally mismatch people on our dating website? When it comes to shades of blue, perhaps that’s not a big deal. But when it comes to mental health and deceiving people? Now we’re in ethically choppy waters. </p><p>My discussion today is with Cennydd Bowles, Managing Director of NowNext, where he helps organizations develop ethically sound products. He’s also the author of a book called “Future Ethics.” He argues that A/B testing on people is often ethically wrong and creates a culture among developers of a willingness to manipulate people. Great conversation ranging from the ethics of experimentation to marketing and even to capitalism.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Developers are constantly testing how online users react to their designs. Will they stay longer on the site because of this shade of blue? Will they get depressed if we show them depressing social media posts? What happens if we intentionally mismatch people on our dating website? When it comes to shades of blue, perhaps that’s not a big deal. But when it comes to mental health and deceiving people? Now we’re in ethically choppy waters. &lt;/p&gt;&lt;p&gt;My discussion today is with Cennydd Bowles, Managing Director of NowNext, where he helps organizations develop ethically sound products. He’s also the author of a book called “Future Ethics.” He argues that A/B testing on people is often ethically wrong and creates a culture among developers of a willingness to manipulate people. Great conversation ranging from the ethics of experimentation to marketing and even to capitalism.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="46488346" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/739274b0-6ead-4885-b201-88b2115dc074/stream.mp3"/>
                
                <guid isPermaLink="false">affdb3b4-ce27-4bb3-88d8-0999364752b2</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/739274b0-6ead-4885-b201-88b2115dc074</link>
                <pubDate>Thu, 13 Mar 2025 06:53:19 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/3/13/6/dfe685b2-861a-4ae5-aef5-77df84d30d51_rm_em_s02_thumbnails__4_.jpg"/>
                <itunes:duration>2905</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Risk Mitigation is Insanely Complex</itunes:title>
                <title>AI Risk Mitigation is Insanely Complex</title>

                <itunes:episode>38</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p><span>There’s a picture in our heads that’s overly simplistic, and the result is not thinking clearly about AI risks. Our simplistic picture is that a team develops AI and then it gets used. The truth, the more complex picture, is that 1,000 hands touch that AI before it ever becomes a product. This means that risk identification and mitigation are spread across a very complex supply chain. My guest, Jason Stanley, is at the forefront of research and application when it comes to managing all this complexity.</span></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;There’s a picture in our heads that’s overly simplistic, and the result is not thinking clearly about AI risks. Our simplistic picture is that a team develops AI and then it gets used. The truth, the more complex picture, is that 1,000 hands touch that AI before it ever becomes a product. This means that risk identification and mitigation are spread across a very complex supply chain. My guest, Jason Stanley, is at the forefront of research and application when it comes to managing all this complexity.&lt;/span&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="38356950" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/682e5412-9442-4dcd-b823-7a37aaba13a5/stream.mp3"/>
                
                <guid isPermaLink="false">f28ae131-1731-4a45-a7a8-5d7627c3df53</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/682e5412-9442-4dcd-b823-7a37aaba13a5</link>
                <pubDate>Fri, 07 Mar 2025 07:29:52 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/3/7/7/47e16123-8804-4a3e-85dd-05006c6cff7c_rm_em_s02_thumbnails__3_.jpg"/>
                <itunes:duration>2397</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Did You Say &#34;Quantum&#34; Computer?</itunes:title>
                <title>Did You Say &#34;Quantum&#34; Computer?</title>

                <itunes:episode>37</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>From the best of season 1: Microsoft recently announced an (alleged!) breakthrough in quantum computing. But what in the world is a quantum computer, what can it do, and what are the potential ethical implications of this powerful new tech?</p><p>Brian and I discuss these issues and more. And don’t worry! No knowledge of physics required.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;From the best of season 1: Microsoft recently announced an (alleged!) breakthrough in quantum computing. But what in the world is a quantum computer, what can it do, and what are the potential ethical implications of this powerful new tech?&lt;/p&gt;&lt;p&gt;Brian and I discuss these issues and more. And don’t worry! No knowledge of physics required.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="44009848" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/792f9dc3-fc9e-45e0-b814-a496c1f0031b/stream.mp3"/>
                
                <guid isPermaLink="false">f5150072-1f07-4627-827e-6bc6c3a1c76e</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/792f9dc3-fc9e-45e0-b814-a496c1f0031b</link>
                <pubDate>Thu, 27 Feb 2025 06:10:36 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/4/9/12/9d3e76c6-0cd2-41d3-a581-21583532ef1e_e0f-b47e-72731292a475_rm_em_s02_thumbnails__2_.jpg"/>
                <itunes:duration>2750</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>What Psychologists Say About AI Relationships</itunes:title>
                <title>What Psychologists Say About AI Relationships</title>

                <itunes:episode>36</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Every specialist in anything thinks they should have a seat at the AI ethics table. I’m usually skeptical. But psychologist Madeline Reinecke, Ph.D., did a great job defending her view that – you guessed it – psychologists should have a seat at the AI ethics table.</p><p>Our conversation ranged from the role of psychologists in creating AI that supports healthy human relationships, to when children start and stop attributing sentience to robots, to loving relationships with AI, to the threat of AI-induced self-absorption. I guess I need to have more psychologists on the show.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Every specialist in anything thinks they should have a seat at the AI ethics table. I’m usually skeptical. But psychologist Madeline Reinecke, Ph.D., did a great job defending her view that – you guessed it – psychologists should have a seat at the AI ethics table.&lt;/p&gt;&lt;p&gt;Our conversation ranged from the role of psychologists in creating AI that supports healthy human relationships, to when children start and stop attributing sentience to robots, to loving relationships with AI, to the threat of AI-induced self-absorption. I guess I need to have more psychologists on the show.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="40257828" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/40a2b07e-71ed-425b-8d44-7fe47b851fbe/stream.mp3"/>
                
                <guid isPermaLink="false">398dd464-f626-4794-ad49-e854c55ad70f</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/40a2b07e-71ed-425b-8d44-7fe47b851fbe</link>
                <pubDate>Thu, 20 Feb 2025 07:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/2/20/2/c5040cd8-247f-4604-81af-cbe45813a78a_rm_em_s02_thumbnails__1_.jpg"/>
                <itunes:duration>2516</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Am I Wrong About Agentic AI?</itunes:title>
                <title>Am I Wrong About Agentic AI?</title>

                <itunes:episode>35</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>A fun format for this episode. In Part I, I talk about how I see agentic AI unfolding and what ethical, social, and political risks come with it. In Part II, Eric Corriel, digital strategist at the School of Visual Arts and a close friend, tells me why he thinks I’m wrong. Debate ensues.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;A fun format for this episode. In Part I, I talk about how I see agentic AI unfolding and what ethical, social, and political risks come with it. In Part II, Eric Corriel, digital strategist at the School of Visual Arts and a close friend, tells me why he thinks I’m wrong. Debate ensues.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="29251709" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/a65f639a-ab0a-4992-9c78-87e27c450fab/stream.mp3"/>
                
                <guid isPermaLink="false">4800a147-497f-4c03-bf9c-71393197e10c</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/a65f639a-ab0a-4992-9c78-87e27c450fab</link>
                <pubDate>Thu, 13 Feb 2025 06:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/2/13/5/b14202f0-13e0-4805-989c-5af6289cbd73_rm_em_s02_thumbnails.jpg"/>
                <itunes:duration>1828</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>What Do VCs Want to Know About AI Ethics?</itunes:title>
                <title>What Do VCs Want to Know About AI Ethics?</title>

                <itunes:episode>34</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p><span>Jaahred Thomas is a VC friend of mine who wanted to talk about the evolving landscape of AI ethics in startups and business generally. So rather than have a normal conversation like people do, we made it an episode! Jaahred asks me a bunch of questions about AI ethics and startups, investors, Fortune 500 companies, and more, and I tell him the unvarnished truths about where corporate America is in the AI ethics journey and what startup founders should and shouldn’t spend their time doing.</span></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Jaahred Thomas is a VC friend of mine who wanted to talk about the evolving landscape of AI ethics in startups and business generally. So rather than have a normal conversation like people do, we made it an episode! Jaahred asks me a bunch of questions about AI ethics and startups, investors, Fortune 500 companies, and more, and I tell him the unvarnished truths about where corporate America is in the AI ethics journey and what startup founders should and shouldn’t spend their time doing.&lt;/span&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="51916800" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/6b828f12-824e-41cf-b1c6-78b947fc84bc/stream.mp3"/>
                
                <guid isPermaLink="false">519e47a8-1fd8-4b3b-b649-5424cf8874d5</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/6b828f12-824e-41cf-b1c6-78b947fc84bc</link>
                <pubDate>Thu, 06 Feb 2025 05:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/2/5/18/bb6e2dd3-2003-47a3-a50a-c3dc1cc4a039_rm_em_s02_thumbnails__5_.jpg"/>
                <itunes:duration>3244</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Peril of Principles in AI Ethics</itunes:title>
                <title>The Peril of Principles in AI Ethics</title>

                <itunes:episode>33</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>From the best of season 1: The hospital faced an ethical question: should we deploy robots to help with elder care?</p><p>Let’s look at a standard list of AI ethics values: justice/fairness, privacy, transparency, accountability, explainability.</p><p>But as Ami points out in our conversation, that standard list doesn’t include a core value at the hospital: the value of caring.</p><p>And that’s one example supporting one of his three objections to a view he calls “Principalism.” Principalism is the view that we do AI ethics best by first defining our AI ethics values or principles at that very abstract level. The objection is that the list will always be incomplete.</p><p>Given Ami’s expertise in ethics and experience as a clinical ethicist, it was insightful to see how he gets ethics done on the ground and his views on how organizations should approach ethics more generally.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;From the best of season 1: The hospital faced an ethical question: should we deploy robots to help with elder care?&lt;/p&gt;&lt;p&gt;Let’s look at a standard list of AI ethics values: justice/fairness, privacy, transparency, accountability, explainability.&lt;/p&gt;&lt;p&gt;But as Ami points out in our conversation, that standard list doesn’t include a core value at the hospital: the value of caring.&lt;/p&gt;&lt;p&gt;And that’s one example supporting one of his three objections to a view he calls “Principalism.” Principalism is the view that we do AI ethics best by first defining our AI ethics values or principles at that very abstract level. The objection is that the list will always be incomplete.&lt;/p&gt;&lt;p&gt;Given Ami’s expertise in ethics and experience as a clinical ethicist, it was insightful to see how he gets ethics done on the ground and his views on how organizations should approach ethics more generally.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="48628297" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/58b1e269-07bd-4576-8e1a-b26421fcbce7/stream.mp3"/>
                
                <guid isPermaLink="false">d4ac0d85-6479-4224-8537-1874d70b71a4</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/58b1e269-07bd-4576-8e1a-b26421fcbce7</link>
                <pubDate>Thu, 30 Jan 2025 05:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/1/29/17/220d31f8-a6c1-4469-a85b-a2fced4e5122_2d2-82c2-12ae1bb41002_rm_em_s02_thumbnails__4_.jpg"/>
                <itunes:duration>3039</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Innovation Hype and Why We Should Wait on AI Regulation</itunes:title>
                <title>Innovation Hype and Why We Should Wait on AI Regulation</title>

                <itunes:episode>32</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>From the best of season 1: Innovation is great…but hype is bad. Not only has all this talk of innovation not increased innovation, but it also creates a bad environment for leaders trying to make reasoned judgments about where to devote resources. So says Lee Vinsel, professor in the Department of Science, Technology and Society at Virginia Tech, in this Ethical Machines episode.</p><p>ALSO! We want proactive regulations before the sh!t hits the fan, right? Not so fast, says Lee. Proactive regulations presuppose we’re good at predicting how technologies will be applied, and we have a terrible track record on that front. Perhaps reactive regs are appropriate (and we need to focus on making a more agile government).</p><p>Super interesting conversation that will push you to think differently about innovation and what appropriate regulation looks like.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;From the best of season 1: Innovation is great…but hype is bad. Not only has all this talk of innovation not increased innovation, but it also creates a bad environment for leaders trying to make reasoned judgments about where to devote resources. So says Lee Vinsel, professor in the Department of Science, Technology and Society at Virginia Tech, in this Ethical Machines episode.&lt;/p&gt;&lt;p&gt;ALSO! We want proactive regulations before the sh!t hits the fan, right? Not so fast, says Lee. Proactive regulations presuppose we’re good at predicting how technologies will be applied, and we have a terrible track record on that front. Perhaps reactive regs are appropriate (and we need to focus on making a more agile government).&lt;/p&gt;&lt;p&gt;Super interesting conversation that will push you to think differently about innovation and what appropriate regulation looks like.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="48401345" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/7dc5d9e4-2cbf-4c7f-8324-8dfe2d9f9876/stream.mp3"/>
                
                <guid isPermaLink="false">a3174fdc-b372-47d9-ad1d-5b88a1034289</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/7dc5d9e4-2cbf-4c7f-8324-8dfe2d9f9876</link>
                <pubDate>Thu, 23 Jan 2025 08:30:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/1/23/13/96dc9984-a04c-40b2-adb7-346fc9011c4e_6d1-b5dc-4fa72ba66e05_rm_em_s02_thumbnails__3_.jpg"/>
                <itunes:duration>3025</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Businesses are afraid to say “ethics”</itunes:title>
                <title>Businesses are afraid to say “ethics”</title>

                <itunes:episode>31</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>“Sustainability,” “purpose/mission/value driven”, “human-centric design.” These are terms companies use so they don’t have to say “ethics.” My contention is that this is bad for business and bad for society at large. Our world, corporate and otherwise, is confronted with a growing mountain of ethical problems, spurred on by technologies that bring us fresh new ways of realizing our familiar ethical nightmares. These issues do not disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;“Sustainability,” “purpose/mission/value driven”, “human-centric design.” These are terms companies use so they don’t have to say “ethics.” My contention is that this is bad for business and bad for society at large. Our world, corporate and otherwise, is confronted with a growing mountain of ethical problems, spurred on by technologies that bring us fresh new ways of realizing our familiar ethical nightmares. These issues do not disappear via semantic legerdemain. We need to name our problems accurately if we are to address them effectively.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="15776705" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/ff6cf1b2-8b3d-420f-b579-1a0b0bbafc26/stream.mp3"/>
                
                <guid isPermaLink="false">d02dca2c-14de-4d43-9d76-56b0042c8628</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/ff6cf1b2-8b3d-420f-b579-1a0b0bbafc26</link>
                <pubDate>Thu, 16 Jan 2025 07:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/1/16/6/7dac7434-86d8-43b8-b867-cf632ee6838f_rm_em_s02_thumbnails__2_.jpg"/>
                <itunes:duration>986</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>We’re Getting AI and Democracy Wrong</itunes:title>
                <title>We’re Getting AI and Democracy Wrong</title>

                <itunes:episode>30</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Democracy is about how we ought to distribute power in society and, more specifically, it’s the claim that people ought to have a significant say in how they are ruled. So if we’re talking about AI’s impact on democracy, we should focus on how our use of AI interacts with our concern that democracy be respected, upheld, and improved.</p><p>But my guest Ted Lechterman, UNESCO Chair in AI Ethics and Governance at IE University’s School of Humanities, argues that our current discussion on AI and democracy is far too narrow. We need to widen our aperture if we’re going to discuss and ultimately address the issues that really matter.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Democracy is about how we ought to distribute power in society and, more specifically, it’s the claim that people ought to have a significant say in how they are ruled. So if we’re talking about AI’s impact on democracy, we should focus on how our use of AI interacts with our concern that democracy be respected, upheld, and improved.&lt;/p&gt;&lt;p&gt;But my guest Ted Lechterman, UNESCO Chair in AI Ethics and Governance at IE University’s School of Humanities, argues that our current discussion on AI and democracy is far too narrow. We need to widen our aperture if we’re going to discuss and ultimately address the issues that really matter.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="51360914" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/c5f83a52-d8ce-4bb0-9b10-cf6e4fec1219/stream.mp3"/>
                
                <guid isPermaLink="false">cdc82cde-dfa1-436b-9cc6-1c963b9eca7d</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/c5f83a52-d8ce-4bb0-9b10-cf6e4fec1219</link>
                <pubDate>Thu, 09 Jan 2025 05:44:52 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/1/9/5/fbfc5f2c-9b13-42f5-8075-e4f24eae93d4_rm_em_s02_thumbnails__2_.jpg"/>
                <itunes:duration>3210</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Why Copyright Challenges to AI Learning Will Fail and the Ethical Reasons Why They Shouldn’t</itunes:title>
                <title>Why Copyright Challenges to AI Learning Will Fail and the Ethical Reasons Why They Shouldn’t</title>

                <itunes:episode>29</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p><span>From the best of season 1. </span></p><p><span>Well, I didn’t see this coming. Talking about legal and philosophical conceptions of copyright turns out to be intellectually fascinating and challenging. It involves not only concepts about property and theft, but also about personhood and invasiveness. Could it be that training AI with author/artist work violates their self?</span></p><p><span>I talked about all this with Darren Hick, who wrote a few books on the topic. I definitely didn’t think he was going to bring up Hegel.</span></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;From the best of season 1. &lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;Well, I didn’t see this coming. Talking about legal and philosophical conceptions of copyright turns out to be intellectually fascinating and challenging. It involves not only concepts about property and theft, but also about personhood and invasiveness. Could it be that training AI with author/artist work violates their self?&lt;/span&gt;&lt;/p&gt;&lt;p&gt;&lt;span&gt;I talked about all this with Darren Hick, who wrote a few books on the topic. I definitely didn’t think he was going to bring up Hegel.&lt;/span&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="50792071" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/8213b069-edea-49f7-a933-0847c1f171bf/stream.mp3"/>
                
                <guid isPermaLink="false">dca41b56-91e0-4a9a-8fc1-8eb3fa8bf20b</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/8213b069-edea-49f7-a933-0847c1f171bf</link>
                <pubDate>Thu, 19 Dec 2024 05:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/12/18/18/e7806509-02d5-4d9b-9b19-baa29430fd5e_rm_em_s02_thumbnails__2_.jpg"/>
                <itunes:duration>3174</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Evolving AI Governance</itunes:title>
                <title>Evolving AI Governance</title>

                <itunes:episode>28</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>My guest and I have been doing AI governance for businesses for a combined 17+ years. We started way before genAI was a big thing. But I’d say I’m more a qualitative guy and he’s more quant. Nick Elprin is the CEO of an AI governance software company, after all. How has AI ethics or AI governance evolved over that time, and what does cutting-edge governance look like? Perhaps you’re about to find out…</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;My guest and I have been doing AI governance for businesses for a combined 17&#43; years. We started way before genAI was a big thing. But I’d say I’m more a qualitative guy and he’s more quant. Nick Elprin is the CEO of an AI governance software company, after all. How has AI ethics or AI governance evolved over that time, and what does cutting-edge governance look like? Perhaps you’re about to find out…&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="48128835" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/b4b2d3f4-4c9e-4916-9ef2-60375bdda844/stream.mp3"/>
                
                <guid isPermaLink="false">0ed0813d-5c97-40f3-9ca9-64f37184078e</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/b4b2d3f4-4c9e-4916-9ef2-60375bdda844</link>
                <pubDate>Thu, 12 Dec 2024 06:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/12/12/3/271c748f-1178-4a84-bc2b-56494270e23b_fc7-a2b6-7bc602c1b41d_rm_em_s02_thumbnails__1_.jpg"/>
                <itunes:duration>3008</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>What’s Wrong With Loving an AI?</itunes:title>
                <title>What’s Wrong With Loving an AI?</title>

                <itunes:episode>27</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p><span>People, especially kids under 18, are forming emotional attachments with AI chatbots. At a minimum, this is…weird. Is it also unethical? Does it harm users? Is it, as my guest Robert Mahari argues, an affront to human dignity? Have a listen and find out.</span></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;People, especially kids under 18, are forming emotional attachments with AI chatbots. At a minimum, this is…weird. Is it also unethical? Does it harm users? Is it, as my guest Robert Mahari argues, an affront to human dignity? Have a listen and find out.&lt;/span&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="51459134" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/1f85af49-c328-4cc0-98f1-d8bda3b023fd/stream.mp3"/>
                
                <guid isPermaLink="false">5499ecab-f1c7-4dc0-9714-eafca59f98a1</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/1f85af49-c328-4cc0-98f1-d8bda3b023fd</link>
                <pubDate>Thu, 05 Dec 2024 06:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/12/4/17/bb9c920d-59cf-4dad-8866-f14d95891f8b_rm_em_s02_thumbnails__1_.jpg"/>
                <itunes:duration>3216</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Rationally Believing Conspiracy Theories</itunes:title>
                <title>Rationally Believing Conspiracy Theories</title>

                <itunes:episode>26</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>You might want more online content moderation so insane conspiracy theories don’t flourish. Sex slaves in Democrat pizza shops, climate change is a hoax, and so on. But is it irrational to believe these things? Is content moderation - whether in the form of censoring or labelling something as false - the morally right and/or effective strategy? In this discussion Neil Levy and I go back to basics about what it is to be rational and how that helps us answer our questions. Neil’s fascinating answer in a nutshell: they’re not irrational and content moderation isn’t a good strategy. This is, I have to say, great stuff. Enjoy!</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;You might want more online content moderation so insane conspiracy theories don’t flourish. Sex slaves in Democrat pizza shops, climate change is a hoax, and so on. But is it irrational to believe these things? Is content moderation - whether in the form of censoring or labelling something as false - the morally right and/or effective strategy? In this discussion Neil Levy and I go back to basics about what it is to be rational and how that helps us answer our questions. Neil’s fascinating answer in a nutshell: they’re not irrational and content moderation isn’t a good strategy. This is, I have to say, great stuff. Enjoy!&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="50135040" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/c9f067c3-59fe-4a4f-9ee1-62bb589fa3d8/stream.mp3"/>
                
                <guid isPermaLink="false">b7de0d5d-3a25-4638-bf49-491c5eb308ee</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/c9f067c3-59fe-4a4f-9ee1-62bb589fa3d8</link>
                <pubDate>Thu, 21 Nov 2024 07:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/11/21/3/8597fdbe-8af3-4e37-8afb-0ecd25ca64e7_rm_em_s02_thumbnails.jpg"/>
                <itunes:duration>3133</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Understands. A Little. Part 2</itunes:title>
                <title>AI Understands. A Little. Part 2</title>

                <itunes:episode>25</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p><span>From the best of season 1. Part 2 of my conversation with Alex. </span></p><p>There’s good reason to think AI doesn’t understand anything. It’s just moving around words according to mathematical rules, predicting the words that come next. But in this episode, philosopher Alex Grzankowski argues that AI may not understand what it’s saying but it does understand language. In this episode we do a deep dive into the nature of human and AI understanding, ending with strategies for how AI researchers could pursue AI that has genuine understanding of the world.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;From the best of season 1. Part 2 of my conversation with Alex. &lt;/span&gt;&lt;/p&gt;&lt;p&gt;There’s good reason to think AI doesn’t understand anything. It’s just moving around words according to mathematical rules, predicting the words that come next. But in this episode, philosopher Alex Grzankowski argues that AI may not understand what it’s saying but it does understand language. In this episode we do a deep dive into the nature of human and AI understanding, ending with strategies for how AI researchers could pursue AI that has genuine understanding of the world.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="57383706" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/dd536d8e-01fc-41d7-b154-94d620c31e51/stream.mp3"/>
                
                <guid isPermaLink="false">1e2e98c6-b87b-4bf3-bd54-1b63c5d130d3</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/dd536d8e-01fc-41d7-b154-94d620c31e51</link>
                <pubDate>Thu, 14 Nov 2024 08:32:30 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/11/14/8/720ee110-1361-45b6-9f26-3e0fc680397f_ethicalmachinestemplate___2000_x_2000_px___2_.jpg"/>
                <itunes:duration>3586</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>ChatGPT Does Not Understand Anything Part 1</itunes:title>
                <title>ChatGPT Does Not Understand Anything Part 1</title>

                <itunes:episode>24</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>From the best of season 1. Part 1 of my conversation with Alex Grzankowski.</p><p>It looks like ChatGPT understands what you’re asking. It looks like ChatGPT understands what it’s saying in reply. </p><p>But that’s not the case.</p><p>Alex and I discuss what understanding is, for both people and machines and what it would take for a machine to understand what it’s saying.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;From the best of season 1. Part 1 of my conversation with Alex Grzankowski.&lt;/p&gt;&lt;p&gt;It looks like ChatGPT understands what you’re asking. It looks like ChatGPT understands what it’s saying in reply. &lt;/p&gt;&lt;p&gt;But that’s not the case.&lt;/p&gt;&lt;p&gt;Alex and I discuss what understanding is, for both people and machines and what it would take for a machine to understand what it’s saying.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="51699879" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/b2380b44-6525-4cd6-bee0-7c00ed418ab1/stream.mp3"/>
                
                <guid isPermaLink="false">cdd50b66-2420-4e95-ba95-cd17ec1c6500</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/b2380b44-6525-4cd6-bee0-7c00ed418ab1</link>
                <pubDate>Wed, 13 Nov 2024 09:18:44 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/11/13/9/4c7bd7fd-9c9f-4e44-a0a6-d49010a32943__ethicalmachinestemplate___2000_x_2000_px___1_.jpg"/>
                <itunes:duration>3231</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Tyranny of the One Best Algorithm</itunes:title>
                <title>Tyranny of the One Best Algorithm</title>

                <itunes:episode>23</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>One person driving one car creates a negligible amount of pollution. The problem arises when we have lots of people driving cars. Might this kind of issue arise with AI use as well? What if everyone uses the same hiring or lending or diagnostic algorithm? My guest, Kathleen Creel, argues that this is bad for society and bad for the companies using these algorithms. The solution, in broad strokes, is to introduce randomness into the AI system. But is this a good idea? If so, do we need regulation to pull it off? This and more on today’s episode.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;One person driving one car creates a negligible amount of pollution. The problem arises when we have lots of people driving cars. Might this kind of issue arise with AI use as well? What if everyone uses the same hiring or lending or diagnostic algorithm? My guest, Kathleen Creel, argues that this is bad for society and bad for the companies using these algorithms. The solution, in broad strokes, is to introduce randomness into the AI system. But is this a good idea? If so, do we need regulation to pull it off? This and more on today’s episode.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="43792927" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/cfbe5250-dc9e-418d-a700-93bc6897afb8/stream.mp3"/>
                
                <guid isPermaLink="false">bc16a5f8-5b70-440f-b4fd-1b0b8de2d236</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/cfbe5250-dc9e-418d-a700-93bc6897afb8</link>
                <pubDate>Thu, 07 Nov 2024 05:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/11/6/18/f577b567-2eec-47c0-90cd-1f32978363ce_s02ep23.jpg"/>
                <itunes:duration>2737</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>How AI Ends Legal Uncertainty</itunes:title>
                <title>How AI Ends Legal Uncertainty</title>

                <itunes:episode>22</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>With so many laws and so much case law, it’s virtually impossible for the layperson to know what’s legal and illegal. But what if AI can synthesize all that information and deliver clear legal guidance to the average person? Is such a thing possible? Is it desirable?</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;With so many laws and so much case law, it’s virtually impossible for the layperson to know what’s legal and illegal. But what if AI can synthesize all that information and deliver clear legal guidance to the average person? Is such a thing possible? Is it desirable?&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="55538416" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/2fba1155-a511-4f54-af62-a810310fa718/stream.mp3"/>
                
                <guid isPermaLink="false">68b58116-0065-4ce0-a83d-31f66e0e2c32</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/2fba1155-a511-4f54-af62-a810310fa718</link>
                <pubDate>Thu, 31 Oct 2024 11:31:11 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/10/31/11/eaf2a274-ede9-40a6-954e-bd5538bf2dec__ethicalmachinestemplate___2000_x_2000_px___1_.jpg"/>
                <itunes:duration>3471</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Is Tech a Religion that Needs Reformation?</itunes:title>
                <title>Is Tech a Religion that Needs Reformation?</title>

                <itunes:episode>21</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Author of the new book “Tech Agnostic: How Technology Became the World&#39;s Most Powerful Religion, and Why It Desperately Needs a Reformation” discusses, well, what do you think? It’s right there in the title. Go have a listen.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Author of the new book “Tech Agnostic: How Technology Became the World&amp;#39;s Most Powerful Religion, and Why It Desperately Needs a Reformation” discusses, well, what do you think? It’s right there in the title. Go have a listen.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="35716702" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/3037e744-ade7-453d-9a48-764fb399d3e2/stream.mp3"/>
                
                <guid isPermaLink="false">fd69522c-c8eb-4275-8ac8-f556d292c747</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/3037e744-ade7-453d-9a48-764fb399d3e2</link>
                <pubDate>Thu, 24 Oct 2024 06:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/10/23/17/35d70857-da91-4fee-ad13-3ffae52c905b_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>2232</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Should We Care About Data Privacy?</itunes:title>
                <title>Should We Care About Data Privacy?</title>

                
                
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>From the best of season 1: You might think it&#39;s outrageous that companies collect data about you and use it in various ways to drive profits. The business model of the &#34;attention&#34; economy is often objected to on just these grounds.</p><p>On the other hand, does it really matter if data about you is collected and no person ever looks at that data? Is that really an invasion of your privacy?</p><p>Carissa and I discuss all this and more. I push the skeptical line, trying on the position that it doesn&#39;t really matter all that much. Carissa has powerful arguments against me.</p><p>This conversation goes way deeper than &#39;privacy good/data collection bad&#39; statements we see all the time. I hope you enjoy!</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;From the best of season 1: You might think it&amp;#39;s outrageous that companies collect data about you and use it in various ways to drive profits. The business model of the &amp;#34;attention&amp;#34; economy is often objected to on just these grounds.&lt;/p&gt;&lt;p&gt;On the other hand, does it really matter if data about you is collected and no person ever looks at that data? Is that really an invasion of your privacy?&lt;/p&gt;&lt;p&gt;Carissa and I discuss all this and more. I push the skeptical line, trying on the position that it doesn&amp;#39;t really matter all that much. Carissa has powerful arguments against me.&lt;/p&gt;&lt;p&gt;This conversation goes way deeper than &amp;#39;privacy good/data collection bad&amp;#39; statements we see all the time. I hope you enjoy!&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="50998543" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/32d822fb-c98e-4f47-b45a-f82e35859d05/stream.mp3"/>
                
                <guid isPermaLink="false">2941bdd9-8aeb-4d31-bcee-5fcdf0674cf9</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/32d822fb-c98e-4f47-b45a-f82e35859d05</link>
                <pubDate>Thu, 17 Oct 2024 08:02:44 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/10/17/8/33494eea-c09a-4cfa-8c42-a1c82727317a__ethicalmachinestemplate___2000_x_2000_px___4_.jpg"/>
                <itunes:duration>3187</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The AI Mirror</itunes:title>
                <title>The AI Mirror</title>

                <itunes:episode>19</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>We use the wrong metaphor for thinking about AI, Shannon Vallor argues, and bad thinking leads to bad results. We need to stop thinking about AI as being an agent or having a mind, and stop thinking of the human mind/brain as a kind of software/hardware configuration. All of this is misguided. Instead, we should think of AI as a mirror, reflecting our images in a sometimes helpful, sometimes distorted way. Our shifting to this new metaphor, she says, will lead us to better, and ethically better, AI.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;We use the wrong metaphor for thinking about AI, Shannon Vallor argues, and bad thinking leads to bad results. We need to stop thinking about AI as being an agent or having a mind, and stop thinking of the human mind/brain as a kind of software/hardware configuration. All of this is misguided. Instead, we should think of AI as a mirror, reflecting our images in a sometimes helpful, sometimes distorted way. Our shifting to this new metaphor, she says, will lead us to better, and ethically better, AI.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="60536790" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/cac0947f-ea01-4f67-a97a-aaff9af08383/stream.mp3"/>
                
                <guid isPermaLink="false">64bb863f-d50a-42d6-bbcd-d1a56ba2d639</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/cac0947f-ea01-4f67-a97a-aaff9af08383</link>
                <pubDate>Thu, 10 Oct 2024 06:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/10/10/4/0d3549f7-a56f-411d-b611-9153be4f50b7_ethicalmachinestemplate___2000_x_2000_px___3_.jpg"/>
                <itunes:duration>3783</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Holding AI Responsible for What It Says</itunes:title>
                <title>Holding AI Responsible for What It Says</title>

                <itunes:episode>18</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Air Canada blamed the LLM chatbot for giving false information about their bereavement fare policy. They lost the lawsuit because of course it’s not the chatbot’s fault. But what would it take to hold chatbots responsible for what they say? That’s the topic of discussion with my guest, philosopher Emma Borg.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Air Canada blamed the LLM chatbot for giving false information about their bereavement fare policy. They lost the lawsuit because of course it’s not the chatbot’s fault. But what would it take to hold chatbots responsible for what they say? That’s the topic of discussion with my guest, philosopher Emma Borg.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="47201802" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/153a551f-529c-494f-9940-c891d1adcff5/stream.mp3"/>
                
                <guid isPermaLink="false">0a8bac2a-a9e5-476f-86d9-3dc49ef0e629</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/153a551f-529c-494f-9940-c891d1adcff5</link>
                <pubDate>Thu, 03 Oct 2024 05:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/10/2/17/34b3bdf4-da4e-4e37-a42f-0f69b21b9693_ethicalmachinestemplate___2000_x_2000_px___2_.jpg"/>
                <itunes:duration>2950</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Deepfakes and 2024 Election</itunes:title>
                <title>Deepfakes and 2024 Election</title>

                <itunes:episode>17</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>California just signed a bill to drastically decrease deepfakes on social media. The worry, of course, is that they are already being used to unjustifiably sway voters. In this episode, one of the best from Season 1, I talk to Dean Jackson and Jon Bateman, experts on the role of deepfakes in disinformation campaigns. The bottom line? Deepfakes aren’t great but they’re not half the problem.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;California just signed a bill to drastically decrease deepfakes on social media. The worry, of course, is that they are already being used to unjustifiably sway voters. In this episode, one of the best from Season 1, I talk to Dean Jackson and Jon Bateman, experts on the role of deepfakes in disinformation campaigns. The bottom line? Deepfakes aren’t great but they’re not half the problem.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="46171951" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/0cf90815-a73d-496c-bb31-37b000c5e6b3/stream.mp3"/>
                
                <guid isPermaLink="false">d400a483-1ca2-48e4-9ba9-2f034d237a49</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/0cf90815-a73d-496c-bb31-37b000c5e6b3</link>
                <pubDate>Thu, 26 Sep 2024 05:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/9/26/1/adcded69-af0a-492e-a921-55231fa60a5e__ethicalmachinestemplate___2000_x_2000_px___1_.jpg"/>
                <itunes:duration>2885</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Ethics for People Who Work in Tech</itunes:title>
                <title>Ethics for People Who Work in Tech</title>

                <itunes:episode>16</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>What does it look like to integrate ethics into the teams that are building AI? How can we make ethics a practice and not a compliance checklist? In today’s episode I talk with Marc Steen, author of the book “Ethics for People Who Work in Tech,” who answers these questions and more.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;What does it look like to integrate ethics into the teams that are building AI? How can we make ethics a practice and not a compliance checklist? In today’s episode I talk with Marc Steen, author of the book “Ethics for People Who Work in Tech,” who answers these questions and more.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="39953972" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/a6b17ea3-908c-4bec-86a3-2953416c47cc/stream.mp3"/>
                
                <guid isPermaLink="false">26ef6344-afbb-48f1-b5b1-cea1e979c9c6</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/a6b17ea3-908c-4bec-86a3-2953416c47cc</link>
                <pubDate>Thu, 19 Sep 2024 06:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/9/18/18/a6b4d33d-c8af-41ba-b786-ae63420a7952_ethicalmachinestemplate___2000_x_2000_px___1_.jpg"/>
                <itunes:duration>2497</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Calm the Hell Down: AI is Just Software that Learns by Example and No, It’s Not Going to Kill Us All</itunes:title>
                <title>Calm the Hell Down: AI is Just Software that Learns by Example and No, It’s Not Going to Kill Us All</title>

                <itunes:episode>15</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Doesn’t the title say it all? This is for anyone who wants the very basics on what AI is, why it’s not intelligent, and why it doesn’t pose an existential threat to humanity. If you don’t know anything at all about AI and/or the nature of the mind/intelligence, don’t worry: we’re starting on the ground floor.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Doesn’t the title say it all? This is for anyone who wants the very basics on what AI is, why it’s not intelligent, and why it doesn’t pose an existential threat to humanity. If you don’t know anything at all about AI and/or the nature of the mind/intelligence, don’t worry: we’re starting on the ground floor.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="32080457" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/04b584ef-6120-4caa-8f49-25649cc75b5e/stream.mp3"/>
                
                <guid isPermaLink="false">072733d2-25fb-4fa1-a306-f46928aa6b01</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/04b584ef-6120-4caa-8f49-25649cc75b5e</link>
                <pubDate>Thu, 12 Sep 2024 06:30:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/9/11/19/2813b294-3d03-4e07-a37e-ac8761e006de_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>2005</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Does Social Media Diminish Our Autonomy?</itunes:title>
                <title>Does Social Media Diminish Our Autonomy?</title>

                <itunes:episode>14</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital lands, since the algorithms influence how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is and how we can encourage or require greater autonomy is our topic of discussion today.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital lands, since the algorithms influence how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is and how we can encourage or require greater autonomy is our topic of discussion today.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="47157498" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/20ddb4fb-d88e-4074-9dbb-903531d16e65/stream.mp3"/>
                
                <guid isPermaLink="false">5be3732c-f624-419b-a706-14b81cc1c6a4</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/20ddb4fb-d88e-4074-9dbb-903531d16e65</link>
                <pubDate>Thu, 05 Sep 2024 08:17:27 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/9/5/8/83358b5a-04b6-4c44-92c0-68e49ff09600_ethicalmachinestemplate___2000_x_2000_px___1_.jpg"/>
                <itunes:duration>2947</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Choosing Who Should Benefit and Who Should Suffer with AI</itunes:title>
                <title>Choosing Who Should Benefit and Who Should Suffer with AI</title>

                <itunes:episode>13</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>From the best of season 1: <span>I talk a lot about bias, black boxes, and privacy, but perhaps my focus is too narrow. In this conversation, Aimee and I discuss what she calls “sustainable AI.” We focus on the environmental impacts of AI, the ethical impacts of those environmental impacts, and who is paying the social cost of those who benefit from AI. </span></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;From the best of season 1: &lt;span&gt;I talk a lot about bias, black boxes, and privacy, but perhaps my focus is too narrow. In this conversation, Aimee and I discuss what she calls “sustainable AI.” We focus on the environmental impacts of AI, the ethical impacts of those environmental impacts, and who is paying the social cost of those who benefit from AI. &lt;/span&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="44653923" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/bfea5a17-31dd-481c-b2ff-3c29771c830a/stream.mp3"/>
                
                <guid isPermaLink="false">88e25f9c-34d0-43ad-89fe-fa370d4ebe43</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/bfea5a17-31dd-481c-b2ff-3c29771c830a</link>
                <pubDate>Thu, 29 Aug 2024 06:09:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/8/29/6/7e3c8550-d85f-4660-9fa5-82d996f95e08_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>2790</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>We’re Doing AI Ethics Wrong</itunes:title>
                <title>We’re Doing AI Ethics Wrong</title>

                <itunes:episode>12</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Is our collective approach to ensuring AI doesn’t go off the rails fundamentally misguided? Is our approach too narrow to get the job done? My guest, John Basl, argues exactly that. We need to broaden our perspective, he says, and prioritize what he calls an “AI ethics ecosystem.” It’s a big lift, but without it, it’s an even bigger problem.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Is our collective approach to ensuring AI doesn’t go off the rails fundamentally misguided? Is our approach too narrow to get the job done? My guest, John Basl, argues exactly that. We need to broaden our perspective, he says, and prioritize what he calls an “AI ethics ecosystem.” It’s a big lift, but without it, it’s an even bigger problem.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="63175366" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/e53e41a9-bf95-412d-a3ce-4659489dfe5c/stream.mp3"/>
                
                <guid isPermaLink="false">55377d0e-2ca8-455c-b441-4e999cd0d120</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/e53e41a9-bf95-412d-a3ce-4659489dfe5c</link>
                <pubDate>Thu, 22 Aug 2024 12:07:58 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/8/22/11/25e665db-d477-4b22-baed-c97862a2ec9e_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>3948</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Can AI Do Ethics?</itunes:title>
                <title>Can AI Do Ethics?</title>

                <itunes:episode>11</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Many researchers in AI think we should make AI capable of ethical inquiry. We can’t teach it all the ethical rules; that’s impossible. Instead, we should teach it to ethically reason, just as we do children. But my guest thinks this strategy makes a number of controversial assumptions, including about how ethics works and what actually is right and wrong.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Many researchers in AI think we should make AI capable of ethical inquiry. We can’t teach it all the ethical rules; that’s impossible. Instead, we should teach it to ethically reason, just as we do children. But my guest thinks this strategy makes a number of controversial assumptions, including about how ethics works and what actually is right and wrong.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="42129449" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/0e8ad263-d9c4-4fed-b040-a7aad7b1a6e5/stream.mp3"/>
                
                <guid isPermaLink="false">bdc0f28d-40f0-4d4c-9ee2-a71bd31437ef</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/0e8ad263-d9c4-4fed-b040-a7aad7b1a6e5</link>
                <pubDate>Thu, 15 Aug 2024 07:51:43 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/8/15/7/998b44da-8cc8-41a4-887c-8fe74f2204d4_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>2633</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>We Don’t Need AI Regulations</itunes:title>
                <title>We Don’t Need AI Regulations</title>

                <itunes:episode>10</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>It’s common to hear we need new regulations to avoid the risks of AI (bias, privacy violations, manipulation, etc.). But my guest, Dean Ball, thinks this claim is too hastily made. In fact, he argues, we don’t need a new regulatory regime tailored to AI. If he’s right, then in a way that’s good news, since regulations are so notoriously difficult to push through. But he emphasizes we still need a robust governance response to the risks at hand. What are those responses? Have a listen and find out!</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;It’s common to hear we need new regulations to avoid the risks of AI (bias, privacy violations, manipulation, etc.). But my guest, Dean Ball, thinks this claim is too hastily made. In fact, he argues, we don’t need a new regulatory regime tailored to AI. If he’s right, then in a way that’s good news, since regulations are so notoriously difficult to push through. But he emphasizes we still need a robust governance response to the risks at hand. What are those responses? Have a listen and find out!&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="48520881" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/94d5f16b-c6d5-439e-a8b1-68fc24adf6d7/stream.mp3"/>
                
                <guid isPermaLink="false">548ea744-7fe3-4db9-80e4-c26cba53e5b7</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/94d5f16b-c6d5-439e-a8b1-68fc24adf6d7</link>
                <pubDate>Thu, 08 Aug 2024 07:26:22 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/8/8/7/63375711-bdc5-417c-bebd-44a7c3ff8858_ethicalmachinestemplate___2000_x_2000_px___1_.jpg"/>
                <itunes:duration>3032</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>When Biased AI is Good</itunes:title>
                <title>When Biased AI is Good</title>

                <itunes:episode>9</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Everyone knows biased or discriminatory AI is bad and we need to get rid of it, right? Well, not so fast.</p><p>I’m bringing one of the best episodes from Season 1 back. I talk to David Danks, a professor of data science and philosophy at UCSD. He and his research team argue that we need to reconceive our approach to biased AI. In some cases, David thinks, it can be beneficial. Good policy - both corporate and regulatory - needs to take this into account.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Everyone knows biased or discriminatory AI is bad and we need to get rid of it, right? Well, not so fast.&lt;/p&gt;&lt;p&gt;I’m bringing one of the best episodes from Season 1 back. I talk to David Danks, a professor of data science and philosophy at UCSD. He and his research team argue that we need to reconceive our approach to biased AI. In some cases, David thinks, it can be beneficial. Good policy - both corporate and regulatory - needs to take this into account.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="45478974" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/2ee0233a-b590-47e7-ac28-6985df41e5a2/stream.mp3"/>
                
                <guid isPermaLink="false">bf2283a8-2104-4085-bb4e-bd0d8fa2f3a0</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/2ee0233a-b590-47e7-ac28-6985df41e5a2</link>
                <pubDate>Thu, 01 Aug 2024 07:12:28 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/8/1/7/1fdc12e2-7fd6-460a-b88a-bc2762238dc7_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>2842</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Secret Life of Data</itunes:title>
                <title>The Secret Life of Data</title>

                <itunes:episode>8</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p><span>Data about us is collected, aggregated, and shared in more ways than we can count. In some cases, this leads to great benefits. In others, a great deal of harm. But at the end of the day, the truth is that it’s all out of control. No individual, nor any private company, nor any government has a grip on what gets collected, what gets done with it, and what the societal impacts are. In this episode I talk to Aram Sinnreich and Jesse Gilbert about their new book, “The Secret Life of Data,” in which they explain the complexity and how we should begin to take back control.</span></p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;&lt;span&gt;Data about us is collected, aggregated, and shared in more ways than we can count. In some cases, this leads to great benefits. In others, a great deal of harm. But at the end of the day, the truth is that it’s all out of control. No individual, nor any private company, nor any government has a grip on what gets collected, what gets done with it, and what the societal impacts are. In this episode I talk to Aram Sinnreich and Jesse Gilbert about their new book, “The Secret Life of Data,” in which they explain the complexity and how we should begin to take back control.&lt;/span&gt;&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="47042560" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/eda8a8fa-21cc-488e-941f-e8b749013fc2/stream.mp3"/>
                
                <guid isPermaLink="false">892576c0-3c82-4453-b507-a8fdffa7d0c1</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/eda8a8fa-21cc-488e-941f-e8b749013fc2</link>
                <pubDate>Thu, 25 Jul 2024 03:45:19 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/7/25/6/1862bc4f-f35c-4a0f-a877-5c96004ed65b_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>2940</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Necessary Imperfections of AI Content Moderation</itunes:title>
                <title>The Necessary Imperfections of AI Content Moderation</title>

                <itunes:episode>7</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>With the ocean of social media content, we need AI to identify and remove inappropriate material; humans just can’t keep up. But AI doesn’t assess content the same way we do. It’s not a deliberative body akin to the Supreme Court. But because we think of content moderation as a reflection of human evaluation, we then make unreasonable demands of social media companies and ask for regulations that won’t protect anyone. When we reframe what AI content moderation is and has to be, my guest argues, that leads us to make more reasonable and more effective demands of social media companies and government.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;With the ocean of social media content, we need AI to identify and remove inappropriate material; humans just can’t keep up. But AI doesn’t assess content the same way we do. It’s not a deliberative body akin to the Supreme Court. But because we think of content moderation as a reflection of human evaluation, we then make unreasonable demands of social media companies and ask for regulations that won’t protect anyone. When we reframe what AI content moderation is and has to be, my guest argues, that leads us to make more reasonable and more effective demands of social media companies and government.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="54002834" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/a4b945dd-9401-45c0-9da5-3f507948c0b4/stream.mp3"/>
                
                <guid isPermaLink="false">0518853d-5974-4e12-9d18-b38a63c9db21</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/a4b945dd-9401-45c0-9da5-3f507948c0b4</link>
                <pubDate>Thu, 18 Jul 2024 07:20:07 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/7/18/8/6584351a-19eb-490f-bd83-610293684a07_9f33_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>3375</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>AI Armageddon is Unlikely</itunes:title>
                <title>AI Armageddon is Unlikely</title>

                <itunes:episode>6</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>AI + nuclear capacities sounds like a recipe for disaster. Some people think it could cause mass extinction. While it’s easy to let our imaginations run wild, insight into how the military actually incorporates AI into its weapons and operations is a much better idea. Heather gives us precisely those insights and (thus) the opportunity to think clearly about the threat.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;AI &#43; nuclear capacities sounds like a recipe for disaster. Some people think it could cause mass extinction. While it’s easy to let our imaginations run wild, insight into how the military actually incorporates AI into its weapons and operations is a much better idea. Heather gives us precisely those insights and (thus) the opportunity to think clearly about the threat.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="48779180" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/afb2ec86-041c-475b-9ba2-02746453d254/stream.mp3"/>
                
                <guid isPermaLink="false">aebb35f0-62f9-44bc-babb-363f78522a88</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/afb2ec86-041c-475b-9ba2-02746453d254</link>
                <pubDate>Thu, 11 Jul 2024 08:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/7/11/12/ed5a17b8-28f0-442e-9d79-d6f2de173bfb_105c_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>3048</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Could AI Undermine Informed Consent?</itunes:title>
                <title>Could AI Undermine Informed Consent?</title>

                <itunes:episode>5</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>AI holds a lot of promise in making faster, more accurate diagnoses of our ailments. But if these systems are too influential, might they undermine our doctors’ ability to understand the rationale for a diagnosis? And could they undermine the aspect of the doctor-patient relationship that is crucial for maintaining patient autonomy?</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;AI holds a lot of promise in making faster, more accurate diagnoses of our ailments. But if these systems are too influential, might they undermine our doctors’ ability to understand the rationale for a diagnosis? And could they undermine the aspect of the doctor-patient relationship that is crucial for maintaining patient autonomy?&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="47402004" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/52ef0551-3626-42ff-bec4-1846e0e1242d/stream.mp3"/>
                
                <guid isPermaLink="false">1f6b025d-255a-4ff0-974f-87d803c5a716</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/52ef0551-3626-42ff-bec4-1846e0e1242d</link>
                <pubDate>Thu, 04 Jul 2024 08:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/7/4/3/bb056716-333f-4c8f-9351-382339658119_ethicalmachinestemplate___2000_x_2000_px_.jpg"/>
                <itunes:duration>2962</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Data Privacy Isn’t as Important as You Think</itunes:title>
                <title>Data Privacy Isn’t as Important as You Think</title>

                <itunes:episode>4</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Privacy is important. But I think we mostly misconceive the nature of privacy and data privacy. I argue we should rethink data privacy so that we can both focus better on how to protect people and so we can enable legitimately desirable innovations.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Privacy is important. But I think we mostly misconceive the nature of privacy and data privacy. I argue we should rethink data privacy so that we can both focus better on how to protect people and so we can enable legitimately desirable innovations.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="17949675" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/721131e2-4af2-4bf7-9ec2-695432f5f578/stream.mp3"/>
                
                <guid isPermaLink="false">10412db6-017e-45fb-8ebd-079aa3670af5</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/721131e2-4af2-4bf7-9ec2-695432f5f578</link>
                <pubDate>Thu, 27 Jun 2024 11:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/6/27/8/90a5df32-cd0b-4468-b05e-eceec5c7da36_d8e63e1_af5f765e-7c67-4b23-863a-b4875ba9820e_4.jpg"/>
                <itunes:duration>1121</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Existentialist Risk</itunes:title>
                <title>Existentialist Risk</title>

                <itunes:episode>3</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Technologists are racing to create AGI, artificial general intelligence. They also say we must align the AGI’s moral values with our own. But Professors Ariela Tubert and Justin Tiehen argue that’s impossible. Once you create an AGI, they say, you also give it the intellectual capacity needed for freedom, including the freedom to reject the values you give it.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Technologists are racing to create AGI, artificial general intelligence. They also say we must align the AGI’s moral values with our own. But Professors Ariela Tubert and Justin Tiehen argue that’s impossible. Once you create an AGI, they say, you also give it the intellectual capacity needed for freedom, including the freedom to reject the values you give it.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="44710765" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/3f9cb371-e1f1-4aaf-8c76-81b097fb2e49/stream.mp3"/>
                
                <guid isPermaLink="false">4a78dc05-36c3-45dc-99fc-f7950574f7dd</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/3f9cb371-e1f1-4aaf-8c76-81b097fb2e49</link>
                <pubDate>Thu, 27 Jun 2024 09:59:34 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/6/27/9/1375290a-aaba-4491-87cf-9de138d4b2f5_9264f2a9-2969-4f2e-ba3f-f09ed8eaf854_3.jpg"/>
                <itunes:duration>2794</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>Is Equity Always Valuable?</itunes:title>
                <title>Is Equity Always Valuable?</title>

                <itunes:episode>2</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Of course, decreasing racial disparities in healthcare is ethically imperative. But does it sometimes require too great a moral sacrifice? If it costs more lives than a non-equitable distribution of healthcare resources, should we really do it? Professors Guha Krishnamurthi and Eric Vogelstein argue that equity is not always a moral trump card.</p><p>#ai</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Of course, decreasing racial disparities in healthcare is ethically imperative. But does it sometimes require too great a moral sacrifice? If it costs more lives than a non-equitable distribution of healthcare resources, should we really do it? Professors Guha Krishnamurthi and Eric Vogelstein argue that equity is not always a moral trump card.&lt;/p&gt;&lt;p&gt;#ai&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="21718413" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/80a8403c-8158-41c8-bcab-dfd0eb1edbba/stream.mp3"/>
                
                <guid isPermaLink="false">1e4fbcd5-96f3-47e2-8fde-bd16814b8953</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/80a8403c-8158-41c8-bcab-dfd0eb1edbba</link>
                <pubDate>Thu, 27 Jun 2024 09:00:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/6/29/16/cad13915-543d-48d8-b947-41d06b4ecd66_7c8b458_df75aabc-469d-4541-b4d7-aacce67af07f_2.jpg"/>
                <itunes:duration>1357</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>The Moral Weight of Online Sexual Assault</itunes:title>
                <title>The Moral Weight of Online Sexual Assault</title>

                <itunes:episode>1</itunes:episode>
                <itunes:season>2</itunes:season>
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Could online sexual assault be as morally bad as in-person sexual assault? Honestly, that initially struck me as a bit crazy. But Professor John Danaher makes some very compelling arguments.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Could online sexual assault be as morally bad as in-person sexual assault? Honestly, that initially struck me as a bit crazy. But Professor John Danaher makes some very compelling arguments.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="52068101" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/95159de2-12da-43c7-ae88-90bad710f133/stream.mp3"/>
                
                <guid isPermaLink="false">d1ea6b02-ebba-4647-97e3-ddbb7a0997f8</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/95159de2-12da-43c7-ae88-90bad710f133</link>
                <pubDate>Thu, 27 Jun 2024 07:47:54 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/6/27/7/a8207577-e430-49b7-9a38-fa933308f83c_cde67372-9619-4a7e-8105-47e163887086_1.jpg"/>
                <itunes:duration>3254</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>trailer</itunes:episodeType>
                <itunes:title>Welcome to Ethical Machines</itunes:title>
                <title>Welcome to Ethical Machines</title>

                
                
                <itunes:author>Reid Blackman</itunes:author>
                
                <description><![CDATA[<p>Welcome to Ethical Machines—a weekly podcast from author and ethicist Reid Blackman. Episodes drop June 27th.</p><br/><br/>Advertising Inquiries: <a href='https://redcircle.com/brands'>https://redcircle.com/brands</a>]]></description>
                <content:encoded>&lt;p&gt;Welcome to Ethical Machines—a weekly podcast from author and ethicist Reid Blackman. Episodes drop June 27th.&lt;/p&gt;&lt;br/&gt;&lt;br/&gt;Advertising Inquiries: &lt;a href=&#39;https://redcircle.com/brands&#39;&gt;https://redcircle.com/brands&lt;/a&gt;</content:encoded>
                
                <enclosure length="1620009" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/663c6ad9-c2a4-4996-87a7-6e15553035ff/stream.mp3"/>
                
                <guid isPermaLink="false">a57616c4-f4ef-4a6e-9cd4-d564b85d3797</guid>
                <link>https://redcircle.com/shows/b78540f3-d05f-4269-bdb3-e22c1aca55ed/episodes/663c6ad9-c2a4-4996-87a7-6e15553035ff</link>
                <pubDate>Fri, 07 Jun 2024 15:14:17 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/6/7/15/19a15751-8693-488d-806d-be6c3cbbf3dc_ethical_machines_album_cover.jpg"/>
                <itunes:duration>101</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
    </channel>
</rss>
