<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:podcast="https://podcastindex.org/namespace/1.0">
    <channel>
        <generator>RedCircle VERIFY_TOKEN_ab69505d-0402-42eb-b42f-e038cc2540c7  -- Rendered At Wed, 29 Apr 2026 04:43:00 &#43;0000</generator>
        <title>IX Lab Research</title>
        <link>https://redcircle.com/shows/ix-lab-research</link>
        <language>en-US</language>
        <copyright>All rights reserved.</copyright>
        <itunes:author>Jacek Gwizdka</itunes:author>
        <itunes:summary>Popularizing research in the Information eXperience Lab at The University of Texas at Austin</itunes:summary>
        <podcast:guid>ab69505d-0402-42eb-b42f-e038cc2540c7</podcast:guid>
        
        <description><![CDATA[<p>Popularizing research in the Information eXperience Lab at The University of Texas at Austin</p>]]></description>
        
        <itunes:type>episodic</itunes:type>
        <podcast:locked>no</podcast:locked>
        <itunes:owner>
            <itunes:name>Jacek Gwizdka</itunes:name>
            <itunes:email>ixlab.ut@gmail.com</itunes:email>
        </itunes:owner>
        
        <itunes:image href="https://media.redcircle.com/images/2024/12/10/22/d0234d98-20ea-433d-886b-5a66edb4ad94_-bc76-5c97cf36ebc1_ix-lab-logo-new-vertical-sq.jpg"/>
        
        
        
            
            <itunes:category text="Science">

            
                <itunes:category text="Natural Sciences"/>
            
                <itunes:category text="Mathematics"/>
            

        </itunes:category>
        
            
            <itunes:category text="Arts">

            
                <itunes:category text="Design"/>
            

        </itunes:category>
        
            
            <itunes:category text="Education" />

            

        
        

        
        <itunes:explicit>no</itunes:explicit>
        
        
        
        
        
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>012: Attention! What We Measure in IIR Studies</itunes:title>
                <title>012: Attention! What We Measure in IIR Studies</title>

                <itunes:episode>12</itunes:episode>
                <itunes:season>1</itunes:season>
                <itunes:author>Jacek Gwizdka</itunes:author>
                <itunes:summary>This perspective paper examines how the concept of human attention has been defined and measured within the CHIIR conference archives from 2016 to 2025. The authors reveal that while attention is a fundamental element of interactive information retrieval, it is rarely explicitly defined by researchers, who often rely on implicit assumptions rather than established cognitive theories. Through a systematic review of nineteen key papers, the study identifies visual attention as the primary area of focus, typically operationalized through eye-tracking metrics like fixation duration and gaze patterns. The text highlights a significant gap between methodological practice and theoretical foundations, noting that observable behaviors may not fully capture complex internal cognitive processes. To address these inconsistencies, the authors propose a theory-driven framework and recommend multi-method approaches to improve the rigor and reproducibility of future user studies. Ultimately, the source serves as a call to action for the research community to establish conceptual clarity when analyzing how users allocate their limited mental resources.
</itunes:summary>
                <description><![CDATA[<p>Zhang, D., Jayawardena, G., &amp; Gwizdka, J. (2026). Attention! Rethinking What We Measure in CHIIR Studies. <em>Proceedings of the 2026 Conference on Human Information Interaction and Retrieval, CHIIR ’26</em>, 100–110. <a href="https://doi.org/10.1145/3786304.3787944" rel="nofollow">https://doi.org/10.1145/3786304.3787944</a></p><p>Copyright (C) is held by the paper authors.</p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Illustration generated by ChatGPT. Music (C) Jacek Gwizdka.</p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;p&gt;Zhang, D., Jayawardena, G., &amp;amp; Gwizdka, J. (2026). Attention! Rethinking What We Measure in CHIIR Studies. &lt;em&gt;Proceedings of the 2026 Conference on Human Information Interaction and Retrieval, CHIIR ’26&lt;/em&gt;, 100–110. &lt;a href=&#34;https://doi.org/10.1145/3786304.3787944&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1145/3786304.3787944&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Copyright (C) is held by the paper authors.&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Illustration generated by ChatGPT. Music (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="4913528" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/ba8fe580-956b-43b1-b0a6-1a79a96e9ded/stream.mp3"/>
                
                <guid isPermaLink="false">cdcf050f-11dd-41ec-8f8c-3f3ec43db986</guid>
                <link>https://ixlab.us/</link>
                <pubDate>Fri, 17 Apr 2026 17:00:50 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/4/8/22/cc7724c6-51f5-4228-a50f-5e792d1309c4_012.jpg"/>
                <itunes:duration>307</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>011: Analyzing Gaze Transition Behavior Using Bayesian Mixed Effects Markov Models</itunes:title>
                <title>011: Analyzing Gaze Transition Behavior Using Bayesian Mixed Effects Markov Models</title>

                <itunes:episode>11</itunes:episode>
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                <itunes:summary>This research paper explores a sophisticated statistical framework for analyzing how people’s eyes move between different areas of an image. The authors utilize a Bayesian mixed-effects Markov model to study gaze transitions, allowing them to account for both individual differences and broader experimental factors. By tracking participants as they viewed distorted images of landmarks and artwork, the study investigated how image category and object recognition influence visual scanning patterns. Their findings indicate that the type of image being viewed significantly impacts gaze behavior, whereas whether a person recognized the image had a more subtle effect. Ultimately, the work demonstrates that this computational approach provides a more nuanced understanding of human visual attention than traditional analysis methods.</itunes:summary>
                <description><![CDATA[<p>Ebeid, I. A., Bhattacharya, N., Gwizdka, J., &amp; Sarkar, A. (2019). Analyzing Gaze Transition Behavior Using Bayesian Mixed Effects Markov Models. <em>Proceedings of the 11th ACM Symposium on Eye Tracking Research &amp; Applications, ETRA ’19</em>, 5:1-5:5. <a href="https://doi.org/10.1145/3314111.3319839" rel="nofollow">https://doi.org/10.1145/3314111.3319839</a></p><p>Copyright (C) is held by the paper authors.</p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Illustration generated by ChatGPT. Music (C) Jacek Gwizdka.</p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;p&gt;Ebeid, I. A., Bhattacharya, N., Gwizdka, J., &amp;amp; Sarkar, A. (2019). Analyzing Gaze Transition Behavior Using Bayesian Mixed Effects Markov Models. &lt;em&gt;Proceedings of the 11th ACM Symposium on Eye Tracking Research &amp;amp; Applications, ETRA ’19&lt;/em&gt;, 5:1-5:5. &lt;a href=&#34;https://doi.org/10.1145/3314111.3319839&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1145/3314111.3319839&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Copyright (C) is held by the paper authors.&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Illustration generated by ChatGPT. Music (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="11399000" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/c1888ac9-dbd4-465b-96b4-13e969c1dc59/stream.mp3"/>
                
                <guid isPermaLink="false">897371c1-2b9b-4fb6-ac34-9689acb380ed</guid>
                <link>https://ixlab.us/</link>
                <pubDate>Tue, 07 Apr 2026 17:33:24 &#43;0000</pubDate>
                <itunes:duration>712</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>010: JEMR Journal paper: Real-time Mental Effort Measurement Using Pupillometry</itunes:title>
                <title>010: JEMR Journal paper: Real-time Mental Effort Measurement Using Pupillometry</title>

                
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                <itunes:summary>This article introduces RIPA2, an advanced pupillometric algorithm designed to measure mental effort in real time. By refining the parameters of Savitzky–Golay filters, the researchers successfully isolated pupil diameter fluctuations within specific frequency bands linked to cognitive load. The study validated this new method through both a structured N-back memory task and a naturalistic information-search task. Findings indicate that RIPA2 is more sensitive and reliable than its predecessor, effectively distinguishing between different levels of task difficulty and types of decision-making. The authors suggest this technology could enhance adaptive user interfaces and cognitive monitoring in professional or educational environments. This research provides a robust, low-latency solution for assessing psychological strain beyond traditional laboratory settings.</itunes:summary>
                <description><![CDATA[<p>Jayawardena, G., Jayawardana, Y., &amp; Gwizdka, J. (2025). Measuring Mental Effort in Real Time Using Pupillometry. <em>Journal of Eye Movement Research</em>, <em>18</em>(6), 70. <a href="https://doi.org/10.3390/jemr18060070" rel="nofollow">https://doi.org/10.3390/jemr18060070</a></p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT. Music intro (C) Jacek Gwizdka.</p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Gavindya Jayawardena: <a href="https://www.linkedin.com/in/gavindya-jayawardena/" rel="nofollow">https://www.linkedin.com/in/gavindya-jayawardena/</a></p><p>Dr. Jacek Gwizdka: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;p&gt;Jayawardena, G., Jayawardana, Y., &amp;amp; Gwizdka, J. (2025). Measuring Mental Effort in Real Time Using Pupillometry. &lt;em&gt;Journal of Eye Movement Research&lt;/em&gt;, &lt;em&gt;18&lt;/em&gt;(6), 70. &lt;a href=&#34;https://doi.org/10.3390/jemr18060070&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.3390/jemr18060070&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT. Music intro (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Gavindya Jayawardena: &lt;a href=&#34;https://www.linkedin.com/in/gavindya-jayawardena/&#34; rel=&#34;nofollow&#34;&gt;https://www.linkedin.com/in/gavindya-jayawardena/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="6047451" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/303cf2c7-1755-4d09-96f4-e394ebb66065/stream.mp3"/>
                
                <guid isPermaLink="false">a563c812-c49d-4815-bb3a-b12a8146fbad</guid>
                <link>https://redcircle.com/shows/ab69505d-0402-42eb-b42f-e038cc2540c7/episodes/303cf2c7-1755-4d09-96f4-e394ebb66065</link>
                <pubDate>Wed, 25 Mar 2026 18:00:11 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/3/31/1/87ebef15-6d5b-44b9-b6a6-5f600cd25842_ripa.jpg"/>
                <itunes:duration>377</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>008: CHIIR 2026 research paper: Working Memory, Task Complexity, and Cognitive Load</itunes:title>
                <title>008: CHIIR 2026 research paper: Working Memory, Task Complexity, and Cognitive Load</title>

                <itunes:episode>8</itunes:episode>
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                <itunes:summary>This paper presents a 2026 research study from the University of Texas at Austin examining how working memory capacity and task complexity influence a user&#39;s cognitive load and emotional state. By utilizing pupillometry and facial expression analysis, the researchers tracked the real-time mental effort of participants engaging in fact-checking and decision-making web searches. The findings reveal that complex decision-making significantly increases cognitive load, particularly for individuals with lower memory capacity who also experience higher levels of confusion and surprise. Conversely, users with higher memory capacity managed mental demands more efficiently and maintained positive emotions like joy and engagement during difficult tasks. This work suggests that digital interfaces should be designed to adapt to a user&#39;s unique cognitive profile to prevent information overload. Ultimately, the study highlights the deep intersection between human memory, emotional resilience, and information retrieval in modern computing environments.</itunes:summary>
                <description><![CDATA[<p>Jayawardena, G., Li, S., &amp; Gwizdka, J. (2026). Effects of Working Memory Capacity and Search Task Complexity on Cognitive Load. <em>Proceedings of the 2026 ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR ’26</em>. CHIIR, New York, NY, USA. <a href="https://doi.org/10.1145/3786304.3787871" rel="nofollow">https://doi.org/10.1145/3786304.3787871</a></p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.</p><p>Music intro (C) Jacek Gwizdka.</p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;p&gt;Jayawardena, G., Li, S., &amp;amp; Gwizdka, J. (2026). Effects of Working Memory Capacity and Search Task Complexity on Cognitive Load. &lt;em&gt;Proceedings of the 2026 ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR ’26&lt;/em&gt;. CHIIR, New York, NY, USA. &lt;a href=&#34;https://doi.org/10.1145/3786304.3787871&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1145/3786304.3787871&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.&lt;/p&gt;&lt;p&gt;Music intro (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="5345697" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/b9e08ba1-0928-48c0-9620-d979657f3aed/stream.mp3"/>
                
                <guid isPermaLink="false">cd1cf9ae-e226-44d6-99c6-a26b1e609cb9</guid>
                <link>https://doi.org/10.1145/3786304.3787871</link>
                <pubDate>Sun, 22 Mar 2026 16:30:38 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/3/20/5/dde086d3-8641-4b60-9513-f8e66b971a05_008.jpg"/>
                <itunes:duration>334</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>009: CHIIR 2025 resource paper: g-Rel Reader</itunes:title>
                <title>009: CHIIR 2025 resource paper: g-Rel Reader</title>

                <itunes:episode>9</itunes:episode>
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                <itunes:summary>This paper introduces g-Rel-READER, a specialized dataset designed to help researchers detect how humans judge the relevance of information using neurophysiological signals. By combining eye-tracking data and EEG recordings, the authors created a resource that maps brain activity and gaze patterns to specific words during reading tasks. The data was gathered from twenty-four participants who evaluated news stories for their usefulness in answering specific questions. This collection is unique because it provides precise spatial alignment between the text on the screen and the biological responses of the reader. Ultimately, the dataset aims to improve information retrieval systems by enabling them to recognize user interest and decision-making processes in real time.</itunes:summary>
                <description><![CDATA[<p>Gwizdka, J., &amp; Cole, M. (2025). g-Rel-READER: A Dataset for Relevance and Reading Evaluation through Advanced Data from Eye-tracking and EEG Recordings. <em>Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR ’25</em>, 377–381. <a href="https://doi.org/10.1145/3698204.3716474" rel="nofollow">https://doi.org/10.1145/3698204.3716474</a></p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.</p><p>Music intro (C) Jacek Gwizdka.</p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;p&gt;Gwizdka, J., &amp;amp; Cole, M. (2025). g-Rel-READER: A Dataset for Relevance and Reading Evaluation through Advanced Data from Eye-tracking and EEG Recordings. &lt;em&gt;Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR ’25&lt;/em&gt;, 377–381. &lt;a href=&#34;https://doi.org/10.1145/3698204.3716474&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1145/3698204.3716474&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.&lt;/p&gt;&lt;p&gt;Music intro (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="8856973" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/a780182f-a109-49c8-b79b-8e1d1de47665/stream.mp3"/>
                
                <guid isPermaLink="false">c55bc08c-19e2-4738-a8b1-60b15f6d86b9</guid>
                <link>https://doi.org/10.1145/3698204.3716474</link>
                <pubDate>Fri, 20 Mar 2026 06:01:34 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2026/3/20/6/0ce53a18-a1e1-4b59-8b08-a711325544de_009.jpg"/>
                <itunes:duration>553</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>007: ETRA 2025 research paper: Real-Time Attention Capture in Visual Search</itunes:title>
                <title>007: ETRA 2025 research paper: Real-Time Attention Capture in Visual Search</title>

                
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                
                <description><![CDATA[<p>Jayawardena, G., Jayawardana, Y., Abeysinghe, Y., Mahanama, B., Jayarathna, S., &amp; Gwizdka, J. (2025). A Real-Time Approach to Capture Ambient and Focal Attention in Visual Search. <em>Proceedings of the 2025 Symposium on Eye Tracking Research and Applications</em>, 1–7. <a href="https://doi.org/10.1145/3715669.3723111" rel="nofollow">https://doi.org/10.1145/3715669.3723111</a></p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.</p><p>Music intro (C) Jacek Gwizdka.</p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;p&gt;Jayawardena, G., Jayawardana, Y., Abeysinghe, Y., Mahanama, B., Jayarathna, S., &amp;amp; Gwizdka, J. (2025). A Real-Time Approach to Capture Ambient and Focal Attention in Visual Search. &lt;em&gt;Proceedings of the 2025 Symposium on Eye Tracking Research and Applications&lt;/em&gt;, 1–7. &lt;a href=&#34;https://doi.org/10.1145/3715669.3723111&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1145/3715669.3723111&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.&lt;/p&gt;&lt;p&gt;Music intro (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="8997825" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/e1b1e16c-f089-4a1c-8f57-0d85b9f4eb55/stream.mp3"/>
                
                <guid isPermaLink="false">87a1c50d-4231-43fe-9ada-6a97764f2647</guid>
                <link>https://redcircle.com/shows/ab69505d-0402-42eb-b42f-e038cc2540c7/episodes/e1b1e16c-f089-4a1c-8f57-0d85b9f4eb55</link>
                <pubDate>Sat, 07 Jun 2025 18:38:45 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/6/7/18/c5aded63-d23a-4d91-ba51-481f587fefdc_e57_1e6615af-8702-4e32-a3f5-0269b3e651fe_image.jpg"/>
                <itunes:duration>562</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>006: CHIIR 2025 research paper: Pupillometric Analysis of Cognitive Load in Relation to Relevance and Confirmation Bias</itunes:title>
                <title>006: CHIIR 2025 research paper: Pupillometric Analysis of Cognitive Load in Relation to Relevance and Confirmation Bias</title>

                
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                
                <description><![CDATA[<p>Shi, L., Jayawardena, G., &amp; Gwizdka, J. (2025). Pupillometric Analysis of Cognitive Load in Relation to Relevance and Confirmation Bias. <em>Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval</em>, 219–230. <a href="https://doi.org/10.1145/3698204.3716458" rel="nofollow">https://doi.org/10.1145/3698204.3716458</a></p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.</p><p>Music intro (C) Jacek Gwizdka.</p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p><p>IX Lab website: <a href="https://ixlab.ischool.utexas.edu/" rel="nofollow">https://ixlab.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;p&gt;Shi, L., Jayawardena, G., &amp;amp; Gwizdka, J. (2025). Pupillometric Analysis of Cognitive Load in Relation to Relevance and Confirmation Bias. &lt;em&gt;Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval&lt;/em&gt;, 219–230. &lt;a href=&#34;https://doi.org/10.1145/3698204.3716458&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1145/3698204.3716458&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.&lt;/p&gt;&lt;p&gt;Music intro (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="5596891" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/0af34497-97f5-4838-957a-344a56078770/stream.mp3"/>
                
                <guid isPermaLink="false">2f8b29b4-c8d8-4f49-988e-ab618fa91b82</guid>
                <link>https://redcircle.com/shows/ab69505d-0402-42eb-b42f-e038cc2540c7/episodes/0af34497-97f5-4838-957a-344a56078770</link>
                <pubDate>Sat, 07 Jun 2025 15:18:23 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2025/6/7/16/8e2298d8-ad56-4606-8d13-988e47a89e06_-bcac-c6f025ac8ce9_ix-lab-logo-new-vertical-sq.jpg"/>
                <itunes:duration>349</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>005: CHIIR 2025 resource paper: g-Rel-READER Dataset</itunes:title>
                <title>005: CHIIR 2025 resource paper: g-Rel-READER Dataset</title>

                
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                
                <description><![CDATA[<p>Gwizdka, J., &amp; Cole, M. (2025). g-Rel-READER: A Dataset for Relevance and Reading Evaluation through Advanced Data from Eye-tracking and EEG Recordings. <em>Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval</em>, 377–381. <a href="https://doi.org/10.1145/3698204.3716474" rel="nofollow">https://doi.org/10.1145/3698204.3716474</a></p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.</p><p>Music intro (C) Jacek Gwizdka.</p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p><p>IX Lab website: <a href="https://ixlab.ischool.utexas.edu/" rel="nofollow">https://ixlab.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;p&gt;Gwizdka, J., &amp;amp; Cole, M. (2025). g-Rel-READER: A Dataset for Relevance and Reading Evaluation through Advanced Data from Eye-tracking and EEG Recordings. &lt;em&gt;Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval&lt;/em&gt;, 377–381. &lt;a href=&#34;https://doi.org/10.1145/3698204.3716474&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1145/3698204.3716474&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.&lt;/p&gt;&lt;p&gt;Music intro (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="8856973" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/a3bab923-c377-4aec-97fb-fc51a8cb295b/stream.mp3"/>
                
                <guid isPermaLink="false">6c10f140-0207-4b76-9c02-3d8a2f7c8271</guid>
                <link>https://redcircle.com/shows/ab69505d-0402-42eb-b42f-e038cc2540c7/episodes/a3bab923-c377-4aec-97fb-fc51a8cb295b</link>
                <pubDate>Sat, 22 Mar 2025 22:45:06 &#43;0000</pubDate>
                <itunes:duration>553</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>004-True or false? Reading COVID-19 news headlines</itunes:title>
                <title>004-True or false? Reading COVID-19 news headlines</title>

                
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                
                <description><![CDATA[<h2>IX Lab Research - 004</h2><p><br></p><p>This episode introduces a 2023 paper presented at the ACM SIGIR Conference on Human Information Interaction and Retrieval. This research was conducted by PhD students Li Shi, Nilavra Bhattacharya, and Anubrata Das under the supervision of Dr. Jacek Gwizdka and Dr. Matt Lease.</p><p>Shi, L., Bhattacharya, N., Das, A., &amp; Gwizdka, J. (2023). True or false? Cognitive load when reading COVID-19 news headlines: an eye-tracking study. <em>Proceedings of the 2023 Conference on Human Information Interaction and Retrieval</em>, 107–116. <a href="https://doi.org/10.1145/3576840.3578290" rel="nofollow">https://doi.org/10.1145/3576840.3578290</a></p><p><br></p><h3>Summary</h3><p>This podcast discussion summarizes a study using eye-tracking to examine how people process online information. The research reveals position bias, where information presented first receives disproportionate attention. Pupil dilation indicated increased cognitive effort when participants encountered information that contradicted their personal beliefs or presented incorrect evidence, especially when it aligned with pre-existing biases. Interestingly, belief changes didn&#39;t significantly impact cognitive load. The study highlights how our brains prioritize information confirming existing beliefs, even if inaccurate, emphasizing the need for critical thinking and awareness of cognitive biases.</p><p><br></p><p>The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.</p><p>Music intro (C) Jacek Gwizdka.</p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p>]]></description>
                <content:encoded>&lt;h2&gt;IX Lab Research - 004&lt;/h2&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;This episode introduces a 2023 paper presented at the ACM SIGIR Conference on Human Information Interaction and Retrieval. This research was conducted by PhD students Li Shi, Nilavra Bhattacharya, and Anubrata Das under the supervision of Dr. Jacek Gwizdka and Dr. Matt Lease.&lt;/p&gt;&lt;p&gt;Shi, L., Bhattacharya, N., Das, A., &amp;amp; Gwizdka, J. (2023). True or false? Cognitive load when reading COVID-19 news headlines: an eye-tracking study. &lt;em&gt;Proceedings of the 2023 Conference on Human Information Interaction and Retrieval&lt;/em&gt;, 107–116. &lt;a href=&#34;https://doi.org/10.1145/3576840.3578290&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1145/3576840.3578290&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;h3&gt;Summary&lt;/h3&gt;&lt;p&gt;This podcast discussion summarizes a study using eye-tracking to examine how people process online information. The research reveals position bias, where information presented first receives disproportionate attention. Pupil dilation indicated increased cognitive effort when participants encountered information that contradicted their personal beliefs or presented incorrect evidence, especially when it aligned with pre-existing biases. Interestingly, belief changes didn&amp;#39;t significantly impact cognitive load. The study highlights how our brains prioritize information confirming existing beliefs, even if inaccurate, emphasizing the need for critical thinking and awareness of cognitive biases.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;The conversation in this podcast was generated by AI (NotebookLM.google) from the full text of the paper. Copyright (C) is held by the paper authors. Illustration generated by ChatGPT.&lt;/p&gt;&lt;p&gt;Music intro (C) Jacek Gwizdka.&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;</content:encoded>
                
                <enclosure length="10922109" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/34e8c270-47e3-4771-bc35-e6667748af99/stream.mp3"/>
                
                <guid isPermaLink="false">3453383e-1bed-498b-9877-dc1ea8fc5448</guid>
                <link>https://redcircle.com/shows/ab69505d-0402-42eb-b42f-e038cc2540c7/episodes/34e8c270-47e3-4771-bc35-e6667748af99</link>
                <pubDate>Tue, 10 Dec 2024 22:42:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/12/11/4/181d8573-8479-40a5-9b10-7a57c2a1eea0_-46b4032d8c41_chiir23_presentation_2.1-jg_copy.jpg"/>
                <itunes:duration>682</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>003-From brain waves to words: Using AI to convert brain signals to text</itunes:title>
                <title>003-From brain waves to words: Using AI to convert brain signals to text</title>

                <itunes:episode>3</itunes:episode>
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                
                <description><![CDATA[<h2>IX Lab Research - 003</h2><p>This episode discusses the 2024 paper: Mishra, A., Shukla, S., Torres, J., Gwizdka, J., &amp; Roychowdhury, S. (2024). <em>Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs)</em> (No. arXiv:2410.07507). arXiv. <a href="https://doi.org/10.48550/arXiv.2410.07507" rel="nofollow">https://doi.org/10.48550/arXiv.2410.07507</a></p><p>This research was led by Dr. Abhijit Mishra as the principal investigator, with students Shreya Shukla and Jose Torres, and with contributions from Dr. Jacek Gwizdka and Dr. Shounak Roychowdhury.</p><p><br></p><h3>Summary</h3><p>Researchers at the <strong>University of Texas at Austin</strong> are developing technology to translate brainwaves into text using electroencephalography (EEG) and large language models (LLMs). The system employs a three-stage process: training an EEG encoder to extract features, fine-tuning LLMs on multimodal data (images and text), and further refining the LLMs with EEG embeddings for text generation. Experiments using a public EEG dataset demonstrate the effectiveness of this approach, surpassing chance performance and showing promise for future applications in assistive technologies and neuroscience. While the technology shows promising results, it&#39;s still in its early stages and faces challenges such as noise in EEG data, the limited spatial resolution of EEG, and ethical concerns about privacy and bias. Potential applications include assistive technology for communication impairments and advances in healthcare and education.</p><p><br></p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p><p>The audio for this conversation has been generated by AI using: https://notebooklm.google/</p><p>Music intro created by a human (C) Jacek Gwizdka</p>]]></description>
                <content:encoded>&lt;h2&gt;IX Lab Research - 003&lt;/h2&gt;&lt;p&gt;This episode discusses the 2024 paper: Mishra, A., Shukla, S., Torres, J., Gwizdka, J., &amp;amp; Roychowdhury, S. (2024). &lt;em&gt;Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs)&lt;/em&gt; (No. arXiv:2410.07507). arXiv. &lt;a href=&#34;https://doi.org/10.48550/arXiv.2410.07507&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.48550/arXiv.2410.07507&lt;/a&gt;&lt;/p&gt;&lt;p&gt;This research was led by Dr. Abhijit Mishra as the principal investigator, with students Shreya Shukla and Jose Torres, and with contributions from Dr. Jacek Gwizdka and Dr. Shounak Roychowdhury.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;h3&gt;Summary&lt;/h3&gt;&lt;p&gt;Researchers at the &lt;strong&gt;University of Texas at Austin&lt;/strong&gt; are developing technology to translate brainwaves into text using electroencephalography (EEG) and large language models (LLMs). The system employs a three-stage process: training an EEG encoder to extract features, fine-tuning LLMs on multimodal data (images and text), and further refining the LLMs with EEG embeddings for text generation. Experiments using a public EEG dataset demonstrate the effectiveness of this approach, surpassing chance performance and showing promise for future applications in assistive technologies and neuroscience. While the technology shows promising results, it&amp;#39;s still in its early stages and faces challenges such as noise in EEG data, the limited spatial resolution of EEG, and ethical concerns about privacy and bias. Potential applications include assistive technology for communication impairments and advances in healthcare and education.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The audio for this conversation has been generated by AI using: https://notebooklm.google/&lt;/p&gt;&lt;p&gt;Music intro created by a human (C) Jacek Gwizdka&lt;/p&gt;</content:encoded>
                
                <enclosure length="12027193" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/8a5f5172-385d-44e6-b157-e47d5cfd6d63/stream.mp3"/>
                
                <guid isPermaLink="false">3f2423d4-7469-4197-8dde-ba4b1c5ad2b9</guid>
                <link>https://redcircle.com/shows/ab69505d-0402-42eb-b42f-e038cc2540c7/episodes/8a5f5172-385d-44e6-b157-e47d5cfd6d63</link>
                <pubDate>Mon, 09 Dec 2024 05:01:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/12/11/4/5bbb3ff4-f659-4927-90f1-fa56eb9ec95b_8d-24d74e05fe3d_episodeart_thought2text_2024_3.jpg"/>
                <itunes:duration>751</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>002-Eye movements and brain activity while reading and judging relevance</itunes:title>
                <title>002-Eye movements and brain activity while reading and judging relevance</title>

                <itunes:episode>2</itunes:episode>
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                
                <description><![CDATA[<h2>IX Lab Research - 002</h2><p><br></p><p>This episode introduces a journal article originally published in the Journal of the Association for Information Science and Technology. This is joint work with Rahilsadat Hosseini and Shouyi Wang from UT Arlington and Michael Cole from Rutgers University / LexisNexis.</p><p>Gwizdka, J., Hosseini, R., Cole, M., &amp; Wang, S. (2017). <strong>Temporal dynamics of eye-tracking and EEG during reading and relevance decisions.</strong> <em>Journal of the Association for Information Science and Technology</em>, <em>68</em>(10), 2299–2312. <a href="https://doi.org/10.1002/asi.23904" rel="nofollow">https://doi.org/10.1002/asi.23904</a></p><p><br></p><h3>Summary</h3><p>This podcast discusses a research paper investigating how people determine online information relevance. Researchers used eye-tracking and EEG to monitor brain activity and eye movements while participants read news articles and answered questions. The study revealed that relevance isn&#39;t immediately apparent but develops over time, with more accurate predictions possible as reading progresses. Key findings included pupil dilation correlating with relevant information and identifiable patterns in eye movements during &#34;aha&#34; moments. The discussion also explores potential applications of this research in improving technology&#39;s interaction with human brains and the ethical implications of personalized information systems.</p><p><br></p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p><p>The audio for this conversation has been generated by AI using: https://notebooklm.google/</p><p>Music intro created by a human (C) Jacek Gwizdka</p>]]></description>
                <content:encoded>&lt;h2&gt;IX Lab Research - 002&lt;/h2&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;This episode introduces a journal article originally published in the Journal of the Association for Information Science and Technology. This is joint work with Rahilsadat Hosseini and Shouyi Wang from UT Arlington and Michael Cole from Rutgers University / LexisNexis.&lt;/p&gt;&lt;p&gt;Gwizdka, J., Hosseini, R., Cole, M., &amp;amp; Wang, S. (2017). &lt;strong&gt;Temporal dynamics of eye-tracking and EEG during reading and relevance decisions.&lt;/strong&gt; &lt;em&gt;Journal of the Association for Information Science and Technology&lt;/em&gt;, &lt;em&gt;68&lt;/em&gt;(10), 2299–2312. &lt;a href=&#34;https://doi.org/10.1002/asi.23904&#34; rel=&#34;nofollow&#34;&gt;https://doi.org/10.1002/asi.23904&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;h3&gt;Summary&lt;/h3&gt;&lt;p&gt;This podcast discusses a research paper investigating how people determine online information relevance. Researchers used eye-tracking and EEG to monitor brain activity and eye movements while participants read news articles and answered questions. The study revealed that relevance isn&amp;#39;t immediately apparent but develops over time, with more accurate predictions possible as reading progresses. Key findings included pupil dilation correlating with relevant information and identifiable patterns in eye movements during &amp;#34;aha&amp;#34; moments. The discussion also explores potential applications of this research in improving technology&amp;#39;s interaction with human brains and the ethical implications of personalized information systems.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The audio for this conversation has been generated by AI using: https://notebooklm.google/&lt;/p&gt;&lt;p&gt;Music intro created by a human (C) Jacek Gwizdka&lt;/p&gt;</content:encoded>
                
                <enclosure length="7766099" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/c5aacfb4-60b5-4dd4-83d2-90bd13a5f88e/stream.mp3"/>
                
                <guid isPermaLink="false">6f4ffe9d-eab9-4182-99d9-7801b29dabd0</guid>
                <link>https://redcircle.com/shows/ab69505d-0402-42eb-b42f-e038cc2540c7/episodes/c5aacfb4-60b5-4dd4-83d2-90bd13a5f88e</link>
                <pubDate>Sat, 07 Dec 2024 03:58:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/12/11/4/9c1ccb1a-6ae5-404f-be81-57efeb351f39_d7db-4d41-bd15-5c5b017d2773_2017_jasist_eeg-et.jpg"/>
                <itunes:duration>485</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
            <item>
                <itunes:episodeType>full</itunes:episodeType>
                <itunes:title>001-Intro to the IX Lab at UT Austin</itunes:title>
                <title>001-Intro to the IX Lab at UT Austin</title>

                <itunes:episode>1</itunes:episode>
                
                <itunes:author>Jacek Gwizdka</itunes:author>
                
                <description><![CDATA[<p>Introduction to the Information eXperience (IX) Lab at The University of Texas at Austin.</p><p><br></p><p>IX Lab website: <a href="https://ixlab.us/" rel="nofollow">https://ixlab.us/</a></p><p>Dr. Jacek Gwizdka website: <a href="https://jacekg.ischool.utexas.edu/" rel="nofollow">https://jacekg.ischool.utexas.edu/</a></p><p>The audio for this conversation has been generated by AI using: https://notebooklm.google/</p><p>Music intro created by a human (C) Jacek Gwizdka</p>]]></description>
                <content:encoded>&lt;p&gt;Introduction to the Information eXperience (IX) Lab at The University of Texas at Austin.&lt;/p&gt;&lt;p&gt;&lt;br&gt;&lt;/p&gt;&lt;p&gt;IX Lab website: &lt;a href=&#34;https://ixlab.us/&#34; rel=&#34;nofollow&#34;&gt;https://ixlab.us/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Dr. Jacek Gwizdka website: &lt;a href=&#34;https://jacekg.ischool.utexas.edu/&#34; rel=&#34;nofollow&#34;&gt;https://jacekg.ischool.utexas.edu/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The audio for this conversation has been generated by AI using: https://notebooklm.google/&lt;/p&gt;&lt;p&gt;Music intro created by a human (C) Jacek Gwizdka&lt;/p&gt;</content:encoded>
                
                <enclosure length="4711235" type="audio/mpeg" url="https://audio4.redcircle.com/episodes/f75931e3-128e-46d5-9fe6-f9c05ebb401a/stream.mp3"/>
                
                <guid isPermaLink="false">830a092c-cda6-4aa3-964c-ad03a89ed69c</guid>
                <link>https://redcircle.com/shows/ab69505d-0402-42eb-b42f-e038cc2540c7/episodes/f75931e3-128e-46d5-9fe6-f9c05ebb401a</link>
                <pubDate>Sun, 01 Dec 2024 07:20:00 &#43;0000</pubDate>
                <itunes:image href="https://media.redcircle.com/images/2024/12/11/4/617666f7-8a88-44a2-a978-5955cc191374_-998e-ea074b7afa47_ix-lab-logo-new-vertical-sq.jpg"/>
                <itunes:duration>294</itunes:duration>
                
                
                <itunes:explicit>no</itunes:explicit>
                
            </item>
        
    </channel>
</rss>
