BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Digital Life Institute - ECPv6.15.17.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Digital Life Institute
X-ORIGINAL-URL:https://www.digitallife.org
X-WR-CALDESC:Events for Digital Life Institute
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Halifax
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20220313T060000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20221106T050000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20230312T060000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20231105T050000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:ADT
DTSTART:20240310T060000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:AST
DTSTART:20241103T050000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Toronto
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20220313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20221106T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20231013T080000
DTEND;TZID=America/Halifax:20231014T170000
DTSTAMP:20260501T041501Z
CREATED:20231010T171129Z
LAST-MODIFIED:20231010T171546Z
UID:861-1697184000-1697302800@www.digitallife.org
SUMMARY:AI empathy and the rhetoric of emergent AI teachers
DESCRIPTION:The launch of OpenAI’s ChatGPT made generative artificial intelligence a mainstream phenomenon. Data journalist Katharina Buchholz explains that “ChatGPT gained one million users just five days after launching in November” of 2022 (Statista\, 2023). Generative AI can produce stylistically correct sentences\, paragraphs\, and documents across a multitude of genres (Duin and Pedersen\, 2021). It can produce professional-grade visual images and video\, which has led to a significant shift in postsecondary educational domains. AI technologies continue to be developed for education\, including AI agents to serve in teaching roles. The emergence of “AI digital employees\, identified as artificial humans\, digital humans\, or virtual humans” is occurring (Duin and Pedersen\, 2023). A category of these AI agents — AI teachers that are embodied and seemingly autonomous — appears in marketing and promotional discourses (Pedersen and Duin\, 2022). One key component needed for AI teachers will be the appearance of empathy\, used as a persuasive tactic. Empathetic AI involves embedding signs of empathy in virtual human interfaces to convince people that an agent is indeed empathetic. This paper argues that tech companies are releasing these products through a sensationalized\, technoliberal rhetoric (Pfister and Yang\, 2018) rather than through a measured approach involving intended participants. This paper discusses the situation that educators\, students\, and university administrators face in handling the cultural adaptation to current AI agents and a predicted future involving empathetic AI teachers that challenges aspects of formal education amid obfuscating discourses. \nIsabel Pedersen presents at the SUNY Council On Writing.
URL:https://www.digitallife.org/event/ai-empathy-and-the-rhetoric-of-emergent-ai-teachers/
ATTACH;FMTTYPE=image/png:https://www.digitallife.org/wp-content/uploads/2023/10/sunycow.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Toronto:20231019T133000
DTEND;TZID=America/Toronto:20231019T150000
DTSTAMP:20260501T041501Z
CREATED:20231015T144850Z
LAST-MODIFIED:20231015T144933Z
UID:867-1697722200-1697727600@www.digitallife.org
SUMMARY:Infrastructures of Manipulation
DESCRIPTION:Panel Discussion: Andrew Iliadis\, Francesca Tripodi\, Aashka Dave\, Leslie Kay Jones\, Amelia Acker\, Heather Ford \nThis panel presents research on web and information infrastructures used for manipulative purposes. In contrast to platform manipulation (Woolley & Howard\, 2018; Benkler et al.\, 2018)\, where users such as bad actors seek to gamify and exploit the weaknesses of online social media platforms like Twitter and TikTok\, the papers in the present panel describe studies where web or information infrastructures such as those involved in search and information retrieval are manipulated to alter or produce facts (rather than social commentary on facts). For example\, studies have shown how infrastructures like Google Search are manipulated by conservative elites (Tripodi\, 2022)\, how anonymous editors use Wikidata to revise the distribution of information related to political protest movements (Ford\, 2022)\, and how administrators harness information schemas to improve the findability of their advertising content (Iliadis\, 2022). In these areas and more\, web and digital infrastructures are being manipulated to serve the interests of politically motivated actors (Acker\, 2018; Acker & Donovan\, 2019). \nInfrastructures typically refer to shared public services like sewers\, telephone poles\, and electricity. According to Bowker et al. (2010\, p. 98)\, information infrastructure refers to “digital facilities and services usually associated with the internet.” Information infrastructures are thus enabling resources\, in network form\, whose key role is that of a distributor\, but rather than goods or services\, information infrastructures distribute “knowledge\, culture\, and practice” (Bowker et al.\, 2010\, p. 114). 
 Such structures do this through their development of ontologies or classification schemes that enable dividing the world into categories or\, through their application to large data sets\, by offering an enormous\, open store of data that can be used by others for a variety of purposes\, such as retrieving facts and sharing information. Recently\, several scholars have elaborated on the political nature of such infrastructural processes of digitization and datafication\, including in the domains of archiving and preservation (Thylstrup\, 2018\, 2022)\, governance and management (Flyverbom & Murray\, 2018)\, metrics and sorting (Alaimo & Kallinikos\, 2021)\, and the creation of global ontologies for things like web search (Iliadis et al.\, 2023) and surveillance services (Iliadis & Acker\, 2022). \nManipulation of social media content and messaging has likewise been a major research area over the last several years owing to the prevalence of online misinformation and disinformation campaigns (Reagle\, 2016; Paris\, 2021; Culloty & Suiter\, 2021)\, particularly those associated with electoral politics (Tucker & Persily\, 2020) and health misinformation (Keselman et al.\, 2022). Yet\, online manipulation is not a new phenomenon and has long been discussed as a feature of the web in the context of the history of trolling\, abuse\, and hate (Phillips\, 2015\, 2019). Manipulation is thus a multivalent concept found in several domains which share the notion that manipulation is related to the altering\, editing\, treating\, controlling\, and influencing of content and messages for the purpose of misleading individuals. Historically\, though\, less attention has focused on manipulation as it has been mobilized infrastructurally\, particularly with respect to the information infrastructures that transmit content and messages. Infrastructures should be understood here in a broad sense as undergirding the communication structures that transmit messages and content. 
 Such infrastructures can be found in computer science\, news and journalism\, government\, policy\, and other areas where messaging is organized using some form of schema\, whether it be technical\, linguistic\, financial\, or otherwise. \nThe first paper uses interviews to highlight the “importance of abortion-related web search and whether or not that system has been manipulated by actors trying to prevent abortion access.” The paper “examines how people (users) search for information about abortion\, how organizations (content providers) utilize search engine optimization to reach potential users\, and how advertisers try to attract visitors.” The second paper uses autoethnography and process tracing with respect to “the AP African American Studies debacle in order to elucidate digitally mediated disinformation as a strategy for stoking moral panic and thereby gaining widespread public buy-in to the establishment of educational censorship infrastructure.” The third paper analyzes Palantir as a surveillance platform that shapes and is shaped by infrastructures of manipulation. The paper “provides a method for researching companies like Palantir and its surveillance infrastructures” through digital media archiving of “over 600+ documents which have been stored\, cleaned\, annotated\, and uploaded into an online digital archive that will be publicly available for media researchers to study.” The fourth and final paper is “an ethnographic study of a single Wikipedia article and how it evolved over the course of a decade” in the context of political revolutions. The paper describes “a framework for understanding new methods of controlling facts in the context of automated knowledge products” and “the importance of semantic infrastructure to new methods of control and influence on Wikipedia and the wider knowledge infrastructures that are increasingly dependent on it.” \nThis panel takes place at the 2023 Association of Internet Researchers Conference.
URL:https://www.digitallife.org/event/infrastructures-of-manipulation/
ATTACH;FMTTYPE=image/png:https://www.digitallife.org/wp-content/uploads/2023/10/AoIR.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Halifax:20231026T080000
DTEND;TZID=America/Halifax:20231028T170000
DTSTAMP:20260501T041501Z
CREATED:20231025T105231Z
LAST-MODIFIED:20231025T105231Z
UID:872-1698307200-1698512400@www.digitallife.org
SUMMARY:Mining Ourselves: Collaborative Autoethnography (CAE) as Methodology
DESCRIPTION:Daniel Hocutt\, Ann Hill Duin\, Jessica Lynn Campbell\, and Mollie Stambler present. Autoethnography is a research method that draws from a variety of autobiographical data\, such as memories\, documents\, and ongoing self-reflections and observations\, to explore and investigate social phenomena (Chang\, 2016). In collaborative autoethnography (CAE)\, multiple researchers use a multilayered approach to collecting data\, performing ongoing self-reflection\, interpreting data through collaborative meaning-making\, and producing findings that achieve research objectives (Chang et al.\, 2013; Chang\, 2016). CAE affords the combination of multiple voices and perspectives to interrogate a social phenomenon and can generate unique contributions in social science research. In this presentation\, we discuss CAE as a valuable\, under-utilized methodology for a variety of technical and professional communication (TPC) research. Drawing on an ongoing collaborative research project\, panelists will demonstrate CAE’s value for TPC research with pedagogical implications. \nThis presentation is at SIGDOC 2023.
URL:https://www.digitallife.org/event/mining-ourselves-collaborative-autoethnography-cae-as-methodology/
ATTACH;FMTTYPE=image/jpeg:https://www.digitallife.org/wp-content/uploads/2023/10/SIGDOC-logo.jpg
END:VEVENT
END:VCALENDAR