Janelle Shane Considers the More Practical Dangers of AI
Daniel Johnson — October 24, 2019 — Keynote Trends
References: aiweirdness & ted
Janelle Shane is an AI researcher who examines some of the practical dangers of AI systems. Shane is also the author of the book 'You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place.'
Janelle Shane opens her keynote by noting that AI systems are often associated with disruption. However, this isn't always the case, as current AI systems have intelligence comparable to that of an earthworm. Shane explains that AI systems will do what we ask, but the AI may not do it the way we want. This is because AI does not work through steps in succession; rather, it works toward an overarching goal.
Shane further explains that fears of AI taking over the world are not founded in reality, because AI will do exactly what we tell it to. According to Shane, the question then becomes how to frame problems so that an AI system performs as intended.
Janelle Shane illustrates this with the example of an AI system attempting to identify photos of fish. The system used human fingers as part of its criteria for identifying an image of a fish, because humans were holding the fish in many of the photographs. Shane notes that this same kind of misplaced criteria is why self-driving car systems can fail when operating in suboptimal conditions. She concludes her keynote by stating that AI does what we ask it to do, but sometimes we ask it to do the wrong thing.
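The fish-and-fingers failure is an instance of a classifier latching onto a spurious correlation in its training data. Below is a minimal toy sketch of that effect, using a hand-rolled perceptron on made-up two-feature data (this is an illustrative assumption, not the actual system Shane describes): when every "fish" training photo also contains fingers, the model learns to treat fingers alone as evidence of a fish.

```python
# Toy illustration of shortcut learning: features are [has_fins, has_fingers],
# label is 1 for "fish" and 0 for "not fish". All data here is invented.

def train_perceptron(data, epochs=20, lr=0.1):
    """Train a simple perceptron; returns weights and bias."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w = [w[i] + lr * err * x[i] for i in range(2)]
            b += lr * err
    return w, b

# Training set: every fish photo also shows fingers (spurious correlation).
train = [([1, 1], 1), ([1, 1], 1), ([0, 0], 0), ([0, 0], 0)]
w, b = train_perceptron(train)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print(predict([0, 1]))  # fingers in frame, but no fish at all
print(predict([0, 0]))  # neither fins nor fingers
```

Because fins and fingers always co-occurred in training, the learned weights give fingers the same influence as fins, so an image containing only fingers is still classified as a fish.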