Why Leaders Must Consider the Ethics of AI with Armen Berjikly
Armen Berjikly is an entrepreneur who has dedicated his career to pushing the boundaries of artificial intelligence, with a special focus on emotion and empathy so that technology works with people as they are. He founded Kanjoya, which was acquired by Ultimate Software around three years ago. He went on to lead Product Strategy for Ultimate Software in San Francisco and is currently a Co-Founder and Head of Product at Motive Software.
Episode Links:
Armen Berjikly’s LinkedIn: https://www.linkedin.com/in/armenb/
Armen Berjikly’s Twitter: https://twitter.com/armenberjikly?s=20
Armen Berjikly’s Website: https://www.motivesoftware.com/
Podcast Details:
Podcast website: https://www.humainpodcast.com
Apple Podcasts: https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009
Spotify: https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS
RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9
YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag
YouTube Clips: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos
Support and Social Media:
– Check out the sponsors above; it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/humain/creators
– Twitter: https://twitter.com/dyakobovitch
– Instagram: https://www.instagram.com/humainpodcast/
– LinkedIn: https://www.linkedin.com/in/davidyakobovitch/
– Facebook: https://www.facebook.com/HumainPodcast/
– HumAIn Website Articles: https://www.humainpodcast.com/blog/
Outline:
Here are the timestamps for the episode:
(00:00) – Introduction
(02:48) – Being people first: the people building the organization, the employees, and a philosophy grounded in trust, authenticity, and the value placed on them
(04:48) – Bringing your own understanding of new technology’s capabilities to the unmet challenges in the human resources space, and where solutions are still needed
(06:00) – There are still a lot of unmet needs, frustrations, and gaps. Then you come upon new technologies like artificial intelligence, which is not a solution in and of itself.
(06:50) – Science fiction is going to become science fact; regardless of your position on that, the progress in the underlying hardware capabilities is simply undeniable.
(08:56) – Being people first is step one, but more expansively, in the world of human capital there is a great responsibility to bring empathy into the workplace, and AI and NLP can help do that
(09:58) – Ethical considerations around these new capabilities: working within clear boundaries, with that philosophy, to pursue the goals of building better products and solving customer problems
(10:57) – Supporting ethics in AI and building technology from within
(12:18) – Technology will be the solution to the problems it has created, but that’s a little backwards. Sometimes you need to be more thoughtful about the problems you’re going to create before you create them.
(14:15) – Companies have to embrace the boundaries and the direction of their artificial intelligence approach
(16:28) – Transparency is essential in the tech industry; the cavalier approach is a no-go. If you try to retrofit ethics, morality, and responsibility into your advanced technology portfolio, it’s a little too late
(17:14) – The greatest risk is that AI actually takes no risks. It’s a little counterintuitive to think that way, but AI is really a bunch of formulas, pattern recognition, and math; it is only as smart as the data it has seen before and what it can derive from that data.
(18:48) – We have unconscious-bias machines, but that bias can be identified, measured, and hopefully over time ameliorated or even eradicated. You can only get there if you have extremely diverse training inputs
(22:49) – Decision-making support is the worthy goal of artificial intelligence: you have to enable it to work with us and understand our problems. That gets into the boundaries we’re starting to push with new technology.
(24:22) – The only data that we’ll be looking at is data that was intended to be looked at
(25:47) – If technology is trying to solve really big, interesting problems or help us make big decisions, it cannot ignore the fact that our emotions matter; it has to be aware, sensitive, and thoughtful about them
(26:52) – Being sensitive: when we look at a piece of data, it’s not just how many words were said, a word count, and a word cloud, which is where things tend to stop today; we push forward and ask, how is this person feeling?
(29:16) – We have zero interest in, and are philosophically opposed to, the idea of machines running companies and replacing people
(29:57) – Let’s build technology that works for us and change the pattern we’ve been subject to, where we build the technology and then end up being subjugated by it