Interview with Sarah Engel on Trusted AI
2022-01-14
Sarah Engel is a managing consultant and team lead for trustworthy AI at IBM. She advises clients on innovation projects around artificial intelligence.
Gerhard Schimpf (GS): Sarah, thank you so much for agreeing to an interview for our ongoing blog about Being Human with Algorithms. Would you like to introduce yourself and give us an overview of your current responsibilities?
Sarah Engel (SE): Thank you, Gerhard. What fascinates me is the interface between humans and AI, and how the two can benefit one another. One example that illustrates this combination very well is neuroprosthetics in medical technology. A hand prosthesis can support a person with an amputated hand or forearm: AI can be leveraged to decode the relevant neuronal or muscular signals and to learn and improve over time, while the person, in turn, learns from and with the AI system. I have experience in this area, but also in areas where AI has no physiological aspect, such as language processing or decision support. At IBM, my current responsibility is driving the adoption of AI, which includes building trustworthy AI systems as well as guiding clients through the transformation. In my role at the worldwide Center of Competence, I advise clients in global engagements. My background is in cognitive science and computer science.
GS: Which developments led you to pursue Computer Science and enter the field of AI? Which trends in the current digital transformation should we pay the most attention to?
SE: Math was already one of my favorite subjects in school, and Computer Science is very close to it. I loved solving problems analytically and finding new, creative approaches. In addition, I was very interested in how people act and especially in how the human brain works. Cognitive Science seemed the perfect match to combine both. What caught my attention was the development of brain-computer interfaces that enable people with severe disabilities, e.g. locked-in syndrome, to communicate or move again.
Regarding digital transformation: I am convinced that digitalization should be based on human values, not simply on what technology allows us to do. Let’s pay more attention to what we aim for with digital transformation and apply it with good intent. With the rise of data collection, data analytics, and automation comes a growing need for appropriate cybersecurity, privacy, fairness, transparency, and explainability for users. This deserves more, and especially more holistic, attention, as it is a socio-technological challenge.
GS: How does that influence your daily work, and how do you personally contribute to the digital transformation?
SE: When I recognized this challenge, I dedicated my work to trustworthy AI and to the change processes it entails for people and organizations. With clients, I work on designing and improving AI systems to be trustworthy. This ranges from shaping the value proposition and defining values and principles that set boundaries, to putting those principles into practice, driving the transformation in a human-centered rather than purely tech-centered way.
GS: You are applying Artificial Intelligence in projects with your customers. Do you see an overall chance that the digital transformation takes a positive course, leading to improved standards of living in our society? Can you give examples?
SE: Definitely, I see a positive trend. AI ethics, regulation, and trustworthiness are getting more and more attention; we have good conversations around security and privacy in Europe and draft regulations to define standards, like the EU AI Act. I also see companies that take their responsibility seriously and want to build trustworthy services and products leveraging AI. Some example applications with positive effects on society are early crisis detection, fairer decision-making across gender and ethnic groups, and personalized healthcare.
GS: Can you empathize with the general public’s fears and ethically rooted concerns that learning systems can either be abused, or could develop a life of their own and go out of control?
SE: I do understand the fear, but it is often based on myths and dystopian scenarios from science fiction. There is still a lot of educational work to be done on what AI is and how it works; I think that would demystify most fears. Most learning systems nowadays are expert systems that provide support in a very specific area.
And let me give a bit of background regarding control: the (partially) autonomous learning of a system is what makes machine learning, a part of AI, so special and so powerful. Complete control therefore contradicts the nature of AI algorithms. But we can set boundaries, monitor performance to detect unintended behavior, and intervene. The boundaries are set by humans, and the design of an AI system is still in our hands. It is up to us to have diverse teams build learning systems, to apply techniques that detect unintended human bias, and to prevent misuse through transparency. In addition, regulatory boundaries play an important role in building trust.
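To make the boundary-and-monitoring idea concrete, here is a minimal sketch in Python of the kind of check Sarah describes: compute a simple fairness metric (demographic parity) over a log of model decisions and flag for human intervention when a human-set threshold is crossed. The log, threshold, and function names are hypothetical illustrations, not any specific IBM tooling.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Positive-decision rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical monitoring log of (group, model approved?) pairs.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

rates = approval_rates(log)
gap = demographic_parity_gap(rates)
THRESHOLD = 0.2  # a boundary set by humans, as Sarah notes

print(f"approval rates: {rates}, gap: {gap:.2f}")
if gap > THRESHOLD:
    print("Unintended disparity detected -- route to human review.")
```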
GS: Who, in your opinion, is responsible for alleviating these concerns and mitigating the risks? Do you see an immediate call to action to ensure that human rights are not violated?
SE: It is a provider’s responsibility, but not exclusively. It is also a regulatory, legal, societal, and educational matter that needs discussion and boundaries at a global level. Especially at the global level, I see an urgent need for action. One example is the potential misuse of data and data insights for social scoring systems. Human rights like privacy and security, alongside all others, need to be protected for everyone on this globe to ensure “Being Human with Algorithms”.
GS: Thank you, Sarah, for these remarks. We at ACM look forward to staying in touch with you.
SE: Thank you, Gerhard, for the invitation! It is my pleasure.