At a time when organisations are rapidly incorporating Artificial Intelligence (AI) into manufacturing, business processes, and customer interactions, it’s not surprising that corporate leaders listed “improper use of AI” among the top 10 concerns for digital transformation in 2022.
Consumer groups are pushing back against AI-driven marketing initiatives, especially ones that seem too intrusive or, worse, not relevant. It’s estimated that 80% of marketers get it wrong at least half the time. That may seem a convoluted statement, but when you think about it, you can see why consumer groups are forming to protest what they view as unwelcome messaging that interferes with the online customer experiences they are seeking out.
Although most of the complaints are directed at major, global brands, even smaller, domestic firms are adopting AI to boost marketing initiatives – often without the results they intended. Against that backdrop, we were pleased to learn about Cisco’s new Responsible AI Initiative and want to share the company’s announcement. Here it is, in its entirety:
“Artificial Intelligence (AI) is increasingly part of our everyday lives, and this transformation requires a thoughtful approach to innovation. Cisco is committed to delivering technologies and services by managing AI development in a way that augments our security, data privacy, and human rights focus – fostering a more inclusive future for all.
Cisco has just announced its Responsible AI initiative, a governance framework that guides internal development and provides a vital communication channel with our customers, partners, industry, and organizations. The Responsible AI initiative is a part of the Cisco Trust Center, a place where we work alongside our customers and suppliers to ensure responsive data-related processes and policies.
AI is inherently different from previous technologies and requires a more responsive approach to governance. For example, AI models are typically trained on data sets and automate the production of insights that can influence decisions and actions. This approach introduces potential issues, including bias, that can arise from inconsistent or incomplete training data sets. Additionally, some models derive output and insights from machine-generated processes, limiting access to the underlying algorithm. These challenges are known to the industry, and there are continuous advances in AI that may address some of these concerns.
Cisco employs a human-centric approach to design and development that includes the processes used to evaluate new technologies. Our approach to designing responsible AI systems is focused on advancing the experience of our customers, partners, and the organizations they serve. The Responsible AI initiative serves two vital roles in the governance of new technologies. First, it defines internal processes to ensure a continuous assessment and management loop with our designers, developers, and partners. Cisco has established development guidelines, testing and response protocols and included them in the Cisco Secure Development Lifecycle. Second, the initiative is part of the Cisco Trust Center and expands Cisco’s communications channels and processes to include the governance of AI-related technologies, products, and services.
The Responsible AI Initiative is driven by a clear set of principles, furthering Cisco’s commitment to respecting and upholding the human rights of all people, as published in Cisco’s Global Human Rights Policy. Our Responsible AI Principles include transparency, fairness, accountability, privacy, security, and reliability in a way that is consistent with Cisco’s operating practices and directly applicable to the governance of AI technologies. Each principle includes concrete working practices and empowers customers to participate in a continuous cycle of feedback and development. See the Responsible AI Principles for more information.
Cisco is committed to a responsible and reflective approach to the governance of AI technologies based on continuous learning, policy setting, and observation cycles. Cisco will also participate in AI-related initiatives with other industry leaders, standards committees, and global government agencies. We invite you to participate in Cisco’s Responsible AI initiative. Your perspective and feedback will help us shape this technology and our products in a way that is supportive and equitable for all.
See other Cisco perspectives on Responsible AI:
- Artificial Intelligence: driving innovation while safeguarding ethics and privacy
- Designing responsible AI systems
Learn more at trust.cisco.com.
If you’d like to learn more, please contact us at [email protected] or (416) 429-0796 or 1.877.238.9944 (Toll Free).