Zellis statement on AI Ethics – May 2024
Background
Zellis Group participates in the development, distribution and management of tools and technology that employ several types of artificial intelligence (AI) systems. This document outlines the conduct of business standards to which we hold ourselves, today and into the future.
This statement has been written to align with legislation from the European Union (the EU AI Act, Regulation (EU) 2024/1689), as well as with the highest standards of voluntary commitments being introduced by industry leaders. We recognise that AI legislation was introduced to protect natural persons in relation to health, safety and fundamental rights, including democracy, the rule of law and environmental protection. At the same time, AI legislation harmonises with other obligations such as data protection and data privacy, treating customers fairly and the Consumer Duty.
Zellis Group works closely with some of the most advanced developers of AI, including our partner Microsoft. This statement is aligned to the principles set out by Microsoft (see External references below).
This statement will continue to be updated and expanded to maintain alignment with any emerging legislation within the regions in which Zellis Group operates. It is our intention to remain both compliant and aligned to best practice with regard to our development of AI systems.
Our AI ethical statement
Our AI ethical statement is an integral part of our Zellis Group Conduct and Ethics Policy and Minimum Requirements. Zellis Group takes its responsibilities for the creation of ethical, effective and lasting AI systems very seriously.
Principle 1: Conduct of Business
Our policy on the development and deployment of AI aligns with our standards for conduct of business, product design and development, and enterprise risk.
Principle 2: Fairness and inclusivity
We believe in achieving positive customer outcomes while avoiding potential bias arising from the use of AI systems. We aim for our algorithms and code to empower all users and to engage users inclusively.
Principle 3: Risk-based and proportionate
We are aware of potential risks emerging from the development of AI systems and processes. We remain committed to designing and operationalising internal controls and appropriate guardrails to mitigate such risks. We have integrated our enterprise risk management and quality management practices, as well as security requirements, into our internal controls for our design and development activities.
Principle 4: Transparency
We uphold the principle of transparency in our use of AI systems, making it clear to users where AI systems have been used. We describe the use of AI systems to our customers and ensure transparency is at the heart of our product design and development. This includes testing provisions, identification and traceability, and maintaining appropriate auditable records of conformity.
Principle 5: Accountability
At Zellis Group we acknowledge the opportunities and risks associated with the use of AI systems and we hold ourselves correspondingly accountable.
Principle 6: Relations with Regulators
We deal with our regulators in an open and cooperative way. We appropriately disclose material relating to our conduct of business, and to proposed business activities of which our regulators would expect notice.
External references
Reference | Description |
---|---|
Microsoft commitments to advance safe, secure and trustworthy AI | Microsoft’s statement of support for new voluntary commitments crafted by the Biden-Harris administration to help ensure that advanced AI systems are safe, secure, and trustworthy. |
EU AI Act | The AI Act is the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally. The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. |
FCA Consumer Duty | The Duty aims to tackle factors that can result in products or services which are unfair or poor value, such as unsuitable features that can lead to foreseeable harm or frustrate the customer’s use of the product or service, or poor communications and consumer support. |
ISO 31000 Risk Management | The international standard that describes principles, a framework and a process for managing risk, including establishing the framework and principles for risk management in the organisation and defining the roles and responsibilities of the parties involved in the risk management process. |
ISO 9001 Quality Management Systems | The international standard that specifies requirements for a quality management system (QMS). Organisations use the standard to demonstrate the ability to consistently provide products and services that meet customer and regulatory requirements. |
ISO 27001 Information security, cybersecurity and privacy protection – information security management systems | The international standard for information security management, covering information and cyber security as well as data protection and privacy. Its framework requires organisations to identify information security risks and select appropriate controls to treat them. |