
Government of Canada launches a voluntary code of conduct for the responsible development and management of advanced generative AI systems

In various parts of the world, as enthusiasm grows for advances in Artificial Intelligence (AI), so do concerns about the impacts these systems may have on individuals and society. While regulatory instruments such as the EU AI Act are being prepared, it is evident that their approval and practical application will be lengthy processes. It is also clear that none of the regulations under discussion will be sufficient on its own to mitigate the real risks posed by systems of this type, especially those based on generative models. The involvement of society, and a genuine ethical commitment from those who program and use AI systems, will always be necessary. The Government of Canada has launched a groundbreaking initiative: a government commitment in the form of a voluntary code of conduct. The initiative's goal and the content of the code align with the Ai.ethics program. This alignment is a source of satisfaction for us, and also an encouragement to continue our work, involving more Portuguese entities in this commitment to ethics as a decisive factor in the design and use of AI systems.

Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems in Canada


Although these systems offer numerous advantages, it is undeniable that they also carry a broad risk profile. This is partly due to the vast amount of data they are trained on, the diversity of possible uses, and the scale of their deployment. Systems made publicly available for multiple purposes can pose safety risks and spread biases, potentially resulting in significant social impacts, especially when used maliciously. A practical example of these risks is the ability to create images and videos so realistic that they can be used to deceive important institutions, including democratic and criminal justice systems. Individual privacy can also be compromised when these systems are improperly employed, as highlighted in the report by the G7 Data Protection and Privacy Authorities.

Generative systems can also be adapted for specific uses by organizations, such as corporate knowledge management applications or customer service tools. Although these applications may present more limited risks, it remains crucial to identify and mitigate those risks appropriately.

To address and minimize these challenges, signatories make voluntary commitments to adopt the identified measures. The code establishes measures to be implemented in advance of binding regulation under the Artificial Intelligence and Data Act, by companies developing or managing systems made widely available for public use, which are exposed to a wide range of potentially harmful effects and misuse. Companies developing and managing these systems play important and complementary roles: they need to share relevant information so that adverse impacts can be addressed by the appropriate company. It is important to emphasize that the code does not alter any existing legal obligations that companies may have.

By making this voluntary commitment, developers and managers of advanced generative systems commit to working towards the following objectives:

Accountability

Companies understand their role in relation to the systems they develop or manage, implementing appropriate risk management systems and sharing information with other companies when necessary to avoid gaps.

Safety

Systems undergo comprehensive risk assessments, and necessary mitigation measures are implemented before deployment.

Fairness and Equity

Potential impacts related to justice and equity are assessed and addressed at different stages of system development and deployment.

Transparency

Sufficient information is published to allow consumers to make informed decisions and experts to assess whether risks have been adequately addressed.

Human Oversight and Monitoring

System use is monitored after deployment, and updates are implemented as needed to address any arising risks.

Validity and Robustness

Systems operate as intended, are secure against cyberattacks, and their behavior across a range of tasks and situations is understood.

In addition, members commit to supporting the continuous development of a responsible and robust AI ecosystem. This includes contributing to the development and application of standards, sharing information and best practices with other members of the AI ecosystem, collaborating with researchers working to promote responsible AI, and cooperating with others, including governments, to promote public awareness and education about AI. Members also commit to developing and deploying AI systems in a way that drives inclusive and sustainable growth in Canada, prioritizing human rights, accessibility, and environmental sustainability.

This voluntary commitment represents a significant milestone in the responsible development and management of advanced generative AI systems, and it reflects the commitment of all parties involved to ensuring that these technologies are used for the well-being of society as a whole, as we pursue in our Ai.ethics program.