Many of us are wary of artificial intelligence (AI). Data privacy, copyright infringement, and inaccuracies are all legitimate concerns. Yet AI can also help you reduce costs, improve efficiency, and increase customer satisfaction in your business. So how do you balance the benefits against the risks?
While existing consumer protection, human rights, and criminal laws apply to AI, they were not developed with AI in mind. As a result, they generally lack the scope and level of detail needed to effectively regulate this complex and multifaceted technology.
That’s about to change. Governments around the world have already taken steps to regulate AI. Regulations that could impact your business are coming, and they may come sooner than you think.
Will Upcoming AI Regulations Protect Your Business?
Although many proposed laws to regulate AI are still in draft form, we have a good idea of the issues that agencies are considering.
Regulators' concerns include data privacy, discrimination, and copyright infringement.
Companies using AI-powered tools should continue to manage risks by following these best practices:
do not share personal or proprietary data unless its confidentiality is guaranteed
have a qualified person verify the accuracy of AI outputs
avoid publishing AI-generated content verbatim, as it may infringe copyright
ensure staff are aware of the above best practices
Although regulations on the general use of AI are being developed, countries are at different stages.
What types of AI regulations can businesses expect?
Canada, China, Europe and the United States have already signaled their intention to regulate AI, and some trends are emerging. We believe that organizations developing AI-based technologies will need to:
explain how models work (logic, criteria, etc.)
describe how the models use the data (e.g., what the data is, where it comes from, how it is used, how it is stored, etc.)
specify the choices they offer to users of their AI (e.g., opt-in, opt-out, deletion of data, etc.)
clearly indicate when AI is being used (e.g., a “you are interacting with a robot” statement)
demonstrate the absence of bias in automated decisions, treat all queries fairly, and provide evidence of internal safeguards to minimize bias
These regulations aim to protect people who use products with AI capabilities.
How are AI regulations taking shape around the world?
Countries are not all at the same stage when it comes to implementing regulations. Some countries, including Canada, have already proposed legislation or are in the process of finalizing it. Other countries, such as the United States, have developed general principles for companies developing AI applications.
However, these principles are not binding, meaning there are no consequences for failing to comply. Other countries do not yet have any legislation or principles at all.
AI Regulation in Canada
In 2022, Canada introduced Bill C-27, the Digital Charter Implementation Act, 2022, a framework that aims to ensure trust, privacy, and responsible innovation in the digital sphere. As part of Bill C-27, Canada also introduced the Artificial Intelligence and Data Act (AIDA), which aims to protect individuals and their interests from potentially harmful aspects of AI systems.
Currently, AIDA is built around six principles:
human oversight and monitoring
transparency
fairness and equity
security
accountability
validity and robustness
According to Innovation, Science and Economic Development Canada (ISED), companies offering AI-powered products will need to implement accountability mechanisms, such as internal governance processes and policies, to ensure they meet their obligations under the Act. If all goes as planned, AIDA is expected to come into force no earlier than 2025.
Additionally, a recent court ruling will require companies to take steps to ensure their AI-powered tools are accurate. In a landmark case, a judge found that a large Canadian company was legally liable for the misinformation its chatbot provided to one of its customers.
AI Regulation in the United States
At the federal level, the Biden administration introduced the Blueprint for an AI Bill of Rights in October 2022. This document presents a set of five principles and practices intended to guide companies that develop, deploy, and manage automated systems and AI technologies.
The five principles are:
safe and efficient systems
protections against algorithmic discrimination
data privacy
notices and explanations (to ensure users know when they are interacting with an AI)
human-based alternatives, consideration and fallbacks (to ensure users can easily opt out or speak to someone for help).
Meanwhile, several states have passed AI-related privacy laws. For example, most states require companies to disclose when AI is used for automated decision-making and to provide consumers with ways to opt out of this type of data processing. Other states have additional transparency requirements to ensure that companies disclose how their systems work.
Further steps towards AI regulation in the United States
On July 5, 2023, New York City began enforcing its AI Bias Law, which requires companies to regularly audit their hiring algorithms for bias and publish the results.
In August 2023, a US judge ruled that AI-generated artwork cannot be copyrighted. According to the US Copyright Office, works generated automatically by a machine or mechanical process, without creative input from a human author, are not eligible for copyright protection.
The Federal Trade Commission (FTC) has also become more active in monitoring and investigating AI-powered products. For example, it is currently investigating OpenAI to determine whether the company adequately notifies users when its technology generates false information.
AI regulation in Europe
The European Union (EU) reached a provisional agreement on the Artificial Intelligence Act (AI Act) in 2023.
The EU AI Act classifies AI systems into four risk levels:
Unacceptable risk
Technologies with the following capabilities are prohibited, with limited exceptions:
cognitive behavioral manipulation
government-run social scoring
real-time biometric identification, such as facial recognition.
High risk
Technologies in this category must meet a long list of requirements to ensure safety. In addition, companies must publish details of their systems in a publicly accessible database.