AI Developer Requirements for Compliance with Colorado Law – Get Guidance.

The Colorado Act (SB24-205) provides: “A person doing business in this state, including a deployer or other developer, that deploys or makes available an artificial intelligence system that is intended to interact with consumers must ensure that the artificial intelligence system discloses to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.”

To meet compliance, get guidance. We offer monthly support and even daily emergency assistance in protecting your customers and your business. Take action today to be proactive, not reactive: ensure your business complies with the rules and, just as importantly, is perceived as a thought leader in AI compliance and expertise, supported by our ABC process – Acknowledge, Build, and Communicate.

We bring expertise in AI, machine learning, neural networks, and knowledge theory and management, with more than thirty years of experience and the publication of major books and articles in this area, including Mind Meld – Merging Mental and Metal, which is available without charge – click on the image for more.

We offer expert professional assistance in the areas required by SB24-205, along with the design, development, and delivery of training courses, including:

  • Implementing a risk management policy and program for the high-risk system;
  • Completing an impact assessment of the high-risk system;
  • Annually reviewing the deployment of each high-risk system deployed by the deployer to ensure that the high-risk system is not causing algorithmic discrimination;
  • Notifying a consumer of specified items if the high-risk system makes a consequential decision concerning a consumer;
  • Making a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems, and the nature, source, and extent of the information collected and used by the deployer.

We do not act as legal counsel; rather, we assist in areas of technical knowledge and research and, as necessary, expert witness preparation and testimony. We offer multiple levels of protection – for more information, email cross@gocross.com or call 303-594-1694. Services are provided by a Colorado corporation in good standing. Note: due to high demand, remaining capacity is limited.

The bill requires a developer of a high-risk artificial intelligence system (high-risk system) to use reasonable care to avoid algorithmic discrimination in the high-risk system. There is a rebuttable presumption that a developer used reasonable care if the developer complied with specified provisions in the bill, including:

  • Making available to a deployer of the high-risk system a statement disclosing specified information about the high-risk system;
  • Making available to a deployer of the high-risk system information and documentation necessary to complete an impact assessment of the high-risk system;
  • Making a publicly available statement summarizing the types of high-risk systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer and how the developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of each of these high-risk systems; and
  • Disclosing to the attorney general and known deployers of the high-risk system any known or reasonably foreseeable risk of algorithmic discrimination, within 90 days after the discovery or receipt of a credible report from the deployer, that the high-risk system has caused or is reasonably likely to have caused.

The bill also requires a deployer of a high-risk system to use reasonable care to avoid algorithmic discrimination in the high-risk system. There is a rebuttable presumption that a deployer used reasonable care if the deployer complied with specified provisions in the bill, including:

  • Implementing a risk management policy and program for the high-risk system;
  • Completing an impact assessment of the high-risk system;
  • Annually reviewing the deployment of each high-risk system deployed by the deployer to ensure that the high-risk system is not causing algorithmic discrimination;
  • Notifying a consumer of specified items if the high-risk system makes a consequential decision concerning a consumer;
  • Providing a consumer with an opportunity to correct any incorrect personal data that a high-risk artificial intelligence system processed in making a consequential decision;
  • Providing a consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system;
  • Making a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems, and the nature, source, and extent of the information collected and used by the deployer; and
  • Disclosing to the attorney general the discovery of algorithmic discrimination, within 90 days after the discovery, that the high-risk system has caused or is reasonably likely to have caused.
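Both the developer and deployer notification duties above run on the same 90-day clock from discovery (or receipt of a credible report). As a purely illustrative sketch of tracking such a deadline in a compliance workflow – the function names are hypothetical and not from the Act:

```python
from datetime import date, timedelta

# Illustrative sketch only: tracking the 90-day attorney-general
# notification window that runs from discovery (or receipt of a credible
# report) of algorithmic discrimination under SB24-205.
# Names are hypothetical, not from the Act.

NOTIFICATION_WINDOW = timedelta(days=90)

def notification_deadline(trigger_date: date) -> date:
    """Last day to notify the attorney general under the 90-day window."""
    return trigger_date + NOTIFICATION_WINDOW

def is_overdue(trigger_date: date, today: date) -> bool:
    """True once the 90-day window has elapsed without notification."""
    return today > notification_deadline(trigger_date)

# Example: a discovery on 2025-01-10 yields a deadline of 2025-04-10.
deadline = notification_deadline(date(2025, 1, 10))
```

A calendar-day calculation like this is only a starting point; how the window is actually measured is a question for legal counsel.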

A person doing business in this state, including a deployer or other developer, that deploys or makes available an artificial intelligence system that is intended to interact with consumers must ensure that the artificial intelligence system discloses to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. 
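The disclosure duty above is operational as well as legal: a consumer-facing AI system has to surface the disclosure before any substantive interaction. As a minimal, purely illustrative sketch (not legal advice; all class and function names are hypothetical, not from the Act), a chat service might guarantee the disclosure by prepending it to the first response in every session:

```python
# Illustrative sketch only: one way a consumer-facing chat service might
# guarantee the AI-interaction disclosure contemplated by SB24-205.
# All names here are hypothetical, not taken from the Act.

AI_DISCLOSURE = (
    "Please note: you are interacting with an artificial intelligence "
    "system, not a human."
)

class ChatSession:
    def __init__(self):
        self.messages = []      # transcript of the session
        self.disclosed = False  # has the AI disclosure been shown yet?

    def send(self, text: str) -> list:
        """Return the messages the consumer sees for this turn.

        The disclosure is prepended to the first response, so no
        consumer interaction can occur without it.
        """
        shown = []
        if not self.disclosed:
            shown.append(AI_DISCLOSURE)
            self.disclosed = True
        shown.append(self._generate_reply(text))
        self.messages.extend(shown)
        return shown

    def _generate_reply(self, text: str) -> str:
        # Stand-in for the actual AI system's response.
        return f"Echo: {text}"
```

Tying the disclosure flag to the session, rather than to individual messages, is one way to ensure the notice cannot be skipped by any code path that reaches the consumer.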

The attorney general has exclusive authority to enforce the bill. The bill does not restrict a developer’s or deployer’s ability to engage in specified activities, including:

  • Complying with federal, state, or municipal laws, ordinances, or regulations;
  • Cooperating with and conducting specified investigations;
  • Taking immediate steps to protect an interest that is essential for the life or physical safety of a consumer; and
  • Conducting and engaging in specified research activities.

The bill provides an affirmative defense for a developer or deployer if:

  • The developer or deployer of a high-risk system or generative system involved in a potential violation is in compliance with a nationally or internationally recognized risk management framework for artificial intelligence systems that the bill or the attorney general designates; and
  • The developer or deployer takes specified measures to discover violations of the bill.

The bill grants the attorney general rule-making authority to implement and enforce the requirements of the bill.

(Note: This summary applies to the reengrossed version of this bill as introduced in the second house.)