This site presents the text of Senate Bill 53 enriched with background and commentary. Click on any of the blue highlights to learn more. The bill text below was last amended on July 17, 2025.
In a nutshell, SB 53 says seven things. These are the four I consider most important:
And these are the three major provisions that I expect will be less important:
Large AI developers are not currently mandated to adopt safety policies, but under SB 53, they would be. This wouldn't require most frontier developers to do anything qualitatively new, since they already have published safety policies, and they've already made non-enforceable commitments to follow those policies. Anthropic, OpenAI, Google DeepMind, and Meta have all written safety policies that satisfy many of the requirements in § 22757.12(a), and xAI has a draft safety policy that satisfies a few of the requirements. So if SB 53 were to pass, one industry laggard would have to write a safety policy for the first time, other frontier developers would have to make their existing safety policies more robust, and every frontier developer would be legally mandated to follow their own safety policy.
Moreover, they would have to retain an independent auditor to certify that they are following their safety policy, and this explicitly includes certifying that the policy is stated clearly enough to determine whether the developer is complying with it. As far as is publicly known, no major AI developer has ever undergone a safety audit, but such audits are routine in other risky industries like aviation. Every airline in the US is required to write a plan explaining what measures it will follow to ensure safety, and to regularly commission independent audits confirming that it is following the plan. SB 53 would require companies developing frontier AI systems to do the same.
It is already a widely accepted best practice in the AI industry that when a company releases a new frontier model, they should publish a report called a model card describing the model's capabilities to consumers and to the scientific community. Anthropic, OpenAI, and Google DeepMind have consistently released model cards alongside all of their recent frontier models, and all three companies' model cards likely comply with most of the requirements in SB 53. These cards generally explain how the developer assessed the risks posed by their model, how they intend to mitigate those risks, and whether their model reached any prespecified risk or capability thresholds. If the bill were to pass, the big three AI developers would have to disclose more detailed information about third party assessments run on their models, and developers like xAI that generally don't publish model cards would have to start publishing them.
SB 53 would make large developers civilly liable for breaches of the above rules. No AI company executives will go to jail for failing to publish a safety policy or model card, but their companies could face heavy fines: up to $1 million for a knowing violation that creates a material catastrophic risk, and up to $10 million for each subsequent such violation. This is a major change from the status quo. Today, frontier AI developers have no legal obligation to disclose anything about their safety and security protocols to the government, let alone to the public. When a company releases a new AI system more powerful than any that came before, present law leaves it entirely optional whether the company tells consumers what dangerous things that system can do. And if a company does choose to adopt a safety policy or publish a model card, no force of law guarantees that the safety policy is being implemented or that the model card is accurate. This would all change under SB 53. We would no longer have to rely on AI developers' goodwill to share critical safety information with the public.
There is currently no official channel for the California state government to collect reports of safety incidents involving AI. If a frontier AI developer discovered tomorrow that the weights of their leading model had been stolen, the best they could do to alert state authorities would probably be to email the Attorney General's office. If a member of the public witnessed an AI autonomously causing harm in the wild, the fastest way for them to tell the authorities would probably be to tweet about it. SB 53 would replace these slow, informal information channels with an official incident reporting mechanism run by the AG. Just like California has an official website to collect reports of data breaches, there would be another site for reports of critical AI safety incidents.
Existing California law already offers whistleblower protection to AI company employees who report a violation of federal, state, or local law to public officials or to their superiors. Companies may not make rules or enforce contracts that would prevent their employees from blowing the whistle, nor can they retaliate against an employee who becomes a whistleblower. SB 53 expands the scope of these protections in two ways. First, it would grant whistleblower protection to actors who are currently not protected. Independent contractors, freelancers, unpaid advisors, and external groups that help developers to assess and manage catastrophic risk are not protected by existing law if they become whistleblowers, but they would be under SB 53. Second, the bill would protect disclosures of evidence that an AI developer's activities pose a catastrophic risk, whereas existing law only protects disclosures of evidence that a developer is breaking the law. Of course, many ways that a developer could cause a catastrophic risk would also involve breaking the law, but it's conceivable that a developer could do something catastrophically dangerous yet legal. It might also be easier for many would-be whistleblowers to tell whether their employer is causing a catastrophic risk than to tell whether their employer is breaking a specific law.
Finally, SB 53 calls for California to build a publicly owned AI compute cluster called CalCompute. The cluster's purpose would be to support AI research and innovation for the public benefit. Nothing like CalCompute currently exists in California, but similar projects have been announced or are already underway in several other jurisdictions. New York has already built a compute cluster under its Empire AI initiative, the UK has given academics compute access through its AI Research Resource, and the US National Science Foundation's National AI Research Resource aims to provide the same for American researchers. SB 53 does not specify how much funding California will put behind CalCompute, nor how many AI chips it aims to acquire, so it's hard to tell how much this section of the bill will accomplish. If CalCompute is funded generously in the next state budget, it could be a big deal, but if the project only gets a meager budget, it may not achieve much.
The Legislature finds and declares all of the following:
(a) California is leading the world in artificial intelligence innovation and research through companies large and small and through the state’s remarkable public and private universities.
(b) Artificial intelligence, including new advances in foundation models, has the potential to catalyze innovation and the rapid development of a wide range of benefits for Californians and the California economy, including advances in medicine, wildfire forecasting and prevention, and climate science, and to push the bounds of human creativity and capacity.
(c) The Joint California Policy Working Group on AI Frontier Models has recommended sound principles for policy in artificial intelligence.
(d) Targeted interventions to support effective artificial intelligence governance should balance the technology’s benefits and material risks.
(e) Artificial intelligence developers have already voluntarily committed to creating safety and security protocols and releasing the results of risk assessments.
(f) In building a robust and transparent evidence environment, policymakers can align incentives to simultaneously protect consumers, leverage industry expertise, and recognize leading safety practices.
(g) When industry actors conduct internal research on their technologies’ impacts, a significant information asymmetry can develop between those with privileged access to data and the broader public.
(h) Greater transparency, given current information deficits, can advance accountability, competition, and public trust.
(i) Whistleblower protections and public-facing information sharing are key instruments to increase transparency.
(j) Adverse event reporting systems enable monitoring of the post-deployment impacts of artificial intelligence.
(k) There is growing evidence that, unless they are developed with careful diligence and reasonable precaution, advanced artificial intelligence systems could pose catastrophic risks from both malicious uses and malfunctions, including artificial intelligence-enabled hacking, biological attacks, and loss of control.
(l) With the frontier of artificial intelligence rapidly evolving, there is a need for legislation to track the frontier of artificial intelligence research and alert policymakers and the public to the risks and harms from the very most advanced artificial intelligence systems, while avoiding burdening smaller companies behind the frontier.
(m) In the future, foundation models developed by smaller companies or that are behind the frontier may pose significant catastrophic risk, and additional legislation may be needed at that time.
(n) It is the intent of the Legislature to create more transparency, but collective safety will depend in part on large developers taking due care in their development and deployment of foundation models proportional to the scale of the foreseeable risks.
Chapter 25.1 (commencing with Section 22757.10) is added to Division 8 of the Business and Professions Code, to read:
This chapter shall be known as the Transparency in Frontier Artificial Intelligence Act.
For purposes of this chapter:
(a) “Artificial intelligence model” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
(b) “Catastrophic risk” means a foreseeable and material risk that a large developer’s development, storage, use, or deployment of a foundation model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident, scheme, or course of conduct involving a dangerous capability.
(c) “Critical safety incident” means any of the following:
(1) Unauthorized access to, modification of, or exfiltration of, the model weights of a foundation model.
(2) Harm resulting from the materialization of a catastrophic risk.
(3) Loss of control of a foundation model causing death, bodily injury, or damage to, or loss of, property.
(4) A foundation model that uses deceptive techniques against the large developer to subvert the controls or monitoring of its large developer outside of the context of an evaluation designed to elicit this behavior.
(5) Attaining a dangerous capability or catastrophic risk threshold, as defined in the large developer's safety and security protocol pursuant to paragraph (3) of subdivision (a) of Section 22757.12, for the first time.
(d) “Dangerous capability” means the capacity of a foundation model to do any of the following:
(1) Provide expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon.
(2) Conduct or assist in a cyberattack.
(3) Engage in conduct, with limited human intervention, that would, if committed by a human, constitute the crime of murder, assault, extortion, or theft, including theft by false pretense.
(4) Evade the control of its large developer or user.
(e) (1) “Deploy” means to make a foundation model available to a third party for use, modification, copying, or combination with other software.
(2) “Deploy” does not include making a foundation model available to a third party for the primary purpose of developing or evaluating the foundation model.
(f) “Foundation model” means an artificial intelligence model that is all of the following:
(1) Trained on a broad data set.
(2) Designed for generality of output.
(3) Adaptable to a wide range of distinctive tasks.
(g) “Large developer” means either of the following:
(1) (A) Before January 1, 2027, “large developer” means a person who meets both of the following criteria:
(i) (I) The person has trained, or initiated the training of, at least one foundation model using a quantity of computing power greater than 10^26 integer or floating point operations.
(II) The quantity of computing power described in subclause (I) shall include computing for the original training run and any subsequent fine-tuning, reinforcement learning, or other material modifications to a preceding foundation model.
(ii) The person had annual gross revenues in excess of one hundred million dollars ($100,000,000) in the preceding calendar year.
(2) (A) Except as provided in subparagraph (B), on and after January 1, 2027, “large developer” has the meaning defined by a regulation adopted by the Attorney General pursuant to Section 22757.15.
(B) If the Attorney General does not adopt a regulation described in subparagraph (A) by January 1, 2027, the definition in paragraph (1) shall be operative until the regulation is adopted.
(h) “Model weight” means a numerical parameter in a foundation model that is adjusted through training and that helps determine how inputs are transformed into outputs.
(i) "Property" means tangible or intangible property.
(j) “Safety and security protocol” means documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.
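For readers who find the stacked cross-references in subdivision (g) hard to parse, here is a minimal sketch, in Python, of the pre-2027 "large developer" test: a person is covered only if they have trained at least one foundation model using more than 10^26 operations (with fine-tuning, reinforcement learning, and other material modifications counting toward the total) and had annual gross revenues above $100 million in the preceding calendar year. This is purely illustrative commentary, not part of the bill, and every name in it (ModelTrainingRun, is_large_developer, and so on) is hypothetical.

```python
from dataclasses import dataclass

# Illustrative only; these names and this structure are not defined by SB 53.
COMPUTE_THRESHOLD_OPS = 1e26          # Sec. 22757.11(g)(1)(i): training compute threshold
REVENUE_THRESHOLD_USD = 100_000_000   # Sec. 22757.11(g)(1)(ii): prior-year gross revenue

@dataclass
class ModelTrainingRun:
    original_training_ops: float       # compute for the original training run
    modification_ops: float = 0.0      # fine-tuning, RL, or other material modifications

    @property
    def total_ops(self) -> float:
        # Subclause (II): modification compute counts toward the total.
        return self.original_training_ops + self.modification_ops

def is_large_developer(models: list[ModelTrainingRun], prior_year_revenue_usd: float) -> bool:
    """Pre-2027 test: at least one model above the compute threshold AND revenue above $100M."""
    trained_covered_model = any(m.total_ops > COMPUTE_THRESHOLD_OPS for m in models)
    return trained_covered_model and prior_year_revenue_usd > REVENUE_THRESHOLD_USD

# Example: a 9e25-op base model plus 2e25 ops of fine-tuning crosses the compute threshold.
print(is_large_developer([ModelTrainingRun(9e25, 2e25)], prior_year_revenue_usd=2.5e9))  # True
```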
(a) A large developer shall write, implement, comply with, and clearly and conspicuously publish on its internet website a safety and security protocol that describes in specific detail all of the following:
(1) How, if at all, the large developer excludes certain foundation models from being covered by its safety and security protocol because those foundation models are incapable of posing material catastrophic risks and which foundation models, if any, have been included.
(2) The testing procedures that the large developer uses to assess catastrophic risks from its foundation models, including risk resulting from malfunctions, misuse, and foundation models evading the control of the large developer or user.
(3) (A) Thresholds used by the large developer to identify and assess whether a foundation model has attained a dangerous capability or poses a catastrophic risk.
(B) How the large developer will assess whether a threshold described in subparagraph (A) has been attained, which may include multiple tiered thresholds for different dangerous capabilities or for catastrophic risk.
(C) The actions the large developer will take if each threshold is attained.
(4) The mitigations that a large developer takes to reduce a catastrophic risk and how the large developer assesses the effectiveness of those mitigations.
(5) The degree to which the large developer’s assessments of catastrophic risk and dangerous capabilities and the effectiveness of catastrophic risk mitigations are reproducible by external entities.
(6) The extent to which, and how, a large developer will use third parties to assess catastrophic risks and dangerous capabilities and the effectiveness of mitigations of catastrophic risk.
(7) The large developer’s cybersecurity practices and how the large developer secures unreleased model weights from unauthorized modification or transfer by internal or external parties.
(8) To the extent that the foundation model is controlled by the large developer, the procedures the large developer will use to monitor critical safety incidents and the steps that a large developer would take to respond to a critical safety incident, including, but not limited to, whether the large developer has the ability to promptly shut down copies of foundation models owned and controlled by the large developer, who the large developer will notify, and the timeline on which the large developer would take these steps.
(9) The testing procedures that the large developer will use to assess and manage a catastrophic risk or dangerous capability resulting from the internal use of its foundation models, including risks resulting from a foundation model circumventing oversight mechanisms, and the schedule, specified in days, by which the large developer will report these assessments pursuant to subdivision (d).
(10) How the developer determines when its foundation models are substantially modified enough to conduct additional assessments and publish a transparency report pursuant to subdivision (c).
(b) If a large developer makes a material modification to its safety and security protocol, the large developer shall clearly and conspicuously publish the modified protocol and a justification for that modification within 30 days.
(c) Before, or concurrently with, deploying a new foundation model or a substantially modified version of an existing foundation model that is covered by the large developer's safety and security protocol, a large developer shall clearly and conspicuously publish on its internet website a transparency report containing both of the following:
(1) (A) The results of any risk assessment or risk mitigation assessment conducted by the large developer or a third party that contracts with a large developer pursuant to its safety and security protocol, why the information gathered by the large developer or third party leads to the stated results, and the steps taken to address any identified risks.
(B) A large developer shall disclose the time and extent of predeployment access provided to any third party described in subparagraph (A), whether or not the third party was independent, and the nature of any constraints the large developer placed on the assessment or on the third party’s ability to disclose information about its assessment to the public or to government officials.
(C) Whether a catastrophic risk threshold or dangerous capability threshold has been attained for the foundation model and any actions taken as a result.
(2) (A) The reasoning behind the large developer’s decision to deploy the foundation model, the process by which the large developer arrived at that decision, and any limitations in the assessments that the large developer used to make that decision.
(B) A large developer may reuse an answer previously provided under subparagraph (A) if the rationale in question has not materially changed for the new deployment.
(d) A large developer shall clearly and conspicuously publish on its internet website any assessment of catastrophic risk or dangerous capabilities resulting from internal use of its foundation models pursuant to the schedule the developer specifies in its safety and security protocol.
(e) A large developer shall not make a materially false or misleading statement about catastrophic risk from its foundation models, its management of catastrophic risk, or its implementation of or compliance with its safety and security protocol.
(f) (1) When a large developer publishes documents to comply with this section, the large developer may make redactions to those documents that are necessary to protect the large developer’s trade secrets, the large developer’s cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law.
(2) If a large developer redacts information in a document pursuant to this subdivision, the large developer shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
(a) The Attorney General shall establish a mechanism to be used by a large developer or a member of the public to report a critical safety incident that includes all of the following:
(1) The date of the critical safety incident.
(2) The reasons the incident qualifies as a critical safety incident.
(3) A short and plain statement describing the critical safety incident.
(b) (1) Subject to paragraph (2), a large developer shall report any critical safety incident pertaining to one or more of its foundation models to the Attorney General within 15 days of discovering the critical safety incident.
(2) If a large developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the large developer shall disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.
(c) The Attorney General shall review critical safety incident reports submitted by large developers and may review reports submitted by members of the public.
(d) The Attorney General may transmit reports of critical safety incidents, reports from employees made pursuant to Chapter 5.1 (commencing with Section 1107) of Part 3 of Division 2 of the Labor Code, and summaries of auditor reports required by Section 22757.14 to the Legislature, the Governor, the federal government, or appropriate state agencies.
(e) A report of a critical safety incident submitted to the Attorney General pursuant to this section, an employee report made pursuant to Chapter 5.1 (commencing with Section 1107) of Part 3 of Division 2 of the Labor Code, and a summary of an auditor's report required by Section 22757.14 are exempt from the California Public Records Act (Division 10 (commencing with Section 7920.000) of Title 1 of the Government Code).
(f) (1) Beginning January 1, 2027, and annually thereafter, the Attorney General shall produce a report with anonymized and aggregated information about critical safety incidents, reports from employees made pursuant to Chapter 5.1 (commencing with Section 1107) of Part 3 of Division 2 of the Labor Code, and summaries of auditors' reports required by Section 22757.14 that have been reviewed by the Attorney General since the preceding report.
(2) The Attorney General shall not include information in a report pursuant to this subdivision that would compromise the trade secrets or cybersecurity of a large developer, public safety, or the national security of the United States or that would be prohibited by any federal or state law.
(3) The Attorney General shall transmit a report pursuant to this subdivision to the Legislature, pursuant to Section 9795, and to the Governor.
(a) Beginning January 1, 2030, and at least annually thereafter, a large developer shall retain an independent third-party auditor to produce a report assessing both of the following:
(1) Whether the large developer has substantially complied with its safety and security protocol and any instances of substantial noncompliance during the prior year.
(2) Any instances in which the large developer's safety and security protocol has not been stated clearly enough to determine whether the large developer has complied.
(b) A large developer shall allow the third-party auditor access to all materials produced to comply with this chapter and any other materials reasonably necessary to perform the assessment required by this section.
(c) A large developer shall retain a report required by this section for five years.
(d) In conducting an audit for a large developer pursuant to this section, an auditor shall employ or contract one or more individuals with expertise in corporate compliance and one or more individuals with technical expertise in the safety of foundation models.
(e) Within 30 days after completing an audit, the auditor shall transmit to the Attorney General a high-level summary of the report required by this section that fairly presents, in all material respects, the outcome of the audit.
(f) An auditor shall not knowingly include a material misrepresentation or omit a material fact in a summary or report produced pursuant to this section.
(a) On or before January 1, 2027, and annually thereafter, the Attorney General may adopt regulations to update the definition of a “large developer” for the purposes of this chapter to ensure that it accurately reflects technological developments, scientific literature, and widely accepted national and international standards and applies to well-resourced large developers at the frontier of artificial intelligence development.
(b) In developing regulations pursuant to this section, the Attorney General shall take into account all of the following:
(1) Similar thresholds used in international standards or federal law, guidance, or regulations for the management of catastrophic risk.
(2) Input from stakeholders, including academics, industry, the open-source community, and governmental entities.
(3) The extent to which a person will be able to determine, before beginning to train or deploy a foundation model, whether that person will be subject to the regulations as a large developer with an aim toward allowing earlier determinations if possible.
(4) The complexity of determining whether a person is covered, with an aim toward allowing simpler determinations if possible.
(5) The external verifiability of determining whether a person is covered, with an aim toward definitions that are verifiable by parties other than the large developer.
(c) If the Attorney General determines that less well-resourced developers, or developers significantly behind the frontier of artificial intelligence, may create substantial catastrophic risk, the Attorney General shall promptly submit a report to the Legislature, pursuant to Section 9795, with a proposal for managing this source of catastrophic risk but shall not include those developers within the definition of “large developer” without authorization in subsequently enacted legislation.
(1) (A) For an unknowing violation that does not create a material risk of death, serious physical injury, or a catastrophic risk, a large developer shall be subject to a civil penalty in an amount not to exceed ten thousand dollars ($10,000).
(B) (i) If the violation is a first violation subject to a civil penalty pursuant to subparagraph (A), a large developer shall be provided with a 30-day period to cure the violation after notification by the Attorney General. If the violation is cured within that period, a civil penalty shall not be imposed for that violation.
(ii) For the purposes of this subparagraph, a violation involving the missing or late publication or submission of a document is cured when the developer publishes or submits that document, even if the publication or submission occurs after the statutory deadline.
(2) For a knowing violation that does not create a material risk of death, serious physical injury, or a catastrophic risk or an unknowing violation that creates a material risk of death, serious physical injury, or a catastrophic risk, a large developer shall be subject to a civil penalty in an amount not to exceed one hundred thousand dollars ($100,000).
(3) For a knowing violation that creates a material risk of death, serious physical injury, or a catastrophic risk, a large developer shall be subject to a civil penalty in an amount not to exceed one million dollars ($1,000,000) for a violation that is the large developer's first such violation and in an amount not exceeding ten million dollars ($10,000,000) for any subsequent violation.
(b) A violation of this chapter by an auditor shall be subject to a civil penalty in an amount not to exceed ten thousand dollars ($10,000).
(c) A civil penalty described in this section shall be recovered in a civil action brought only by the Attorney General.
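To summarize how the penalty tiers above combine, here is a rough, purely illustrative sketch of the statutory maximums. It is commentary rather than bill text, the function name max_civil_penalty is hypothetical, and an actual penalty would be set in a civil action brought by the Attorney General (and may be avoided entirely under the cure provision for a first unknowing, non-risk-creating violation).

```python
def max_civil_penalty(knowing: bool, creates_material_risk: bool,
                      prior_risk_violations: int = 0) -> int:
    """Illustrative ceiling on the civil penalty for a large developer, per the tiers above.

    Amounts are statutory maximums, not fixed fines; actual penalties may be lower.
    """
    if knowing and creates_material_risk:
        # Paragraph (3): $1M for the first such violation, $10M for any subsequent one.
        return 1_000_000 if prior_risk_violations == 0 else 10_000_000
    if knowing or creates_material_risk:
        # Paragraph (2): knowing but non-risk-creating, or unknowing but risk-creating.
        return 100_000
    # Paragraph (1): unknowing and non-risk-creating (subject to the 30-day cure period).
    return 10_000

print(max_civil_penalty(knowing=True, creates_material_risk=True, prior_risk_violations=1))  # 10000000
```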
Section 11546.8 is added to the Government Code, to read:
(a) There is hereby established within the Government Operations Agency a consortium that shall develop, pursuant to this section, a framework for the creation of a public cloud computing cluster to be known as “CalCompute.”
(b) The consortium shall develop a framework for the creation of CalCompute that advances the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by doing, at a minimum, both of the following:
(1) Fostering research and innovation that benefits the public.
(2) Enabling equitable innovation by expanding access to computational resources.
(c) The consortium shall make reasonable efforts to ensure that CalCompute is established within the University of California to the extent possible.
(d) CalCompute shall include, but not be limited to, all of the following:
(1) A fully owned and hosted cloud platform.
(2) Necessary human expertise to operate and maintain the platform.
(3) Necessary human expertise to support, train, and facilitate the use of CalCompute.
(e) The consortium shall operate in accordance with all relevant labor and workforce laws and standards.
(f) (1) On or before January 1, 2027, the Government Operations Agency shall submit, pursuant to Section 9795, a report from the consortium to the Legislature with the framework developed pursuant to subdivision (b) for the creation and operation of CalCompute.
(2) The report required by this subdivision shall include all of the following elements:
(A) A landscape analysis of California’s current public, private, and nonprofit cloud computing platform infrastructure.
(B) An analysis of the cost to the state to build and maintain CalCompute and recommendations for potential funding sources.
(C) Recommendations for the governance structure and ongoing operation of CalCompute.
(D) Recommendations for the parameters for use of CalCompute, including, but not limited to, a process for determining which users and projects will be supported by CalCompute.
(E) An analysis of the state’s technology workforce and recommendations for equitable pathways to strengthen the workforce, including the role of CalCompute.
(F) A detailed description of any proposed partnerships, contracts, or licensing agreements with nongovernmental entities, including, but not limited to, technology-based companies, that demonstrates compliance with the requirements of subdivisions (c) and (d).
(G) Recommendations regarding how the creation and ongoing management of CalCompute can prioritize the use of the current public sector workforce.
(g) The consortium shall, consistent with state constitutional law, consist of 14 members as follows:
(1) Four representatives of the University of California and other public and private academic research institutions and national laboratories appointed by the Secretary of Government Operations.
(2) Three representatives of impacted workforce labor organizations appointed by the Speaker of the Assembly.
(3) Three representatives of stakeholder groups with relevant expertise and experience, including, but not limited to, ethicists, consumer rights advocates, and other public interest advocates appointed by the Senate Rules Committee.
(4) Four experts in technology and artificial intelligence to provide technical assistance appointed by the Secretary of Government Operations.
(h) The members of the consortium shall serve without compensation, but shall be reimbursed for all necessary expenses actually incurred in the performance of their duties.
(i) The consortium shall be dissolved upon submission of the report required by paragraph (1) of subdivision (f) to the Legislature.
(j) If CalCompute is established within the University of California, the University of California may receive private donations for the purposes of implementing CalCompute.
(k) This section shall become operative only upon an appropriation in a budget act, or other measure, for the purposes of this section.
Chapter 5.1 (commencing with Section 1107) is added to Part 3 of Division 2 of the Labor Code, to read:
1107. For purposes of this chapter:
(a) “Artificial intelligence model” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
(b) "Catastrophic risk" has the meaning defined in Section 22757.11 of the Business and Professions Code.
(c) “Large developer” has the meaning defined in Section 22757.11 of the Business and Professions Code.
(d) “Employee” means a person who performs services for an employer, including both of the following:
(1) A contractor, subcontractor, or an unpaid advisor involved with assessing, managing, or addressing catastrophic risk, including all of the following:
(A) An independent contractor.
(B) A freelance worker.
(C) A person employed by a labor contractor.
(D) A board member.
(2) Corporate officers.
(e) “Foundation model” has the same meaning as defined in Section 22757.11 of the Business and Professions Code.
(f) “Labor contractor” means an individual or entity that supplies, either with or without a contract, a client employer with workers to perform labor within the client employer’s usual course of business.
1107.1. (a) A large developer shall not make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents an employee from disclosing, or retaliates against an employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the employee, or another employee who has authority to investigate, discover, or correct the reported issue, if the employee has reasonable cause to believe that the information discloses either of the following:
(1) The large developer’s activities pose a catastrophic risk.
(2) The large developer has violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code.
(b) A large developer shall not enter into a contract that prevents an employee from making a disclosure protected under Section 1102.5.
(c) A large developer shall not make, adopt, enforce, or enter into a rule, regulation, policy, or contract that would prevent an organization or entity that provides goods or services to the large developer related to the assessment, management, or addressing of catastrophic risk, or an employee of that organization or entity, from disclosing information to the Attorney General, a federal authority, or the developer if the organization, entity, or individual has reasonable cause to believe that the information discloses either of the following:
(1) The large developer’s activities pose a catastrophic risk.
(2) The large developer has violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code.
(d) An employee may use the hotline described in Section 1102.7 to make reports described in subdivision (a).
(e) A large developer shall provide a clear notice to all employees of their rights and responsibilities under this section, including by doing either of the following:
(1) At all times posting and displaying within any workplace maintained by the large developer a notice to all employees of their rights under this section, ensuring that any new employee receives equivalent notice, and ensuring that any employee who works remotely periodically receives an equivalent notice.
(2) At least once each year, providing written notice to each employee of the employee’s rights under this section and ensuring that the notice is received and acknowledged by all of those employees.
(f) (1) A large developer shall provide a reasonable internal process through which an employee may anonymously disclose information to the large developer if the employee believes in good faith that the information indicates that the large developer’s activities present a catastrophic risk or that the large developer has violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code, including a monthly update to the person who made the disclosure regarding the status of the large developer’s investigation of the disclosure and the actions taken by the large developer in response to the disclosure.
(2) (A) Except as provided in subparagraph (B), the disclosures and responses of the process required by this subdivision shall be shared with officers and directors of the large developer at least once each quarter.
(B) If an employee has alleged wrongdoing by an officer or director of the large developer in a disclosure or response, subparagraph (A) shall not apply with respect to that officer or director.
(g) The court is authorized to award reasonable attorney’s fees to a plaintiff who brings a successful action for a violation of this section.
(h) In a civil action brought pursuant to this section, once it has been demonstrated by a preponderance of the evidence that an activity proscribed by this section was a contributing factor in the alleged prohibited action against the employee, the large developer shall have the burden of proof to demonstrate by clear and convincing evidence that the alleged action would have occurred for legitimate, independent reasons even if the employee had not engaged in activities protected by this section.
(i) (1) In a civil action or administrative proceeding brought pursuant to this section, an employee may petition the superior court in any county wherein the violation in question is alleged to have occurred, or wherein the person resides or transacts business, for appropriate temporary or preliminary injunctive relief.
(2) Upon the filing of the petition for injunctive relief, the petitioner shall cause notice thereof to be served upon the person, and thereupon the court shall have jurisdiction to grant temporary injunctive relief as the court deems just and proper.
(3) In addition to any harm resulting directly from a violation of this section, the court shall consider the chilling effect on other employees asserting their rights under this section in determining whether temporary injunctive relief is just and proper.
(4) Appropriate injunctive relief shall be issued on a showing that reasonable cause exists to believe a violation has occurred.
(5) An order authorizing temporary injunctive relief shall remain in effect until an administrative or judicial determination or citation has been issued, or until the completion of a review pursuant to subdivision (b) of Section 98.74, whichever is longer, or at a certain time set by the court. Thereafter, a preliminary or permanent injunction may be issued if it is shown to be just and proper. Any temporary injunctive relief shall not prohibit a large developer from disciplining or terminating an employee for conduct that is unrelated to the claim of the retaliation.
(j) Notwithstanding Section 916 of the Code of Civil Procedure, injunctive relief granted pursuant to this section shall not be stayed pending appeal.
(k) (1) This section does not impair or limit the applicability of Section 1102.5.
(2) The remedies provided by this section are cumulative to each other and the remedies or penalties available under all other laws of this state.
(a) The provisions of this act are severable. If any provision of this act or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application.
(b) This act shall be liberally construed to effectuate its purposes.
(c) The duties and obligations imposed by this act are cumulative with any other duties or obligations imposed under other law and shall not be construed to relieve any party from any duties or obligations imposed under other law and do not limit any rights or remedies under existing law.
(d) This act shall not apply to the extent that it strictly conflicts with the terms of a contract between a federal government entity and a large developer.
(e) This act shall not apply to the extent that it is preempted by federal law.
The Legislature finds and declares that Section 2 of this act, which adds Chapter 25.1 (commencing with Section 22757.10) to Division 8 of the Business and Professions Code, imposes a limitation on the public's right of access to the meetings of public bodies or the writings of public officials and agencies within the meaning of Section 3 of Article I of the California Constitution. Pursuant to that constitutional provision, the Legislature makes the following findings to demonstrate the interest protected by this limitation and the need for protecting that interest:
Information in critical safety incident reports, summaries of auditors' reports, and reports from employees may contain information that could threaten public safety or compromise the response to an incident if disclosed to the public.