Newsom orders government to consider AI harms in contracting rules

The next time the federal government labels a company a supply chain risk, as the Department of Defense did last month with San Francisco-based AI toolmaker Anthropic, the state of California will review that designation and make its own decision about whether to do business with the company.

That's according to an executive order Gov. Gavin Newsom signed Monday. The order followed a dispute between Anthropic and the Department of Defense over contract terms that bar the military from using Anthropic's systems for mass domestic surveillance and for fully autonomous weaponry.

By designating Anthropic a supply chain risk, the Department of Defense effectively barred the startup from competing for certain military contracts and subcontracts. A judge recently issued a temporary restraining order blocking the designation.

The broader purpose of Newsom's order was to set guardrails for state employees' use of AI while also encouraging them to accelerate their adoption of the technology.

Many of the world's largest AI companies are based in California, and the state also leads the nation in the volume of its AI regulations.


The order requires state agencies to:

  1. Develop recommendations for state contract standards relating to AI and its ability to generate child sexual abuse material, violate civil liberties and civil rights laws or infringe upon legal “protections against unlawful discrimination, detention, and surveillance,” and help employees gain access to “vetted GenAI tools.”

  2. Update the State Digital Strategy to identify ways generative AI can “strengthen government transparency and accountability, improve performance, and make government services easily accessible for every Californian.”

  3. Develop generative AI for Californians to gain access to government services.

  4. Issue guidance on how state employees should place watermarks on AI-generated imagery and videos.

Those mandates come at a time when more than 20 California departments and agencies are working to develop or use Poppy, a generative AI assistant for state employees, and when half a dozen state agencies are testing AI to do things like assist state employees and help homeless people and businesses. They also come as state courts and city governments are increasing their use of the technology.


Newsom’s office said President Donald Trump and Republicans in Washington D.C. have rolled back protections or ignored the ways AI can harm people.

“Unlike the Trump administration, California remains committed to ensuring that AI solutions adopted and deployed by (California)… cannot be misused by bad actors,” the governor’s office said in a press release announcing the order.

At the federal level, Trump has signed executive orders to discourage states from regulating AI and urged federal agencies to adopt AI to do things like reduce federal regulation and accelerate decisions made about Medicare. The White House introduced an AI policy framework last month that the president wants Congress to take up. That proposal takes a light touch approach to regulation and does not address issues related to bias, discrimination, or civil rights.

This is the second executive order signed by Newsom to address artificial intelligence. A 2023 order aimed exclusively at generative AI, the sort that powers systems like ChatGPT and Midjourney, similarly called for more use of AI by state agencies and ordered them to put guardrails in place.

Newsom’s handling of AI issues is closely watched by both union leaders, who in February pledged that they will not support his run for president without more worker protections from the technology, and big tech donors, who are pouring money into influencing California politics ahead of midterm elections this fall.


This story was originally published by CalMatters and distributed through a partnership with The Associated Press.
