
Opinion | Artificial Intelligence Requires Specific Safety Rules


For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. Current and former OpenAI staffers have been afraid of speaking to the press. In May, one departing employee refused to sign and went public in The Times. The company apologized and rescinded the agreements. Then the floodgates opened. Departing employees began criticizing OpenAI's safety practices, and a wave of articles emerged about its broken promises.

These stories came from people who were willing to risk their careers to inform the public. How many more are silenced because they are too scared to speak out? Since existing whistle-blower protections typically cover only the reporting of illegal conduct, they are inadequate here. Artificial intelligence can be dangerous without being illegal. A.I. needs stronger protections, like those in place in parts of the public sector, finance and publicly traded companies, that prohibit retaliation and establish anonymous reporting channels.

OpenAI has spent the past year mired in scandal. The company's chief executive was briefly fired after the nonprofit board lost trust in him. Whistle-blowers alleged to the Securities and Exchange Commission that OpenAI's nondisclosure agreements were illegal. Safety researchers have left the company in droves. Now the firm is restructuring its core business as a for-profit, seemingly prompting the departure of more key leaders. On Friday, The Wall Street Journal reported that OpenAI rushed testing of a major model in May, attempting to undercut a rival's publicity; after the release, employees found the model exceeded the company's standards for safety. (The company told The Journal the findings were the result of a methodological flaw.)

This behavior would be concerning in any industry, but according to OpenAI itself, A.I. poses unique risks. The leaders of the top A.I. firms and leading A.I. researchers have warned that the technology could lead to human extinction.

Since more comprehensive national A.I. regulations are not coming anytime soon, we need a narrow federal law allowing employees to disclose information to Congress if they reasonably believe that an A.I. model poses a significant safety risk. Congress should establish a special inspector general to serve as a point of contact for these whistle-blowers. The law should require companies to notify employees about the channels available to them, which they can use without facing retaliation.

Such protections are essential for an industry that works so closely with such exceptionally risky technology, particularly when regulators have not caught up with the risks. People reporting violations of the Atomic Energy Act have more robust whistle-blower protections than those in most fields, while those working with biological toxins for several government departments are protected by proactive, pro-reporting guidance. A.I. workers need similar rules.


