Do not give robots the power to kill

Imagine this futuristic scenario: a US-led coalition is closing in on Raqqa determined to eradicate Isis. The international forces unleash a deadly swarm of autonomous, flying robots that buzz around the city tracking down the enemy.

Using face recognition technology, the robots identify and kill top Isis commanders, decapitating the organisation. Dazed and demoralised, the Isis forces collapse with minimal loss of life to allied troops and civilians.

Who would not think that a good use of technology?

As it happens, quite a lot of people, including many experts in the field of artificial intelligence, who know most about the technology needed to develop such weapons.

In an open letter published last July, a group of AI researchers warned that technology had reached such a point that the deployment of Lethal Autonomous Weapons Systems (or Laws as they are incongruously known) was feasible within years, not decades. Unlike nuclear weapons, such systems could be mass produced on the cheap, becoming the “Kalashnikovs of tomorrow.”

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing,” they said. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Already, the US has broadly forsworn the use of offensive autonomous weapons. Earlier this month, the United Nations held a further round of talks in Geneva between 94 military powers aiming to draw up an international agreement restricting their use.

The chief argument is a moral one: giving robots the agency to kill humans would trample over a red line that should never be crossed.

Jody Williams, who won a Nobel Peace Prize for campaigning against landmines and is a spokesperson for the Campaign To Stop Killer Robots, describes autonomous weapons as more terrifying than nuclear arms. “Where is humanity going if some people think it’s OK to cede the power of life and death of humans over to a machine?”

There are other concerns beyond the purely moral. Would the use of killer robots lower the human costs of war thereby increasing the likelihood of conflict? How could proliferation of such systems be stopped? Who would be accountable when they went wrong?

This moral case against killer robots is clear enough in a philosophy seminar. The trouble is that the closer you look at their likely use in the fog of war, the harder it is to discern the moral boundaries. Robots (with limited autonomy) are already deployed on the battlefield in roles such as bomb disposal, mine clearance and antimissile systems. Their use is set to expand dramatically.

The Center for a New American Security estimates that global spending on military robots will reach $7.5bn a year by 2018 compared with the $43bn forecast to be spent on commercial and industrial robots. The Washington-based think-tank supports the further deployment of such systems arguing they can significantly enhance “the ability of warfighters to gain a decisive advantage over their adversaries”.

In the antiseptic prose it so loves, the arms industry draws a distinction between different levels of autonomy. The first, described as humans-in-the-loop, includes predator drones, widely used by US and other forces. Even though a drone may identify a target, it still requires a human to press the button to attack. As vividly shown in the film Eye in the Sky, such decisions can be morally agonising, balancing the importance of hitting vital targets with the risks of civilian casualties.

The second level of autonomy involves humans-on-the-loop systems, in which people supervise roboticised weapons systems, including anti-aircraft batteries. But the speed and intensity of modern warfare make it doubtful whether such human oversight amounts to effective control.

The third type, of humans-out-of-the-loop systems such as fully autonomous drones, is potentially the deadliest but probably the easiest to proscribe.

AI researchers should certainly be applauded for highlighting this debate. Arms control experts are also playing a useful, but frustratingly slow, part in helping define and respond to this challenge. “This is a valuable conversation,” says Paul Scharre, a senior fellow at CNAS. “But it is a glacial process.”

As in so many other areas, our societies are scrambling to make sense of fast-changing technological realities, let alone control them.
