
bbbc's Diary


December 30, 2022, 10:47 (bbbc)

The 4 greatest threats to the survival of humanity

Go to the TED-Ed main page

The four threats to the survival of humanity are nuclear war, climate change, engineered pathogens, and artificial intelligence.
All of them are threats of humanity's own invention, and once any of them runs out of control, there is no way to stop it.
Nuclear war in particular is now at an extremely critical point.

5 min · 140 wpm · 2022

Subtitles: once the video starts, use the subtitle button to turn subtitles on or off and the settings button to choose the language; text color and size can be changed from the options.
When watching the video, expanding it to full screen with the full-screen button makes it easier to see.

A pop-up dictionary can be used with the English text below.
The text is here ⇒ Japanese-English transcript (the subtitles on YouTube are larger and easier to read)

In January of 1995, Russia detected a nuclear missile headed its way. The alert went all the way to the president, who was deciding whether to strike back when another system contradicted the initial warning. What they thought was the first missile in a massive attack was actually a research rocket studying the Northern Lights. This incident happened after the end of the Cold War, but was nevertheless one of the closest calls we’ve had to igniting a global nuclear war.

With the invention of the atomic bomb, humanity gained the power to destroy itself for the first time in our history. Since then, our existential risk— risk of either extinction or the unrecoverable collapse of human civilization— has steadily increased. It’s well within our power to reduce this risk, but in order to do so, we have to understand which of our activities pose existential threats now, and which might in the future.

So far, our species has survived 2,000 centuries, each with some extinction risk from natural causes— asteroid impacts, supervolcanoes, and the like. Assessing existential risk is an inherently uncertain business because usually when we try to figure out how likely something is, we check how often it's happened before. But the complete destruction of humanity has never happened before. While there’s no perfect method to determine our risk from natural threats, experts estimate it’s about 1 in 10,000 per century.
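A rough back-of-the-envelope (my own illustration, not from the talk) shows why that long track record is consistent with a 1-in-10,000 per-century natural risk but not with a much higher one. Assuming an independent extinction probability p in each century:

% Assumption (illustrative, not from the talk): independent per-century risk p
\[
(1-p)^{2000} \approx e^{-2000p}, \qquad
p = 10^{-4} \;\Rightarrow\; e^{-0.2} \approx 0.82, \qquad
p = 10^{-2} \;\Rightarrow\; e^{-20} \approx 2 \times 10^{-9}.
\]

In other words, had the natural risk been as high as 1 in 100 per century, surviving 2,000 centuries would have been astronomically unlikely.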

Nuclear weapons were our first addition to that baseline. While there are many risks associated with nuclear weapons, the existential risk comes from the possibility of a global nuclear war that leads to a nuclear winter, where soot from burning cities blocks out the sun for years, causing the crops that humanity depends on to fail. We haven't had a nuclear war yet, but our track record is too short to tell if they’re inherently unlikely or we’ve simply been lucky. We also can’t say for sure whether a global nuclear war would cause a nuclear winter so severe it would pose an existential threat to humanity.

The next major addition to our existential risk was climate change. Like nuclear war, climate change could result in a lot of terrible scenarios that we should be working hard to avoid, but that would stop short of causing extinction or unrecoverable collapse. We expect a few degrees Celsius of warming, but can’t yet completely rule out 6 or even 10 degrees, which would cause a calamity of possibly unprecedented proportions. Even in this worst-case scenario, it’s not clear whether warming would pose a direct existential risk, but the disruption it would cause would likely make us more vulnerable to other existential risks.

The greatest risks may come from technologies that are still emerging. Take engineered pandemics. The biggest catastrophes in human history have been from pandemics. And biotechnology is enabling us to modify and create germs that could be much more deadly than naturally occurring ones. Such germs could cause pandemics through biowarfare and research accidents. Decreased costs of genome sequencing and modification, along with increased availability of potentially dangerous information like the published genomes of deadly viruses, also increase the number of people and groups who could potentially create such pathogens.
genome 〔dʒíːnoum〕: the complete set of an organism's genetic information; the blueprint for the whole organism

Another concern is unaligned AI. Most AI researchers think this will be the century where we develop artificial intelligence that surpasses human abilities across the board. If we cede this advantage, we place our future in the hands of the systems we create. Even if created solely with humanity’s best interests in mind, superintelligent AI could pose an existential risk if it isn’t perfectly aligned with human values— a task scientists are finding extremely difficult.
AI: artificial intelligence

Based on what we know at this point, some experts estimate the anthropogenic existential risk is more than 100 times higher than the background rate of natural risk. But these odds depend heavily on human choices: most of the risk comes from human action, and so it lies within human control. If we treat safeguarding humanity's future as the defining issue of our time, we can reduce this risk. Whether humanity fulfils its potential— or not— is in our hands.
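Taking the talk's two figures at face value (a lower bound, since it says "more than 100 times"):

% Combining the two estimates quoted above; "more than 100x" makes this a lower bound
\[
p_{\text{anthropogenic}} \;>\; 100 \times \frac{1}{10{,}000} \;=\; \frac{1}{100} \text{ per century},
\]

i.e., at least a 1% chance per century of extinction or unrecoverable collapse from human activity alone.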