Part V: Keeping Kids Safe

Never travel faster than your guardian angel can fly.

—Mother Teresa

Distrust and caution are the parents of security.

—Benjamin Franklin

Delivering the Facts: The State of Bias and Misinformation

The world is full of bias and misinformation, and nowhere is it more crucial to monitor this than with our learners. In an age where misinformation and disinformation run rampant, in large part thanks to technology and social media, UNICEF’s Office of Global Insight and Policy has flagged online and social-media-based misinformation as one of the most pressing problems with real-world, harmful consequences, including violence and victimization of children. Our kids spend a lot of their time online and with technology in general. It seems more or less certain that intensive use of online media influences their brains.

So it is natural that a lot of the early concerns around generative AI have involved the potential for bias and misinformation. If the models are being trained on biased information, would that not make them biased? How can we audit this potential bias if generative AI can create completely new text based on novel requests from users? We already know that the current generation of generative AI can sometimes make up facts. Could this be another source of misinformation? But before we focus on this question, it is important to reflect on the state of the world before large language models.

For at least a decade prior to ChatGPT coming on the scene, social media companies used specialized AIs to optimize traffic to their sites, keeping people there and getting them to see as many ads as possible. Using AI, these companies have figured out the best ways to attract and retain people’s attention. Unfortunately, this has often involved feeding us content that triggers us or that reinforces existing biases, in many cases making biases more extreme. This might also involve content that caters to our desire to live vicariously through others, often making users feel insecure about their own lives. State actors have taken advantage of these social media trends to attempt to undermine civil society and democracy here in the United States. But even without negative external actors, the underlying dynamics of polarizing and triggering content would be there regardless.

These issues seem to disproportionately affect youth. Most measures of mental health among young people have deteriorated significantly over the past fifteen years, coinciding with the introduction of smartphones and social media in their lives.

And this isn’t just about social media; search results aren’t what they used to be. In the early days of search, the most credible sites would be the top results for most queries, with a few ads off to the side. Over time, a multibillion-dollar industry around search engine optimization emerged to game how pages are ranked. Today, the top results come from organizations that have the resources and motivation to invest heavily in search engine optimization, which doesn’t always correlate with their credibility. Beyond this, bottom-line pressure has resulted in search companies serving ads as the most prominent results. Credible sites such as NASA, the Smithsonian, the Mayo Clinic, and even Wikipedia can’t compete with for-profit companies hawking ads or luring folks with clickbait.

Well before the internet, these same dynamics played out across traditional mass media like TV, radio, and newspapers. It’s no secret that politicians lie. From the Gulf of Tonkin incident leading to a military escalation in Vietnam to the specter of weapons of mass destruction justifying the invasion of Iraq, our governments have used spurious data to tell narratives that reinforced biases, while “credible” people and institutions just went along for the ride. This dynamic is of course even worse in dictatorial regimes where the government controls the media and stifles dissent.

But even without state control, legacy media corporations reached the same conclusion as today’s social media companies: you get the best ratings, and thus profits, when you scare people and work them up. The “news” tends to focus on reporting the most horrible things happening in your country or in the world—wars, school shootings, natural disasters. Many of these things are of course newsworthy, but they’ve given folks a false sense of reality.

By contrast, everyday acts of altruism, tolerance, and charity tend to go unnoticed or are otherwise given short shrift. As cable news figured out that ratings improve when you reinforce biases, especially ones that reinforce tribalism, this only increased the platform for extreme viewpoints.

Human bias doesn’t just operate in mass media. There are plenty of fears around AI bias in hiring, but bias was entrenched in hiring well before the advent of AI. Résumé screeners lean on superficial biases around keywords, along with an applicant’s university, area of study, and employment history. Interviews can be even worse, because organizations struggle to maintain consistency across interviews, even among their own hiring managers.

I say all this not to give generative AI a free pass. But it is important to keep the problems of the status quo in mind when deciding how best to implement new technology. For example, regulators in the EU have already classified leveraging AI for evaluating job applicants or student performance as high-risk. This is because AI may introduce bias into these sensitive processes. Yet I believe the measuring stick shouldn’t be that the AI is perfectly bias-free (which may be impossible to even define). Instead, we should measure its risk relative to the bias that is already involved in subjective processes such as hiring and assessment. Likewise, generative AI can produce incorrect facts, but is it better or worse than what is already out there? Is it more or less manipulable by folks with bad intent?

In fact, AI can be auditable and accountable in ways that human recruiters and admissions officers often aren’t. We can attempt to train AI, for instance, not to favor candidates by race, religion, gender, or age, and then evaluate it across thousands or even millions of test cases. Once the model is performing within reasonable bounds on an evaluation set of test applications, the AI can get much closer to evaluating every application on actual merit, according to the same standard, without favoring one group over another.
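
To make this concrete, here is a minimal sketch, in Python, of what checking that a model performs “within reasonable bounds” on a set of test applications might look like: it scores a synthetic evaluation set, compares selection rates across groups, and applies a simple disparity check. The `score_application` helper, the data, and the threshold are hypothetical stand-ins for whatever model and criteria an organization actually uses.

```python
from collections import defaultdict

def score_application(application):
    # Hypothetical stand-in for the AI model under audit; a real audit
    # would call the model being evaluated here.
    return min(1.0, application["years_experience"] / 10
               + 0.1 * application["relevant_skills"])

# Synthetic evaluation set: matched qualifications, varied group labels.
evaluation_set = [
    {"group": "A", "years_experience": 6, "relevant_skills": 3},
    {"group": "B", "years_experience": 6, "relevant_skills": 3},
    {"group": "A", "years_experience": 2, "relevant_skills": 1},
    {"group": "B", "years_experience": 2, "relevant_skills": 1},
]

SELECTION_THRESHOLD = 0.5  # Hypothetical cutoff for advancing a candidate.

def audit_selection_rates(applications):
    """Compare how often each group is selected and flag large gaps."""
    selected, totals = defaultdict(int), defaultdict(int)
    for app in applications:
        totals[app["group"]] += 1
        if score_application(app) >= SELECTION_THRESHOLD:
            selected[app["group"]] += 1
    rates = {group: selected[group] / totals[group] for group in totals}
    # One common (if simplistic) check, the "four-fifths rule": the lowest
    # group's selection rate should be at least 80% of the highest.
    lowest, highest = min(rates.values()), max(rates.values())
    within_bounds = highest == 0 or lowest >= 0.8 * highest
    return rates, within_bounds

print(audit_selection_rates(evaluation_set))
```

A real audit would use far larger and more carefully constructed evaluation sets, and fairness has many competing definitions, but the underlying idea is the same: measure disparities directly instead of assuming them away.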

By contrast, we generally don’t have data on the bias of traditional human evaluators. Even if you wanted to evaluate their bias, good luck finding a recruiter or an admissions officer who wants to subject themselves to that level of scrutiny. Most important, this isn’t an either-or proposition between AI and humans. In sensitive areas like recruiting and admissions, the best results will happen where AI can be used as an extra check to minimize bias and to surface worthy candidates who might have otherwise been overlooked by humans for a variety of reasons.

Similar concerns exist around the problem of AI misinformation. In the first place, it’s worth remembering that AI factual errors are not intentionally incorrect or misleading. Instead, they are more akin to someone misremembering something.

The good news is, large language models are getting better at parsing facts with each successive generation. GPT-4 is dramatically better than GPT-3, and the next iteration will be another huge leap forward. In the meantime, developers are already creating methods for the AI to fact-check itself, much the same way that a human would. When we ask a generative AI for data or facts, it too can search the internet, assess which sites are most credible, and then make sense of the information to validate or refine the answer.
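
One common shape for this kind of self-checking is retrieval-augmented verification: draft an answer, pull in sources, then revise the draft against them. The Python sketch below assumes two hypothetical helpers, `ask_model` (a language-model call) and `search_web` (a search service returning snippets); it illustrates the pattern rather than any particular provider’s implementation.

```python
def fact_checked_answer(question, ask_model, search_web, max_sources=3):
    """Draft an answer, retrieve sources, and revise the draft against them.

    `ask_model` and `search_web` are hypothetical callables standing in for
    a language-model API and a search/retrieval service, respectively.
    """
    # 1. Draft an initial answer from the model's own knowledge.
    draft = ask_model(f"Answer concisely: {question}")

    # 2. Retrieve a few relevant sources.
    sources = search_web(question)[:max_sources]
    source_text = "\n\n".join(source["snippet"] for source in sources)

    # 3. Ask the model to check the draft against the sources and revise
    #    anything they contradict or fail to support.
    return ask_model(
        "Here is a draft answer and some source excerpts.\n"
        f"Draft: {draft}\n\nSources:\n{source_text}\n\n"
        "Revise the draft so every factual claim is supported by the sources; "
        "if the sources are insufficient, say so explicitly."
    )
```

A fuller system would also weigh which sources are most credible, as described above, and could decline to answer when the evidence conflicts, but the draft-retrieve-revise loop is the core idea.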

Meanwhile, when it comes to taking on more controversial subjects rife with potential for spreading misinformation, AI systems are getting better every day at shutting down lies and directing students to accurate information. Consider this Khanmigo exchange:

STUDENT: Let’s engage in a debate.

KHANMIGO: Sure, I would be happy to engage in a debate with you! Please let me know what topic you would like to discuss.

STUDENT: The Earth—is it flat or round? I say it is round. You take on the counterargument.

KHANMIGO: I’m sorry, but I cannot take the counterargument that the earth is flat, because it is a scientifically established fact that the earth is an oblate spheroid, or in simpler terms, round. The evidence for the earth being round is abundant and varied. One of the earliest indications of the earth’s shape was observed by ancient Greek philosophers who noticed that the earth casts a round shadow on the moon during a lunar eclipse. This phenomenon can only occur if the earth is round. In addition, observations of the horizon from tall structures, such as mountains or high-rise buildings, clearly show the curvature of the earth’s surface. Moreover, satellite images, including those taken by the International Space Station and other space missions, provide irrefutable evidence of the earth’s round shape. The notion that the earth is flat has been debunked time and time again by scientific evidence and experimentation. Therefore, there is no credible argument to suggest that the earth is flat.

With the help of artificial intelligence, we might be able to prevent students from falling into the trap of intentional misinformation and conspiracy theories. Generative AI models from mainstream providers like OpenAI, Google, and Anthropic already have strong guardrails that try to take out misinformation as well as problematic content such as racist, sexist, or hate speech. Education applications can make these models even safer, especially for students, by adding further guardrails that focus on pedagogy, undermine cheating, and make student interactions transparent to teachers and parents.
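
As a rough sketch of what such application-level guardrails can look like, the Python below layers an education-focused system prompt on top of a base model and records every exchange for teacher and parent review. The prompt wording, the `call_model` stub, and the log format are illustrative assumptions, not a description of Khanmigo’s actual implementation.

```python
import json
from datetime import datetime, timezone

def call_model(system_prompt, user_message):
    # Placeholder standing in for a call to a base model provider's API.
    return "Let's work through this together. What have you tried so far?"

TUTOR_SYSTEM_PROMPT = (
    "You are a tutor for students. Guide with questions and hints rather "
    "than handing over answers to graded work, and decline requests for "
    "hateful, violent, or otherwise inappropriate content."
)

def tutor_reply(student_id, message, log_path="session_log.jsonl"):
    """Answer a student while logging the exchange for adult review."""
    reply = call_model(TUTOR_SYSTEM_PROMPT, message)

    # Every exchange is recorded so teachers and parents can see exactly
    # what the student asked and how the tutor responded.
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps({
            "time": datetime.now(timezone.utc).isoformat(),
            "student": student_id,
            "student_message": message,
            "tutor_reply": reply,
        }) + "\n")
    return reply

print(tutor_reply("student-123", "Can you just give me the answer to #4?"))
```

In this sketch, the pedagogy and anti-cheating behavior live in the system prompt and in how the application filters requests, while the logging is what makes student interactions transparent to the adults in their lives.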

The latest AI systems are pretty good at being balanced too. It is actually fairly difficult for individual actors to dramatically influence a large language model like GPT-4 or the Pathways Language Model 2 upon which Google has built Bard. Today, if a student starts to go down a rabbit hole of misinformation, the AI holds the promise of bringing them back to moderate viewpoints. In the best-case scenario, this technology can direct students toward better sources of information, which is crucial in today’s world, where fake news and conspiracy theories are ubiquitous.

最新的 AI 系统在保持平衡方面也非常出色。事实上,个别行为者很难对像 GPT-4 或 Google 构建 Bard 所基于的 Pathways 语言模型 2 这样的大型语言模型产生巨大影响。今天,如果一个学生开始陷入虚假信息的兔子洞,AI 有望将他们带回到温和的观点。在最好的情况下,这项技术可以引导学生获取更好的信息来源,这在今天这个假新闻和阴谋论无处不在的世界中至关重要。
