
Image source: OpenAI's Twitter

ChatGPT, the chatbot developed by OpenAI, has recently taken the world by storm, stealing the spotlight even from "Gao Qiqiang" and surpassing 100 million monthly active users in just two months, the fastest user growth the internet has ever seen. It has also set off a wave of online discussion. Back in December last year, Elon Musk tweeted: "ChatGPT is scary good. We are not far from dangerously strong AI." The overwhelming majority of that discussion focuses on how "smart" and "useful" ChatGPT is. As an editor covering the African market, however, I have heard a different view and reached a different conclusion: ChatGPT may not be a good way to explore Africa-related topics.

Image source: Elon Musk's Twitter

Leo Komminoth, a staff journalist at African Business magazine, has published an article titled "ChatGPT and the future of African AI". The report highlights that, with limited training data matching African cultural and economic realities, ChatGPT's output could be skewed toward Western culture and ideology, reinforcing Western cultural and ideological hegemony.

Image source: screenshot of african.business

What sets ChatGPT apart from other AI chat software is that it is trained on huge datasets of human-written text drawn from across the internet. As a result, it can give users instant answers to both serious questions and more frivolous ones; in a word, it feels "more human" and "more fun".
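To make "instant answers" concrete, here is a minimal sketch of querying the model programmatically through OpenAI's Python client. The package interface (0.27-era), model name and prompt are assumptions for illustration, not details from the article.

```python
# Minimal sketch: asking ChatGPT an Africa-related question via the OpenAI API.
# Assumes the `openai` Python package (0.27-era interface) and an API key in the
# environment; the model name and prompt are illustrative, not from the article.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Summarise the current state of Nigeria's fintech sector."},
    ],
)

# The answer comes back instantly, but it is only as good as the training data
# behind it -- which, as the article notes, contains very little African material.
print(response["choices"][0]["message"]["content"])
```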

But think about it the other way round: precisely because internet data is generated by users, it carries built-in biases, and some information is simply unavailable online. Doesn't that lead to problems such as "inventing knowledge that does not exist" and "subjectively guessing the questioner's intent"? Factual information about Africa is a case in point.

According to Mozilla's Internet Health Report 2022, from 2015 to 2020 Egypt was the only African country whose datasets were used to evaluate the performance of such machine-learning models, with just 12 instances recorded.

This reflects a larger trend in machine-learning research. Sub-Saharan Africa accounts for just 1.06% of the world's total AI journal publications, while East Asia and North America account for 42.87% and 22.70% respectively. Texts from developed countries are heavily over-represented in the training datasets, with only a small share coming from Africa. In other words, when we turn to ChatGPT for information about Africa, what we get back is one-sided and not necessarily grounded in fact.
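To make that imbalance concrete, here is a back-of-the-envelope illustration (my own, not from the report) of how the publication shares quoted above would translate into document counts in a hypothetical sample of training texts; the 100,000-document sample size is an arbitrary assumption.

```python
# Toy illustration (not from the article): how the publication shares quoted above
# would translate into document counts in a hypothetical sample of 100,000 texts.
shares = {
    "East Asia": 0.4287,           # 42.87% of AI journal publications
    "North America": 0.2270,       # 22.70%
    "Sub-Saharan Africa": 0.0106,  # 1.06%
}

sample_size = 100_000  # hypothetical corpus size, chosen purely for illustration

for region, share in shares.items():
    print(f"{region:>20}: {int(share * sample_size):>6} documents")

# Sub-Saharan Africa would contribute roughly 1,060 documents out of 100,000,
# about 40x fewer than East Asia -- one way to see why Africa-related answers
# can end up thin or one-sided.
```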

Nigeria, the continent's tech heavyweight, saw its startups attract $1.2bn in funding in 2022, yet an estimated 90% of all software used in the country is imported. But if Nigerian fintech, agritech, or edtech startups build their business models on AI tools conceived elsewhere and built on other countries' data, can Nigeria have any meaningful digital sovereignty?

The article also brings up a news story: OpenAI, the company behind ChatGPT, had hired workers in Kenya to review internet content for violence, hate speech and sexual abuse. In short, the AI needs humans to tell it which things are inherently wrong to say or show, and it cannot self-correct on accuracy, consistency and the like.
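For context, human labelling of this kind is what feeds safety classifiers such as OpenAI's moderation endpoint. The sketch below is a minimal illustration of calling that endpoint with the 0.27-era openai Python package; the input text is a made-up placeholder.

```python
# Sketch of OpenAI's moderation endpoint (0.27-era `openai` package), which flags
# categories such as hate, violence and sexual content. Classifiers like this are
# trained on human-labelled examples -- the labelling work the article refers to.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

result = openai.Moderation.create(input="Some user-submitted text to check.")

# Print whichever categories the classifier flagged for this input.
categories = result["results"][0]["categories"]
for name, flagged in categories.items():
    if flagged:
        print(f"flagged: {name}")
```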

One day, AI-powered solutions may indeed make farming and healthcare systems more efficient and help lift millions of people out of poverty. But the technology should not be blindly trusted, and its harmful consequences must be assessed.

Original article from 邦阅网 (52by.com) - www.52by.com/article/120196

