If you’ve ever been in a Zoom meeting and seen an Otter.ai virtual assistant in the room, just know it’s listening to you—and recording everything you’re saying. It’s a practice that’s become somewhat mainstream in the age of artificial intelligence and hybrid or remote work, but what’s alarming is that many users don’t know the full capabilities of the technology.
If you don’t know the proper settings to select, virtual assistants like Otter.ai will send a recording and transcript to all meeting attendees, even if a guest has left the meeting early. That means if you’re talking bad about your coworkers, discussing confidential information, or sharing shoddy business practices, the AI will pick up on it. And it will rat you out.
That happened to researcher and engineer Alex Bilzerian recently. He had been on a Zoom meeting with a venture-capital firm and Otter.ai was used to record the call. After the meeting, it automatically emailed him the transcript, which included “hours of their private conversations afterward, where they discussed intimate, confidential details about their business,” Bilzerian wrote in an X post last week. Otter.ai was founded in 2016, and provides recording and transcription services that can be connected through Zoom or manually when in a virtual or in-person meeting.
The transcript showed that after Bilzerian had logged off, investors had discussed their firm’s “strategic failures and cooked metrics,” he told The Washington Post. While Bilzerian alerted the investors to the incident, he still decided to kill the deal after they had “profusely apologized.”
This is just one of many examples of how nascent AI technologies are misunderstood by users. In response to Bilzerian’s post on X, other users reported similar situations.
“Literally happened to my wife today with a grant meeting at work,” another user, Dean Julius, wrote on X. “[The] whole meeting [was] recorded and annotated. Some folks stayed behind on the call to discuss the meeting privately. Kept recording. Sent it all out to everyone. Suuuuper awkward.”
Other users pointed out this could become a major issue in the health-care industry as virtual therapy and telehealth sessions become more prominent.
“This is going to become a pretty terrible problem in health care, as you can imagine, regarding protected health information,” Danielle Kelvas, a physician and medical adviser for medical software company IT Medical, told Fortune. “Health care providers understandably have concerns about privacy. Whether this is an AI-scribe device or AI-powered ultrasound device, for example, we as doctors are asking, where is this information going?”
Otter.ai, however, insists users can prevent these awkward or embarrassing incidents from happening.
“Users have full control over their settings and we work hard to make Otter as intuitive as possible,” an Otter.ai spokesperson told Fortune. “Although notifications are built in, we also strongly recommend continuing to ask for consent when using Otter in meetings and conversations and indicate your use of Otter for full transparency.” The spokesperson also suggested visiting the company’s Help Center to review all settings and preferences.
The power of AI virtual assistants
As a means of increasing productivity and keeping records of important conversations, more businesses have begun building AI features into their workflows. While AI can undoubtedly cut down on the tedious practice of transcribing notes and sending them out to stakeholders, it still doesn’t have the same sentience as humans.
“AI poses a risk in revealing ‘work secrets’ due to its automated behaviours and lack of discretion,” Sukh Sohal, a senior consultant at data advisory Affinity Reply, told Fortune. “I’ve had clients express concerns over unintended information sharing. This can come about when organizations adopt AI tools without fully understanding their settings or implications, such as auto-transcription continuing after participants have left a meeting.”
Ultimately, though, humans are the ones who are enabling the tech.
“While AI is helping us work faster and smarter, we need to understand the tools we’re using,” Hannah Johnson, senior vice president of strategy at The Computing Technology Industry Association (CompTIA), told Fortune. “And we can’t forget that emotional intelligence and effective communication are just as vital. Technology may be evolving, but human skills remain the glue that holds it all together.”
Other AI assistants, like Microsoft’s Copilot, work similarly to Otter.ai, in that meetings can be recorded and transcribed. But in the case of Copilot, there are some backstops: A user has to either be a part of the meeting or have the organizer approve sharing of the recording or transcript, a Microsoft spokesperson told Fortune.
“In Teams meetings, all participants see a notification that the meeting is being recorded or transcribed,” the Microsoft spokesperson said in a statement. “Additionally, admins can enable a setting that requires meeting participants to explicitly agree to be recorded and transcribed. Until they provide explicit permission, their microphones and cameras cannot be turned on, and they will be unable to share content.”
Still, these permissions don’t always address human naivety or error. To apply more guardrails to virtual assistant usage, Lars Nyman, chief marketing officer of AI infrastructure company CUDO Compute, suggested thinking of your AI assistant as a junior executive assistant.
It’s “useful, but not yet seasoned,” Nyman told Fortune. “Avoid auto-sending follow-ups; instead, review and approve them manually. Shape AI processes actively, maintaining firm control over what gets shared and when. The key is not to entrust AI with more autonomy than you’d give to a new hire fresh out of college at this stage.”