
Slack Faces Backlash Over Use of Messages for AI Training

Slack users were left shocked and concerned after discovering that their messages are being used for AI training without a straightforward way to opt out. The communication platform introduced Slack AI in February, prompting questions about its data usage policies.

Slack engineer Aaron Maurer has said the company does not train its large language models (LLMs) on customer data. Users, however, have pointed to the lack of clarity in Slack's policy, which currently allows the platform to use messages, content, and files to train its global AI models.

Engineer and writer Gergely Orosz argued that companies should provide a clear opt-out for data sharing, and that such a commitment only counts when it is spelled out in the terms and conditions rather than in a blog post.

The discrepancy between Slack's privacy principles and its promotion of Slack AI has caused confusion among users. While the privacy principles acknowledge that machine learning (ML) and artificial intelligence (AI) are used to improve the product, the platform's AI page reassures users that their data is not used to train Slack AI.

Following the user backlash, Slack has announced that policy changes are on the way to address the concerns. A spokesperson for Salesforce, Slack's parent company, confirmed that the policy will be updated to clarify that customer data is not used to develop or train generative AI models.

The updates are intended to make explicit that Slack does not use customer data to train third-party models or in any way that could compromise user privacy, giving users greater transparency and control over how their data is used for AI purposes.
