Slack under attack over sneaky AI training policy

Amid growing concern over how big technology companies use personal and business data for AI development, discontent is rising among Slack users, who are troubled by how aggressively Salesforce's messaging platform is pursuing its AI ambitions.

Like its peers, Slack uses customer data to improve its AI offerings. Users who would rather not contribute their data, however, must email the company to opt out.

That requirement is buried in what appears to be an out-of-date, confusing privacy policy that had gone largely unnoticed. The issue surfaced when an annoyed user vented about it on a community site popular with developers, and the post went viral.

The controversy began with a Hacker News post that did nothing more than link to Slack's privacy principles, yet it sparked a wider debate. Many Slack users were surprised to learn that their data was being used for AI training by default, and that opting out required emailing a designated address.

The Hacker News discussion prompted further scrutiny across other platforms. Questions arose about the vaguely named "Slack AI," a tool for searching and summarizing chats, which is conspicuously absent from the privacy principles page. Slack's references to both "global models" and "AI models" in its documentation only added to the confusion.

The lack of clarity about where Slack's AI privacy rules apply, combined with frustration over the email-based opt-out, has put Slack in a bad light, particularly for a company that markets itself on giving users control over their data.

While the uproar is recent, the terms are not. Internet Archive snapshots show they have been in place since at least September 2023; the company has yet to confirm how long they have applied.

According to the privacy policy, Slack uses customer data to develop "global models" that power features such as channel and emoji recommendations and search results. Slack says it places strict limits on how that data is used.

A Slack spokesperson told TechCrunch that the company's machine learning models for channel and emoji recommendations and search results are not built to learn, memorize, or reproduce any part of customer data. What the policy does not address, however, is the full scope and broader aims of Slack's AI model training.

Slack's terms also state that customers who opt out of data training will still benefit from its "globally trained AI/ML models," which raises the question of why customer data is needed to power features like emoji suggestions in the first place.

Slack has also said that it does not use customer data to train Slack AI.

Slack AI is a separately purchased add-on that uses large language models (LLMs), but those LLMs are not trained on customer data. Because Slack hosts the models on its own AWS infrastructure, customer data stays in-house and is never shared with an LLM provider. According to a company representative, this ensures that customer data remains under the control of the organization that owns it and is used exclusively for that organization's purposes.

Some of the confusion may be addressed sooner rather than later. Responding to a critical take on Threads from engineer and writer Gergely Orosz, Slack's Aaron Maurer conceded that the company needs to update its page to reflect how Slack's privacy principles interact with Slack AI.

Maurer added that the terms were written back when Slack AI did not yet exist, and that they reflect the company's work around search and recommendations. Given the confusion over what Slack is currently doing with its AI, the terms are worth revisiting in future updates.

The situation at Slack is a stark reminder that in the fast-moving world of AI development, user privacy cannot be an afterthought: a company's terms of service should spell out plainly how and when data is used, and when it is not.
