Microsoft disclosed a bug in Microsoft 365 Copilot (tracked as CW1226324) that allowed the AI assistant to summarize confidential emails in violation of Data Loss Prevention (DLP) policies. Starting January 21, 2026, emails stored in Sent Items and Drafts folders that carried sensitivity labels were being processed by Copilot Chat without proper permission checks.
This vulnerability is significant because it undermines the trust organizations place in sensitivity labels and DLP controls to protect confidential information. Microsoft has since fixed the issue, but the incident highlights the ongoing challenges of integrating AI tools like Copilot with existing enterprise security frameworks, where a single bug can inadvertently expose sensitive data at scale.
Bloomberg shares a retrospective on Steven Spielberg's 'A.I. Artificial Intelligence' as the film marks its 25th anniversary. The piece highlights how the 2001 film humanized machines and notes that the science fiction writer who worked with Stanley Kubrick on the screenplay now reflects on the state of modern AI. It connects the film's prescient themes about artificial intelligence and humanity to today's rapidly evolving AI landscape, offering a firsthand perspective from someone who was deeply involved in imagining these concepts decades ago.
A social media post by @sciencegirl highlights a coffee shop deploying AI video analytics to monitor both barista productivity and customer dwell time. The system, called the NeuroSpot Barista Staff Control and Customer Monitoring Video Analytics Module, is designed to improve operational efficiency by analyzing staff performance and tracking how long customers spend in the shop.
The post raises questions about the growing use of AI-powered surveillance in everyday retail and food service environments. While framed as an efficiency tool, the concept of continuously monitoring workers and customers through video analytics touches on broader concerns around workplace surveillance, privacy, and the extent to which AI is being embedded into routine commercial operations.
Kanika shares a productivity insight about working with Claude Skills, revealing that she reduced her workflow-building time from 6 hours to 45 minutes. The key takeaway is that the bottleneck wasn't syntax or technical tweaking, it was understanding the underlying structure and purpose ("the why") behind effective Claude workflows. She encourages others to bookmark her advice before starting their next Claude project, suggesting that a structure-first approach dramatically outperforms iterative syntax adjustments.
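Her structure-over-syntax point maps onto how Claude Skills themselves are laid out. As a rough illustration (the frontmatter fields follow Anthropic's published SKILL.md convention, but this particular skill name and body are invented for the example), a minimal skill file might look like:

```markdown
---
name: weekly-report
description: Drafts a weekly status report from bullet-point notes. Use when the user asks for a status or progress summary.
---

# Weekly Report Skill

1. Ask for the raw notes if none were provided.
2. Group the notes into Done / In Progress / Blocked.
3. Output a one-page report in that order.
```

The point the example is meant to surface: almost all of the leverage is in the description (which tells Claude *when* to invoke the skill) and the ordered steps (the *why* and *how*), not in wording tweaks to individual lines.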
Anthropic has released a series of free AI courses focused on building with Claude in 2026. According to a post by Alex Prompter on X, the courses cover practical skills including making real API calls, shipping tool-using agents, and building and deploying full RAG (Retrieval-Augmented Generation) pipelines. The post claims these free offerings rival or surpass many paid "AI degrees" in terms of relevance and hands-on utility.
While the full course list is truncated in the post, the emphasis is on applied, production-oriented skills rather than theoretical knowledge. The courses appear to target developers and AI practitioners who want to work directly with Claude's capabilities, making them a notable free resource in the rapidly evolving AI education landscape.
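Of the topics listed, RAG is the most concrete. As a hedged sketch of the pattern (not course material; it substitutes a toy keyword-overlap retriever for real embeddings and stops short of the actual model call), the retrieve-then-augment flow looks roughly like this in Python:

```python
# Toy sketch of a RAG pipeline's retrieval step: score documents
# by keyword overlap with the query, take the top k, and stuff
# them into an augmented prompt. A production pipeline would use
# embedding similarity and then send the prompt to the model.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the retrieved passages and question into one prompt."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Claude supports tool use via JSON tool schemas.",
    "Sensitivity labels mark confidential documents.",
    "RAG pipelines retrieve documents before generation.",
]
query = "How do RAG pipelines retrieve documents?"
top = retrieve(query, docs)
prompt = build_prompt(query, top)
```

The design choice the courses presumably teach on top of this skeleton is what varies in practice: the retriever (embeddings vs. keyword search), the chunking of documents, and how much retrieved context to pack into the prompt.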