By Vasu Jakkal, Corporate Vice President, Microsoft Security

Agentic AI transformation is giving rise to the Frontier Firm—a new type of organization characterized by on-demand intelligence and a workforce where humans and agents work in tandem. According to Microsoft’s 2025 Work Trend Index, we expect every organization will be on its journey to becoming a Frontier Firm within the next two to five years.

And as AI transforms every aspect of our lives and unlocks unprecedented possibilities, it must be grounded in security—starting with a Zero Trust foundation to protect the workforce and a new generation of Frontier Firms. 

Microsoft is committed to helping customers build a strong security foundation from the start. At Microsoft Build 2025, we’re taking important steps to secure the agentic workforce.

Secure and manage agent identities with Microsoft Entra

Security starts with identity. Identity-based cyberattacks have consistently been one of the top threat vectors since the start of the cloud era. Password attacks have surged to approximately 7,000 per second, and identity-based cyberattacks now account for nearly 80% of breaches.¹ Identity is the new perimeter, and Microsoft Entra, with more than 900 million monthly active users today, plays a pivotal role in securing all identities in the agentic era.

We are excited to introduce Microsoft Entra Agent ID, which extends identity management and access capabilities to AI agents. Now, AI agents created within Microsoft Copilot Studio and Azure AI Foundry are automatically assigned identities in a Microsoft Entra directory—analogous to etching a unique VIN into every new car and registering it before it leaves the factory—centralizing agent and user management in one solution.
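Because each agent receives a first-class directory identity, administrators can reason about agents with the same tooling they already use for users and service principals. As a loose illustration, the sketch below builds a Microsoft Graph-style request for listing agent identities. The `servicePrincipals` endpoint is real Microsoft Graph, but representing agents as tagged service principals, and the `AgentIdentity` tag itself, are assumptions for illustration—consult the Microsoft Entra Agent ID documentation for the actual object type and query shape.

```python
# Sketch: querying agent identities alongside other directory identities.
# ASSUMPTION: agents surface as tagged service principals; the tag name
# "AgentIdentity" is invented here for illustration.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_agent_listing_url(tag: str = "AgentIdentity") -> str:
    """Return a Graph request URL that would list service principals
    carrying a hypothetical agent tag."""
    params = {
        "$filter": f"tags/any(t: t eq '{tag}')",
        "$select": "id,displayName,appId",
    }
    return f"{GRAPH_BASE}/servicePrincipals?{urlencode(params)}"

print(build_agent_listing_url())
```

In practice the request would be sent with an access token issued for Microsoft Graph; the point of the sketch is simply that agent identities become queryable directory objects rather than ad hoc secrets scattered across apps.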

Circular diagram representing Zero Trust Policy across Identities, Networks, Endpoints, Data, Apps & AI, and Infrastructure.

“Agentic AI is gaining momentum for its ability to combine large language models with reasoning to deliver real outcomes. As we scale autonomous capabilities, identity becomes critical—robust authentication, access provisioning, fine-grained authorization, and governance are essential. Microsoft Entra Agent ID is a huge step in delivering industry thought leadership with a tangible solution.” —Frank Dickson, Group Vice President of Security and Trust, IDC

Partnering with ServiceNow and Workday

And as AI agents increasingly join and reshape the workforce, it’s crucial that workforce systems tap into Microsoft Entra’s expanded identity capabilities for agents. That’s why we are excited to partner with leading providers like ServiceNow and Workday. As part of this, we’ll integrate Microsoft Entra Agent ID with the ServiceNow AI Platform and the Workday Agent System of Record. This will allow for automated provisioning of identities for future digital employees.

Learn more about Microsoft Entra Agent ID

Secure data and compliance for AI agents with Microsoft Purview 

With the adoption of generative AI apps and models—and now agents—other types of risks beyond identity have emerged such as data oversharing and leaks, new AI-specific vulnerabilities and cyberthreats, and non-compliance with stringent regulatory requirements.  

To give organizations the tools needed to help secure and govern AI agents, Microsoft Purview data security and compliance controls now extend to:

  • Any custom-built AI app, through the new Microsoft Purview software development kit (SDK).
  • AI agents built within Azure AI Foundry and Copilot Studio, natively.

This means that AI agents can now inherently benefit from Microsoft Purview’s robust data security and compliance capabilities. Developers can leverage these controls to help reduce the risk of their AI applications oversharing or leaking data, and to support compliance efforts, while security teams gain visibility into AI risks and mitigations. This integration improves AI data security and streamlines compliance management for development and security teams.
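To make the developer flow concrete, here is a minimal sketch of the pattern the SDK enables: a custom AI app routes each user prompt through a data-security check before it reaches the model. The regex classifier below is a local stand-in for the actual Purview SDK evaluation call, whose real API surface this post does not specify; all names here are illustrative.

```python
# Sketch of a prompt guard in a custom AI app. The local regex classifier is a
# STAND-IN for a real Microsoft Purview SDK evaluation call; function and
# pattern names are illustrative, not the SDK's API.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> list[str]:
    """Return the sensitive-info types detected in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def guarded_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Allow the prompt only if no sensitive data is found.
    In a real app this decision would come from the Purview policy engine."""
    findings = classify(prompt)
    return (len(findings) == 0, findings)

print(guarded_prompt("My SSN is 123-45-6789"))
print(guarded_prompt("Summarize this quarter's roadmap"))
```

The same check can run on model responses before they are shown to the user, which is how oversharing and leakage risks are reduced on both sides of the conversation.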

Learn more about Microsoft Purview

Proactively secure agents with Microsoft Defender 

Finally, to help developers address critical AI risks, Microsoft Defender now integrates AI security posture management recommendations and runtime threat protection alerts directly into Azure AI Foundry. This integration closes the tooling gap between security and development teams, so developers can proactively mitigate AI application risks and vulnerabilities from within the development environment and shrink the attack surface faster.

Learn more about Microsoft Defender

These announcements underscore our commitment to providing comprehensive security and governance for AI, with technology built on the security lessons of the past and in line with our Secure Future Initiative principles. By embedding identity, security, and governance for agents into Microsoft’s agent-building spaces through seamless integration with Microsoft Entra, Microsoft Purview, and Microsoft Defender, we are helping organizations innovate more securely with AI.

More details can be found on Tech Community.

By Rudra Mitra, Corporate Vice President, Microsoft Data Security, Governance and Compliance

The Microsoft Fabric and Microsoft Purview teams are excited to be in Las Vegas from March 31 to April 2, 2025, for the second annual and highly anticipated Microsoft Fabric Community Conference. With more than 200 sessions, 13 focused tracks, 21 hands-on workshops, and two keynotes, attendees can expect an engaging and informative experience. The conference offers a unique opportunity for the community to connect and exchange insights on key topics such as data and AI.

Microsoft Purview: Built to safeguard your AI innovation

AI innovation is impacting every industry, business process, and individual. About 75% of knowledge workers are already using some form of AI in their day-to-day work.¹ At the same time, the regulatory landscape is evolving at an unprecedented pace. Around the world, at least 69 countries have proposed more than 1,000 AI-related policy initiatives and legal frameworks to address public concerns around AI safety and governance.² With the need to adhere to regulations and policy frameworks for AI transformation, a comprehensive solution is needed to address security, governance, and privacy concerns. Additionally, with the convergence of the responsibilities of cybersecurity and data teams, customers are asking for a solution that turns data security and data governance into a team sport, addressing issues such as data discovery, data classification, data loss prevention, and data quality in a unified way. Microsoft Purview delivers a comprehensive set of solutions that address these needs, helping customers seamlessly secure and confidently activate their data in the era of AI.

We are excited to announce new innovations that help security and data teams accelerate their organization’s AI transformation:

  1. Enhancing Microsoft Purview Data Loss Prevention (Purview DLP) support for lakehouse in Microsoft Fabric to help prevent sensitive data loss by restricting access.
  2. Expanding Purview DLP policy support for additional Fabric items such as KQL databases and Mirrored databases to send users notifications through policy tips when they are working with sensitive data.
  3. Microsoft Purview integration with Copilot in Fabric, specifically for Power BI.
  4. Data Observability within the Microsoft Purview Unified Catalog.

Seamlessly secure data

Microsoft Purview is extending its proven data security value, delivered to millions of Microsoft 365 users worldwide, to the Microsoft data platform. This helps users drive consistency across their multicloud and multiplatform data estate and reduce risks related to data leaks, oversharing, and risky user behavior as more users manage and handle data in the era of AI.

1. Enhancing Microsoft Purview Data Loss Prevention (DLP) support for lakehouse in Fabric to help prevent sensitive data loss by restricting access

Microsoft Purview data security capabilities are used by hundreds of thousands of customers to protect Microsoft 365 data. Since last year’s Microsoft Fabric Community Conference, Microsoft Purview has extended Microsoft Purview Information Protection and Purview DLP policy tip value across the data estate, including Fabric. Currently, Purview DLP can show users notifications when they are working with sensitive data in a lakehouse. We are excited to share that we are enhancing the DLP value in lakehouse to prevent sensitive data leakage to guest users by restricting access. Data security admins can configure policies that limit access to internal users or data owners based on the sensitive data found. This control is valuable when a Fabric tenant includes guest users and domain owners want to limit access to internal proprietary data in their lakehouses.
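The restrict-access behavior can be thought of as a simple policy evaluation: given the sensitive info types detected in a lakehouse item and the requesting user's role, decide whether access is allowed. The sketch below models that decision locally; it illustrates the control's logic under assumed names (`DlpPolicy`, role strings), not Purview's implementation or API.

```python
# Illustrative model of a DLP "restrict access" rule for a Fabric lakehouse:
# once an item contains a sensitive info type named by the policy, guests lose
# access while internal users and data owners keep it. Names are invented.
from dataclasses import dataclass, field

@dataclass
class DlpPolicy:
    restricted_types: set[str]                      # e.g. {"PII"}
    allowed_roles: set[str] = field(default_factory=lambda: {"internal", "owner"})

def access_allowed(policy: DlpPolicy, detected_types: set[str], user_role: str) -> bool:
    """Deny guest access once any restricted sensitive type is detected."""
    if policy.restricted_types & detected_types:
        return user_role in policy.allowed_roles
    return True  # no sensitive match: this policy does not restrict access

policy = DlpPolicy(restricted_types={"PII"})
print(access_allowed(policy, {"PII"}, "guest"))     # guest is blocked
print(access_allowed(policy, {"PII"}, "internal"))  # internal user retains access
```

The key design point is that the decision keys off what was *discovered in the data*, not off static item permissions, which is what lets the control follow sensitive data wherever it lands.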

Purview DLP restricting access to a Fabric lakehouse

Figure 1. DLP policy restricting access for guest users into lakehouse due to personally identifiable information (PII) data discovered 

Learn more about Microsoft Purview Data Loss Prevention

2. Expanding DLP policy support for additional Fabric items such as KQL databases and Mirrored databases to show users notifications through policy tips when they are working with sensitive data

A key part of securing sensitive data is giving your users visibility into where and how they are interacting with it. Purview DLP policies can notify users when they are working with sensitive data through policy tips in lakehouse in Fabric. We are excited to announce that we are extending policy tip support, in preview, to additional Fabric items: KQL databases and Mirrored databases. (Mirrored database sources include Azure Cosmos DB, Azure SQL Database, Azure SQL Managed Instance, Azure Databricks Unity Catalog, and Snowflake, with more sources available soon.) KQL databases power real-time analytics in Fabric, so detecting sensitive data as it flows through real-time workloads is especially valuable for Fabric customers. Purview DLP for Mirrored databases reduces the security risk of sensitive data leakage when data is transferred in Fabric. We are happy to extend Purview DLP value to more data sources, providing end-to-end protection for customers within their Fabric environments, all to prepare for the safe deployment of AI.

Purview DLP triggering a policy tip for a KQL database

Figure 2. Policy tip triggered by Purview DLP due to PII being discovered in KQL databases.

Purview DLP triggering a policy tip for a Mirrored database

Figure 3. Policy tip triggered by Purview DLP due to PII being discovered in Mirrored databases.

3. Microsoft Purview for Copilot in Fabric

As organizations adopt AI, implementing data controls and a Zero Trust approach is crucial to mitigate risks like data oversharing and leakage, and potential non-compliant usage in AI. We are excited to announce Microsoft Purview capabilities in preview for Copilot in Fabric, starting with Copilot for Power BI. By combining Microsoft Purview and Copilot for Power BI, users can:

  • Discover data risks such as sensitive data in user prompts and responses and receive recommended actions in their Microsoft Purview Data Security Posture Management (DSPM) dashboard to reduce these risks.
  • Identify and investigate risky AI usage with Microsoft Purview Insider Risk Management, such as an inadvertent user who neglects security best practices and shares sensitive data in AI, or a departing employee who uses AI to find sensitive data and exfiltrates it through a USB device.
  • Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant usage detection.

Microsoft Purview dashboard view displaying reports on Copilot in Fabric’s interactions over time, user activities, and the data entered and shared within the copilot.

Figure 4. Purview DSPM for AI provides admins with comprehensive reports on Copilot in Fabric’s user activities, as well as data entered and shared within the copilot.

Confidently activate data

4. Data observability, now in preview, within Microsoft Purview Unified Catalog

Within the Unified Catalog in Microsoft Purview, users can easily identify the root cause of data quality issues by visually investigating the relationships between governance domains, data products, glossary terms, and the data assets associated with them through lineage. Data assets and their respective data quality are visible across your multicloud, hybrid data estate. Maintaining high data quality is core to driving trustworthy AI innovation forward, and with the new data observability capabilities in Microsoft Purview, users can investigate and resolve root-cause issues faster, improving data quality and supporting regulatory reporting requirements.
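Root-cause investigation over lineage is essentially an upstream graph traversal: starting from an asset with a failing quality check, walk its lineage edges backward to find upstream assets whose checks also fail. The sketch below models that traversal on a toy lineage graph; the asset names and quality results are invented for illustration and have no connection to Purview's internals.

```python
# Toy lineage graph: asset -> list of upstream parents. From a failing asset,
# walk upstream breadth-first and collect assets whose quality checks also
# fail -- candidates for the root cause. Illustrative data only.
from collections import deque

upstream = {
    "sales_report": ["sales_gold"],
    "sales_gold": ["sales_silver"],
    "sales_silver": ["sales_raw"],
    "sales_raw": [],
}
quality_ok = {"sales_report": False, "sales_gold": False,
              "sales_silver": False, "sales_raw": True}

def failing_upstream(asset: str) -> list[str]:
    """Collect upstream assets with failing quality checks, nearest first."""
    seen, failures = {asset}, []
    queue = deque(upstream.get(asset, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if not quality_ok.get(node, True):
            failures.append(node)
        queue.extend(upstream.get(node, []))
    return failures

print(failing_upstream("sales_report"))
```

Here the investigation stops at `sales_silver`, the furthest upstream failing asset, since its own source `sales_raw` passes its checks; that is the kind of narrowing a lineage view makes fast.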

Microsoft Purview dashboard displaying data quality within a Data Product.

Figure 5. Lineage view of data assets that showcases data quality within a Data Product.

Microsoft Purview and Microsoft Fabric can help secure and activate data

As your organization continues to implement AI, Microsoft Fabric and Microsoft Purview will serve as key solutions to safely activate your data for AI. Stay tuned for even more exciting innovations to come and check out the Fabric blog to read more about the innovations in Fabric.

Learn more about Microsoft Purview

Learn more

Explore these resources to stay updated on our product innovations in security and governance for your data:

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


¹Work Trend Index

²AI Regulations around the World – 2025