European Collaboration Summit 2024 – Purview AI Hub
Mirko Pechatschek >> 24 May 2024

As an employee of Intellity GmbH, I had the pleasure of attending the European Collaboration Summit 2024 in Wiesbaden last week. As expected, the conference was all about Copilot and AI: whether in Teams, Outlook, the Power Platform, Defender, Purview, or Viva, Copilot can be found in more and more places across the Microsoft universe. Beyond that, there were other exciting topics such as multi-geo tenants and passkeys.
Since my focus is on compliance and security, I concentrated on sessions in that area. Machine learning is nothing new for users of Microsoft Purview; it has been in use there for several years in the form of trainable classifiers. By training a classifier on your own documents, matching files can be identified and then picked up in policies and the Content Explorer.
Now, at the beginning of May, the next AI-related feature arrived in Microsoft Purview as a preview: the Microsoft Purview AI Hub, which lets you view statistics on AI usage within the organization. There are currently two ways in which AI and compliance intersect. On the one hand, AI can support compliance, as with trainable classifiers; on the other hand, the use of AI introduces new compliance risks of its own. Because of these risks, the governance, risk, and compliance experts Simon Hudson and Nikki Chapple recommended in their ECS sessions that AI tools should only be rolled out once an organization's governance, risk, and compliance maturity level is at least 300.
Since Copilot "exploits" all of a user's permissions and can search and link a wide variety of critical data from a simple text prompt, inadequate governance and compliance can lead to damage far more easily. An employee may always have had access to risky data that allowed certain conclusions to be drawn, but depending on the data structure, tracking down these gaps and placing the data in its problematic context used to be difficult or even impossible. Copilot overcomes these hurdles effortlessly. That makes Copilot a powerful tool for uncovering risky security and compliance gaps, but at the same time a major risk if it is rolled out to many users without the appropriate governance.
This is where the Microsoft Purview AI Hub comes into play to promote good AI compliance. The AI Hub can, for example, record the use of sensitive data (detected via Sensitive Information Types) in the AI tools in use, potentially uncovering targeted searches for risky data. It records not only Copilot interactions but, through endpoint enrollment in Intune, even interactions with other tools such as ChatGPT.
This way, employees can be prevented from accidentally uploading or entering sensitive data into a generative AI outside the organization. Beyond that, the Purview features in Copilot itself are also being expanded: for example, content generated by Copilot automatically inherits the sensitivity labels of the data it draws on. Likewise, Azure Rights Management Service permissions are checked before results are returned, so a user does not receive data they are not authorized to access.
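To make the underlying idea concrete, here is a minimal Python sketch of such an endpoint check: scan a prompt for sensitive patterns before it may be sent to an external generative AI. The patterns and function names are illustrative inventions, not Purview's actual mechanism; real Sensitive Information Types also combine keywords, checksums, and confidence levels.

```python
import re

# Hypothetical, heavily simplified Sensitive Information Types (SITs).
# Purview's real SITs are far more robust; these regexes are only a sketch.
SENSITIVE_INFO_TYPES = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IBAN (German)": re.compile(r"\bDE\d{2}[ ]?(?:\d{4}[ ]?){4}\d{2}\b"),
    "Email Address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def detect_sensitive_info(prompt: str) -> list[str]:
    """Return the names of all SITs whose pattern matches the prompt text."""
    return [name for name, pattern in SENSITIVE_INFO_TYPES.items()
            if pattern.search(prompt)]

def allow_external_ai_prompt(prompt: str) -> bool:
    """Block the prompt if it contains sensitive data (the endpoint DLP idea)."""
    return not detect_sensitive_info(prompt)
```

A real implementation would of course run at the endpoint or network layer rather than in the prompt UI, and would log matches for review in a dashboard like the AI Hub instead of silently blocking.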
With all these new compliance features for Purview and Copilot, we hope that awareness of compliance and governance for AI will also grow in general, something many organizations still lack.
Contact us if you would like to learn how we can optimize your AI compliance and security with the Microsoft Purview AI Hub.