Exclude Specific Content from Box AI
As a law firm, we need to exclude certain client data from Box AI because legal agreements with those clients prohibit using AI on their data. Currently, we would have to turn off Box AI for the entire account to meet this requirement, which is not ideal.
-
Anonymous
commented
Wanted to share that I am working with an organization who themselves want to leverage AI, but have certain clients/customers who do not want AI touching any of their content. In an ideal world they could use classification labels to visually restrict AI usage on content, and also prevent AI from being used on any single document or set of documents carrying that label. Within their business model they have consultants who work with some clients who allow AI and others who don't, so more granular AI permissioning based on content rather than user would be ideal.
-
Jimi Hendricks
commented
We also want to have more granular access to what folders Box AI can process due to compliance requirements and general good practice.
As suggested by BlackRoseEvy, using classification labels would be a good way to implement this feature.
Thanks for the consideration.
-
BlackRoseEvy
commented
We are facing challenges in gaining confidence to enable the AI tool due to the nature of its current access controls. While we can exclude or allow individual user access to AI, the concern primarily revolves around protecting specific content. For example, even if AI is disabled for an individual, any shared content they own can still be processed by AI if accessed by another user with AI enabled.
We are awaiting a response from the privacy office regarding acceptance of the risks associated with AI and HIPAA-protected content. If the risk is not approved, we may need to disable AI features for all users who have access to HIPAA data. Unfortunately, this would prevent those users from leveraging AI features for their non-HIPAA-related content as well.
It would be highly beneficial to have greater flexibility in managing AI accessibility at a more granular level. Are there any roadmap items or plans to address the following potential solutions?
• Allowing content owners to tag specific files to exclude them from AI processing
• Allowing content owners to tag folders for AI exclusion
• Leveraging classification policies to prevent specific data types from being processed by AI
These capabilities would help ensure compliance while enabling broader use of AI for non-sensitive content.
Thank you.
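To illustrate the tagging idea above: until Box offers this natively, a team routing content to AI through its own integration could gate each file on its classification label before any AI call is made. This is only a sketch of that client-side guard; the label names and the `BLOCKED_LABELS` set below are hypothetical examples, not Box-defined values.

```python
# Hypothetical client-side guard: block AI processing for files whose
# classification label appears in a deny list. Label values are examples only.
BLOCKED_LABELS = {"No AI", "HIPAA", "Client Restricted"}

def ai_allowed(classification_label):
    """Return True if content with this classification may be sent to AI.

    Unlabeled content (None) is treated as allowed here; a stricter policy
    could default to denying unlabeled content instead.
    """
    if classification_label is None:
        return True
    return classification_label not in BLOCKED_LABELS

def filter_ai_eligible(files):
    """Keep only files whose classification permits AI processing.

    Each file is represented as a dict, e.g. {"id": "123", "classification": "HIPAA"}.
    """
    return [f for f in files if ai_allowed(f.get("classification"))]
```

A stricter variant could flip the default so that only explicitly approved labels pass, which matches the "deny unless cleared" posture compliance teams usually prefer for HIPAA data.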
-
Anonymous
commented
Restrict content processing by Box AI via a Shield policy, similar to how a classification can restrict Box Sign requests.