Many organizations overlook the importance of including artificial intelligence (AI) in their Windows security policy. With AI now embedded in a growing range of applications, it is crucial to review and update security policies to address the risks these platforms introduce.
Organizations should establish clear limits and guidelines in their security plans about what information may be entered into platforms or websites that could store or share sensitive data. Confidential information should never be entered into any application that lacks clearly defined protections for handling such data. Fortunately, sample AI security policies are already available online that illustrate which concepts to cover.
When introducing any new AI software or tool into a network, organizations should prioritize evaluating its security. The software should be assessed for fitness for use, including whether it operates reliably without crashes or error messages. Vendors should provide a privacy policy and terms of service, along with information on how they handle updates, security, bug reports, and platform improvements. It is also important to determine whether the software includes controls that let administrators limit or disable its AI integration.
A clear AI policy should emphasize that no confidential client information may ever be uploaded to an AI interface, and it should spell out what is allowed to be entered into the software. Employees should be prohibited from sharing access with unauthorized individuals, and each employee using AI tools should be required to review and sign the policy. Employee training can then reinforce compliance.
In addition to standalone AI tools, organizations should consider the implications and limitations of applications that embed AI. For example, Microsoft Windows already exposes Group Policy controls to limit how applications such as the Edge browser and the Bing search engine connect to AI-driven features. Enterprises should evaluate the implications before fully rolling out these platforms.
Basic settings can be adjusted to control AI integration. For instance, organizations can block requests to change the default browser to Microsoft Edge and the default search engine to Bing; these settings can be configured through Group Policy or Intune. To disable Bing Chat AI in the search field on the taskbar, press Windows key + I to open Settings, navigate to Privacy & security > Search permissions, and toggle off the Show search highlights option.
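For administrators who would rather deploy this change centrally than walk each user through Settings, the toggle above corresponds to a machine-wide policy registry value. The sketch below assumes the value name (EnableDynamicContentInWSB) used by the "Allow search highlights" policy in recent ADMX templates; verify it against your own template version before deploying:

```reg
Windows Registry Editor Version 5.00

; Policy equivalent of the "Allow search highlights" Group Policy setting.
; Setting this DWORD to 0 disables search highlights (including the Bing
; Chat entry point in the taskbar search flyout) for all users on the
; machine. Value name assumed from recent ADMX templates -- verify before
; deploying broadly.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Windows Search]
"EnableDynamicContentInWSB"=dword:00000000
```

The same setting can be pushed through Intune or a Group Policy preference; deleting the value returns the toggle to user control.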
Future Windows releases will continue to introduce AI components. For example, Windows Copilot is slated to arrive in preview for Windows 11 in June, and Microsoft plans to support Bing Chat plugins. Organizations must remain diligent in reviewing policies and adjusting settings to restrict or limit these AI components.
Currently, Bing Chat can be limited using Group Policy settings. The Chat icon can be disabled by downloading and configuring the ADMX Templates for the Windows 11 October 2021 Update [21H2] from the Official Microsoft Download Center. In the Group Policy Management Console, create a new Group Policy Object and navigate to Computer Configuration\Administrative Templates\Windows Components\Chat to configure the policy settings for the Chat icon on the taskbar.
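The Chat policy is backed by a single registry value that can also be deployed directly, for example via a .reg import or a Group Policy preference. This is a sketch assuming the value name and state meanings documented with the 21H2-era ADMX templates (1 = Show, 2 = Hide, 3 = Disabled); confirm them against the template version you download:

```reg
Windows Registry Editor Version 5.00

; Policy behind "Configures the Chat icon on the taskbar"
; (Computer Configuration\Administrative Templates\Windows Components\Chat).
; Assumed states from the 21H2-era templates: 1 = Show, 2 = Hide,
; 3 = Disabled -- confirm against your ADMX version before use.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Windows Chat]
"ChatIcon"=dword:00000003
```

Setting the value to 3 removes the icon and prevents users from re-enabling it; removing the value altogether hands the choice back to the user.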
In conclusion, the integration of AI into networks and desktops is imminent. Organizations must proactively build their policies and review their processes to ensure readiness for the increasing use of AI in the technology landscape. By including AI in their Windows security policies, organizations can mitigate risks and protect confidential information from potential data breaches or misuse.
