Security Copilot is designed to summarize and make sense of threat intelligence so that businesses can make informed decisions about how to protect themselves. It collates data from Microsoft security products such as Microsoft Defender and Microsoft Sentinel, as well as from external threat intelligence services such as Cisco's Threat Grid and Anomali's ThreatStream.
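To give a sense of what that kind of collation involves, here is a minimal sketch in Python; the feed names and indicator format are hypothetical illustrations, not Security Copilot's actual interfaces:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Indicator:
    """A single threat indicator, e.g. a malicious IP or file hash."""
    value: str
    kind: str    # "ip", "domain", "sha256", ...
    source: str  # which feed reported it


def collate(feeds: dict[str, list[Indicator]]) -> dict[str, list[str]]:
    """Merge indicators from several feeds, keyed by indicator value.

    An indicator reported by multiple independent sources is a stronger
    signal, so we record every feed that saw it.
    """
    merged: dict[str, list[str]] = {}
    for feed_name, indicators in feeds.items():
        for ind in indicators:
            merged.setdefault(ind.value, []).append(feed_name)
    return merged


# Hypothetical sample data: two feeds reporting the same suspicious IP.
feeds = {
    "internal_telemetry": [Indicator("198.51.100.7", "ip", "internal_telemetry")],
    "external_feed": [Indicator("198.51.100.7", "ip", "external_feed")],
}

# Indicators corroborated by two or more feeds deserve attention first.
corroborated = {v: srcs for v, srcs in collate(feeds).items() if len(srcs) > 1}
print(corroborated)  # {'198.51.100.7': ['internal_telemetry', 'external_feed']}
```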
Microsoft’s pitch for Security Copilot is that the tool is better equipped than competing products to correlate data on attacks and to prioritize security incidents. The company claims that generative AI models from OpenAI – specifically the recently launched text-generating GPT-4 – make it possible to surface patterns and insights that other tools miss.
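As a rough sketch of how a tool might lean on GPT-4 for that kind of correlation, here is an example using OpenAI's public Python SDK; the prompt and the alert data are illustrative assumptions, not Microsoft's actual pipeline:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical alerts an analyst might feed in for triage.
alerts = """\
1. 03:12 UTC - failed admin logins from 198.51.100.7 (40 attempts)
2. 03:19 UTC - new inbox forwarding rule created for finance@example.com
3. 03:25 UTC - outbound transfer of 2 GB to an unrecognized host
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Correlate the alerts, "
                    "say whether they form a single incident, and rank "
                    "the severity from 1 (low) to 5 (critical)."},
        {"role": "user", "content": alerts},
    ],
)
print(response.choices[0].message.content)
```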
Microsoft Security is unveiling a new tool aimed at improving the state of security across organizations. Security Copilot will give defenders the ability to manage threats faster and more efficiently, ultimately making the world a safer place.
Security Copilot, a security platform developed by Microsoft, layers GPT-4 on top of a custom model that incorporates security-specific skills and deploys queries pertinent to cybersecurity. This combination is meant to give users an enhanced experience and help them stay ahead of the threats they face on a daily basis.
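One way to picture such a "skill" layer is as a set of security-specific prompt templates wrapped around a general-purpose model. The sketch below is a toy illustration under that assumption; the skill names and templates are invented, not Microsoft's:

```python
# Each skill maps a raw analyst question onto a security-specific prompt.
SKILLS = {
    "triage": "Assess this alert and assign a severity from 1 to 5: {query}",
    "hunt": "Suggest hunting queries to investigate the following: {query}",
    "summarize": "Summarize this incident for an executive audience: {query}",
}


def build_prompt(skill: str, query: str) -> str:
    """Wrap an analyst's question in the chosen skill's template."""
    template = SKILLS.get(skill)
    if template is None:
        raise ValueError(f"unknown skill: {skill!r}")
    return template.format(query=query)


print(build_prompt("triage", "40 failed admin logins from one IP in 7 minutes"))
```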
Critics of language model-driven services often worry that the models are trained on customer data, raising questions about how that data might be exposed or reused. Microsoft insists that Security Copilot's model is not trained on customer data, dispelling one of the most common criticisms of these services.
Microsoft's custom security model is advertised as catching what other approaches might miss. Text-generating models are not immune to mistakes, however: they can confidently produce incorrect output, so it remains unclear how effective the model will be in production.
Microsoft’s own admission that its custom security model doesn’t always get things right suggests that automated analysis may not be the most reliable way to protect users from online threats. Still, even though AI-generated content can contain mistakes, it may be better than nothing at all.
Creating a good security policy begins with understanding your users. By taking the time to get to know them, you can tailor the policy to their habits and needs.
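For instance, one simple way to encode that kind of personalization is a per-user policy object; the sketch below is a hypothetical structure, not tied to any Microsoft product:

```python
from dataclasses import dataclass


@dataclass
class UserPolicy:
    """Per-user security settings derived from observed habits."""
    username: str
    travels_often: bool    # sign-in locations vary widely for road warriors
    handles_finance: bool  # stricter rules where the blast radius is larger

    def require_mfa_on_new_device(self) -> bool:
        # Finance users always re-verify; frequent travelers do too,
        # since location-based checks alone are weaker for them.
        return self.handles_finance or self.travels_often


policy = UserPolicy("jdoe", travels_often=True, handles_finance=False)
print(policy.require_mfa_on_new_device())  # True
```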