Security Vendors are Turning to GPT as a Key AI Technology


Despite concerns that generative AI chatbots such as ChatGPT can be misused to create phishing campaigns or write malware, a number of businesses are using conversational AI technology to improve their product capabilities, including in security.

ChatGPT, created by OpenAI, is built on the GPT-3 family of large language models (LLMs), which were trained on a variety of large text data sets. Because it can understand human language, ChatGPT responds to a simple question with thorough explanations and can handle complex tasks such as drafting documents and writing code. It illustrates how conversational AI can be used to organise massive amounts of data, improve the user experience, and facilitate communications.

For example, IT research and advisory firm Info-Tech Research suggests that a conversational AI tool, such as ChatGPT or an alternative, could act as the back end of an information concierge that automates the use of threat intelligence in enterprise support.
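As a rough illustration of that "information concierge" idea, the sketch below assembles structured threat-intelligence indicators into a natural-language prompt that could then be sent to a conversational AI service. The indicator fields and the `build_analyst_prompt` helper are illustrative assumptions, not part of any vendor's actual product, and the call to a real model is deliberately left out.

```python
# Hypothetical sketch: turning structured threat-intel records into a
# natural-language prompt for a conversational model. Field names and
# the helper function are illustrative assumptions, not a vendor API.

def build_analyst_prompt(indicators):
    """Format a list of indicator dicts into a question for an LLM."""
    lines = [
        f"- {ind['type']}: {ind['value']} (first seen {ind['first_seen']})"
        for ind in indicators
    ]
    return (
        "Summarise the risk posed by the following indicators and "
        "suggest next steps for a support analyst:\n" + "\n".join(lines)
    )

# Example indicators (documentation-reserved IP and .test domain).
indicators = [
    {"type": "ip", "value": "203.0.113.7", "first_seen": "2023-01-12"},
    {"type": "domain", "value": "bad-site.test", "first_seen": "2023-01-15"},
]

prompt = build_analyst_prompt(indicators)
# In a real deployment this prompt would be submitted to a conversational
# AI back end; here we only show the assembled text.
print(prompt)
```

The point of the sketch is the division of labour: the enterprise system supplies structured, trusted indicator data, while the conversational model only handles the natural-language summarisation layer on top of it.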

With Orca Security Platform, it seems like

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents

Read the original article: