CyberSecurity SEE

ChatGPT and Google Gemini Successfully Pass Cybersecurity Exams

A collaborative study between the University of Missouri and Amrita University in India has shed light on the potential of large language models (LLMs) like ChatGPT and Google Gemini to enhance ethical hacking practices and strengthen cybersecurity defenses. The study, led by Prasad Calyam, Director of the Cyber Education, Research and Infrastructure Center at the University of Missouri, examined how AI-driven tools can help safeguard digital assets against malicious cyber threats by evaluating the models' performance on the Certified Ethical Hacker (CEH) exam.

Ethical hacking, a proactive approach to identifying vulnerabilities in digital defenses, was put to the test in this study: questions from the CEH exam were used to assess how well ChatGPT and Google Gemini could explain common cyber threats and recommend protections against them. Both models demonstrated proficiency in elucidating concepts like the man-in-the-middle attack and proposing preventive measures, with Google Gemini showing slightly higher accuracy rates than ChatGPT.

One interesting aspect of the research was the introduction of confirmation queries to the AI models after their initial responses, which led to corrections and improvements in their accuracy and reliability. This iterative query processing mechanism not only enhances the AI models’ performance but also reflects the problem-solving approach of human experts in cybersecurity, emphasizing the importance of a collaborative approach between AI-driven automation and human oversight.
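The confirmation-query loop described above can be sketched in a few lines of code. This is a minimal illustration, not the researchers' actual methodology: the `ask_model` callable is a hypothetical stand-in for a real LLM API call, stubbed here so the example is self-contained.

```python
def confirm_and_refine(ask_model, question, rounds=1):
    """Ask a question, then issue follow-up confirmation queries,
    keeping the model's latest (possibly corrected) answer."""
    answer = ask_model(question)
    for _ in range(rounds):
        follow_up = (
            f"Question: {question}\n"
            f"Your previous answer: {answer}\n"
            "Are you sure? Re-check and correct it if needed."
        )
        answer = ask_model(follow_up)
    return answer

# Stub model for demonstration: it gives a flawed first answer,
# then corrects itself when challenged with a confirmation query.
def stub_model(prompt):
    if "Are you sure?" in prompt:
        return "ARP spoofing is a common man-in-the-middle technique."
    return "Man-in-the-middle attacks only affect wireless networks."

print(confirm_and_refine(stub_model, "How do man-in-the-middle attacks work?"))
```

In a real deployment, `ask_model` would wrap a call to an LLM provider's API, and the confirmation prompt would echo the prior answer back to the model, mirroring how a human analyst double-checks a colleague's assessment.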

Despite the promising performance of AI tools like ChatGPT and Google Gemini in ethical hacking, caution was advised against solely relying on these tools for comprehensive cybersecurity solutions. Prasad Calyam emphasized the necessity of human judgment and problem-solving skills in devising robust defense strategies and warned against the risks of over-reliance on potentially flawed AI advice, which could leave systems vulnerable to attacks.

Moving forward, the study pointed towards the importance of establishing ethical guidelines for the deployment of AI in cybersecurity and further research to enhance the reliability and usability of AI-driven ethical hacking tools. The researchers highlighted the need for improvements in AI models’ handling of complex queries, expansion of multi-language support, and the development of robust legal and ethical frameworks to ensure responsible deployment of AI in cybersecurity practices.

Collaborative efforts between academia, industry stakeholders, and policymakers will play a crucial role in shaping the future of AI in cybersecurity, fostering innovation while safeguarding digital infrastructure against emerging threats. With ongoing advancements, AI models like ChatGPT and Google Gemini have the potential to contribute significantly to ethical hacking practices and help defenses keep pace with evolving cyber threats.
