...and a growing concern for security leaders
In the last few months, we've seen rapid growth in the development and popularity of generative artificial intelligence (AI) software. Everyone has had a go at tools like ChatGPT or Copilot, be it for fun, a test drive, a university dissertation, and especially for business. This form of machine learning, capable of producing all sorts of content, has been a great hit, but security leaders have started to wonder: who are we feeding our data to?
Since COVID-19, cybercrime has risen by 600%, and that number is predicted to grow even further, especially now with AI. Meta reported that within a single month it blocked 1,000 malicious web addresses claiming to be linked to AI tools. Hackers are making the most of this latest craze by tricking people into installing malicious software that pretends to offer AI functionality, handing the attackers free access to their data. Like lions in the wild, they lie in wait with malicious links, hoping someone will fall victim.
In 2021, 35% of businesses used AI, and this year around 85% of businesses are looking to invest in it. Departments such as customer operations, marketing and sales, software engineering and R&D are using generative AI to increase productivity and efficiency, which shows how dependent the workforce is becoming on these tools. However, as a Forrester analyst put it recently: “The problem is that if a user asks for help editing a document full of company secrets, the AI might then learn about those secrets -- and blab about them to other users in the future.” So if all of these departments are starting to integrate these platforms into their work, how does the security team keep track of it?
This trend is pushing the limits of cybersecurity while the data risks involved are largely ignored. If departments are feeding business data into these highly capable tools, who is keeping track of what data is going out and onto which platform? According to a recent survey, 70% of professionals are using these applications without telling their bosses. That lack of visibility leaves security controls exposed. It takes us back to the old days of simply trusting staff to keep data safe and hoping the person on the other side of the screen isn't about to steal it. Clearly, that's not enough nowadays.
So, what can IT Security leaders do?
- Build a list of all the generative AI platforms your staff are using. Read the security policies published by these vendors to find out whether the data your staff share is used or stored, and for how long. Working through these ‘boring’ documents will give you a much clearer idea of how each user agreement works and how it affects your regulatory compliance requirements. (A minimal inventory sketch follows after this list.)
- Train your staff, and I don't mean one company call to make them aware of what they should and shouldn't be doing. Train them, educate them, and make sure they understand what generative AI is in the first place. Show them what can happen if sensitive data is put into these platforms, and how hackers can use it against your business. Use the list above as an example and share your findings and learnings. Make them aware of the rise in phishing attacks: hackers use these tools to imitate websites and make the content look very credible, hoping someone will fall for it.
- Keep these applications up to date; new releases often include security fixes and optimisations.
- IT leaders have also started using AI tools to help them develop software, but these tools are not completely reliable. They help build applications faster, yet there is no guarantee that nothing has been missed. If you are using AI to write code, double-check the output and make sure the process is ethical. (A simple review sketch follows after this list.)
- Create a business infrastructure for rolling out these tools in which your IT team has full access and visibility at all times. If updates or bug fixes are needed, IT can stay on top of them without wondering what is happening in which department. (A simple gateway sketch follows after this list.)
- Consider using an endpoint management tool to keep all devices ahead of cyberattacks, or a security tool that controls and prevents oversharing, to avoid data leaks. (A basic oversharing check is sketched after this list.)
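For the inventory step above, something as simple as a structured record per tool can go a long way. The sketch below is a minimal, hypothetical example in Python; the tool names, retention periods and policy URLs are placeholders, not real vendor terms.

```python
# A minimal sketch of an internal inventory of generative AI tools in use.
# Tool names, retention periods, and policy URLs are illustrative placeholders,
# not vendor statements - verify against each vendor's own terms.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                 # product your staff are using
    department: str           # who is using it
    data_shared: str          # what kind of data goes into prompts
    used_for_training: bool   # does the vendor train on submitted data?
    retention_days: int       # how long prompts are stored, per vendor policy
    policy_url: str           # where you verified the above

inventory = [
    AIToolRecord("example-chat-assistant", "Marketing",
                 "campaign copy drafts", used_for_training=True,
                 retention_days=30, policy_url="https://example.com/terms"),
    AIToolRecord("example-code-assistant", "Engineering",
                 "source code snippets", used_for_training=False,
                 retention_days=90, policy_url="https://example.com/privacy"),
]

# Flag anything that conflicts with your compliance requirements.
for tool in inventory:
    if tool.used_for_training or tool.retention_days > 30:
        print(f"Review needed: {tool.name} ({tool.department})")
```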
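For double-checking AI-assisted code, one practical option is to run a static security scanner over anything the AI helped write before it is merged. The sketch below assumes the open-source Bandit scanner is installed (pip install bandit), and the directory path is purely illustrative.

```python
# A rough sketch: run a static security scan over AI-assisted code before merge.
import subprocess
import sys

def scan_generated_code(path: str) -> bool:
    """Return True if the scan finds no issues, False otherwise."""
    result = subprocess.run(
        ["bandit", "-r", path, "-q"],   # -r: recurse into the directory, -q: quiet output
        capture_output=True, text=True,
    )
    if result.returncode != 0:          # Bandit exits non-zero when it finds issues
        print("Potential issues found - review before merging:")
        print(result.stdout or result.stderr)
        return False
    return True

if __name__ == "__main__":
    ok = scan_generated_code("src/ai_generated/")   # placeholder path
    sys.exit(0 if ok else 1)
```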
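For the infrastructure point, a common pattern is to route every staff prompt through a single internal gateway so IT always has visibility of what leaves the business. The sketch below is a simplified illustration; send_to_provider is a placeholder for whichever vendor API your organisation has approved, not a real integration.

```python
# A simplified sketch of routing staff prompts through one internal gateway
# so IT retains visibility of what is sent to external AI platforms.
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_gateway.log", level=logging.INFO)

def send_to_provider(prompt: str) -> str:
    # Placeholder: call your approved vendor's API here.
    return "response from approved AI provider"

def ai_gateway(user: str, department: str, prompt: str) -> str:
    """Single choke point: every request is logged before it leaves the business."""
    logging.info(
        "%s | user=%s | dept=%s | prompt_chars=%d",
        datetime.now(timezone.utc).isoformat(), user, department, len(prompt),
    )
    return send_to_provider(prompt)

print(ai_gateway("j.smith", "Marketing", "Draft a product announcement"))
```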
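Finally, for controlling oversharing, even a basic check on outgoing prompts can catch obvious leaks before they reach an AI tool. The patterns below are illustrative only; a real data loss prevention control would be far more thorough.

```python
# A very basic sketch of an oversharing check: scan outgoing prompts for
# patterns that look like sensitive data before they reach an AI tool.
import re

SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key or token": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.I),
    "confidential marker": re.compile(r"\b(confidential|internal only)\b", re.I),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Please summarise this CONFIDENTIAL board report for me."
hits = check_prompt(prompt)
if hits:
    print(f"Blocked: prompt appears to contain {', '.join(hits)}")
else:
    print("Prompt allowed")
```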