Cybersecurity experts from around the world gathered in Nashville, Tennessee from 25-27 October for this year’s ISC2 Secure Congress. It became clear that the information and IT security community cannot ignore the topic of Artificial Intelligence, so this year’s focus was clearly on the security of AI.

Machine learning and generative AI security

Several presentations focused on the security of machine learning and generative AI. In addition to how generative AI models can be tricked into unwanted (and potentially security-critical) outputs through clever prompt engineering – and what AI vendors should do about it – there were also several papers that addressed safe use.

AI models learn from usage data

So what should enterprise users do – or not do – to ensure the safest possible use of generative AI? The most important rule of conduct: in principle, NO confidential content – such as business secrets, personal data or intellectual property – should be used in prompts to AI models. The reason for this is obvious, but many users do not realise it: AI models learn from usage data. User input can be used by the model maker to further train the models. A prominent example of this is the leak of confidential information from electronics manufacturer Samsung, whose employees revealed internal company information through careless use of ChatGPT (see https://gizmodo.com/chatgpt-ai-samsung-employees-leak-data-1850307376).

Train employees in the use of AI models

Companies should therefore train their employees in the use of generative AI models such as ChatGPT. There was a lively discussion on the sidelines of one presentation about whether the issue of security and generative AI models might cause companies to rethink their use of cloud services. Today, large AI models are almost exclusively cloud-based. However, if there are concerns about security, could it be that companies will start to run AI models in the traditional way, in their own data centre (“on premises”), in order to benefit from the advantages of generative AI models, but in their own protected operating environment? Although this is likely to be an option only for larger user organisations, we are already reading about the first such deployment scenarios. Many experts are already predicting significant demand for such deployment scenarios.

Bitcoin payments are not always anonymous

And what other exciting topics were there? An extremely informative presentation by Andy Greenberg shed light on why bitcoin payments are not always anonymous – and how payment flows can be traced by law enforcement. A number of presentations were dedicated to the security of industrial systems (OT) as well as current attack vectors on IT systems – in particular, a presentation on lessons learned from past cyber-attacks attracted a lot of interest.

All in all, it was two and a half very exciting days for us, which provided valuable inspiration for the coming weeks and months.

More articles

ISO 27001 requires you to conduct an internal audit of your ISMS on a regular basis to verify conformity with the standard. Although it is called an “internal audit”, you can – and should –...
Checks of IT security are useful and advisable for a variety of reasons. External reasons such as regulatory requirements – the KRITIS regulation or the IT security law are examples – may require such reviews....
Instant 27001 is a solution that saves an enormous amount of time and money when setting up and operating an ISMS according to ISO 27001. Users benefit not only from the fact that Instant 27001...