By Steve Tait - Chief Technology Officer
August 13, 2025
Should we start talking about Generation AI alongside Generative AI?
On March 14, 2023, GPT-4 was launched, propelling the academic and niche discussions about generative AI right into the mainstream. Within a few days it became apparent that this was not another incremental step, but a giant leap forward in human-computer interaction. However, because of my “veteran status” (thanks, Skyhigh Security), I was also immediately concerned about how we manage the risks. The more I thought about it, the riskier it seemed. Yet not using this technology is not an option. We must use it, but we must do so securely and with intention.
Then it struck me. If I had started university in 2023, I would enter the workforce next year (2026). My entire university life would have been lived with this AI assistant by my side. It would have helped me find things, it would have helped me analyze data, and it would probably have helped me create quite a bit of content. It may even have planned a good night out! This is Gen(eration)AI.
So, Gen(eration)AI enters the workforce next year. Are we ready?
At university you will understand that these models can hallucinate and can be manipulated into producing bad content. You will also know that the output can be, let’s say, a bit iffy! But how often are you thinking about handling corporately sensitive data? Never, of course: you’re at university.
So into the world of work you go, and we typically provide you with loads of corporate training in the form of documentation and the standard “interactive” video. Then you sign the Acceptable Use Policy (AUP).
For your whole university career, AI has been there, helping. And just because you signed the AUP, does that mean you will suddenly stop…pause…and not ask ChatGPT to analyze that set of data your boss asked for in a few hours? No chance! It’s not malicious, it’s just human nature. For me and my peers this is new technology arriving after more than 25 years in industry, but Gen(eration)AI are AI natives.
So what do we do? Of course we still need the training, we need the AUP, but as this workforce enters the workplace the most effective strategy is one that can contain risk and train the new workforce as they go about their roles. Tick-box training up front is simply not going to be sufficient. This means a multi-step approach enabled by Web Gateway, CASB and DLP technologies, tailored for AI and working in concert. My recommendation for this approach would be:
- Discovery, classification and restriction: Discover what is being used, determine your risk appetite, and use web policies to block access to AI services that fall outside it
- Perform DLP on what is permitted and coach in real time: Effective DLP can block and, importantly, provide ‘coaching’ to users about why something was blocked, enabling the user to learn in real time, which is far more effective than up-front training
- Advanced copilot control: For corporate copilot applications, more granular controls can be enforced, such as preventing learning through automated document classification (e.g. AIP classification for Microsoft Copilot). Effective DLP engines can do this automatically and tell the user what is happening, reinforcing the lesson
- Targeted follow-up / retraining: User behavioral analytics can identify cohorts of users who typically breach DLP conditions, enabling targeted follow-up training
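To make the second step concrete, here is a minimal sketch of the “block and coach” pattern: scan an outbound AI prompt against sensitive-data rules, block on a match, and return a coaching message so the user learns in the moment. The rule names, patterns, and `inspect_prompt` function are illustrative assumptions, not Skyhigh Security’s implementation or any real DLP engine’s API.

```python
import re

# Hypothetical DLP rules: (name, detection pattern, coaching message).
# Real engines use far richer detection (exact-data matching, fingerprints,
# classification labels); regexes here are only for illustration.
RULES = [
    ("credit_card", re.compile(r"\b(?:\d[ -]?){13,16}\b"),
     "Payment card numbers must not be shared with external AI services."),
    ("classification", re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b"),
     "This content is classified and cannot leave the corporate boundary."),
]

def inspect_prompt(prompt: str):
    """Return (allowed, message) for an outbound AI prompt."""
    for name, pattern, coaching in RULES:
        if pattern.search(prompt):
            # Block, and coach the user on *why* in real time.
            return False, f"Blocked ({name}): {coaching}"
    return True, "Allowed."

allowed, message = inspect_prompt("Summarize this CONFIDENTIAL roadmap for me")
print(allowed, message)
```

The point of returning the coaching message alongside the verdict, rather than silently dropping the request, is exactly the real-time learning the bullet above describes.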
My conclusion: yes, we need to start talking about Gen(eration)AI. As with every generation before it, this generation will transform business and will one day be writing blogs like this about the next generation, or more likely just thinking about the blog, which AI will automatically create! Our responsibility to this new workforce is to enable Gen(eration)AI to make this impact safely and without compromising the sensitive data of the organizations they join. AI-focused SSE technologies are not a nice-to-have; they are essential to achieving this.
About the Author
Steve Tait
Chief Technology Officer
Steve is an executive technology leader with over 25 years’ experience delivering enterprise software solutions across a broad range of industry sectors, including security, defense, financial services, and healthcare. Specializing in the delivery of mission-critical applications, Steve has held several executive and senior leadership positions at FTSE 100 companies and SMEs, both public and private-equity funded.