
Technology Agenda for Cybersecurity and GenAI

Jan 27, 2025 | Rafael Sterling

[Image: Directors reviewing technology risk dashboards in a board meeting]

A strong cyber program reduces downside, but it can also increase resilience and trust. GenAI can unlock productivity, provided the organisation treats accuracy, data, and accountability as design requirements.

Technology has been near the top of board agendas for years, but the "why" is sharper in 2025: digital systems now determine how quickly a company can operate, recover, and compete. Two topics keep surfacing in boardrooms for good reason: cybersecurity and generative AI (GenAI). Both create real opportunity. Both create real exposure. And both punish organisations that treat them as side projects rather than enterprise capabilities with clear ownership.

Cybersecurity: not only risk reduction

Board attention to cybersecurity has grown alongside the volume and impact of cyberattacks and data breaches. Those incidents have driven heavier regulatory scrutiny, lawsuits tied to valuation loss, and long-lived reputational damage. Many directors now understand that cyber risk is not "an IT problem"; it is a business continuity problem, a customer trust problem, and a governance problem.

There is also an upside story that boards sometimes miss: a mature cyber program can enable the business. When security is designed into operations (identity, access, monitoring, and recovery), it can improve resilience, strengthen trust with customers and ecosystem partners, protect valuation, and support revenue growth by making the organisation more credible in how it handles data and availability.

How boards can make cyber oversight practical

Define what matters most: the "crown jewels" (data, systems, services) and the tolerable downtime for each.

Ask for evidence, not assurances: tabletop exercises, recovery testing, and measurable improvements over time.

Clarify third-party exposure: which partners connect to your systems, and what happens when they fail.

Tie cyber to decision rights: who can accept risk, who must escalate, and what thresholds trigger board visibility (see the sketch after this list).
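To make the first and last items concrete, here is a minimal sketch, assuming a hypothetical register kept by management, of how crown jewels might be paired with tolerable downtime and the outage thresholds that trigger board visibility. The asset names, owners, and hour values are illustrative, not recommendations.

from dataclasses import dataclass

@dataclass
class CrownJewel:
    name: str                      # critical data, system, or service
    owner: str                     # accountable executive
    max_tolerable_downtime_h: int  # downtime the business says it can absorb
    board_escalation_h: int        # outage duration that triggers board visibility

# Hypothetical entries; a real register would come from the business impact analysis.
REGISTER = [
    CrownJewel("Customer payments platform", "CFO", max_tolerable_downtime_h=4, board_escalation_h=2),
    CrownJewel("Client data warehouse", "CIO", max_tolerable_downtime_h=24, board_escalation_h=8),
]

def must_escalate(jewel: CrownJewel, outage_hours: float) -> bool:
    """Return True once an outage has run long enough to require board visibility."""
    return outage_hours >= jewel.board_escalation_h

The value of writing it down this way is less the code than the forced clarity: each asset has a named owner, an agreed tolerance, and a threshold that removes ambiguity about when the board hears about an incident.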

GenAI: big upside, new failure modes

GenAI has drawn intense attention from business leaders, regulators, and the public. Its most immediate promise is straightforward: help teams move faster by assisting with routine work such as drafting, summarising, searching, and synthesising information. For many organisations, early value comes from internal productivity (reducing cycle time) before expanding into customer-facing use cases.

But GenAI also introduces risks that boards cannot treat as theoretical. Media reporting has documented cases where GenAI systems produced misleading or plainly incorrect information. In a corporate setting, that can translate into bad decisions, flawed customer communications, compliance missteps, and operational errors, especially when outputs are used without appropriate review, testing, or guardrails.

Workforce and ethics: decisions, not slogans

Workforce questions remain central. GenAI can change which tasks are automated and which roles evolve, raising legitimate concerns about displacement and reskilling. Boards can help management move from broad statements to concrete plans: which job families will be affected first, what "augmented" work looks like in practice, and how the organisation will invest in training and change management.

Ethical questions also need operational answers. Which tasks should remain human-led? What level of transparency is expected when AI contributes to customer interactions or decisions? How will the company prevent systems from bypassing human oversight? These are governance issues, not philosophy debates, because they ultimately determine accountability when something goes wrong.

Board prompts for GenAI governance

Inventory and classification: what GenAI tools and use cases exist today, and which are high-risk?

Data boundaries: what information is prohibited from entering GenAI systems, and how is that enforced? (A simple enforcement sketch follows this list.)

Quality controls: what testing is required before launch, and what monitoring exists after launch?

Human oversight: which outputs must be reviewed, and who is responsible for exceptions?

Incident response: how will the organisation handle incorrect outputs, data leakage, or misuse?
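On the data-boundary question, enforcement usually means a technical gate in front of the GenAI tool, not just a policy document. The sketch below is a minimal illustration, assuming a hypothetical list of prohibited patterns and a hypothetical send function; the patterns shown are examples, not a complete or recommended control set.

import re

# Hypothetical patterns for data that must never enter a GenAI prompt.
PROHIBITED_PATTERNS = {
    "payment_card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any prohibited data types found in a prompt."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items() if pattern.search(prompt)]

def submit_if_allowed(prompt: str, send) -> str:
    """Block prompts that violate the data boundary; otherwise forward them to the GenAI system."""
    violations = check_prompt(prompt)
    if violations:
        # In practice a blocked prompt would also feed the monitoring and incident process.
        raise ValueError("Prompt blocked: contains prohibited data (" + ", ".join(violations) + ")")
    return send(prompt)

Pattern matching of this kind is deliberately crude; the point for boards is not the mechanism but the question it answers: is the stated data boundary actually enforced somewhere, and is a violation visible when it happens?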

Where cyber and GenAI collide

Boards should also watch the intersection: GenAI can increase the attack surface through new tools, new integrations, and new data pathways. At the same time, attackers can use GenAI to scale social engineering and speed up reconnaissance. Treating GenAI as "just another app" can create blind spots; treating it as a controlled capability, with clear access, logging, and change management, keeps risk governable.
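One way to make "controlled capability" tangible is to route GenAI usage through a gateway that enforces an allow-list of approved tools and records who called what, and when. The sketch below is a hypothetical illustration of that access-and-logging idea; the tool name and log fields are assumptions, not a reference to any specific product.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai_audit")

APPROVED_TOOLS = {"internal-drafting-assistant"}  # hypothetical allow-list maintained under change management

def call_genai(tool: str, user: str, prompt: str, send) -> str:
    """Allow only approved tools and leave an audit trail for every call."""
    if tool not in APPROVED_TOOLS:
        audit_log.warning("blocked tool=%s user=%s", tool, user)
        raise PermissionError(tool + " is not an approved GenAI tool")
    audit_log.info("call tool=%s user=%s at=%s prompt_chars=%d",
                   tool, user, datetime.now(timezone.utc).isoformat(), len(prompt))
    return send(prompt)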

In the last week of January 2025, the signal for boards is clear: technology is both an engine of advantage and a source of compounding risk. The right response is not fear or hype. It is governance that is specific (decision rights, controls, measurement, and accountability) so the company can pursue upside without gambling the brand on preventable failures.
