
AI is a product, not a proxy: Rethinking accountability in cybersecurity

6 August, 2025
Artificial Intelligence (AI) is no longer a futuristic concept. It is embedded in the systems we use every day. From fraud detection in banking to patient triage in hospitals, AI is shaping decisions that affect lives. But as we integrate these technologies into our cybersecurity frameworks, we must ask: who is accountable when AI gets it wrong?

AI is not autonomous. It does not operate in a vacuum. It is built, trained, and deployed by people, and that means accountability must remain human. The idea that AI can be a standalone solution is not only misleading but dangerous. We must view AI as a product: a tool, a component of a broader system that requires oversight, governance, and ethical scrutiny.

The illusion of autonomy

There is a growing tendency to treat AI as a decision-maker rather than a decision-support tool. But when an AI system flags a user as a threat, denies access to a service, or fails to detect a breach, who is responsible? The developer? The vendor? The enterprise that deployed it?

This ambiguity is particularly risky in sectors like healthcare, finance, and public services, where the consequences of false positives – or false negatives – can be severe. In 2023, a UK-based AI system used by the Department for Work and Pensions to detect benefits fraud was found to disproportionately flag individuals from minority communities, leading to wrongful investigations and public backlash. The courts ruled that the system lacked transparency and violated data-protection laws, setting a precedent for AI accountability.

Closer to home, South Africa has seen its own share of AI-related ethical concerns. In 2024, a Pietermaritzburg law firm faced disciplinary action after submitting court papers containing fabricated legal citations, likely generated by an AI tool. Of the nine cases cited, only two could be verified, and only one was correctly referenced. The presiding judge described the conduct as “irresponsible and downright unprofessional,” and referred the matter to the Legal Practice Council. In another case, an algorithm used by the South African Social Security Agency (SASSA) to verify income for social-grant eligibility was found to exclude millions of eligible applicants due to flawed logic and misinterpretation of informal financial support. Applicants received automated rejections without recourse to human review, prompting criticism from civil society and digital rights experts.

These incidents highlight the risks of over-reliance on AI without human oversight, and the very real consequences of ethical lapses.

African realities: Context matters

In Africa, the stakes are even higher. Many AI models are trained on global datasets that don’t reflect local realities. A phishing detection model trained on European data might miss a South African scam mimicking a local bank. This is not just a technical flaw – it is a contextual failure.

South Africa’s Information Regulator recorded over 1 200 data-breach notifications between April 2023 and March 2024. Many of these incidents involved automated systems that failed to detect or respond to threats in time. These are not just technical failures – they are governance failures.

While national bodies like the CSIR are working to map local threat landscapes, there remains a significant gap in African-centric threat intelligence. Without this, AI systems risk becoming blunt instruments, ineffective at best, harmful at worst.

In healthcare, for example, Professor Keymanthri Moodley, head of medical ethics at Stellenbosch University, has warned that South Africa’s lack of AI-specific regulation and representative medical datasets could lead to biased diagnoses and compromised patient care. The absence of oversight from bodies like the HPCSA has created what she calls “technical and ethical debt,” where innovation outpaces accountability.

These challenges highlight the need for a more grounded approach. We must start by addressing real problems with AI, not just implementing it because it’s shiny tech. The focus must be on outcomes where AI solves for real-world challenges. Explainability is key, and AI must be tightly coupled with security and risk management, particularly in the financial, healthcare, and public sectors. Crucially, we must invest in training people alongside AI, because no model should operate without human oversight.

Technical realities: What AI can’t do alone

From a technical standpoint, AI systems are only as good as the data and assumptions they’re built on. Model drift – where an AI system becomes less accurate over time as data patterns change – is a real and present risk. In cybersecurity, this could result in missed detection of new malware variants or the misclassification of legitimate user behaviour as malicious.
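To make model drift concrete, here is a minimal pure-Python sketch of one common way to detect it: the Population Stability Index (PSI), which compares the distribution of a feature at training time against its distribution in live traffic. The synthetic data, bin count, and thresholds below are illustrative assumptions, not a description of any specific vendor's implementation.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            # clamp out-of-range live values into the edge bins
            idx = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[min(max(idx, 0), bins - 1)] += 1
        total = len(sample) + bins          # add-one smoothing keeps log ratios finite
        return [(c + 1) / total for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(42)
baseline = [random.gauss(0, 1) for _ in range(5000)]    # training-time feature values
live_ok  = [random.gauss(0, 1) for _ in range(5000)]    # same distribution: no drift
live_bad = [random.gauss(1.5, 1) for _ in range(5000)]  # shifted distribution: drift

print(f"stable:  {psi(baseline, live_ok):.3f}")
print(f"drifted: {psi(baseline, live_bad):.3f}")
```

Run periodically against production traffic, a check like this turns "the model is quietly getting worse" from an invisible failure into a measurable, alertable event.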

Then there’s the issue of explainability. Many AI models, particularly those based on deep learning, operate as black boxes. In high-stakes environments, this is unacceptable. Security teams must be able to audit decisions, trace logic, and understand why a model flagged or missed a threat.

Leading organisations are now adopting AI observability practices: monitoring model performance, detecting drift, and ensuring transparency. These are not just technical safeguards – they are ethical imperatives.
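One small but concrete piece of such observability is making every model decision auditable. The sketch below wraps a model call so that each decision records the inputs, score, threshold, and model version; `audited_predict`, the toy model, and the field names are hypothetical and only illustrate the pattern, not any particular product's API.

```python
import json
import time
import uuid

def audited_predict(model_fn, features, *, model_version, threshold, log):
    """Wrap a model call so every decision leaves a traceable record:
    what was scored, by which model version, and why it was flagged."""
    score = model_fn(features)
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "score": round(score, 4),
        "threshold": threshold,
        "decision": "flag" if score >= threshold else "allow",
    }
    log.append(record)   # in production: an append-only store, not an in-memory list
    return record["decision"]

# Toy rule standing in for a real threat-scoring model
toy_model = lambda f: 0.9 if f["failed_logins"] > 5 else 0.2

audit_log = []
decision = audited_predict(
    toy_model, {"failed_logins": 7, "country": "ZA"},
    model_version="fraud-v1.3", threshold=0.5, log=audit_log,
)
print(decision)
print(json.dumps(audit_log[0], indent=2))
```

With records like these, a security team can answer the questions this article keeps returning to: why was this user flagged, by which model, and against what threshold; and a flagged decision can be explained, challenged, and improved rather than taken on faith.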

A shared responsibility

The ethical deployment of AI in cybersecurity is not the responsibility of one party. It is a shared obligation. Vendors must be transparent about how their models work. Enterprises must understand what they’re buying: not just the features, but the risks. Regulators must create frameworks that protect citizens from algorithmic harm.

And we, as cybersecurity leaders, must ensure that AI is not used as a proxy for human judgement. It must be embedded within a system of accountability, where decisions can be explained, challenged, and improved.

Responsible innovation starts with perspective

AI is a product. It is powerful, but it is not infallible. It must be deployed with care, governed with rigour, and constantly evaluated for fairness, accuracy, and relevance.

At BCX, we are committed to this vision, not only in how we build and deploy AI, but also in how we train people to work alongside it. Because, ultimately, technology should serve humanity, not the other way around.

References

  1. AI fraud detection system under fire for bias against vulnerable groups – The Justice Gap, 2023
    https://www.thejusticegap.com/ai-fraud-detection-system-under-fire-for-bias-against-vulnerable-groups/
  2. Law firm in deep trouble in South Africa for using AI-generated court papers – MyBroadband, 2024
    https://mybroadband.co.za/news/investing/589128-law-firm-in-deep-trouble-in-south-africa.html
  3. Welfare algorithm is excluding too many of SA’s poor, activists argue – TimesLIVE, 2025
    https://www.timeslive.co.za/news/south-africa/2025-07-01-welfare-algorithm-is-excluding-too-many-of-sas-poor-activists-argue/
  4. National survey results on the state of cybersecurity in South Africa – CSIR, 2024
    https://www.csir.co.za/national-survey-results-on-state-cybersecurity-south-africa
  5. AI-driven healthcare in SA: What are the ethical considerations? – Moneyweb Midday Podcast, 2024
    https://www.moneyweb.co.za/moneyweb-podcasts/moneyweb-midday/ai-driven-healthcare-in-sa-what-are-the-ethical-considerations/