

Silicon Valley Ambitions, Karoo Connectivity - Navigating the 2026 AI Regulatory Minefield
By the Tech & Human Survival Division at NVDB Attorneys …
If you’re reading this, congratulations! You haven’t yet been replaced by a large language model housed in a server farm in Reykjavik, and your law firm hasn't been bankrupted by a rogue algorithm that decided "billable hours" were an inefficient use of carbon-based lifeform energy.
At NVDB Attorneys, we’ve spent the last year watching the legal landscape shift from "paper-heavy and slow" to "digitally automated and occasionally hallucinogenic". It’s been quite the whirlwind.
So, as we move from policy discussions to the cold, hard reality of active legal practice, it’s time to address the cyber-elephant in the room. Artificial Intelligence (AI) is no longer a toy with which teenagers write bad poetry. It is, in reality, a regulatory Kraken that South African businesses are ill-equipped to fight.
Shall we take a wee look?
The Ghost in the Machine - Ethical AI and the "Hallucination" Problem
The legal industry has a long history of ethical dilemmas - usually involving trust accounts and missing mahogany desks. However, 2026 has introduced a new player - The AI Hallucination.
For the uninitiated, an AI hallucination is when your software looks you dead in the eye (or the cursor) and confidently cites a 1984 Constitutional Court case that never happened - and could never have happened, given that the Constitutional Court only opened its doors in 1995. True story.
In the early days, we laughed. In 2026, the Legal Practice Council is no longer laughing. They’re now handing out disciplinary notices like flyers at a Sandton intersection. In fact, just last year, a law firm was referred to the Legal Practice Council after a court found that only two of the nine authorities cited in its papers actually existed - the rest had been conjured up by an AI.
Eish. Awkward.
The Bias in the Code
We like to think of AI as objective. But it isn’t. AI is trained on data produced by humans - beings famously known for their flawless objectivity. Automated bias is now a top-tier litigation risk, especially since the Department of Communications and Digital Technologies (DCDT) released the National AI Policy Framework (more on that in a moment).
If your "high-risk" algorithm keeps rejecting loan applications from anyone living south of the Orange River, you aren't being "tech-forward". You’re violating the framework's mandate for human-centred AI.
This isn't just a suggestion either.
The "Not-Another-Policy" Policy
The DCDT has finally stopped playing Solitaire long enough to push the Draft National AI Policy forward. Having cleared the Socio-Economic Impact Assessment System (SEIAS) process, the draft has officially entered the Cabinet approval pipeline and is currently sitting on a minister’s mahogany desk, awaiting a final blessing before being unleashed for a 60-day public consultation period this March.
Guided by five "core pillars" - skills capacity, responsible governance, ethical and inclusive AI, cultural preservation, and human-centred deployment - the policy acts as a foundational blueprint, classifying AI systems into risk categories from "low" to "existential crisis."
At NVDB, we’ve codified this into our own internal "Liar, Liar" Provision -
“Every case citation, statute, and "fact" generated by an AI must be verified by a human being with a valid law degree. If you submit a filing containing a case that exists only in the imagination of a server farm in California, you will be required to explain the concept of "hallucination" to the Judge President personally”.
Digital Transformation - The 90% Delusion
Industry surveys suggest that over 90% of legal departments are prioritising "legal tech" and "matter management software" in 2026. This sounds impressive, like we’re all living in a Minority Report sequel. In reality, "Digital Transformation" in many South African firms usually means the Senior Partner has finally stopped printing out his emails and has started saving them as PDFs.
The shift toward matter management software is driven by a terrifying increase in regulatory complexity. We’re no longer just lawyers. We are data janitors. As software becomes more "efficient," the volume of work increases to fill the saved time. Ho Hum.
We’ve seen firms adopt AI-driven contract review tools only to find they now have to review 400 contracts an hour instead of four per day. It’s the productivity paradox in full flight.
It’s a digital treadmill, and the speed is set to "Cardiac Arrest."
At NVDB, we embrace the tech, but we do so with a healthy dose of South African scepticism. If a software salesperson tells you their AI can "predict judge behaviour with 99% accuracy", ask them if it can predict whether the electricity will stay on long enough for the judge to open up his laptop.
POPIA, Cybersecurity, and the 2026 Audit Fever
If 2025 was the year of "Oops, we got hacked", 2026 is the year of "Oh no, the Information Regulator is at the door". The Protection of Personal Information Act (POPIA) has finally grown teeth. Large, sharp, expensive teeth. Think modern-age Dracula. With a killer smile.
Specifically, Section 71(1) of POPIA now hangs over our heads like a digital guillotine - it prohibits decisions that carry legal consequences for a person (or affect them to a substantial degree) from being made solely on the basis of automated processing of their personal information. If your AI decides to fire an employee or deny a claim without a human in the loop, you’ve just handed the Information Regulator a reason to audit your soul. Which can be quite uncomfortable. For some.
Furthermore, if you’re using unique identifiers (like ID numbers) to train your fancy new model, you had better have prior authorisation from the Information Regulator under Section 57, or you’re inviting a fine that could bankrupt a small municipality.
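
For the more digitally brave among you, here is what a "human in the loop" can look like once it is translated into code. This is a minimal, purely illustrative sketch of our own making - the class, field and function names are invented for the example, and POPIA certainly does not prescribe any particular implementation - but it shows the principle: the system refuses to finalise an automated decision with legal consequences until a named human has signed it off.

from dataclasses import dataclass
from typing import Optional

# Illustrative only: these names are our own inventions, not anything POPIA prescribes.

@dataclass
class Decision:
    subject_id: str
    outcome: str                       # e.g. "approve" or "decline"
    legal_consequences: bool           # does the outcome affect the person's rights?
    reviewed_by: Optional[str] = None  # name of the human reviewer, if any

def finalise(decision: Decision) -> Decision:
    """Refuse to finalise a purely automated decision that carries legal consequences."""
    if decision.legal_consequences and decision.reviewed_by is None:
        raise PermissionError(
            "This decision has legal consequences and no human has reviewed it - "
            "it may not be finalised (cf. POPIA s 71)."
        )
    return decision

# The model proposes; a human disposes.
proposed = Decision(subject_id="client-042", outcome="decline", legal_consequences=True)
try:
    finalise(proposed)                  # blocked - no human reviewer yet
except PermissionError as err:
    print(err)

proposed.reviewed_by = "A. Human, Attorney"
print(finalise(proposed).outcome)       # permitted once a named human has signed off

The plumbing will look different in every business; the point is the gate. A decision with legal consequences should not be able to leave the building without a human signature attached to it.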
To combat this tightening net of existing laws and new policy frameworks, we have implemented the NVDB "CYBER-SHIELD 2026" Audit Checklist. Take a look - it may give you some ideas about the kinds of things you should be checking too -
1. The "Ex-Lover" Protocol - revoke access for every employee who left in the last 12 months. Under POPIA, a disgruntled ex-associate with a cloud password is a security compromise waiting for a Section 22 notification. (A small illustrative script for this check follows the list.)
2. The AI "Shadow" Check - scan for unauthorised AI extensions. If an associate is using a "Free PDF Summariser," they have effectively handed our client’s strategy to a third-party server, likely violating the Joint Standard on Cybersecurity set by the Financial Sector Conduct Authority (FSCA), which now mandates that financial institutions (and their legal lackeys) maintain "cyber resilience".
→ According to Moonstone, Joint Standard 2 of 2024 on Cybersecurity and Cyber Resilience, issued by the FSCA and the Prudential Authority, mandates that South African financial institutions adopt specific, enforceable standards to mitigate cyber risks. It requires firms to implement robust governance, security strategies, and, notably, multi-factor authentication for critical systems.
3. The "Nando’s" Factor - conduct phishing simulations. The Cybercrimes Act of 2020 is great for prosecuting hackers, but it won't help you if staff are still clicking on links promising "Free Nando’s for Life".
4. Immutable Backups - ensure you have a backup that cannot be modified by ransomware. If your only backup is "the cloud drive that stays synced", you don't have a backup. You have a second copy of the virus.
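
Purely by way of illustration (and with apologies to the IT department), here is a minimal sketch of what automating item 1 might look like - cross-checking an HR leavers list against the accounts that are still active on your systems. The file names and CSV columns (leavers.csv, active_accounts.csv, employee, last_day, system) are hypothetical; your own exports will differ, but the cross-check is the point.

import csv
from datetime import date, timedelta

# Hypothetical exports: "leavers.csv" (columns: employee, last_day) from HR and
# "active_accounts.csv" (columns: employee, system) from IT.

def load_rows(path: str) -> list[dict]:
    with open(path, newline="") as handle:
        return list(csv.DictReader(handle))

def stale_access(leavers_csv: str = "leavers.csv",
                 accounts_csv: str = "active_accounts.csv",
                 months: int = 12) -> list[dict]:
    """List accounts still active for anyone who left within the given window."""
    cutoff = date.today() - timedelta(days=30 * months)
    recently_left = {
        row["employee"]
        for row in load_rows(leavers_csv)
        if date.fromisoformat(row["last_day"]) >= cutoff
    }
    return [row for row in load_rows(accounts_csv) if row["employee"] in recently_left]

if __name__ == "__main__":
    for row in stale_access():
        print(f"Revoke: {row['employee']} still has access to {row['system']}")

If that script prints anything at all, you have a POPIA problem long before you have a ransomware problem.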
Liability - Who Sues the Bot?
The most fascinating legal frontier of 2026 is the question of Algorithmic Liability. In South Africa, our law of delict is currently doing a nervous dance around this issue. While we famously granted a patent naming the AI "DABUS" as an inventor under the Patents Act - making us the global poster child for "AI inventorship" - we still haven't decided who goes to jail when the AI commits a "cyber-delict".
→ Wait. What is DABUS?
In a stunning act of bureaucratic compliance, South Africa became the first nation to grant a patent naming an AI as the inventor - the Device for the Autonomous Bootstrapping of Unified Sentience, or DABUS - turning a "check-the-box" patent system (ours does not examine applications on their merits) into a global, albeit accidental, pioneer in robotic IP. While global experts laughed off the move as a procedural blunder, it highlighted a hilarious legal loophole: our Patents Act never got around to insisting that an inventor be a natural person, so a toaster-brained AI could technically be credited with its own inventions (its human owner still pockets the patent). As other, less adventurous nations panicked and refused or reversed similar applications, South Africa remains in a unique position where a machine's status as an inventor is only as secure as the next, inevitable, and likely successful court challenge.
Even the Consumer Protection Act 68 of 2008 (CPA) is being stretched to its limits to cover the fairness and safety of AI-driven products.
The Dark Side - Deepfakes and Evidence
We cannot discuss 2026 without mentioning the death of "seeing is believing". Deepfake technology has reached the point where we can no longer take video or audio evidence at face value. The legal profession is now forced to become forensic tech experts just to prove a WhatsApp voice note is actually from the client and not a bot. Every piece of digital evidence must now be accompanied by a chain-of-custody report that is longer than the actual evidence.
And it’s absolutely necessary.
Adaptive or Extinct?
At NVDB Attorneys, we don’t fear the AI revolution. We just don't trust it quite yet. The legal landscape of 2026 is a strange mixture of high-tech wizardry and old-school litigation. To survive, South African law firms must pivot from being "gatekeepers of information" to "navigators of complexity" within a tightening net of regulations that range from State Information Technology Agency (SITA) Cybersecurity Hub protocols to the whims of the Information Regulator.
The robots are here. They are faster than us, they don't need coffee, and they don't have egos. Or emotions. But they also don't understand the nuance of a "braai-side agreement" or the specific socio-political complexities of the Gauteng High Court.
So, there’s that.
Our advice for 2026?
Upgrade your software, encrypt your soul, and keep your lawyer on speed dial. Because when the AI eventually decides that humans are "redundant assets", you’re going to want someone who knows how to argue for a stay of execution.
And that would be us. The human lawyers.
We have taken the utmost care to ensure that the above information is correct, but we urge you to consult with a suitably qualified legal practitioner who will be able to assist you should you have any questions or require assistance regarding the ethical use of AI, or if you would like further information regarding the tightening net of existing laws and new policy frameworks that directly impact technology and AI regulation. Please feel free to contact us to see how we can best assist.
We are a law firm that considers honesty to be core to our business. We are a law firm that will provide you with clear advice and smart strategies - always keeping your best interests at heart!
(Sources used and to whom we owe thanks – Michalsons; Fluxmans; Nemko Digital; Polity; DBC Group; Lex Africa; De Rebus; FA News; Baker McKenzie; Fairbridges; the Protection of Personal Information Act 4 of 2013; the Cybercrimes Act 19 of 2020; Moonstone; The Conversation; Spoor & Fisher; and the Cybersecurity Hub Project).



