There was that boosterish press statement from the prime minister, PM unveils AI breakthrough to slash planning delays and help build 1.5 million homes: 9 June 2025. I’ve read it a few times, along with, for instance, the more detailed MHCLG Digital blog post, Extract: Using AI to unlock historic planning data (12 June 2025).
The “Extract” tool is targeted to be available for local authorities by next Spring to enable the easier digitisation of old planning documents and maps. Useful as it may be (“revolutionary”! “breakthrough”! “cutting-edge technology”!):
- to talk this up as the way to “slash planning delays and help build 1.5 million homes” is, shall we say, pushing it; and
- for the avoidance of doubt, it should not be at the expense of us all being able to interrogate copies of the original documentation (memories of the transfer of authorities’ planning records to microfiche files – many an unhappy hour spent at those dreaded microfiche machines – and of whole swathes of planning records that have mysteriously disappeared as a result of, for instance, past waves of local government reorganisation).
In my 20 October 2024 blog post, Together In Electric Dreams, I referred to some of the other technical advances which may help, and of course the legislation now enacted via the Levelling-up and Regeneration Act 2023 to set common data standards. The submission deadline has also just closed for MHCLG’s Geovation PropTech Innovation Challenge, where up to 12 companies will share in £1.2 million to develop solutions “to accelerate the delivery of 1.5 million homes in England through scalable PropTech solutions, and make a measurable impact on the yearly target of 300,000 new homes”.
However, are we sufficiently focused on the risk that AI ends up adding to, rather than reducing, planning delays – in particular through enabling the submission by applicants and objectors alike of over-long and sometimes inaccurate material?
Lawyers will be well aware of the salutary case of R (Ayinde) v London Borough of Haringey (Dame Victoria Sharp and Johnson J, 6 June 2025), where a junior barrister, Sarah Forey, prepared grounds for judicial review which cited five cases which do not exist. Her evidence to the court was that when she drafted the grounds she “may also have carried out searches on Google or Safari” and that she may have taken account of artificial intelligence generated summaries of the results (without realising what they were). The barrister was instructed by the Haringey Law Centre, whose solicitor and chief executive, Victor Amadigwe, gave evidence that:
“Haringey Law Centre relies heavily on the expertise of specialist counsel. It has not been its practice to verify the accuracy of case citations or to check the genuineness of authorities relied on by counsel. It had not occurred to either Ms Hussain or Mr Amadigwe that counsel would rely on authorities that do not exist. When Haringey Council raised concerns about the five authorities, Ms Hussain and Mr Amadigwe wrote to Ms Forey and asked her to provide copies of the cases. Ms Forey did not do so, but she did provide the wording for the email that Ms Hussain sent on 5 March 2025. In the light of that wording, Ms Hussain and Mr Amadigwe did not appreciate that the five cases that had been cited were fake – they wrongly thought that there were minor errors in the citations which would be corrected before the court. Ms Hussain denies that Ms Forey told her that she had been unable to find the cases. It was only at the hearing before Ritchie J that they realised that the authorities did not exist. Mr Amadigwe has now given instructions to all his colleagues within Haringey Law Centre that all citations referred to by any counsel must be checked.”
The court decided not to instigate contempt proceedings against those involved but set out matters which required further consideration by the lawyers’ respective regulatory bodies.
The court’s judgment has these important passages on the use of artificial intelligence in court proceedings:
“4. Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal. It is used for example to assist in the management of large disclosure exercises in the Business and Property Courts. A recent report into disclosure in cases of fraud before the criminal courts has recommended the creation of a cross-agency protocol covering the ethical and appropriate use of artificial intelligence in the analysis and disclosure of investigative material. Artificial intelligence is likely to have a continuing and important role in the conduct of litigation in the future.
5. This comes with an important proviso however. Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained. As Dias J said when referring the case of Al-Haroun to this court, the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.
6. In the context of legal research, the risks of using artificial intelligence are now well known. Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.
7. Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example). Authoritative sources include the Government’s database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales and the databases of reputable legal publishers.
8. This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search.
9. We would go further however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.”
The internet is becoming increasingly unreliable – and the introduction of Google AI at the top of any set of search results certainly doesn’t help.
Surely, much of this advice is equally relevant to the planning system. As referred to in my 20 October 2024 blog post, we have the Planning Inspectorate’s guidance on the use of artificial intelligence in casework evidence. How is this being policed in practice? And what of submissions made by applicants and objectors at application stage? I was pleased to see this piece: Local authorities need to ‘get wise’ to residents using AI to object to planning applications, warns GLA digital lead (Planning Resource, 12 June 2025 – behind paywall):
“The GLA’s head of change and delivery Peter Kemp told Planning’s Planning Summit yesterday (Wednesday 11 June) that “part of really successfully planning towns and cities is having the confidence of our residents”.
While digital planning brings a variety of “really exciting and positive” benefits, unless authorities start to think about the risks of AI they are “going to lose the confidence of their residents”.
One example of this is “how many people are using AI to produce objection letters to planning applications and misquoting case law as a result”, said Kemp.
“As local authorities, we need to get really wise to this and we need to start thinking about the impact of that in how we operate and how we build the confidence of junior officers to really operate in that space as well”, he added.
Kemp also noted that as a result of digital planning, the role of monitoring officers across London over the last five years “has fundamentally changed”.
Historically, monitoring officers would be responsible for manually supplying data on thousands of applications a year, but “now that stuff happens automatically, so their role has changed to check the quality of the data”, he said.”
I wonder how many authorities have followed the approach of North Norfolk Council, which now makes specific reference to the use of artificial intelligence in its local validation list?

The reality is surely that we are all collectively sleepwalking.
Worryingly, there is a cottage industry of online firms offering AI platforms to generate planning objections:

Or people can obviously use the tools themselves, generating lengthy, superficially well-written prose, with numerous legal, policy and/or factual references that all then need to be verified. This ultimately helps no-one, least of all those putting their trust in these tools.

And the issue is not just with text but of course images too – see Iceni’s Rebecca Davy’s 10 June 2025 blog post AI tools are reshaping how we read the past – how can heritage consultants help to keep the records straight?
Rather than relying on authorities individually to set out guidance for anyone submitting documents to be relied upon in the operation of the planning system, wouldn’t it be better for firm guidance to be set down centrally by MHCLG, using as a basis the Planning Inspectorate’s current guidance?
- When should use of AI be declared in relation to any submitted material?
- What is and isn’t AI for these purposes? (Predictive text, proof reading and document transcription tools? More traditional web searches?)
- What is the responsibility of the person submitting the material to check its accuracy, including the underlying sources relied upon, and what should the potential consequences be if this is not done?
- In any event, as I have been saying for so long, why do we not have indicative word and file size limits for different categories of material? Nearly every document submitted by anyone is simply too long, and AI will exacerbate the issue. Now is the opportunity!
NB as always, in preparing this post I have had to avoid, for instance, WordPress’s “writing assistance” tool and, in uploading the images, the opportunity offered by Microsoft to “create an image using AI”. I get why tools like this are increasingly popular but, without guardrails as to their use in connection with every element of the planning system, one thing is for sure: our jobs are going to become harder, not easier.
Simon Ricketts, 22 June 2025
Personal views, et cetera
