AI Will Add To, Rather Than Reduce, Planning Delays Unless We Do Something About It

There was that boosterish press statement from the prime minister: PM unveils AI breakthrough to slash planning delays and help build 1.5 million homes (9 June 2025). I’ve read it a few times, along with, for instance, the more detailed MHCLG Digital blog post, Extract: Using AI to unlock historic planning data (12 June 2025).

The “Extract” tool is due to be made available to local authorities by next spring, to enable easier digitisation of old planning documents and maps. Useful as it may be (“revolutionary”! “breakthrough”! “cutting-edge technology”!):

  • to talk this up as the way to “slash planning delays and help build 1.5 million homes” is, shall we say, pushing it; and
  • for the avoidance of doubt it should not be at the expense of us all being able to interrogate copies of the original documentation (memories of the transfer of authorities’ planning records to microfiche files – many an unhappy hour spent at those dreaded microfiche machines – and of whole swathes of planning records that have mysteriously disappeared as a result of, for instance, past waves of local government reorganisation).

In my 20 October 2024 blog post, Together In Electric Dreams, I referred to some of the other technical advances which may help, and of course the legislation now enacted via the Levelling-up and Regeneration Act 2023 to set common data standards. The submission deadline has also just closed for MHCLG’s Geovation PropTech Innovation Challenge, where up to 12 companies will share in £1.2 million to develop solutions “to accelerate the delivery of 1.5 million homes in England through scalable PropTech solutions, and make a measurable impact on the yearly target of 300,000 new homes”.

However, are we sufficiently focused on the risk that AI ends up adding to, rather than reducing, planning delays, in particular through enabling the submission by applicants and objectors alike of over-long and sometimes inaccurate material?

Lawyers will be well aware of the salutary case of R (Ayinde) v London Borough of Haringey (Dame Victoria Sharp and Johnson J, 6 June 2025), where a junior barrister, Sarah Forey, prepared grounds for judicial review which cited five cases which do not exist. Her evidence to the court was that, when she drafted the grounds, she “may also have carried out searches on Google or Safari” and that she may have taken account of artificial intelligence generated summaries of the results (without realising what they were). The barrister was instructed by the Haringey Law Centre, whose solicitor and chief executive, Victor Amadigwe, gave evidence that: “Haringey Law Centre relies heavily on the expertise of specialist counsel. It has not been its practice to verify the accuracy of case citations or to check the genuineness of authorities relied on by counsel. It had not occurred to either Ms Hussain or Mr Amadigwe that counsel would rely on authorities that do not exist. When Haringey Council raised concerns about the five authorities, Ms Hussain and Mr Amadigwe wrote to Ms Forey and asked her to provide copies of the cases. Ms Forey did not do so, but she did provide the wording for the email that Ms Hussain sent on 5 March 2025. In the light of that wording, Ms Hussain and Mr Amadigwe did not appreciate that the five cases that had been cited were fake – they wrongly thought that there were minor errors in the citations which would be corrected before the court. Ms Hussain denies that Ms Forey told her that she had been unable to find the cases. It was only at the hearing before Ritchie J that they realised that the authorities did not exist. Mr Amadigwe has now given instructions to all his colleagues within Haringey Law Centre that all citations referred to by any counsel must be checked.”

The court decided not to instigate contempt proceedings against those involved but set out matters which required further consideration by the lawyers’ respective regulatory bodies.

The court’s judgment has these important passages on the use of artificial intelligence in court proceedings:

4. Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal. It is used for example to assist in the management of large disclosure exercises in the Business and Property Courts. A recent report into disclosure in cases of fraud before the criminal courts has recommended the creation of a cross-agency protocol covering the ethical and appropriate use of artificial intelligence in the analysis and disclosure of investigative material. Artificial intelligence is likely to have a continuing and important role in the conduct of litigation in the future.

5. This comes with an important proviso however. Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained. As Dias J said when referring the case of Al-Haroun to this court, the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.

6. In the context of legal research, the risks of using artificial intelligence are now well known. Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT, are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.

7. Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example). Authoritative sources include the Government’s database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales and the databases of reputable legal publishers.

8. This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search.

9. We would go further however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.

The internet is becoming increasingly unreliable – and the introduction of Google AI at the top of any set of search results certainly doesn’t help.

Surely, much of this advice is equally relevant to the planning system. As referred to in my 20 October 2024 blog post, we have the Planning Inspectorate’s guidance on the use of artificial intelligence in casework evidence. How is this being policed in practice? And what of submissions made by applicants and objectors at application stage? I was pleased to see this piece: Local authorities need to ‘get wise’ to residents using AI to object to planning applications, warns GLA digital lead (Planning Resource, 12 June 2025 – behind paywall):

The GLA’s head of change and delivery Peter Kemp told Planning’s Planning Summit yesterday (Wednesday 11 June) that “part of really successfully planning towns and cities is having the confidence of our residents”.

While digital planning brings a variety of “really exciting and positive” benefits, unless authorities start to think about the risks of AI they are “going to lose the confidence of their residents”.

One example of this is “how many people are using AI to produce objection letters to planning applications and misquoting case law as a result”, said Kemp.

“As local authorities, we need to get really wise to this and we need to start thinking about the impact of that in how we operate and how we build the confidence of junior officers to really operate in that space as well”, he added.

Kemp also noted that as a result of digital planning, the role of monitoring officers across London over the last five years “has fundamentally changed”.

Historically, monitoring officers would be responsible for manually supplying data on thousands of applications a year, but “now that stuff happens automatically, so their role has changed to check the quality of the data”, he said.

I wonder how many authorities have followed the approach of North Norfolk Council, whose local validation list now makes specific reference to the use of artificial intelligence.

The reality is surely that we are all collectively sleepwalking.

Worryingly, there is a cottage industry of online firms offering AI platforms to generate planning objections.

Or people can of course use the tools themselves, generating lengthy, superficially well-written prose with numerous legal, policy and/or factual references which all then need to be verified. This ultimately helps no-one, least of all those putting their trust in these tools.

And the issue is not just with text but of course with images too – see Iceni’s Rebecca Davy’s 10 June 2025 blog post, AI tools are reshaping how we read the past – how can heritage consultants help to keep the records straight?

Rather than relying on authorities individually to set out guidance for anyone submitting documents to be relied upon in the operation of the planning system, wouldn’t it be better for firm guidance to be set down centrally by MHCLG, using the Planning Inspectorate’s current guidance as a basis?

  • When should use of AI be declared in relation to any submitted material?
  • What is and isn’t AI for these purposes? (Predictive text, proofreading and document transcription tools? More traditional web searches?)
  • What is the responsibility of the person submitting the material to check its accuracy, including the underlying sources relied upon, and what should be the potential consequences if this is not done?
  • In any event, as I have been saying for so long, why do we not have indicative word and file size limits for different categories of material? Nearly every document submitted by anyone is simply too long, and AI will exacerbate the issue. Now is the opportunity!

NB as always, in preparing this post I have had to avoid, for instance, WordPress’s “writing assistance” tool and, in uploading the images, the opportunity offered by Microsoft to “create an image using AI”. I get why tools like this are increasingly popular but, without guardrails as to their use in connection with every element of the planning system, one thing is for sure: our jobs are going to become harder, not easier.

Simon Ricketts, 22 June 2025

Personal views, et cetera

Together In Electric Dreams

We should be constantly pinching ourselves at the good fortune of (1) living in what was, to previous generations, the future, and (2) having been given the privilege and responsibility of in turn helping to shape a small part of the world in which future generations will live and work.

It wasn’t so long ago that the life of a planning lawyer entailed posting out cheques for copies of local plans and decision notices (having first had various telephone conversations – yes, telephone conversations – to work out the price) or, if it was a rush job, turning up at the local authority’s offices to go through the paper files, or (the horror) sitting at their microfiche machine. And sometimes we actually had to sit in a library, with books.

The Planning Portal, individual local authority planning portals and the Planning Inspectorate’s Appeals Casework Portal have been a game changer – but we are on the cusp of bigger improvements in terms of efficiency, transparency of information and the potential for better informed public engagement.

Last week at Town Legal we co-hosted a breakfast roundtable discussion with Gordon Ingram and Claire Locke from Vu.City about digital 3D planning, although the conversation ranged more widely: where we are with digital planning data generally, as well as the Planning Inspectorate’s recent guidance on the use of artificial intelligence. We had a range of participants from the private and public sectors but I was particularly grateful to Nikki Webber, digital planning lead at the City of London, who subsequently shared some of the links to resources that I will now use in this post.

There has been discussion about digitising the planning system for so long that there’s a risk of taking it all for granted, or of not focusing on the vision and how achievable it now is. But huge advantages in terms of efficiency, transparency and quality of decision-making surely flow from (and indeed are already starting to flow from):

  • Ensuring that data that enters the planning system is available for wider public use and that common standards are adopted wherever possible
  • Using technology (1) to give decision-makers and the public a better understanding of the policy options before them and the ability to visualise development proposals in context and (2) to enable better and more straightforward opportunities for the public to express their views, on the basis of a better understanding of the issues

As the old British Rail slogan went, we’re getting there.

MHCLG’s Digital Planning Programme is doing great, practical work. Its planning data platform is still at beta testing stage but is already useful, showing planning and housing information provided by local authorities on a single interactive map. It also announced on 18 October 2024 that it is now turning to developing data specifications for planning applications, looking into “where specifications are required, and define them clearly, taking into account how this data will be used by the planning community. This will build on the work that we have already started, such as the draft specifications for planning applications and decisions, and planning conditions.”
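As a non-technologist I find a toy example helps show what a data specification actually buys us. The sketch below, in Python, is entirely hypothetical – the field names and rules are invented for illustration and are not MHCLG’s actual draft specifications – but it shows the basic idea: once application records arrive in an agreed machine-readable shape, conformity can be checked automatically rather than by a human reading a PDF.

    # A toy "approved data standard" for planning application records.
    # Field names and rules are invented for illustration only; they are
    # not taken from MHCLG's draft specifications.
    from datetime import date

    APPLICATION_SPEC = {
        "reference": str,       # the authority's application reference
        "authority": str,       # name of the local planning authority
        "description": str,     # description of the proposed development
        "received_date": date,  # date the application was received
        "decision": str,        # e.g. "granted", "refused", "pending"
    }

    def validate(record):
        """Return a list of problems; an empty list means the record conforms."""
        problems = []
        for field, expected_type in APPLICATION_SPEC.items():
            if field not in record:
                problems.append("missing field: " + field)
            elif not isinstance(record[field], expected_type):
                problems.append(field + ": wrong type")
        return problems

    example = {
        "reference": "24/01234/FUL",
        "authority": "Anytown Borough Council",
        "description": "Erection of 12 dwellings",
        "received_date": date(2024, 10, 1),
        "decision": "pending",
    }

    print(validate(example))  # prints [] - the record conforms

The point is not the code but the principle: once a standard exists, checks like this can run across every authority’s data without anyone opening a single document by hand.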

The legislation required to underpin these advances is taking shape. Part 3, chapter 1 of the Levelling-up and Regeneration Act 2023 deals with planning data. Sections 84, 85 and 86 have been in force since 31 March 2024, by virtue of the Levelling-up and Regeneration Act 2023 (Commencement No. 3 and Transitional and Savings Provision) Regulations 2024.

Quoting in part from LURA’s explanatory notes:

Section 84 gives the Secretary of State and devolved administrations “the power to regulate the processing of planning data by planning authorities, to create binding “approved data standards” for that processing. It also provides planning authorities with the power to require planning data to be provided to them in accordance with the relevant approved data standards.”

“Example (1):

A planning authority creating their local plan: Currently planning authorities do not follow set standards in how they store or publish local plan information. Through these powers, contributions to the preparation of a local plan and the contents of a local plan will be required to be in accordance with approved data standards. This will render local plan information directly comparable, enabling cross-boundary matters to be dealt with more efficiently as well as the process of updating a local plan as planning authorities will benefit from having easily accessible standardised data.

Example (2):

Central government trying to identify all conservation areas nationally: In the existing system, planning authorities name their conservation areas using different terms (e.g., con area, cons area) making it hard for users of this data, such as central government to identify which areas are not suitable for development and what restrictions are in place. By setting a data standard which will govern the way in which planning authorities must name their conservation areas, and planning authorities publishing this machine-readable data, a national map of conservation areas can be developed which can be used to better safeguard areas of special importance.”
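The conservation area example can be made concrete in the same way. Again a hypothetical sketch with invented data – no real authority records are used – showing why inconsistent labels defeat aggregation and a single standard term enables it:

    # Invented records illustrating the problem the explanatory notes
    # describe: the same concept labelled differently by each authority.
    records = [
        {"authority": "Authority A", "type": "con area", "name": "Old Town"},
        {"authority": "Authority B", "type": "cons area", "name": "Riverside"},
        {"authority": "Authority C", "type": "conservation area", "name": "High Street"},
    ]

    # Without a standard, a national map first needs a guessed mapping of
    # every local variant; under an approved data standard the "type"
    # field would already arrive as the single agreed term.
    ALIASES = {
        "con area": "conservation-area",
        "cons area": "conservation-area",
        "conservation area": "conservation-area",
    }

    national = [r["name"] for r in records
                if ALIASES.get(r["type"]) == "conservation-area"]
    print(national)  # ['Old Town', 'Riverside', 'High Street']

Multiply that by every data field in every planning dataset and the value of the approved data standards becomes obvious.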

Section 85 allows planning authorities, by published notice, to require a person to provide them with planning data that complies with an approved data standard applicable to that data.

Section 86 allows regulations to be made “requiring a relevant planning authority to make such of its planning data as is specified or described in the regulations available to the public under an approved open licence”.

Whilst these sections are already technically in force, they cannot fully take effect until the government determines what the specific approved data standards will be. Section 87, which gives the Secretary of State the power to approve software that complies with data standards for use by planning authorities in England, is also important but is not yet in force. Clearly there is great advantage in consistency of approach between public authorities as to the software used, so as to ease the user experience and presumably to make providers’ investment in technology more viable, but this is to be balanced against the risks arising from any particular provider being able to exploit a dominant position. Might a public/private approach, as per the Planning Portal (a joint venture between MHCLG and TerraQuest Solutions Limited), be the most appropriate?

MHCLG’s Digital Planning Programme has also been funding local authorities’ digital planning projects and its website has links to various case studies. For instance, take a look at Southampton City Council’s work on increasing accessibility and understanding to improve public engagement, using a Vu.City developed 3D model to help local residents understand what proposals may look like in situ and potentially ease concerns about increased densities. How transformative it would be if local people could see the different options that there might be to accommodate local housing and employment development needs within an area. Or in terms of development management and transparent public engagement, look at London Borough of Camden’s beta testing as to the information it can provide as to major applications in its area (particularly look at the use of images of the proposal and at the “How could this affect you?” section).

With progress of course comes the need for caution. These tools need to be based on accurate information, and the risks are accentuated where outputs are the result of modelling and extrapolation of data, rather than taking the form of simply making the raw data more easily available. Any inputs and algorithmic influences need to be capable of being tested. Technology is requiring us all to be additionally cautious in all that we do. In my world, for instance, the Law Society has published some useful, detailed advice, Generative AI: the essentials, to provide a “broad overview of both the opportunities and risks the legal profession should be aware of to make more informed decisions when deciding whether and how generative AI technologies might be used”. As a firm we now have a policy on the use of AI; no doubt yours does too.

Understanding of the issues has in some ways already moved on greatly since my 27 May 2023 blog post You Can Call Me AI, but the risks have increased now that use of ChatGPT and its competitors has become more mainstream. AI is undoubtedly being used by some to generate text for objections to planning applications. I’ve had prospective clients mention in passing that before asking me the particular question they have looked online and “even ChatGPT didn’t have the answer” (these things are just large language models, folks! Would you rely on predictive text as anything more than an occasional short-cut? I don’t like to think about what it must be like to be a GP these days).

Until recently I hadn’t thought about the additional risks arising from generative AI, of false images and documents being relied upon as supposed evidence in planning appeals. So I was pleased to see the Planning Inspectorate’s guidance on Use of artificial intelligence in casework evidence (6 September 2024).

The guidance says:

If you use AI to create or alter any part of your documents, information or data, you should tell us that you have done this when you provide the material to us. You should also tell us what systems or tools you have used, the source of the information that the AI system has based its content on, and what information or material the AI has been used to create or alter.   

In addition, if you have used AI, you should do the following: 

  • Clearly label where you have used AI in the body of the content that AI has created or altered, and clearly state that AI has been used in that content in any references to it elsewhere in your documentation. 
  • Tell us whether any images or video of people, property, objects or places have been created or altered using AI. 
  • Tell us whether any images or video using AI has changed, augmented, or removed parts of the original image or video, and identify which parts of the image or video has been changed (such as adding or removing buildings or infrastructure within an image).  
  • Tell us the date that you used the AI.
  • Declare your responsibility for the factual accuracy of the content. 
  • Declare your use of AI is responsible and lawful. 
  • Declare that you have appropriate permissions to disclose and share any personal information and that its use complies with data protection and copyright legislation.   

AI is defined in the document very loosely: “AI is technology that enables a computer or other machine to exhibit ‘intelligence’ normally associated with humans”.

If I can carp a little, whilst the thrust of the guidance and its intent is all good, are we really clear what is and isn’t AI? What about spell-check and other editing functions, or the photo editing that goes on within any modern camera? Do you know whether the information you are relying upon has itself been prepared partly with the benefit of an AI tool, however defined? And if AI has been used, on what basis are you confirming that “its use complies with data protection and copyright legislation”, given the legal issues currently swirling around that subject as to the material upon which some of these AI models are being trained? Perhaps it would be helpful to have some examples of the practical issues on which PINS is particularly focusing.

Tech isn’t my specialism. Planning and planning law probably isn’t a specialism of those actually developing the technical systems and protocols. But I think we need to make sure that we are all engaging as seamlessly as possible across those professional dividing lines, so that the opportunities to create a better, more efficient, more engaging, possibly even more exciting planning system are fully taken. These are the things that dreams are made of.

Simon Ricketts, 20 October 2024

Personal views, et cetera

Extract from MHCLG’s planning data map