DA Insights
February 2026
11 min read

AI-Assisted Statement of Environmental Effects: Why Structured Inputs Beat Free Text

The average NSW development application takes over 70 days to assess. A significant proportion of that time isn't assessment — it's waiting. Waiting for applicants to respond to requests for additional information, most commonly an incomplete or missing DCP compliance schedule. One unanswered RFI can add four to six weeks to a DA timeline before assessment has even begun.

This is the specific problem that AI-assisted Statement of Environmental Effects drafting is designed to solve. Not the professional judgment parts of planning — the compliance narrative, the variation justification, the relationship with the assessment officer — but the mechanical work of assembling a complete DCP provision schedule with responses against each applicable requirement. That work currently takes a planning consultant three to four hours on a straightforward residential DA. It's a primary driver of the $3,000–$8,000 fee applicants pay for a standard SEE.

The question is how to automate it reliably enough to be useful, without creating a new class of risk for the professionals who sign the document.

Why free-text description fails as the primary input

The obvious approach — ask the applicant to describe their development in plain text, feed that description to an AI alongside the applicable DCP provisions, and let it mark each provision as relevant or not applicable — fails at a specific and predictable point.

The failure isn't that AI is bad at reading. It's that the provisions that cause the most problems in lodged DAs are precisely those whose applicability isn't visible in the development description. Consider a typical rear extension with a new deck. The description says nothing about stormwater. It says nothing about impervious surfaces. It says nothing about BASIX cost thresholds. Yet all three of those provision categories directly apply, and all three are among the most common sources of RFIs.

An AI reading "rear single-storey extension with new deck" and a stormwater management provision will frequently mark the stormwater provision not applicable. The word "stormwater" doesn't appear in the description. The connection — deck equals new impervious surface, impervious surface triggers stormwater management obligations — requires planning domain inference that the AI doesn't have the inputs to make reliably.

The failure mode here is asymmetric in a way that matters. A false positive — an AI flagging a provision as relevant when it isn't — costs the user a few minutes of review. A false negative — an AI marking a provision not applicable when it does apply — produces a DCP compliance schedule with a gap. If that schedule goes into a lodged DA without a planner catching the gap, the council will flag it in an RFI. The very problem the tool was meant to solve has been reproduced, just a few weeks later in the process.

The structured intake changes the reliability ceiling

The alternative is to replace the free-text description with a short structured intake — factual questions about the proposal whose answers map deterministically to provision applicability.

The distinction is important. A structured intake doesn't ask the applicant to make planning judgments. It asks them to state facts: does the proposal create new impervious surfaces? Does it add a floor level? Are there trees within the works area? Is the estimated cost of works above $50,000? These are binary questions with factual answers that most applicants can answer correctly.

Each answer maps to a set of provision categories that are either locked in or can be considered for exclusion. New impervious surfaces: stormwater provisions are locked in regardless of anything else the user says. Cost of works above $50,000: BASIX provisions are locked in. No new floor level: upper storey provisions can be suggested as not applicable, with the user confirming. No pool or spa: pool provisions can be excluded automatically.

The design principle that makes this legally defensible is straightforward: a provision is only excluded when its applicability trigger is factually impossible given the confirmed inputs, not merely unlikely. The system is permitted to exclude commercial signage provisions for a confirmed residential development. It is not permitted to exclude stormwater provisions because the applicant described a rear extension without mentioning drainage.

When applicants select "unknown" for any trigger input, the associated provisions remain in the schedule. Unknown defaults to inclusion, never exclusion. The provision set may be larger than necessary, but a planner reviewing a complete schedule can dismiss irrelevant items quickly. A planner reviewing an incomplete schedule faces a much harder task.
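The mapping described above is deterministic enough to sketch in a few lines. The following is an illustrative sketch only, assuming a hypothetical rule table and answer model (the trigger names and category labels here are invented for illustration, not PlotDetect's actual schema):

```python
from enum import Enum

class Answer(Enum):
    YES = "yes"
    NO = "no"
    UNKNOWN = "unknown"

# Hypothetical trigger-to-category rules: a YES locks the categories in,
# a confirmed NO permits exclusion, anything else keeps them in scope.
TRIGGER_RULES = {
    "new_impervious_surfaces": ["stormwater"],
    "cost_over_50k": ["basix"],
    "new_floor_level": ["upper_storey"],
    "pool_or_spa": ["swimming_pools"],
}

def triage(answers: dict[str, Answer], all_categories: set[str]) -> dict[str, set[str]]:
    """Partition provision categories into locked-in, excluded, and retained."""
    locked, excluded = set(), set()
    for trigger, categories in TRIGGER_RULES.items():
        answer = answers.get(trigger, Answer.UNKNOWN)
        if answer is Answer.YES:
            locked.update(categories)    # applicability confirmed by a factual input
        elif answer is Answer.NO:
            excluded.update(categories)  # factually impossible, not merely unlikely
        # UNKNOWN falls through: the categories stay in the schedule by default
    excluded -= locked                   # a lock-in always overrides an exclusion
    retained = all_categories - excluded
    return {"locked": locked, "excluded": excluded, "retained": retained}
```

Note the two design decisions from the text encoded directly: an unanswered or unknown trigger never excludes anything, and exclusion requires an explicit confirmed "no".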

The audit trail that shifts liability correctly

The structured intake also creates something the free-text approach cannot: a documented basis for every exclusion decision.

When a provision is excluded from the schedule, the system records why — which intake answer made its applicability impossible, and when the user confirmed that answer. This record is stored alongside the generated SEE. If a council officer later questions why a provision wasn't addressed, the answer is auditable: the applicant confirmed no new impervious surfaces, and stormwater provisions were therefore not included.

This matters because it moves the responsibility for exclusion decisions to where it belongs. A planner reviewing a structured-intake-generated schedule is reviewing a document where every omission traces to a confirmed factual input, not an AI's probability estimate about relevance. That's a fundamentally different liability position from reviewing a schedule generated by asking an AI to interpret a text description.
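The audit record itself can be a very simple structure. As a minimal sketch, assuming the hypothetical trigger/category names used for illustration here (not the product's actual data model):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ExclusionRecord:
    """Why a provision category was left out of the generated schedule."""
    category: str          # e.g. "swimming_pools"
    trigger: str           # intake question whose answer made it inapplicable
    confirmed_answer: str  # the factual answer the applicant gave
    confirmed_at: datetime # when the applicant confirmed that answer

def audit_exclusions(excluded: set[str], answers: dict[str, str],
                     rules: dict[str, list[str]]) -> list[ExclusionRecord]:
    """Build one auditable record per excluded category, stored with the SEE."""
    now = datetime.now(timezone.utc)
    records = []
    for trigger, categories in rules.items():
        for category in categories:
            if category in excluded:
                records.append(ExclusionRecord(
                    category, trigger, answers.get(trigger, "unknown"), now))
    return records
```

Each omission in the schedule then resolves to a single record: which question, which answer, and when, which is exactly what a council query about a missing provision needs.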

The government pressure and the planner channel

The NSW Government has been explicit about the DA processing bottleneck. Ministerial benchmarks, published council league tables, and a $5.6 million AI investment targeting assessment efficiency all point in the same direction: the government wants assessment-ready applications arriving at council with complete documentation. The political urgency is housing supply — faster approvals mean more dwellings entering the pipeline sooner.

This creates an apparent tension. Planning professionals have legitimate concerns about tools that automate work that is, in part, professional judgment. Those concerns are well-founded: an AI that incorrectly dismisses a DCP provision creates professional exposure for the planner who signs the document without catching the error.

But the tension resolves when the distribution channel is considered carefully. An AI-generated DCP compliance schedule is functionally useless without a qualified planner reviewing and signing it. Council assessment officers do not treat unsigned SEE documents as equivalent to those carrying professional attribution. The planner is not being bypassed — they remain essential to the process. What changes is how they spend their time. Three hours of manual provision assembly becomes thirty minutes of structured review and professional annotation.

This is the outcome the government's efficiency agenda and the profession's quality standards both require. Planners who submit complete, structured DCP compliance schedules get fewer RFIs and faster assessments. Council officers processing well-prepared applications spend less time chasing missing information. The tool's value proposition to the planner is not "we replace you" — it's "you submit better applications and take on more work."

The planners most likely to adopt this approach are those with high DA volumes and a competitive interest in faster turnaround times. They are also, not coincidentally, the planners most likely to be submitting to the councils under the most government pressure to process faster. The professional adoption and the government efficiency agenda reinforce each other.

What this means for how the tool works

PlotDetect's SEE generator operates on this basis. The DCP tab already applies a four-layer spatial and statutory filter that reduces the full provision database to those applicable to the specific property — by zone, heritage status, precinct, and site conditions. The structured intake then performs a second-pass triage: it uses confirmed factual inputs about the proposal to surface the provisions most critical to the specific development type, flag categories that are structurally impossible to apply, and present the resulting schedule for professional review.
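The first-pass, property-level filter can be pictured as a chain of narrowing passes. This is a hypothetical sketch of the layering idea only (the field names and layer predicates are invented for illustration; they are not PlotDetect's implementation):

```python
# Each layer narrows the provision set using one property attribute,
# mirroring the zone / heritage / precinct / site-conditions layers.
def filter_provisions(provisions: list[dict], prop: dict) -> list[dict]:
    layers = [
        lambda p: prop["zone"] in p["zones"],                              # zone
        lambda p: not p["heritage_only"] or prop["heritage"],              # heritage
        lambda p: p["precinct"] is None or p["precinct"] == prop["precinct"],  # precinct
        lambda p: not p["requires_slope"] or prop["sloping_site"],         # site conditions
    ]
    for applies in layers:
        provisions = [p for p in provisions if applies(p)]
    return provisions
```

The point of the layering is that each pass is independently checkable: a provision survives only if every layer agrees it can apply to the property, before the proposal-level triage runs at all.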

The output is a draft DCP compliance schedule suitable for a planner to annotate, complete, and incorporate into a DA submission. It is not a completed SEE. The disclaimer on every generated document is not a legal formality — it reflects the actual intended workflow. The tool produces the mechanical foundation; the professional provides the judgment that makes it credible.

For planning professionals handling residential DAs at volume, the efficiency gain is substantial. For council assessment officers receiving complete applications, the RFI rate drops. For applicants waiting on approvals, the timeline shortens. That alignment — tool, professional, council, government — is what makes this approach viable rather than simply interesting.