Posted by aakashprasad91 10 hours ago

Launch HN: InspectMind (YC W24) – AI agent for reviewing construction drawings

Hi HN, we're Aakash and Shuangling of InspectMind (https://www.inspectmind.ai/), an AI “plan checker” that finds issues in construction drawings, details, and specs.

Construction drawings quietly go out with lots of errors: dimension conflicts, coordination gaps, material mismatches, missing details, and more. These errors turn into delays and hundreds of thousands of dollars of rework during construction. InspectMind reviews the full drawing set of a construction project in minutes. It cross-checks architecture, engineering, and specifications to catch issues that cause rework before building begins.

Here’s a video with some examples: https://www.youtube.com/watch?v=Mvn1FyHRlLQ.

Before this, I (Aakash) built an engineering firm that worked on ~10,000 buildings across the US. One thing that always frustrated us: a lot of design coordination issues don’t show up until construction starts. By then, the cost of a mistake can be 10–100x higher, and everyone is scrambling to fix problems that could have been caught earlier.

We tried everything: checklists, overlay reviews, peer checks. But scrolling through 500–2,000 PDF sheets and remembering how every detail connects to every other sheet is a brittle process. City reviewers and GC pre-con teams try to catch issues too, yet they still sneak through.

We thought: if models can parse code and generate working software, maybe they can also help reason about the built environment on paper. So we built something we wished we had!

You upload drawings and specs (PDFs). The system breaks them into disciplines and detail hierarchies, parses geometry and text, and looks for inconsistencies:

- Dimensions that don’t reconcile across sheets
- Clearances blocked by mechanical/architectural elements
- Fire/safety details missing or mismatched
- Spec requirements that never made it into drawings
- Callouts referencing details that don’t exist
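For a flavor of that last check, here’s a minimal sketch of a dangling-callout detector. The data model and sheet refs are invented for illustration; our real pipeline builds the callout graph from parsed vectors and OCR:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Callout:
    sheet: str       # sheet the callout appears on
    detail_ref: str  # e.g. "5/A-501" -> detail 5 on sheet A-501

def find_dangling_callouts(callouts, existing_details):
    """Flag callouts that reference details not present anywhere in the set.

    `existing_details` is the set of detail refs that actually exist,
    e.g. {"5/A-501", "2/S-301"}.
    """
    return [c for c in callouts if c.detail_ref not in existing_details]

callouts = [
    Callout("A-101", "5/A-501"),
    Callout("A-102", "7/A-502"),  # detail 7/A-502 was deleted in a revision
]
existing_details = {"5/A-501", "6/A-501"}

issues = find_dangling_callouts(callouts, existing_details)
# issues -> [Callout(sheet='A-102', detail_ref='7/A-502')]
```

Each flagged item then gets reported with its sheet ref and location so a human can click through and verify.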

The output is a list of potential issues with sheet refs and locations for a human to review. We don’t expect automation to replace design judgment, just to help AEC professionals not miss the obvious stuff. Current models are good at the obvious stuff and can process data at volumes far beyond what humans can review accurately, so this is a good application for them.

Construction drawings aren't standardized, and every firm names things differently. Earlier “automated checking” tools relied heavily on manually written rules per customer, which break when naming conventions change. Instead, we’re using multimodal models for OCR + vector geometry, callout graphs across the entire set, constraint-based spatial checks, and retrieval-augmented code interpretation. No more hard-coded rules!
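As an example of what a constraint-based spatial check can look like, here’s a toy clearance check on 2D bounding boxes. The coordinates, clearance value, and element names are invented; real checks run on parsed vector geometry and code-derived clearance requirements:

```python
def rects_overlap(a, b):
    """a, b are (xmin, ymin, xmax, ymax) in plan coordinates (inches)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def clearance_violations(fixtures, obstructions, clearance=30.0):
    """Flag obstructions intruding into a fixture's required clear zone.

    Each fixture/obstruction is (name, bbox); `clearance` inflates the
    fixture bbox on all sides to form its required clear zone.
    """
    issues = []
    for name, (x0, y0, x1, y1) in fixtures:
        zone = (x0 - clearance, y0 - clearance, x1 + clearance, y1 + clearance)
        for oname, obox in obstructions:
            if rects_overlap(zone, obox):
                issues.append((name, oname))
    return issues

fixtures = [("electrical panel", (100, 0, 130, 6))]
obstructions = [("duct", (110, 20, 140, 30))]
print(clearance_violations(fixtures, obstructions, clearance=36.0))
# -> [('electrical panel', 'duct')]
```

The hard part isn’t the geometry test; it’s reliably extracting the boxes and knowing which clearance rule applies to which element.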

We’re processing residential, commercial, and industrial projects today. Latency ranges from minutes to a few hours depending on sheet count. There’s no onboarding required, simply upload PDFs. There are still lots of edge cases (PDF extraction weirdness, inconsistent layering, industry jargon), so we’re learning a lot from failures, maybe more than successes. But the tech is already delivering results that couldn’t be done with previous tools.

Pricing is pay-as-you-go: we give an instant online quote per project after you upload the project drawings. It’s hard to do regular SaaS pricing since one project may be a home remodel and another may be a high-rise. We’re open to feedback on that too; we’re still figuring it out.

If you work with drawings (as an architect, engineer, MEP, GC preconstruction team, real estate developer, or plan reviewer), we’d love a chance to run a sample set and hear what breaks, what’s useful, and what’s missing!

We’ll be here all day to go into technical details about geometry parsing, clustering failures, code reasoning attempts or real-world construction stories about how things go wrong. Thanks for reading! We’re happy to answer anything and look forward to your comments!

42 points | 43 comments
pondemic 4 minutes ago|
I’m sure commissioning engineers would have a field day with this. Have you considered use cases on the larger owner’s side of things? As an owner’s rep I can definitely see value here at an SD and DD level, especially if the owner has a decently sized Facilities or commissioning team.
sparselogic 9 hours ago||
This is fun to see. Some of my family are Division 10 contractors: their GCs love them because they spot design coordination and code issues early and keep the project from getting derailed. Bringing that to the entire project is a serious lifesaver.
aakashprasad91 9 hours ago|
Totally! Division 10 and specialty trades are often the first to see coordination issues show up in the field. We’re trying to bring that same early-warning benefit across the entire drawing set so errors never make it to construction. Would love to run a real project from your family’s world if they’re open to it!
knollimar 6 hours ago||
What kind of system do you have for parsing symbology?

Do you check anything like cross-discipline coordination (e.g. searching online specification data for parts on drawings, like mechanical units, and detecting mismatches with the electrical spec), or is it wholly within one trade's code at a time?

edit: there's info that answers this on the website. It seems limited to the common ones (e.g. elec vs arch), which makes sense.

aakashprasad91 5 hours ago||
Symbol variation is a huge challenge across firms.

Our approach mixes OCR, vector geometry, and learned embeddings so the model can recognize a symbol plus its surrounding annotations (e.g., “6-15R,” “DIM,” “GFCI”).

When symbols differ by drafter, the system leans heavily on the textual/graph context so it still resolves meaning accurately. We’re actively expanding our electrical symbol library and would love sample sets from your workflow.
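A toy version of that context-based resolution, where nearby annotation text refines an ambiguous detector class. The symbol classes, tokens, and mapping below are purely illustrative, not our actual library:

```python
# Hypothetical disambiguation table: raw detector class -> refinements
# keyed by annotation tokens found near the symbol.
AMBIGUOUS = {
    "duplex_receptacle": {
        "GFCI": "gfci_receptacle",
        "WP": "weatherproof_receptacle",
        "6-15R": "nema_6-15_receptacle",
    },
}

def resolve_symbol(shape_class, nearby_text):
    """Disambiguate a detected symbol using annotations found near it."""
    for token in nearby_text:
        refined = AMBIGUOUS.get(shape_class, {}).get(token)
        if refined:
            return refined
    return shape_class  # fall back to the raw detector class

print(resolve_symbol("duplex_receptacle", ["GFCI", "20A"]))
# -> gfci_receptacle
```

In practice the lookup is learned (embeddings over the symbol plus its neighborhood) rather than a hand-written table, which is what lets it survive per-drafter symbol drift.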

aakashprasad91 6 hours ago||
We parse symbols using a mix of vector geometry, OCR, and learned detection for common architectural/MEP symbols. Cross-discipline checks are a big focus as we already flag mismatches between architectural, structural, and MEP sheets, and we’re expanding into deeper electrical/mechanical spec alignment next. Would love to hear which symbols matter most in your workflow so we can improve coverage.
knollimar 6 hours ago|||
I do electrical so parsing lighting is often a big issue. (Subcontractor)

One big issue I've had is drafters using the same symbol for different things, person to person. One person's GFCI is another's switched receptacle. Some people use the specialty outlet symbol very precisely and others don't, often accompanied by an annotation (e.g. 6-15R).

Dimmers being ambiguous is huge; avoiding dimming-type mismatches is basically 80% of the Lutron value add.

oscarmcdougall 3 hours ago|||
We're in a similar space doing machine-assisted lighting takeoffs for contractors in AU/NZ, with bespoke models trained for identifying & measuring luminaires on construction plans.

Compliance is a space we've branched into recently. Would be super interested in seeing how you guys are currently approaching symbol detection.

aakashprasad91 2 hours ago||
Happy to swap notes. If you send a representative lighting plan set, we can run it and share how the detector clusters, resolves, and cross-references symbols across sheets. Always excited to compare approaches with teams solving adjacent problems.
knollimar 6 hours ago||
Maybe this is saying the quiet part out loud: how do you deal with bogus specs that designers end up not caring about since they're copy pasted? Is it just mission accomplished when you point out a potential difficulty?
aakashprasad91 6 hours ago|
We see that a lot — specs that are clearly boilerplate or outdated relative to the drawings. Our goal isn’t to force a change, but to surface where the specs and drawings diverge so the designer can quickly decide what’s intentional vs what’s baggage. “Flag + context for fast human judgment” is the philosophy.
cannedbread 3 hours ago||
When I upload my drawing set, how often should I expect it to hallucinate? And how much of the real stuff does it flag?
aakashprasad91 2 hours ago||
Hallucinations still happen occasionally, but we bias heavily toward high-confidence findings so noise stays low. On typical projects we surface a few hundred coordination issues that are real, observable conflicts across sheets rather than speculative checks. We’re actively improving precision by learning from every false positive customers flag. We show you the drawings, specs, etc., so you can verify everything yourself rather than just trusting the AI.
shuangly 2 hours ago||
We do extensive preprocessing to ensure the AI receives accurate context, data, and documents for review, and we’re continuously refining this, so accuracy keeps improving every day. Accuracy isn't yet fully stable across projects, but we've had findings with >90% accuracy.
Doerge 9 hours ago||
I love this!

Stupid question: would BIM solve these issues? I know northern Europe is somewhat advanced in that direction. What kind of digitalization pace do you see in the US?

knollimar 7 hours ago||
BIM just shuffles the problem around. There are firms that do "one source of truth" BIM models, but the real issues are conflicts and workflow buy-in.

How do you get the architect to agree with the engineer, the lighting designer, and the lighting contractor when they all have different, non-overlapping deadlines, work periods, knowledge, and scope?

edit: if you don't work in the industry, BIM helps for "these two things are in the same spot", but not much for code unless it's about clearance or some spatial based calculation

aakashprasad91 6 hours ago||
100% agree the hardest problems are workflow and incentives, not file formats.

Even with a perfect BIM model, late changes and discipline silos mean drawings still diverge and coordination issues sneak through.

We’re trying to be the “safety net” that catches what falls through when teams are moving fast and not perfectly in sync.

aakashprasad91 9 hours ago||
BIM definitely helps, but most projects still rely heavily on 2D PDFs for coordination and permitting, especially in the US. Even when BIM exists, drawings often lag behind the model and changes don’t stay perfectly synced. We see AI plan checking as a bridge that helps teams catch what falls through the cracks in today’s workflows. And BIM only catches certain issues, not building-code problems.
knollimar 6 hours ago||
Is the pay-as-you-go model percentage-based or based on project size? I've had issues with conflicts of interest around being lean vs. not. It's hard to sell on percentage-based revenue.

Also, who is this targeted at? Subcontractors, GCs, design?

aakashprasad91 6 hours ago|
We price per-project based on size/complexity not % of construction cost, so there’s no conflict of interest around bigger budgets. Today our main users are architects/engineers and GC pre-con teams, but subs who catch coordination issues early also get a ton of value.
knollimar 6 hours ago||
At what stage do you run this on plans? Like DD, or some % CD? What's the intended target timeframe?

I don't see how subs get much value unless they can use it on ~80% CD for bid phases

aakashprasad91 6 hours ago||
Most teams run us late DD through CD, anywhere the set is stable enough that coordination issues matter. Subs especially like running it pre-bid at ~80–100% CDs so they don’t inherit coordination risk. Earlier checks also help designers tighten the set before hand-offs, so value shows up at multiple stages. Eventually the goal is to be a continuous QA tool, including during construction, by pulling in field data and comparing it to drawings and specs: e.g., the drawings show size X but field photos show size Y.
knollimar 6 hours ago||
Would love to run it and give feedback if it's cheap to do so; my company just finished a bunch of projects and would love to cross reference if it catches the issues that we found by hand (assuming it's inexpensive enough). I do high rise electrical work for a subcontractor.
aakashprasad91 6 hours ago||
We’d love that — perfect use case. Send a recent set and we’ll run a discounted comparison so you can see what we catch vs. what surfaced during construction. If helpful, we can hop on a quick call to walk through results and collect feedback. Email me aakash@inspectmind.ai
testUser1228 7 hours ago||
The bathroom height example in your video is really interesting (checking the bathroom height above the toilet against building code), how does it know when to check drawings against code provisions and how does it know which code to look at?
aakashprasad91 7 hours ago|
We infer the applicable codes from the project metadata + the drawings themselves.

The location + occupancy/use type tells us the governing code families (e.g., IBC/IRC, ADA, NFPA, local amendments), and then we parse the sheets for callouts, annotations, assemblies, and spec sections to map them to the relevant provisions.

So the system knows when to check (e.g., plumbing fixture clearances) because of the objects it detects in the drawings, and it knows what code to check based on jurisdiction + building type + what’s being shown in that detail.

The model still flags findings for human review, so designer judgment stays in the loop.
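A very rough sketch of that trigger logic, where detected objects map to candidate checks. The object types and code sections below are illustrative examples, not our actual rule set or jurisdiction logic:

```python
# Illustrative mapping of detected objects to candidate code checks.
CHECK_TRIGGERS = {
    "water_closet": ["ADA 604 clearances", "IPC fixture spacing"],
    "stair": ["IBC 1011 riser/tread", "IBC 1011.11 handrails"],
    "rated_wall": ["IBC 716 opening protectives"],
}

def checks_for(detected_objects):
    """Return candidate code checks triggered by what's on the sheets."""
    triggered = []
    for obj in detected_objects:
        triggered.extend(CHECK_TRIGGERS.get(obj, []))
    return triggered

print(checks_for(["water_closet", "stair"]))
```

Jurisdiction and occupancy then filter which code families (and local amendments) actually apply before any check is run.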

testUser1228 7 hours ago||
Gotcha, so the model is identifying elements on the sheets and determining when to run code checks? Is the model running thousands of code checks per drawing set? I would imagine there are lots of elements that could trigger that
aakashprasad91 7 hours ago||
Yep, the model identifies objects/conditions on sheets (fixtures, stairs, rated walls, landings, etc.) and triggers the relevant checks automatically. It can run thousands of checks per project, but we only surface high-confidence findings where the combination of geometry + annotations + code context points to a real risk. Humans stay in the loop to confirm what matters.
frogguy 7 hours ago||
Are you doing code checks for structural issues? If so, how do you deal with licensing on common code orgs, such as ASCE?
aakashprasad91 7 hours ago|
Great question. We currently focus primarily on coordination, dimension conflicts, missing details, and clear code-triggered checks that don’t require sealed structural judgment. For structural code references (e.g., ASCE-7), we infer applicable sections and surface potential issues for a licensed engineer to review. We don’t replace engineering judgment or sealed design accountability.
T1tt 10 hours ago|
"an AI “plan checker”" do you have some public benchmark for how many issues you can find?

how does this work behind the scenes?

aakashprasad91 10 hours ago|
Great questions. We’re working on a more formal public benchmark and will share results as our dataset grows. Today, we typically catch coordination issues like conflicting dimensions, missing callouts, building code and clearance violations that humans often miss in large sheet sets. Behind the scenes it’s a multimodal workflow: OCR + geometry parsing + cross-sheet callout graph + constraint checks vs. code/spec requirements.