Research Program for AI Bottlenecks

query · confidence: medium · updated: 2026-05-11 · tags: framework, thesis, watchlist, risk, data-quality


User question

How can we research the AI bottlenecks more thoroughly ourselves?

Decision supported

Convert the broad idea generation from ai bottleneck beta into diligence-grade research that can support watch/pass decisions, thesis writing, and eventually position sizing.

Core principle

Do not start with tickers. Start with a physical process map, identify the binding step, then map companies to that step and test whether exposure is material, scarce, and underpriced.

Ontology overlay

Use a lightweight version of the fp profile ontology templates, not the full client-operations template. The useful pieces are the load-bearing discipline: object types, link types, action/decision types, source mappings, validation rules, provenance, lifecycle events, and readiness gates. The client-specific parts such as permissions, writeback, regulated data handling, and customer workflow actions should be adapted or omitted unless we are building an implementation workspace.

For bottleneck research, define:

  • Object types: BottleneckLayer, ProcessStep, Material, Equipment, Company, Facility, CustomerPlatform, Source, Claim, Catalyst, Risk, Metric.
  • Link types: supplies, constrains, substitutes_for, qualifies_with, depends_on, exposed_to, validates_claim, contradicts_claim.
  • Action/decision types: verify_claim, map_process_step, score_company, demote_hype, graduate_to_watchlist, write_thesis, update_kill_criteria.
  • Source mapping: filings, earnings calls, investor decks, patents, technical papers, standards, trade press, dashboards, and paid/free research notes.
  • Validation rules: every bottleneck claim needs at least one primary-source anchor or an explicit unverified label; every company exposure claim needs revenue materiality or a stated gap; every investable conclusion needs valuation and kill criteria.
  • Provenance: preserve raw sources, source date, retrieval date, claim confidence, and whether the claim is primary, secondary, or weak lead.

This turns the wiki from narrative notes into a queryable diligence graph without over-engineering it into a client implementation ontology.
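
As an illustration, here is a minimal sketch of how one company-to-bottleneck exposure could be recorded under this overlay. Object ids, field names, and the company are placeholders, not values taken from the fp profile templates.

```yaml
# Hedged sketch only: example object and link instances under the lite overlay.
# Ids, field names, and the company are placeholders; real schemas live in _meta/ontology-lite/.
objects:
  - id: layer.glass-substrate
    type: BottleneckLayer
    name: Glass substrates / TGV / metallization
  - id: step.tgv-metallization
    type: ProcessStep
    name: TGV drill and metallization
    layer: layer.glass-substrate
  - id: co.example-vendor              # placeholder company, not a recommendation
    type: Company
    name: Example Equipment Vendor

links:
  - type: constrains                   # the step constrains the layer
    from: step.tgv-metallization
    to: layer.glass-substrate
  - type: exposed_to                   # the company is exposed to the scarce step
    from: co.example-vendor
    to: step.tgv-metallization
    revenue_materiality_pct: null      # must be filled or gapped before any watchlist decision
```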

Ontology-lite implementation files

The ontology-lite template files live under _meta/ontology-lite/:

  • _meta/ontology-lite/README.md
  • _meta/ontology-lite/ontology.yml
  • _meta/ontology-lite/object-types.yml
  • _meta/ontology-lite/link-types.yml
  • _meta/ontology-lite/decision-types.yml
  • _meta/ontology-lite/source-mapping.yml
  • _meta/ontology-lite/validation-rules.yml (a sketch follows this list)
  • _meta/ontology-lite/provenance.yml
  • _meta/ontology-lite/lifecycle-events.yml
  • _meta/ontology-lite/company-exposure-scorecard.yml
  • _meta/ontology-lite/deep-dive-template.md
  • _meta/ontology-lite/claim-register-template.md
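
The file contents are not reproduced in this note. As a hedged sketch, validation-rules.yml could encode the three rules from the overlay section roughly as follows; rule ids and field names are assumptions.

```yaml
# Hedged sketch of validation-rules.yml; rule ids and field names are assumptions.
rules:
  - id: claim-needs-primary-anchor
    applies_to: Claim
    require_any:
      - has_link: validates_claim        # anchored by at least one primary Source
        source_tier: primary
      - field: status                    # or explicitly labelled unverified
        equals: unverified
  - id: exposure-needs-materiality
    applies_to: exposed_to
    require_any:
      - field: revenue_materiality_pct
        present: true
      - field: materiality_gap_note      # a stated gap is acceptable
        present: true
  - id: conclusion-needs-valuation-and-kill-criteria
    applies_to: graduate_to_watchlist
    require_all:
      - field: valuation_view
        present: true
      - field: kill_criteria
        present: true
```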

Workstreams

1. Bottleneck map

For each layer in ai physical stack watchlist, document the process chain, lead times, qualification requirements, capacity constraints, substitutes, and failure modes.

Priority maps:

  • Glass substrates: organic ABF -> glass panel -> TGV -> metallization -> RDL -> bonding -> inspection -> qualified package. See glass substrate cycle; a sketch of this chain as ProcessStep records follows the list.
  • HBM manufacturing: wafer -> die stack -> bonding -> molding -> inspection/metrology -> burn-in/test -> package integration. See hbm manufacturing bottlenecks.
  • Photonics/InP: substrate -> epi -> laser/EML -> optical engine -> module/CPO package.
  • Retimers/networking: protocol generation -> retimer/active cable/switch timing -> platform qualification.
  • Power/cooling/construction: grid interconnect -> generation -> equipment -> MEP -> rack/pod deployment -> utilization.
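
As a sketch, the first priority map could be captured as ordered ProcessStep records like this; every quantitative field is a placeholder to be filled from primary sources.

```yaml
# Hedged sketch: the glass-substrate chain as ordered ProcessStep records.
# All lead-time and qualification values are placeholders pending primary sources.
layer: glass-substrate
steps:
  - {name: organic ABF baseline, lead_time: unknown, qualification: n/a}
  - {name: glass panel,          lead_time: unknown, qualification: unknown}
  - {name: TGV,                  lead_time: unknown, qualification: unknown}
  - {name: metallization,        lead_time: unknown, qualification: unknown}
  - {name: RDL,                  lead_time: unknown, qualification: unknown}
  - {name: bonding,              lead_time: unknown, qualification: unknown}
  - {name: inspection,           lead_time: unknown, qualification: unknown}
  - {name: qualified package,    lead_time: unknown, qualification: customer-specific}
```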

2. Source hierarchy

Use this evidence ladder (a claim-register sketch follows the list):

  1. Primary: company filings, earnings calls, investor decks, capex plans, customer announcements, export-control documents, standards/specs, patents, conference proceedings.
  2. Strong secondary: reputable industry research with methodology, trade press with named supply-chain claims, technical conference coverage.
  3. Weak but useful leads: blogs, Substack, X posts, dashboards, sell-side snippets, anonymous supply-chain notes.
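
A hedged sketch of how a single claim-register entry could carry the tier, confidence, and provenance fields described above; the claim text and field names are illustrative.

```yaml
# Hedged sketch of one claim-register entry; the claim text and fields are illustrative.
- claim_id: glass-001
  claim: "TGV throughput is the binding step for glass-substrate capacity"
  bottleneck_layer: glass-substrate
  tier: strong_secondary            # primary | strong_secondary | weak_lead
  confidence: medium
  status: unverified                # flips to verified only with a primary-source anchor
  sources:
    - ref: raw/articles/photoncap-glass-substrate-cycle-2026-05-08.md
      source_date: 2026-05-08
      retrieved: 2026-05-11
  contradicts: []                   # ids of conflicting claims, if any
```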

3. Company scorecard

For every candidate, score 1-5 on the following; a scorecard sketch follows the list:

  • Process bottleneck control: does the company control the actual scarce step?
  • Customer qualification depth: named customers, sampling, certification, design wins, sole-source/dual-source status.
  • Capacity and capex recovery: utilization, lead times, capex, backlog, book-to-bill, gross margin trajectory.
  • Revenue materiality: percent of revenue/earnings exposed to the bottleneck.
  • Substitution risk: alternate suppliers, alternate process flow, customer internalization, architecture changes.
  • Valuation / expectation gap: what growth and margins the stock already discounts.
  • Multi-application durability: can the process serve AI accelerators, HBM, CPO, photonic integration, or other markets?
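
A hedged sketch of how company-exposure-scorecard.yml might hold these dimensions for one candidate; the company is a placeholder and the scores are deliberately left null until backed by validated claims.

```yaml
# Hedged sketch of a scorecard entry; company and scores are placeholders.
company: Example Equipment Vendor
bottleneck_layer: glass-substrate
scores:                                  # 1 = weak, 5 = strong
  process_bottleneck_control: null
  customer_qualification_depth: null
  capacity_capex_recovery: null
  revenue_materiality: null
  substitution_risk: null                # scored so that 5 = low substitution risk
  valuation_expectation_gap: null
  multi_application_durability: null
notes: scores stay null until backed by claims that pass the validation rules
```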

4. Evidence pack per bottleneck

Each bottleneck research packet should include:

  • Process diagram in words.
  • 5-10 primary sources.
  • Named companies and role in the process.
  • Capacity / lead-time indicators.
  • Qualification status by customer.
  • Revenue materiality table.
  • Bull/base/bear timeline.
  • Kill criteria.
  • Monitoring checklist.

5. Variant-perception test

For each thesis, write the following; a template sketch follows the list:

  • Consensus / narrative: what the market appears to believe.
  • Variant view: what we believe that differs.
  • Proof required: what evidence would make the variant view true.
  • Disproof required: what would invalidate it.
  • Timing: when the evidence should show up.
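
A hedged sketch of how these five fields could sit in the deep-dive template; field names are assumptions and the values are prompts rather than filled-in content.

```yaml
# Hedged sketch of variant-perception fields for one thesis; values are prompts only.
variant_perception:
  consensus: what the market appears to believe about the layer
  variant_view: what we believe that differs, stated falsifiably
  proof_required:
    - evidence that would confirm the variant view
  disproof_required:
    - evidence that would invalidate it
  timing: when the evidence should show up (quarter or catalyst)
```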

6. Anti-hype checks

Reject or demote a name if:

  • It has only vague AI exposure.
  • Bottleneck revenue is too small to move consolidated earnings.
  • The process is scarce but the company lacks pricing power.
  • The bottleneck can be solved by capacity additions before orders flow through.
  • Price action is driven mainly by low-float/momentum dynamics rather than by order or margin evidence.
  • The thesis requires several unverified architecture transitions at once.

First three deep dives to run

  1. Glass substrates / TGV / metallization: highest fit with the new PhotonCap article and a likely next-layer bottleneck.
  2. HBM packaging enablement: bonding, molding, inspection/metrology, test, and OSAT capacity.
  3. InP/photonics chain: substrate, epi, lasers, EMLs, optical engines, CPO timing, and silicon-photonics substitution risk.

Deliverable format for each deep dive

  • One-page executive summary.
  • Bottleneck claim and confidence.
  • Process-chain map.
  • Company exposure table.
  • Primary-source evidence table.
  • Bull/base/bear case.
  • Kill criteria.
  • Monitoring calendar.
  • Watch/pass/possible buy stance with confidence and open questions.

Immediate next checks

  • Build the company universe from ai bottlenecks dashboard plus PhotonCap's visible categories, but do not assume the dashboard weights are investable.
  • For glass, verify Intel, TSMC, Absolics, AMD, and equipment-vendor claims from primary sources.
  • For HBM, verify whether the binding constraint is memory wafer capacity, stacking, bonding, test, advanced packaging, substrates, or CoWoS/OSAT capacity.
  • For InP/photonics, verify whether the scarce step is InP substrate, epi, EML laser capacity, DSP/retimer, optical engine assembly, or module qualification.

Local source refs

  • raw/articles/ai-bottleneck-beta-dashboard-2026-05-11.md
  • raw/articles/ai-bottlenecks-dashboard-snapshot-2026-05-11.md
  • raw/articles/photoncap-glass-substrate-cycle-2026-05-08.md