How Ghostwire collects, scores, and synthesizes cybersecurity intelligence. Updated continuously.
Ghostwire is a fusion platform, not a primary reporter. Every claim should be traceable to a source — vendor advisory, CERT bulletin, research blog, government release, or threat-actor disclosure. When we add interpretation, we say so.
The platform polls 190+ public cybersecurity intelligence sources across 12+ languages: English, Chinese, Russian, Japanese, Korean, Spanish, Portuguese, French, German, Polish, Swedish, and Ukrainian. Sources fall into five buckets:
Non-English sources are tagged with a language pill on every article card. The point is not novelty — it is asymmetry. A campaign documented first in Mandarin or Russian typically surfaces in English coverage 24-72 hours later, and sometimes never does.
Every candidate story is scored against eight filters. A story publishes at a score of ≥ 2, is flagged PRIORITY at ≥ 4, and is flagged DUAL SIGNAL at ≥ 8, the threshold at which both the technical and narrative layers are engaged.
The story matters because of HOW it keeps happening, not just what happened. Triggered when an event is better understood as a feature of a system than a bug.
The event confirms a longitudinal pattern already in our Pattern Library — supply chain abuse, moderation sabotage, institutional impersonation, and so on.
The dominant media frame is getting the mechanism wrong. We re-anchor the story to the mechanism, not the personality or vendor.
Two or more tracked threat streams intersect — e.g. a Chinese APT campaign converges with a CISA capacity story, or a DPRK financial operation overlaps with an open-source supply-chain incident. +1 for each additional stream.
The story reveals deterioration in a defensive institution's capacity — staffing, authorities, funding, leadership continuity.
The story advances a multi-year documented thread we already track — Russia/IRA 2016→present, platform moderation capture 2017→present, etc.
The story requires analysis BEFORE the inflection point to be actionable. Post-mortems are interesting; pre-mortems are useful.
There is a named mechanism that powerful actors benefit from keeping unnamed. Ghostwire's job is to name it.
The rubric is opinionated by design. It biases against incremental vendor-vs-vendor noise and toward stories where the structural mechanism is the actual lede. If you only want raw feeds, the unfiltered feed is always available.
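The rubric above can be sketched as a simple scoring function. The filter names and the shape of the inputs here are illustrative assumptions; only the thresholds (publish at ≥ 2, PRIORITY at ≥ 4, DUAL SIGNAL at ≥ 8) and the +1-per-additional-stream bonus come from the text.

```python
# Eight filters, one point each. Names are illustrative, not Ghostwire's
# internal identifiers.
FILTERS = (
    "systemic_mechanism",     # event is a feature of a system, not a bug
    "pattern_library_match",  # confirms a tracked longitudinal pattern
    "frame_correction",       # dominant media frame gets the mechanism wrong
    "stream_convergence",     # two or more tracked threat streams intersect
    "institutional_decay",    # defensive institution losing capacity
    "longitudinal_thread",    # advances a multi-year documented thread
    "pre_inflection",         # analysis needed before the inflection point
    "unnamed_mechanism",      # powerful actors benefit from it staying unnamed
)

def score_story(hits: set[str], extra_streams: int = 0) -> dict:
    """Score a candidate story.

    `hits` is the set of triggered filters; `extra_streams` adds +1 for
    each converging threat stream beyond the second, per the
    stream-convergence filter.
    """
    score = sum(1 for f in FILTERS if f in hits) + max(extra_streams, 0)
    return {
        "score": score,
        "publish": score >= 2,
        "priority": score >= 4,
        "dual_signal": score >= 8,
    }
```

For example, a story that triggers only the systemic-mechanism and frame-correction filters scores 2: it publishes but is not flagged PRIORITY.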
Each tracked vulnerability is enriched with:
The daily briefings are produced by Anthropic's Claude against the day's harvested corpus, using a fixed analytical prompt. There is no human editor in the loop before publication. We sanitize the model's output for self-correction artifacts (recap markers, control sequences, stray code fences) but do not rewrite its analytical claims.
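A sanitization pass of this kind might look like the following. The specific regular expressions are assumptions for illustration, not Ghostwire's production rules; only the artifact classes (recap markers, control sequences, stray code fences) come from the text.

```python
import re

# Illustrative patterns for the three artifact classes. The exact rules
# Ghostwire applies are not published; these are plausible stand-ins.
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")
STRAY_FENCE = re.compile(r"^```[a-zA-Z]*\s*$", re.MULTILINE)
RECAP_MARKER = re.compile(
    r"^(?:Recap|To summarize|As stated above):\s*",
    re.IGNORECASE | re.MULTILINE,
)

def sanitize(text: str) -> str:
    """Strip model self-correction artifacts without touching claims."""
    text = CONTROL_CHARS.sub("", text)   # control sequences
    text = STRAY_FENCE.sub("", text)     # orphan code fences
    text = RECAP_MARKER.sub("", text)    # recap markers
    # Collapse the blank runs the removals leave behind.
    return re.sub(r"\n{3,}", "\n\n", text).strip()
```

Note that the pass only deletes presentation artifacts; the analytical sentences themselves flow through untouched, consistent with the no-rewrite policy above.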
What this means for you: briefings are a starting point and a synthesis. They are not a replacement for primary sources. Verify any claim that would change a patching, procurement, or attribution decision against the cited source before acting on it.
The briefing prompt includes a six-step pre-output verification protocol the model is instructed to apply to each item: source isolation, quantifier lock, temporal range check, cross-contamination scan, analyst-addition audit, and a final five-question check. The protocol is not a guarantee of accuracy — it is a guardrail against the model's most common failure modes (cross-article bleed, quantifier flips, date compression).
Articles are deduplicated by canonical URL, with direct publisher feeds winning over Google News proxies for the same story. Headlines and summaries are stored in their source language and rendered as-is — Ghostwire does not machine-translate to English in the feed. The briefing model reads multilingual sources directly.
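The dedup rule can be sketched as follows. The proxy-unwrapping logic (a `url` query parameter on `news.google.com` links) is an assumption about feed shape, not a documented Ghostwire behavior; only the canonical-URL keying and publisher-over-proxy preference come from the text.

```python
from urllib.parse import parse_qs, unquote, urlsplit

def canonical_url(url: str) -> str:
    """Unwrap an assumed Google News proxy link to the publisher URL,
    then normalize scheme/host case and drop query, fragment, and
    trailing slash."""
    parts = urlsplit(url)
    if parts.netloc.endswith("news.google.com"):
        target = parse_qs(parts.query).get("url", [""])[0]
        if target:
            parts = urlsplit(unquote(target))
    return f"{parts.scheme.lower()}://{parts.netloc.lower()}{parts.path.rstrip('/')}"

def dedupe(articles: list[dict]) -> list[dict]:
    """Keep one article per canonical URL; a direct publisher item
    replaces a previously seen Google News proxy for the same story."""
    seen: dict[str, dict] = {}
    for a in articles:
        key = canonical_url(a["url"])
        is_proxy = "news.google.com" in a["url"]
        if key not in seen or (
            "news.google.com" in seen[key]["url"] and not is_proxy
        ):
            seen[key] = a
    return list(seen.values())
```

Keying on the unwrapped, normalized URL is what lets the proxy and publisher copies of the same story collide, so the preference rule can pick the publisher feed.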
Send corrections, takedowns, or factual challenges to contact@ghostwire.news. Material corrections to published briefings are appended in-place with a dated note.