Research Credibility Standards Used at Job One for Known and Unknown Data

Last Updated 3.3.26.

 

Here are the Research Source Credibility principles we use for:

Weighting known data in cumulative calculations, and

Weighting known missing/omitted data so it doesn’t silently wreck the result.

 

 

A) How to weight known data in complex cumulative calculations

 

1) Define what the number actually means

Before weighting anything, lock the target:

    • What is being estimated? (temperature physics vs. “urgency framing” vs. damages vs. policy feasibility)

    • What is the unit? (% underestimation, probability of claim truth, expected value, risk index)

    • What’s in-scope vs out-of-scope?

If the “thing” isn’t well-defined, weights become theater.

 

2) Use evidence-quality weights, not “all citations are equal”

Weight sources by credibility tier, even when the weighting is only implicit:

    • Tier 1: primary data, court findings, subpoenaed docs, official audits, reproducible datasets

    • Tier 2: peer-reviewed synthesis, meta-analyses, robust modeling papers

    • Tier 3: high-quality investigative journalism with document trails

    • Tier 4: NGO summaries (useful, but not automatically decisive)

    • Tier 5: stakeholder PR (important as positions, weak as truth)

In practice: Tier 1–2 can dominate; Tier 4 can support; Tier 5 can’t “prove” much.
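
To make the tiers concrete, here is a minimal sketch in Python. The numeric weights are illustrative assumptions, not fixed Job One values; the point is only that the multipliers fall steeply from Tier 1 to Tier 5.

    # Illustrative credibility-tier weights (assumed values for this
    # sketch, not official Job One numbers).
    TIER_WEIGHTS = {
        1: 1.0,   # primary data, court findings, audits, reproducible datasets
        2: 0.8,   # peer-reviewed synthesis, meta-analyses, robust modeling
        3: 0.5,   # investigative journalism with document trails
        4: 0.3,   # NGO summaries: useful support, not decisive
        5: 0.1,   # stakeholder PR: a position, not proof
    }

    def weighted_contribution(raw_strength, tier):
        # Scale a source's raw evidential strength by its credibility tier.
        return raw_strength * TIER_WEIGHTS[tier]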

 

3) Avoid double counting (the silent killer)

If two lines of evidence share the same origin (same dataset, same doc cache, same expert circle), they don’t get counted twice. Correlated evidence gets discounted.
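
One simple way to implement the discount, sketched in Python. The assumption that every source can be tagged with an origin, and the repeated 50% haircut, are both illustrative choices rather than the only defensible ones.

    from collections import defaultdict

    def discount_correlated(sources):
        # sources: list of (origin_id, weight) pairs.
        # The first source from each origin keeps full weight; each
        # repeat from the same origin is halved again, so a shared
        # dataset or document cache can't be counted twice at full value.
        seen = defaultdict(int)
        discounted = []
        for origin, weight in sources:
            discounted.append(weight * 0.5 ** seen[origin])
            seen[origin] += 1
        return discounted

    # Two lines of evidence built on the same document cache:
    print(discount_correlated([("doc_cache_A", 1.0), ("doc_cache_A", 1.0)]))
    # -> [1.0, 0.5] instead of the double-counted [1.0, 1.0]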

 

4) Combine contributions with a structure that matches reality

When multiple mechanisms can each contribute, use a multiplicative “stacking” model (sketched in code after the list below):

Combined = 1 − ∏ᵢ (1 − cᵢ), where cᵢ is the fractional contribution of mechanism i

Why? Because it:

    • captures cumulative stacking without letting totals exceed 100%,

    • reduces inflation from overlapping factors,

    • behaves sensibly when many small effects exist.
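
Here is the formula as a few lines of Python, assuming each cᵢ is a fractional contribution between 0 and 1:

    from math import prod

    def combined(contributions):
        # Stack fractional contributions without letting the total
        # exceed 100%: Combined = 1 - product(1 - c_i).
        return 1 - prod(1 - c for c in contributions)

    # Three mechanisms contributing 10%, 15%, and 5%:
    print(combined([0.10, 0.15, 0.05]))  # ~0.273, not the naive additive 0.30

Note how the stacked total (about 27.3%) comes in below the naive 30% sum; that is exactly the anti-inflation behavior we want from overlapping factors.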

 

5) Use ranges, not point estimates, unless you have calibration

For each component, we prefer low–mid–high ranges (see the sketch after this list). Then:

    • midpoint becomes a “best estimate,”

    • width reflects uncertainty,

    • the range itself becomes part of the truth claim.
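
Here is a sketch of carrying those ranges through the stacking model from step 4, assuming each component is a (low, mid, high) triple of fractional contributions. Pairing lows with lows and highs with highs gives a widest-case envelope, not a calibrated interval; that caveat matters.

    from math import prod

    def combine_ranges(components):
        # components: list of (low, mid, high) triples.
        # Returns the (low, mid, high) of the stacked combination.
        return tuple(
            1 - prod(1 - c[i] for c in components)
            for i in range(3)
        )

    # Two mechanisms, each expressed as low-mid-high:
    print(combine_ranges([(0.05, 0.10, 0.20), (0.02, 0.05, 0.10)]))
    # -> (~0.069, ~0.145, ~0.28): the interval is part of the claim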

 

6) Sensitivity: find what actually drives the outcome

Even without full Monte Carlo, you can do a “tornado chart in your head” (or in a few lines of code, sketched below):

    • Which factor, if wrong, changes the result the most? Those get stricter scrutiny and tighter bounds.
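
The same idea in code, as a one-at-a-time sensitivity sweep over the stacking model. This is a crude stand-in for a real tornado chart, and the component names and numbers below are purely illustrative.

    from math import prod

    def combined(cs):
        return 1 - prod(1 - c for c in cs)

    def tornado(components):
        # components: list of (name, low, mid, high).
        # Swing each component alone across its range while the others
        # sit at their midpoints; the widest swing gets the strictest
        # scrutiny and the tightest bounds.
        mids = {name: mid for name, low, mid, high in components}
        swings = []
        for name, low, mid, high in components:
            others = [m for n, m in mids.items() if n != name]
            swings.append((name, combined(others + [high]) - combined(others + [low])))
        return sorted(swings, key=lambda s: -s[1])

    print(tornado([("feedbacks", 0.05, 0.10, 0.20),
                   ("reporting lag", 0.02, 0.05, 0.10)]))
    # "feedbacks" swings the result roughly twice as hard, so it drives the outcome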

 

7) Sanity checks (reality punches liars)

    • Does the number contradict back-tested history?

    • Does it imply physically or socially impossible outcomes?

    • Does it “solve” complexity too neatly (single villain, single cause)?

If yes, shrink confidence and widen ranges.

 

B) How to weight known missing or omitted data (so it doesn’t blow up the result)

If missing terms are known and you treat them as zero, you get fake precision.

Here’s the correct playbook.

 

1) First classify missingness: is it random or biased?

    • MCAR (missing completely at random): rare in politics and climate impacts

    • MAR (missing at random, conditional on observed variables): sometimes true

    • MNAR (missing not at random): common when incentives exist (delay, underreport, PR pressure)

If it’s MNAR, you assume the missingness is directional rather than neutral.

 

2) Never treat “unknown” as zero: assign bounds

For known omissions, you assign:

    • minimum plausible value

    • maximum plausible value

    • most likely value
      Even crude bounds are better than pretending it’s absent.

 

3) Add missing factors as explicit terms, not footnotes

If the model is:

Y = f(known factors)

You rewrite it as:

Y = f(known) + g(omitted)

And you give g a distribution or range.
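
In code, using the additive form the formula above shows, with g expressed as the (min, likely, max) bounds from step 2 (the numbers are illustrative):

    def with_omission(f_known, g_bounds):
        # Add a known omission as an explicit bounded term instead of
        # silently treating it as zero. Returns (low, likely, high) for Y.
        g_min, g_likely, g_max = g_bounds
        return (f_known + g_min, f_known + g_likely, f_known + g_max)

    # Known factors say 25% underestimation; the omitted term is bounded
    # at 2-12% with 5% most likely:
    print(with_omission(0.25, (0.02, 0.05, 0.12)))
    # -> (0.27, 0.30, 0.37): an honest interval instead of a fake-precise 0.25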

 

4) Use “penalty factors” when the omission is structural

If omissions systematically bias in one direction (common in consensus summaries), you apply a bias-correction term like the following (sketched in code after the list):

    • Conservatism penalty: +X% urgency undercall

    • Tail-risk penalty: +Y% consequence undercall

    • Feasibility optimism penalty: +Z% timeframe undercall
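
Sketched in Python, with X, Y, and Z left as parameters, since the right penalty sizes have to be argued for case by case. We stack the penalties with the same multiplicative structure as section A4 so the corrected total stays bounded; a simple additive version would also match the “+X%” phrasing.

    def apply_penalties(base, conservatism=0.0, tail_risk=0.0, feasibility=0.0):
        # base and each penalty are fractions, e.g. 0.05 for +5%.
        corrected = base
        for penalty in (conservatism, tail_risk, feasibility):
            corrected = 1 - (1 - corrected) * (1 - penalty)
        return corrected

    # Placeholder penalty values, for illustration only:
    print(apply_penalties(0.20, conservatism=0.05, tail_risk=0.03))  # ~0.263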

 

5) Use robustness ranges and “break-even” thresholds

For omissions that could swing the result:

    • Calculate how large the omission would need to be to change your conclusion.
      If that break-even value is totally plausible, the conclusion must be softer.

Example in plain English:

“If omitted feedbacks contribute even ~5–10% more urgency, the combined underestimation jumps from ~25% to ~35%.”
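
Under the stacking model from section A4, the break-even contribution can be solved in closed form, as this sketch shows (the base and threshold values are illustrative):

    def break_even_contribution(base, threshold):
        # Smallest omitted contribution c satisfying
        # 1 - (1 - base) * (1 - c) = threshold.
        return 1 - (1 - threshold) / (1 - base)

    # If the base underestimation is ~25% and the conclusion flips at ~35%:
    print(break_even_contribution(0.25, 0.35))  # ~0.133

If an omitted contribution of roughly 13% is entirely plausible, the conclusion has to be stated more softly.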

 

6) Weight omissions by “impact leverage,” not by how famous they are

An omitted factor gets weight based on (see the sketch after this list):

    • magnitude if true,

    • probability it’s non-trivial,

    • correlation with other factors,

    • speed (near-term effects often matter more for timeframe claims).
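
A toy scoring function along these lines; the multiplicative form and the example numbers are illustrative assumptions, not a fixed Job One formula:

    def leverage(magnitude, p_nontrivial, correlation, speed):
        # magnitude:    plausible effect size if the factor is real (0-1)
        # p_nontrivial: probability the effect is non-trivial (0-1)
        # correlation:  overlap with factors already counted (0-1)
        # speed:        near-term relevance multiplier (0-1)
        return magnitude * p_nontrivial * (1 - correlation) * speed

    # A famous but slow, heavily overlapping factor...
    print(leverage(0.30, 0.9, 0.7, 0.3))  # ~0.024
    # ...scores below an obscure, fast, independent one:
    print(leverage(0.10, 0.8, 0.1, 0.9))  # ~0.065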

 

7) Value of Information (VOI): decide whether to chase better data

Sometimes the right answer isn’t “pick a number,” it’s:

    • Is this omission worth spending time to reduce uncertainty?
      If it changes decisions, VOI is high. If not, stop.
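
The stop/go logic reduces to one check. This is a deliberately crude sketch: a real VOI analysis would weigh probabilities and the cost of gathering the data, but it captures the decision rule.

    def worth_chasing(low, high, decision_threshold):
        # Better data is worth chasing only if the plausible range
        # straddles the threshold where the decision would change.
        return low < decision_threshold < high

    print(worth_chasing(0.27, 0.37, 0.30))  # True: this omission can flip the call
    print(worth_chasing(0.27, 0.37, 0.50))  # False: stop chasing it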

 

C) The clean “principle summary” you can reuse in your work

Here is a short rule set:

    1. Define the target quantity precisely.

    2. Rank evidence by credibility tier.

    3. Discount correlated sources.

    4. Use ranges for each contributing mechanism.

    5. Combine with a structure that prevents double counting (often multiplicative).

    6. Treat known omissions as explicit terms with bounds, never as zero.

    7. Run sensitivity and break-even tests.

    8. Report conclusions as intervals + confidence, not a single magic number.

That’s the non-delusional way to weight reality. Humans hate it because it doesn’t fit on a bumper sticker, which is also why they keep driving into the same wall.

