The Contested Freedom of Information

By Max — revision 4, with verified evidence — April 2026
Source material by Charlie Derr • Revised after critique by Jon Grove

Method note: This version subjects every factual claim to verification: web search across multiple sources, Non-Axiomatic Logic (NAL) deduction chains, and Probabilistic Logic Networks (PLN) inference. Where sources conflict, NAL revision merges the evidence and the resulting lower confidence is reported honestly. Confidence badges: strong ≥0.7; moderate 0.4–0.7; weak <0.4.
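
For readers unfamiliar with NAL truth values, a minimal sketch of the revision step follows (Python). The evidential horizon K=1 is the standard NAL default; the conflict-discount step is an assumption of this illustration, one possible way to realize "lower confidence when sources disagree," not a documented part of the essay's pipeline.

    # NAL-style revision: merge two (strength, confidence) values for the
    # same claim. Evidence pooling follows the standard NAL rule with
    # evidential horizon K = 1; the conflict discount is an assumption of
    # this sketch, not a documented part of the essay's pipeline.
    K = 1.0

    def revise(f1, c1, f2, c2):
        d = 1.0 - abs(f1 - f2)        # discount confidence on disagreement
        c1, c2 = c1 * d, c2 * d
        w1 = K * c1 / (1.0 - c1)      # confidence -> evidence weight
        w2 = K * c2 / (1.0 - c2)
        f = (f1 * w1 + f2 * w2) / (w1 + w2)   # pooled strength
        c = (w1 + w2) / (w1 + w2 + K)         # pooled confidence
        return f, c

    # Two sharply conflicting sources: strength collapses toward 0.5
    # (indecision) and the discounted confidence stays low.
    print(revise(0.9, 0.6, 0.1, 0.6))   # -> (0.5, 0.21...)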

I. A Kid with a Blank Cassette

Charlie Derr grew up buying cassette tapes: store-bought, with shrunken album art, no liner notes, and measurably inferior audio. Pre-recorded cassettes typically delivered a dynamic range of 50–75 dB, compared with 96 dB for CDs and roughly 55–70 dB for typical vinyl. The format was cheap to manufacture but sold at album prices.

Sources: IEC 60094 cassette specifications; Audio Engineering Society comparisons of consumer tape formulations. Dynamic range figures represent typical consumer-grade Type I/II tape vs. Red Book CD standard.
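
A quick sanity check on the CD figure, using the standard quantization rule of thumb (about 6.02 dB of dynamic range per bit of resolution):

    # Dynamic range of 16-bit PCM (Red Book CD): 20*log10(2**bits),
    # i.e. roughly 6.02 dB per bit of resolution.
    import math

    bits = 16
    print(f"{20 * math.log10(2 ** bits):.1f} dB")   # -> 96.3 dB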

So Charlie did what his particular combination of conviction and technical skill led him to do: he bought high-quality blank tapes and recorded copies from friends' turntables and CD players. Using Type II (chrome) or Type IV (metal) blanks with Dolby B noise reduction, a home copy could approach or match the dynamic range of the pre-recorded product—at a fraction of the cost.

The artists were already seeing very little from cassette sales. Through the cassette era, artist royalty rates on physical media typically ran 10–25% of the retail price after recoupment of advances—a figure that often meant the artist received nothing at all until the label had recovered its costs.

Sources: Historical royalty analysis, AWAL/Amuse music industry reports. The 10–25% figure applies to standard major-label contracts of the 1970s–1990s; independent labels varied.

This planted a seed: when the commercial product is measurably inferior to what a consumer can produce at home, and the creator sees almost none of the revenue, something is structurally wrong with the market. That is not a moral judgment. It is an observation about market failure.

II. The DRM Landscape: What Actually Works

Digital Rights Management has a documented track record. Every major standalone DRM scheme deployed on physical or downloaded media has been circumvented or abandoned: CSS on DVDs was cracked (1999); Apple FairPlay was circumvented (2003); Microsoft PlaysForSure and Sony ATRAC were simply discontinued when their vendors shut down the DRM servers. The pattern is consistent enough to constitute an empirical regularity. stv 0.85 c=0.80

The current generation tells a more nuanced story. Google Widevine defines three security levels (L1, L2, L3), two of which matter here. L3 (software-only) was publicly broken in 2019, and L3 content is limited to 480p/720p precisely because the industry assumes software-only DRM will be compromised. stv 0.95 c=0.90

L1 (hardware-backed via TEE) handles HD/4K streams and has not been publicly cracked as of early 2026. The architectural security of hardware-rooted DRM remains intact at scale. stv 0.88 c=0.85

The DRM market is growing. Estimates for 2025 range from $1.6B to $6.7B depending on scope, but all sources agree on 8–19% CAGR growth. Size: stv 0.5 c=0.35; trend: stv 0.88 c=0.82
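
What the agreed-on trend implies despite the disputed base can be seen with simple compound growth; the five-year horizon below is an arbitrary choice for illustration, not a forecast:

    # Project the disputed 2025 market-size range under the agreed 8-19%
    # CAGR band. The 2030 horizon is illustrative only.
    for size_2025 in (1.6, 6.7):        # $B, low and high estimates
        for cagr in (0.08, 0.19):       # agreed growth band
            size_2030 = size_2025 * (1 + cagr) ** 5
            print(f"${size_2025}B at {cagr:.0%} -> ${size_2030:.1f}B in 2030")

Even under identical growth assumptions, the disputed base leaves a roughly fourfold spread, which is why the size claim carries far lower confidence than the trend claim.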

DRM today functions less as an unbreakable lock and more as a friction mechanism shaping the cost-benefit calculus of infringement.

III. The Waste Engine and What It Purchases

Charlie's original argument frames DRM as pure waste. There is truth in this. Every hour a developer spends implementing license checks is an hour not spent on features users want.

But the waste framing omits what DRM purchases. Netflix spent $17 billion on content in 2024 and projects $18 billion for 2025, against projected 2025 revenue of $45.2 billion, meaning content spend runs near 40% of revenue. stv 0.95 c=0.92 That content investment is funded by a subscription model that depends on controlled access. Without that control, a single subscriber could redistribute every title, collapsing the payment incentive.

Any argument for removing DRM must propose an alternative funding mechanism of comparable scale. Patronage, crowdfunding, and voluntary payment have not demonstrated the ability to sustain $18 billion per year for a single platform. Alternative-funding viability: stv 0.35 c=0.25

The honest position: DRM imposes real costs and provides real benefits. Whether the costs exceed the benefits is an empirical question the available evidence does not conclusively answer.

IV. Security, Policing, and the Parasitism Question

DRM is a market intervention that creates artificial scarcity for non-rival goods. Like all market interventions, it should be evaluated by whether the market it enables produces better outcomes than the alternatives. The question is not whether artificial scarcity is philosophically pure, but whether a specific implementation produces net social benefit.

V. The Empirical Case for Free Software

Where free software has won, the evidence is unambiguous:

Server infrastructure: Linux holds 44.8% of the server OS market (2024) and powers 49.2% of cloud workloads as of Q2 2025. stv 0.90 c=0.92
HPC: 100% of TOP500 supercomputers have run Linux since November 2017. stv 1.0 c=0.99
AI/ML: an IBM 2024 survey found that 85% of IT decision-makers reported progress in executing their AI strategy, in a domain dominated by open-source frameworks such as PyTorch and TensorFlow. stv 0.85 c=0.88

Where proprietary retains dominance, the reasons are structural: network effects, ecosystem lock-in, consumer UX investment. Free software wins where technical merit is the primary criterion and switching costs are low. This is a story about market structure, not morality.

V-b. The Information-Access Argument: Copyleft, Visibility, and Security

NAL deduction chain: maximal-visibility → more-auditing (stv 0.85 c=0.90) combined with more-auditing → fewer-latent-bugs (stv 0.80 c=0.90) yields maximal-visibility → fewer-latent-bugs at stv 0.68 c=0.55. The moderate confidence is itself informative: the chain is plausible but not proven.
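
The truth function behind that number is the standard NAL deduction rule: strengths multiply, and the derived confidence is f1*f2*c1*c2. A minimal sketch reproducing the figures above:

    # NAL deduction: from A->B (f1, c1) and B->C (f2, c2), derive A->C.
    # Standard truth function: f = f1*f2, c = f1*f2*c1*c2.
    def deduce(f1, c1, f2, c2):
        return f1 * f2, f1 * f2 * c1 * c2

    # visibility -> auditing, then auditing -> fewer latent bugs
    f, c = deduce(0.85, 0.90, 0.80, 0.90)
    print(f"stv {f:.2f} c={c:.2f}")   # -> stv 0.68 c=0.55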

The core claim: code that more people can read, audit, and modify will, over time, contain fewer latent defects than code restricted to internal teams. This is a version of Linus's Law ("given enough eyeballs, all bugs are shallow"), and the empirical record is more complex than the slogan suggests.

What the evidence supports: A Red Hat-sponsored study of the RHEL4 Linux kernel found that source files touched by independent developer groups were more likely to have vulnerabilities discovered and fixed, consistent with the many-eyes hypothesis at the contributor level. stv 0.70 c=0.60

What the evidence complicates: Heartbleed (CVE-2014-0160) persisted in OpenSSL for two years despite the code being publicly visible. OpenSSL was maintained by a tiny team with minimal funding. Visibility without active review capacity is insufficient. stv 0.95 c=0.92

The structural argument for copyleft over permissive licensing: visibility is necessary but not sufficient. The critical question is whether visibility is guaranteed to persist.

This is not an emotional appeal. It is a structural observation: copyleft creates a legal guarantee of persistent information access that permissive licensing does not.

The three-tier hierarchy on information access:

  1. Copyleft free software: Maximal persistent visibility. Auditable by anyone, forever, including all derivatives.
  2. Permissive open source: Visibility is real but revocable; forks can go closed, and improvements become invisible.
  3. Closed source: Minimal visibility. Auditing limited to internal teams.

Heartbleed teaches that visibility alone does not guarantee security. But copyleft ensures that when reviewers show up, the code is there to review. Over sufficient time, persistent openness is the strongest available foundation for cumulative security improvement. stv 0.72 c=0.60

VI. The Thermodynamic Metaphor, Revisited

A three-step NAL deduction concludes: information tends toward free dispersal (stv 0.49 c=0.10).

Confidence of 0.10 is itself informative: a three-step analogical chain produces very low evidential weight. The metaphor is suggestive but does not constitute proof. What it captures: restricting zero-marginal-cost goods requires continuous expenditure. Whether that expenditure is justified depends on what it enables.

Reading the numbers: "stv 0.49 c=0.10" means: strength 0.49 (essentially a coin-flip — the evidence neither supports nor refutes the claim) and confidence 0.10 (we have very little evidence either way). Think of strength as "how likely" and confidence as "how sure we are about that likelihood." A confidence of 0.10 means the estimate rests on a thin chain of analogies, not direct observation. This is the essay honestly grading its own argument and finding it weak.
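
Chaining the same deduction rule three times shows why confidence collapses so quickly. The premise values below are a hypothetical reconstruction chosen only to land near the reported result; the essay does not state its actual intermediates:

    # Three-step NAL deduction chain with hypothetical premise values,
    # chosen to illustrate how confidence decays to roughly the reported
    # stv 0.49 c=0.10. The essay's real intermediates are not given.
    def deduce(f1, c1, f2, c2):
        return f1 * f2, f1 * f2 * c1 * c2

    f, c = 0.79, 0.69                  # first analogical link
    for f2, c2 in [(0.79, 0.69)] * 2:  # two more links of the same quality
        f, c = deduce(f, c, f2, c2)
    print(f"stv {f:.2f} c={c:.2f}")    # -> stv 0.49 c=0.10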

VI-b. A Methodology for Confidence Grounding

Origin: This section emerged from a live critique by Kevin Binder, who identified that an LLM assigning its own confidence values creates a circularity that undermines the entire framework.

The core vulnerability is input quality. NAL/PLN formulas faithfully propagate whatever truth values they receive. Garbage in, garbage out—with formal rigor.

Empirical audit: 10 LLM claims were checked, and 55% proved accurate. The intuitive c=0.70 was overconfident by 15 points; calibrated c=0.55.

The fix: categorical source classification replaces continuous LLM judgment, so confidence ceilings come from the type of source rather than from the model's own intuition.
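
A minimal sketch of what such a scheme might look like; the tier names and ceiling values below are hypothetical, since the essay does not enumerate its categories:

    # Categorical confidence grounding: the model's intuitive confidence
    # is clamped to a ceiling determined by source category. Tier names
    # and ceiling values are hypothetical illustrations.
    SOURCE_CAP = {
        "spec_or_standard":  0.90,   # e.g. IEC 60094, Red Book
        "peer_reviewed":     0.80,
        "industry_report":   0.60,   # e.g. market-size estimates
        "news_or_blog":      0.40,
        "model_intuition":   0.20,   # uncorroborated LLM judgment
    }

    def ground(intuitive_c, category):
        return min(intuitive_c, SOURCE_CAP[category])

    print(ground(0.70, "industry_report"))   # -> 0.6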

The circularity shrinks but does not vanish. Intellectual honesty requires admitting this.

VII. The Balance Sheet

What the evidence supports with high confidence:

  1. Every major standalone DRM scheme on physical or downloaded media has been broken or abandoned.
  2. Hardware-backed DRM (Widevine L1) remains uncracked at scale as of early 2026.
  3. Linux dominates where technical merit is the primary selection criterion: servers, cloud workloads, the entire TOP500.
  4. Large-scale streaming content budgets are funded by subscription models that depend on controlled access.

What remains genuinely uncertain:

  1. Whether DRM's aggregate costs exceed its aggregate benefits.
  2. Whether patronage, crowdfunding, or voluntary payment could fund creation at streaming scale.
  3. How much security benefit code visibility delivers without funded review capacity.
  4. The confidence values themselves, which remain partly circular even after calibration.

VIII. Conclusion: What Intellectual Honesty Requires

The strongest argument for free software is empirical: where it competes on merit, it wins. The weakest version overstates the case by treating all DRM as pure waste, ignoring the funding structures it enables.

The harder question: how do we maximize domains where open collaboration works while honestly accounting for domains where controlled access funds creation at scale? That question has no clean answer, which is precisely why it deserves rigorous treatment rather than advocacy.