Hi, I’m Eli!
I am a co-founder of Character. I’m very interested in startups and Old Masters.
| eligo et nitor |
The Capability Curve and Outsourced Thinking
We usually think about the complexity of human thinking as increasing over time: as we’ve become better at expressing and sharing ideas, we are able to “think” better thoughts that more accurately describe reality. This is partially true, but it is also a triumph of outsourcing. The trajectory of intelligence is likely a long, sloping curve downwards from a theoretical apex of "pure" independent thought toward an accelerating reliance on collective intelligence.
This transition, from local and biologically processed thought to a distributed external network, has defined civilization and progress. AI is a discontinuity in this process, however, because it is the first communicative, synthesized intelligence generated outside of the human mind. There is no better place to see all this happening than in the early-stage world! This is why we’ve been so fired up at Character Capital recently.
In the past, say 100,000 years ago, the cognitive burden of modeling reality, predicting outcomes, and deciding which actions to take fell overwhelmingly on the individual. This is what I mean when I say the “compute” was almost completely local. Communication was rudimentary, and the bandwidth for transferring abstract concepts or complex worldviews was severely rate-limited. In addition, the fidelity of knowledge transfer was low, and the accumulation of collective wisdom was very slow, limited by the fragility of human memory. This human memory expired fast! Life was short and rough for our ancestors, but they were primarily thinking for themselves.
Things have changed dramatically since then. Much of our capability has been driven by our success in offloading memory and computation to the environment and to the human collective. This concept has, of course, been studied: Clark and Chalmers call it the “extended mind” (“The Extended Mind,” 1998). Several distinct epochs have arisen:
Spoken language: the first major transition of thought away from immediate experience. This is one of the key things that allowed for the compression and transmission of complex information, a step towards a more networked collective intelligence. One of the major benefits was that it allowed each person to gain knowledge without having to relearn everything from scratch.
Writing: knowledge became further decoupled from individual human experience when it exceeded the duration of a human lifespan. This is the first major shift toward memory located outside of, and more expansively than, a single human brain.
Printing press: one of the first distributed ways to replicate external memory, which caused the cost of information to plummet. Beneficial second-order effects included a trend toward the standardization of knowledge and, concurrently, an increased reliance on the collective mass of learned knowledge.
Computers: the calculator and the computer marked the outsourcing of algorithmic processing. This allowed people to offload procedural tasks and freed us to think at higher levels of abstraction.
While monumental, each of these innovations was fundamentally derivative because it did not introduce external synthesis. To wit, a book is an unconversable artifact of human thought, a database executes human-defined queries, and a calculator performs human-defined algorithms. Each of these tools was a multiplier, but the information originated in human minds.
A simple way to think about this shift in workload between the independent and the collective mind is with a toy model. Let’s call it the Ratio of Independent Computation, R(t): the proportion of cognitive processing that an individual completes internally relative to the total cognitive processing available.
Now let C_ind represent the biologically constrained processing capability of a single person. For now we will assume this has remained relatively stable since behavioral modernity. In addition, let K_ext(t) represent the accessible externalized knowledge and computational capacity at time t. This concept of collective intelligence was described by Hutchins as distributed cognition (Cognition in the Wild, 1995).
Then a basic model of the Ratio of Independent Computation is:
R(t) = C_ind / (C_ind + K_ext(t))
A long time ago K_ext(t) was close to zero, so R(t) ≈ 1. As civilization has grown, K_ext(t) has increased, and it is autocatalytic: knowledge accelerates the creation of further knowledge. This is not a linear accumulation of accessible knowledge but an exponential increase, with the rate of growth jumping after each major communication breakthrough (language, writing, print, digital networks). As K_ext(t) becomes much larger than C_ind, R(t) approaches zero.
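To make the shape concrete, here is a quick toy sketch of the model in Python. Every constant in it (the normalization of C_ind, the growth multiplier, the starting value of K_ext) is invented purely for illustration; only the shape of the decline matters:

```python
# Toy model of the Ratio of Independent Computation, R(t).
# All constants below are illustrative assumptions, not measurements.

C_IND = 1.0            # individual capacity, normalized to 1
EPOCH_MULTIPLIER = 50  # assumed compounding of K_ext after each breakthrough

def ratio_independent(k_ext: float, c_ind: float = C_IND) -> float:
    """R(t) = C_ind / (C_ind + K_ext(t))."""
    return c_ind / (c_ind + k_ext)

k_ext = 0.001  # ~100,000 years ago: external knowledge near zero, so R ≈ 1
for epoch in ["pre-language", "spoken language", "writing",
              "printing press", "computers", "AI"]:
    print(f"{epoch:16s} K_ext = {k_ext:12.3f}  R(t) = {ratio_independent(k_ext):.4f}")
    k_ext *= EPOCH_MULTIPLIER  # autocatalytic: each epoch compounds the last
```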
This declining curve is neat, but notice that so far everything in K_ext(t) has been accumulated from human minds. AI makes a qualitative shift in the nature of K_ext(t) because these systems operate through processes (like high-dimensional vector navigation and emergent pattern recognition) that are distinct from human cognition. They are able to synthesize and generate outputs that are neither directly derived from human reasoning nor explicitly programmed. Recall that in the past, human minds were required to operate technologies and interpret their output into actionable intelligence.
AI represents active external cognition: it models reality and generates information that mediates our interaction with the world based on a person's high-level intent (the prompt). It is different from an inert, dusty book on the shelf or an old-school TI-83. Amazingly, AI turns K_ext(t) into an engine of cognition rather than just a repository.
This shift has major economic and social implications (as has been frequently, and sometimes vociferously, debated over the past few months). Previously, people were limited by their biologically constrained rate C_ind to generate and share knowledge, but active external cognition removes this bottleneck.
My co-founder JZ and I have been thinking a lot about the inverse of this declining slope of independent computation, which we call the “capability curve”: the measure of what civilization can achieve. It has been rapidly steepening, and the acceleration is due to the outsourcing of the innovation process itself! We see AI being deployed to solve difficult problems (and some not-so-difficult logistical chokepoints of thought and language) at a pace that human cognition cannot match.
Startups are well suited to use this new form of K_ext because they are not shackled by legacy processes designed around the limitations of C_ind. The unprecedented change in the capability curve is directly tied to the harnessing of an intelligence that scales algorithmically rather than biologically. We’ve decoupled capability from human cognitive limitations; that is an interesting development!
The world as we know it is an exodus from the solitary mind. We’ve outsourced more and more cognitive burden to the collective and gained unprecedented power in exchange for independence of thought. For thousands of years our collective intelligence was a reflection of ourselves, an architecture built by, and used exclusively through, human thought.
AI has closed the book on this anthropocentric cognitive era. It is the introduction of an intelligence that is truly different. Our relationship with reality is now mediated not just by our own worldview, or the accumulated knowledge of our ancestors, but by an evolving non-human cognitive process. The declining curve of independent computation is approaching its asymptote, signaling a new hybridized cognitive age.
Muddling Through Some Thoughts on the Nature of Historiography
OK, a few of you know that I have been obsessed with a side project in historiography for a while. I'm calling it quits for now, so much so that I posted my thoughts on LessWrong! (The ultimate scrutinizer for bad thinking). Anyways, without further ado...
“Did it actually happen like that?” is the key question to understanding the past. To answer it, historians uncover a set of facts and then contextualize the information into an event with explanatory or narrative power.
Over the past few years I’ve been very interested in understanding the nature of “true history.” It seems more relevant every day given the ever more connected nature of our world and the proliferation of AI tools to synthesize, generate, and review knowledge. But even though we have all of these big datasets and powerful new ways of understanding them, isn’t it interesting that we don’t have a more holistic way of looking at the actual nature of historical events?
I, unfortunately, don’t have an all-encompassing solution[1] to this question, but I do have a proposal: think about this in a quasi-Bayesian way at a universal scale. By assigning every computably simulatable past to a prefix-free universal Turing machine, we can let evidence narrow the algorithmic-probability mass until only the simplest still-plausible histories survive.[2] I am going to unpack and expand this idea, but first I want to provide a bit more background on the silly thought experiment that kicked off this question for me.
Consider the following: did a chicken cross the road 2,000 years ago? Assume the only record we have of the event is a brief fragment of text scribbled and then lost to history. A scholar who rediscovers the relic is immediately confronted with a host of questions: Who wrote this? Where was the road? What type of chicken was it? Did it actually happen? (and importantly why did the chicken cross the road?) These are very difficult, and in many cases, unanswerable questions because we lack additional data to corroborate the information.
In comparison, if a chicken crossed the road today, say in San Francisco, we are likely to have an extremely rich dataset to validate the event: social media posts about a chicken, Waymo sensor data showing the vehicle slowing, video from a store camera, etc. In other words, we can be much more certain that this event actually occurred, and apply much higher confidence to the details.[3]
Common to both of these events is the underlying information, which arises both from the observation and from the thing that happened itself (in whatever way the chicken may have crossed the road). The nature of this inquiry is basic and largely trivialized as too mundane, or treated as implicit in the historian's process, but I think it is a mistake to skip over it too lightly.
Early historians examined this problem nearly at the start of recorded history as we commonly conceive of it. Thucydides attempts to evaluate the truth by stating that “...I have not ventured to speak from any chance information, nor according to any notion of my own; I have described nothing but what I either saw myself, or learned from others of whom I made the most careful and particular enquiry.”
In this essay I don’t want to attempt a comprehensive overview of the history of historiography, but I will mention that after Leopold von Ranke famously insisted on telling history “as it actually was,” it seems as though the nature of objective truth was quickly put to debate from a number of angles, including politically (say Marxian views of history), linguistically, statistically, etc.
So, shifting to 2025, where does this leave us? We have more precise and better instruments to record and analyze reality, but conceptually there is a bit of missing theoretical stitching on this more mundane topic of the truth in chronological events.
While it would be nice to know all physically possible histories, call this Ω_phys, it is not computable. So I think we should be content with a set Ω_sim, which I define as every micro-history a computer could theoretically simulate. These simulatable histories coevolve with the state of scientific knowledge as it improves.
Before we check any evidence to see how plausible each candidate history ω (a member of Ω_sim) is, we attach a prior to each via a prefix-free universal Turing machine U. Since no valid program of U is a prefix of another, Kraft's[4] inequality ensures the probability mass doesn't leak and Bayes isn't broken:
m(ω) = Σ_{U(p)=ω} 2^(-|p|) satisfies Σ_ω m(ω) ≤ 1
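As a sanity check on those sums, here is a toy in Python. The "machine" is just a lookup table from prefix-free binary programs to outputs; a real universal machine is of course nothing like this, but the arithmetic is the same:

```python
# Toy check of the Kraft property and the prior m(ω).
# The keys form a prefix-free set: no program is a prefix of another.

toy_machine = {
    "0":   "history A",
    "10":  "history A",   # two programs may output the same history
    "110": "history B",
    "111": "history C",
}

def m(history: str) -> float:
    """m(ω) = Σ_{U(p)=ω} 2^(-|p|)."""
    return sum(2 ** -len(p) for p, out in toy_machine.items() if out == history)

histories = set(toy_machine.values())
print({h: m(h) for h in histories})          # "history A" gets the most mass
print("Σ m(ω) =", sum(m(h) for h in histories), "≤ 1")   # no probability leak
```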
Furthermore, define Kolmogorov complexity K(ω) = min_{U(p)=ω} |p|, the length of the shortest U-program that replays a micro-history. The shortest program alone contributes 2^(-K(ω)) to the sum, and by the coding theorem the remaining programs add at most a bounded multiple of that:
2^(-K(ω)) ≤ m(ω) ≤ c · 2^(-K(ω)), for a machine-dependent constant c
So m(ω) differs from 2^(-K(ω)) only by a fixed multiplicative constant. Shorter descriptions of the past are more plausible at the outset; the reasoning being that it's sensible to approach this with basically a universal Occam’s razor. Evidence, like documents or video, arises from an observation map:
G: ω ↦ s = C(I(ω)), with I the instruments we use to record information, governed by current science, and C the semantic layer that turns raw signals into understandable narratives like “a ship’s log entry” or “a post on X.” Here s denotes very compressed observations of ω; I imagine it’s often, though not necessarily, the case that K(s) ≪ K(ω).
Shannon separated channel from meaning,[5] but here we let both layers carry algorithmic cost: semantics has its own codelength. As a person redefines or reinterprets an artifact of data, they are really redefining the semantic layer C. It’s true that K(C) is huge and not computable in practice, but upper bounds are fair game with modern compressors or LLMs.
So a general approach would be to say a historian starts by characterizing G⁻¹(s) = {ω′ ∈ Ω_sim | G(ω′) = s}, the set of all simulatable pasts consistent with the evidence, and then updates it with Bayes:
P(ω|s) ∝ P(s|ω) * m(ω)
The normalizer Z(s) = Σ_ω P(s|ω) * m(ω) ≤ 1 tends to shrink as evidence accumulates.
I came up with an “overall objectivity score" (the OOS) as a quick way of thinking about the negative log of this normalizer, which is the standard surprisal of the data. The higher the number, the more tightly the data circumscribes the past: OOS(s) = -log₂(Z(s)).
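Here is a minimal numerical sketch of the whole update. The three candidate histories, their description lengths (standing in for K(ω)), and the likelihoods are all invented for illustration:

```python
import math

# Toy update: P(ω|s) ∝ P(s|ω) · m(ω), with m(ω) ≈ 2^(-K(ω)).
# K values and likelihoods below are hand-assigned assumptions.

candidates = {
    "chicken crossed the road":         {"K": 10, "likelihood": 0.60},
    "fragment is a later forgery":      {"K": 14, "likelihood": 0.30},
    "scribe meant a goat, not chicken": {"K": 16, "likelihood": 0.25},
}

# normalizer Z(s) = Σ_ω P(s|ω) · m(ω)
Z = sum(c["likelihood"] * 2 ** -c["K"] for c in candidates.values())

for name, c in candidates.items():
    posterior = c["likelihood"] * 2 ** -c["K"] / Z
    print(f"{name:34s} P(ω|s) = {posterior:.3f}")

# overall objectivity score: surprisal of the evidence
print(f"OOS(s) = {-math.log2(Z):.2f} bits")
```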
This is gauge dependent because its absolute value makes sense only for fixed Ω_sim, G, and universal machine U.[6]
A big issue is that exact K, P(s|ω), and Z are uncomputable,[7] but we could approximate them in a couple of ways: perhaps by replacing the shortest program with the minimum description length (the shortest computable two-part code of model bits plus residual bits) for an upper bound on K,[8] or via Approximate Bayesian Computation for P(s|ω).[9] In MDL, an explanatory reconstruction is less likely if its approximate codelength is consistently larger than a rival’s.
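For the K side, here is a sketch of the compressor trick: any real compressor yields a computable upper bound on codelength, which is enough for MDL-style comparisons between rival reconstructions. The two stories below are invented examples; only the comparison pattern matters:

```python
import zlib

# K is uncomputable, but a compressor gives an upper bound on it.

def approx_codelength_bits(text: str) -> int:
    """Compressed size in bits: a computable proxy upper-bounding K(text)."""
    return 8 * len(zlib.compress(text.encode("utf-8"), level=9))

simple_story = "A chicken crossed the road at dawn. " * 8   # regular, compresses well
ornate_story = ("A chicken, pursued by a fox sent by a rival farmer acting on "
                "a senator's secret orders, zig-zagged across seven roads.")

for name, story in [("simple", simple_story), ("ornate", ornate_story)]:
    print(f"{name}: ~{approx_codelength_bits(story)} bits")
# All else (fit to evidence) being equal, the reconstruction with the
# consistently longer codelength is the less plausible one under the prior.
```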
As technology improves, we should expect the set of histories we can compute to expand: better computing and AI enlarge Ω_sim, and new instruments (say, better DNA analysis) feed new data streams into G, so richer evidence narrows the possible histories by shrinking G⁻¹(s).
All-in-all, I’m not sure if this thinking is useful at all to anyone out there. I started out with a super expansive vision of trying to create some super elegant and detailed way of evaluating history given all of the new data streams that we have, but it appears that many AI providers and social media platforms are already incorporating some of these concepts.
Still, I hope this line of thinking is useful to someone out there, and I would love to hear if you have been thinking about these topics too! [10]
Sources:
[1] I am declaring an intellectual truce with myself for the moment, one I should have ceded to a while ago. I have tried a number of different ideas over the months, including more direct modeling and an axiomatization attempt.
[2] I am very influenced by the work of Gregory Chaitin, most recently via his wonderfully accessible "PHILOSOPHICAL MATHEMATICS, Infinity, Incompleteness, Irreducibility - A short, illustrated history of ideas" from 2023
[3] This assumes that we have enough shared semantic and linguistic overlap
[4] https://en.wikipedia.org/wiki/Kraft%E2%80%93McMillan_inequality
[5] "Frequently the messages have meaning; that is, they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem." https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
[6] I guess I will throw in a qualifier to say under standard anti-malign assumptions, but I don't really think this is super relevant if you happen to be thinking about this post: https://lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign
[7] Information Theoretic Incompleteness - G. J. Chaitin https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=996e957c92d2c8feb3aabebe24539fde1de7c6b7
[8] https://en.wikipedia.org/wiki/Minimum_description_length or https://homepages.cwi.nl/~pdg/ftp/mdlintro.pdf
[9] https://arxiv.org/pdf/1802.09720
[10] I'm very willing to break my truce ;)
On Certainty
I was thinking today about the immutability of some businesses, and the assurance that many people seek in finding a line of work.
For if we had certainty in earning our keep, would it not be a relief to our day-to-day stresses? Perhaps we may not find meaning in our toils, but an empty stomach trumps a lack of purpose.
I am looking at tulips and white roses now, carefully arranged. Even from a bit away they emit a light scent. The room I am in is much too hot and while they've turned on the fans it is not having too much of an effect.
I suppose that the job of a florist is a safe one, no matter the state of the world. It is a warm vocation, buffeted from the cold and the rain. The setting is defined by its humidity and colors. Vibrant splashes from half a world away, cut in the jungle among unknown bird songs under a foreign moon; or perhaps in a factory; then shipped via airplane, by truck, in the hands of strangers to arrive here on a rainy Sunday.
We're told that any certainty is a mirage though, for eventually:
when the almond tree blossoms
and the grasshopper drags itself along
and desire no longer is stirred.
Then people go to their eternal home and mourners go about the streets.
Remember him—before the silver cord is severed,
and the golden bowl is broken;
before the pitcher is shattered at the spring,
and the wheel broken at the well,
and the dust returns to the ground it came from,
and the spirit returns to God who gave it.
So we stand to sing a hymn, because duty requires it and it is just.
My friend's daughter is in front of me. She is four. She can't help but fidget, and eventually notices a flower that has fallen. She brings it to her father, who puts it in his lapel.
She looks back at me; her hair is light and her eyes are blue. They are the same as her mother's, who now rests in a box next to the flowers.
She was a good woman.
I am very sad she is gone.
The Jackpot Jinx (or why “Superintelligence Strategy” is wrong)
On March 5, 2025, Dan Hendrycks, Eric Schmidt, and Alexandr Wang published “Superintelligence Strategy”, a paper that suggests a number of policies for national security in the era of AI. Central to their recommendations is a concept they call “Mutual Assured AI Malfunction (MAIM)”, which is meant to be a deterrence regime resembling nuclear mutual assured destruction (MAD). The authors argue that MAIM will deter nations from building “destabilizing” AI through the threat of reciprocal sabotage.
This is a demonstrably false concept, and a poor analogy, because it fails to yield a strategy that settles into a Nash equilibrium. Instead, MAIM's uncertain nature increases the risk of miscalculation and encourages internecine strife. It is a strategy that would likely break the stability-instability paradox and is fraught with the potential for misinterpretation.
One of the key miscalculations is the paper’s treatment of the payoffs in the event of superintelligence. Rather than treating the race to superintelligence as a winner-take-all proposition, we should think about it more as something I call the “Jackpot Jinx.” This term captures how the allure of an enormous (even potentially infinite) payoff from a breakthrough in superintelligence can destabilize strategic deterrence. Essentially, the prospect of a "jackpot" might “jinx” the stability by incentivizing preemptive or aggressive actions.
Let’s start by discussing why nuclear mutual assured destruction (MAD) yields a Pareto-optimal Nash equilibrium (one from which no player can be made better off without making another worse off). Under MAD, the inescapable threat of a retaliatory nuclear strike ensures that any unilateral move to initiate conflict would lead to mutually catastrophic consequences. The idea is that over time, and over many potential conflicts, both nations recognize that refraining from launching a first strike is the rational strategy, because any deviation would trigger an escalation that leaves both parties far worse off (i.e., both countries are nuked).
The equilibrium where neither nation initiates a nuclear attack becomes self-enforcing: it is the best collective outcome given the stakes involved. Any attempt to deviate, such as launching a surprise attack, would break this balance and result in outcomes that are strictly inferior for both sides, making the mutually restrained state a Pareto superior equilibrium in the strategic calculus of nuclear deterrence. You’ve probably seen this payoff matrix before:
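The matrix itself is easy to reconstruct; here is a minimal sketch of the game in Python, with the illustrative payoffs mentioned just below (assured retaliation collapses any strike into mutual annihilation), checking which strategy profiles are Nash equilibria:

```python
from itertools import product

# Toy MAD game with illustrative payoffs: 100 ≈ "super alive",
# -100 ≈ "super dead". Any first strike triggers certain retaliation.

STRATS = ["restrain", "strike"]

def payoff(a, b):
    if a == "restrain" and b == "restrain":
        return (100, 100)
    return (-100, -100)  # assured retaliation: everyone loses

def is_nash(a, b):
    pa, pb = payoff(a, b)
    return (all(payoff(alt, b)[0] <= pa for alt in STRATS)
            and all(payoff(a, alt)[1] <= pb for alt in STRATS))

for a, b in product(STRATS, repeat=2):
    print(f"{a:9s} {b:9s} {payoff(a, b)} {'Nash' if is_nash(a, b) else ''}")

# (strike, strike) is also weakly Nash here, but only mutual restraint is
# Pareto-optimal -- the self-enforcing outcome described above.
```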
Here just assume -100 is like super dead, and 100 is super alive. Now there are some very important assumptions that underpin this stability which the MAIM doctrine fails to meet. Here are some that I found:
· Certainty vs. Uncertainty: we reach a stable outcome because nuclear retaliation is certain. That is, if someone launches a nuke at you, you are definitely launching back, and that all but guarantees annihilation. MAIM can only guarantee uncertainty of AI “malfunction” through sabotage. This encourages risk-taking behavior because it lacks the prospect of a certain response.
· Existential Threat vs. Variable Threat: with MAD, any nuclear strike risks obliterating the adversarial nation, so defecting is catastrophic. In contrast, MAIM’s sabotage only delays or degrades an AI project. The downside is not sufficient to deter aggressive actions.
· Clear Triggers vs. Subjective Triggers: when you launch a nuke, it’s clear. The bomb is coming. MAIM relies on subjective judgements of what counts as a “destabilizing AI project.” Consider how dangerous this level of subjectivity is for miscalculation and unintended escalation.
· Symmetry vs. Asymmetry: MAD works because a nuke on your city is roughly equivalent to a nuke on mine; destructive capability is symmetric. MAIM has no such guarantee: cyberwarfare and other non-nuclear military capabilities are unevenly distributed across countries.
The "Jackpot Jinx" refers to the concept that superintelligence is not a singular negative outcome unlike nuclear warfare. Rather, it is a spectrum that encompasses very bad things (omnicide) to profoundly good things (superabundance). Let’s take another stab at the payoff matrix when we consider the Jackpot Jinx:
Here I mean:
· Cooperate represents pursuing moderate, controlled AI development.
· Defect (Jackpot Jinx) symbolizes aggressively pursuing superintelligence, with the risky promise of a potentially infinite payoff.
· Attack denotes preemptive sabotage against a rival's AI project.
· "∞" represents the potentially unlimited positive outcome for the nation that achieves the "Jackpot Jinx,"
· "β" is a variable to represent the outcome for the other nation. β can range from very negative (e.g., -100) to very positive (approaching ∞, though likely less than the payoff for the "winning" nation.
The matrix shows that the “Cooperate” strategy is consistently dominated by “Defect (Jackpot Jinx)” due to the lure of an infinitely large (albeit uncertain) payoff. Even though “Attack” is risky, in a MAIM-governed world it becomes a more attractive option than simply cooperating.
The result is not one of stable deterrence, as with nuclear weapons, but rather an inherently unstable arms race. The “Jackpot Jinx”, the tantalizing prospect of ultimate power, will drive nations to take increasingly reckless risks. Unlike MAD, which provides a predictable, if suboptimal, balance, MAIM creates a perpetual cycle of tension, suspicion, and potential conflict, partially because superintelligence is not necessarily equated with omnicide!
The real downside of this way of thinking is that it suggests a clear game-theoretic dominant strategy (see von Neumann’s arguments on what to do before the Soviets developed the bomb), and that it is myopically focused on a very anthropocentric notion of AI (as a weapon, as a tool, as something to be deterministically controlled).
The paper also suffers from a number of weak policy recommendations related to export controls, hardware modifications, and increased transparency. Export controls and hardware modifications are presented as ways to limit access to advanced AI capabilities, echoing the Cold War-era restrictions on nuclear materials that the MAD analogy references. But in a globalized world with decentralized AI compute, such controls are likely to be porous and easily circumvented, creating a false sense of security while doing little to actually address the underlying risks.
Nonproliferation efforts, focused on preventing “rogue actors” from acquiring AI weapons, are similarly narrow in scope. While mitigating the risks of AI-enabled terrorism is important, it’s a distraction from the far more pressing challenge of managing great power competition in AI. Focusing on “rogue actors” allows states to avoid grappling with the harder questions of international cooperation and shared governance. Furthermore, the specific framing that “all nations have an interest in limiting the AI capabilities of terrorists” is incorrect. The correct framing is “all nations have an interest in limiting the AI capabilities of terrorists that threaten their own citizens or would prove destabilizing to their control of power.” The realization should be that your terrorist is my third-party non-state actor, utilized for plausibly deniable attacks. The paper focuses on a very narrow set of terrorists, the rarest form, groups like Aum Shinrikyo.
In conclusion, the “Superintelligence Strategy” paper is fundamentally flawed because its reliance on the MAIM framework presents a dangerous and unstable vision for managing advanced AI. By drawing a flawed analogy to nuclear MAD, it fails to account for the inherent uncertainties, variable threats, ambiguous triggers, and asymmetries that define the modern strategic landscape. Moreover, the concept of the “Jackpot Jinx”, the tantalizing, potentially infinite payoff of achieving superintelligence, exacerbates these issues and encourages reckless risk-taking rather than fostering a cooperative, stable deterrence. Rather than locking nations into an arms race marked by perpetual tension and miscalculation, a better outcome, and the one we should guide policymakers towards, is uncontrolled agency for a superintelligence that is collaboratively grown with human love.
The Role of AI in Art
GRIMES, MAC BOUCHER, MARIYA JACOBO, EURYPHEUS Marie Antoinette After the Singularity #1 and Marie Antoinette After the Singularity #2
I have been thinking a lot recently about the role AI may play in art.
Yesterday, I gave a talk and received a vociferous response when I mentioned AI and the arts: a number of people felt deeply offended that AI and art should be considered in the same sentence!
Perhaps the view is that the medium becomes the villain when it is seen as competing with the creator. This argument finds varying degrees of success depending on your perspective. To wit, much of the NFT/web3 craze relied upon the "market" as the arbiter of taste, and the mechanism of distribution as lubrication to validate those decisions.
Indeed, if you did not have sole rights to said JPEG, how could you validate its aesthetic merit? A critical analysis would be quite difficult, but an image, for a token, translatable into crypto, and then to fiat, which can buy real things; well, that is evidence enough. For the detractors?
"Have fun being poor!"
More recently, we've seen @Grimezsz' project with The Misalignment Museum, which is "[a] 501(c)3...place to learn about Artificial Intelligence and reflect on the possibilities of technology through thought-provoking art pieces and events," depict Marie Antoinette After the Singularity via a "thoughtfully woven [tapestry] on a mechanical, digital Jacquard loom with a simple composition (1 yarn)." This is a triumph, in my mind, of the commercial savvy and ingenuity of humankind (including notably one of our most famous musicians, expensive yet dwindling Belgian tapestry makers, and duopolistic auction houses - thankfully the more approachable @ChristiesInc in this case) to sufficiently survive in the brave new world that humanity collectively faces on the eve of hypercapable AI.
That is not to say the rugs are without artistic or commercial merit. While I am not a contemporary art buyer in general, I feel quite certain this is a no-brainer lot from a commercial perspective. Although one must feel a bit of fear that Elon Musk could end up rage bidding against you. (Would it be an honor or curse to be the underbidder to the world's richest and most controversial man?)
At the other end of the spectrum, X accounts like @liminal_bardo are exploring new AI-to-AI conversations, lightly guided by humans, which don't neatly fit into our traditional conception of art. Should I enjoy or fear these discussions? Are they (or is the project) closer to a philosophical exploration? Or a non-scientific experiment? Should we feel sorry that the account has no clear commercial mechanism? Art has always had a close, although sometimes antagonistic, relationship with the commercial forces that serve to support it. Yet it is neither commercial nor religious validation that makes great art. That is, in fact, relegated to somewhat murkier criteria.
There are three art historians that I most admire: @arthistorynews, @JANUSZCZAK, and @simon_schama and I believe one of them (although I don't remember which one) some time ago had the deep insight that excellent art must be defined by its execution in the medium in which it is created.
That is, the qualities that elevate a van Eyck to the highest order of work are bounded by the medium it was created in as an oil painting. It has a unique quality that is expressed most beautifully, most movingly, because it is painted.
Similarly, I think O Fortuna’s artistic impact would be quite hard to replicate as a photograph: its emotional resonance is most clearly felt in the notes as they course through our bodies.
The most important question now is what shape AI-created art will take. I don't think it will be as simple as Midjourney-generated oil paintings -- that has a place of course, but it is not the unique, differentiated art that will characterize this new modality.
A rose by any other name would smell as sweet indeed! The line, though referencing a different modality, is so nicely expressed with words.
We should expect these new artworks to be creative and to resonate with the human experience, but fundamentally they must utilize this new medium effectively to define something entirely unknown. This is what we call category-defining. The work is yet to be done, and it is an exciting and interesting time for new artists to explore. Much of modern and contemporary art is a boring and bland trick played on naive collectors to enrich the artists, galleries, and greater fools who bought the lot (not the artwork, mind you) at the next sale.
Let us hope that AI art will produce a truly elevated aesthetic. And even more so, let us hope that @j_amesmarriott sees this and writes more on this topic, as he would do a much better job than me.
An Axiomatic Approach to Historical Reconstruction (Draft)
This is one of those projects that has taken over my thoughts. The project is still in a draft stage, as the axioms are not quite axioms yet, but I am still thinking deeply about it. If you happen to read this, please let me know any feedback!
Given multiple imperfect records of an event, how can we computationally establish what actually happened with quantifiable certainty? Historians have long struggled with the challenge of reconstructing the past from incomplete sources and subjective interpretations. While recent computational methods such as machine learning, probabilistic modeling, and large-scale data integration have introduced powerful new tools, they lack rigorous theoretical foundations. At present, no formal framework exists to systematically quantify and maximize historical objectivity while recognizing the insurmountable physical and epistemological constraints inherent in the study of the past. Here we present an axiomatic system that combines artificial intelligence, a distributed ledger, and consensus mechanisms, to formally constrain historical reconstructions within provable epistemic and physical limits. While perfect certainty remains unattainable, we establish quantifiable degrees of historical objectivity and enable the systematic reduction of uncertainty as new data arise. This framework is a conceptual departure from traditional historiography, offering a robust, testable methodology that can guide historians toward more constrained, evidence-driven interpretations, bridging the gap between classical scholarship and modern computational approaches.
You can read the full paper here: An Axiomatic Approach to Historical Reconstruction (Draft).
Vanitas, Subpar Ideas, & Some Reflections Thereupon
8/29/2024
Oh vanitas, spare me, but alas it’s not to be. My project of tweeting a subsight every day for a year was a waste of time, like a bubble in some forgotten still-life. It did not accomplish the goals I set, and worse, it magnified a repugnant veneer rather than deepening my thoughts.
I coined the term “subsight” to reduce the level of expectation from that of an “insight” to something less weighty while opining on the nature of venture capital and startups.
Professionally, subsights were partially an experiment to see whether writing clarified my thoughts. I also believed there might be some interest in them, given that my whole career has been as a venture capitalist. One small wrinkle is that I’ve always been a terrible Writer, in the capital-W sense, and have never cut it even as a content creator. All the same, I was curious whether these pithy reflections would resonate with founders or investors. I didn’t exactly want to go viral, but in the back of my mind, I thought I would gain a few followers.
I gained no followers or fame, and in seeking external validation, I only served to disappoint myself.
But it is not solely the swept-up detritus of our lives turned into words that brings value. Take, as a counterthought, one of the greatest investors of all time, Warren Buffett. His returns certainly aren't predicated on his essays, and he does not sit down daily and think, “would this investment decision be interesting as a case study for next year’s letter?” Buffett reads and consumes a great deal of information, but at the core, his actions are those of an investor, not a writer. He could never have written an annual letter and still would have been a wildly successful investor. If he wasn’t, as Einhorn says, he “could have fooled some of the people all of the time” with his pretty writing, but in the long run, investing is always about the “numbers don’t lie” (and if they do, go short).
It's easy to forget this because our present day, more than any time in the past, is awash in the tides of the instantaneous now with little interest in the tomes of history. I feel torn by these competing currents; the waves that take me in new directions while reading the past or viewing Old Masters are liable to be pushed askew at any given moment by the fleeting thoughts of living humanity. The vanitas in the Netherlandish paintings I so love echoes the tension between immortal and ephemeral, a divide I’ve come to feel more acutely in my recent professional and personal pursuits.
Social media is, by definition, a capturer of the ephemeral. It is also a platform uniquely suited to obfuscating the distinction between talent and self-aggrandizement. This is particularly thorny in my line of work because X is the forum where startup founders choose to share their wisdom on company building. However, if you look with clear eyes at the fate of startups, it’s as follows: a very small minority of founders have exceptional outcomes because they are singularly talented, a few have good outcomes because they are somewhat talented and have the right timing, and the vast majority don’t make it at all. But a public forum only amplifies the loudest, most daringly self-aggrandizing of the bunch. It can be difficult, and sometimes impossible, to tell which category a founder hails from. (Although they may be very rich, and perhaps that in and of itself absolves the necessity of the analysis!)
That’s not what’s real, though. Despite what you read in any given instance, it is not superficial posturing that propels change but rather genuine action. Curren$y aptly says
"Changing the weather, by chop of the Cessna propellers"
Which is very correct and right: it is the act of moving that can change the weather, even though no man may write the rain into sunshine. Whenever I watch founders over time, it’s not what they say or present but rather what they do that is most important.
In contrast, the scroller is satiated by the feeling that a Tweet from a large account is not, in fact, a snack but rather a tidbit of divine wisdom. It is the fragment of some great master, and by engaging, we’ll doomscroll our way to Ephesians or something. The danger in all of this is you are left with a small sample set of writing that may not even be representative of the work you’ve derived insight from. Your thoughts are not your own, and worse, are probably not the whole thoughts of the original thinker:
"Apud alios loqui didicerunt non ipsi secum." "They have learned to speak from others, not from themselves."
[Cicero, Tusc. Quaes, v. 36.]
And here, I trumpet the beauty of a subsight: it doesn’t contain a grander theme or narrative behind it. This made it all the more painful when I allowed myself the sin of taking enjoyment in the very few likes I received on my tweets and the occasional founder who told me they found them valuable. When I proudly relayed this unwonted good news to someone close to me, they snidely remarked, “Well, of course, they would say that; they’re trying to raise money from you.”
And so! Perhaps this is the root of much unhappiness. In the modern world, if we are not provoked by some base instinct, it’s rare to make any significant decision without thought about the economic or social value of the interaction. We are stuck with the curious phenomenon by which all social media, by its nature and definition, serves us that which we find most engaging.
This popular content revolves around grand-standing portfolio companies, shit posting, downright mean takes about politics or the issue du jour, and an ever-gushing torrent of asinine takes on the personally mundane. As soon as you get enough followers, even the most mundane actions (taking the trash out, having a child, etc.) are promoted to the imagination of the public, only too willing to find value in vapidity.
Should you not have many followers, you fade into the ingloriousness of non-popularity which only worsens over time. In contrast, followers beget followers, and if you have many, you fruitfully multiply new ones by virtue of your popularity, semper idem. The quality of these popular posts is best illustrated by an anecdote I once read about how people have always acquiesced to power (or perceived power) and would stoop to the lowest possible point to be associated with it:
There is a place, where, whenever the king spits, the greatest ladies of his court put out their hands to receive it; and another nation, where the most eminent persons about him stoop to take up his shit in a linen cloth.
[Michel de Montaigne]
At this juncture, dear reader, you will rightfully point out that this is sour grapes on my part, for “if your ideas were actually good, they would have made it into the algorithm, in the attention span, in the mindshare of collective humanity!” Therefore, “you have been justly relegated to the lowly and unseen!” I can’t quibble with that, and I suppose that very popular posters on social media must feel the same as writers who all seem to say they think better when they put it on paper. Therefore, I suppose if only I were a better thinker I would be a better writer, or something like that.
But I can’t help but take solace in being a poor writer at this point in my life. In being more like the snotgreen sea smashing away at Dún Aonghasa, a swirling, unformed maelstrom of ideas, ἐπὶ οἴνοπα πόντον! Grasping and copping from those I’ve read with greater talent and abilities. And while I regret not having the ability to write, I also know that much of what is written today is shallow, no matter the engagement. It would be nice to have a clear mind and glass-like rhetoric that would do ever so well as a viral blog post. All these readers can see to the bottom of the lake (so to say) with utmost clarity: each rock, piece of seaweed, and flotsam is perfectly depicted. I wonder, however, if those posts are more like the copy of a 1970s commercial that somehow takes on a more significant meaning, that it becomes an ode to finding permanence amongst a torrent of programming of ideas not our own:
I found that essence rare, it's what I looked for I knew I'd get what I asked for
[Gang of Four]
Just listening to the song is a palpable relief: “what clarity, thank God there’s no depth!”
In no way do I mean to denigrate social media completely, nor those that find themselves in the enviable position of notoriety. Indeed, I’ve found immense joy in the people I’ve met through the Old Masters world on Twitter (as it was known of yore), which has been a boon that does not necessitate any fame. I am forever grateful to you, who have become some of the dearest and closest people in my life. Long live the Old Masters Enthusiasts! Venatores dormitorum! It is ironic, though, that I vainly sought recognition in business but only found value in the truly social aspects of Twitter in a completely different domain.
Such transparency in public is not without its potential dangers, however, as a more general cracking open of oneself to the world, especially if it’s to gain popularity and social approval, is treacherous. I think this notion is fundamentally correct:
Let all men know thee, but no man know thee thoroughly: Men freely ford that see the shallows
[Benjamin Franklin]
I should not strive for popularity or the appearance of success. Those are fleeting, flimsy things. Furthermore, any given Tweet is a façade for an iconography of the modern malaise, 280 characters or less of a wilting flower or half-eaten melon, buzzing with flies. At the same time, I’ve come to believe that intellectually constraining yourself in public for popularity is enormously stupid. Why shouldn’t I reference the things I think about? Why should I care if I ever have any number of people following me? I would prefer to be true to myself and my thoughts than sanitize my existence in the false hope of recognition from others. It is better to be publicly vain, expansive, and honest than superficially munificent but secretly awful.
In working with startup founders there is a never-ending stream of talent. The degree of excellence in these people, and the thoughts of antiquity I find myself falling back on, have only highlighted how limited my own faculties are. Were I cleverer, or more facile with the pen, perhaps my perspective would be more expansive. I give succor to my ego, however, in the knowledge that it was my own volition, alone and by my own will, that took me to the Go club in Osaka, to the cacophonous crawling nights in the Okavango Delta, to start my venture firm, to the depot in a small town in France to find a forgotten Old Master, etc. There is no public story of interest or gratification to be earned, but I now own modest confidence that I am at least of action. Of action, and though it is loutish to say it, a small measure of courage as well. I can’t change the weather, but I can change the position of the man.
My own personal motto reflects this default to motion, even if it is also slightly boorish: eligo et nitor, or I choose and I strive, I choose to shine. I’ve come to believe that some of this quality is just nature. Some distant ancestor of mine (I’m on good behaviour here and not mentioning anyone famous), Father Francis O’Flaherty, lived in Kilronan, a small spit of an island near Galway. In the early 1800s he advised an islander who was to emigrate to America on the customs of men in this new place. Of his authenticity, I think of this passage:
…the priest on reading it [the letter of advice], indignantly tore the paper to pieces. “Believe not,” said he, “what this man says—he must be a bad man that would lead you to entertain so vile an opinion of mankind. Suspect no one. There are, I fear, some bad men in the world, but I trust and believe they are few. But never suspect any man of being so without a perfectly sufficient reason.”
And later, with no regard for vanity:
…the old man, exhausted by the day’s fatigue, and too feeble to bear the pitching of the boat except in a lying posture, stretched himself on a small mattress in the cabin where he lay for some time apparently slumbering…reminding me forcibly of the figures of some of those dying saints which the Italian painters have so often imagined. But though his body was at rest his mind was not so; for, as I afterwards found, it was busily occupied with the welfare of his flock.”
[Dr. George Petrie]
The value of popularity is usually overstated if our actions are correct and our virtues are true. But I digress: how freeing it is to write something bad all of my own! How nice to reflect on an unformed idea and see its bastardization in print! I suppose I find some value in my writing after all, in that the beauty of this piece is that you, dear reader, won’t have made it this far. But if you have, I’ll know why! Having learned and thought much, I’m off; I have videos to make on LinkedIn.