The fine art of opportunism

Some quotes:

“Perhaps the single biggest barrier to opportunistic behaviors is a sort of puritanism drilled into us by most cultures that an outcome is not won fairly if it is won without an effort proportionate to its value. Gamblers are not respected in any culture. Not even smart gamblers who learn to count the cards at blackjack.”

“Burst. If all your work disciplines were acquired through a moral ethic and a sense of static work-life balance, you won’t be able to do this. Within an extremely short period of time you have to bet a LOT. Not just direct effort (as in staying up nights), but relationships, earned trust, all your brownie points, money.”

All of these people are saying the same thing: Stop being trapped by your own mental prisons. Try things. Fuck around and find out.

This time with a dose of “be observant. be ready.”

This is a quote within the article that resonates with me strongly:

“Great scientists have thought through, in a careful way, a number of important problems in their field, and they keep an eye on wondering how to attack them.”

I’ve engaged in this kind of opportunism in my academic career, such as it is/was. What are the broader problems where I feel there is something to be opportunistic about?

Super-cooperation-cluster alignment arguments

This is sort of how the argument comes across to me:

There is some most powerful and agentic thing in the universe; let’s call this God.

One way that you could be powerful and agentic is by being a super-cooperator. Being a super-cooperator might also make you very wise (and forgiving, etc).

Therefore, God must be very wise and forgiving.

There’s a bit of a logical problem with this formulation of it, which becomes apparent if we look at the implications being asserted:

  • A: being a super-cooperator ==> having great power and agency
  • B: being a super-cooperator ==> being very wise
  • C: having great power and agency ==> being very wise.

Unfortunately, we can’t get C from A and B, as the implication in A points in the wrong direction: A and B merely share a common antecedent, so together they say nothing about how power relates to wisdom. What we need is something like

  • A’ : having great power and agency ==> being a super-cooperator

i.e., the only way to have great power and agency is by being a super-cooperator.
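
To make the repair concrete, here is a minimal sketch in Lean (the proposition names Power, Cooperator, and Wise are hypothetical stand-ins for the informal predicates): C follows from A’ and B by simple transitivity of implication, while A and B both point out of the same antecedent and so license no conclusion connecting power to wisdom.

    -- Hypothetical propositions standing in for the informal predicates.
    variable (Power Cooperator Wise : Prop)

    -- C from A' and B: implications compose when they chain head-to-tail.
    example (hA' : Power → Cooperator) (hB : Cooperator → Wise) :
        Power → Wise :=
      fun hp => hB (hA' hp)

    -- A and B, by contrast, are (Cooperator → Power) and (Cooperator → Wise):
    -- both exit the same antecedent, so there is nothing to chain.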

Whether or not A’ and B are actually true seems like a fascinating question in the vein of “AI alignment”-style discussions.

Some handwavey initial thoughts:

For A':

All intelligence and capacity in the world is “holonic” in nature. In particular, it consists of self-organizing parts which respond to local incentives and represent some local implicit utility function.

There is some complexity buzz-saw which keeps “unitary” formations from developing beyond a point. All global objectives will eventually decompose into smaller objectives which parts arise to fulfill.

Since capacity must be implemented by self-interested parts, naturally these parts must somehow cooperate in order for their capacity to aggregate.

For B:

So parts must cooperate. But does this necessarily result in aggregate wisdom “on the edges”? Does the super-cooperator even know about the cooperative dynamics which give rise to its existence?

Counterexamples:

  • Corporations
  • Human society
  • The human body

There seem to be some other big pieces, assumptions, that are needed for this part. What are the emergent phenomena of a human-level super cooperation cluster? Is there some reason that this phase is necessary in the developmental cycle of a God?

Attractors in the Space of Mind Architectures and the Super Cooperation Cluster

When you mix together Is, Ought, Darwinism, and the idea of a Super Cooperation Cluster, this is what comes to mind for me:

It is not at all obvious to me that, at the level of “is”, something like a blissful, godlike, super cooperation cluster must arise. The overarching narrative of “collaboration wins over defection” seems a bit overly simplistic and fails to take into account all of the local incentives which lead to game-theoretic instability.

I personally would not be inclined to join a cooperation-cluster-worshiping religion on the basis of the “is”-belief that such a thing probably exists.

However, if we mix in a dash of “ought,” to me the interesting question that arises is “how much would the existence of a super-cooperation-cluster-respecting world religion help to bring about a worldly super cooperation cluster?”

Maybe a lot?

Maybe enough so that many people could derive enough meaning from the idea to make it worth joining the religion.

Which brings us back into the territory of “is.”

I guess my preferred framing of all of this is something like “design thinking + religion.” How much good could a designer religion do for the world? Could I design a religion that I would join?

Related: Is’s and Oughts

Universal Oughts

Is’s and oughts.

An “is” is relatively easy to understand, even if it can be difficult to sort out whether an is truly is.

“is"s fit within our consciousness.

Tell me that super cooperators shall inherit the earth, I’m with you.

Tell me that a “designer religion” is the way that a super cooperator cluster is most likely to emerge on earth, I can at least understand what you are saying.

“is"s are well behaved under normal operations of composition, addition, etc. This is and that is and so forth. “is"s accommodate great complexity in this way.

An ought is hard to understand because it is actually half of an is.

If you tell me I ought to join such a religion, you are leaving half of the statement implied.

The full statement would read something like “You ought to join such a religion if you care about the universal prevalence of positive vs negative mind-states.”

Oughts are poorly behaved under composition: I ought to do A and B, but A precludes B.

Thus, we find that the left-out part of the ought is actually the most critical part. Once I have clarity on my values, I can evaluate a set of oughts as heuristics that may or may not help me to achieve these values, rather than as first order imperatives.

Let’s now consider the question: “Should I care about the universal prevalence of positive vs negative mind states?”

This sounds like a kind of glib, unserious question (of course we should!), but I think it’s actually a pathological one. To put it more poignantly, “How much should I care about the universal prevalence of positive and negative mind states?”

It may very well be that the only self-consistent answer to this question is “not very much.” If you care too much about universal mind-valence, your life as you know it might disintegrate; so if your other values preclude you from disintegrating your life, you shouldn’t care too much about universal mind-valence. There’s some kind of “fixed point” in here: a level of caring about universal well-being that we can support, but it’s a contained amount.

I think the interesting question that is being asked here is something like

Is it possible that there is a “universal ought”?

An ought would be universal over some space of predicates if, for every predicate in that space, the ought remains a sort of dominant strategy relative to it. For instance: over all of the “local” value systems of which we can conceive, it makes sense to do some thing.

What could a universal ought possibly look like?

A universal ought would necessarily have to be non-zero-sum.

Religion as a universal ought.

Mind Games

It’s really interesting that people’s relationship to “framing” tends to be dominated by either “how do I get unstuck” or “how do I keep from being manipulated.”

Obviously, this makes a lot of sense and resonates with John Vervaeke’s points about the development of rhetoric and the idea of bullshitting.

An interesting aside: Let’s take the idea of the Master and His Emissary, but equate right-hemispheric thinking with frame formulation and left-hemispheric thinking with operating within a frame, for the sake of argument. If an unhealthy predominance of left-hemispheric thinking and attention were somehow to make us less aware of our frames and less adaptive to unhelpful ones, this could explain why framing would be the right battlefield for manipulation and political influence.

The article itself gives an interesting comment on why some people are more prone to running into this than others:

“It’s not just those who cross their paths professionally who have to deal with their bullshit. If you’ve ever wondered why attractive women tend to be bitchy, it’s often because they have to be - imagine being a honing beacon for every species of manipulative tactic ever developed.”

Wow, he even uses the term bullshit, I hadn’t noticed that before.

Chapman: A first lesson in meta-rationality

Reading “How to think” caused me to realize that David Chapman’s main idea with metarationality may be somewhat close to the ideas that I have been exploring around the concept of a framing.

The point which he makes well here is that in order to apply a formal method of reasoning, one has to already have formalized the problem. And many approaches to rationality fail, at the very least, to emphasize the difficulty and importance of this step.

To quote:

Finding a good formulation for a problem is often most of the work of solving it.

A bewildered Bayesian might respond:

“You should consider all hypotheses and types of evidence! Omitting some means you might get the wrong answer!”

Unfortunately, there are too many. Suppose you want to understand the cause of manic depression. For every grain of sand in the universe, there is the hypothesis that this particular grain of sand is the sole cause of manic depression. Finding evidence to rule out each one individually is impractical.

“But, obviously, all grains of sand are equivalent as far as manic depression is concerned! And anyway, sand obviously has nothing to do with manic depression.”

Yes; but this is not logically necessary. It’s something we can reasonably suppose. But how do we do that? It requires intelligent background understanding.

What Chapman refers to as a problem formulation here is very similar to what I have been defining as a frame.

Finding a frame that works well for a problem is often so intuitive and straightforward for us as humans that we aren’t even aware of it happening. Even when this gets us or an entire field stuck, and resolution only comes about via a subtle change to the frame or problem formulation, it can be difficult to understand that this is what has happened unless you are paying attention to the overall structure of intra-frame thinking and frame-formulation.

Chapman does a good job of pointing out some examples from his own background where landing on the right problem formulation was really quite non-trivial.

Chapman’s orientation of the article around Bayesianism can be kind of a distraction. An interesting point that Chapman makes is that Bayesian inference itself may only be optimal when a large set of conditions is met concerning the relationship between the thing we are modeling and our model of it.

While it’s possible in principle to invoke ideas like Solomonoff induction, which may be universal enough to broadly do away with modeling altogether, for all practical applications of rationality by humans, Chapman’s point stands.

Bayesian inference over a universal space of hypotheses is simply ridiculous from a computational standpoint. This is never what we are doing. We always first undergo a process of constructing a relevant world model through observation of and interaction with the world/the problem. While this process of feature selection or model formation may yield to some Bayesian-like mathematical description, for us humans it is a mostly unconscious affair, and to the extent that we can influence it, it is very unclear that tools like Bayesian inference are the right ones to utilize. It seems very likely that a bag of tricks or heuristics for checking, permuting, or playing around with a frame is much more useful.
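
As a back-of-the-envelope sketch of the computational point (a sketch only; both figures below are assumed orders of magnitude, not measurements): even restricting attention to Chapman’s single-grain hypotheses and granting a billion evidence evaluations per second, one exhaustive pass is hopeless.

    # Back-of-the-envelope sketch; both figures are assumed orders of
    # magnitude, chosen generously rather than measured.
    grains_of_sand = 10**24      # stand-in count of single-grain hypotheses
    updates_per_second = 10**9   # a very generous evidence-evaluation rate

    seconds = grains_of_sand / updates_per_second
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{years:.1e} years for one pass")  # ~3.2e7 years
    # And this ignores compound hypotheses, whose count grows as
    # 2**grains_of_sand -- utterly beyond any physical computation.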

Act into fear and abandon all hope

“By making clever choices in the company we keep and the cultures we engage, as adults we can insulate ourselves from the fullness of the world, and by doing so cut ourselves off from the need for further development.”

“Key to creating such a buffer against further psychological development is fear. Fear is like a fence, keeping a person “safe” by separating them from the things that would challenge their understanding of the world. Fear keeps out new info that would invalidate existing beliefs and creates a bubble inside which only confirming evidence can be found. Fear protects us from cognitive dissonance, but in so doing cuts off the vital confusion that helps us grow in our complexity to make sense of the world.”

This describes the way that I engaged with certain types of information when I was a conservative Christian. What fears are currently insulating me from new experiences and in turn, limiting my need for development? Social fears? Fears of seeing uncomfortable truths about myself?

This second one for sure.

I fear learning things about the way that I come across, which I worry would in turn cause me to become self-aware in a harmful manner.

This is difficult because there is potential, in the absence of the right personal development in response, for real harm. I guess it would be important in some cases to time our engagement with our fears smartly, to ensure that we have the slack to deal with the repercussions.

Soares: Enjoying the feeling of agency

Don’t let an ugh field accumulate around something. Instead, learn to recognize and enjoy the feeling of agency that comes with preempting an ugh field. Even look forward to such “opportunities,” since agency isn’t possible when we’re comfortable with the normal flow of things.

Soares: Stop Trying and Try

When am I expecting myself to “do my best” instead of trying to kick the soccer ball?

I actually find that I sometimes intentionally set my mindset to “do my best” in order to break down a mental barrier that I have around something. Like weightlifting. When just getting into weightlifting during a running hiatus, I was worried about injury and unpleasant/anxiety-inducing experiences like light-headedness and nausea associated with pushing myself very hard. Committing to “only do my best” helps me to get over worries that I will push myself too hard.

I guess I don’t usually find myself wishing I had kept at something instead of giving in. Usually it’s more of a mental game about whether I’m pushing myself harder than I should. An exception might be fleeing a social situation prematurely. Does this frame apply here? Something to try to remember next time there is a social situation where I’m liable to do this: am I just showing up/doing my best, or am I here to do A to accomplish B?

SSC: Epistemic Learned Helplessness

Epistemic learned helplessness: for the layman, the convincingness of an argument has less to do with its correctness and more to do with the cleverness/art of the one crafting/delivering the argument. If I get burned enough times, I’ll realize this, and I’ll stop taking convincing arguments seriously.

One word: Covid