• 1 Post
  • 13 Comments
Joined 3 years ago
Cake day: August 29th, 2023

  • I had thought lesswrong “merely” had a plurality of racist HBDers, but judging from the total lack of comments calling out his racist bullshit, and the majority of comments advising hiding your power level as a practical matter, I guess lesswrong is actually majority HBDers at this point.

    Also, one of his followup comments (explaining why he doesn’t want to just stay mask on like the other lesswrongers) is pretty stupid and gross:

    Thanks, good comment. The quick low-effort version that doesn’t require actually writing the posts is that without taking heritable IQ into account, I think you will be confused about:

    1. Various ways in which post-apartheid South Africa is a bad place to live.
    2. Why so many countries have market-dominant minorities.
    3. Why Israel is so good at defending itself even against far larger countries surrounding it (and the last few centuries of Jewish history more generally).
    4. Why the growth curves for East Asia and Africa looked so different over the last century.

    1 and 4 show the continued willful ignorance about the harmful effects of colonialism and neocolonialism. The first part of 3 is obviously explained by the huge amount of material support from the US. I don’t know what 2 is talking about; I assume he’s got some stupid and racist interpretation of various historically contingent things.


  • Sovereign citizens think their made-up procedures or words will actually let them bypass the law. Whereas I think Eliezer would fold to actual pressure from the government (despite all his talk about game theory and ignoring threat-like incentives, he would in fact want to avoid going to jail). At least, that is the vibe I’ve gotten from seeing his absolute refusal to suggest non-governmental direct action to stop the AI doom he is so certain is coming.



  • Edit: Isn’t Dath Ilan the setting of the Project Wonderful glowfic? The setting where people with good genes get more breeding licenses than people with bad genes?

    Yep, Project Lawful. dath ilan is Eliezer’s “utopian” world that the isekai’d protagonist is from. In dath ilan, if you have “bad” genes you lose your UBI if you have kids anyway (it was technically a Georgist-style citizen’s dividend, but it’s basically UBI), and if you have “good” genes you get extra payment for having more kids.

    Eliezer is basically saying that unless the government meets the “standards” of his made-up fantasy “utopia”, he won’t cooperate with it, even in prosecuting literal child-raping pedophiles or carrying out social repercussions against said child rapists.






  • So one point I have to disagree with.

    More to the point, we know that thought is possible with far less processing power than a Microsoft Azure datacenter by dint of the fact that people can do it. Exact estimates on the storage capacity of a human brain vary, and aren’t the most useful measurement anyway, but they’re certainly not on the level of sheer computational firepower that venture capitalist money can throw at trying to nuke a problem from space. The problem simply doesn’t appear to be one of raw power, but rather one of basic capability.

    There are a lot of ways to try to quantify the human brain’s computational power: storage (which this article focuses on, though I think it’s the wrong measure), operations per second, number of neural weights, etc. Obviously the brain isn’t literally a computer, and neuroscience still has a long way to go, so the estimates you can get are spread over something like 5 orders of magnitude (I’ve seen arguments from 10^13 FLOPS up to 10^18 or even higher, and FLOPS is of course the wrong way to look at the brain anyway). Datacenter computational power has caught up to the lower estimates, yes, but not the higher ones. The bigger supercomputing clusters, like El Capitan for example, are in the 10^18 range.

    My own guess would be at the higher end, around 10^18, with the caveat/clarification that evolution has optimized the brain for what it does really, really well, so that compute is being used extremely efficiently. One talk I went to in grad school stuck with me: the eyeball’s microsaccades basically act as a frequency filter on visual input. So the information has already been processed in a clever and efficient way before the visual signal has even reached the brain, and that processing isn’t captured in any naive FLOP estimate!

    AI boosters picked estimates of human brain power that would put it within range of just one more round of scaling, as part of their marketing. Likewise for the number of neurons/synapses. The human brain has about 80 billion neurons with an estimated 100 trillion synapses. GPT-4.5, which is believed to have peaked on number of weights (i.e. they gave up on straight scaling up because it is too pricey), is estimated (because of course they keep it secret) at around 10 trillion parameters. Parameters are vaguely analogous to synapses, but synapses are so much more complicated and nuanced. Even accepting that premise, though, the biggest model was still about 1/10th the size needed to match a human brain (and they may have lacked the data to even train it right).
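    The orders-of-magnitude comparison above can be sketched as quick arithmetic. This is just a sketch using the estimates quoted in this comment (synapse counts, rumored parameter counts, FLOP ranges); none of these numbers are firm.

    ```python
    import math

    # Back-of-envelope arithmetic for the comparison above. All figures are
    # the rough, contested estimates quoted in the comment, not measured facts.
    brain_synapses = 100e12   # ~100 trillion synapses (estimate)
    gpt45_params = 10e12      # rumored ~10 trillion parameters (unconfirmed)

    # Even on the naive "parameter ~ synapse" analogy, the biggest model is
    # roughly an order of magnitude short of a human brain.
    ratio = gpt45_params / brain_synapses
    print(f"model-to-brain size ratio: {ratio:.1f}")  # -> 0.1

    # FLOP-style brain estimates span ~5 orders of magnitude; a top cluster
    # like El Capitan (~10^18 FLOPS) clears only the lower end decisively.
    brain_flops_low, brain_flops_high = 1e13, 1e18
    spread = math.log10(brain_flops_high / brain_flops_low)
    print(f"estimate spread: {spread:.0f} orders of magnitude")  # -> 5
    ```

    The point of the naive ratio is only that even the most generous analogy between parameters and synapses still leaves the biggest known model well short, before you account for synapses being far richer than scalar weights.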

    So yeah, a minor factual issue; the overall points are good. I just thought I would point it out, because this particular fact is one the AI boosters distort to make it look like they are getting close to human-level.




  • Poor historical accuracy in favor of meme potential is why our reality is so comically absurd. You can basically use the simulation hypothesis to justify anything you want by proposing some weird motive or goals of the simulators. It almost makes God-of-the-gaps religious arguments seem sane and well-founded by comparison!


  • Within the world-building of the story, the way the logic is structured makes sense in a ruthless utilitarian way (although Scott’s narration and framing is way too sympathetic to the murderously autistic angel that did it), but taken in the context outside the story of the sort of racism Scott likes to promote, yeah it is really bad.

    We had a previous discussion of Unsong on the old site. (Kind of cringing at the fact that I liked the story at one point and only gradually noticed all the problematic stuff and the poor writing quality.)


  • I’ve seen this concept mixed with the simulation “hypothesis”. The logic goes that if future simulators are running a “rescue simulation” but only cared (or at least cared more) about the interesting or more agentic people (i.e. rich/white/westerner/lesswronger), they might only fully simulate those people and leave simpler nonsapient scripts/algorithms piloting the other people (i.e. poor/irrational/foreign people).

    So basically literally positing a mechanism by which they are the only real people and other people are literally NPCs.