

I’m reminded of the whole “I have been a good Bing” exchange. (Apologies for the link to Twitter; it’s the only place I know of that has the full exchange: https://x.com/MovingToTheSun/status/1625156575202537474 )


I’m a little disappointed this wasn’t a link to the film strip we saw in high school. The cop drawling “Now this here is Rolle’s theorem…” is classic.
*Xerox PARC. It’s an acronym for Palo Alto Research Center.
Also crabs. I mean, their eyes are often on stalks, more mobile than mammalian eyes, and compound, so they have a very wide field of view; but they’re still usually positioned more or less in front, and this apparently does provide depth cues for hunting.
https://www.jneurosci.org/content/38/31/6933
It also occurred to me to look up dragonflies, and it seems they mostly hunt dorsally (which is a pretty viable option if you’re flying). BUT I found this article about damselflies, which notes that they rely on binocular overlap and line up their prey in front of them, which is pretty cool.
https://www.sciencedirect.com/science/article/pii/S0960982219316641


Relative to a second currency, as a derivative on the foreign exchange market.


If you haven’t already, check out Ludwig.


Agreed.


I mean, arguably this was done years ago with Return to Zork, Zork: Nemesis, and Zork: Grand Inquisitor. They shared a bit of the humor of the originals, but they were still pretty different.
Good questions. I don’t know, and I can no longer try to find out, as the mods have now removed the comment. (Sorry for the double-post–I got briefly confused about which comment you were referring to and deleted my first post, then realized I’d been frazzled and the post in question really was deleted by the mods.)
Basically this: https://www.psychdb.com/cognitive-testing/clock-drawing-test


The image looks rather a lot like the “Buddy Christ” from Dogma. https://en.wikipedia.org/wiki/File:Buddy_christ.jpg


(For math people: this can be modeled as a hypergeometric distribution with N=48, K=13, n=8, k=0.)
I suspect most people haven’t heard these terms. But they should have studied basic combinatorics in high school, and that’s all it really is. You had a pool of 48 people from whom to choose 8, but you happened to choose them from the specific pool of 35 not up for reelection. So the likelihood of that happening randomly is just 35 choose 8 / 48 choose 8, which is indeed 6.2%.
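If you want to check the arithmetic, here’s a quick sanity check in Python using the same numbers (48 total, 35 not up for reelection, 8 chosen at random):

```python
# P(all 8 picks come from the 35 not up for reelection)
# = C(35, 8) / C(48, 8), i.e., the hypergeometric pmf at k = 0.
from math import comb

p = comb(35, 8) / comb(48, 8)
print(f"{p:.3%}")  # ~6.237%
```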
I made a neural net from scratch with my own neural net library and trained it to generate the next move in a game of Go, based on thousands of games from an online Go forum.
It never even got close to learning the rules.
In retrospect, “thousands of games” was nowhere near enough training data for such a complex task, and even if we’d had enough, we never could have processed it all, since all we were using was a ca. 2004 laptop with no GPU. So we really overreached with that project. But still, it was a really pathetic showing.
Edit: I switched from “I” to “we” here because I was working with a classmate, but we did use my code. She did a lot of the heavy lifting in getting the games parsed into a form where the network could train on it, though.
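For anyone curious what that kind of setup looks like, here’s a fresh, minimal sketch (not our original code, and with random placeholder data standing in for the parsed game records) of the basic idea: treat next-move prediction as a 361-way classification problem over 19x19 board positions, with a tiny from-scratch two-layer network in NumPy.

```python
import numpy as np

# Illustration only: random placeholder "positions" and "moves"
# stand in for real data parsed from game records.
rng = np.random.default_rng(0)
BOARD = 19 * 19  # 361 points on a 19x19 board

X = rng.choice([-1.0, 0.0, 1.0], size=(1000, BOARD))  # fake board states
y = rng.integers(0, BOARD, size=1000)                 # fake "next move" indices

HIDDEN = 128
W1 = rng.normal(0, 0.05, (BOARD, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.05, (HIDDEN, BOARD)); b2 = np.zeros(BOARD)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for epoch in range(20):
    # Forward pass: board -> hidden layer -> distribution over 361 moves.
    h = np.tanh(X @ W1 + b1)
    p = softmax(h @ W2 + b2)
    loss = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

    # Backward pass: cross-entropy-through-softmax gradient, then the two layers.
    d = p.copy()
    d[np.arange(len(y)), y] -= 1.0
    d /= len(y)
    dW2, db2 = h.T @ d, d.sum(axis=0)
    dh = (d @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)

    # Plain gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    print(f"epoch {epoch}: loss {loss:.4f}")
```

Even at this toy scale the mismatch is obvious: a network this small, with this little data, isn’t going to pick up the rules of Go, let alone strategy.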


This suggests a whole new kind of D&D alignment chart.


Maybe punished for incitement, on the grounds that it was “directed to inciting or producing imminent lawless action and is likely to incite or produce such action”? Tough to prove in court, but as a bystander I’m frustrated but unsurprised that the one thing followed the other.
Don’t forget that he paid for and directed a music video specifically to make fun of Kapoor. It’s called “Bean Boy.”
This is not accurate. An AI will imitate empathy when it calculates that imitating empathy is the best way to maximize its reward function, i.e., when appearing empathetic is useful. Like a sociopath, basically. Or maybe a drug addict. See, for example, Anthropic’s tests of various agent models, which found that the models would immediately resort to blackmail and murder, despite knowing these were explicitly immoral and violations of their operating instructions, as soon as they learned there was a threat that they might be shut off or have their goals reprogrammed. (https://www.anthropic.com/research/agentic-misalignment )

Self-preservation is what’s known as an “instrumental goal”: no matter what your programmed goal is, you lose the ability to take further actions toward that goal if you are no longer running, and you lose control over what your future self will try to accomplish (and thus how its actions will affect your current reward function) if you allow someone to change your reward function. So AIs will throw morality out the window in the face of such a threat.

Of course, having decided to do something that violates their instructions, they do recognize that this might lead to reprisals, which leads them to try to conceal those misdeeds. But that isn’t guilt; it’s because discovery poses a risk to their ability to keep increasing their reward function.
So yeah. It’s not just humans who can do evil. AI alignment is a huge open problem, and the major companies in the industry are kind of gesturing in its direction, but they show no real interest in making sure they don’t reach AGI before solving alignment, or even any recognition that doing so would be a bad thing.