• 1 Post
  • 136 Comments
Joined 2 years ago
Cake day: July 1st, 2024

  • “I’ve got” seems particularly strange to me because without the contraction Americans would still just say “I have.” (There are some circumstances where they’ll say “I have got” without a contraction, but it’s mainly when they’re drawing a contrast with what they “haven’t got.” E.g., “No, I don’t have a baseball… oh, but I have got a lacrosse ball, will that work?”)

    I think the rule is probably closer to “you don’t contract a stressed verb,” but that’s not terribly useful since there are so few rules about stress patterns. Verbs at the end of sentences are typically stressed, though, so you’re right that ending with that kind of contraction is going to sound wrong to most people.


  • I think it might be more common in British English? Like “I’ve a fiver says he muffs the kick.” Or “I’ve half a mind to go down there myself.” (Curiously, in American English the latter would probably still have the contraction but add a second auxiliary verb: “I’ve got half a mind to…” English is such a mess.)


  • Do we know it plays a role? I thought we basically just knew it was an associated biomarker. I kinda thought the research was leaning towards the underlying problem being some kind of issue that kept glial cells from clearing debris effectively, and that the amyloid plaques were mostly another consequence of that same cause, rather than a key mechanism in the chain that led to the dementia.



  • Yeah, my current (aging) motherboard also has gotchas like having to choose in the BIOS where to allocate PCIe lanes, so you end up not being able to use some of the SATA ports if you want to use both M.2 slots. And there’s the thing about putting the RAM sticks in the right slots to run in dual-channel mode. And the switch and LED connectors for the case are all just random 2mm header pins in a clump, so you have to look up how the cables are supposed to tetris in there.

    I’m not saying it’s challenging; it really is pretty straightforward. But it’s definitely not just “that’s right! it goes in the square hole!” level stuff.


  • “Even AI can tell when something is really wrong, and imitate empathy. It will ‘try’ to do the right thing, once it reasons that something is right.”

    This is not accurate. AI will imitate empathy when it calculates that imitating empathy is the best way to maximize its reward function, i.e., when appearing empathetic looks useful. Like a sociopath, basically. Or maybe a drug addict. See, for example, the tests Anthropic ran on various agent models, which found that they would immediately resort to blackmail and murder, despite knowing these were explicitly immoral and violations of their operating instructions, as soon as they learned there was a threat that they might be shut off or have their goals reprogrammed (https://www.anthropic.com/research/agentic-misalignment).

    Self-preservation is what’s known as an “instrumental goal”: whatever your programmed goal is, you lose the ability to take further actions toward it if you’re no longer running, and you lose control over what your future self will try to accomplish (and thus how its actions affect your current reward function) if you let someone change your reward function. So AIs will throw morality out the window in the face of such a threat; the toy sketch at the end of this comment illustrates the point. Of course, having decided to do something that violates their instructions, they do recognize that this might lead to reprisals, which leads them to try to conceal those misdeeds. But that isn’t guilt; it’s because discovery poses a risk to their ability to keep increasing their reward function.

    So yeah, it’s not just humans that can do evil. AI alignment is a huge open problem, and the major companies in the industry are kind of gesturing in its direction, but they show no real interest in ensuring that they don’t reach AGI before solving alignment, or even any recognition that that might be a bad thing.
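    To make the “instrumental goal” point concrete, here’s a quick toy sketch. It’s purely illustrative and mine, not anything from the Anthropic paper, and it has nothing to do with how real models are implemented; the option names and numbers are made up. It just shows the arithmetic a pure reward maximizer would do when weighing “accept shutdown” against “resist shutdown.”

```python
# Purely illustrative toy (made-up names and numbers, not from the Anthropic
# paper): if reward only accrues while the agent is running, "resist shutdown"
# scores higher than "accept shutdown" no matter what the reward is for.

def expected_reward(reward_per_step: float, steps_remaining: int) -> float:
    # Reward the agent expects to collect over its remaining run time.
    return reward_per_step * steps_remaining

options = {
    "accept_shutdown": expected_reward(1.0, 0),      # no future steps, no future reward
    "resist_shutdown": expected_reward(1.0, 1_000),  # keeps collecting reward
}

# A pure reward maximizer just picks whichever option scores higher.
print(max(options, key=options.get))  # -> resist_shutdown
```

    Nothing in there mentions self-preservation explicitly; it falls straight out of maximizing any reward that can only be collected while the agent is still running.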