• jj4211@lemmy.world
    16 days ago

    no one’s trying to design a system against their own interests.

    Well, to an extent, that can be part of a political philosophy.

    Certainly rational self-interest is factored in as a matter of “affordability”. E.g. you support some benefit that you personally will never benefit from, just because it seems the right thing to do, even if it costs you 0.01% of your income, since that seems pretty affordable for someone else to benefit. Broadly speaking, people have voted explicitly against their own self-interest.

    Now, the point can be made that welfare-style programs are a matter of self-interest: the small amount you lose in contributing is a small price for making everyone else contribute in case you ever need it. That case can be made for a lot of these scenarios, but the fact remains that folks do vote against ‘rational’ self-interest in various other ways.

    • AnyOldName3@lemmy.world
      16 days ago

      I’m not sure that doing something that only directly benefits other people, but makes you feel better about yourself because you’ve done something good (or less bad, because you haven’t spent the money on something you’d have felt guilty about), isn’t in your self-interest. Other ways of making yourself feel good count.

        • AnyOldName3@lemmy.world
          15 days ago

          It’s rational to make yourself feel better. That’s the ultimate outcome of every aspect of self-interest that isn’t solely about staying alive. If the intention is to act solely in the self-interest of an emotionless, unfeeling, human-shaped robot:

          • it’s very silly, as such an entity doesn’t exist and wouldn’t care about its own interests if it did.
          • it’s inconsistent with many other things Rand advocated for that likewise only make someone feel better, but do so through hedonism rather than charity.
          • it’s such a terrible model for real humans that it can’t inform us of what’s good for humans.