• 4 Posts
  • 267 Comments
Joined 1 year ago
Cake day: December 22nd, 2024


  • You’re way too rude for somebody this unaware of the topic at hand.

    FSR and DLSS are, at their core, temporal upscalers. They take motion vectors, subpixel samples from a jittered camera, and a low-resolution scene, and, using shaders for FSR or an AI model for DLSS, resolve those accumulated samples to fill the entire target resolution. That’s it. This is not frame generation, and they don’t invent anything, whatever you meant by that.
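    As a rough illustration of the temporal part, here’s a toy Python sketch (the function name and numbers are mine, not any real API). Real upscalers also reproject the history buffer along motion vectors and reject invalid pixels; this assumes a static scene:

```python
def temporal_accumulate(history, jittered_sample, alpha=0.1):
    # Toy temporal accumulation: every frame contributes a slightly
    # offset (jittered) low-res sample, exponentially blended into a
    # history buffer that converges on the true signal over time.
    # Real FSR 2+/DLSS also reproject the history along motion vectors
    # and reject stale pixels; this sketch assumes a static scene.
    return [
        [h * (1 - alpha) + s * alpha for h, s in zip(hist_row, samp_row)]
        for hist_row, samp_row in zip(history, jittered_sample)
    ]

history = [[0.0, 0.0]]
for _ in range(50):  # feed 50 frames' worth of samples
    history = temporal_accumulate(history, [[100.0, 40.0]])
# history has converged close to the true values [100, 40]
```

    The point is that every pixel in the output comes from samples the game actually rendered, just gathered across several frames.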

    You can then, on top of the regular upscaling, turn on frame generation, an entirely separate path that holds frames in a buffer and creates intermediary frames. Those are the fake frames you complained about.

    One can use both FSR and DLSS with no frame generation whatsoever, and both were originally created without any kind of frame generation to begin with. At present, Helldivers already uses FSR without frame generation - just for upscaling - but it’s FSR 1.0 (released under AMD’s FidelityFX branding), a matrix-based spatial scaler that only looks at one central pixel and tries to apply weights to determine how to fill in its neighbors. This looks horrendous. FSR 2.x onwards, and DLSS, use the full temporal mechanism I described.
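    For contrast, here’s a toy spatial upscaler in Python (plain bilinear weights; FSR 1.0’s actual EASU kernel is far more sophisticated, but the principle is the same): no history, no motion vectors, so any detail below the input resolution is simply guessed from neighboring pixels.

```python
def spatial_upscale_2x(img):
    # Toy spatial upscaler in the spirit of FSR 1.0: every output pixel
    # is a weighted mix of nearby *input* pixels from the same frame.
    # No motion vectors, no history - fine detail is guessed, not
    # recovered. (FSR 1.0's real EASU kernel is far more elaborate.)
    h, w = len(img), len(img[0])
    out = [[0.0] * (w * 2) for _ in range(h * 2)]
    for y in range(h * 2):
        for x in range(w * 2):
            # map output coords back into the low-res image
            sy, sx = y / 2, x / 2
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            out[y][x] = (img[y0][x0] * (1 - fy) * (1 - fx)
                         + img[y0][x1] * (1 - fy) * fx
                         + img[y1][x0] * fy * (1 - fx)
                         + img[y1][x1] * fy * fx)
    return out

low = [[0.0, 100.0],
       [0.0, 100.0]]
high = spatial_upscale_2x(low)  # 4x4 image, edges smeared by the weights
```

    That smearing is exactly why spatial-only scaling “looks horrendous” compared to the temporal approach.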

    That’s “what the heck” I think DLSS does.


  • The user explained later exactly what went wrong. The AI gave a list of instructions as steps, and one of the steps was deleting a specific Node.js folder on that D:\ drive. The user didn’t want to follow the steps manually and just said “do everything for me”, at which point the AI prompted for confirmation and received it. The AI then indeed ran commands freely, with the same privileges as the user; however, this being an AI, the commands were broken and simply deleted the root of the drive rather than just the one folder.

    So yes, technically the AI didn’t simply delete the drive - it asked for confirmation first. But also yes, the AI did make a dumb mistake.


  • Like there is a lot of stupidity on reddit but usually someone comes in with actual knowledge

    Be careful with that, actually. Redditors have mastered repeating an explanation or analogy they read on another thread or saw on YouTube, and they’re quite eloquent when delivering it. Problem is, if they misunderstood it to begin with, they’ll repeat a broken version just as confidently.

    I didn’t notice it at first… then I started seeing explanations for things in my field and cringed at how wrong they were, and then I started noticing the pattern, and the very same repeated analogies, in other areas too.


  • One clear sign is how despite all the money and pressure, companies haven’t been able to actually implement it in useful ways.

    Samsung, Apple, Google, Canva, you name it, they invest a billion into integrating AI into their products and what do they get?

    A chat box, an image object remover, bad image generation, a translator. Sure, all things users were impressed by… two years ago. It’s always the same.

    My banking app decided to update adding “innovative AI features!” which meant… Any guesses?

    Instead of typing the value, pressing OK, and selecting a contact for a bank transfer - which is fast and easy - I now need to type into a chat box “Transfer X amount to Person Y”, and this obnoxiously bad AI replies with emojis, gets two pieces of information wrong, and takes twice as long to complete the same task.