• 0 Posts
  • 11 Comments
Joined 4 months ago
Cake day: March 3rd, 2024


  • It’s part of the problem, but I don’t think individual contributors have been studied as much as the big picture. There was a study on plastics in general that cites sources for the statistics it gathers; I ran across it while looking up the rubber from tires specifically, a.k.a. tire dust from wear and tear (which all vehicles produce to some degree, even EVs, and which is often part of the argument for fewer cars rather than different cars). About 1 million tons of the annual contribution of plastics to the ocean is due to tire dust in runoff water. Also keep in mind that, like many large studies that take a while to put together, a lot of these statistics are old (around 2016). It’s probably worse now.

  • If anything, I think the development of actual AGI will come first and give us insight into why some organic mass can do what it does. I’ve seen many AI experts say that one reason they got into the field was to try to figure out the human brain indirectly. I’ve also seen one person (I can’t recall the name) say we already have a rudimentary form of AGI now: corporations.


  • Rhaedas@fedia.io to Programmer Humor@programming.dev: "prompt engineering"

    LLMs are just very complex and intricate mirrors of ourselves, because they pull from our past ramblings to produce the best response to a prompt. They only feel intelligent because we can’t see the inner workings, like the IF/THEN statements of ELIZA, and yet many people were still convinced that program was talking to them. Humans are wired to anthropomorphize, often to a fault.
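
    To make that concrete, here’s a toy sketch in Python of the kind of pattern/response rule ELIZA ran on. It’s purely illustrative — these rules are made up for the example, not Weizenbaum’s actual script — but it shows how little machinery is needed to feel "conversational":

    ```python
    import re

    # Toy ELIZA-style rules: a regex pattern paired with a response
    # template. These examples are invented for illustration; the real
    # program used a much larger scripted rule set.
    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r".*\bmother\b.*", "Tell me more about your family."),
    ]

    def respond(utterance: str) -> str:
        text = utterance.strip().rstrip(".!?")
        for pattern, template in RULES:
            match = re.fullmatch(pattern, text, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # stock fallback when no rule matches

    print(respond("I need a break"))       # Why do you need a break?
    print(respond("My mother hates me."))  # Tell me more about your family.
    ```

    A handful of IF/THEN reflections like these was enough to convince many users they were being understood.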

    I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a database to pull from. What’s concerning is that even though LLMs are not “thinking” themselves, the way we’ve dived in head first, ignoring their many flaws and the dangers of misuse, says a lot about how we’ll ignore problems in broader AI development, such as the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.

    HAL from 2001/2010 was a great lesson - it’s not the AI…the humans were the monsters all along.


  • The amazing thing to me was learning that it’s a much smaller ship than it felt like when I first saw the movies. I always thought the original Kenner Falcon toy with the removable top (which was huge and heavy for a toy) was probably not to scale, but it turns out it was reasonable. If I had really looked closely at even the first movie back then, I suppose I could have extrapolated, but it felt big on the screen. Says something about size not mattering where it counts.

    But a cargo pusher like a tugboat doesn’t have to be big to be effective. That theory both blew my mind and remains canon.