• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 8th, 2023




  • I use the term “autocomplete on steroids” because it gets across a vaguely accurate idea of what an LLM is and how it works to people who are thinking of it like sci-fi movie AI. Sorry if it came across as if that were my whole reason for considering them not intelligent.

    LLMs do seem to pass a lot of intelligence tests we’ve come up with. Talking with one for the first time is a really uncanny experience; it’s a totally different thing from the old voice assistants. But they also consistently fail at tasks that would indicate an understanding of a topic. They produce good-looking equations, but the math underneath doesn’t make sense. They hallucinate facts that don’t fit with the rest of what they themselves are saying, but look similar to the way right answers are written and defended. They produce really convincing responses, but when they fail they betray some really basic failures to understand what they’re saying.

    I feel that LLMs are brute-forcing the tests people designed to measure intelligence. They can pass the bar exam, but they also contain thousands of successful bar exams to consult and millions of bits of text to glue those answers together with. But if you ask an LLM to actually do the job of a lawyer, it starts producing all kinds of garbage that sounds good but doesn’t stand up to scrutiny once someone looks up the hallucinated case references.



  • Part of the problem is that AI research likes to use terminology that sounds like what people do, when that’s not what the AI actually does.

    Large language models are not intelligent in any sense. They are autocomplete on steroids. This is a computer program that was fed a book someone wrote, then mathematically tweaked to be able to guess the next word in a sentence in a way that resembles that book. That’s all it does. It does not think or learn in any sense we’d apply to a human.

    To me, LLMs sound like a massive plagiarism engine, and I think they should need to get a license from the authors whose works they used to make the LLM under whatever terms that author wants to give, just like a publisher needs to get permission to print a copy of the work. But copyright law has no easy “bright line” for what counts and what doesn’t. So the courts will have to decide whether what the AI “creates” is similar enough to the original works to count as a violation, or if the AI and its results are transformative enough to count as something new.


  • I try to ask myself what the motivation of the FOMO is. Does it come from me, or is the platform/game/whatever designed to make me feel that way?

    If it’s coming from the design of the thing, and I notice that design, that can immediately change my attitude toward it. It’s not “I want to play one more game” anymore, it’s “this game is pressuring me to play one more game.” Does the game have my best interests at heart? Am I comfortable with being pressured by this game? I find those questions really reframe the FOMO and help me step back from it.

    If the FOMO is actually coming from me, now it’s a question of priorities. If I’m spending time watching one more video on this platform, there’s something else I’m not going to get to. So the question for myself is “out of all the things I could be doing right now, is this the thing I want to do most?” Sometimes the answer is yes! I might want to catch up on the latest news if I haven’t checked in today. But if I’ve been doomscrolling for hours, the answer is probably no. And framing that as a choice between a bunch of activities instead of the simple FOMO choice of one more click makes that easier to see.




  • I was a mod on Reddit so I was personally aware that for years Reddit’s mod tools have been totally inadequate for the job, that Reddit has been promising to give us something better, and that Reddit has failed to deliver. Honestly, it was even worse than just not delivering: we’d get new tools that didn’t solve the main problems, were only available on the iOS app, coming to Android eventually, and coming to the websites never. Third party API tools were the only thing that made modding vaguely functional, even on a small sub.

    I’m also a supporter of accessibility in apps, which is another thing Reddit has been promising for years and has failed to deliver. Again, third party API tools are the only thing that makes Reddit vaguely accessible right now.

    Reddit’s API changes are not realistic to implement in a single month. This was made clear early on, and Reddit has refused to budge. So at this point Reddit is knowingly upending an ecosystem that makes their site usable by whole groups of users, with no first-party replacements ready. And given their history of failing to deliver these very tools, I have no confidence that they ever will.

    And THEN the Spez AMA happened. I was hoping he’d listen to the community, engage with our concerns, or at the very least actually do an AMA. Instead he got caught lying, he got caught astroturfing, and he inadvertently made it clear that the real issue was that he was butthurt over these third party apps being better at business than Reddit was. Oh, and later we found out the Reddit CEO really admired Elon Musk’s handling of Twitter, a platform I left for all the reasons Spez seems to like it.

    Even if none of these issues affected me personally (which they do), Reddit has made it clear that I just can’t trust them to run a fair and functional platform. They do not take their obligations to their users, mods, and business partners seriously. If they don’t like the way the game is going, they’ll change the rules without warning. They will promise features they will not deliver even when those features are essential to their site working for the users who keep it alive.

    I don’t want to help Reddit build what Reddit wants to make anymore.