One of the current problems with using AI is writing the right prompt to get results close to what you want. Hell, there are AIs for writing prompts now. So you either learn some programming by doing it yourself the way the AI does, through trial and error, or watch the code as the AI builds it and fixes bugs…or learn to prompt well enough to get results faster. I can’t say which is easier, faster, or better; things are changing rapidly.
I will add that having the right LLM for coding helps: one trained specifically on programming rather than a general-purpose model.
I think the first episode of SNW will hook you. So good.
It’s part of the problem, but I don’t think we’ve studied individual contributors as much as we’ve looked at the big picture. There was a study on plastics in general, with citations for the statistics it gathered, that I ran across while looking up the rubber from tires specifically, aka tire dust from wear and tear (which all vehicles produce to some degree, even EVs, and which is often part of the argument for fewer cars rather than different cars). About 1 million tons of the annual contribution of plastics to the ocean is due to tire dust in runoff waters. Also keep in mind that, like many large studies that take a while to put together, a lot of these statistics are old (around 2016). It’s probably worse now.
That’s probably how most of the 50% have any stock.
Recession for thee but not for me. Followed by “hey, things are looking better!”
If anything I think the development of actual AGI will come first and give us insight on why some organic mass can do what it does. I’ve seen many AI experts say that one reason they got into the field was to try and figure out the human brain indirectly. I’ve also seen one person (I can’t recall the name) say we already have a form of rudimentary AGI existing now - corporations.
LLMs are just very complex and intricate mirrors of ourselves, because they pull from our past ramblings to generate the best responses to a prompt. They only feel intelligent because we can’t see the inner workings, like the IF/THEN statements of ELIZA, and yet many people were still convinced that program was talking to them. Humans are wired to anthropomorphize, often to a fault.
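The ELIZA point is easy to demonstrate. A handful of pattern-to-response rules (hypothetical ones here, not Weizenbaum’s actual script) already produce replies that people read as understanding:

```python
import re

# Toy ELIZA-style rules: regex pattern -> canned reflection.
# These rules are made up for illustration; the real ELIZA used a
# richer keyword/transformation script, but the principle is the same:
# no comprehension, just pattern matching.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Tell me more."),  # fallback when nothing matches
]

def eliza(text):
    """Return the first rule's response whose pattern matches the input."""
    text = text.lower().strip(".!?")
    for pattern, response in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return response.format(*m.groups())

print(eliza("I feel lonely"))   # Why do you feel lonely?
print(eliza("Hello there"))     # Tell me more.
```

A few lines of string matching and it already “talks back” — which says more about us than about the program.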
I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a knowledge base to pull from. And what’s concerning is that even though LLMs aren’t “thinking” themselves, the way we’ve dived in head first, ignoring the dangers of misuse and their many flaws, tells you how we’ll ignore problems in AGI development, such as the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.
HAL from 2001/2010 was a great lesson - it’s not the AI…the humans were the monsters all along.
The amazing thing to me was learning that it’s a much smaller ship than it felt like when I first saw the movies. I always figured the original Kenner Falcon toy with the removable top (which was huge and heavy for a toy) wasn’t exactly to scale, but it seemed reasonable. If I had really looked closely at even the first movie back then, I suppose I could have extrapolated, but it felt big on screen. Says something about size not mattering where it counts.
But a cargo pusher like a tug boat doesn’t have to be big to be effective. That theory both blew my mind and remains canon.
Two more variables that will affect the number of encounters: when the “final” orbits of the inner planets were established (the Nice model suggests a lot of disruption early on), and that Mars’ orbit is quite elliptical, so it rarely lines up at its closest approach, which is still pretty far. If anything, we’d be more likely to see some correlation between Earth and Venus, if there is any.
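As a rough sanity check on the elliptical-orbit point (the semi-major axes and eccentricities below are approximate textbook values, and the geometry ignores orbital inclination):

```python
# Back-of-envelope Earth-Mars distance range, in AU.
# Approximate orbital elements: semi-major axis a, eccentricity e.
earth_a, earth_e = 1.000, 0.0167
mars_a, mars_e = 1.524, 0.0934

# Closest possible approach: Mars at perihelion while Earth is at
# aphelion on the same side of the Sun -- a rare alignment.
mars_perihelion = mars_a * (1 - mars_e)   # ~1.382 AU
earth_aphelion = earth_a * (1 + earth_e)  # ~1.017 AU
closest = mars_perihelion - earth_aphelion

# Farthest: both at aphelion on opposite sides of the Sun.
farthest = mars_a * (1 + mars_e) + earth_aphelion

print(f"closest ~ {closest:.3f} AU, farthest ~ {farthest:.3f} AU")
```

Even the best-case approach is about 0.36 AU, roughly 140 times the Earth-Moon distance, and most oppositions are considerably worse than that.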
Usenet. Go back to the base level.