Note the versions, none of the results give you the official operators page for the current version, 16. They give 9, which went EOL in 2021.
This is almost entirely misdirected. The success of Wikipedia comes from its human structures; the technical structure is close to meaningless. To propose a serious alternative you’d have to approach it from a social direction: how are you going to build moderation incentive structures that force your ideal outcomes?
Federation isn’t a magic bullet for moderation; on its own, it creates fractal moderation problems.
Recently got an Onyx Boox Ultra and it’s incredible compared to my previous Kobo. Basically, it’s 10" with stylus input and a keyboard case. The special sauce is that it runs Android, complete with the Google Play store. The display tech is advanced enough that normal apps, for instance Connect for Lemmy, work fine. I have mine set up with Syncthing, Home Assistant, and Obsidian, and it all just works, mostly. I’d recommend using a 3rd-party launcher and not touching the Onyx account, though.
I’ve had great experiences with Kobo, though. I literally went through 4 models because they kept upping their game. They’re less sketchy than Onyx and are very open: you can load your own books in nearly any format, and since it runs Linux, you can modify it. You can even completely replace the OS.
Key detail in the actual memo is that they’re not using just an LLM. “Wallach anticipates proposals that include novel combinations of software analysis, such as static and dynamic analysis, and large language models.”
They’re also clearly aware of scope limitations. They explicitly call out some software, like entire kernels or pointer-arithmetic-heavy code, as being out of scope. Nor do they seem to anticipate 100% automation.
So, with context, they seem open to any solution to “how can we convert legacy C to Rust.” Obviously LLMs and machine learning are attractive avenues of investigation; current models are demonstrably able to write some valid Rust and transliterate some code. I use them, and they work more often than not for simpler tasks.
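To make the kind of transliteration I mean concrete, here’s a toy sketch (my own hypothetical example, not from the memo): a pointer-and-length C function rewritten as safe, idiomatic Rust, where the raw pointer arithmetic becomes a bounds-checked slice.

```rust
// Hypothetical legacy C:
//
//     int sum(const int *xs, size_t n) {
//         int acc = 0;
//         for (size_t i = 0; i < n; i++) acc += xs[i];
//         return acc;
//     }
//
// Transliterated into safe Rust: the (pointer, length) pair becomes
// a slice, so out-of-bounds access is ruled out by construction.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    println!("{}", sum(&data)); // prints 4's worth of summing: 10
}
```

Trivial cases like this are exactly where current models already do fine; the hard part the memo gestures at is code where ownership and aliasing don’t map onto Rust’s borrow rules so cleanly.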
TL;DR: they want to accelerate converting C to Rust. LLMs and machine learning are some techniques they’re investigating as components.