Just some Internet guy

He/him/them 🏳️‍🌈


  • I’m not saying to use native toolkits like Qt or GTK, those indeed have problems. What React Native does is somewhere in between: it’s an abstraction that produces decent results across platforms, including the web.

    It uses slightly higher-level abstractions that work a lot like the web for rendering: you still get your boxes and a subset of CSS properties. But on web it compiles to flexbox or grids, while on Android it compiles to something like a LinearLayout or whatever other layout the OS understands. On web a <Text> compiles to a <span>; on Android it compiles to a native text element. On mobile, where you need the performance the most, the alternative is rendering a web page that eventually ends up doing the same thing anyway to display things natively, but with all the downsides of a web view.
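
    For a sense of what that mapping looks like, here’s a minimal sketch (the component and style names are just for illustration): the same tree renders as a flex <div> with <span>s on web, and as native layout and text views on Android/iOS.

    ```tsx
    import React from 'react';
    import { View, Text, StyleSheet } from 'react-native';

    // One definition, two render targets: on web this becomes a flex <div>
    // containing <span>s; on Android/iOS it becomes native layout and text views.
    const styles = StyleSheet.create({
      row: { flexDirection: 'row', justifyContent: 'space-between', padding: 8 },
      label: { fontSize: 16 },
    });

    export function InboxRow({ title, unread }: { title: string; unread: number }) {
      return (
        <View style={styles.row}>
          <Text style={styles.label}>{title}</Text>
          <Text style={styles.label}>{unread}</Text>
        </View>
      );
    }
    ```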

    This performs way better with basically no downside for the web version: you keep most of the flexibility you need for responsive layouts, but it’s way more lightweight when you do target native. There you can just render it all yourself really cheaply, like any native toolkit would. You’re your own toolkit.

    They will never look native, but at least all the rendering will be native. Most companies have their own custom UI theme anyway; native widgets rarely get used.

    We’re talking Electron replacement after all, so it’s not like apps made with it look native anyway. But if they at least performed like native apps by skipping the web view and all the baggage it brings with it, that’d be great.


  • For the end user, its main weakness is that complex pages can be pretty slow to render if not coded well. It’s not that bad, though. You wouldn’t go “oh, this is a React site, yuck”; they’re all like that these days, for the reasons you’d expect.

    As for React Native, its main issue is the communication between the JavaScript browser-ish environment and the Java/Kotlin native environment, which can be costly because everything has to be serialized (meaning, converted to some type of data structure both sides can understand) and deserialized, so complex screen updates don’t scale too well.
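
    For illustration, here’s roughly what such a bridge crossing looks like. NativeModules is the real pre-Fabric mechanism; the ImageFilters module and its blur method are hypothetical.

    ```tsx
    import { NativeModules } from 'react-native';

    // Hypothetical module implemented on the Java/Kotlin side.
    const { ImageFilters } = NativeModules;

    async function blurAvatar(uri: string): Promise<string> {
      // The arguments are serialized over to the native side and the result
      // is serialized back. Fine once in a while; ruinous if done per frame.
      return await ImageFilters.blur(uri, 5);
    }
    ```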

    It’s easy for developers to accidentally trigger much bigger and much more expensive re-renders than expected. If you see second-long page hangs on some websites as new content loads in, that’s usually what happened.


    For developers, it’s complicated; you kind of need to experience it to understand the footguns.

    React was born to solve one particular problem at Facebook: how can we make it so any developer can jump on any part of the UI code and add features without breaking everything? One of the most complicated aspects of a website is state management, in other words, making sure every part of the page is updated when something changes. For example, if you read a message in your inbox, the unread count needs to update in a couple of places on the page. That’s hard because you need to make sure everything that can change that count is in agreement with everything that displays that count.

    React solves that problem by hiding it away from you. Its model is simple: given a set of inputs, you have a function that outputs how to display them. Every time a value changes, React re-renders every component that used that value, compares the result with the previous one, and then modifies the page with the updated data. That’s why it’s called React: it reacts to changes and actions.
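
    A minimal sketch of that model (component names are just for illustration): UnreadBadge is a plain function of its input, and when the state changes React re-runs it, diffs the output against the previous render, and patches only what changed.

    ```tsx
    import React, { useState } from 'react';

    // A pure function of its props: given `unread`, here’s what to display.
    function UnreadBadge({ unread }: { unread: number }) {
      return <span>Inbox ({unread})</span>;
    }

    function Mailbox() {
      const [unread, setUnread] = useState(3);
      // Clicking the button changes the state, so React re-renders
      // UnreadBadge with the new value and patches the page.
      return (
        <div>
          <UnreadBadge unread={unread} />
          <button onClick={() => setUnread(unread - 1)}>Mark one read</button>
        </div>
      );
    }
    ```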

    The downside is that if you’re not very careful, you can place something in a non-ideal spot that cascades into re-rendering the entire page every time that thing updates. At scale it usually works out relatively okay, and it’s not like rendering the whole page is that expensive. There’s an upper cap on how bad it can be, and it won’t let you do re-render loops, but it can be slow.

    I regularly see startups with 25MB of JavaScript caused by React abuse and by favoring new features over tracking down excessive renders. They’ll load the same data 5 times because “this should only render once” turned out to be false, but hey, it displays correctly. I commonly see entire forms re-rendered on every character you type because the data is stored in the form’s state, so the whole tree has to re-render.
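
    That keystroke footgun in a minimal sketch (hypothetical component names): because the input’s value lives in the form’s state, every character re-renders ExpensiveSection too.

    ```tsx
    import React, { useState } from 'react';

    // Stand-in for a large subtree: charts, tables, whatever.
    function ExpensiveSection() {
      return <div>...lots of components...</div>;
    }

    function BigForm() {
      const [name, setName] = useState('');
      // Every keystroke updates `name`, which re-renders BigForm and, by
      // default, everything under it. React.memo on ExpensiveSection (or
      // moving the input’s state into its own component) avoids that.
      return (
        <form>
          <input value={name} onChange={(e) => setName(e.target.value)} />
          <ExpensiveSection />
        </form>
      );
    }
    ```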

    But it’s not that bad. It’s entirely possible to make great and snappy sites with React. Arguably the problem isn’t React itself but how strongly it’s associated with horrible websites, because of how tolerant of bad code it is. It’s maybe a little too easy to learn; it gives bad developers an undeserved sense of confidence.

    Edit: And we now have better solutions for this, such as signals, which SolidJS, Vue and Svelte make heavy use of. Most of the advantages with fewer problems.
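
    A tiny SolidJS sketch of the difference: with signals, the component function runs once, and only the spots that read the signal update, so the accidental whole-tree re-render class of bug mostly disappears.

    ```tsx
    import { createSignal } from 'solid-js';

    function Counter() {
      const [count, setCount] = createSignal(0);
      // Counter runs once. Clicking updates only the text node that
      // reads count(), not the whole component.
      return (
        <button onClick={() => setCount(count() + 1)}>
          Clicked {count()} times
        </button>
      );
    }
    ```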


    Anyway, that part isn’t really why I don’t like React. The point is: skip the web, you don’t really need the web. React Native skipped the whole HTML part; it’s still JSX, but with native-style components for building the UI. The web backend worked very well: your boxes became divs with some styles, and it pretty much just worked. Do that but entirely in Rust, since Rust can run natively on all platforms. Rust gets to skip all the compromises RN needed, and skip the embedded browser entirely. Make it desktop-first, then make the web version; it’ll run just as well and might even generate better code than if a human wrote it. Making the web look native sucks, but making native fit the web is a lot easier than it looks. Letting go of HTML and CSS was a good call from React Native.


  • I wish we went the other way around: build for native and compile to HTML/CSS/WASM.

    For me the disadvantage of Electron is, well, that it doesn’t have any advantage or performance improvement over the browser version for 99% of use cases, and when you shove that onto a mobile phone it performs as horribly as the web version.

    People already use higher-level components that end up shitting out HTML and CSS anyway, so why not skip the middleman and just render the box optimally from the start? Web browsers have become good, but if you can skip parsing HTML and CSS entirely, and also skip maintaining their state, that’s even better.

    I had the misfortune of developing a React Native app, and I’d say thinking in terms of rows and columns and boxes was nice. Most of RN’s problems come from still running JS, so you have to bundle a JavaScript engine and go through the native messaging bridge, and of course it’s tied to the turd that is React. But zero complaints about the UI part when the bridge isn’t involved: very smooth and snappy, much more so than the browser. And the browser version was no different from standard React in performance.

    I like that it’s not yet another Chromium one at least.




  • Ask your admin to turn it off, or if you’re the admin, turn it off.

    They really went with the worst possible way to implement this, in that it mangles the post to rewrite all image links to the image proxy, so it’s not giving you a choice. If you want the original link you have to reprocess the post to strip the proxy. It’s like when they thought it was a good idea to store the data HTML-encoded, so non-web clients had to try to undo all of it, lossily. It should be up to the clients to add the proxy as needed and if desired. Never mangle user data for storage; always reprocess it as needed and cache it if the processing is expensive.
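
    A sketch of that client-side approach (the proxy endpoint path and the Post shape here are assumptions for illustration): the stored post keeps the original URL, and the proxy is wrapped around it only at render time.

    ```ts
    interface Post { imageUrl: string }

    function proxiedUrl(originalUrl: string, proxyBase: string): string {
      // Wrap the original URL at render time. The stored post is untouched,
      // so re-saving an edit can never double-proxy it.
      return `${proxyBase}?url=${encodeURIComponent(originalUrl)}`;
    }

    function imageSrc(post: Post, useProxy: boolean): string {
      return useProxy
        ? proxiedUrl(post.imageUrl, '/api/v3/image_proxy')
        : post.imageUrl;
    }
    ```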

    Now you edit a post and your links are rewritten to the proxy, and if you save it again, you proxy to the proxy. Just like when they applied the HTML processing on save: if you edited a post and saved it again, it would become double-encoded.

    Personally I leave it off and let Tesseract do it instead when it renders the images. That’s the right way to do it. If the user wants a fresh copy because it’s a dynamic image, they can request one on demand instead of being forced into it. And it actually works retroactively, unlike the Lemmy server only doing it for new posts.


  • API documentation isn’t a tutorial; it’s there to tell you what the arguments are, what the function does, what to expect as the output, and just generally what’s available.

    I actually have the opposite problem from you: it infuriates me when a project’s documentation is purely a bunch of examples, and then you have to guess if you want to do anything off the simple tutorial’s paved path. Tell me everything that’s available so I can piece together something for what I need; I don’t want that info in chapter 12 of the example of building a web store. I’ve been coding for nearly two decades now, and I’m not going to follow a shopping cart tutorial on the off chance that’s where you explain how the framework defines many-to-many relationships.

    I believe an ideal world has both covered: you need full API documentation that’s straight to the point, so experienced people know about all the options and functions available, but also a bunch of examples and a tutorial for those who are new, need to get started, and are generally learning how to use the library.

    Your case is probably a bit atypical, as PyTorch and AI stuff in general is inherently pretty complex. It likely assumes you know your calculus and linear algebra and such, which would make the API docs extra dense.





  • It’s end-to-end encrypted; it could be hosted on the NSA’s servers for all you care, and it should still be safe.

    The reason this is there is likely that they use those cloud services to provide the hosted offering, so they disclose that they do. I don’t think it applies to the client you download, or to instances you self-host from open-source builds on your own homeserver on your own infrastructure.


  • Why do you keep comparing phones and PCs? They’re not comparable and never will be. My PC can draw probably close to 1000W when running full bore, while mobile chips have a TDP of like 10-20W: my PC can throw 50-100x more power at the problem than your phone can. In the absolute worst case, it could have a dozen or two of those power-efficient ARM chips, because it can, and PC games would make use of all of them, so you circle back to PC superiority. My netbook draws around 5-10W, within the same range, and it’s crappier than my phone in many aspects. My new Framework 16 has a TDP of 45W, already like 2-4x more than a high-end phone.

    Even looking at Apple, the M2 has a TDP of 20W because it was spun off their iPad chips and primarily targets mobile devices like MacBooks. So while the performance is impressive in the efficiency department, I could build an ARM server with 10x the core count and have a 10x more powerful computer than the top-of-the-line M3 iMac.

    PCs running ARM would have no effect on the mobile ecosystem whatsoever. Android runs Linux, and Linux runs on a lot of CPU architectures. You can run Android on RISC-V today if you want to spend the time building it. Or MIPS. Or PowerPC. There’s literally nothing stopping you from doing that.

    The gaming experience on mobile sucks because gaming on mobile sucks. If you ran your phone at full power to game with the best graphics, it would probably be dead in 1-2 hours. Nobody would play games that murder their battery. And most people that do play games on mobile want like 10-minute games to play while sitting on the toilet, or on a bus or train or whatever. Thus, battery life is an important factor in making a game: you don’t want your game to chew through the battery, because then people start rationing their gameplay to make it to the end of the day or to the next charger.

    PCs are better not because of IBM, or even the x86 architecture, not even because of Windows. They’re better because PCs can be built with any parts you want, and you can throw as many CPUs and GPUs and NPUs and FPGAs at the problem as you want. Heck, there are even SBC PCs on PCI/PCIe cards, so you can have multiple PCs in your PC.

    Whatever you can come up with that fits in a mobile device, I can make a 10-20x more powerful PC, if nothing else by throwing 10-20 phones’ worth of hardware in it and splitting the load across all of them.

    PC games are ambitious and make use of as much hardware as they can get. If you want to show off your 3D tech you don’t limit yourself to mobile, you target dual RTX 4090 Ti graphics cards. There are great games made for lower-end hardware, and consoles like the Switch run ARM; the Zelda games are a good example. The Switch is vastly inferior to modern phones, and Yuzu can run those games better than the Switch can. My PC will happily run BotW and TotK at 4K 240Hz HDR if I ask it to. But they were designed for the Switch, and they’re pretty darn good games. So the limitation clearly isn’t that PCs exist, it’s what developers write their games for. CPU architecture isn’t a problem: we have emulators, we have Rosetta, we have Box64, we have FEX.

    If PCs didn’t exist, something else would have taken their place a long time ago, and we’d circle back to the exact same problem/question. Heck, there are routers and firewalls that run games better than your phone.


  • The 10-year-old PC has a much, much bigger power budget than a phone. It wasn’t until really recently that ARM got anywhere close to x86 performance.

    While the phone technically could be better, it would also drain in an hour or two if it were maxed out. And most people have crappy phones that can barely hold 60fps doing nothing, so mobile games usually target the lower-end devices to maximize the number of potential players, while also remaining battery-conscious.

    There’s also just not that much demand. Nobody has space on their phone for a 120GB game, and nobody wants to play a AAA game on their phone because gaming on a phone sucks ass, and if you’re going to dock the phone you might as well get a console.


  • Matrix is for chatting, not posts.

    When it goes well you get live, interactive support and your question answered fairly quickly. Nice and convenient. But as you’ve said, it has drawbacks, and that’s where forums and things like Lemmy come in, where you can still get replies days later.

    They’re different systems that reach different audiences. You use whichever fits the need and the complexity. What sucks is when the chat rooms develop knowledge that never gets known outside and isn’t indexed anywhere on the web. Some things are better discussed in forum format (or mailing lists if you’re very oldschool), while others are just better handled interactively, where the back and forth on a public forum would be painful.

    Usually there’s at least a bit of overlap, with users in Discord/Matrix/IRC and in some forum, reddit or fediverse community at the same time.


  • You can try unsubscribing and resubscribing. The switch from “subscription pending” to “subscribed” depends on the remote server sending you an activity acknowledging the subscription. New instances sometimes struggle initially, because the remote instance has to discover you first, and I think there’s a race condition where it won’t send the activity because it doesn’t know whether your instance is up yet. (There’s an instance sync job that runs periodically to ping all linked instances, and it pauses sending activities to instances that aren’t considered active. If your subscription is the first interaction, you’re not “active” yet, since it just learned about your instance.)



  • Some of those keys are public knowledge and only serve to identify which client it is (Chromium, Firefox, probably Safari), or were otherwise lifted from one of those. This one is a Safe Browsing API key: it’s used to check whether sites have been marked as phishing/scams/etc. and to warn users that a site is known to be malicious. Others are used to tie analytics or ads to the app, so the data goes into the right developer’s account metrics.
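
    For illustration, here’s roughly how a client uses such an embedded, non-secret key; the request shape follows Google’s Safe Browsing v4 Lookup API, and the key value is a placeholder:

    ```ts
    const SAFE_BROWSING_KEY = 'embedded-app-key'; // shipped inside the app, not secret

    async function isFlagged(url: string): Promise<boolean> {
      const res = await fetch(
        `https://safebrowsing.googleapis.com/v4/threatMatches:find?key=${SAFE_BROWSING_KEY}`,
        {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            client: { clientId: 'example-client', clientVersion: '1.0' },
            threatInfo: {
              threatTypes: ['MALWARE', 'SOCIAL_ENGINEERING'],
              platformTypes: ['ANY_PLATFORM'],
              threatEntryTypes: ['URL'],
              threatEntries: [{ url }],
            },
          }),
        },
      );
      const data = await res.json();
      // The key identifies the calling app; the response says whether the
      // URL matched any threat list.
      return Boolean(data.matches && data.matches.length);
    }
    ```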

    I wouldn’t call those leaked; they’re meant to be embedded into apps and aren’t considered secret keys.

    It’s common practice to use API keys like that even when they’re not really secret, just for the sake of tracking which app is making which requests, and so people can’t just openly use the API. You can easily shut down unapproved clients by rolling out a new key, and it forces an annoying whack-a-mole game of constantly having to extract the new key from the APK.



  • It all depends on how “finished” the project is, and how much it has to track a moving ecosystem.

    There are a lot of crates that you can probably write once and be done with. Like, a unit converter that hasn’t been updated since the first version of Rust is probably still just fine to use. A meter and a foot won’t change length anytime soon.

    Even a GTK app that hasn’t been updated in 5 years might not be a problem at all, as long as it compiles. Windows is full of apps that were written 30 years ago and are still shipped basically unchanged; Calculator and Notepad were two examples until Windows 10/11.

    Another example: an FTP library or client. It’s basically a dead protocol at this point, so even if it hasn’t been updated in years, it’s likely fine and there’s not much left to improve.

    It really depends on what it does, how much the rest of the world around it is changing, and how complete the code already is.