• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: July 21st, 2023

  • I personally like to keep it on. Most of my messaging is with family and friends, and it’s good to know whether or not someone has read my message.

    Especially if things are time critical. Picking someone up? Asking if they need anything from the supermarket? If I see that they read the message I know that they are going to reply in a moment. If they didn’t even read the message I won’t have to wait around / can guess that they are currently in the car or wherever.

    Sometimes you also have a spotty connection, so the received + read receipt can tell you if they actually got your message.

    In general if someone sends me a message and I read it… I’m going to fucking reply to it (if I’m not super busy, and even then I might send a quick message back). I seriously don’t get people who just leave things on read and then forget about it.




  • Vlyn@lemmy.world to Programmer Humor@lemmy.ml · “Sounds great in theory” · 1 year ago

    TDD is great when you have a very narrow use case, for example an algorithm, where you already know beforehand: if I throw A in, B should come out. If I throw B in, C should come out. If I throw Z in, an error should be thrown. And so on.

    For that it’s awesome, but that’s mostly algorithms.
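
    A rough sketch of what that looks like in practice (Go here, with a made-up RomanToInt converter standing in for “the algorithm”; none of this is from the original post): you write the cases first, watch them fail, then fill in the implementation until they pass.

    ```go
    // tdd_sketch_test.go: run with `go test`. The cases below are written
    // before the real implementation exists; the stub only makes it compile,
    // so the suite starts out "red" on purpose.
    package roman

    import "testing"

    // Stub so the file compiles; the actual logic comes after the tests.
    func RomanToInt(s string) (int, error) { return 0, nil }

    func TestRomanToInt(t *testing.T) {
        cases := []struct {
            name    string
            in      string
            want    int
            wantErr bool
        }{
            {"single numeral", "X", 10, false},   // throw A in, B should come out
            {"subtractive pair", "IV", 4, false}, // throw B in, C should come out
            {"garbage input", "?", 0, true},      // throw Z in, an error should be thrown
        }

        for _, c := range cases {
            t.Run(c.name, func(t *testing.T) {
                got, err := RomanToInt(c.in)
                if (err != nil) != c.wantErr {
                    t.Fatalf("RomanToInt(%q) error = %v, wantErr %v", c.in, err, c.wantErr)
                }
                if !c.wantErr && got != c.want {
                    t.Fatalf("RomanToInt(%q) = %d, want %d", c.in, got, c.want)
                }
            })
        }
    }
    ```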

    In real CRUD apps though? You have to write the actual implementation before the tests, because in the tests you have to mock all the dependencies you used, come up with fake test data, mock functions from other classes you aren’t currently testing, and so on. You could try TDD for this, but then you’ll probably spend ten times longer writing and re-writing tests :-/

    After a while it boils down to: small unit tests where they make sense, then system-wide integration tests for complex use cases.


  • Multi-threading is difficult, you can’t just slap it on everything and call it a day.

    There are languages where it’s easier (Go, Rust, …), but parallelism is an advanced feature. Do it wrong and you get race conditions or deadlocks. There’s a reason you learn about this later in programming, but you do learn about it (and get to use it).
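
    A minimal sketch of the classic way to get it wrong (Go, since it’s one of the languages mentioned; the counter example is mine, not from the post): hammer a shared counter from several goroutines without synchronization and the total comes out wrong. `go run -race` flags it, and a mutex fixes it.

    ```go
    // race_sketch.go: run with `go run -race race_sketch.go` to watch the
    // race detector complain about the unsynchronized version.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        const workers, increments = 8, 10_000

        // Broken: every goroutine hammers the counter with no synchronization.
        var unsafeCounter int
        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < increments; j++ {
                    unsafeCounter++ // data race: read-modify-write is not atomic
                }
            }()
        }
        wg.Wait()

        // Fixed: a mutex serializes access to the shared counter.
        var safeCounter int
        var mu sync.Mutex
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < increments; j++ {
                    mu.Lock()
                    safeCounter++
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()

        fmt.Println("unsafe:", unsafeCounter, "(usually less than 80000)")
        fmt.Println("safe:  ", safeCounter, "(always 80000)")
    }
    ```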

    Let’s be honest, most programmers work on CRUD applications, which are highly sequential and usually wait on I/O rather than CPU cycles. Saving 2ms on some operation doesn’t matter if you wait 50ms on the database (and sometimes using more threads is actually slower due to the orchestration overhead). If you’re working on highly efficient algorithms or with GPUs, then parallelism has a much higher priority. But it always depends on what you’re working with.
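
    To put toy numbers on that (a sketch, not a benchmark; the 50ms and 2ms figures are just the ones from this paragraph):

    ```go
    // crud_latency_sketch.go: made-up request timings to show why shaving
    // CPU time barely moves the needle when the request is waiting on I/O.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        dbWait := 50 * time.Millisecond // time the request spends waiting on the database
        cpuWork := 2 * time.Millisecond // the part multi-threading could actually shave off
        cores := 8

        sequential := dbWait + cpuWork
        parallel := dbWait + cpuWork/time.Duration(cores) // best case for the CPU part

        fmt.Println("single-threaded request:", sequential) // 52ms
        fmt.Println("perfectly parallel CPU: ", parallel)   // 50.25ms
        fmt.Printf("saved: %v out of %v (%.1f%%)\n",
            sequential-parallel, sequential,
            100*float64(sequential-parallel)/float64(sequential))
    }
    ```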

    Depending on your tech stack you might not even have the option to properly use parallelism, for example with JavaScript (if you don’t jump through hoops).


  • At this point you’re just arguing to argue. Of course this is about the math.

    This is Amdahl’s law, it’s always about the math:

    https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/AmdahlsLaw.svg/1024px-AmdahlsLaw.svg.png
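
    For reference, the formula behind that graph as a tiny sketch you can play with (the parallel fractions match the usual 50/75/90/95% example curves; the core counts are arbitrary):

    ```go
    // amdahl_sketch.go: speedup(p, n) = 1 / ((1 - p) + p/n), where p is the
    // fraction of the work that can run in parallel and n is the core count.
    package main

    import "fmt"

    func speedup(parallelFraction float64, cores int) float64 {
        return 1 / ((1 - parallelFraction) + parallelFraction/float64(cores))
    }

    func main() {
        for _, p := range []float64{0.50, 0.75, 0.90, 0.95} {
            fmt.Printf("p = %.2f: 8 cores -> %5.1fx, 64 cores -> %5.1fx, 65536 cores -> %5.1fx\n",
                p, speedup(p, 8), speedup(p, 64), speedup(p, 65536))
        }
    }
    ```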

    No one is telling students to use or not use parallelism; it depends on the workload. If your workload is highly sequential, multi-threading won’t help you much, no matter how many cores you have. So you might be able to switch out the algorithm and go with a different one that accomplishes the same job, or re-order tasks and rethink how you’re using the data you have available.

    Practical example: the game Factorio. It has thousands of conveyor belts that have to move items in a deterministic way. So as not to mess things up, this part of the game ran on a single thread to calculate where everything landed (as belts can intersect, items can block each other and so on). With some clever tricks they rebuilt how it works, which allowed them to safely spread the workload over several cores (at least for groups of belts). Bit of a write-up here (under “Multithreaded belts”).

    Teaching software development involves teaching the theory. Without that you’d have a difficult time deciding what can and what can’t benefit from multi-threading. Absolutely no one says “never multi-thread!” or “always multi-thread!”; if you had a teacher like that, then they sucked.

    Learning about Amdahl’s law was a tiny part of my university course. A much bigger part was actually multi-threading programs, working around deadlocks, doing performance testing and so on. You’re acting as if the teacher shows you Amdahl’s law and then says “Obviously this means multi-threading isn’t worth it, let’s move on to the next topic”.


  • You still don’t get it. This is about algorithmic complexity.

    Say you have an algorithm where 90% of the work can be done in parallel, but 10% can’t. No matter how many cores you throw at it, be it 4, 10, or a billion, that 10% will be the slowest part that you can’t optimize with more cores. So even with an unlimited number of cores, your algorithm still has to wait on the last 10% that runs on a single core.
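
    Plugging that 90/10 split into the formula, with the same 4, 10, and a billion cores (a throwaway sketch to show the ceiling):

    ```go
    // ceiling_sketch.go: with 10% strictly serial work the speedup ceiling
    // is 1/0.10 = 10x, no matter how many cores you add.
    package main

    import "fmt"

    func main() {
        const parallel = 0.90
        for _, cores := range []float64{4, 10, 1e9} {
            fmt.Printf("%.0f cores: %.2fx speedup\n", cores, 1/((1-parallel)+parallel/cores))
        }
    }
    ```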

    Amdahl’s law is simply about that 10% you can’t speed up, no matter how many cores you have. It’s a bottleneck.

    There are algorithms you can’t run in parallel, simply because the results depend on each other. For example a cipher where you first calculate block A, and then to calculate block B you need the result of block A. You can’t do blocks A and B at the same time, it’s not possible. Yes, you can use multi-threading to calculate A, then do it again to calculate B, but you still have to wait for each result in turn, which means no matter how fast each step gets, there’s a minimum time you’ll always need.
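
    The shape of that dependency, sketched out (not a real cipher, just the chaining structure, with a hash standing in for the per-block work):

    ```go
    // chain_sketch.go: each "block" needs the previous block's output as
    // input, so the loop below is inherently sequential. You could parallelize
    // the work inside encryptBlock, but never the blocks against each other.
    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // Stand-in for the per-block work; a real cipher would do something else here.
    func encryptBlock(prev, plain [32]byte) [32]byte {
        var buf [64]byte
        copy(buf[:32], prev[:])
        copy(buf[32:], plain[:])
        return sha256.Sum256(buf[:])
    }

    func main() {
        plaintext := make([][32]byte, 8)
        for i := range plaintext {
            plaintext[i][0] = byte(i)
        }

        var prev [32]byte // the "IV"
        for i, p := range plaintext {
            prev = encryptBlock(prev, p) // block i depends on block i-1: no parallelism here
            fmt.Printf("block %d -> %x...\n", i, prev[:4])
        }
    }
    ```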

    Throwing more hardware at this only helps to a certain degree, that’s the entire point: at some point the parts you can’t run in parallel will hold you back. This obviously doesn’t count for workloads that can be done 100% in parallel (like rendering, where you can split the work up without issues); Amdahl’s law doesn’t limit you there, as the amount of single-core work would be zero in the equation.

    The whole thing is used in software development (I heard of Amdahl’s law in my university class) to decide if it makes sense to multi-thread part of the application. If the work you do is too sequential then multi-threading won’t give you much of a benefit (or makes it run worse, as you have to spin up threads and synchronize results).


  • There’s this Computer Science 101 concept called Amdahl’s Law that was taught wrong as a result of this - people insisted ‘more processors won’t work faster,’ when what it said was, ‘more processors do more work.’

    You massacred my boy there. It doesn’t say that at all. Amdahl’s law is actually a formula for how much speedup you can get by using more cores, which boils down to: how much of your program can’t be run in parallel? You can throw a billion cores at something, but if there’s a step in your algorithm that can’t run in parallel… that’s going to be the part everything waits on.

    Or copied:

    Amdahl’s law is a principle that states that the maximum potential improvement to the performance of a system is limited by the portion of the system that cannot be improved. In other words, the performance improvement of a system as a whole is limited by its bottlenecks.





  • You only get good quality if you use the right model, the right keywords, the right negative prompt, the right settings, … and then it can still be pure luck.

    If you see a high quality AI image that actually looks good (not just parts of it, but the whole composition) then someone probably spent hours with fine-tuning and someone else spent weeks to customize the model.

    And even if you’re good at that, you’ll never get exactly the image you had in your mind. Especially as most models are heavily biased (You can create a portrait of a busty beautiful woman, but the second one you create probably has a very similar face).

    This might get better relatively fast, but right now AI art is not a replacement for good artists. Especially if you need more than one image with consistency between them.

    It’s more like a superpowered Photoshop that you can mess around with and get cool results from, just that instead of filters or a magic stamp you generate the entire image.

    Super cool tech, but of course artists feel threatened. Except for the popular ones, who are already drowning in commissions.


  • You forgot a massive step in-between: Digital art / Photoshop.

    Which already vastly sped up art creation and made it easier (when you can just use special brushes instead of having to spend hours doing a pattern by hand).

    And even though it’s a lot easier, you still need artists to produce proper products. Good artists and designers will keep their jobs for the foreseeable future, while simpler one-shot works can be done by AI.


  • If you have a basic understanding of how AI works, then this argument doesn’t hold much water.

    Let’s take the human approach: I’m going to look at all the works of popular painters to learn their styles. Then I grab my painting tools and create similar works.

    No credit there, I still used all those other works as input and created my own based on them.

    With AI it’s the same, just at a much bigger scale. If you ask AI to redraw the Mona Lisa you won’t get a 1:1 copy out, because the original doesn’t exist in the trained model; it’s just statistics.

    Same as if you tell a human to recreate the painting, no matter how good they are, they’ll never be able to perfectly reproduce the original work.





  • Absolutely NEVER mark anything you actually want to keep as spam if you’re on a big online email provider. They use shared systems, so it’s not just spam for you, but potentially spam for everyone on that provider. That’s one way they protect people from receiving spam: 100 users marked the same newsletter email as spam? Alright, that newsletter goes to the spam folder for the next 20k users.

    If you mark legitimate emails as spam for fun you’re fucking up the system (and giving the sender a massive headache if their emails suddenly land in the spam folder for every @gmail.com recipient).



  • Vlyn@lemmy.world to Mildly Infuriating@lemmy.world · “Thanks Spez!” · 1 year ago

    You can believe what you want, but I literally did this a month ago: edited my comments, then submitted a GDPR request and got a large package of my data. It showed every upvote and downvote, for example (which was over a million entries, ouch), and every comment and post I ever made over the last 11 years.

    Deleting your comment does not delete the database entry. It’s up for debate whether that conforms to the GDPR, because theoretically you could have personal data in your comment… but for now they don’t delete them. So obviously your GDPR request will contain deleted comments, as they are still right where you left them (and you agreed to Reddit’s terms and conditions, which technically make any content you post on their platform their content, legally speaking).

    If you edited your comment and then deleted it and the GDPR request showed the original comment… then that would be a different matter. As far as I could tell, they don’t keep the pre-edit version of the comment around. Though if you do it too fast (an instant edit and delete), maybe something gets messed up and the edit doesn’t stick. That hasn’t happened to me yet (unless I edit more than one comment within about 3 seconds, then it gets rate limited).

    Reddit admitted they only keep the last version of your comments around, so if you edit them with random crap it’s as close to deletion as you can get.