  • There was a ton of hairbrained theories floating around, but nobody had any definitive explanation.

    Well I was new to the company and fresh out of college, so I was tasked with figuring this one out.

    This checks out lol

    Knowing very little about USB audio processing, but having cut my teeth in college on 8-bit 8051 processors, I knew what kind of functions tended to be slow.

    I often wonder if this deep-level understanding of embedded software/firmware design is still the norm in university instruction. My suspicion is that the focus has shifted to leveraging ever-increasing SoC performance and capabilities in pursuit of making it Just Work™, proving Wirth’s Law in the process via badly optimized code.

    This was an excellent read, btw.





  • 1 - I get that light is flashed in binary to code chips but how does it actually fookin work ? What is the machine emmiting [sic] this light made up of ?

    This video by Branch Education (on YouTube or Nebula) is a high-level explanation of every step in a semiconductor fab. It doesn’t go over the details of how semiconductor junctions work, though. That sort of device physics is discussed in this YouTube video by Ben Eater, “how semiconductors work”.

    2 - How was program’s, OSs, Kernal [sic] etc loaded on CPU in early days when there were no additional computers to feed it those like today ?

    When the CPU powers up, typically the very first thing it starts to execute is the bootloader. Bootloaders vary depending on the system, and today’s modern Intel or AMD desktop machines boot very differently from their 1980s predecessors. However, since the IBM PC laid the foundation for how most computers booted for nearly four decades, it may be instructive to see how it worked in the 80s. This WikiBook on x86 bootloading should be valid for all 32-bit x86 targets, from the original 8086 to the i686. It may even be valid beyond that, but then UEFI started to take off, which changed everything into a more modern form.

    But even before the 80s, computers could have a program/kernel/whatever loaded using magnetic tape, punch cards, or even by hand with physical switches, each representing one bit.

    But how does the computer decode this binary “machine code” into instructions to perform? See this video by Ben Eater, explaining machine instructions for the MOS 6502 CPU (circa 1975). The age of the CPU is not important; rather, by the 70s the basics of CPU operation had already been laid down, and that CPU is easy to explain yet non-trivial.
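
    To make that fetch-decode-execute idea concrete, here is a toy sketch in Python (an invented instruction set, nothing like the real 6502 encoding): the CPU repeatedly fetches an opcode byte from memory, decodes what it means, and executes it.

    # Toy fetch-decode-execute loop with a made-up instruction set.
    # A real CPU such as the 6502 has many more opcodes, addressing
    # modes, and status flags, but the loop itself is the same idea.
    memory = [
        0x01, 10,   # LOAD: put 10 into the accumulator
        0x02, 32,   # ADD: add 32 to the accumulator
        0x03,       # PRINT: output the accumulator
        0xFF,       # HALT
    ]
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        opcode = memory[pc]      # fetch
        if opcode == 0x01:       # decode + execute: LOAD immediate
            acc = memory[pc + 1]
            pc += 2
        elif opcode == 0x02:     # ADD immediate
            acc += memory[pc + 1]
            pc += 2
        elif opcode == 0x03:     # PRINT
            print(acc)           # prints 42
            pc += 1
        elif opcode == 0xFF:     # HALT
            break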

    3 - I get internet is light storing information but how ? Fookin HOW ?

    The mechanics of light bouncing inside a fibre optic cable are well explained in this YouTube video by engineerguy. But for an explanation of how ones-and-zeros get converted into light to be transmitted, that’s a bit more involved. I might just point you to the Wikipedia page for fibre optic communications.

    How the data is encoded is important, as this has a significant impact on bandwidth and data integrity, not just for light but also for wireless RF and wireline transmission. For wireless, this Branch Education video on Starlink (YouTube or Nebula) is instructive. And for wired, this Computerphile YouTube video on ADSL covers the challenges faced.
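
    If it helps to see what “encoding” means at the lowest level, here is a minimal Python sketch of Manchester coding, one of the simplest line codes (the bit values are made up, and real fibre/DSL links use far fancier schemes like QAM or DMT): every data bit is mapped onto a pair of physical symbols, so the receiver can recover both the data and the timing from the signal itself.

    # Minimal Manchester coding sketch: each data bit becomes two
    # "half-bit" symbols, guaranteeing a transition in the middle of
    # every bit so the receiver can recover the clock from the data.
    def manchester_encode(bits):
        # Convention used here: 0 -> high-then-low, 1 -> low-then-high
        symbols = []
        for b in bits:
            symbols += [1, 0] if b == 0 else [0, 1]
        return symbols

    def manchester_decode(symbols):
        bits = []
        for first, second in zip(symbols[0::2], symbols[1::2]):
            bits.append(0 if (first, second) == (1, 0) else 1)
        return bits

    data = [1, 0, 1, 1, 0, 0, 1, 0]    # a made-up byte of payload
    line = manchester_encode(data)     # what actually goes on the wire/fibre
    assert manchester_decode(line) == data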

    Quite frankly, I might just recommend the entirety of the Computerphile channel, particularly their back catalogue, where they laid down computer fundamentals.

    4 - How did it all come to be like it is today and ist it possible for one human to even learn how it all works or are we just limited one or two things ? Like cab we only know how to program or how to make hardware but not both or all ?

    As of 2024, the field is enormous, to the point that a CompSci degree necessarily has to be focused on a specific concentration. But that doesn’t necessarily mean the hard stuff like device physics is off-limits, leaving just stuff like software and AI. Sam Zeloof has been making homemade microchips, devising his own semiconductor process and posting it on YouTube.

    Specifically to your question about either software or hardware: the specialty of embedded software engineering requires skill with low-level software and firmware, as well as dealing with substantial hardware-specific details. People who write drivers or libraries for new hardware need skills from both regimes, serving as the bridge between the electrical engineers who design the hardware and the software developers who use it.

    Likewise, developers for high-performance computing need to know the hardware inside and out to have any chance of extracting every last bit (pun intended) of speed. However, these developers tend to rely on documentation such as data sheets, rather than being keenly aware of how the hardware was manufactured. Some level of logical abstraction is necessary to tractably understand today’s necessarily large and complex systems.

    5 - Do we have to join Intel first or something to learn how most of the things work lol ?

    Nope! Often, you can look to existing references, such as Linux source code, to provide a peek at what complexities exist in today’s machines. I say that, but the Linux kernel is truly a monster, not because it’s badly written, but because they willingly take code to support every single bleeding platform that people are willing to author code for. And that means lots and lots of edge cases; there’s no such thing as a “standard” computer. X86 might be the closest to a “standard” but Intel has never quite been consistent across that architecture’s existence. And ARM and RISC-V are on the rise, in any case.

    Perhaps what’s most important is to develop strong foundations to build on. Have a cursory understanding of computing, networking, storage, wireless, software licenses, encryption, video encoding/decoding, UI/UX, graphics, services, containers, data and statistical analysis, and data exchange formats. But then pick one and focus on it, seeing how it interacts with other parts of the computing world.

    Growing up, I had an interest in IT and computer maintenance. Then it evolved into writing websites. Then into writing C++ software. Right before university, I started playing around with the Arduino’s ATmega328P microcontroller directly, and so I entered uni as a Computer Engineer, hoping to do both software and hardware.

    The space is huge, so start somewhere that interests you. From the examples above, I think online videos are a fantastic resource, but so are blog posts written by engineers at major companies, as are talks at conferences, as is sitting in on university courses.

    Good luck and good studies!



    In agreement with the other comments, this is indeed a very dense diagram, specifically the right side. Focusing on that some more, my chief concern is that this novel triangle representation is very easy to misread.

    Let’s take the dot in the middle, the one with the “10M” arrow. What would you say the car percentage for that dot is? The axis along the bottom of the triangle is labeled 0 to 100%, and the dot is just to the right of the 50% demarcation. So maybe 52% or 55% seems reasonable, yeah?

    But the axis is deceiving: notice how the demarcations are all slanted at the bottom. The dot actually represents about 42%, because although the axis is labeled horizontally, the 50% gridline slopes north-east rather than running straight up. You can see the 50% number itself is actually rotated 60 degrees counter-clockwise.

    The public transit axis on the left of the triangle has its demarcations tilted clockwise by 60 degrees as well. Only the active transport axis matches the conventional Y axis.
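
    To put numbers on that misread, here is a quick Python sketch of how a ternary plot places a point (the mode shares and the corner assignment below are made up for illustration, not read off the diagram): a point whose car share is only 42% can still land horizontally just past the 50% tick.

    import math

    # Standard ternary placement: the three shares (summing to 100%) act
    # as weights on the triangle's corners. Assumed corners here:
    #   100% car     -> (1.0, 0)            bottom-right
    #   100% transit -> (0.0, 0)            bottom-left
    #   100% active  -> (0.5, sqrt(3)/2)    top
    def ternary_xy(car, transit, active):
        total = car + transit + active
        car, transit, active = car / total, transit / total, active / total
        x = car * 1.0 + transit * 0.0 + active * 0.5
        y = active * math.sqrt(3) / 2
        return x, y

    # Hypothetical mode shares: 42% car, 38% transit, 20% active.
    x, y = ternary_xy(42, 38, 20)
    print(round(x, 2))   # 0.52 -- horizontally just past "50%", yet car is 42%

    With this layout, the constant-car gridlines run parallel to the left edge of the triangle, which is why they slope north-east instead of pointing straight up.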

    For that UI/UX reason alone, I wouldn’t endorse this as a “great” depiction of statistical data. If a diagram can – intentionally or not – be used to mislead a casual reader, it’s not one we should put up on a pedestal.

    I also had a gripe about the successive colors not being consistent for each mode of transport, but that’s minor and easily corrected. The tilted axes may require some reworking though.


  • I think this can be more generalized as: why do some people eschew anonymity online? And a few plausible reasons come to mind:

    • a convention carried over from the pre-Internet days to be honest and frank as one would be in-person
    • having no prior experience with anonymity or a basis to expect anonymity to last
    • they’re already a real-life edgelord and so the in-person/online distinction is artificial, or have an IDGAF attitude to such distinctions

    IMO, older people tend to have the first reason, having grown up with the Internet as a communication tool. Younger, post-2000 people might have the second reason, because, given the events during their lifetime, privacy has eroded to the point it’s almost mythical. Or it’s like the landed gentry: you have to be highly privileged to afford to maintain anonymity.

    I have no thoughts as to the prevalence of the third reason, but I’m reminded of a post I saw on Mastodon months ago, which went something like this: every village used to have the village idiot, but he was mostly benign because everyone in town knew he was an idiot. One moron in every 5 or 10 thousand people is fine. But with the Internet, all the village idiots can network with each other, expanding their personal communities and hyping themselves up to do things they otherwise wouldn’t have found support for.

    Coming back to the question, in the context above, maybe online anonymity is a learned practice, meaning it has to be taught and isn’t plainly natural. Nothing quite like the Internet has ever existed in human history, so what’s “natural” may just not have caught up yet. That internet literacy and safety is a topic requiring instruction bolsters this thought.



    I’m reluctant to upvote this, since it’s leaving out a lot of rather important caveats about the dataset. This depiction is presented as “the number of aviation incidents between the two giants since 2014 in the U.S. and international waters”. Here, “international waters” means the regions of the North Pacific Ocean, North Atlantic Ocean, and Gulf of Mexico whose airspace services are delegated by ICAO to the United States and administered by the FAA. It’s not US airspace, but it’s administered as if it were, meaning accident reports get filed with the FAA and NTSB, the source of this data.

    The other caveat is that the ratio of the total Boeing fleet flying through FAA-administered airspace to the total Airbus fleet is closer to 2-to-1, with nearly twice as many Boeing aircraft as Airbus aircraft, using 2018 estimates. This includes all the aircraft which US airlines currently operate, not just the newest ones they’ve bought in recent years.
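
    As a toy illustration of why that matters (the incident counts below are hypothetical, and the fleet sizes just reflect the rough 2-to-1 ratio), raw counts and per-aircraft rates can tell very different stories:

    # Hypothetical numbers, purely to illustrate normalization -- not real data.
    incidents = {"Boeing": 140, "Airbus": 90}
    fleet_size = {"Boeing": 4000, "Airbus": 2000}   # roughly 2-to-1

    for maker in incidents:
        rate = incidents[maker] / fleet_size[maker]
        print(f"{maker}: {incidents[maker]} incidents, {rate:.3f} per aircraft")

    # Boeing "wins" on raw counts here, yet has the lower per-aircraft rate.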

    Finally, in reporting parlance, an aircraft “incident” is an event in which no serious injuries occurred. If major injuries or death occurred, that would instead be an “aircraft accident”. So an incident could include anything like:

    • Returning to the airport because of an unruly passenger
    • Another aircraft getting too close but not requiring evasive manoeuvres (aka minimum separation violation)
    • Overspeeding of the aircraft, such as exceeding 250 knots while still below 10,000 ft
    • An engine failure
    • A door plug falling off, causing minor injuries to three people but no deaths
    • A passenger getting their arm stuck in the toilet while reaching for their dropped phone

    Why might Boeing aircraft have more incidents? Sure, they might be shoddily assembled. But it could also be a matter of fleet distribution: if Boeing makes more wide-body aircraft than Airbus, which thus carry more passengers, then passenger-related incidents would be overrepresented for Boeing aircraft. Suffice it to say, this single graphic isn’t giving enough depth to a complicated situation.




  • The knot is non-SI but perfectly metric and actually makes sense as a nautical mile is exactly one degree meridian

    I do admire the nautical mile for being based on something which has proven to be continually relevant (maritime navigation) as well as being brought forward to new, related fields (aeronautical navigation). And I am aware that it was redefined in SI units, so there’s no incompatibility. I’m mostly poking fun at the kN abbreviation; I agree that no one is confusing kilonewtons with knots, not unless there’s a hurricane putting a torque on a broadcasting tower…

    No standard abbreviation exists for nautical miles

    We can invent one: kn-h. It’s knot-hours, which is technically correct but horrific to look at. It’s like the time I came across hp-h (horsepower-hour) to measure gasoline energy. :(

    if you take all those colonial unit

    In defense of American national pride, I have to point out that many of these came from the Brits. Though we’re guilty of perpetuating them, even after the British have given up on them haha

    An inch is 25mm, and a foot an even 1/3rd of a metre while a yard is exactly one metre.

    I’m a dual-capable American that can use either SI or US Customary – it’s the occupational hazard of being an engineer lol – but I went into a cold sweat thinking about all the awful things that would happen with a 25 mm inch, and even worse things with 3 ft to the meter. Like, that’s not even a multiple of 2, 5, or 10! At least let it be 40 inches to the meter. /s

    There’s also other SI-adjacent strangeness such as the hectare

    I like to explain to other Americans that metric is easy, using the hectare as an example. What’s a hectare? It’s about 2.47 acres. Or, more relatably, it’s the average size of a Walmart supercenter, at about 107,000 sq ft.

    1 hectare == 1 Walmart
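
    If anyone wants to check that arithmetic, it’s a quick Python one-off (the only definitions needed: 1 hectare = 10,000 m², 1 ft = 0.3048 m, 1 acre = 43,560 sq ft):

    # Hectare sanity check from exact unit definitions.
    sq_ft_per_hectare = 10_000 / 0.3048**2            # ~107,639 sq ft
    acres_per_hectare = sq_ft_per_hectare / 43_560    # ~2.47 acres
    print(round(sq_ft_per_hectare), round(acres_per_hectare, 2))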



  • I’m afraid I have no suggestions for DoT servers.

    One tip for your debugging that might be useful is to use dig to directly query DNS servers, to help identify where a DNS issue may lie. For example, your earlier test on mobile happened to be using Google’s DNS server on legacy IP (8.8.8.8). If you ran the following on your desktop, I would imagine that you would see the AAAA record:

    dig @8.8.8.8 mydomain.example.com AAAA

    If this succeeds, you know that Google’s DNS server is a viable choice for resolving your AAAA record. You can then test your local network’s DNS server, to see if it’ll provide the AAAA record. And then you can test your local machine’s DNS server (eg systemd-resolved). Somewhere, something is not returning your AAAA record, and you can slowly smoke it out. Good luck!
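
    If it helps, that same smoke-out can be scripted. Here is a rough Python sketch (assuming the third-party dnspython package; the router address and hostname are placeholders) that asks each resolver in the chain for the AAAA record:

    # Query each resolver in the chain for the AAAA record and see
    # where the answer disappears. Requires: pip install dnspython
    import dns.resolver

    HOSTNAME = "mydomain.example.com"                 # placeholder
    RESOLVERS = {
        "Google public DNS": "8.8.8.8",
        "local network DNS (router)": "192.168.1.1",  # placeholder
        "local stub resolver (systemd-resolved)": "127.0.0.53",
    }

    for label, server in RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            answer = resolver.resolve(HOSTNAME, "AAAA")
            records = ", ".join(rdata.to_text() for rdata in answer)
            print(f"{label} ({server}): {records}")
        except Exception as exc:   # NXDOMAIN, NoAnswer, timeout, ...
            print(f"{label} ({server}): no AAAA ({exc.__class__.__name__})")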


  • If I understand correctly, you’re now able to verify the AAAA on mobile. But you’re still not able to connect to the web server from your mobile phone. Do I have that right?

    I believe in a different comment here, you said that your mobile network doesn’t support IPv6, nor does a local WiFi network. In that case, it seems like your phone is performing DNS lookups just fine, but has no way to connect to an IPv6 destination.

    If your desktop does have IPv6 connectivity but has DNS resolution issues, then I would now look into resolving that. To be clear, was your desktop a Linux/Unix system?


    If you describe what you configured using DNS and what tests you’ve performed, people in this community could help debug that issue as well.

    An AAAA record mapping a hostname to an IPv6 address should be fairly trouble-free. If you create a new record, the “dig” command should be able to query it immediately, as the DNS servers will go through to the authoritative server, which has the new record. But if you modified an existing record, then the old record’s TTL value might cause the old value to remain in DNS caches for a while.

    When in doubt, you can also aim “dig” at the authoritative name server directly, to rule out an issue with your local DNS server or with your ISP’s DNS server.



    I’m not quite sure I follow. The AGPL mirrors the GPL, with an extra proviso that accessing the software via the network constitutes “use” of the binary, not “distribution” of the binary. Under GPL, the mere use of a binary does not require the availability of source.

    Example: a student uses a GNU/Linux computer at their university computer lab. She runs the unmodified GNU “tar” command, which is GPL licensed. She is not entitled to a copy of the source from the university, because execution is a “use” of the binary on an already-provisioned machine, not a “distribution” of the binary.

    Example: a student is given a software assignment from her professor, along with a .7z file containing old versions of “tar” that contain bugs, all GPL licensed. This is a distribution – as in, a copy – of the binary, so she is entitled to a copy or link to the source from her professor.

    The first example helps explain what the AGPL adds, in the context of network use. Consider what happens if the university actually modified the “tar” command installed on their machines. They still would not have to distribute the modified source to the students, because students only execute (“use”) the binaries. But with AGPL, use of the modified software over a network obliges source distribution.

    Phrased another way, AGPL has every guarantee that GPL does, but adds another obligation for modified use via a network. Unmodified use does not require source distribution, under both GPL and AGPL.


    One of the drawbacks of software licensing with community projects – although there are some (controversial) ways to sidestep this – is that the license needs to be selected at the outset of the project, and everyone has to agree both to that license and to any later change of it.

    If all the initial parties agree to use a FOSS license, they and all subsequent contributors under that license cannot complain that someone is actually employing that software per the terms of the license. A project might choose FOSS because they want to make sure the codebase only dies when it disappears from the last developer’s disk.

    If instead the initial parties decided on some sort of profit-sharing license – I don’t know of one off the top of my head – then they and future contributors cannot complain if no business wants to use the software, either because FOSS competitors exist or because they don’t like the profit split ratio in the license. If that ratio is fixed in the license, the project could die from lack of interest, since changing the license terms means everyone who contributed has to agree, so a single hardliner will doom the already-written code to obscurity.

    The sidestep method – which is what appears to have been used by Redis to do this relicensing to the SSPL – is that all contributors must sign a separate agreement giving Redis Inc a stake in their contribution’s copyright. This contributor agreement means any change to the Redis codebase – since its inception? Idk – has effectively been dual-licensed: BSD to everyone, and a special grant to Redis Inc, who can then relicense that work to everyone under a new license.

    Does the latter mean Redis Inc could one day switch to a fully closed-source license? Absolutely! That’s why this mechanism is controversial, since it gives the project’s legal entity all the copyright powers, to level up to FOSS or level down to proprietary. Sure, you can still use the old code under the old license, but that’s cold comfort and is exactly why hard forks of Redis are becoming popular right now.

    In short, software projects have to lay out their priorities at the outset. If they want enduring code, that’s their choice. If they want people to pitch in a fair share, that’s fine too. But that choice entails tradeoffs, which they should have known from the start. Some mechanisms allow the flexibility to change priorities in the future, but it’s a centralized, double-edged sword.