Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

https://github.com/Atemu
https://reddit.com/u/Atemu12 (Probably won’t be active much anymore.)

  • 2 Posts
  • 214 Comments
Joined 4 years ago
Cake day: June 25th, 2020



  • They were mentioned because a file they are the code owner of was modified in the PR.

    The modifications came from another branch which you accidentally(?) merged into yours. The problem is that those commits weren’t in master yet, so GH considers them part of the changeset of your branch. If they were in master already, GH would only consider the merge commit itself part of the changeset, and a merge commit does not contain any changes itself (unless you resolved a conflict).

    If you had rebased atop the other branch, you would still have had the other branch’s commits in your changeset; it’d be as if you were trying to merge the other branch plus your own changes into master.
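
    A rough way to see what GH counts, locally (branch names are placeholders; this is a sketch of the idea, not exactly how GH computes it):

        # Commits GH will attribute to your branch: reachable from it but not from master
        git log --oneline master..your-branch

        # Files the PR shows as changed: the three-dot diff, i.e. against the merge base
        git diff --stat master...your-branch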






  • Atemu@lemmy.ml to Programmer Humor@programming.dev · Rebase Supremacy
    6 months ago

    “You should IMO always do this when putting your work on a shared branch”

    No. You should never squash as a rule unless your entire team can’t be bothered to use git correctly, and in that case it’s a workaround for that problem, not a generally good policy.

    Automatic squashes make it impossible to split a feature branch into logical units of work; they reduce every feature branch to a single commit, which is quite stupid.
    If you ever need to look at a list of feature branch changes, one feature branch per line, for some reason, the correct tool to use is a first-parent log. In a proper git history, that will show you all the merge commits on the main branch, one per feature branch, as if you had squashed.
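
    A minimal sketch of such a first-parent log (assuming the main branch is called master):

        # One entry per merge commit on master, i.e. one per merged feature branch
        git log --first-parent --merges --oneline master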

    Rebase “merges” are similarly stupid: you lose the entire notion of what happened together as a unit of work, i.e. what was part of the same feature branch and what wasn’t. Merge commits denote the end of a feature branch and, together with the merge base, let you always determine what was committed as part of which feature branch.
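
    For example, the second parent of a merge commit is the tip of the feature branch it merged, so you can list exactly what that branch contributed (<merge-commit> is a placeholder, and this assumes the feature branch itself contains no further merges):

        # Commits that were part of the feature branch ended by <merge-commit>
        git log --oneline <merge-commit>^1..<merge-commit>^2

        # The merge base, i.e. the commit the feature branch was forked off from
        git merge-base <merge-commit>^1 <merge-commit>^2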



  • Because when debugging, you typically don’t care about the details of “wip”, “some more stuff”, “Merge remote-tracking branch 'origin/master'”, “almost working”, “Merge remote-tracking branch 'origin/master'”, “fix some tests” etc. and would rather follow the logical steps taken in order, with descriptive messages such as “component: refactor xyz in preparation for feature”, “component: add do_foo()”, “component: implement feature using do_foo()” etc.
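
    The usual way to get from the former to the latter is to clean up the feature branch with an interactive rebase before merging; a rough sketch (assuming the branch was cut from master):

        # Reorder, squash ("fixup") and reword the wip commits into logical steps
        git rebase --interactive master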


  • “For merge you end up with this nonsense of mixed commits and merge commits like A->D->B->B’->E->F->C->C’ where the ones with the apostrophe are merge commits.”

    Your notation does not make sense. You’re representing a multi-dimensional thing in one dimension. Of course it’s a mess if you do that.

    Your example is also missing a crucial fact required when reasoning about merges: The merge base.
    Typically a branch is “branched off” from some commit M. D’s and A’s parent would be M (though there could be any number of commits between A and M). Since A is “on the main branch”, you can conclude that D is part of a “patch branch”. It’s quite clear if you don’t omit this fact.

    I also don’t understand why your example would have multiple merges.

    Here’s my example of a main branch with a patch branch; in 2D because merges can’t properly be represented in one dimension:

    M - A - B - C - C'
      \           /
        D - E - F
    

    “The final code ought to look the same, but now if you’re debugging you can’t separate the feature patch from the main path code to see which part was at fault.”

    If you use a feature branch workflow where feature branches are merged into your main branch, you typically want to use first-parent bisects. They’re much faster too.
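
    A minimal sketch of such a bisect (assuming Git 2.29 or newer; the good/bad refs are placeholders):

        # Only visit first-parent commits on the main branch, i.e. one candidate per merged feature branch
        git bisect start --first-parent
        git bisect bad HEAD
        git bisect good v1.0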






  • https://selfprivacy.org/

    I have never used it but from the website I gather that it’s an app (literally, there’s a mobile app) which enables you to remotely set up a VPS with a set of services at generic VPS hosting providers like Hetzner or DigitalOcean with the click of a few buttons.

    It builds on NixOS, which naturally lends itself to abstraction. They have created a pre-made NixOS configuration which configures these services to sensible defaults and provides a few highly abstract options which the user must set themselves: “Enable service xyz”, “Enable backups for services a, b, and c”.
    I assume these are set using a UI, producing a JSON like this. All the generic NixOS config then needs to do is consume the JSON and set the internal options accordingly. But the user doesn’t need to care about any of that; the experienced people who maintain this NixOS config do it for them.

    I don’t know how well it works currently but I absolutely see and love the vision. Imagine being able to deploy all the cloud services you need on your own VPS by creating a few accounts, copy-pasting API tokens and then simply tapping sliders and buttons in a mobile app. I can absolutely see that becoming suitable for the masses.




  • Intel and AMD are so similar, they may as well be the same platform. The only real difference is the iGPU, where Intel has an edge in terms of transcoding quality.

    I wouldn’t buy anything new or recently released for a modest home server. I don’t think you can get really good deals on Alder Lake CPUs yet, so I don’t think you need to worry about efficiency cores.
    Any CPU made in the last decade or so can do virtualisation just fine.

    I haven’t looked into this in detail yet but, for WAPs, I’d buy something that can run OpenWRT.
    For the firewall/gateway, it highly depends on your internet connection. If you have fiber terminated to copper, you could use anything that has an Ethernet port, but with DSL or DOCSIS, your only reasonable choice is likely a SOHO router. In that case, I’d also look into getting one that can run OPNsense or OpenWRT, depending on your taste.