VPN dependent.

  • 4 Posts
  • 53 Comments
Joined 1 year ago
Cake day: June 30th, 2023


  • I don’t think I am well positioned to answer that question given my experience, but I’ll give it my best.

    I believe the extra abstraction of gRPC was desirable because we can point it at a socket (Unix domain or internet sockets) and communicate across different domains. I think we are shooting for a “microservices” architecture but running it on one machine. FFI (IIRC) is lower level and more about language interoperability. gRPC would allow us to prototype stuff faster in other languages (like Python or Go) and optimize to Rust if it became a bottleneck.

    Short answer is, we are able to deliver more value to customers, quicker (I guess). But I don’t know much about FFI. Perhaps you can offer some reasons and use cases for it?
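    To make the contrast concrete, here is a minimal FFI sketch using Python’s ctypes: the call goes straight into a C library in the same process, with no socket and no serialization, which is the low-level, language-interop side of the trade-off. This assumes a Linux-like system where the C standard library can be located; it is an illustration, not anyone’s actual setup.

```python
# Minimal FFI sketch: calling a C function in-process via Python's ctypes.
# Assumes a Linux-like system where the C standard library can be found.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C signature so ctypes marshals arguments correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

# Same process, no socket, no serialization -- unlike a gRPC hop.
print(libc.abs(-42))
```

    With gRPC you would instead define the interface in a .proto file and talk to it over a socket, which costs serialization but buys you the language-agnostic, relocatable boundary described above.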


  • At work, we started the C++-to-Rust migration doing the following:

    1. Identify “subsystems” in the C++ codebase
    2. Identify the ingress/egress data flows into each subsystem
    3. Replace those ingress/egress interfaces with gRPC for data/event sharing (we have yet to profile the performance impact of passing an object over gRPC, doing work on it, then passing it back)
    4. Start a rewrite of the subsystem from C++ to Rust
    5. Swap out the two subsystems and reattach at the gRPC interfaces
    6. Profit, in that now our code is memory safe AND decoupled

    The challenge here is identifying the subsystems. If the codebase doesn’t have distinct boundaries for its subsystems, the rewrite becomes much more difficult.
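    The swap in steps 4–5 can be sketched as a common boundary interface with two interchangeable implementations behind it (in the real setup the boundary is the gRPC interface; here it is just a Python protocol, and all names are hypothetical):

```python
# Sketch of steps 4-5: both subsystems implement the same boundary
# interface, so callers don't care which one is wired in.
# All names here are hypothetical, for illustration only.
from typing import Protocol


class EventProcessor(Protocol):
    def process(self, event: str) -> str: ...


class LegacyCppProcessor:
    """Stand-in for the old C++ subsystem behind the boundary."""
    def process(self, event: str) -> str:
        return f"legacy:{event}"


class RustProcessor:
    """Stand-in for the rewritten Rust subsystem."""
    def process(self, event: str) -> str:
        return f"rust:{event}"


def ingest(processor: EventProcessor, event: str) -> str:
    # Callers depend only on the boundary interface, so step 5 is
    # just reattaching a different implementation here.
    return processor.process(event)


print(ingest(LegacyCppProcessor(), "login"))
print(ingest(RustProcessor(), "login"))
```

    Because the ingress point only sees the interface, the legacy and rewritten subsystems can run side by side until the swap is proven out.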





  • hahaha good point.

    That colleague, keep in mind, is a bit older and also has Vim navigation burned into his head. I think where he was coming from is that, with all these new technologies and their syntax, he much prefers right-clicking in the IDE and having it show him options instead of doing everything from the command line: for example Docker container management, Go’s Delve debugger syntax, GDB. He has a hybrid workflow though.

    After having spent countless hours on my Vim config, only to restart everything using Lua with Neovim, I can relate to the time sink that is Vim.



  • As a former Vim user myself, I have to say I really dislike screensharing with coworkers who use Vim. They are walking me through code and shit pops up left and right and I don’t know where it comes from or what it is I’m looking at. Code reviews are painful when they walk me through a large-ish PR.

    These days, I tend to bring my vim navigation/key bindings to my IDE instead of IDE funcs to Vim. Hard to beat JetBrains IDEs, especially when you pay them to maintain the IDE functionality.


  • code is just text, so code editors are text editors.

    What sets IDEs apart are their features, like debugger integrations, refactoring assists, etc.

    I love command line + Vim and used it solely for a large portion of my career, but that was back when you had a few big enterprise languages (C/C++, Java).

    With microservices being language agnostic, I find I use a larger variety of languages. And configuring and remembering an environment for Rust, Go, C, Python, etc. is just too much mental overhead. Hard to beat JetBrains’ IDEs; nowadays I bring my Vim navigation key bindings to my IDE instead of my IDE features to Vim. And I pay a company to work out the IDE features.

    For the record, I am in the boat of: use whatever brings you the greatest joy/productivity.




  • Hard Fork: for keeping up with the biggest tech news. They dissect the potential impact of stuff.

    Lex Fridman: He interviews really interesting subjects. I’ll listen to episodes I’m interested in based on who the guest is or the subject matter they are an expert in. Lots of interesting tech folks. My favorite episode so far is with John Carmack: Doom, Quake, VR, AGI, Programming, Video Games, and Rockets. The episode is 5 f***king hours, but I broke it up into several sessions, and Carmack is so good at articulating that it flew by.

    Huberman Lab: before software I liked biology and medicine. I like these occasionally because I get to learn how systems outside of software/hardware work. These I will watch/listen to in one sitting, as one would a movie; it demands your attention to follow along. (I don’t like when doctors have podcasts with all the “alternative medicine” BS. But Huberman is an active researcher at Stanford and in charge of a lab that cranks out sweet research. Definitely a credible dude, very methodical, and he tries to rule out bias.)


  • I tried Logitech’s Wave Keys at the store and fell in love with them. I have several custom keyboards (including an HHKB with Topre switches and a WASD Code keyboard) and this puts them to shame, unfortunately. You can pick it up for $56 USD.

    https://www.logitech.com/en-us/products/keyboards/wave-keys.html

    • The shape is not like those crazy ergo keyboards, but the keys are very easy to reach, and you will not have to adjust to a new layout if you are comfortable with laptop keys.
    • The keys have more travel than laptop keys but less than mechanical keyboards (on average).
    • The keys are also effortless to press but still offer resistance.
    • Bluetooth, and if you use a wireless Logitech mouse you can use the same receiver.
    • They have them at Staples and Best Buy, so you can go and try them out.

    As for programming, I found the WASD Code keyboard to be pretty customizable with its hardware switches. I can flip a switch and boom, my Caps Lock is now another Ctrl, etc. But you can do that in the OS as well. They go for around $99 and you can pick different keys. Not sure if they have any wireless ones.

    https://www.wasdkeyboards.com/code-v3-87-key-mechanical-keyboard-cherry-mx-blue.html






  • There is a very effective approach (34:00) that big companies like Cloudflare use to ship a product quickly without sacrificing quality. It bears parallels to what you are describing. In essence, engineers should not get hung up on the details trying to solve everything.

    1. Just build a proof of concept
    2. Discard the prototype no matter what and start from scratch, keeping the initial feedback in mind
    3. Build something internally that you yourself will use
    4. Only once something is good enough and is used internally, release it to beta.

    So that tedious process of trying to flesh out all the details before seeing a product (or open source effort) working end to end might be premature before you have the full picture.


  • society gains nothing by preventing a software developer from implementing …

    I see the point you are trying to make, but I respectfully disagree. Technology is at the core of seemingly every field, and at the core of technology is software. Will it result in direct bodily harm? Rarely. But indirectly, the impact is certainly more substantial.

    Take the internet as an example. The significance of the internet and information sharing cannot be disputed. Disruptions to information sharing can send ripples through essential services. Networking these days is accomplished via software-defined networking techniques. And we are becoming more dependent on technology and automation.

    I can see why the indirect risk is not as scary as the direct risk, but you have to admit, as automation grows and decisions are made for us, regulation of those who build these systems should not be overlooked. Professional engineers have a code of ethics they must adhere to, and if you read through it you can see the value it would bring.

    As a counterexample to your “doctors are licensed to not kill people”: orthodontists, who move teeth around, pose no fatal risk to their patients. Should they be exempt from being licensed?

    EDIT:

    Just yesterday, Reuters published news that Musk and managers at Tesla knew about defects in Autopilot but marketed it otherwise. If those working on it had been licensed, negligence and deceitfulness could line them up to lose their licenses and prevent them from working in this line of work again. It would bring accountability.