It comes down to bridging. I use Discord and Slack via IRC bridges. I actually use Slack a lot (for work), but primarily through irslackd. I do not use Slack for anything outside of work and would prefer to keep it that way.
For Discord, I primarily use it through bitlbee-discord. With this bridge/gateway, I can actually chat on different servers at the same time, so I wouldn’t mind using it for different communities if I had to.
Matrix is last because I don’t really have a good bridging solution for it, and it just seems clunkier than the other two to me.
I would be less willing to contribute/participate in discussions if newer platforms such as Discord, Slack, or Matrix are used. Of those three, I would prefer Discord, then Slack, then Matrix.
As it is, I only use Slack for work, and mostly avoid Discord and Matrix except for a few mostly dead channels/servers.
I understand that this is not the mainstream view and that most people prefer the newer platforms, but personally, I am not a fan of them nor do I use them.
I’m fine with IRC (actually prefer it as I use it all the time).
I agree with others that a mailing list is more intimidating and more of a hassle, but if there is a web archive, I can live with that. It wouldn’t be my preference, but it wouldn’t be an insurmountable barrier (I have contributed to Alpine Linux in the past via their mailing list workflow).
I think this is the author being humble. `jmmv` is a long-time NetBSD and FreeBSD contributor (tmpfs, ATF, pkg_comp), has worked as an SRE at Google, and has been a developer on projects such as Bazel (build infrastructure). They probably know a thing or two about performance.
Regarding the overall point of the blog, I agree with `jmmv`. Big O is a measure of efficiency at scale, not a measure of performance.
As someone who teaches Data Structures and Systems Programming courses, I demonstrate this to students early on by showing them multiple solutions to a problem such as detecting duplicates in a stream of input. After analyzing the time and space complexities of the different solutions, we run the programs and measure the time. It turns out that the O(n log n) version using sorting can beat the O(n) version due to cache locality and how memory actually works.
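Here is a minimal sketch of that kind of demo (my own reconstruction in Python, not the actual course material; which version wins depends on the language and data, and the locality effect is most dramatic in compiled languages):

```python
# Two ways to detect duplicates in a stream of values.
import random
import time

def has_duplicates_sort(values):
    """O(n log n): sort a copy, then scan adjacent pairs (good locality)."""
    data = sorted(values)
    return any(a == b for a, b in zip(data, data[1:]))

def has_duplicates_set(values):
    """O(n): hash each element into a set (more pointer chasing)."""
    seen = set()
    for v in values:
        if v in seen:
            return True
        seen.add(v)
    return False

values = random.sample(range(10**8), 10**6)  # a million distinct ints (worst case: no early exit)

for func in (has_duplicates_sort, has_duplicates_set):
    start = time.perf_counter()
    func(values)
    print(f"{func.__name__}: {time.perf_counter() - start:.3f}s")
```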
Big O is a useful tool, but it doesn’t directly translate to performance. Understanding how systems work is a lot more useful and important if you really care about optimization and performance.
POSTs are how federation works (ActivityPub is a push-based protocol). When you “subscribe” to a community on, say, lemmy.ml, you are telling it to send your instance updates about that community as they happen. These come in the form of POSTs.
As to the frequency of the POSTs, I can imagine something like lemmy.ml having a lot of activity that it needs to inform your instance of (new votes, new comments, new posts, etc.)… but I’m not sure whether one request per second is reasonable or not.
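For illustration only (a toy sketch, not Lemmy’s actual code): the receiving side of federation is essentially an HTTP endpoint that accepts one POST per activity.

```python
# Toy ActivityPub-style inbox using only the stdlib. A real server would
# also verify HTTP signatures, deduplicate activities, queue work, etc.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InboxHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        activity = json.loads(self.rfile.read(length))
        # Each POST carries one activity: a new post, a comment, a vote (Like), etc.
        print(activity.get("type"), "from", activity.get("actor"))
        self.send_response(202)  # accepted for asynchronous processing
        self.end_headers()

HTTPServer(("", 8080), InboxHandler).serve_forever()
```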
Contributing immortal objects into Python introduces true immutability guarantees for the first time ever. It helps objects bypass both reference counts and garbage collection checks. This means that we can now share immortal objects across threads without requiring the GIL to provide thread safety.
This is actually really cool. In general, if you can make things immutable or avoid shared state, that will help you structure things concurrently. With immortal objects you can now guarantee that immutability without costly locks. It will be interesting to see the final round of benchmarks once this is fully implemented.
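A toy illustration of the general principle (this is about immutability in general, not the CPython internals): immutable data can be shared across threads without locks because no thread can ever observe a partial update.

```python
import threading

# Immutable, shared configuration: safe to read from any thread without a lock.
CONFIG = ("prod", 8080, frozenset({"read", "write"}))

def worker(name):
    env, port, perms = CONFIG  # read-only access, no synchronization needed
    print(name, env, port, sorted(perms))

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```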
You can escape the `:`:

```
URLS = https\://foo.example.com
URLS += https\://bar.example.com
URLS += https\://www.example.org
```
Interesting… I’ve only ever done `python3 -m cProfile script.py` when I’ve had to profile some code.
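For what it’s worth, the stdlib also lets you profile just a piece of code and sort the results, which can be more useful than profiling the whole script (a small sketch):

```python
import cProfile
import pstats

def work():
    return sum(i * i for i in range(10**6))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Show the five most expensive entries by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```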
Indeed… :|
This looks incredibly cool and fun. I’d be interested in trying to rewrite some of the games myself when I have some free time.
Oh yeah, I forgot this existed… I just set up the Firefox Redirector extension to send NPR links to the text-only version of the site. Thanks for reminding me.
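For anyone wanting to do the same, my rule looks roughly like this (written from memory, so treat it as a starting point rather than a tested pattern):

```
Include pattern:  https://www.npr.org/\d{4}/\d{2}/\d{2}/(\d+)/.*
Redirect to:      https://text.npr.org/$1
Pattern type:     Regular Expression
```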
I sometimes use `gdb` with C and C++, but never really use `pdb` with Python… mostly stick with logging (ok, `print` statements). This is good information to know though, and I should probably get better at using a debugger with Python.
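For my own future reference, the lowest-friction way to start seems to be the built-in `breakpoint()` hook (a minimal sketch):

```python
def divide(a, b):
    breakpoint()  # Python 3.7+: drops into pdb right here
    return a / b

divide(10, 2)

# Useful commands once inside pdb:
#   n (next line), s (step into), p <expr> (print), c (continue), q (quit)
# You can also run a whole script under the debugger:
#   python3 -m pdb script.py
```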
It will depend on how the threaded code is structured (how much is sequential vs. parallel, Amdahl’s law, etc.), but it should at least be more effective at scaling up and taking advantage of multiple cores.
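As a rough illustration of what’s being fixed (a toy benchmark; timings will vary by machine): CPU-bound work gains almost nothing from threads under the GIL, and that’s exactly what `--disable-gil` is meant to change.

```python
import threading
import time

def burn():
    # CPU-bound work; under the GIL only one thread runs Python bytecode at a time.
    total = 0
    for i in range(5_000_000):
        total += i * i

N = 4

start = time.perf_counter()
for _ in range(N):
    burn()
print(f"serial:   {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [threading.Thread(target=burn) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the GIL this is roughly the serial time; without it, closer to serial/N
# (bounded by Amdahl's law: speedup = 1 / ((1 - p) + p/n)).
print(f"threaded: {time.perf_counter() - start:.2f}s")
```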
That said, the change would come at a cost to single-threaded code. From PEP 703:
The changes proposed in the PEP will increase execution overhead for --disable-gil builds compared to Python builds with the GIL. In other words, it will have slower single-threaded performance. There are some possible optimizations to reduce execution overhead, especially for --disable-gil builds that only use a single thread. These may be worthwhile if a longer term goal is to have a single build mode, but the choice of optimizations and their trade-offs remain an open issue.
Pretty exciting… I just hope they are able to avoid another long transition à la python2 -> python3.
Seems to have been down all day. Earlier it loaded with a completely empty Lemmy :|
Oh. I’m sorry if this was discussed previously… I only returned to Lemmy a few weeks ago and hadn’t seen the story covered yet.