• 0 Posts
  • 77 Comments
Joined 1 year ago
Cake day: June 17th, 2023


  • So, if you just use the system API, then this means logging with syslog(3). Learn how to use it.

    This is old advice. These days just log to stdout; there is no need for your process to understand syslog. systemd, containers, and other modern systems capture stdout and forward it wherever it needs to go. Then all applications can stay simple and it is up to the system to handle them in a consistent way.

    NOTICE level: this will certainly be the level at which the program will run when in production

    I have never seen anyone use this log level. Most people use or default to Info or Warn. Even the author later says

    I run my server code at level INFO usually, but my desktop programs run at level DEBUG.

    If your message uses a special charset or even UTF-8, it might not render correctly at the end, but worst it could be corrupted in transit and become unreadable.

    I don’t know if this is true anymore. UTF-8 is ubiquitous these days and I would be surprised if any logging system could not handle it, or at least any modern one. I am very tempted to start adding some emoji to my logs to find out though.

    User 54543 successfully registered e-mail user@domain.com

    Now that is a big no-no. Never log PII if you don’t want a world of hurt later on.
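    If you do need to reference a user in a log line, log an opaque id and mask anything sensitive. A rough sketch (mask_email is a made-up helper, not from the article):

```python
def mask_email(email: str) -> str:
    """Keep only the first character of the local part, so the log line
    carries no usable PII. (Hypothetical helper for illustration.)"""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

# Opaque user id plus a masked address instead of the raw e-mail.
print(f"User 54543 successfully registered e-mail {mask_email('user@domain.com')}")
```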

    2013-01-12 17:49:37,656 [T1] INFO c.d.g.UserRequest User plays {‘user’:1334563, ‘card’:‘4 of spade’, ‘game’:23425656}

    I do not like that at all. The message should not contain JSON. Most logging libraries let you attach context in a consistent way and can output the whole log line as JSON. Having escaped JSON inside JSON because you added JSON to the message manually is a pain; just use the tools you are given properly.
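    What I mean by using the tools: attach the context as structured fields and let a formatter emit the whole line as JSON. A sketch with Python's stdlib logging (field names taken from the quoted example; the formatter is my own, not a standard one):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit the whole record as one JSON object; no JSON inside the message."""
    def format(self, record):
        line = {"level": record.levelname, "logger": record.name,
                "message": record.getMessage()}
        # Context fields attached via `extra=` become attributes on the
        # record; lift the ones we care about to top-level keys.
        for key in ("user", "card", "game"):
            if hasattr(record, key):
                line[key] = getattr(record, key)
        return json.dumps(line)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("c.d.g.UserRequest")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Plain message; the context rides alongside it as structured fields.
log.info("User plays", extra={"user": 1334563, "card": "4 of spade", "game": 23425656})
```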

    Add timestamps either in UTC or local time plus offset

    Never log in local time; DST fucks shit up when you do that. Use UTC for everything and convert at display time if needed, but always store dates in UTC.
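    For example, in Python: keep the stored timestamp in UTC and only convert when displaying it (the New York zone below is just an arbitrary example):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Store and log in UTC...
stamp = datetime(2013, 1, 12, 17, 49, 37, tzinfo=timezone.utc)

# ...and convert only at display time, for whatever zone the viewer is in.
local_view = stamp.astimezone(ZoneInfo("America/New_York"))

print(stamp.isoformat())       # 2013-01-12T17:49:37+00:00
print(local_view.isoformat())  # 2013-01-12T12:49:37-05:00
```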

    Think of Your Audience

    Very much this. I have seen far too many error messages that give fuck all context to the problem and require diving through source code to figure out what the hell went wrong. Think about how logs will be read without the source code at hand.



  • Whatever language you choose, you might also want to look at the htmx JS library. It lets you add interactivity to your HTML snippets without actually writing JS. For example, when you click on an element it can make a request to your server and replace some other element with the contents the server responds with - all via attributes on HTML tags instead of hand-written JS. This lets you keep all the state on the backend and write more backend logic, without relying only on full page refreshes to update small sections of the page.
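    A rough sketch of what that looks like (the /contacts endpoint is made up; hx-get, hx-target and hx-swap are real htmx attributes):

```html
<!-- Clicking the button GETs /contacts from the server and swaps the
     response HTML into #results, with no hand-written JS. -->
<button hx-get="/contacts" hx-target="#results" hx-swap="innerHTML">
  Load contacts
</button>
<div id="results"></div>
```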

    For a backend language I would use Rust, as that is what I am most familiar with now and enjoy using the most. Most languages are adequate at serving backend code though, so it is hard to go wrong with anything you enjoy using. With Rust I tend to find I have fewer issues when I deploy something, as opposed to other languages, which can cause all sorts of runtime errors because they let you ignore the error paths by default.



  • Yup, this is part of what’s led me to advocate for SRP (the single responsibility principle).

    Even that gets overused and abused. My big problem with it is: what is a single responsibility? It is poorly defined, which leads people to think that the smallest possible thing is one responsibility. When people think like that they create thousands of one-to-three-line functions, which just ends up obscuring what the program is trying to do. Following logic through deeply nested function calls is IMO just as bad, if not worse, than having everything in a single function.
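    A contrived sketch of what I mean (made-up example, not from anyone's real code): the fragmented version forces you to chase four one-liners to understand one trivial calculation.

```python
# Over-split: four one-liners hide one trivial computation.
def get_price(item): return item["price"]
def get_qty(item): return item["qty"]
def multiply(a, b): return a * b
def line_total_fragmented(item): return multiply(get_price(item), get_qty(item))

# Cohesive: one function, one readable responsibility.
def line_total(item):
    return item["price"] * item["qty"]

assert line_total_fragmented({"price": 3, "qty": 4}) == line_total({"price": 3, "qty": 4}) == 12
```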

    There is a nice middle ground where SRP makes sense, but like all patterns, nobody ever talks about where that line is. Overuse of any pattern, methodology or principle is a bad thing, and it is very easy to do if you don’t think about what it is trying to achieve and whether applying it still fits that goal.

    Basically, everything in moderation and never lean on a single thing.


  • Refactoring should not be a separate task that a boss can deny. You need to do feature X, feature X benefits from reworking some abstraction a bit, so you rework that abstraction before starting on feature X. And then maybe refactor a bit more after feature X, now that you know what it looks like. None of that should take substantially longer, and it saves vast amounts of time later on compared to skipping it as part of the feature work.

    You can occasionally squeeze in a feature without reworking things first if time is tight, but you will run into problems if you do this too often and start treating refactoring as a task separate from feature work.


  • “Best practices” might help you to avoid writing worse code.

    TBH I am not sure about this. I have seen many “best practices” make code worse, not better. Not because the rules themselves are bad, but because people take them as religious gospel and apply them to every situation in the hope of making their code better, without actually checking whether they are making their code better.

    For instance, I see this a lot with DRY. While the rule itself is useful to know and apply, it is too easily over-applied, removing any benefit it originally gave and resulting in overly abstract code. I have lost count of the number of times I have added duplication back into code to remove a layer of abstraction that was not working, only to maybe reapply it in a different way, often keeping some duplication.
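    A contrived sketch (all names made up): the shared DRY helper grows flags to serve every caller, while a little duplication keeps each call site obvious and free to change on its own.

```python
# Over-DRY: one shared helper accumulates flags for every caller's needs.
def format_name(user, upper=False, initials_only=False, greeting=None):
    name = f"{user['first']} {user['last']}"
    if initials_only:
        name = f"{user['first'][0]}.{user['last'][0]}."
    if upper:
        name = name.upper()
    return f"{greeting} {name}" if greeting else name

# With a little duplication, each call site stays obvious and can
# evolve independently.
def badge_name(user):
    return f"{user['first']} {user['last']}".upper()

def signature_initials(user):
    return f"{user['first'][0]}.{user['last'][0]}."

u = {"first": "Ada", "last": "Lovelace"}
assert badge_name(u) == format_name(u, upper=True) == "ADA LOVELACE"
assert signature_initials(u) == format_name(u, initials_only=True) == "A.L."
```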

    Suddenly requirements change and now it’s bad code.

    This only leads to bad code when people get too afraid to refactor things in light of the new requirements. Which sadly happens far too often. People seem to like to keep what was already there and follow existing patterns well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I wrote originally and others have added to over time.





  • Have not used it myself, and having had a quick look at it I don’t think I ever will. Mostly because of personal preferences about the project’s goals and design rather than anything else.

    I don’t like large do-everything frameworks. They are fine when you want to do exactly what they were designed for, but as soon as you step outside that they become a nightmare to deal with. They also tend to grow more complex over time as more of what everyone wants gets added to them. The talk “A framework author’s case against frameworks” is great on this. Instead I prefer smaller, focused libraries that I can pick and choose from to best suit the application I want to build.

    Also it seems to use the MVC pattern, which I dislike. Personally I like to group code that changes together next to each other, whereas MVC groups code by function and splits up code that tends to change together. That means any change or feature has you editing many files across many folders, which gets tedious, rather than co-locating all the related code in one directory or file.

    Because they bundle dependencies for everything you might want, they often lag behind upstream projects. This was a huge issue for me years ago when I tried out the Rocket framework. I wanted to use a hosted Postgres DB that only supported TLS connections, but the version of the library it was using did not yet include that feature - basically killing the project there.

    They can be great if they do everything you want in the way you want, and loco looks to be well built and maintained overall. But I find far too often that they don’t - if not at the start of a project then eventually as it evolves (which is far worse). I would also question its staying power (we have seen popular and promising frameworks suddenly stop development before), but only time will answer that.





  • I think this is true of the original definition of the word. But decades of each side calling out the other side’s propaganda in a harsh and negative light have left a negative connotation on the word. That results in each side avoiding the word for their own messaging while applying it to their opponents’ messaging, which further reinforces the negative perception, and over decades of this a lot of people have come to think it only ever applies to negative or deceptive messaging. I think this was more impactful in places like the US, where a lot of political figures used the word negatively - such as during the Red Scare, attacking communist ideas by labelling them communist propaganda and similar messaging.

    Which is shown by various comments in this post assuming it only applies to negative or deceptive messaging. So I would argue the meaning of the word has changed, or is still changing - as words naturally do over time through how people use them. I think that goes some way to answering the OP’s question: some places used the word more negatively, which gives the people who live there a more negative view of it, while others have not, so people there have a more neutral take on it.


  • a hearing doc won’t be much help with anything aside from hearing

    Irrelevant to my argument. All I mean is that people who go to see someone about one problem are also more likely to see other people about the other problems they have - rather than ignoring things and staying at home. And I am not claiming what I said is the actual cause - at least no more than the study can claim its explanation is. There could be many other factors at play here as well. My only point is not to confuse correlation with causation, and that these types of studies are almost useless in what they tell us.

    Hell, spurious correlations happen all the time. You cannot use two graphs looking similar to prove a point without controlling for all the other variables that might be at play, and without having avoided trawling large amounts of data for anything that might seem interesting.


  • nous@programming.dev to Science@lemmy.ml · Hearing Aids May Help People Live Longer

    “We found that adults with hearing loss who regularly used hearing aids had a 24% lower risk of mortality than those who never wore them,” said Janet Choi, MD, MPH, an otolaryngologist with Keck Medicine and lead researcher of the study. “These results are exciting because they suggest that hearing aids may play a protective role in people’s health and prevent early death.”

    While the study did not examine why hearing aids may help those who need them live longer, Choi points to recent research linking hearing aid use with lowered levels of depression and dementia. She speculates that the improvements in mental health and cognition that come with improved hearing can promote better overall health, which may improve life span.

    This is the classic causation vs correlation problem with these studies. I can also speculate: Did the hearing aid actually do anything at all - or do the people that go to the doctors for a hearing aid also go to the doctors for other problems they might have and so get more treatment for other conditions? That seems far more likely to have a bigger effect IMO.