Bayesian filters are statistical; they have nothing to do with machine learning.
You should consider whether you really want to integrate your application super tightly with the HTTP protocol.
Will it always be used exclusively over a RESTful HTTP API that you control, with exactly one hop to the client, or only hops that can be trusted never to alter the HTTP metadata significantly? In that case you can afford to make HTTP status codes semantically relevant to your app.
But maybe you need to pass data through multiple different types of layers and mechanisms (socket protocols, pub-sub, file storage, etc.). In that case you want all your semantics to be independent of any form of transport.
It’s a perfectly fine way of doing things as long as it’s consistent and the spec is clear.
HTTP can be treated as a transport layer. You don’t have to use its status codes for your application layer; it’s often done that way, but it’s not the only way.
In the example above the transport layer is saying “OK, I’ve delivered your output”, which is technically correct. It’s not concerned with logical errors inside what it was transporting, just with the delivery itself.
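For illustration, here’s a minimal sketch of keeping the app-level outcome in the body while the transport reports successful delivery. It assumes an Express-style service; the route, envelope fields, and data are all made up:

```typescript
import express from "express";

type User = { id: string; name: string };
const users = new Map<string, User>([["1", { id: "1", name: "Ada" }]]);

const app = express();

app.get("/users/:id", (req, res) => {
  const user = users.get(req.params.id);
  if (!user) {
    // Transport says "delivered" (200); the logical error lives in the envelope.
    res.status(200).json({ ok: false, error: "USER_NOT_FOUND" });
    return;
  }
  res.status(200).json({ ok: true, data: user });
});

app.listen(3000);
```

The same { ok, error, data } envelope can then travel unchanged over sockets, pub-sub, or files, which is the point.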
If any client app is blindly converting the body to JSON without checking (at the very least) the content type and size, they deserve what they get.
If you want to make it part of your API spec to always return JSON that’s one thing, but don’t do it to make up for poorly written clients. There’s no end of ways in which clients can fail. Sticking to a clear spec is the only way to preserve your sanity.
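For the client side, a minimal sketch of those checks, assuming fetch() and a size limit picked arbitrarily for the example:

```typescript
const MAX_BODY_BYTES = 1_000_000; // arbitrary limit for the example

async function safeJson(url: string): Promise<unknown> {
  const res = await fetch(url);

  // Check the content type before assuming the body is JSON.
  const type = res.headers.get("content-type") ?? "";
  if (!type.includes("application/json")) {
    throw new Error(`Expected JSON, got "${type || "no content type"}"`);
  }

  // Check the declared size (absent for chunked responses, so best-effort).
  const length = Number(res.headers.get("content-length") ?? "0");
  if (length > MAX_BODY_BYTES) {
    throw new Error(`Body too large: ${length} bytes`);
  }

  return res.json(); // can still throw on a malformed body; callers should handle it
}
```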
But denormalized databases are not a new thing. There are engines deliberately built around denormalization in order to be more efficient, like Cassandra. Most data-warehousing engines use this “trick”. And of course you can do it with a regular RDBMS too.
To some extent all software is disposable. Some places take it to a more ridiculous level than others. If they have money to burn just make sure as much of it as possible ends up in your pocket.
Oof. I guess you can use TypeScript to make up for the lack of an IDE, but it sounds like you had bigger problems anyway.
You know, you can validate data structures at runtime… and write unit tests… TS is not a substitute for those things.
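A minimal sketch of what runtime validation looks like with a hand-written type guard (the Order shape is made up for illustration):

```typescript
type Order = { id: string; quantity: number };

// Type guard: the only thing here that actually checks the data at runtime.
function isOrder(value: unknown): value is Order {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Order).id === "string" &&
    typeof (value as Order).quantity === "number"
  );
}

const payload: unknown = JSON.parse('{"id":"a1","quantity":3}');
if (!isOrder(payload)) {
  throw new Error("Invalid order payload"); // compile-time types can't catch this
}
payload.quantity; // narrowed to Order from here on
```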
TypeScript only prevents typing bugs… why did they have so many typing bugs?
Who were they talking to?
I blame whoever had their chat open at 1am.
Which part do you think is FUD, and why?
PGP is not particularly related to email. It’s also used to encrypt files, partitions, etc.
You can use public key cryptography with any system, because you simply encrypt the content and then send it through the normal unencrypted system.
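A minimal sketch of the idea using Node’s built-in crypto (real PGP uses hybrid encryption, encrypting a symmetric session key with the recipient’s public key, but the principle is the same):

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "node:crypto";

// The recipient generates a key pair and publishes the public half.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// The sender encrypts locally with the recipient's public key...
const ciphertext = publicEncrypt(publicKey, Buffer.from("meet at noon"));

// ...and ships ciphertext.toString("base64") through any normal, unencrypted
// channel: an email body, a file share, a message queue.

// Only the holder of the private key can recover the content.
const plaintext = privateDecrypt(privateKey, ciphertext);
console.log(plaintext.toString()); // "meet at noon"
```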
But PGP does nothing for the headers and nothing for the fact messages are still waiting around on various servers. Also PGP on its own is very impractical due to the need to get keys for every recipient – but even if there were a generalized system of public key autodiscovery (over DNS) it still wouldn’t fix the problems with IMAP/POP3/SMTP.
Each of these things holds a piece of the puzzle – including what Proton & Tuta are doing – but these pieces on their own are useless. We need all of them to come together.
You can’t do secure email. You really can’t, sorry. Point (1) above is a game-ending design flaw that makes it impossible, and (2) is just lock-in and hoops to jump through without really adding anything of value.
You could do remote encrypted storage of your email archive, but only if you give up the notion that the same storage server can also send and receive messages for you. If it has access to the plaintext it can be subverted, and the whole proposition is worthless.
The way to achieve such storage is a remote file-storage service reflected locally as a virtual filesystem, preferably with the encryption layer controlled by your device rather than their service, used to store messages managed by your local email client. Your local email client would then use IMAP and SMTP connections to unrelated email servers to send and receive messages. But you’d have to replicate this stack on every device, which is impractical.
The better approach is to self-host your mail archive, with a webmail client on top connected to an SMTP service, and have a local tool on the server that pulls emails from a POP3 server and deletes them afterward. You can encrypt the disk there if you want, and use whatever you want to access your archive (regular email clients or webmail).
Frankly, I can’t really wrap my head around what services like Proton and Tuta are trying to do, so in turn I can’t get a clear idea of the threat model.
They’re basically running encrypted file storage servers that are used to store email messages, forcing their users to use proprietary protocols to access them. But sending and receiving email messages implies messages passing through other, non-encrypted servers.
The only scenario where they approach anything resembling security is when both the recipient and the sender are on the same service. Not even passing messages between two such services (Tuta & Proton) is really secure. And since the vast majority of the average user’s messages are exchanged with other servers, the vast majority of their messages have a copy in the clear on at least one other server out there, and have passed through relays that are not encrypted at rest either, making more potential copies available.
So what exactly is solved by having one copy encrypted if there are non-encrypted copies readily available?
They just need to tunnel the data and let the client decrypt it. Basically what Proton does with their bridge app. And also basically what Tuta’s client does.
This whole debacle is a festival of stupidity over ip.isPrivate(), which you can write yourself in 5 minutes. At this point the maintainer is fucked no matter what they do, so archiving the project and telling everybody to fuck off right back was really the only sane thing to do.
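For the record, a minimal sketch of such a check, assuming IPv4 only (a real library would also cover IPv6 and more reserved ranges):

```typescript
function isPrivate(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((n) => !Number.isInteger(n) || n < 0 || n > 255)) {
    return false; // not a well-formed IPv4 address
  }
  const [a, b] = parts;
  return (
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    a === 127 ||                         // 127.0.0.0/8 (loopback)
    (a === 169 && b === 254)             // 169.254.0.0/16 (link-local)
  );
}
```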
Then fork it and do that.
These projects are structured as hobbyist projects and get whatever time the maintainer can spare. I have projects like that, they’re useful, but I’m not gonna prioritize them over… anything else, come to think of it.
The fact that so many people treat a hobbyist project with one maintainer as critical infrastructure is insane, but that’s on them. Everybody likes free software; nobody likes to help or pay the maintainer.
Clearly it wasn’t maintained.
Lol. It’s an IP library. IP classifications haven’t changed. What could he possibly update?
He’s not suggesting to replace timestamps (nor database sequences). They’re unique identifiers, and they happen to include a timestamp.
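For example, a UUIDv7-style identifier is 128 bits of uniqueness whose top 48 bits happen to be a millisecond timestamp, so IDs sort by creation time. A minimal sketch per RFC 9562, assuming a runtime with the global crypto object (Node 19+, browsers):

```typescript
function uuidv7(): string {
  const bytes = new Uint8Array(16);
  crypto.getRandomValues(bytes); // fill everything with randomness first

  // Overwrite the first 6 bytes with the 48-bit Unix timestamp in ms.
  let ts = BigInt(Date.now());
  for (let i = 5; i >= 0; i--) {
    bytes[i] = Number(ts & 0xffn);
    ts >>= 8n;
  }

  // Set the version (7) and variant (10xx) bits.
  bytes[6] = (bytes[6] & 0x0f) | 0x70;
  bytes[8] = (bytes[8] & 0x3f) | 0x80;

  const hex = Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

console.log(uuidv7()); // time-ordered, yet still a unique identifier
```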
If you were 100% specific you would be effectively writing the code yourself. But you don’t want that, so you’re not 100% specific, so it makes up the difference. The result will include an unspecified percentage of code that does not fit what you wanted.
It’s like code Yahtzee: you keep re-rolling this die and that die but never quite manage to get the exact combination you need.
There’s an old saying about computers: they don’t do what you want them to do, they do what you tell them to do. They can’t do what you don’t tell them to do.
Every country formerly behind the Iron Curtain had secret police and surveillance.