I don’t think that is true. Not many teams at Google really bought into the UUID hype, at least not for internal interfaces. But really there is no difference between a UUIDv4 and a large random number; UUID just specifies a standard format.
It is true, don’t do it.
Even at huge companies like Google, lots of stuff was keyed on your email address. This was a huge problem so Google employees were not allowed to change their email for the longest time. Eventually they opened it up by request but they made it very clear that you would run into problems. So many systems and services would break. Over time I think most external services are pretty robust now, but lots of internal systems still use emails (or the username part of it) and have issues.
IIUC Google accounts now use a random number as the key. But there are still places where the email is in use, slowly being fixed at massive cost.
I’m using Kagi. I find that it does a better job of finding “legitimate” sites rather than blogspam and content marketing. However, I’m not sure I will stick with it long term. It seems like it has mostly stalled, and the team is getting distracted by making a browser, non-search AI (I have no problem with the few AI experiments tied to searching) and other side projects. We’ll see. I really hope that they pull themselves together and focus, or it might not last. But for now they seem like one of the better options available.
Bing’s new “Deep Search” where it has some sort of LLM refinement iteration process has also been helpful sometimes. Probably the best AI search product I have seen, but definitely doesn’t replace most searches for me.
My problem is that I do red, blue, green, then can’t think of any more clearly visible colours.
Algorithms are like AI but accurate, predictable and usually much faster.
You could go with columns for the content, but I think my ideal layout would still have the main content in a single column. I would lay out all of the chrome horizontally, though. For example, rather than a header before and a footer afterwards, put everything in different columns. Maybe even throw some extra navigation on the screen.
You don’t need to use every pixel, just avoid putting things offscreen unnecessarily.
Yes, I agree with your clarification of commonly used terms.
In my experience, taking a term that is widely used and attempting to give it a more specific meaning doesn’t end well. If people are using “method” interchangeably with “associated function” right now, it will be an endless battle of trying to make people stop using the term “sloppily”, when it isn’t sloppy, it was just the original meaning.
There is no concrete difference between the two options. But in general they will be similar. I think you are talking about these options:
struct Person;
struct Skill;
struct PersonSkills {
    person: PersonId,
    skill: SkillId,
}
vs
struct Person {
    skills: Vec<SkillId>,
}
struct Skill;
The main difference that I see is that there is a natural place to put data about this relationship with the “join table”.
struct PersonSkills {
    person: PersonId,
    skill: SkillId,
    acquired: Timestamp,
    experience: Duration,
}
You can still do this in the second one, but you notice that you are basically heading towards an interleaved join table.
struct PersonSkills {
    skill: SkillId,
    acquired: Timestamp,
    experience: Duration,
}
struct Person {
    skills: Vec<PersonSkills>,
}
There are other, less abstract concerns, such as performance (are you always loading the list of skills? what if it is long?) or correctness (if you delete a Person, do you want to delete these relationships? it comes “for free” if they are stored on the Person). But which is better will depend on your use case.
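To make the correctness point concrete, here is a minimal Rust sketch. The `u32` id aliases, the `delete_person_join` helper, and the in-memory storage are illustrative assumptions, not anything from the discussion above: with a standalone join table, cleanup on delete is an explicit pass; with the embedded layout, the relationships go away with the Person.

```rust
use std::collections::HashMap;

// Hypothetical id types, just for illustration.
type PersonId = u32;
type SkillId = u32;

// Layout 1: a standalone "join table" row.
struct PersonSkill {
    person: PersonId,
    skill: SkillId,
}

// Deleting a person requires an explicit cleanup pass over the rows.
fn delete_person_join(rows: &mut Vec<PersonSkill>, id: PersonId) {
    rows.retain(|row| row.person != id);
}

// Layout 2: relationships embedded on the Person itself.
struct Person {
    skills: Vec<SkillId>,
}

fn main() {
    // Join-table layout: cleanup is manual.
    let mut rows = vec![
        PersonSkill { person: 1, skill: 10 },
        PersonSkill { person: 2, skill: 10 },
    ];
    delete_person_join(&mut rows, 1);
    assert_eq!(rows.len(), 1);

    // Embedded layout: removing the person drops its skills "for free".
    let mut people: HashMap<PersonId, Person> = HashMap::new();
    people.insert(1, Person { skills: vec![10] });
    people.insert(2, Person { skills: vec![10] });
    people.remove(&1);
    assert!(people.get(&1).is_none());
}
```

The same tradeoff shows up in databases: the join-table delete corresponds to a foreign key with ON DELETE CASCADE (or a manual cleanup query), while the embedded layout corresponds to storing the list inside the parent row or document.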
I would definitely go for Irish sheep farmer. You get to live in a cute little house in a green pasture by the seaside and the sheep feed themselves. What do you need to do? Shear them every once in a while? I’d take that over Terraform any day of the week.
Or use a browser extension to implement your preferences rather than push them onto others in a way that makes it harder for them to implement theirs.
If an article links to medium.com, my redirects kick in, my link flagging kicks in, and everything else. If everyone uses some different service to “fix” Medium, I am stuck with what they like. There is value in keeping the canonical URL.
I would also love to see domain blocks as a user preference in Lemmy. Just hide these sites that I don’t like.
This is “Testing on the Toilet”, a series of short flyers that are posted in bathrooms at Google. They aren’t typically meant to be particularly profound, just to remind people of common code patterns that can be written more clearly, or other reminders that are good to keep in mind.
I like the option to preserve originals. I wonder if this is now always done or if it is configurable. Oftentimes I am preserving the original footage and project files anyway, so I don’t need the archived original. Other times, however, I am just throwing in footage straight from the camera, and the archive is nice.
It also opens interesting possibilities, like re-encoding down the road to new or better codecs, or even just better encoders. For example, it would be interesting to dedicate one background thread to re-encoding at a much higher effort setting, and possibly re-running this every few years to take advantage of encoder upgrades.
Yeah, a lot of these were weakened because the task was easy even without the relevant knowledge. I like the iPod one because the UX would be unfamiliar to someone who didn’t use one. But things like “Type {phrase} into the search box” are really just a lame way to make a reference.
I’m a millennial and didn’t know who that was. But I don’t know any actors, so that’s unsurprising. I did end up just figuring out which face was the most popular and selecting that.
But the rest were fairly easy. I still don’t know what a Skibidi toilet is though.
Yes, if you ask about a tag on a commit that you don’t have, git won’t know about it. You would need to download that history. You also can’t, in general, say “commit A doesn’t contain commit B”, as you don’t know all of the parents.
You are completely right that --depth=1 will omit some data. That is sort of the point, but it does have some downsides. Filters also omit some data, but often the data will be fetched on demand, which can be useful. (But that will also cause other issues, like blame taking ridiculous amounts of time.)
Neither option is wrong, they just have different tradeoffs.
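As a concrete sketch of the two tradeoffs (assuming git is available; the /tmp paths and the throwaway repo are purely illustrative): a --depth=1 clone truncates history to the newest commit, while a --filter=blob:none clone keeps the full commit graph and fetches blobs on demand.

```shell
# Build a tiny throwaway repo with two commits (paths are arbitrary).
rm -rf /tmp/gr-src /tmp/gr-shallow /tmp/gr-partial
git init -q /tmp/gr-src
git -C /tmp/gr-src -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first"
git -C /tmp/gr-src -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second"
# Partial clone requires the "server" side to allow filters.
git -C /tmp/gr-src config uploadpack.allowFilter true

# Shallow clone: only the newest commit's history comes down.
git clone -q --depth=1 file:///tmp/gr-src /tmp/gr-shallow
git -C /tmp/gr-shallow rev-list --count HEAD   # prints 1

# Blobless partial clone: the full commit graph, blobs fetched on demand.
git clone -q --filter=blob:none file:///tmp/gr-src /tmp/gr-partial
git -C /tmp/gr-partial rev-list --count HEAD   # prints 2
```

The file:// URL matters: a plain local path makes git take a hardlink shortcut and ignore --depth.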
After you have pushed it once, it is pretty fast.
Competence is expensive. Supply is low and demand is high.
What are you smoking? Shallow clones don’t modify commit hashes.
The only thing that you lose is history, but that usually isn’t a big deal.
--filter=blob:none probably also won’t help too much here, since the problem with node_modules is more about millions of individual files than about large files (although both can be annoying).
Yup, that “what can I start in 10min” question really ruins a lot of productivity.