• 6 Posts
  • 52 Comments
Joined 1 year ago
Cake day: June 5th, 2023


  • How do torrents validate the files being served?

    Recently I read a post where the OP said they were transcoding torrents in place while still seeding them, so their question was whether this was even possible, since the files were no longer the same.
    A comment said yes, the torrent was being seeded with the new files and they were “poisoning” the torrent.

    So, how could this be prevented if torrents were used as a CDN?
    And in general, how is this possible? I thought torrents could only serve the original files, verified with a hash, and would reject any other data.
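    For context on why in-place transcoding breaks things: the .torrent metainfo contains a SHA-1 hash for every fixed-size piece of the payload, and a compliant client verifies each piece it downloads against those hashes before reusing or re-sharing it. A minimal sketch of that check (plain Python; the piece size here is just a typical value, not taken from any particular torrent):

```python
import hashlib

PIECE_LENGTH = 262144  # 256 KiB, a common piece size (illustrative)

def piece_hashes(data: bytes, piece_length: int = PIECE_LENGTH) -> list[bytes]:
    """SHA-1 hash of each fixed-size piece, as stored in the .torrent metainfo."""
    return [
        hashlib.sha1(data[i:i + piece_length]).digest()
        for i in range(0, len(data), piece_length)
    ]

def verify_piece(piece: bytes, index: int, expected: list[bytes]) -> bool:
    """A downloader checks every received piece against the metainfo hash."""
    return hashlib.sha1(piece).digest() == expected[index]

# Original file: pieces verify. A transcoded file of the same size does not.
original = b"x" * 300000
expected = piece_hashes(original)
assert verify_piece(original[:PIECE_LENGTH], 0, expected)

transcoded = b"y" * 300000  # same length, different bytes
assert not verify_piece(transcoded[:PIECE_LENGTH], 0, expected)
```

    So a verifying downloader discards the transcoded pieces; the “poisoning” hurts mostly because the seeder keeps advertising pieces it can no longer serve correctly.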


  • I’m just annoyed by the region issues: you get pretty biased results depending on which region you select.
    If you search for something specific to one region while another is selected, you’ll sometimes get empty results, which shows you won’t get relevant results unless you select the region properly.

    This is probably more obvious with non-technical searches. For example, my default region is canada-en, and if I search “instituto nacional electoral” I only get a wiki page, an international site, and some other random sites with no news; only when I change the region do I get the official page ine.mx and actual news. To me this means Kagi hides results from other regions instead of just boosting the selected region’s.







  • I want instances that block as few other users as possible so I can decide for myself what content I see.

    Then you want to self-host; otherwise you’ll always be at the mercy of someone else deciding which instances they want to federate with.

    Even then, keep in mind that the instances known for spam, bots, or shady content were blocked for a reason.


  • The last time I checked, Postgres gets big because of an activity log table used for deduplication; it stores the last 6 months of data. The devs mentioned you can delete entries up to some point (IIRC they said 3 months, but confirm first).

    As for pictrs, Lemmy caches a lot of stuff, so it copies a lot of data from other instances, even though it’s advertised that only media from your own instance is stored on your server.
    My solution was to disable pictrs, since I don’t upload media.
    Other solutions I’ve heard of are asking your instance’s users to upload media to any other media hosting service; images uploaded to Lemmy are just seen as URLs, so it wouldn’t make any difference.



  • That’s exactly what 1:a:0 does: from input 1, among its audio streams, select the first one.
    In this case, since the audio is the second stream of that input, 1:a:0 is the same as 1:1.

    I just tried it the other way, moving the audio from the mkv into the mp4, and it works properly.
    I can probably try bundling the video of the mkv into an mp4, since Jellyfin is going to be doing that anyway when I stream to most devices.
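    The equivalence above can be sketched as a tiny resolver: given the streams of one input in file order, an ffmpeg-style specifier like a:0 picks the nth stream of that type, which maps back to an absolute index. (Hypothetical helper for illustration only, not part of ffmpeg.)

```python
def resolve_spec(streams: list[str], spec: str) -> int:
    """Resolve an ffmpeg-style stream specifier ('a:0', 'v:0', 's:1')
    to an absolute stream index within one input file.
    `streams` lists each stream's type in file order."""
    kind, _, nth = spec.partition(":")
    wanted = {"a": "audio", "v": "video", "s": "subtitle"}[kind]
    matches = [i for i, t in enumerate(streams) if t == wanted]
    return matches[int(nth)]

# An mkv whose streams are: 0 = video, 1 = audio
streams = ["video", "audio"]
assert resolve_spec(streams, "a:0") == 1  # so -map 1:a:0 picks stream 1, same as -map 1:1
assert resolve_spec(streams, "v:0") == 0
```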




  • Thanks for all the information and advice!

    So in theory basic auth is enough when sent over HTTPS, right?
    If that’s the case, then the user would handle their own password and my API can keep storing just the hash.

    In another comment JWT was suggested; maybe that could also be a solution?
    I’m thinking the user could generate and sign the token while we store only the public key, which requires less strictness when handling it. That way we can validate that the token was signed by who we expect, and the user worries about the private key.
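    The first idea (basic auth over HTTPS, server stores only a salted hash) can be sketched with just the standard library; all the names here are illustrative, and this is only safe because TLS protects the credentials in transit:

```python
import base64, hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with a per-user salt; the API stores only (salt, hash).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check_basic_auth(header: str, users: dict[str, tuple[bytes, bytes]]) -> bool:
    """Validate an 'Authorization: Basic ...' header value against
    stored hashes. `users` maps username -> (salt, stored_hash)."""
    scheme, _, b64 = header.partition(" ")
    if scheme != "Basic":
        return False
    username, _, password = base64.b64decode(b64).decode().partition(":")
    record = users.get(username)
    if record is None:
        return False
    salt, stored = record
    return hmac.compare_digest(hash_password(password, salt), stored)

# Demo: register a user, then validate the header a client would send.
salt = os.urandom(16)
users = {"alice": (salt, hash_password("s3cret", salt))}
token = base64.b64encode(b"alice:s3cret").decode()
assert check_basic_auth(f"Basic {token}", users)
assert not check_basic_auth("Basic " + base64.b64encode(b"alice:wrong").decode(), users)
```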





  • Someone wants me to implement a way to access a resource without the extra HTTP calls OAuth requires. WSSE is a possibility, since I saw it has some standard way of sending credentials securely.
    I have been reading about WSSE for less than a week '^-^

    Yeah, the idea is that the tokens used to generate the digest WSSE requires would live in our secure environment, and that’s the question: how do you build a secure environment to store the tokens/API keys of users, which will be used to authenticate them against my API?
    I haven’t implemented this kind of thing, so I don’t know the best practices for storing this kind of sensitive data.
    So, would I need to research password vaults to store my users’ secrets so I can use them to authenticate them?

    I went with WSSE because sending a client id + secret seems like just a rewording of basic authentication, and sending credentials in plain text seems less secure than sending a hash.
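    For reference, the WSSE UsernameToken digest is Base64(SHA-1(nonce + created + secret)), which is why the server needs the plaintext secret available to recompute it; that is exactly the storage question above. A stdlib sketch (values illustrative; a real server must also reject reused nonces and stale timestamps):

```python
import base64, hashlib, hmac

def wsse_digest(nonce: bytes, created: str, secret: str) -> str:
    """WSSE UsernameToken PasswordDigest: Base64(SHA-1(nonce + created + secret))."""
    raw = hashlib.sha1(nonce + created.encode() + secret.encode()).digest()
    return base64.b64encode(raw).decode()

def verify(nonce: bytes, created: str, digest: str, secret: str) -> bool:
    """Server side: recompute with the stored secret and compare in constant time."""
    return hmac.compare_digest(wsse_digest(nonce, created, secret), digest)

nonce = b"d36e316282959a9ed4c89851497a717f"
created = "2023-06-05T12:00:00Z"
digest = wsse_digest(nonce, created, "my-api-key")
assert verify(nonce, created, digest, "my-api-key")
assert not verify(nonce, created, digest, "wrong-key")
```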



  • Based on the title you’re right: I asked how to do X when I probably need to do Y, but the first and last paragraphs state my actual requirement: a form of authentication that doesn’t require an extra HTTP call to generate a token.

    What I mean by this is that OAuth specifies the client needs to request an access token (and an optional refresh token) from the authorization server; afterwards the access token can be sent to the resource server (in this case my API), and if the token expires the client makes another request to the authorization server with the refresh token.
    Each call to the authorization server is the “extra HTTP call” I mentioned.

    Currently the only solution I’ve found that seemed somewhat secure was WSSE, but again, I’ve only worked with OAuth2 and hashed passwords (or even better, a dedicated service like Keycloak), so I’m not sure what the best way to store the data it requires is, or whether there’s a better option.

    I don’t know how to be clearer: is there a way to authenticate a client to the resource server (my API) without the client calling endpoints to generate tokens? Is there a way for the client to generate their own tokens and for me to validate them?