
  • I mean, you can get the same sequence of cards, as long as the mechanism used to select a card in #1 is the same as the one used in #2. It’s just like doing #2 52 times in advance and then recording the results.
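
    To make that equivalence concrete, here’s a minimal sketch in C. The fixed seed and the use of rand() are purely for illustration (a real shuffle wants a CSPRNG and unbiased range reduction):

        /* With the same PRNG stream, "shuffle the whole deck up front and
         * deal from the top" (#1) and "pick a random remaining card at each
         * draw" (#2) produce the same card sequence; a Fisher-Yates shuffle
         * is essentially #2 run 52 times with the results written down. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void) {
            int deck[52], remaining = 52;
            for (int i = 0; i < 52; i++) deck[i] = i;
            srand(12345);                    /* same seed => same sequence */
            while (remaining > 0) {
                int j = rand() % remaining;  /* #2: pick any remaining card */
                printf("%d ", deck[j]);      /* "draw" it */
                deck[j] = deck[--remaining]; /* drop it from the pool */
            }
            printf("\n");
            return 0;
        }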

    There are reasons you might want to do #1 that don’t relate to the sequence of cards coming up. In some problems involving multiple untrusted parties, it can be advantageous to be able to prove that you have not fiddled with the card order after the initial “deal”; one way to do this is to generate and then transmit an encrypted list of cards, then later send the decryption keys.

    https://en.wikipedia.org/wiki/Mental_poker

    Mental poker is the common name for a set of cryptographic problems that concerns playing a fair game over distance without the need for a trusted third party. The term is also applied to the theories surrounding these problems and their possible solutions. The name comes from the card game poker which is one of the games to which this kind of problem applies. Similar problems described as two party games are Blum’s flipping a coin over a distance, Yao’s Millionaires’ Problem, and Rabin’s oblivious transfer.

    The problem can be described thus: “How can one allow only authorized actors to have access to certain information while not using a trusted arbiter?” (Eliminating the trusted third-party avoids the problem of trying to determine whether the third party can be trusted or not, and may also reduce the resources required.)

    An algorithm for shuffling cards using commutative encryption would be as follows:

    1. Alice and Bob agree on a certain “deck” of cards. In practice, this means they agree on a set of numbers or other data such that each element of the set represents a card.
    2. Alice picks an encryption key A and uses this to encrypt each card of the deck.
    3. Alice shuffles the cards.
    4. Alice passes the encrypted and shuffled deck to Bob. With the encryption in place, Bob cannot know which card is which.
    5. Bob picks an encryption key B and uses this to encrypt each card of the encrypted and shuffled deck.
    6. Bob shuffles the deck.
    7. Bob passes the double encrypted and shuffled deck back to Alice.
    8. Alice decrypts each card using her key A. This still leaves Bob’s encryption in place though so she cannot know which card is which.
    9. Alice picks one encryption key for each card (A1, A2, etc.) and encrypts them individually.
    10. Alice passes the deck to Bob.
    11. Bob decrypts each card using his key B. This still leaves Alice’s individual encryption in place though so he cannot know which card is which.
    12. Bob picks one encryption key for each card (B1, B2, etc.) and encrypts them individually.
    13. Bob passes the deck back to Alice.
    14. Alice publishes the deck for everyone playing (in this case only Alice and Bob, see below on expansion though).

    The deck is now shuffled.
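
    To make the “commutative encryption” part concrete, here’s a toy sketch in C of SRA-style encryption (modular exponentiation with a shared prime), where E_k(m) = m^k mod p, so the layers can be peeled off in either order. The prime, key values, and function names are mine, purely for illustration; this is nowhere near secure as written:

        #include <stdint.h>
        #include <stdio.h>

        #define P (((uint64_t)1 << 61) - 1)  /* Mersenne prime 2^61 - 1 */

        /* base^exp mod m, by square-and-multiply */
        static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t m) {
            __uint128_t acc = 1, b = base % m;
            for (; exp; exp >>= 1) {
                if (exp & 1) acc = acc * b % m;
                b = b * b % m;
            }
            return (uint64_t)acc;
        }

        /* inverse of k mod m via extended Euclid; k must be coprime to m */
        static uint64_t modinv(uint64_t k, uint64_t m) {
            int64_t t = 0, newt = 1, r = (int64_t)m, newr = (int64_t)k;
            while (newr) {
                int64_t q = r / newr, tmp;
                tmp = t - q * newt; t = newt; newt = tmp;
                tmp = r - q * newr; r = newr; newr = tmp;
            }
            return (uint64_t)(t < 0 ? t + (int64_t)m : t);
        }

        static uint64_t enc(uint64_t card, uint64_t key) {
            return modpow(card, key, P);
        }
        static uint64_t dec(uint64_t card, uint64_t key) {
            return modpow(card, modinv(key, P - 1), P);  /* invert the exponent */
        }

        int main(void) {
            /* Keys must be coprime to P-1; real code would pick them randomly. */
            uint64_t a = 1000003, b = 998244353, card = 17;
            uint64_t both = enc(enc(card, a), b);  /* Alice's layer, then Bob's */
            /* Remove Alice's inner layer while Bob's outer layer is still on;
             * this only works because the cipher commutes. */
            printf("%llu\n", (unsigned long long)dec(dec(both, a), b));  /* 17 */
            return 0;
        }

    In the steps above, 2–7 would use enc() with the single keys A and B plus a shuffle, and steps 8 onward swap the single key out for the per-card keys.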




  • Basically every screenshot of the “lost” TUIs looks like a normal emacs/vim session to anyone who has learned about splits and :term (guess which god I believe in?). And people still use those near-constantly. Hell, my workflow is generally a mix of vim and vscode, depending on what machine and task I’m working on. And that is a very normal workflow.

    I use emacs, and kind of had the same gut reaction, but they do address it, and they have a valid point: the IDEs they’re talking about are set up out of the box and require little learning to use in that mode.

    Like, you can use emacs (and I’m sure vim) as an IDE, but what you really have is a toolkit of parts for putting together your own IDE. That can be really nice, and more flexible, but it’s also true that it isn’t an off-the-shelf, low-effort-to-pick-up solution.

    I recall emacs had some “premade IDE” project that I tried and wasn’t that enthusiastic about.

    I don’t know vim well enough to know what all the parts are. NERDTree for file browsing? I dunno.

    With emacs, I use:

    • magit as a git frontend
    • a compilation buffer to jump to errors
    • projectile to know the project build command, auto-identify the build system used for a given project, and search through project files
    • dired to browse files
    • etags and some language server (I think things have changed recently, but I haven’t been coding recently) to jump around the codebase
    • color syntax highlighting
    • .dir-locals.el to store per-project settings, like the build command used by projectile
    • the gdb frontend to traverse the code associated with lines in a stack trace on a running program
    • TRAMP to edit files on remote machines

    But that stuff isn’t generally set up or obvious out of box. It takes time to learn.

    EDIT: The “premade IDE” I was thinking of for emacs is eide:

    https://software.hjuvi.fr.eu.org/eide/



  • To clarify: I meant how do I do it via API calls,

    If you mean at the X11 call level, I think that it’s a window hint, assuming that you’re talking about a borderless fullscreen window, and not true fullscreen (like, DGA or DGA2 or something, in which case you don’t have a fullscreen X11 window, but rather direct access to video memory).

    https://specifications.freedesktop.org/wm-spec/latest/ar01s05.html

    See _NET_WM_STATE_FULLSCREEN, ATOM

    If you’re using a widget toolkit like gtk or something and writing the program, it’ll probably have some higher-level fullscreen toggle function that’ll flip that on X11. Ditto for SDL.

    If you mean in a script or something, I’d maybe try looking at xprop(1) to set that hint.
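
    And if you’d rather do it from C against raw Xlib, here’s a rough, untested sketch (toggle_fullscreen() is just my name for it): the EWMH way is to send a _NET_WM_STATE client message to the root window:

        /* Ask the window manager to toggle _NET_WM_STATE_FULLSCREEN on "win",
         * per the freedesktop.org spec linked above.
         * Build with: cc fs.c -lX11 */
        #include <X11/Xlib.h>
        #include <string.h>

        static void toggle_fullscreen(Display *dpy, Window win) {
            XEvent ev;
            memset(&ev, 0, sizeof ev);
            ev.xclient.type = ClientMessage;
            ev.xclient.window = win;
            ev.xclient.message_type = XInternAtom(dpy, "_NET_WM_STATE", False);
            ev.xclient.format = 32;
            ev.xclient.data.l[0] = 2;  /* 0 = remove, 1 = add, 2 = toggle */
            ev.xclient.data.l[1] = XInternAtom(dpy, "_NET_WM_STATE_FULLSCREEN", False);
            ev.xclient.data.l[2] = 0;  /* no second property */
            ev.xclient.data.l[3] = 1;  /* source indication: normal application */
            XSendEvent(dpy, DefaultRootWindow(dpy), False,
                       SubstructureRedirectMask | SubstructureNotifyMask, &ev);
            XFlush(dpy);
        }

        int main(void) {
            Display *dpy = XOpenDisplay(NULL);
            Window focused;
            int revert;
            XGetInputFocus(dpy, &focused, &revert);  /* toggle whatever has focus */
            toggle_fullscreen(dpy, focused);
            XCloseDisplay(dpy);
            return 0;
        }

    (The toolkit-level functions I mentioned are gtk_window_fullscreen() in GTK and SDL_SetWindowFullscreen() in SDL, which do roughly this for you.)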

    I’d also add, on the “user” front, that I don’t use F11, and I think that every window manager or desktop environment I’ve ever used provides some way to set a user-specified keystroke to toggle a window’s fullscreen state. I’ve had Windows-Enter set to do that for decades, on every environment I’ve used.



  • So, I’m not going to discourage people from doing stuff that they’re interested in, but if you’re interested in heading down the path of low-level security work like this as a career, I’ll leave you with this piece of advice, which I think is probably more valuable than the actual technical information I provided here:

    A great deal of security knowledge, and especially low-level security knowledge, has a short shelf life. That is, you spend time understanding something, and it’s only useful for a limited period of time.

    In software engineering, if you spend time to learn an algorithm, it will probably be useful pretty much forever; math doesn’t change. Sometimes the specific problems that an algorithm is especially useful for go away, and sometimes algorithms are superseded by generally-superior algorithms, but algorithms have a very long shelf life.

    Knowledge of software engineering at, say, the programming-language level doesn’t last as long. There’s only so much demand for COBOL programmers today, though you can carry things learned in one environment over to another, to a fair degree. But as long as you choose a programming language that is going to be around for a long time, the knowledge you spend time acquiring can continue to be valuable for decades, probably your entire career.

    Knowledge of a particular program has a shorter life. There are very few programs that will be around for an extremely long period of time, like emacs, and it’s hard to know in advance which those will be (though my experience has been that open-source software tends to do better here). For example, I have gone through various version control systems – CVS, VCS, SVN, BitKeeper, mercurial, git, and a handful of others. The time I spent learning the specifics of most of those is no longer very useful.

    Your professional value depends on your skillset, what you bring to the table. If you spend a lot of time learning a skill that will be applicable for your entire working life, then it will continue to add to the value that you bring to the table for your entire working life. If you spend a lot of time learning a skill that will not be of great use in five or ten years, then the time you invested won’t be providing a return to you after that point.

    That does not mean that everything in the computer security world has a short shelf life. Things like how public/private key systems work or understanding what a man-in-the-middle attack is remain applicable for the long haul.

    But a lot of security knowledge involves understanding flaws in very specific systems, and those flaws go away or become less relevant over time. Low-level security, which often comes down to the implementation characteristics of specific systems, is a prime example.

    The world does need low-level computer security experts.

    But I would suggest that anyone who is interested in a career in computer security, when studying things, keep in mind the likely longevity of what they are teaching themselves and ask whether that knowledge will still be relevant by the time they expect to retire. Everyone needs to learn some short-shelf-life material. But if one specializes in only short-shelf-life things, then they will need to keep committing time to re-learning new short-shelf-life material down the line as their current knowledge loses value. I’d try to keep a mix, where a substantial portion of what I’m learning will have a long shelf life, and the short-shelf-life stuff is learned with the understanding that I’m going to need to replace it at some point.

    I’ve spent time hand-assembling 680x0 and x86 code, and have written exploits dependent upon quirks in particular compilers and long-dead binary environments. A lot of that isn’t terribly useful knowledge in 2024. That’s okay – I’ve got other things I know that are useful. But if you go down this road, I would be careful to also allocate time to things that you can say, with a higher degree of confidence, will be relevant twenty, thirty, and forty years down the line.



  • There are various approaches, but the most common one in an x86 environment is overwriting the return address that was pushed onto the stack.

    When you call a function, the compiler generally maps it to a CALL instruction at the machine language level.

    At the time that a CALL instruction is invoked, the current instruction pointer gets pushed onto the stack.

    kagis

    https://www.felixcloutier.com/x86/call

    When executing a near call, the processor pushes the value of the EIP register (which contains the offset of the instruction following the CALL instruction) on the stack (for use later as a return-instruction pointer). The processor then branches to the address in the current code segment specified by the target operand.

    A function-local, fixed-length array will also live on the stack. If it’s possible to induce the code in the called function to overflow such an array, it can overwrite that instruction pointer saved on the stack. When the function returns, it hits a RET instruction, which will pop that saved instruction pointer off the stack and jump to it:

    https://www.felixcloutier.com/x86/ret

    Transfers program control to a return address located on the top of the stack. The address is usually placed on the stack by a CALL instruction, and the return is made to the instruction that follows the CALL instruction.

    If what overwrote the saved instruction pointer on the stack was the address of malicious code, that malicious code will now be executing.
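
    If it helps to see the shape of it, here’s the textbook form of such a vulnerable function in C. gets() was removed from the language in C11 precisely because of this, and on a modern system stack canaries, ASLR, and non-executable stacks will all get in the way of actually exploiting it, but this is the underlying flaw:

        #include <stdio.h>

        void vulnerable(void) {
            char buf[16];  /* fixed-length, function-local: lives on the stack */
            gets(buf);     /* no bounds check: a long enough input line writes
                              past buf, over the saved instruction pointer */
        }                  /* RET pops the (overwritten) saved pointer and jumps */

        int main(void) {
            vulnerable();
            return 0;
        }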

    If you’re wanting to poke at this, I’d suggest familiarizing yourself with a low-level debugger so that you can actually see what’s happening first, as doing this blindly from source without being able to see what’s happening at the instruction level and being able to inspect the stack is going to be a pain. On Linux, probably gdb. On Windows, I’m long out of date, but SoftICE was a popular low-level debugger last time I was touching Windows.

    You’ll want to be able to at least set breakpoints, disassemble code around a given instruction to show the relevant machine language, display memory at a given address in various formats, and single step at the machine language level.

    I’d also suggest familiarizing yourself with the calling convention for your particular environment, which describes what happens at the machine language level surrounding the call to and return from a subroutine. Such buffer overflow attacks also involve overwriting other data on the stack, and understanding what is being overwritten is necessary to understand such an attack.
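
    As a concrete, simplified picture (assuming a classic 32-bit cdecl-style frame; x86-64 passes most arguments in registers, and compilers may insert a stack canary between the buffer and the saved registers), the stack around such a call looks roughly like this:

        /*  higher addresses
         *  +----------------------+
         *  | caller's arguments   |
         *  +----------------------+
         *  | saved EIP (return)   |  <- the overwrite target; RET jumps here
         *  +----------------------+
         *  | saved EBP            |
         *  +----------------------+
         *  | char buf[16]         |  <- overflow writes upward, toward saved EIP
         *  +----------------------+
         *  lower addresses
         */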


  • So, I’ve got no problem with pages having page-specific functionality and allocating some kind of keybinding space to it, right? Like, okay: say browsers reserved the Control-Alt prefix for webpages and had a list of functions that webpages could implement, and when a page implements one, the browser adds a visible button in a toolbar, with a hover tip to find those. That visible toolbar would solve any issue of discoverability of such functionality on a webpage (and by implementing it in the browser, the browser could choose a more minimal form, like just an indicator that a page supports keybindings). That way the webpage doesn’t have to grab the browser’s keybindings. Or maybe we introduce a “browser button” or something, the way Microsoft did the Windows key.

    But what I don’t like is having access to native functionality blocked by webpages. I don’t think that they should have overlapping keybinding space.

    Emacs has a shitton of keybindings, users who heavily configure it, and a ton of add-on software that needs keybindings. What they did was reserve some keybinding space for the editor, some for major modes, some for minor modes, and some for user-specified keybindings. These don’t collide, so the user doesn’t get functionality blocked by another piece of software:

    https://www.gnu.org/software/emacs/manual/html_node/elisp/Key-Binding-Conventions.html

    Don’t define C-c letter as a key in Lisp programs. Sequences consisting of C-c and a letter (either upper or lower case; ASCII or non-ASCII) are reserved for users; they are the only sequences reserved for users, so do not block them.

    Changing all the Emacs major modes to respect this convention was a lot of work; abandoning this convention would make that work go to waste, and inconvenience users. Please comply with it.

    • Function keys F5 through F9 without modifier keys are also reserved for users to define.
    • Sequences consisting of C-c followed by a control character or a digit are reserved for major modes.
    • Sequences consisting of C-c followed by {, }, <, >, : or ; are also reserved for major modes.
    • Sequences consisting of C-c followed by any other ASCII punctuation or symbol character are allocated for minor modes. Using them in a major mode is not absolutely prohibited, but if you do that, the major mode binding may be shadowed from time to time by minor modes.

    I get that websites need to have keybinding space, and have a legit reason to want it. But I don’t think that they should share keybinding space with the browser’s native functionality. If we want to have a “search” shortcut, hey, that’s cool. But let’s leave the browser-native functionality available.

    In Firefox, I have:

    • Alt-f for find. By default, this is Control-f, but normally both Control- and Alt- are reserved for the browser, and I’ve swapped the Control and Alt prefixes so that the menu keys don’t crash into the GTK emacs-lite keybindings. Some websites override this, which is really annoying if I’m trying to navigate around using conventional search; in emacs, it’s common for users to use search constantly to navigate around in a document.

    • Slash. This opens a mini-find (because I’m using Vimium), but only if I don’t have a text-editing widget active; if one is active, the OS’s text-editing keybindings get it.

    So I’ve got two different search keybindings and both are inaccessible at various points, because other software packages want to use keybinding space and there’s no convention for avoiding collisions.

    My preference would be distinct keybinding spaces that don’t overlap:

    • keybinding space for Firefox itself
    • keybinding space for the OS to use in things like text widgets
    • keybinding space for the OS itself (Microsoft dealt with this by adding and reserving the Windows key and mostly using that, except for traditional pre-existing conventions like Alt-F4 or Alt-Enter)
    • keybinding space for OS add-ons
    • keybinding space for Firefox add-ons
    • keybinding space for websites

    And, insofar as possible (I realize that this isn’t always possible for non-modified keybindings), these shouldn’t change based on modality, like “this functionality isn’t available if you have a text widget active”.



  • Honestly, the ability of webpages to override menu keys is a long-running flaw in browser UI, IMHO.

    Firefox acquired a not-so-obvious way to disable that for a given site:

    1. Click the “lock icon” to the left of the URL in the URL bar.
    2. Click “connection secure”.
    3. Click “more information”.
    4. In the window that comes up, click the “permissions” tab.
    5. On that page, find the “override keyboard shortcuts” option and click “Block”.

    That will prevent that particular website from overriding your keybindings.

    This had been a long-running pet peeve of mine until I ran into someone explaining how to disable it, and I’d still bet that a ton of people who can’t find the option just put up with it. Like, the lemmy Web UI’s keyboard shortcuts clash with the GTK emacs-lite keybindings, which drives me nuts: hitting “Control-E” to go to the end of the line instead inserts some Markdown stuff by default.