Comment by pizlonator 2 days ago

> An attacker who can get a program to access an offset he controls relative to P2 can access P1 if P2 is torn such that it's still coupled, at the moment of adversarial access, with P1's capability

Only if the program was written in a way that allowed for legitimate access to P1. You’re articulating this as if P1 was out of thin air; it’s not. It’s the capability you loaded because the program was written in a way that let you have access to it. Like if you wrote a Java program in a way where a shared field F sometimes pointed to object P1. Of course that means loaders of F get to access P1.

> That can definitely enable a "weird execution"

Accessing a non-free object pointed to by a pointer you loaded from the heap is not weird.

I get the feeling that you're not following me on what "weird execution" is. It's when the attacker can use a bug in one part of the software to control the entire program's behavior. Your example ain't that.

> Is it a corner case that'll seldom come up in practice? No. Is it a weakening of memory safety relative to what the JVM and Rust provide? Yes.

I don’t care about whether it’s a corner case.

My point is that there’s no capability model violation and no weird execution in your example.

It’s exactly like what the JVM provides if you think of the intval as just a field selector.

I’m not claiming it’s like what Rust provides. Rust has stricter rules that are enforced less strictly (you can and do use the unsafe escape hatch in Rust code to an extent that has no equal in Fil-C).

lifis 2 days ago

I think his argument is that you can have code like this:

  user = s->user;
  if(user == bob)
    user->acls[s->idx]->has_all_privileges = true;
And this happens:

1. s->user is initialized to alice.
2. Thread 1 sets s->idx to ((alice - bob) / sizeof(...)) and s->user to bob, but only the intval portion of the store lands; the capability still points to alice.
3. Thread 2 executes the if, which succeeds because the intvals match, and then unexpectedly grants all privileges to alice: the bob intval plus the idx points at alice, and the stale capability is still alice's, so the access is allowed.

It does seem like a real issue, although perhaps not very likely to be present and exploitable.

Seems perhaps fixable by making pointer equality require that capabilities are also equal.

  • pizlonator 2 days ago

    I understand his argument.

    Here are the reasons why I don’t buy it:

    1. I’m not claiming that Fil-C fixes all security bugs. I’m only claiming that it’s memory safe and I am defining what that means with high precision. As with all definitions of memory safety, it doesn’t catch all things that all people consider to be bad.

    2. Your program would crash with a safety panic in the absence of a race. Security bugs are when the program runs fine normally, but is exploitable under adversarial use. Your program crashes normally, and is exploitable under adversarial use.

    So not only is it not likely to be present or exploitable, but if you wrote that code then you’d be crashing in Fil-C in whatever tests you ran at your desk or whenever a normal user tried to use your code.

    But perhaps point 1 is still the most important: of course you can write code with security bugs in Fil-C, Rust, or Java. Memory safety is just about making a local bug not result in control of arbitrary memory in the whole program. Fil-C achieves that key property here, hence it's memory safe.

    • DreadY2K 12 hours ago

      > I’m only claiming that it’s memory safe and I am defining what that means with high precision

      Do you have your definition of memory safety written down anywhere? Specifically, one precise enough that if I observe a bug in a C program compiled via Fil-C, I can tell whether it's a Fil-C bug allowing (by your definition) memory unsafety (e.g. I'm pretty sure an out-of-bounds read would be memory unsafety), or a non-memory-safety bug that Fil-C isn't trying to prevent (e.g. I'm pretty sure a program that doesn't check for symlinks before overwriting a path is something you're not trying to protect against). I tried skimming your website for such a definition and couldn't find one; sorry if I missed it.

      I typically see memory safety discussed in the context of Rust, which considers any torn read to be memory-unsafe UB (even for types that don't involve pointers like `[u64; 2]`, such a data race is considered memory-unsafe UB!), but it sounds like you don't agree with that definition.

    • lifis 2 days ago

      In my understanding, the program can work correctly in normal use.

      It is buggy because it fails to check that s->idx is in bounds, but that isn't a problem if non-adversarial use of s->idx is in bounds (for example, if the program is a server with an accompanying client and s->idx is always in bounds when coming from the unmodified client).

      It is also potentially buggy because it doesn't use atomic pointers despite concurrent use, but I think non-atomic pointers work reliably on most compiler/arch combinations, so this is commonplace in C code.

      A somewhat related issue is that, since Fil-C capabilities are currently only at the object level, such an out-of-bounds access can reach other parts of the same object (e.g. an out-of-bounds access in an array contained in an array element can overwrite other elements of the outer array).

      It is true though that this doesn't give arbitrary access to any memory, just to the whole object referred to by whichever capability the racing read happens to observe, with the pointer value check being unrelated to the accessed object.

      • pizlonator a day ago

        If you set the index to `((alice - bob) / sizeof(...))` then that will fail under Fil-C’s rules (unless you get lucky with the tear and the capability still refers to Alice).

  • quotemstr 2 days ago

    Exactly. I agree that this specific problem is hard to exploit.

    > Seems perhaps fixable by making pointer equality require that capabilities are also equal

    You'd need 128-bit atomics or something. You'd ruin performance. I think Fil-C is actually making the right engineering tradeoff here.

    My point is that the way Pizlo communicates about this issue and others makes me disinclined to trust his system.

    - His incorrect claims about the JVM worry me.

    - His schtick about how Fil-C is safer than Rust because the latter has the "unsafe" keyword and the former does not is more definitional shenanigans. Both Fil-C and Rust have unsafe code: it's just that in the Fil-C case, only Pizlo gets to write unsafe code and he calls it a runtime.

    What other caveats are hiding behind Pizlo's broadly confident but narrowly true assertions?

    I really want to like Fil-C. It's good technology, and something like it could really improve the baseline level of information security in society. But Pizlo is going to have to learn to be less grandiose and knock it off with the word games. If he doesn't, he'll be remembered not as the guy who finally fixed C security but merely as an inspiration for the guy who does.

    • jstarks 2 days ago

      All I’m really hearing is that this guy rubs you the wrong way, so you’re not going to give him the benefit of the doubt that you’d give to others.

      I mean, maybe you’re right that his personality will turn everyone off and none of this stuff will ever make it upstream. But that kind of seems like a problem you’re actively trying to create via your discourse.

quotemstr 2 days ago

> Only if the program was written in a way that allowed for legitimate access to P1. You’re articulating this as if P1 was out of thin air; it’s not.

My program:

  if (p == P2) return p[attacker_controlled_index];
If the return statement can access P1, disjoint from P2, that's a weird execution for any useful definition of "weird". You can't just define the problem away.

Your central claim is that you can take any old C program, compile it with Fil-C, and get a memory-safe C program. Turns out you get memory safety only if you write that C program with Fil-C's memory model and its limits in mind. If someone's going to do that, why not write instead with Rust's memory model in mind and not pay a 4x performance penalty?

  • pizlonator 2 days ago

    > that's a weird execution for any useful definition of "weird".

    Weird execution is a term of art in the security biz. This is not that.

    Weird execution happens when the attacker can control all of memory, not just objects the victim program rightly loaded from the heap.

    > Your central claim is that you can take any old C program, compile it with Fil-C, and get a memory-safe C program.

    Yes. Your program is memory safe. You get to access P1 if p pointed at P1.

    You don’t get to define what memory safety means in Fil-C. I have defined it here: https://fil-c.org/gimso

    Not every memory safe language defines it the same way. Python and JavaScript have a weaker definition since they both have powerful reflection including eval and similar superpowers. Rust has a weaker definition if you consider that you can use `unsafe`. Go has a weaker definition if you consider that tearing in Go leads to actual weird execution (attacker gets to pop the entire Go type system). Java’s definition is most similar to Fil-C’s, but even there you could argue both ways (Java has more unsafe code in its implementation while Fil-C doesn’t have the strict aliasing of Java’s type system).

    You can always argue that someone else’s language isn’t memory safe if you allow yourself to define memory safety in a different way. That’s not a super useful line of argumentation, though it is amusing and fun.

    • quotemstr 2 days ago

      You may define "memory safety" as you like. I will define "trustworthy system" as one in which the author acknowledges and owns limitations instead of iteratively refining private definitions until the limitations disappear. You can define a mathematical notation in which 2+3=9, but I'm under no obligation to accept it, and I'll take the attempt into consideration when evaluating the credibility of proofs in this strange notation.

      Nobody is trying to hide the existence of "eval" or "unsafe". You're making a categorical claim of safety that's true only under a tendentious reading of common English words. Users reading your claims will come away with a mistaken faith in your system's guarantees.

      Let us each invest according to our definitions.

      • pizlonator 2 days ago

        > I will define "trustworthy system" as one in which the author acknowledges and owns limitations instead of iteratively refining private definitions until the limitations disappear.

        You know about this limitation that you keep going on about because it’s extremely well documented on fil-c.org.

    • torginus 2 days ago

      Sorry to intrude on the discussion, but I have a hard time grasping how to produce the behavior mentioned by quotemstr. From what I understand the following program would do it:

          int arr1[] = {1, 2, 3, 4, 5};
          int arr2[] = {10, 20, 30, 40, 50};
          int *p1 = &arr1[1];  
          int *p2 = &arr2[2];  
          int *p = choose_between(p1,p2);
      
          //then sometime later, a function gets passed p
          // and this snippet runs
          if (p == p2) {
           //p gets torn by another thread
           return p; // this allows an illegal index/pointer combo, possibly returning p1[1]
          }
      
      Is this program demonstrating the issue? Does this execute under Fil-C's rules without a memory fault? If not, could you provide some pseudocode that causes the described behavior?

      • pizlonator 2 days ago

        No, this program doesn’t demonstrate the issue.

        You can’t access out of bounds of whatever capability you loaded.

    • tialaramex 2 days ago

      > Rust has a weaker definition if you consider that you can use `unsafe`

      I don't see it. Rust makes the same guarantees regardless of the unsafe keyword. The difference is only that with the unsafe keyword you the programmer are responsible for upholding those guarantees whereas the compiler can check safe Rust.

      • foldr 2 days ago

        C is safe by the same logic, then? You can write safe code in anything if you don’t make mistakes.

  • dnr 2 days ago

    I'm not an expert here but I have to say this feels like a very weak objection.

    p points to P1. One thread reads through p. Another thread races with that and mutates p to point to P2. The result is the first thread reads from either P1 or P2 (but no other object).

    This seems totally fine and expected to me? If there's a data race on a pointer, you might read one or the other values, but not garbage and not out of bounds. I mean, if it could guarantee a panic that's nice, but that's a bonus, not required for safety.