Comment by jagrsw 2 days ago

62 replies

The author has a knack for generating buzz (and making technically interesting inventions) :)

I'm a little concerned that no one (besides the author?) has checked the implementation to see if reducing the attack surface in one area (memory security) might cause problems in other layers.

For example, Filip mentioned that some setuid programs can be compiled with it, but it also makes changes to ld.so. I pointed this out to the author on Twitter, as it could be problematic. Setuid applications need to be written super-defensively because they can be affected by environment variables, file descriptors (e.g. there can be funny logic bugs if fd 1/2 is closed before a setuid app starts, and it then opens something and starts using printf(), think about it:), rlimits, and signals. The custom modifications to ld.so likely don't account for this yet?

In other words, these are still teething problems with Fil-C, which will be reviewed and fixed over time. I just want to point out that using it for real-world "infrastructures" might be somewhat risky at this point. We need unix nerds to experiment with it.

OTOH, it's probably a good idea to test your codebase with it (provided it compiles, of course) - this phase could uncover some interesting problems (assuming there aren't too many false positives).

jart 2 days ago

I've been doing just that. If there's a way to break fil-c we're gonna find it.

  • yjftsjthsd-h 2 days ago

    Wishful thinking: Any possible chance that means you might make a Fil-C APE hybrid? It would neatly address the fact that Fil-C already needs all of its dependencies to also use Fil-C.

jacquesm 2 days ago

If you are really concerned you should do this and then report back. Otherwise it is just a mild form of concern trolling.

  • jagrsw 2 days ago

    I checked the code, reported a bug, and Filip fixed it. Therefore, as I said, I was a little concerned.

    • jacquesm 2 days ago

      Yes, but the tone of your comment belies the fact that the author has a pretty good turnaround time for fixing bugs (I wish all open source projects were that fast) and listens to input. It makes me come away with a negative view of the project, when in fact the evidence points to the opposite.

      It's a 'damning with faint praise' thing and I'm not sure to what degree you are aware of it but I don't think it is a fair way to treat the author and the project. HN has enough of a habit of pissing on other people's accomplishments already. Critics have it easy, playwrights put in the hours.

      • jagrsw 2 days ago

        I understand your point, and I have the utmost respect for the author who initiated, implemented, and published this project. It's a fantastic piece of work (I reviewed some part of it) that will very likely play an important role in the future - it's simply too good not to.

        At the same time, however, the author seems to be operating on the principle: "If I don't make big claims, no one will notice." The statements about the actual security benefits should be independently verified; this hasn't happened yet, but it probably will, as the project is gaining increasing attention.

quotemstr 2 days ago

It's difficult for me to have a positive opinion of the author when he responds with dismissal and derision to concerns others have raised about Fil-C and memory safety under data races.

The fact is that Fil-C allows capability and pointer writes to tear. That is, when thread 1 writes pointer P2 to a memory location previously holding P1, thread 2 can observe, briefly, the pointer P2 combined with the capability for P1 (or vice versa, the capability for P2 coupled to the pointer bits for P1).

Because thread 2 can observe a mismatch between a pointer and its capability, an attacker controlled index into P2 from thread 2 can access memory of an object other than the one to which P2 points.

The mismatch of pointer and capability breaks memory safety: an attacker can break the abstraction of pointers-as-handles and do nefarious things with pointers viewed instead as locations in RAM.

On one hand, this break is minor and doesn't appear when memory access is correctly synchronized. Fil-C is plenty useful even if this corner case is unsafe.

On the other hand, the Fil-C author's reaction to discourse about this corner case makes me hesitant to use his system at all. He claims Java has the same problem. It does not. He claims it's not a memory safety violation because thread 1 could previously have seen P1 and its capability and therefore accessed any memory P1's capability allowed. That's correct but irrelevant: thread 2 has P2 and it's paired with the wrong capability. Kaboom.

The guy is technically talented, but he presents himself as Prometheus bringing the fire of memory safety to C-kind. He doesn't acknowledge corner cases like the one I've described. Nor does he acknowledge practical realities like the inevitability of some kind of unsafe escape hatch (e.g. for writing a debugger). He says such things are unnecessary because he's wrapped every system call and added code to enforce his memory model's invariants around it. Okay, is it possible to do that in the context of process_vm_writev?

I hope, sincerely, the author is able to shift perspectives and acknowledge the limitations of his genuinely useful technology. The more he presents it as a panacea, the less I want to use it.

  • pizlonator 2 days ago

    > Because thread 2 can observe a mismatch between a pointer and its capability, an attacker controlled index into P2 from thread 2 can access memory of an object other than the one to which P2 points.

    Under Fil-C’s memory safety rules, "the object at which P points" is determined entirely by the capability and nothing else.

    You got the capability for P1? You can access P1. That’s all there is to it. And the stores and loads of the capability itself never tear. They are atomic and monotonic (LLVM’s way of saying they follow something like the JMM).
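    In C11 terms, LLVM's "monotonic" ordering is memory_order_relaxed; a sketch of what an untearable capability word could look like (the names and the opaque struct are made up for illustration, not the actual runtime code):

```c
#include <stdatomic.h>

struct capability;  // opaque: bounds, type info, etc.
typedef struct capability capability;

// The capability word of a wide pointer, stored as a single relaxed atomic.
// A relaxed (LLVM "monotonic") load always returns some value that was
// actually stored: the word itself cannot be observed half-written, even
// though the (capability, intval) pair as a whole can still mix two writes.
static _Atomic(capability *) cap_word;

void store_capability(capability *c) {
    atomic_store_explicit(&cap_word, c, memory_order_relaxed);
}

capability *load_capability(void) {
    return atomic_load_explicit(&cap_word, memory_order_relaxed);
}
```

    Note the dispute in this thread is about the pair tearing, not about either word tearing on its own.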

    This isn’t a violation of memory safety as most folks working in this space understand it. Memory safety is about preventing the weird execution that happens when an attacker can access all memory, not just the memory they happen to get a capability to.

    > He claims Java has the same problem. It does not.

    It does: in Java, what object you can access is entirely determined by what objects you got to load from memory, just like in Fil-C.

    You’re trying to define "object" in terms of the untrusted intval, which for Fil-C’s execution model is just a glorified index.

    Just because the nature of the guarantees doesn’t match your specific expectations does not mean that those guarantees are flawed. All type systems allow incorrect programs to do wrong things. Memory safety isn’t about 100% correctness - it’s about bounding the fallout of incorrect execution to a bounded set of memory.

    > That's correct but irrelevant: thread 2 has P2 and it's paired with the wrong capability. Kaboom.

    Yes, kaboom. The kaboom you get is a safety panic because a nonadversarial program would have had in bounds pointers and the tear that arises from the race causes an OOB pointer that panics on access. No memory safe language prevents adversarial programs from doing bad things (that’s what sandboxes are for, as TFA elucidates).

    But that doesn’t matter. What matters is that someone attacking Fil-C cannot use a UAF or OOBA to access all memory. They can only use it to access whatever objects they happen to have visibility into based on local variables and whatever can be transitively loaded from them by the code being attacked.

    That’s memory safety.

    > He doesn't acknowledge corner cases like the one I've described.

    You know about this case because it’s clearly documented in the Fil-C documentation. You’re just disagreeing with the notion that the pointer’s intval is untrusted and irrelevant to the threat model.

    • quotemstr 2 days ago

      > The kaboom you get is a safety panic

      You don't always get a panic. An attacker who can get a program to access an offset he controls relative to P2 can access P1 if P2 is torn such that it's still coupled, at the moment of adversarial access, with P1's capability. That's dangerous if a program has made a control decision based on the pointer bits being P2. IOW, an attacker controlled offset can transform P2 back into P1 and access memory using P1's capability even if program control flow has proceeded as though only P2 were accessible at the moment of adversarial access.

      That can definitely enable a "weird execution" in the sense that it can let an attacker make the program follow an execution path that a plain reading of the source code suggests it can't.

      Is it a corner case that'll seldom come up in practice? No. Is it a weakening of memory safety relative to what the JVM and Rust provide? Yes.

      You are trying to define the problem away with sleight-of-hand about the pointer "really" being its capability while ignoring that programs make decisions based on pointer identity independent of capability -- because they're C programs and can't even observe these capabilities. The JVM doesn't have this problem, because in the JVM, the pointer is the capability.

      It's exactly this refusal to acknowledge limitations that spooks me about your whole system.

      • pizlonator 2 days ago

        > An attacker who can get a program to access an offset he controls relative to P2 can access P1 if P2 is torn such that it's still coupled, at the moment of adversarial access, with P1's capability

        Only if the program was written in a way that allowed for legitimate access to P1. You’re articulating this as if P1 came out of thin air; it didn’t. It’s the capability you loaded because the program was written in a way that let you have access to it. Like if you wrote a Java program in a way where a shared field F sometimes pointed to object P1. Of course that means loaders of F get to access P1.

        > That can definitely enable a "weird execution"

        Accessing a non-free object pointed to by a pointer you loaded from the heap is not weird.

        I get the feeling that you’re not following me on what "weird execution" is. It’s when the attacker can use a bug in one part of the software to control the entire program’s behavior. Your example ain’t that.

        > Is it a corner case that'll seldom come up in practice? No. Is it a weakening of memory safety relative to what the JVM and Rust provide? Yes.

        I don’t care about whether it’s a corner case.

        My point is that there’s no capability model violation and no weird execution in your example.

        It’s exactly like what the JVM provides if you think of the intval as just a field selector.

        I’m not claiming it’s like what rust provides. Rust has stricter rules that are enforced less strictly (you can and do use the unsafe escape hatch in rust code to an extent that has no equal in Fil-C).

pizlonator 2 days ago

Posts like the one I made about how to do sandboxing are specifically to make the runtime transparent to folks so that meaningful auditing can happen.

> For example, Filip mentioned that some setuid programs can be compiled with it, but it also makes changes to ld.so. I pointed this out to the author on Twitter, as it could be problematic.

The changes to ld.so are tiny and don’t affect anything interesting to setuid. Basically it’s just one change: teaching the ld.so that the layout of libc is different.

More than a month ago, I fixed a setuid bug where the Fil-C runtime was calling getenv rather than secure_getenv. Now I’m just using secure_getenv.

> In other words, these are still teething problems with Fil-C, which will be reviewed and fixed over time. I just want to point out that using it for real-world "infrastructures" might be somewhat risky at this point. We need unix nerds to experiment with.

There’s some truth to what you’re saying and there’s also some FUD to what you’re saying. Like a perfectly ambiguous mix of truth and FUD. Good job I guess?

  • fc417fc802 2 days ago

    Is it FUD? Approximately speaking, all software has bugs. Being an early adopter for security critical things is bound to carry significant risk. It seems like a relevant topic to bring up in this sort of venue for a project of this sort.

    • nickpsecurity 2 days ago

      It's true. I used to promote high-assurance kernels. They had low odds of coding errors but the specs could be wrong. Many problems Linux et al. solved are essentially spec-level. So, we just apply all of that to the secure designs, right?

      Well, those spec issues are usually not documented or new engineers won't know where to find a full list. That means the architecturally-insecure OS's might be more secure in specific areas due to all the investment put into them over time. So, recommending the "higher-security design" might actually lower security.

      For techniques like Fil-C, the issues include abstraction gap attacks and implementation problems. For the former, the model of Fil-C might mismatch the legacy code in some ways. (Ex: Ada/C FFI with trampolines.) Also, the interactions between legacy and Fil-C might introduce new bugs because integrations are essentially a new program. This problem did occur in practice in a few research works.

      I haven't reviewed Fil-C. I've forgotten too much C and the author was really clever. It might be hard to prove the absence of bugs in it. However, it might still be very helpful in securing C programs.

    • pizlonator 2 days ago

      It’s like half FUD.

      The FUDish part is that the only actual bug bro is referring to got fixed a while ago (and didn’t have to do with ld.so), and the rest is hypothetical.

  • walterbell 2 days ago

    > a perfectly ambiguous mix of truth and FUD

    Congrats on Fil-C reaching heisentroll levels!
