Comment by iforgotpassword 3 months ago

39 replies

The other issue is that people seem to copy configure/autotools scripts over from older or other projects, either because they are lazy or because they don't understand them well enough to write their own. The result is that even relatively modern code bases that only target something like x86, ARM, and maybe MIPS, and only GCC/Clang, still check for the size of an int, or which header is needed for printf, or whether long long exists... And then the code base never checks the generated macros in a single place, uses int64_t, and never checks for stdint.h in the configure script...

IshKebab 3 months ago

I don't think it's fair to say "because they are lazy or don't understand". Who would want to understand that mess? It isn't a virtue.

A fairer criticism would be that they lack the sense to use a saner build system. CMake is a mess but even that is faaaaar saner than autotools, and probably more popular at this point.

  • smartmic 3 months ago

    I took the trouble (and even spent the money) to get to grips with autotools in a structured and detailed way, buying a book [1] about it and reading as much as possible. Yes, it's not trivial, but autotools is not witchcraft either; as written elsewhere, it's a masterpiece of engineering. I approached it without prejudice and since then I have been more of a fan of autotools than a hater. Anyway, I highly recommend the book, and yes, after reading it I think autotools is better than its reputation.

    [1] https://nostarch.com/autotools2e

    • [removed] 3 months ago
      [deleted]
  • xiaoyu2006 3 months ago

    Autotools uses M4 to meta-program a bash script that meta-programs a bunch of C(++) sources and generates C(++) sources that utilize meta-programming for different configurations; after which the meta-programmed script, again, meta-programs monolithic makefiles.

    This is peak engineering.

    • 1718627440 2 months ago

      Yes, that sounds ridiculous, but it is that way so that the user can modify each intermediate step, which is the main selling point. As a user I really prefer that experience, which is why, as a developer, I put up with the nonsense of M4. (Which I think is more due to M4 being a macro language than to inherent language flaws.)

    • krior 3 months ago

      Sounds like a headache. Is there a nice Python lib to generate all this M4-mumbo-jumbo?

      • lionkor 3 months ago

        "Sounds complicated. I want it to throw exceptions and have significant whitespace on top of all that complexity!"

  • knorker 3 months ago

    autotools is the worst, except for all the others.

    I'd like to think of myself as reasonable, so I'll just say that reasonable people may disagree with your assertion that cmake is in any way at all better than autotools.

    • IshKebab 3 months ago

      Nope, autotools is actually the worst.

      There is no way in hell anyone reasonable could say that Autotools is better than CMake.

      • knorker 3 months ago

        My experience with cmake, though dated, is that it's simpler because it simply cannot do what autotools can do.

        It really smelled of "oh, I can do this better": you rewrite it, and as part of rewriting it you realise, oh, this is why the previous solution was complicated. It's because the problem is actually more complex than I thought.

        And then of course there's the problem where you need to install on an old release. But the thing you want to install requires a newer cmake (autotools doesn't have this problem because it's self contained). But this is an old system that you cannot upgrade, because the vendor support contract for what the server runs would be invalidated. So now you're down a rabbit hole of trying to get a new version of cmake to build on an unsupported system. Sigh. It's less work to just try to construct `gcc` commands yourself, even for a medium sized project. Either way, this is now your whole day, or whole week.

        If only the project had used autotools.

      • jeroenhd 3 months ago

        I've seen programs replicate autotools in their Makefiles. That's actually worse. I've also used the old Visual Studio build tooling.

        Autotools is terrible, but it's not the worst.

      • pletnes 3 months ago

        Configure-make is easier for someone else to use; configuring a CMake-based project is slightly harder. In every other conceivable way I agree 100% (until someone convinces me otherwise).

      • tpoacher 3 months ago

        And presumably the measure by which they are judged to be reasonable or not is if they prefer CMake over Autotools, correct? :D

        • ordu 3 months ago

          Correct. I avoid autotools and cmake as much as I can. I'd rather write Makefiles by hand. But when I need to deal with them, I prefer cmake. I can modify CMakeLists.txt in a meaningful way and get the results I want. I wouldn't touch an autotools build system because I was never able to figure out which of the files is the configuration meant to be edited by hand, as opposed to generated by scripts in other files. I tried digging through the documentation but never managed it.

  • NekkoDroid 3 months ago

    > CMake is a mess but even that is faaaaar saner than autotools, and probably more popular at this point.

    Having done a deep dive into CMake I actually kinda like it (really modern cmake is actually very nice, except the DSL but that probably isn't changing any time soon), but that is also the problem: I had to do a deep dive into learning it.

  • kazinator 3 months ago

    Someone who doesn't want to understand a huge mess should probably not be bringing it into their project.

    In software you sometimes have to have the courage to reject doing what others do, especially if they're only doing it because of others.

rollcat 3 months ago

This.

Simple projects: just use plain C. This is dwm, the window manager that spawned a thousand forks. No ./configure in sight: <https://git.suckless.org/dwm/files.html>

If you run into platform-specific stuff, just write a ./configure in simple and plain shell: <https://git.suckless.org/utmp/file/configure.html>. Even if you keep adding more stuff, it shouldn't take more than 100ms.

If you're doing something really complex (like say, writing a compiler), take the approach from Plan 9 / Go. Make a conditionally included header file that takes care of platform differences for you. Check the $GOARCH/u.h files here:

<https://go.googlesource.com/go/+/refs/heads/release-branch.g...>

(There are also some simple OS-specific checks: <https://go.googlesource.com/go/+/refs/heads/release-branch.g...>)

This is the reference Go compiler; it can target any platform, from any host (modulo CGO); later versions are also self-hosting and reproducible.

  • Levitating 3 months ago

    I want to agree with you, but as someone who regularly packages software for multiple distributions I really would prefer people using autoconf.

    Software with custom configure scripts is especially dreaded amongst packagers.

    • Joker_vD 3 months ago

      Why, again, does software in the Linux world have to be packaged for multiple distributions? On the Windows side, if you make an installer for Windows 7, it will still work on Windows 11. And to boot, you don't have to go through some Microsoft-approved package distribution platform and its approval process: you can, of course, but you don't have to; you can distribute your software by yourself.

      • michaelmior 3 months ago

        > Why, again, software in the Linux world has to be packaged for multiple distributions?

        Because a different distribution is a different operating system. Of course, not all distributions are completely different and you don't necessarily need to make a package for any particular distribution at all. Loads of software runs just fine being extracted into a directory somewhere. That said, you absolutely can use packages for older versions of a distribution in later versions of the same distribution in many cases, same as with Windows.

        > And to the boot, you don't have to go through some Microsoft-approved package distribution platform and its approval process: you can, of course, but you don't have to, you can distribute your software by yourself.

        This is the same with any Linux distribution I've ever used. It would be a lot of work for a Linux distribution to force you to use some approved distribution platform even if it wanted to.

      • rollcat 3 months ago

        As michaelmior has already noted, Linux is not an OS. Anyone is free to take the sources and do as they wish (modulo GPL), which is what a lot of people did. Those people owe you nothing.

        But consider FreeBSD. Contrary to Linux, it is a full, standalone operating system, just like Windows or macOS. It has pretty decent compatibility guarantees for each major release (~5 years of support). It also has an even more liberal license (it boils down to "do as you wish but give us credit").

        Consider macOS. Apple keeps supporting 7-year-old hardware with new releases, and even after that keeps the security patches flowing for a while. Yet still, they regularly cull backwards compatibility to keep moving forward (e.g. ending support for 32-bit Intel executables to pave the way for Arm64).

        Windows is the outlier here. Microsoft is putting insane amounts of effort into maintaining backwards compatibility, and they are able to do so only because of their unique market position.

      • hulitu 3 months ago

        > On the Windows side, if you make installer for Windows 7, it will still work on Windows 11.

        Do you speak from experience or from anecdotes ?

  • knorker 3 months ago

    Interesting that you would bring up Go. Go is probably the most head-desk language of all for writing portable code. Go will fight you the whole way.

    Even plain C is easier.

    You can end up needing a whole separate file just for OpenBSD, to work around the fact that some standard-library types differ between platforms.

    So now you need one file for all platforms and architectures where Timeval.Usec is int32, and another file for where it is int64. And you need to enumerate in your code all GOOS/GOARCH combinations that Go supports or will ever support.

    You need a file for Linux 32-bit ARM (int32/int32), one for Linux 64-bit ARM (int64/int64), one for OpenBSD 32-bit ARM (int64/int32), etc…. Maybe you can group them, but this is just one difference, so in the end you'll have to do one file per combination of OS and arch. And all you wanted was a pluggable "what's a Timeval?". Something that all build systems solved a long time ago.

    And then maybe the next release of OpenBSD they've changed it, so now you cannot use Go's way to write portable code at all.

    So between autotools, cmake, and the Go method, the Go method is by far the worst option for writing portable code.

    • rollcat 3 months ago

      I have specifically given an example of u.h defining types such as i32, u64, etc to avoid running a hundred silly tests like "how long is long", "how long is long long", etc.

      > So now you need one file for all platforms and architectures where Timeval.Usec is int32, and another file for where it is int64. And you need to enumerate in your code all GOOS/GOARCH combinations that Go supports or will ever support.

      I assume you mean [syscall.Timeval]?

          $ go doc syscall
          [...]
          Package syscall contains an interface to the low-level operating system
          primitives. The details vary depending on the underlying system [...].
      
      Do you have a specific use case for [syscall], where you cannot use [time]?
      • knorker 3 months ago

        Yeah, I've had specific use cases where I needed to use syscall. I mean... if there weren't use cases for syscall, it wouldn't exist.

        But not only is syscall an example of portability done wrong for APIs, as I said it's also an example of it being implemented in a dumb way causing needless work and breakage.

        Syscall as implementation leads by bad example because it's the only method Go supports.

        Checking for GOARCH+GOOS tuple equality in portable code is a known anti-pattern, for the reasons I've given and others, yet Go still decided to go with it.

        But yeah, autotools scripts often check for way more things than actually matter. Often because people copy paste configure.ac from another project without trimming.

technion 3 months ago

There's a trending post right now about printf implemented on bare metal, and my first thought was "finally, all that autoconf code that checks for printf can handle the use case where it doesn't exist".

epcoa 3 months ago

> either they are lazy or don't understand them enough to do it themselves.

Meh, I used to keep printed copies of autotools manuals. I sympathize with all of these people and acknowledge they are likely the sane ones.

  • Levitating 3 months ago

    I've had projects where I spent more time configuring autoconf than actually writing code.

    That's what you get for wanting to use a glib function.

[removed] 3 months ago
[deleted]
rbanffy 3 months ago

It’s always wise to be specific about the sizes you want for your variables. You don’t want your ancient 64-bit code to act differently on your grandkids’ 128-bit laptops. Unless, of course, you want to let the compiler decide whether to leverage higher-precision types that become available after you retire.