delichon 9 hours ago

I was using an angle grinder to strip paint off of a large table top and necessarily had to remove the guard. This is one of the most dangerous tools I have and I'm very aware of it, so I carefully gripped it toward the end of the handle well away from the disk. For the first few minutes I was very conscious about the position of my grip. Then a half hour later I glanced down and saw my top finger about a centimeter away from the spinning disk. I had gradually choked my grip up in order to get better leverage. Another few seconds and it could have become a more memorable incident. I normalized my deviance just by failing to notice it for a few minutes.

I see the same forces working on my software practices. For instance I start with wide and thorough testing coverage and over time reduce it to the places I usually see problems and ignore the rest. Sometimes production can be nearly maimed before I notice and adjust my grip.

  • 123yawaworht456 9 hours ago

    > Another few seconds and it could have become a very memorable incident.

    it takes a bit of pressure for an angle grinder to cut, even when you're cutting soft material like wood. accidental contact with a spinning disc would give you a nasty flesh wound, but you wouldn't lose your fingers or anything.

    the real danger is the disc breaking up at 10K RPM and shrapnel flying in your face. you might easily lose an eye.

    • klodolph 8 hours ago

      That’s not the only real danger.

      The angle grinder can suddenly shift in practically any direction. It’s very easy to lose control of it.

      • potato3732842 7 hours ago

        The 4.5" angle grinder is like an aggressive little dog that's just waiting for a chance to bite you.

        The table saw is like an old pitbull that generally sleeps.

        The 6"/7"/9" grinder is a skunk.

        Everyone who's ever given it the chance has a story about the time the angle grinder nipped at them. The people who pestered the old pitbull until it got sick of their crap have 9 fingers. Nobody has problems with the skunk because it's pretty obvious what'll happen if you disrespect it.

      • convolvatron 7 hours ago

        That's only a problem if it catches your clothes and gets dragged into your body. Only turn the switch into the locked position when really necessary. I usually wear leathers. But if it does happen you're looking at a couple months of road rash, not amputation.

    • [removed] 8 hours ago
      [deleted]
  • PaulHoule 8 hours ago

    Software testing practices are still pretty controversial.

    I spent about a week redesigning the interfaces of a security-sensitive system to make it testable and developing a test harness; the actual implementation was about a day of work. I felt it was a great use of time.

    I have not found an answer for testing React components that seems like a good use of resources. I've tried numerous test libraries from Enzyme to react-testing-library and found them all to be terribly flawed (sorry, if you think "I shouldn't have to modify my code to write tests" and then tell me to "use data-testids" that's the end of the conversation. I have an hour to write a test, I don't have two weeks to diagnose how a third party library I use interacts with the Rube Goldberg machine react-testing-library uses to emulate ARIA support)

    Code that is basically "functional" and doesn't interact with a large mysterious system is straightforward to test; a lot of UI code has the issue that it not only calls a mysterious large and complex system but it also gets called by a mysterious large and complex system. I'd really love to have a system that would give me a heads up that some CSS I changed has a spooky effect at a distance on part of the UI because of the complexity of selectors, but that's asking a lot.

    https://understandlegacycode.com/blog/key-points-of-working-...

    makes the strong case that tests have to be really fast because you should have hundreds or thousands of them that run on every build. Feathers makes the best intellectual case for testing when testing is difficult that I've ever seen, but some of the tests I'm proudest of could never run quickly: I built a "Super Hammer" test that ran a race-condition-prone system with 1000 threads for about 40 seconds, and it wouldn't have been effective if it were dramatically shorter. You might do 10 builds a day, so you could easily do 2500 builds a year, and that adds almost 28 hours of waiting a year -- that's just one test, and it costs $2800 a year for a fully loaded FTE who costs $100 an hour. It probably costs more than that, because the long build sets idle hands in motion doing the devil's work and probably wastes several times more time than that.
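    For concreteness, a scaled-down sketch of that kind of hammer test (hypothetical Counter class; the thread count and duration here are tiny compared to the 1000-thread, 40-second run described above, and Python stands in for whatever the real system was written in):

```python
import threading

class Counter:
    """A counter guarded by a lock. Remove the lock and the hammer
    test below starts failing intermittently once enough threads
    pile on for long enough."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self._value += 1

    @property
    def value(self):
        return self._value

def hammer(counter, n_threads=100, increments_per_thread=1000):
    """Stress a shared structure from many threads at once and return
    the final count; any lost update shows up as a shortfall."""
    def worker():
        for _ in range(increments_per_thread):
            counter.increment()
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value
```

    With the lock in place the final count is exactly n_threads * increments_per_thread; delete the lock and the shortfall only shows up reliably when the run is long enough for rare interleavings to occur, which is why such tests resist being made fast.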

    • pixl97 6 hours ago

      Companies: "your testing is costing us thousands of dollars"

      Devs: "So how much does a few hours of outage cost again?"

      • PaulHoule 4 hours ago

        In at least one case, a test routine activated in production created a terrible loss

        https://www.henricodolfing.com/2019/06/project-failure-case-...

        • hex4def6 3 hours ago

          Wow. Ignoring the proximate cause of leaving that code in there, imagine being the poor schmuck that forgets to deploy a code update to 1 of 8 servers, causing $440 million in damages, basically destroying a company overnight. It's so far outside of comprehension at that scale.

    • yashap 4 hours ago

      > I have not found an answer for testing React components that seems like a good use of resources

      > …

      > I'd really love to have a system that would give me a heads up that some CSS I changed has a spooky effect on a distance on part of the UI because of the complexity of selectors but that's asking a lot

      I would’ve agreed until recently. I always found basically all other forms of testing valuable (unit tests of almost everything on the BE, unit tests of FE business logic, BE integration tests, E2E tests), but not testing of the visual elements of the FE.

      But the company I work at, ~6 months ago we gave this product a try, and honestly it’s pretty incredible: https://www.meticulous.ai/

      They basically record sessions of real usage in our staging environment, and then replay them against your branch, taking all the same interactions and mocking responses to all network calls. It records tonnes of these sessions and is very smart about which ones it uses for a given change. It flags any visual differences, and you can OK them (or not). There’s a bit of work to initially integrate, but then you don’t write any tests, and you get pretty amazing coverage. It has the odd false positive, but not many, and they’re easy to review/approve in their web UI. They’re also a small startup willing to work super closely with you (we share a Slack chat with them; they’re very open to feedback and iterating quickly on it).

      I swear I’m not a paid shill or affiliated with them in any way, just a user who really loves the product. I was skeptical it’d work well at first, but it’s honestly been great, has caught many potential regressions, I feel we’re getting much better coverage than we would with handwritten UI tests. It’s very worth a look IMO if you’re not satisfied with your visual tests. It’s not an E2E testing tool, because the network requests are recorded/replayed (so it can’t stop BE changes that break the FE), but it’s amazing for testing so many elements of the FE.

    • crdrost 3 hours ago

      Hm. Just thinking out loud. TLDR is that I think the core of the solution to good testing of React components would look like using them the way the Web Component model does?

      So one thing that I keep coming back to as kind of a "baseline" is called Functional Core, Imperative Shell. It's a little hard to explain in a short space like this and the presentations available on YouTube and blog articles are a bit confusing, but basically it asks for your application to be broken into three parts: (1) modules in the "functional core" define immutable data structures and purely deterministic transformations between them; (2) modules in imperative libraries each "do one thing and do it well", they might call the SaveUser API or something like that; and (3) these are held together by a thin shell of glue code that needs to have no real logic (that's for the functional core) and needs to not do any real operations directly (that's for the libraries). And the reason to break the app apart like this is that it's test-centric: modules in (1) are unit-tested without mocks, by creating immutable data structures, feeding to the transforms, and checking the output; modules in (2) will init a real connection to a dev server upstream and are integration-tested by sending the real request to the real server; and the shell in (3) is basically so simple that it requires one end-to-end test that just makes sure that everything compiled together all right and can initialize at runtime. So this is what software looks like if you elevate Testing to be the One Core Pillar of the application, and demand "no mocks!" as part of that.
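      The three-layer split above can be sketched in miniature (hypothetical names, and Python rather than React, purely for illustration):

```python
from dataclasses import dataclass, replace

# (1) Functional core: immutable data + pure transformations.
#     Unit-tested without mocks: build a value, transform, check output.
@dataclass(frozen=True)
class User:
    name: str
    email: str

def normalize_email(user: User) -> User:
    return replace(user, email=user.email.strip().lower())

# (2) Imperative library: "does one thing and does it well"
#     (e.g. calls the SaveUser API). Integration-tested against a
#     real dev server, not mocked.
def save_user(user: User) -> None:
    pass  # the real HTTP call would live here

# (3) Thin shell: glue with no logic of its own.
#     One end-to-end test just checks that everything wires up.
def handle_signup(name: str, email: str) -> None:
    save_user(normalize_email(User(name=name, email=email)))
```

      The payoff is in the test plan: (1) is tested with plain asserts and no mocks, (2) against a real dev server, and (3) with a single smoke test.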

      Q1, how do we apply this to a React app? Well, if you think in terms of UI, you can kind of think of an entire view as being kind of (3) as long as you aggressively oversimplify the behaviors: so you click on some part of some view and it dispatches some ButtonClicked data structure into some view-wide event queue, but it does no logic of its own. Reminds me of Redux, also reminds me a lot of Lit and web components where they just kind of emit CustomEvents but aren't supposed to do anything themselves.

      Q2, how do components fit in? We have to be a little more careful there. You're talking about a modular architecture though, real thick components. Componentizing takes us a step back from that ideal, right? It says "I don't want this to look like a single unified whole that is all tested at once, I want this to look like a composition of subsystems that are reusable and tested independently."

      A simple example might be a tabbed view or accordion control, I want to coax the viewer through these N different steps, the previous step needs to be complete and then you can go to the next one. And I want the components to be each of these views. (The actual tab view or accordion view is of course another component, but it's a "thin component" in the above sense, it doesn't actually have any imperative library and the logic is relatively trivial, it doesn't generate the sorts of questions you're asking about.)

      So just to roll up a random mental example, one of these tabs is some PermissionsEditor component, once you initialize it, it has everything it needs inside the component to fetch permissions from the API, fetch the current user, see what permissions the current user is allowed to grant to other users (or themselves?)... but the other tabs need to be dynamically responsive, once you add yourself to the group that can edit Flotsam and Jetsam, going to the Flotsam tab the "Edit" button should no longer be grayed out etc.

      Then I think the proper way to view these thicker components is as being inserted at level (2) into the main application? So the main application just treats them as imperative libraries: "I will give you a div and call your init_permissions_editor function with that div and you render into there. I will give you a channel to communicate events to me on. You will provide me the defs of the immutable data structures you'll send down that channel, and I will provide deterministic transformations of those events into other events that I need to do."

      With some caveats, yeah, I'd basically say this is the web-component model. Your external application just integration-tests that init_permissions_editor will render _something_ into a blank <div> given. Your PermissionsEditor component is responsible for integration-testing that it can create permission, add user to group, all of that, and is responsible for testing that it emits certain events when these things happen.
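      A toy sketch of that host/component contract (names echo the hypothetical init_permissions_editor above; Python and a plain list stand in for the real DOM/JS machinery):

```python
import queue
from dataclasses import dataclass

# Immutable event definition the component publishes as part of its
# contract with the host application.
@dataclass(frozen=True)
class PermissionGranted:
    user: str
    group: str

def init_permissions_editor(mount: list, events: queue.Queue) -> None:
    """Render into the mount point the host gave us and report what
    happens on the event channel; the host never reaches inside."""
    mount.append("permissions-editor")  # stand-in for rendering into a <div>
    events.put(PermissionGranted(user="me", group="flotsam-editors"))
```

      The host's integration test only checks that something rendered into the mount it provided; the component owns the tests for its own behavior and the events it emits.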

  • lanstin 9 hours ago

    As you learn in biking, the positions and velocities of objects change smoothly, but awareness of the positions and velocities of objects can change very discontinuously.

    • digitalsushi 7 hours ago

      you also learn in biking that a helmet gives you a redo, just like the blade guard does

      • LoganDark 6 hours ago

        You learn that on basically anything that you can fall or get knocked off of. For me it was an electric scooter!

  • hinkley 6 hours ago

    If you're not familiar with the shop tool space: A band saw or a table saw can chop off a finger (or three) and surgeons can reattach them. It's very skilled work but you have good odds these days, especially if you've been socialized to put the finger(s) on ice and go to the ER.

    A hammer press or a grinder, or a few other tools will just take your finger away.

  • lostlogin 5 hours ago

    I was talking with a friend who runs a building site. One of his crew took the guard off an angle grinder to fit a disc that was designed for a larger tool.

    With it tucked under his arm he plugged it in.

    It was already switched on, and when plugged in it cut the artery and nerve to his arm. He survived the blood loss and has most function back, but it’s taken a year.

  • scarecrowbob 6 hours ago

    In my maker space, guards are required. I have taken several trainings with people who use angle grinders professionally and apparently removing a guard is the normal professional practice for a lot of well-founded reasons.

    This is one reason why I do not want to work in a trade like welding or fabrication professionally. At the same time it indicates that with care it's possible to follow the norms of a profession for long periods of time even if they often seem dangerous.

    One thing I noticed when I was climbing a lot: the safer and more knowledgeable climbers I went out with used far less redundant equipment (fewer locking or doubled carabiners, for instance). At the same time they were far pickier about how the equipment was used; one partner would yell at folks to keep hands off the grigri (a belay device that folks very commonly hold while using it).

    I get why different equipment calls for various "guard rails". At the same time, safety is in the usage and not the devices- forgetting that fact is dangerous in itself.

    • rightbyte 5 hours ago

      > apparently removing a guard is the normal professional practice for a lot of well-founded reasons.

      Dunno why there are no transparent guards to use when you want to see properly.

      • kps 4 hours ago

        No good material. The usual material for safety glasses (polycarbonate) scratches easily. A grinder guard would be see-through for about three seconds.

      • scarecrowbob 2 hours ago

        They kind-of do, but you wear it on your face.

        FWIW, the reason cited for removing the guards was not vision but access: it is difficult to get the tool in position when you're working on large objects that can't be repositioned.

        • saintfire 2 hours ago

          Was going to chime in that this is the reason.

          I used a grinder almost daily without a guard for years because what I was grinding was 2"+ steel rods buried underground.

          I couldn't move myself or reposition the work.

          I'm not advocating removing the guards but there are actual reasons why people hate them.

          I see some grinders, like Milwaukee's, have guards that can be quickly rotated around the blade. That's not always enough, though. I've also seen some welders cut their guards in half, for clearance and at least a shred more safety.

  • quickthrowman 8 hours ago

    > I was using an angle grinder to strip paint off of a large table top and necessarily had to remove the guard.

    You did not have to remove that guard, you chose the wrong tool for the job. Chemical stripper and sanding tools exist, use the right tool for the job or next time you might lose that finger.

    • Jtsummers 8 hours ago

      This is also part of the concept of normalization of deviance, though. Selecting tools because of proximity or accessibility rather than suitability.

      One time you "need" (feeling pressure from somewhere) to get something done and you reach for the tool at hand, it's not the right tool but it's workable. Then later you experience the same thing and think, "Oh yeah, that worked last time I can just do it again." Repeat a few more times and it becomes the tool you reach for instead of your backup when you have nothing else but "need" to get the work done now. Its unsuitability becomes apparent when you injure yourself or cause damage to the items you're working on because it was never the right tool, you were just careful in how you used it the first few times and got too comfortable with it as time went on.

      • bumby 6 hours ago

        I witnessed this quite a bit in software development. People use the frameworks they are comfortable with, even when they know they can't meet requirements. I've at least had some who were upfront about why by stating "because I don't want to learn something new." (She didn't last much longer)

        • Jtsummers 5 hours ago

          > "because I don't want to learn something new."

          I inherited a Visual Fortran program that was written in Visual Fortran because the original developer (who also maintained it for ~15 years) knew Fortran and was not comfortable in other languages. Not because it was a good idea. I was tasked with reimplementing it in C#, which was not a bad idea (the goal, which succeeded, was to have it as a standalone program or a plugin in another program also written in C#). Almost any language could have been used, but in context C# was a reasonable choice.

          It's a very common and, to me, very curious attitude. Once you learn how to reason about programs and the logic of programs generally, unless you hit the fringes most languages offer very similar semantics. That said, that particular program was decidedly unstructured (goto's everywhere) so perhaps he really didn't feel comfortable in modern languages or modern uses of his favored Fortran.

    • ansgri 6 hours ago

      Angle grinders are just awful in their versatility. It’s almost always the wrong tool for the job, but the right specialized tool often costs more or isn’t readily available.

    • [removed] 7 hours ago
      [deleted]
jonah 9 hours ago

Related: Overton Window

"The Overton window is the range of policies politically acceptable to the mainstream population at a given time. It is also known as the window of discourse.

The term is named after the American policy analyst Joseph Overton, who proposed that an idea's political viability depends mainly on whether it falls within this range, rather than on politicians' individual preferences.[2][3] According to Overton, the window frames the range of policies that a politician can recommend without appearing too extreme to gain or keep public office given the climate of public opinion at that time."

While the term was originally about politics, I feel it can be applied to many other aspects of humanity, and maybe it is just a specialized form of the normalization of deviance.

https://en.m.wikipedia.org/wiki/Overton_window

woopsn 7 hours ago

The referenced "researcher/guru" Sidney Dekker wrote a whole book titled Drift Into Failure. "Accidents come from relationships, not broken parts."

"Safety may not at all be the result of decisions that were or were not made, but rather an underlying stochastic variation that hinges on a host of other factors, many not easily within the control of those who engage in fine-tuning processes. Empirical success, in other words, is no proof of safety. Past success does not guarantee future safety. Murphy's law is wrong: everything that can go wrong usually goes right, and then we draw the wrong conclusion."

"Why, in hindsight, do all all these other parts (in the regulations, the manufacturer, the airline, the maintenance facility, the technician, the pilots) appear suddenly "broken" now? How is it that a maintenance program which, in concert with other programs like it never revealed any fatigue failures or fatigue damage after 95 million flight hours, suddenly became "deficient"? Why did none of these deficiencies strike anybody as deficiencies at the time?"

The central idea is not to (stop at) discovering what mistakes were made, but to understand why they didn't seem like mistakes to the individuals making them, and what suppressed the influence of anyone who might have warned otherwise.

082349872349872 6 hours ago

I don't know if the American Alpine Journal still reads like this, but I once went through a pile of 1960s or 70s back issues, and at the time it seemed a fairly regular article genre was:

"First we were at an altitude where we probably weren't thinking all that sharply to begin with, and then we got tired, cold, and hungry, and that's when we made the stupid mistake that killed ${COLLEAGUE}."

Verdex 9 hours ago

For a low dimensional space, I think their diagrams make sense. Like, when working with large industrial machines, the factors that affect safety are probably how close you are to the machine and how fast everything is going and with what urgency.

Even here they have a section on how the safety performance boundary is fuzzy and dynamic.

I wonder though what things look like with super high dimensions. When there are a 100 different things that go into whether or not you're being safe. That boundary's fuzzy and dynamic nature might extend clear across the entire space. And the fact that failures happen due to rare occurrences suggests that we're not starting at a point of safety but actually starting in a danger zone that we've just been lucky enough not to encounter failures for.

100% unit test coverage comes to mind (even for simple getters). Where some might see a slide towards danger as the coverage goes down, another sees more time to verify the properties that really matter. And I don't see why we can't get into the scenario where both are right and wrong in incomparable ways.

  • brianleb 3 hours ago

    >>I wonder though what things look like with super high dimensions.

    You need only look to healthcare in the USA. Many, many professionals (some of whom you never interact with) handing off patient cases to each other in a very carefully choreographed dance designed to meet legal and regulatory requirements; quality, safety, and care standards; financial responsibilities; and each individual's own personal standards for the quality of care they believe they provide.

    In healthcare, we often view risks using the Swiss Cheese Model [1]. Everyone makes mistakes sometimes, but the system of checks and balances catches most of them before they reach the patient. Prescriber ordered the wrong dose of medicine in the inpatient setting? Pharmacy intercepts and starts making calls or sending messages to verify. Pharmacy approves the order because "that's what they ordered?" Nursing lays hands and eyes to every medicine administered and can 'stop the line' if they deem appropriate. Not to mention the technical safeguards and guardrails (e.g., clinical decision support systems) that are also supporting everyone involved.

    But still, failures happen, and they can be catastrophic.

    [1] https://en.wikipedia.org/wiki/Swiss_cheese_model
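    The Swiss cheese intuition can be put in toy numbers (my illustration, not something from the model's literature): if layers fail independently, the chance an error reaches the patient is the product of each layer's miss rate.

```python
def p_reaches_patient(hole_probs):
    """Probability an error slips through every defensive layer,
    under the idealization that each layer misses it independently
    (real failures are often correlated, which is exactly when the
    model breaks down)."""
    p = 1.0
    for hole in hole_probs:
        p *= hole
    return p

# Three layers (prescriber, pharmacy, nursing) that each miss 5% of errors:
# p_reaches_patient([0.05, 0.05, 0.05]) -> 0.000125
```

    The model's real warning is the caveat in the comment above: holes are rarely independent, and when they line up under a common cause, errors reach the patient far more often than the product suggests.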

  • mzmzmzm 8 hours ago

    You might not be giving enough credit to the complexity of industrial labor. Industry tends to imply that the humans are a fallible part of a mechanical system, but the skill and culture of manufacturing laborers could be just as complex as in large software systems.

  • carapace 8 hours ago

    > I wonder though what things look like with super high dimensions.

    Biology.

roenxi 10 hours ago

I worry a lot about the similar forces that act on foreign policy and diplomacy. Unfortunately people don't get more cautious as the stakes get higher; organisations at all stakes and scales tend to fail in the same way.

  • 082349872349872 6 hours ago

    Henry Kissinger instigated many war crimes because he felt he was keeping the world from failing in the same way that had led up to World War I.

  • InDubioProRubio 9 hours ago

    And at the same time, we try to project our idealized view of things on the world and do not engage threats early enough, ivan.

svaha1728 10 hours ago

Boeing is far from an anomaly. They’ve just reached the stage where it’s noticeable.

  • wpietri 10 hours ago

    And I'd add that airplanes are unusual in the degree to which declining quality is visible. Boeing is working in an area where badness signals are heavily amplified.

    My guess is that what's going on with Boeing is not that they're a particularly bad apple. It's that standard American business practices have slowly crept in, displacing the previous culture, which had focused on safety and quality. So I'd say Boeing was an anomaly that has now become typical. The interesting question then becomes: what is the deviance that has become normalized so broadly?

    • InDubioProRubio 10 hours ago

      The end of the Cold War, and a true market due to systemic competition? It gave unions leverage, it gave politicians a metric ("continued existence"), and it gave companies something to be or not to be.

      Also you are right about this being the norm: https://en.wikipedia.org/wiki/Gilded_Age

      This is what a free market without systemic competition always boils down into.

    • vundercind 9 hours ago

      I think workers in prior generations (largely Silent and early Boomer) told us exactly what deviance was becoming normalized starting in the 70s through the 90s: managers who’d only ever managed, and don’t much care what they’re managing. MBAs and finance majors replacing the people who might not have any degree but did know what actual labor the company did and how the productive (not spreadsheets) side of the business functions, because they had personally done some of it for years.

      They’re still around to ask about this stuff.

      I think cutting way back on antitrust enforcement starting in the mid 70s accelerated the shift (that part, you may or may not hear from the older workers who watched this happening—though I bet their stories include some M&A activity as inflection points)

    • rsynnott 2 hours ago

      I mean, being one half of a duopoly can’t help. Even if an airline wants to buy A320neos instead of 737-Maxes, say, it realistically can’t; there’s a waiting list of years. The 737-Max can be almost arbitrarily bad, and people still have to buy it.

      Arguably, this is on the FTC and EC; if the mergers which created Airbus and current-Boeing hadn’t been permitted, the market would look very different, and there’d be less space for making a Terrible Plane; no-one would buy it.

      You see this elsewhere; look at the old Soviet/Warsaw Pact car industry, or somewhat more arguably the pre-European-accession British car market (before it joined the EU, the UK was quite protectionist around cars). _Terrible_ products, but people had to buy them because nothing else was realistically available. So, why bother putting any effort into making them good?

    • tristor 8 hours ago

      > The interesting question then becomes: what is the deviance that has become normalized so broadly?

      Welchian management. "Greed is good". It's so infected the American psyche that people believe corporate management has a fiduciary duty to shareholders to the point that they legally must put profit above all else. That is not in any way the case, and no law has formalized any such thing. It's a choice, a choice made every day in nearly every company, to create human suffering in pursuit of profit and to eschew any concept of loyalty or ethics.

      Jack Welch destroyed GE doing this, and his acolytes destroyed IBM, Boeing, and many other American institutions. His brand of management has now become the only brand of management taught across nearly two generations of MBAs. It is now the normal culture of American corporations.

  • rob74 9 hours ago

    Yeah, the article is from 2017; otherwise they could have added the 737 MAX story as another perfect example...

    • emarsden 8 hours ago

      This is debatable. Perhaps the poor quality management issues and lack of rigour that have been seen in various Boeing production facilities are a case of normalization of deviance. However, the original problem with the 737 MAX was the top management decision not to invest in a new airframe design for cost/strategic reasons, to oblige designers to implement various unsafe workarounds to accommodate larger and more fuel-efficient engines that made the plane unstable, and to ruthlessly silence engineers who argued that this was unsafe. This problem was compounded by the FAA's move to increased delegation of safety oversight to designer-manufacturers, which left it with insufficient ability independently to assess the new design. These are both big, important decisions made by top leaders of the two organizations, rather than the slow progressive evolution driven by people's efforts to optimize their bit of the workplace which characterizes drift to danger.

      • outworlder 4 hours ago

        Yes, but it was also a problem of relying on just one AOA sensor. That was approved across the chain of command and deemed "good enough". It's pretty clear to anyone that a critical system such as this should not rely on just one sensor, which may be faulty.

        But they thought that, since the pilots would just have to do the 'runaway stabilizer procedure', it would be enough. They didn't consider how startling and different the system would feel.

akavel 8 hours ago

Ok, but apart from just noticing it, how can I/we combat the normalization of deviance?

I don't see practical guidance on how to do it in the article? Do I just sit down and throw my arms in the air, and complain "oh, how things are going in a bad way"?

  • yamrzou 6 hours ago

    From https://news.ycombinator.com/item?id=21406452:

    > One way to mitigate the "drift" is to have zero tolerance for deviation from procedure, but to also have a formal and rapid system for updating procedures, including explicitly temporary measures.

    • hinkley 4 hours ago

      That really doesn’t answer the question. I’ve been in Told You So situations, particularly last year, where people wanted to take the safeties off. These were literally the same people who voted to put them on in the first place.

      Kaboom.

  • renewiltord 4 hours ago

    Any easy distillation loses crucial tail frequencies. I read The Design of Everyday Things by Don Norman and The Field Guide to Understanding Human Error by Sidney Dekker back to back and it seemed to me that a lot of this was:

    1. Have ergonomic procedures

    2. Measure usage

    3. Treat compliance failure as a problem with the procedure

    4. Treat 100% compliance as evidence of lack of reporting

    5. Defence in depth

    If you want quick heuristics for a blind man, listen for "if they had just", "oh we never", "a competent X would have". All are signs your tools and procedures have problems. You should expect to have many low-level compliance failures but they should be uncorrelated (i.e. same person should not be making all the mistakes, many people should not be making the same mistake).

    I am not a professional in this field, however, so take this with a grain of salt as my understanding based off what I read.

lanstin 9 hours ago

The article has the line:

> (in particular if they are encouraged by a “cheaper, faster, better” organizational goal)

This struck me; I have never remotely worked for a place that seriously believed "you get what you pay for." I wonder what that would be like.

  • hinkley 4 hours ago

    Data set of three at a stretch: they go out of business. Some slowly, others fast.

derbOac 10 hours ago

The discussion in this essay applies to so many organizational domains if you stretch definitions just a bit.

mzmzmzm 8 hours ago

This is a compelling framework. While the author mostly applies it to examples of physically hazardous accidents, it could just as easily describe the lead-up to economic crashes or other less tangible disasters.

torginus 10 hours ago

This diagram looks weird to me, looks like being lazy counteracts the effects of being cheap, so that being both is less dangerous than just trying to save money or effort alone.

  • emarsden 10 hours ago

    "Being cheap" is a pressure from management to increase production in order to avoid economic failure. "Being lazy" is an effort from frontline workers to improve efficiency and avoid being swamped by production demand. One of the points of the diagram is that these partially cancel each other out, but the net effect of the addition of these two "vectors" is to push the system towards the failure boundary.

  • hinkley 4 hours ago

    Efficiency should be in the hands of the clever but lazy employees; they will find the real efficiencies. Said some German general who gets quoted a lot.

  • immibis 9 hours ago

    Can't fuck it up if you never even bother to do it!

travisjungroth 8 hours ago

I seriously read the title in the imperative and thought it was going to be some contrarian inspirational essay.

einpoklum 10 hours ago

Brief excerpt re the second term:

A detailed analysis of the organizational culture at NASA, undertaken by sociologist Diane Vaughan after the [Challenger shuttle destruction] accident, showed that people within NASA became so accustomed to an unplanned behaviour that they no longer considered it deviant, despite the fact that it far exceeded their own basic safety rules. This is the primary case study for Vaughan’s development of the concept of normalization of deviance.

  • PaulHoule 8 hours ago

    One of the major points in that book (seemingly missed by a lot of people who read it, and certainly missed if you only read about it) was that NASA had a system for normalizing deviance officially, in that they knew the Shuttle was unsafe at any speed and had numerous unacceptable safety flaws. For each launch they held a meeting where they reviewed all the things that they knew were wrong, decided that the risk was acceptable or could be managed, and went ahead anyway.

    When the O-ring problem was presented at this meeting it was just another in a long list of things that could blow up the shuttle and wasn't all that remarkable among them except that, in retrospect, it really did blow up the shuttle.

    That is, NASA's normalization of deviance wasn't like the individual worker who comes to the conclusion that it is OK to smoke pot and weld; rather, it was an official process in the organization. A similar thing is seen in other safety-critical fields. If you build a nuclear power plant with two units, one might be the mirror of the other, except you forgot to mirror how the door to the employee break room opens, so it opens the wrong way. Anything complex like that is going to wind up with numerous deviances of that general nature, and it is the job of a committee to fill out paperwork on each and every one of them and decide what is to be done, from nothing at all to reworking it or taking some remedial measures in operations.

evanjrowley 10 hours ago

This entire website is a gem. At least in my profession, I wish more peers would focus on these things.

  • einpoklum 10 hours ago

    Unfortunately, your manager typically wishes for peers who will get products out the door / tasks marked as completed :-(

Spivak 8 hours ago

I do wonder if the graph at the end is skewed by the specific phrase "normalization of deviance", because it's searching all of Google Books in aggregate and that phrase found a second home among conservative political writers describing LGBT acceptance. It's not an incorrect usage per se if you assume their premise, but it probably doesn't line up with discussions around workplace safety.

empath75 9 hours ago

One place that I see this happening is the Ukraine/Russia conflict, where just because there hasn't been a nuclear exchange yet, people assume that there won't be, and keep pushing the line on acceptable escalation (on both sides -- Russia in starting the war, and the West in defending Ukraine). Now we've got western tanks on the ground in Russia and Ukrainian drones bombing Moscow, and who knows what is going to be the triggering event. 75 years of MAD doctrine thrown out the window, and now we're deep in uncharted territory.

  • kiba 8 hours ago

    > One place that I see this happening is the Ukraine/Russia conflict, where just because there hasn't been a nuclear exchange yet, people assume that there won't be, and keep pushing the line on acceptable escalation (on both sides -- Russia in starting the war, and the West in defending Ukraine). Now we've got western tanks on the ground in Russia and Ukrainian drones bombing Moscow, and who knows what is going to be the triggering event. 75 years of MAD doctrine thrown out the window, and now we're deep in uncharted territory.

    No, this is because Russia doesn't want western military aid in Ukraine, which is why they make threats to deter the West from sending aid. We also know what their nuclear doctrine is. Hint: unless NATO invades Russia, nothing will happen.

    Neither the West nor Ukraine is interested in collapsing Russia as a state. Frankly, it's more likely that the continuation of the war in Ukraine will collapse Russia.

    • empath75 8 hours ago

      The collapse of Russia would also be the result of normalization of deviance. Russia is operating way outside of its "normal" geopolitical parameters, but seems to be carrying on as if nothing is going wrong.

throwaway984393 10 hours ago

I find this kind of thing fascinating. In the BDSM rope bondage world there is a lot of ceremony, almost theatrics, about safety. But there's no real safety, because the participants keep doing things everyone knows are unsafe. The Takate Kote tie is probably responsible for 80% of nerve impingement damage in rope bondage, yet it's wildly popular because people find it pleasing and keep coming up with new variations on it. Every time you bring up its danger, people shout you down like you're overreacting and they're sick of hearing from you, and then they go give some poor newbie wrist drop.

  • emarsden 10 hours ago

    I did not have BDSM rope bondage in mind when I wrote that article, but nice to know that these concepts can be roped in more widely than anticipated!