jmclnx 9 hours ago

These days, that is true in Corporate IT. You have the choice of investing real dollars to get things faster: the more you invest, the faster it gets. But the speed difference is probably not worth the amount spent, so companies find it easier to throw hardware at the issue than to speed up the program(s). 35+ years ago things were far different; back then we did spend plenty of time and $ making software run faster.

These days, there's a real possibility the program(s) have a limited life span before the next upgrade. With today's hardware, you can bet you will get more performance per $ spent than by changing the software.

IIRC, RMS said something like "do not worry about performance, hardware will catch up". In the case of Emacs, it definitely did.

  • jillesvangurp 9 hours ago

    Things weren't actually that different 35 years ago. Your computer would get slow; you'd buy a new one. It would be faster for a while. Repeat. I've been doing this for 35 years, so I remember.

    The amount of money spent on software development was a lot smaller and there were far fewer programmers. And they were creating a lot of bloated nonsense that you could install on your 8086, Commodore 64, or whatever. Windows 1.0 is a good example. That was released nearly 35 years ago.

    These days the one thing that's different is that hardware ages a lot slower. I've been on several laptops with 16GB since 2012. Back in 2012 that was a lot. Now Apple seems to still find 8GB more than enough for anyone (Bill Gates pun intended). Things are moving much slower with hardware.

    • josephg 9 hours ago

      They have finally upped the base configuration to 16GB. But objectively, 8GB would (and should) be totally fine if modern software weren't so insanely bloated.

      I remember reading years ago about how people kept asking Microsoft's engineers why Visual Studio hadn't been updated to be a 64-bit native app. They said they didn't need to: if Visual Studio ever got anywhere near the 3GB memory limit (or whatever it is) for 32-bit Windows apps, they would spend the time to optimise the application. As a result, there was simply no benefit to running the program in 64-bit mode. Even for massive projects, it simply didn't need the larger address space.

  • sokoloff 9 hours ago

    > With today's hardware, you can bet you will get more performance per $ spent than by changing the software.

    With today’s hardware and today’s cost of software engineers…

fredtalty5 12 hours ago

The computing industry often sacrifices performance for convenience, complexity, or tight deadlines. I focused on making things faster by stripping out unnecessary layers, optimizing core processes, and prioritizing efficiency over feature creep. It’s amazing how much speed you can regain by simplifying instead of stacking. Performance isn’t a luxury—it’s foundational. The key is to treat it as a feature, not an afterthought.

  • jffhn 9 hours ago

    >Performance isn’t a luxury—it’s foundational.

    Reasons to always try to be nearly optimal on performance from the start, which I've rarely seen stated:

    1) What you can specify depends on what can be done, and to know what can be done you need to have tried your best. In particular, it lets you see early whether or not performance expectations are realistic.

    2) It's more difficult or impossible to improve performance later if doing so requires breaking an API. That turns the development process into O(n^2) in the number of layers, instead of O(n).

    3) Better lower-level performance makes higher-level code and architecture simpler, as you can just brute-force in more places for the same overall performance.

    • blitzar 9 hours ago

      No offence intended, but this is literally the pitch I have seen for why startups with 5 users need to architect their product for global scale and 1 billion concurrent users. (Those that try always burn out and fail.)

      • JohnBooty 7 hours ago

        In reality, I feel like this tends to translate to "we need to scale horizontally and make sure our code runs on a large number of machines" with little consideration for actually optimizing the code itself.

        In fact, devoting time to code optimization tends to be explicitly discouraged in my experience. The prevailing wisdom seems to be, "write shitty code... just make sure it runs on as many AWS machine instances as possible."

  • jasfi 10 hours ago

    New products or features ship with merely acceptable performance, sometimes not even that. Performance tuning is often deferred to a later version. It's really about priorities that have to be decided, and deadlines usually win.

    Those that prioritize performance upfront can find all that work thrown out if the design needs to change for some reason.

    But I think performance should be designed in upfront where possible. This is where experience helps a ton.

    • blitzar 8 hours ago

      Allowing (even encouraging) the paying down of technical debt is the real solution. Build the prototypes fast and broken; if they catch on, then fix them. This was the way things were once done, and how most of the billions were originally made.

      • jasfi 8 hours ago

        Not too broken, or you waste a lot of time fixing bugs. Better to build prototypes that are minimal but work well.

walterbell 18 hours ago

Is UI performance better in apps where the user's per-minute labor is expensive? How about apps in a time-limited multitasking workflow?

The Bloomberg terminal and some point-of-sale systems have been well regarded for interactive performance. What tooling was used to optimize them?

If we can use LLMs to rewind/recall activity, can we use continuous profiling (e.g. eBPF) to identify interactive hotspots in complex user workflows?
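
As a rough sketch of what that could look like (using the bcc Python bindings for eBPF; the binary path and the handle_event symbol below are placeholders, not any particular app's API), you can attach uprobes around a single UI event handler and histogram its latency, so slow interactions show up as data rather than anecdotes:

  #!/usr/bin/env python3
  # Sketch only: histogram the latency of one UI event handler with eBPF
  # uprobes via the bcc Python bindings (github.com/iovisor/bcc).
  # Needs root and bcc installed; binary path and symbol are placeholders.
  from time import sleep
  from bcc import BPF

  prog = r"""
  #include <uapi/linux/ptrace.h>

  BPF_HASH(start, u32);   // thread id -> entry timestamp (ns)
  BPF_HISTOGRAM(dist);    // log2 histogram of handler latency (ms)

  int trace_entry(struct pt_regs *ctx) {
      u32 tid = bpf_get_current_pid_tgid();
      u64 ts = bpf_ktime_get_ns();
      start.update(&tid, &ts);
      return 0;
  }

  int trace_return(struct pt_regs *ctx) {
      u32 tid = bpf_get_current_pid_tgid();
      u64 *tsp = start.lookup(&tid);
      if (tsp == 0)
          return 0;                              // missed the entry probe
      u64 delta_ms = (bpf_ktime_get_ns() - *tsp) / 1000000;
      dist.increment(bpf_log2l(delta_ms));
      start.delete(&tid);
      return 0;
  }
  """

  b = BPF(text=prog)
  # Placeholder target: point these at your app's event-dispatch routine.
  b.attach_uprobe(name="/usr/bin/myapp", sym="handle_event", fn_name="trace_entry")
  b.attach_uretprobe(name="/usr/bin/myapp", sym="handle_event", fn_name="trace_return")

  print("Tracing handle_event() latency... Ctrl-C to print the histogram.")
  try:
      while True:
          sleep(1)
  except KeyboardInterrupt:
      pass

  b["dist"].print_log2_hist("ms")

The same idea scales up from one handler to whole workflows by sampling user stacks at a fixed frequency, which is roughly what bcc's profile tool and most continuous profilers do.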

  • mr_toad 13 hours ago

    > How about apps in a time-limited multitasking workflow?

    I once worked with a taxi company whose despatch system ran on QNX. During peak hours average call times were around six seconds, and the system had to be very low latency and reliable. The UI was designed entirely around getting the relevant information from the caller and into the system in that six-second window. It was text-only, made heavy use of keyboard shortcuts, and used predictive text before that was even a thing.

    The “industry” can design performant systems when there’s money on the table.

    • josephg 9 hours ago

      Yep. Look at tools for software engineers. IntelliJ runs incredibly fast on a large modern computer, given what it does. Git is insanely fast given that almost every command is a process starting from scratch that then often scans your entire repository. And VS Code is probably the poster child for getting good performance out of Electron.

      But I think we only get that performance because we're spoiled for choice. People just wouldn't use these tools if they were slow. (Or they'd get fixed.) In other industries, that's just not the case.

      I have a friend in Australia who's a doctor. Apparently the computer system they use to look up medical records sometimes just freezes for 30 seconds at a time while it does who knows what. It's a real operational problem. But the procurement process is so complex that it'd be almost impossible to get someone who knows what they're doing to come in and fix the issue. Or the hospital could move systems - but that'd probably cost them millions of dollars. It's crazy.

  • HumanOstrich 9 hours ago

    Are you randomly compelled to interject something about LLMs/AI in every conversation? I've noticed this pattern on HN.

    • walterbell 9 hours ago

      Now that you ask - previous instance was 20 comments ago, in an LLM thread.