I did not look for a consulting contract for 18 years. Through my old network more quality opportunities found me than I could take on.
That collapsed during the covid lockdowns. My financial services client cut loose all consultants and killed all 'non-essential' projects. Even though mine (which they had already approved) would save them $400K a year, they did not care! The word came down from the top to cut everyone -- so they did.
This trend is very much a top-down push. Inorganic. People with skills and experience are viewed by HR and their AI software as a flight risk and as unlikely to respond to whatever pressures they like to apply.
Since then it's been more of the same as far as consulting goes.
I've come to the conclusion that I'm better served by working on smaller projects I want to build rather than chasing big consulting dollars. I'm happier (now), but it took a while.
An unexpected benefit of all the pain is that I like making things again... but I am using Claude Code and Gemini. Amazing tools if you already have experience and know what you want out of them -- otherwise, in the hands of the masses, they mainly produce crap.
I have worked with a lot of code generation systems.
LLMs strike me as useful in much the same way. I can get most of the boilerplate and tedium done with LLM tools. Then, for the core logic -- especially learning or meta-programming patterns -- I need to jump in.
Breaking tasks down to bite size, and writing detailed architecture and planning docs for the LLM to work from, is critical to managing increasing complexity and staying within context windows. Also critical: ruthlessly throwing away things that do not fit the vision, and not being afraid to throw whole days away (not too often, though!)
For reference, I have built stuff that goes way beyond a CRUD app with these tools, in a tenth of the time it previously took me or less -- the key, though, is that I already knew how to do the work and how to validate LLM outputs. I knew exactly what I wanted a priori.
Code generation has technically always 'replaced' junior devs, and it has been around for ages; the results of the generation are just a lot better now. In the past, doing code generation regularly was a mixed bag of benefits and hassles; now it works much better and the cost is much less.
I started my career as a developer, and the main reasons I became a solutions/systems guy were money and that I hated the tedious boilerplate phase of every software development project over a certain scale. I never stopped coding, because I love it -- just not for large, soul-destroying enterprise software projects.
I've seen Piccalilli's stuff around and it looks extremely solid. But you can't beat the market. You either have what they want to buy, or you don't.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
The market is speaking. Long-term you’ll find out who’s wrong, but the market can usually stay irrational for much longer than you can stay in business.
I get the moral argument and even agree with it, but we are a minority, and of course we expect to be able to sell our professional skills -- but if you are 'right' and out of business, nobody will know. Is that any better than being 'wrong' and still in business?
You might as well work on product marketing for AI, because that is where the client dollars are allocated.
If it's hype, at least you stayed afloat. If it's not, maybe you find a new angle, if you can survive long enough? Just survive and wait for things to shake out.
Not wanting to help the rich get richer means you'll be fighting an uphill battle. The rich typically have more money to spend. And as others have commented, not doing anything AI related in 2025-2026 is going to further limit the business. Good luck though.
Rejecting clients based on how you wish the world would be is a strategy that only works when you don’t care about the money or you have so many clients that you can pick and choose.
Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.
Rejecting clients when you have enough is a sound business decision. Some clients are too annoying to serve. Some clients don't want to pay. Sometimes you have more work than you can do... It is easy to think, when things are bad, that you must take any and all clients (and when things are bad enough, you might be forced to), but that is not a good plan and is to be avoided. You should be choosing your clients. It is very powerful when you can afford to tell someone, 'I don't need your business.'
This is the type of business that's going to be hit hard by AI. And the businesses that survive will be the ones that integrate AI most successfully. It's an enabler, a multiplier. It's just another tool, and those wielding the tools best tend to do well.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
What parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.
You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.
>So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms of using AI are not that weighty -- neither to the vast majority of consumers nor to the government. There are a few people who might exaggerate these moral issues out of self-interest, but they won't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You could start by explaining which specific moral value of yours goes against AI use. It might bring clarity to whether these values are that important to begin with.
Is that the promise of the faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
While humans have historically only mildly reduced their working time, down to today's 40-hour workweek, their consumption has gone up enormously, and whole new categories of consumption have opened up. So my prediction is that while you'll never live in a 900,000 sq ft apartment (unless we get O'Neill cylinders from our budding space industry), you'll probably consume a lot more, while still working a full week.
We could probably argue to the end of time about the relative quality of life between then and now. In general, the ratio of consumption to time spent obtaining that consumption has gotten better over time.
I don't want to "consume a lot more". I want to work less, and for the work I do to be valuable, and to be able to spend my remaining time on other valuable things.
So you are agreeing with the parent? If consumption has gone up a lot and input hours has gone down or stayed flat, that means you are able to work less.
But that's not what they said, they said they want to work less. As the GP post said, they'd still be working a full week.
I do think this is an interesting point. The trend for most of history seems to have been vastly increasing consumption/luxury while work hours somewhat decrease. But have we reached the point where that's not what people want? I'd wager most people in rich developed countries don't particularly want more clothes, gadgets, cars, or fast food. If they can get the current typical middle class share of those things (which to be fair is a big share, and not environmentally sustainable), along with a modest place to live, they (we) mainly want to work less.
AI is just a tool, like most other technologies, it can be used for good and bad.
Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?
If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you have laws to ban it, other countries will allow it.
I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)
> AI is just a tool, like most other technologies, it can be used for good and bad.
The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).
I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.
However, like social media before it, I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects, whether we individually choose to participate or not. Practically speaking, those of us who still need to participate in society and commerce are going to have to deal with it -- though that doesn't mean we have to be happy about it.
It’s completely reasonable to take a moral stance that you’d rather see your business fail and shut down than do X, even if X is lucrative.
But don’t expect the market to care. Don’t write a blog post whining about your morals, when the market is telling you loud and clear they want X. The market doesn’t give a shit about your idiosyncratic moral stance.
Edit: I’m not arguing that people shouldn’t take a moral stance, even a costly one, but it makes for a really poor sales pitch. In my experience this kind of desperate post will hurt business more than help it. If people don’t want what you’re selling, find something else to sell.
> when the market is telling you loud and clear they want X
Does it, though? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different, we wouldn't be talking about the "AI bubble" after all.
2) If you had read the paper, you wouldn't use it as an example here.
A good-faith discussion of what the market feels about LLMs would include Gemini and ChatGPT usage numbers, and the overall market cap of these companies -- not cherry-picked, misunderstood articles.
No, I picked those specifically. When Pets.com[1] went down in early 2000, it was neither the idea nor the tech stack that brought the company down; it was the speculative business dynamics that caused its collapse. The fact that we've swapped the technology underneath doesn't mean we're not basically falling into ".com Bubble - Remastered HD Edition".
I bet a few Pets.com execs were also wondering why people weren't impressed with their website.
Do you actually want to get into the details of how frequently markets get things right vs. get things wrong? It would make the priors a bit more lucid, so we can be on the same page.
"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming,”
ChatGPT can listen to you in real time, understands multiply languages very well and responds in a very natural way. This is breath taking and not on the horizon just a few years ago.
AI Transcription of Videos is now a really cool and helpful feature in MS Teams.
Segment Anything literally leapfrogged progress on image segmentation.
You can generate any image you want in high quality in just a few seconds.
There are already human beings who are shittier at their daily jobs than an LLM is.
Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.
So is it rational for a web design company to take a moral stance that they won't use JavaScript?
Is there a market for that, with enough clients who want their JavaScript-free work?
Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?
This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.
What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself as if it is everything that is evil and to be on the right side of things, one must fight against AI.
This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.
>The only thing people don’t give a shit about is your callous and nihilistic dismissal.
This was you interpreting what the parent post was saying. I'm similarly providing a value judgement that you are doing a maximalist AI dismissal. We are not that different.
> 1B people in the world smoke. The fact something is wildly popular doesn’t make it good or valuable. Human brains are very easily manipulated, that should be obvious at this point.
You should be. You should be equally suspicious of everything. That's the whole point. You wrote:
> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.
Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, travel, but you can just as easily ask the same question for gambling or heavy drugs use.
Just saying "I defer to their judgment" is a cop-out.
> The point is that people FEEL they benefit. THAT’S the market for many things.
I don't disagree, but this also doesn't mean that those things are intrinsically good and that we should all pursue them because that's what the market wants. And that was what I was pushing against: this idea that since 800M people are using GPT, we should all be okay doing AI work because that's what the market is demanding.
As someone else said, we don't know for sure. But it's not like there aren't some at-least-kinda-plausible candidate harms. Here are a few off the top of my head.
(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)
People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.
Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.
People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.
Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.
All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)
I don't agree with most of these points, I think the points about atrophy, trust, etc will have a brief period of adjustment, and then we'll manage. For atrophy, specifically, the world didn't end when our math skills atrophied with calculators, it won't end with LLMs, and maybe we'll learn things much more easily now.
I do agree about ads, it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part, we already have ways of dealing with monopolies.
In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.
We don't know yet? And that's how things usually go. It's rare to have an immediate sense of how something might be harmful 5, 10, or 50 years in the future. Social media was likely considered all fun and good in 2005 and I doubt people were envisioning all the harmful consequences.
Yet social media started as individualized “web pages” and journals on myspace. It was a natural outgrowth of the internet at the time, a way for your average person to put a little content on the interwebules.
What became toxic was, arguably, the way in which it was monetized and never really regulated.
I don't disagree with your point and the thing you're saying doesn't contradict the point I was making. The reason why it became toxic is not relevant. The fact that wasn't predicted 20 years ago is what matters in this context.
I don’t do zero sum games, you can normalize every bad thing that ever happened with that rhetoric.
Also, someone benefiting from something doesn’t make it good. Weapons smuggling is also extremely beneficial to the people involved.
Yes, but if I go with your priors, then all of these are similarly suspect:
- gaming
- netflix
- television
- social media
- hacker news
- music in general
- carnivals
A priori, all of these are equally suspicious as to whether they provide value or not.
My point is that unless you have specific reason for suspicion, people engaging in consumption through their own agency is in general preferable. You can of course bring counterexamples, but they are more like caveats to my larger, truer point.
Social media for sure and television and Netflix in general absolutely.
But again, providing value is not the same as something being good. A lot of people consider inaccurate LLM output to be of high value because it comes in nice wrapping, along with the idea that you're always right.
This line of thinking led many Germans, who thought they were on the right side of history simply by virtue of joining the crowd, to learn the hard way in 1945.
And today's 'adapt or die' doesn't sound any less fascist than it did in 1930.
You mean, when evaluating suppliers, do I push for those who don't use AI?
Yes.
I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in an authoritative tone for others, and you redefine what "matters" and what is worthy of conversation as if this is up to you.
> Don’t write a blog post whining about your morals,
why on earth not?
I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?
That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.
This seems rather black and white.
Defining the morals probably makes sense, then evaluating whether they can be lived by, or whether we can compromise in the face of other priorities?
I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.
The latest DeepSeek and Kimi open weight models are competitive with GPT-5.
If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.
Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.
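To make that concrete, here's a minimal sketch, assuming a rented GPU box running something like vLLM, which serves open-weight models behind the usual OpenAI-compatible HTTP API (the URL and model name below are placeholders, not a specific recommendation):

    // Minimal sketch (Node 18+, run as an ES module): call a self-hosted
    // open-weight model through an OpenAI-compatible chat endpoint.
    const res = await fetch("http://localhost:8000/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "deepseek-ai/DeepSeek-V3",
        messages: [{ role: "user", content: "Explain this stack trace." }],
      }),
    });
    const data = await res.json();
    console.log(data.choices[0].message.content);

A lot of tooling that already speaks the OpenAI API can be repointed by swapping its base URL, which is why the disruption would be surprisingly small.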
I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.
However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.
Only in exactly the same sense that portrait painters were robbed of their income by the invention of photography. In the end people adapted and some people still paint. Just not a whole lot of portraits. Because people now take selfies.
Authors still get recognition -- if they are decent authors producing original, literary work. But the type of author that fills page five of your local newspaper has not been valued for decades. That was filler content long before AI showed up. Same for the people who do the subtitles on soap operas, and the people who create the commercials that show at 4am on your TV. All fair game for AI.
It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.
People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.
You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.
Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?
But DID the Luddites overreact?
They sought to have machines serve people instead of the other way around.
If they had succeeded in regulation over machines and seeing wealth back into the average factory worker’s hands, of artisans integrated into the workforce instead of shut out, would so much of the bloodshed and mayhem to form unions and regulations have been needed?
Broadly, it seems to me that most technological change could use some consideration of people.
It's also important that most AI-generated content is slop. On this website, most people stand against AI-generated writing slop. Also, trust me, you don't want a world where most music is AI-generated; it will drive you crazy. So it's not like photography and painting -- it's comparing good content with shitty content.
It's not "exactly the same sense". If an AI-generated website is based on a real website, it's not like photography and painting; it's the same craft being compared.
Photography takes pictures of objects, not of paintings. By shifting the frame to "robbed of their income", you completely miss the point of the criticism you're responding to… but I suspect that's deliberate.
Robbing implies theft. The word heist was used here to imply that some crime is happening. I don't think there is such a crime and disagree with the framing. Which is what this is, and which is also very deliberate. Luddites used a similar kind of framing to justify their actions back in the day. Which is why I'm using it as an analogy. I believe a lot of the anti AI sentiment is rooted in very similar sentiments.
I'm not missing the point but making one. Clearly it's a sensitive topic to a lot of people here.
I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it til you make it" by asking an LLM every time you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.
Do you remember the times when "cargo cult programming" was something negative? Now we're all writing incantations to the great AI, hoping that it will drop a useful nugget of knowledge in our lap...
Hot takes from 2023, great. Work with AIs has changed since then, maybe catch up? Look up how agentic systems work, how to keep them on task, how they can validate their work etc. Or don't.
It's not as simple as putting all programmers into one category. There can be oversupply of web developers but at the same time undersupply of COBOL developers. If you are a very good developer, you will always be in demand.
> If you are a very good developer, you will always be in demand.
"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.
Don't get me wrong: I've watched a decade of promises that "self driving cars are coming real soon now honest", latest news about Tesla's is that it can't cope with leaves; I certainly *hope* that a decade from now will still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.
“certain areas” is a very important qualifier, though. Typically areas with very predictable weather. Not discounting the achievement just noting that we’re still far away from ubiquity.
The defense industry in southern California used to be huge until the 1980s. Lots and lots of ex-defense industry people moved to other industries. Oil and gas has gone through huge economic cycles of massive investment and massive cut-backs.
In the .com implosion, tech jobs of all kinds went from "we'll hire anyone who knows how to use a mouse" to the tech-jobs section of the classifieds being omitted entirely for 20 months. There have been other bumps in the road since then, but that was a real eye-opener.
Well, it's the same as covid, right? Digital/tech companies overhired because everyone was at home, and at the same time the rise of AI reduced headcount.
Covid overhiring + AI usage = the most massive layoffs we've seen in decades.
It was nothing like covid. The dot com crash lasted years where tech was a dead sector. Equity valuations kept declining year after year. People couldn't find jobs in tech at all.
There are still plenty of tech jobs these days, just less than there were during covid, but tech itself is still in a massive expansionary cycle. We'll see how the AI bubble lasts, and what the fallout of it bursting will be.
The key point is that the going is still exceptionally good. The posts talking about experienced programmers having to flip burgers in the early 2000s is not an exaggeration.
Some people will lose their homes. Some marriages will fail from the stress. Some people will chose to exit life because of it all.
It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.
That's where the post-scarcity society AI will enable comes in! Surely the profits from this technology will allow these displaced programmers to still live comfortable lives, not just be hoarded by a tiny number of already rich and powerful people. /s
I don't get these comments. I'm not here to shill for SO, but it is a damn good website, if only for the archive. Can't remember how to iterate over the entries in a JavaScript dictionary (object)? SO can tell you, usually much better than W3Schools (which attracts so much scorn) can. (I love that site: so simple for the simple stuff!)
When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results from Google. And I swear that Google AI just regurgitates most of SO these days for simple questions.
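(For what it's worth, that particular answer fits in a couple of lines of modern JavaScript:

    // Iterate over the entries of a plain object ("dictionary").
    const config = { host: "localhost", port: 8080 };
    for (const [key, value] of Object.entries(config)) {
      console.log(`${key}: ${value}`);
    }

The top SO answers for that question tend to show exactly this Object.entries pattern.)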
I think that's OP's point, though: AI can do it better now.
No searching, no looking. Just drop your question into AI with your exact data or function, and 10 seconds later you have a working solution. Stack Overflow is great, but AI is just better for most people.
Instead of running a Google query or searching on Stack Overflow, you just need ChatGPT, Claude, or your AI of choice open in a browser. Copy and paste.
I've honestly never intentionally visited it (as in, went to the root page and started following links) -- it was just where Google sent me when searching for answers to specific technical questions.
Often the answer to the question was simply wrong, as it answered a different question that nobody asked. A lot of the time you had to follow a maze of links to related questions, which might have an answer or might lead to yet another question. The languages where it was most useful (due to bad ecosystem documentation) evolved at a rate far faster than SO could update its answers, so most of the answers for those were outdated...
There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.
The gatekeeping, gaming of the system, capricious moderation (e.g. flagged as duplicate), and general attitude led it to be quite an insufferable part of the internet. There was a meme that the best way to get a response was to answer your own question in an obviously incorrect fashion, because people want to tell you why you're wrong rather than actively help.
In contrast to others, I just want to say that I applaud the decision to take a moral stance against AI, and I wish more people would do that. Saying "well you have to follow the market" is such a cravenly amoral perspective.
I still don’t blame anyone for trying to chart a different course though. It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
The ideal version of my job would be partnering with all the local businesses around me that I know and love, elevating their online facilities to let all of us thrive. But the money simply isn’t there. Instead their profits and my happiness are funnelled through corporate behemoths. I’ll applaud anyone who is willing to step outside of that.
> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead? It's how Rome bent its conquests to its will and we've been doing it ever since.
It's a deeply broken system but I think that acknowledging it as such is the first step towards replacing it with something less broken.
> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
Of course. If you want the world to go back to how it was before, you’re going to be very depressed in any business.
That’s why I said your only real options are going with the market or finding a different line of work. Technically there’s a third option where you stay put and watch bank accounts decline until you’re forced to choose one of the first two options, but it’s never as satisfying in retrospect as you imagined that small act of protest would have been.
No, of course you don't have to – but don't torture yourself. If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.
> If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best not to train it. I left GitHub & Reddit because of this, and I'm not uploading new photos to Instagram. The jury is still out on how I'm going to share my photography, and not sharing it is on the table as well.
I may even move to a cathedral model or just stop sharing the software I write with the general world, too.
Nobody has to bend and act against their values and conscience just because others are doing it, and the system is demanding to betray ourselves for its own benefit.
Before that AI craze, I liked the idea of having a CC BY-NC-ND[0] public gallery to show what I took. I was not after any likes or anything. If I got professional feedback, that'd be a bonus. I even allowed EXIF-intact high resolution versions to be downloaded.
Now, I'll probably install a gallery webapp to my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space, as well.
Yeah, but the business seems to be education for web front end. If you are going to shun new tech, you should really return to the printing press, or better, copying scribes. If you are going to do modern tech, you kind of need to stick with the most modern tech.
"Printing press and copying scribes" is a sarcastic comment, but these web designers are still actively working, and their industry is hundreds of years from the state of those old techs. The joke isn't funny enough, nor is the analogy apt enough, to make sense.
No, it is a pretty good comparison. There is absolutely AI slop, but you have to be sticking your head in the sand if you think AI will not continue to shape this industry. If you are selling learning courses and are sticking your head in the sand, well, that's pretty questionable.
I understand this stance, but I'd personally differentiate between taking the moral stand as a consumer, where you actively become part of the growth in demand that fuels further investment, and as a contractor, where you're a temporary cost -- especially if you, and the people who depend on you, need it to survive.
A studio taking on temporary projects isn't investing in AI -- they're not getting paid in stock. This is effectively no different from a construction company building an office building, or a bakery baking a cake.
As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law -- in this case copyright law, for example. AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO, over-focusing on small moral crusades against specific players like this, and not the game as a whole, is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage, constantly second-guessing themselves.
<< over-focusing on small moral crusades against specific players like this and not the game as a whole
Fucking this. What I tend to see is petty 'my guy good, not my guy bad' approach. All I want is even enforcement of existing rules on everyone. As it stands, to your point, only the least moral ship, because they don't even consider hesitating.
It's cravenly amoral until your children are hungry. The market doesn't care about your morals. You either have a product people are willing to pay money for or you don't. If you are financially independent to the point it doesn't matter to you, then by all means, do what you want. The vast majority of people are not.
I'm not sure I understand this view. Did seamstresses see sewing machines as amoral? Or carpenters with electric and air drills and saws?
AI is another set of tooling. It can be used well or not, but arguing the morality of a tooling type (e.g. drills) vs. a specific company (e.g. Ryobi) seems an odd take to me.
Nobody is against his moral stance. The problem is that he's playing the "principled stand" game on a budget that cannot sustain it, then externalizing the cost like a victim. If you're a millionaire and can hold whatever moral line you want without ever worrying about rent, food, healthcare, kids, etc., then "selling out" is optional and bad. If you're Joe Schmoe with a mortgage and 5 months of emergency savings, and you refuse the main kind of work people want to pay you for (which is not even that controversial), you're not some noble hero; you're just blowing up your life.
> he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim
No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.
The CEO of every one of those AI companies drives an expensive car home to a mansion at the end of the workday. They are set. The average person does not, and they cannot afford to play the principled-stand game. It's not a question of right or wrong for most; it's a question of putting food on the table.
I find what you are saying, and what they are saying, very generic.
What stance against AI? Image generation is not the same as code generation.
There are so many open source projects out there; it's a huge difference from taking all the images.
AI is also just ML, so should I not use an image bounding-box algorithm? Am I not allowed to take training data from online, or are only big companies not allowed to?
Being broadly against AI is a strange stance. Should we all turn off swipe to type on our phones? Are we supposed to boycott cancer testing? Are we to forbid people with disabilities reading voicemail transcriptions or using text to speech? Make it make sense.
There's a moral line that every person has to draw about what work they're willing to do. Things aren't always so black and white; we straddle that line. The impression I got reading the article is that they didn't want to work for bubble AI companies generating for the sake of generating -- not that they hated anything with a vector DB.
Intentionally or not, you are presenting a false equivalency.
I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.
How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around of quite some time, will continue to be, and even that much of that will be with LLMs.
You have reasonably available context here. "This year" seems more than enough on its own.
I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kind of awesome. As it is, they are basically just theft from the public and IP laundering.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff
If all of "AI stuff" is a "no" for you, then I think you just signed out off working in most industries to some important degree going forward.
This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of new capital turns up. What's the idea? You are a service provider, you are not a market maker. If you refuse service with the market that exists, you don't have a market.
Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.
> If all of "AI stuff" is a "no" for you, then I think you just signed out off working in most industries to some important degree going forward.
I'm not sure the penetration of AI, especially to a degree where participants must use it, is all that permanent in many of these industries. Already the industry where it is arguably the most "present" (forced in) is SWE, and it's proving to be quite disappointing... Where I work, the more senior you are, the less AI you use.
Pretty sure HN has become completely detached from the market at this point.
Demand for AI anything is incredibly high right now. AI providers are constantly bouncing off capacity limits. AI apps in app stores are pulling incredible download numbers.
Sora's app has a 4.8 rating on the App Store with 142K ratings. It seems to me that the market does not care about slop or not, whether I like it or not.
The market wants a lot more high quality AI slop and that's going to be the case perpetually for the rest of the time that humanity exists. We are not going back.
The only thing that's going to change is the quality of the slop will get better by the year.
I don't think they're unique. They're simply among the first to run into the problems AI creates.
Any white-collar field, high-skill or not, that can be solved logically will eventually face the same pressure. The deeper issue is that society still has no coherent response to a structural problem: skills that take 10+ years to master can now be copied by an AI almost overnight.
People talk about "reskilling" and "personal responsibility," but those terms hide the fact that surviving the AI era doesn't just mean learning to use AI tools in your current job. It's not that simple.
I don't have a definitive answer either. I'm just trying, every day, to use AI in my work well enough to stay ahead of the wave.
Ceaseless AI drama aside, this blog and the Set Studio website really hit the sweet spot of good-looking, fast, and well-organized for me. It's been a while since I've felt that about a website.
I hope things turn around for them; it seems like they do great work.
I feel like this person might be just a few bad months ahead of me. I am doing great, but the writing is on the wall for my industry.
We should have more posts like this. It should be okay to be worried, to admit that we are having difficulties. It might reach someone else who otherwise feels alone in a sea of successful hustlers. It might also just get someone the help they need or form a community around solving the problem.
I also appreciate their resolve. We rarely hear from people being uncompromising on principles that have a clear price. Some people would rather ride their business into the ground than sell out. I say I would, but I don’t know if I would really have the guts.
Sorry for them -- after I got laid off in 2023, I had a devil of a time finding work, to the point that my unemployment ran out. 20 years as a dev, tech lead, and full stack, including stints as an EM and CTO.
Since then I pivoted to AI and Gen AI startups -- money is tight and I don't have health insurance, but at least I have a job…
> 20 years as a dev, tech lead, and full stack, including stints as an EM and CTO
> Since then I pivoted to AI and Gen AI startups -- money is tight and I don't have health insurance, but at least I have a job…
I hope this doesn't come across as rude, but why? My understanding is American tech pays very well, especially on the executive level. I understand for some odd reason your country is against public healthcare, but surely a year of big tech money is enough to pay for decades of private health insurance?
Not parent commenter, but in the US when someone’s employment doesn’t include health insurance it’s commonly because they’re operating as a contractor for that company.
Generally you’re right, though. Working in tech, especially AI companies, would be expected to provide ample money for buying health insurance on your own. I know some people who choose not to buy their own and prefer to self-pay and hope they never need anything serious, which is obviously a risk.
A side note: The US actually does have public health care but eligibility is limited. Over one quarter of US people are on Medicaid and another 20% are on Medicare (program for older people). Private self-pay insurance is also subsidized on a sliding scale based on your income, with subsidies phasing out around $120K annual income for a family of four.
It’s not equivalent to universal public health care but it’s also different than what a lot of people (Americans included) have come to think.
Yeah. It is much harder now than it used to be. I know a couple of people who came from the US ~15 to 10 years ago and they had it easy. It was still a nightmare with banks that don’t want to deal with US citizens, though.
As Americans, getting a long-term visa or residency card is not too hard, provided you have a good job. It’s getting the job that’s become more difficult. For other nationalities, it can range from very easy to very hard.
Yeah it depends on which countries you're interested in. Netherlands, Ireland, and the Scandinavian ones are on the easier side as they don't require language fluency to get (dev) jobs, and their languages aren't too hard to learn either.
I made a career out of understanding this. In Germany it’s quite feasible. The only challenge is finding affordable housing, just like elsewhere. The other challenge is the speed of the process, but some cities are getting better, including Berlin. Language is a bigger issue in the current job market though.
Counter: come to Taiwan! Anyone with a semi-active GitHub can get a Gold Card visa. Six months in, you're eligible for national health insurance (about $30 USD/month). The cost of living is extremely low here.
However, salaries are atrocious, and local jobs aren't really available to non-Mandarin speakers. But if you're looking to kick off your remote consulting career or bootstrap some product you want to build, there's not really anywhere on earth that combines the quality of life with the cost of living like Taiwan does.
Taking a 75% pay cut for free healthcare that costs $1K a month anyway doesn't math. Not to mention the higher taxes for this privilege. European senior developers routinely get paid less than US junior developers.
I want to sympathize but enforcing a moral blockade on the "vast majority" of inbound inquiries is a self-inflicted wound, not a business failure. This guy is hardly a victim when the bottleneck is explicitly his own refusal to adapt.
It's unfair to place all the blame on the individual.
By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.
But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.
If the alternative to 'selling out' is making your business unviable and having to (essentially) beg the internet for handouts, then yes, you should "sell out" every time.
Thank you. I would imagine the entire Fortune 500 list passes the line of "evil", drawing that line at AI is weird. I assume it's a mask for fear people have of their industry becoming redundant, rather than a real morality argument.
"AI products" that are being built today are amoral, even by capitalism's standards, let alone by good business or environmental standards. Accepting a job to build another LLM-selling product would be soul-crushing to me, and I would consider it as participating in propping up a bubble economy.
Taking a stance against it is a perfectly valid thing to do, and by disclosing it plainly the author is not claiming to be a victim through no doing of their own. By not seeing past that caveat and missing the whole point of the article, you've successfully averted your eyes from another thing that is unfolding right in front of us: a majority of American GDP is AI this or that, and a majority of it has no real substance behind it.
I too think AI is a bubble, and besides the way this recklessness could crash the US economy, there's many other points of criticism to what and how AI is being developed.
But I also understand this is a design and web development company. They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons. They're refusing product marketing contracts; advertising websites, essentially.
This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them. I'll respect the decision, sure, but it very much is an inconsequential self-inflicted wound. It's more amoral to fully pay your federal taxes if you live in the USA for example, considering a good chunk are ultimately used for war, the CIA, NSA, etc, but nobody judges an average US-resident for paying them.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I started TextQuery[1] with the same moralistic stance. Not with respect to using AI or not, but that most of the software industry suffers from a rot that places more importance on making money and forcing subscriptions than on making something beautiful and detail-focused. I poured time into optimizing selections, perfecting autocomplete, and wrestling with Monaco's thin documentation. However, I failed to make it a sustainable business. My motivation ran out. And what I thought would be a fun multi-year journey collapsed into burnout and a dead-end project.
I have to say my time would have been better spent building something sustainable, making more money, and optimizing the details once I had that. It was naïve to obsess over subtleties that only a handful of users would ever notice.
There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.
His business seems to be centered around UI design and front-end development and unfortunately this is one of the things that AI can do decently well. The end result is worse than a proper design but from my experience people don't really care about small details in most cases.
I don’t doubt it at all, but CSS and HTML are also about as commodity as it gets when it comes to development. I’ve never encountered a situation where a company is stuck for months on a difficult CSS problem and felt like we needed to call in a CSS expert, unlike most other specialty niches where top tier consulting services can provide a huge helpful push.
HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.
How do you measure "absolute top tier" in CSS and HTML? Honest question. Can he create code for difficult-to-code designs? Can he solve technical problems few can solve in, say, CSS build pipelines or rendering performance issues in complex animations? I never had an HTML/CSS issue that couldn't be addressed by just reading the MDN docs or Can I Use, so maybe I've missed some complexity along the way.
Being absolute top tier at what has become a commodity skillset that can be done “good enough” by AI for pennies for 99.9999% of customers is not a good place to be…
> When 99.99% of the customers have garbage as a website
When you think 99.99% of company websites are garbage, it might be your rating scale that is broken.
This reminds me of all the people who rage at Amazon’s web design without realizing that it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
A lesson many developers have to learn is that code quality / purity of engineering is not a thing that really moves the needle for 90% of companies.
Having the most well-tested backend and a beautiful frontend that works across all browsers and devices, not just the main three browsers your customers use, isn't what pays the bills.
Wishing these guys all the best. It's not just about following the market; it's about the ability to just be yourself. Everyone around you is telling you that you simply have to start doing something, and it's not even about the moral side of that thing: you just don't want to do it. Yeah, yeah, it's a cruel world. But that doesn't mean we need to victim-blame everyone who doesn't feel comfortable in this trendy stream.
I hope things with AI will settle down soon, that applications which actually make sense will emerge, and that some sort of new balance will be established. Right now it's a nightmare. Everyone wants everything with AI.
All the _investors_ want everything with AI. Lots of people - non-tech workers even - just want a product that works and often doesn't work differently than it did last year. That goal is often at odds with the ai-everywhere approach du jour.
My post had the privilege of being on the front page for a few minutes. I got some very fair criticism, because it wasn't really a solid article and was written while traveling on a train, when I was already tired and hungry. I don't think I was thinking rationally.
I'd much rather see these kind of posts on the front page. They're well thought-out and I appreciate the honesty.
I think that when you're busy following the market, you lose what works for you. For example, most business communication happens through push-based traffic: you get assigned work and you have X time to solve all of it. If you don't, we'll have some extremely tedious reflection meeting that leads nowhere. Why not do pull-based work, where you get done what you get done?
Is the issue here that customers aren't informed about when a feature is implemented? Because the alternative is promising date X and delaying it three times because customer B is more important.
I had a discussion yesterday with someone who owns a company creating PowerPoints for customers. As you might imagine, that is also a business set to be hit hard by AI. What he does is offer an AI entry-level option, where the questions he asks the customer (via a form) feed a script for running AI. With that he is able to combine his expertise with the market's demand for AI, and profit from it.
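As a rough sketch of how such an intake flow might look -- all the field names and prompt wording below are hypothetical, not how he actually built it:

    // Hypothetical brief collected from the customer via a form.
    interface DeckBrief {
      company: string;
      audience: string;
      keyMessage: string;
      slideCount: number;
    }

    // The expert's knowledge lives in the prompt template, so every form
    // submission becomes a consistent, reviewable instruction for the model.
    function buildDeckPrompt(brief: DeckBrief): string {
      return [
        `Draft a ${brief.slideCount}-slide presentation outline.`,
        `Client: ${brief.company}. Audience: ${brief.audience}.`,
        `Key message every slide must support: ${brief.keyMessage}.`,
        `Return one short title and two bullet points per slide.`,
      ].join("\n");
    }

The interesting part is that the form encodes the questions he would have asked in a kickoff meeting anyway; the AI only fills in the production work.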
> ... we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.
If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.
At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.
I looked at the sites linked from the article. Nice work. Even so, I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML, CSS, and JavaScript, beautiful designs, and user-oriented interfaces. Sadly, most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or how it enables beautiful designs and UI. Before LLMs could crank out web sites, we had an army of amateur designers and business owners doing it with WordPress, paying $10/hr or less on Upwork and Fiverr.
I simply have a hard time following the refusal to work on anything AI related. There is AI slop, but there are also a lot of interesting value-add products and features for existing products. It makes sense to be thoughtful about what to work on, but I struggle with the blanket no to AI.
On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
That might well be the current 'market' for SWE labor though. I totally agree it's a silly bubble but I'm not looking forward to the state of things when it pops.
> On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
Careful now, if they get their way, they’ll be both the market and the government.
It's very funny reading this thread and seeing the exact same arguments I saw five years ago for the NFT market and the metaverse.
All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.
Tough crowd here. Though to be expected - I'm sure a lot of people have a fair bit of cash directly or indirectly invested in AI. Or their employer does ;)
We Brits simply don't have the same American attitude towards business. A lot of Americans simply can't understand that chasing riches at any cost is not a particularly European trait. (We understand how things are in the US. It's not a matter of just needing to "get it" and seeing the light)
Corrected title: "we have inflicted a very hard year on ourselves with malice aforethought".
The equivalent of that comic where the cyclist intentionally spoke-jams themselves and then acts surprised when they hit the dirt.
But since the author puts moral high horse jockeying above money, they've gotten what they paid for - an opportunity to pretend they're a victim and morally righteous.
>especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that.
I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt
Yes, I find this a bit odd. AI is a tool; what specific part of it do you find so objectionable, OP? For me: I know they are never going to put the genie back in the bottle, and we will never get back the electricity spent on it, so I might as well use it. We finally got a pretty good Multivac we can talk to, and for me it usually gives the right answers back. It is a once-in-a-lifetime type of invention we get to enjoy and use. I was king of the AI haters, but around Gemini 2.5 it just became so good that if you are hating it or criticizing it, you aren't looking at it objectively anymore.
Some folks have moral concerns about AI. They include:
* The environmental cost of inference in aggregate, and of training in particular, is non-negligible.
* Training is performed (it is assumed) on material whose creators did not consent to it being trained upon. Some consider this akin to plagiarism or even theft.
* AI displaces labor, weakening workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.
* The primary companies selling AI products have, at times, controversial pasts or leaders.
* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shove AI everywhere, cheapening products across a range of industries.
* The social impacts of AI, particularly generative media and shopping on platforms like YouTube, Amazon, Twitter, Facebook, etc., are not well understood and could contribute to increased radicalization and Balkanization.
* AI is enabling an attention Gish gallop in places like search engines, where good results are being shoved out by slop.
Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)
Let's put aside the fact that the person you replied to was trying to represent a diversity of views and not attribute them all to one individual, including the author of the article.
Should people not look for reasons to be concerned?
I'm not sure it's helpful to accuse "them" of bad faith, when "them" hasn't been defined and the post in question is a summary of reasons many individual people have expressed over time.
I'm fairly sure the first three points are all true for each new human produced. The environmental cost versus output is probably significantly higher per human, and the population continues to grow.
My experience with large companies (especially American tech) is that they always try to deliver the product as cheaply as possible, are usually evil, and never cared about social impacts. And HN has been steadily complaining about the declining quality of search results for at least a decade.
I think your points are probably a fair snapshot of people's moral issues, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable-sounding morality point so they don't have to admit that.
Plenty of people have moral concerns with having children too.
And while some might be doing what you say, others might genuinely have a moral threshold they are unwilling to cross. Who am I to tell someone they don't actually have a genuinely held belief?
"Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that"
The market is literally telling them what it wants, and potential customers are asking them for work, but they are declining it from "a moral standpoint"
and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis".
This is a failure of leadership at the company. Adapt or die; your bank account doesn't care about your moral red lines.
> same thing would happen with AI generated website
Probably even more so. I've seen the shit these things put out; it's unsustainable garbage. At least WordPress sites have a similar starting point. I think the main issue is that the "fixing AI slop" industry will take a few years to blossom.
Man, I definitely feel this, being in the international trade business operating an export contract manufacturing company from China with USA-based customers. I can't think of many shittier businesses to be in this year, lol. Actually it's been pretty difficult for about 8 years now, given trade war stuff actually started in 2017; then we had to survive covid, and now trade war two. It's a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base suffering trade-war and tariff pains. Cash is just hard to come by this year. We've pivoted to focus more on design engineering services these past eight years, and that's been enough to keep the lights on, but it's hard to scale; it is a bandwidth-constrained business that can only take a few projects at a time. Good luck to OP navigating it.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.
imo LLMs are (currently) good at 3 things:
1. Education
2. Structuring unstructured data
3. Turning natural language into code
From this viewpoint, it seems there is a lot of opportunity both to help new clients and to create more compelling courses for your students.
No need to buy the hype, but no reason to die from it either.
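Point 2 is the one I'd point a studio at first. A minimal sketch -- the callLLM helper below is hypothetical, standing in for whichever API you actually use:

    // Hypothetical helper: sends a prompt to some LLM API, returns its text.
    declare function callLLM(prompt: string): Promise<string>;

    interface Enquiry {
      clientName: string;
      budgetUSD: number | null;
      service: "design" | "development" | "other";
    }

    // Structuring unstructured data: free-form enquiry emails in, typed
    // records out. Parsing can still fail, so callers should validate
    // rather than trust the model's JSON blindly.
    async function structureEnquiry(email: string): Promise<Enquiry> {
      const raw = await callLLM(
        "Return only JSON with fields clientName (string), budgetUSD " +
        '(number or null), service ("design" | "development" | "other") ' +
        "for this email:\n" + email,
      );
      return JSON.parse(raw) as Enquiry;
    }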
Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.
Really depends what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.
Indeed, facts are part of the moral discussion in ways you outlined. My objection was that just listing some facts/opinions about what AI can do right now is not enough for that discussion.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
> because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
But isn't that exactly what the is-ought problem manifests? If morals are "oughts", then oughts are goal-dependent, i.e. they depend on personally defined goals. To you it's scary; to others it is the way it should be.
Interesting. I agree that this has been a hard year, the hardest in a decade. But the comparison with 2020 is just surprising. I mean, in 2020 crazy amounts of money were just thrown around left and right, no? For me, it was the easiest year of my career, when I basically did nothing and picked up money thrown at me.
Too much demand, all of a sudden. Money got printed and I went from near bankruptcy in mid-Feb 2020 to being awash with money by mid-June.
And it continued growing nonstop all the way through ~early Sep 2024, and it has been slowing down ever since, by now coming to an almost complete stop -- to the point that I even fired all my sales staff, because they spent half a year treading water with not even calls, let alone deals, before being dismissed in mid-July this year.
I think it won't return -- custom dev is done. The myth of "hiring coders to get rich" is over. No surprise it died, because it never worked; sooner or later people had to realise it. I may check again in 2-3 years how the market is doing, but I'm not at all hopeful.
"especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that."
You will continue to lose business, if you ignore all the 'AI stuff'. AI is here to stay, and putting your head in the sand will only leave you further behind.
I've known people over the years that took stands on various things like JavaScript frameworks becoming popular (and they refused to use them) and the end result was less work and eventually being pushed out of the industry.
It’s ironic that Andy calls himself “ruthlessly pragmatic”, but his business is failing because of a principled stand in turning down a high volume of inbound requests. After reading a few of his views on AI, it seems pretty clear to me that his objections are not based in a pragmatic view that AI is ineffective (though he claims this), but rather an ideological view that they should not be used.
Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.
Yeah. For all the excesses of the current AI craze there's a lot of real meat to it that will obviously survive the hype cycle.
User education, for example, can be done in ways that don't even feel like gen AI and that can drastically improve activation, e.g. a recommendation to use feature X based on activity Y, tailored to the user's use case.
If you won't even lean into things like this you're just leaving yourself behind.
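Something like this, say -- the event and feature names are made up for illustration:

    // Hypothetical activity event for a user of some SaaS product.
    interface ActivityEvent {
      userId: string;
      action: string;
      at: Date;
    }

    // A plain trigger, no chat window in sight: if a user keeps exporting
    // CSVs by hand, surface the scheduled-export feature as a tip. An LLM
    // could then tailor the tip's wording to the user's actual data, but
    // the activation logic itself stays simple and predictable.
    function recommendFeature(events: ActivityEvent[]): string | null {
      const manualExports = events.filter(e => e.action === "export_csv");
      return manualExports.length >= 3
        ? "Tip: you can schedule this export to run automatically."
        : null;
    }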
I agree that this year has been extremely difficult, but as far as I know, a large number of companies and individuals still made a fortune.
Two fundamental laws of nature: the strong prey on the weak, and survival of the fittest.
Why is it, then, that those who survive are not the strong preying on the weak, but rather the "fittest"?
Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive this fierce competition.
> especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Sounds like a self inflicted wound. No kids I assume?
They also produce crap once you leave the realm of basic CRUD web apps... Try using them with Microsoft's Business Central bullshit; it does not work well.
Quick note that this has not been my experience. LLMs have been very useful with codebases as far from CRUD web apps as you can get.
I think everyone in the programming education business is feeling the struggle right now. In my opinion this business died 2 years ago – https://swizec.com/blog/the-programming-tutorial-seo-industr...
Not wanting to help the rich get richer means you'll be fighting an uphill battle. The rich typically have more money to spend. And as others have commented, not doing anything AI related in 2025-2026 is going to further limit the business. Good luck though.
Rejecting clients based on how you wish the world would be is a strategy that only works when you don’t care about the money or you have so many clients that you can pick and choose.
Running a services business has always been about being able to identify trends and adapt to market demand. Every small business I know has been adapting to trends or trying to stay ahead of them from the start, from retail to product to service businesses.
Rejecting clients when you have enough is a sound business decision. Some clients are too annoying to serve. Some clients don't want to pay. Sometimes you have more work than you can do... It is easy to think when things are bad that you must take any and all clients (and when things are bad enough you might be forced to), but that is not a good plan and to be avoided. You should be choosing your clients. It is very powerful when you can afford to tell someone I don't need your business.
This is the type of business that's going to be hit hard by AI. And the type of businesses that survive will be the ones that integrate AI into their business the most successfully. It's an enabler, a multiplier. It's just another tool and those wielding the tools the best, tend to do well.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
What parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.
You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.
>So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms about using AI are not that high -- not to the vast majority of consumers, nor to the government. There are a few people who might exaggerate these moral issues out of self-interest, but they won't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You can start by explaining what specific moral value of yours goes against AI use. It might bring clarity on whether these values are that important to begin with.
> If AI can make things 1000x more efficient,
Is that the promise of the faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
They signed it for you; with 1000x fewer workers needed, they didn't need to ask anymore.
While humans have historically only mildly reduced their working time, to today's 40h workweek, their consumption has gone up enormously, and whole new categories of consumption have opened up. So my prediction is that while you'll never live in a 900,000 sq ft apartment (unless we get O'Neill cylinders from our budding space industry), you'll probably consume a lot more, while still working a full week.
40h is probably up from pre-industrial times.
Edit: There is some research covering working-time estimates for different eras.
We could probably argue until the end of time about the qualitative difference in life between then and now. In general, by the metric of consumption and the time spent earning that consumption, things have gotten better over time.
I don't want to "consume a lot more". I want to work less, and for the work I do to be valuable, and to be able to spend my remaining time on other valuable things.
So you are agreeing with the parent? If consumption has gone up a lot and input hours has gone down or stayed flat, that means you are able to work less.
> or stayed flat
But that's not what they said, they said they want to work less. As the GP post said, they'd still be working a full week.
I do think this is an interesting point. The trend for most of history seems to have been vastly increasing consumption/luxury while work hours somewhat decrease. But have we reached the point where that's not what people want? I'd wager most people in rich developed countries don't particularly want more clothes, gadgets, cars, or fast food. If they can get the current typical middle class share of those things (which to be fair is a big share, and not environmentally sustainable), along with a modest place to live, they (we) mainly want to work less.
Not unless rent is cheap, it doesn't. It might mean my boss is able to work less.
AI is just a tool, like most other technologies, it can be used for good and bad.
Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?
If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you have laws to ban it, other countries will allow it.
I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)
> AI is just a tool, like most other technologies, it can be used for good and bad.
The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).
I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.
However, like social media before it, I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not, and practically speaking, those of us who still need to participate in society and commerce are going to have to deal with it -- though that doesn't mean we have to be happy about it.
> The same could be said of social media
Yes, absolutely.
Just because it's monopolized by evil people doesn't mean it's inherently bad. In fact, most people here have seen examples of it done in a good way.
It’s completely reasonable to take a moral stance that you’d rather see your business fail and shut down than do X, even if X is lucrative.
But don’t expect the market to care. Don’t write a blog post whining about your morals, when the market is telling you loud and clear they want X. The market doesn’t give a shit about your idiosyncratic moral stance.
Edit: I’m not arguing that people shouldn’t take a moral stance, even a costly one, but it makes for a really poor sales pitch. In my experience this kind of desperate post will hurt business more than help it. If people don’t want what you’re selling, find something else to sell.
> when the market is telling you loud and clear they want X
Does it though? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different, we wouldn't be talking about an "AI bubble" after all.
[1]https://www.pcmag.com/news/microsoft-exec-asks-why-arent-mor...
[2]https://fortune.com/2025/08/18/mit-report-95-percent-generat...
1) It was a failure of a specific implementation.
2) If you had read the paper, you wouldn't use it as an example here.
A good-faith discussion of what the market feels about LLMs would include Gemini and ChatGPT user numbers, and the overall market cap of these companies -- not cherry-picked, misunderstood articles.
No, I picked those specifically. When Pets.com[1] went down in early 2000, it was neither the idea nor the tech stack that brought the company down; it was the speculative business dynamics that caused its collapse. The fact that we've swapped the technology underneath doesn't mean we're not basically falling into ".com Bubble - Remastered HD Edition".
I bet a few Pets.com exec were also wondering why people weren't impressed with their website.
[1]https://en.wikipedia.org/wiki/Pets.com
Do you actually want to get into the details of how frequently markets get things right vs get things wrong? It would make the priors a bit more lucid so we can be on the same page.
He is right though:
"Jeez there so many cynics! It cracks me up when I hear people call AI underwhelming,”
ChatGPT can listen to you in real time, understands multiple languages very well, and responds in a very natural way. This is breathtaking, and it was not on the horizon just a few years ago.
AI transcription of videos is now a really cool and helpful feature in MS Teams.
Segment Anything literally leapfrogged progress on image segmentation.
You can generate any image you want in high quality in just a few seconds.
There are already human beings who are shittier at their daily jobs than an LLM is.
Exactly. Microsoft for instance got a noticeable backlash for cramming AI everywhere, and their future plans in that direction.
Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.
So is it rational for a web design company to take a moral stance that they won't use JavaScript?
Is there a market for that, with enough clients who want their JavaScript-free work?
Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?
How do you know? I give a shit. A ton of people in this thread give a shit. This blog post is a great way to communicate with others who give a shit.
The only thing people don’t give a shit about is your callous and nihilistic dismissal.
This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.
Which founder is wrong? It's not only the brainwashed here who are entrepreneurs.
What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself, as if it were everything that is evil, and as if, to be on the right side of things, one must fight against AI.
This might sound a bit ridiculous, but I think this is a lot of people's real position on AI.
That's definitely not what I am doing, nor implying, and while you're free to think it, please don't put words in my mouth.
>The only thing people don’t give a shit about is your callous and nihilistic dismissal.
This was you interpreting what the parent post was saying. I'm similarly providing a value judgement that you are doing a maximalist AI dismissal. We are not that different.
We are basically 100-ϵ% the same. I have no doubt.
Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)
In the grand scheme of things, is it even worth mentioning? Probably not! :D :D Why focus on the differences when we can focus on the similarities?
Ok change my qualifier from interpretation to description if it helps. I describe you as someone who dismisses AI in a maximalist way
>Maybe the only difference between us is that I think there is a difference between a description and an interpretation, and you don't :)
>Ok change my qualifier from interpretation to description if it helps.
I... really don't think AI is what's wrong with you.
Yet to see anything good come from it, and I’m not talking about machine learning for specific use cases.
And if we look at the players who are the winners in the AI race, do you see anyone particularly good participating?
> 1B people in the world smoke. The fact something is wildly popular doesn’t make it good or valuable. Human brains are very easily manipulated, that should be obvious at this point.
Almost all smokers agree that it is harmful for them.
Can you explain why I should not be equally suspicious of gaming, social media, movies, carnivals, travel?
You should be. You should be equally suspicious of everything. That's the whole point. You wrote:
> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.
Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, travel, but you can just as easily ask the same question for gambling or heavy drugs use.
Just saying "I defer to their judgment" is a cop-out.
But “good or desirable from a societal standpoint” isn’t what they said, correct me if I’m wrong. They said that people find a benefit.
People find a benefit in smoking: a little kick, they feel cool, it’s a break from work, it’s socializing, maybe they feel rebellious.
The point is that people FEEL they benefit. THAT’S the market for many things. Not everything obv, but plenty of things.
> The point is that people FEEL they benefit. THAT’S the market for many things.
I don't disagree, but this also doesn't mean that those things are intrinsically good and that we should all pursue them because that's what the market wants. And that was what I was pushing against: this idea that since 800M people are using GPT, we should all be OK doing AI work because that's what the market is demanding.
Ok, I'll bite: What's the harm of LLMs?
As someone else said, we don't know for sure. But it's not like there aren't some at-least-kinda-plausible candidate harms. Here are a few off the top of my head.
(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)
People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.
Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.
People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.
Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.
All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)
I don't agree with most of these points, I think the points about atrophy, trust, etc will have a brief period of adjustment, and then we'll manage. For atrophy, specifically, the world didn't end when our math skills atrophied with calculators, it won't end with LLMs, and maybe we'll learn things much more easily now.
I do agree about ads, it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part, we already have ways of dealing with monopolies.
In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.
We don't know yet? And that's how things usually go. It's rare to have an immediate sense of how something might be harmful 5, 10, or 50 years in the future. Social media was likely considered all fun and good in 2005 and I doubt people were envisioning all the harmful consequences.
Yet social media started as individualized “web pages” and journals on myspace. It was a natural outgrowth of the internet at the time, a way for your average person to put a little content on the interwebules.
What became toxic was, arguably, the way in which it was monetized and never really regulated.
I don't disagree with your point, and what you're saying doesn't contradict the point I was making. The reason why it became toxic is not relevant; the fact that it wasn't predicted 20 years ago is what matters in this context.
I don’t do zero sum games, you can normalize every bad thing that ever happened with that rhetoric. Also, someone benefiting from something doesn’t make it good. Weapons smuggling is also extremely beneficial to the people involved.
Yes, but if I go with your priors, then all of these are similarly suspect:
- gaming
- netflix
- television
- social media
- hacker news
- music in general
- carnivals
A priori, all of these are equally suspicious as to whether they provide value or not.
My point is that unless you have reason for suspicion, people engaging in consumption through their own agency is in general preferable. You can of course bring counterexamples, but they are more caveats to my larger, truer point.
Social media for sure, and television and Netflix in general, absolutely. But again, providing value is not the same as something being good. A lot of people consider the inaccuracies of LLMs to be of high value because they come in nice wrapping, along with the idea that you're always right.
This line of thinking led many Germans, who thought they were on the right side of history simply by virtue of joining the crowd, to learn the hard way in 1945.
And today's "adapt or die" doesn't sound any less fascist than it did in 1930.
Are you going to hire him?
If not, for the purpose of paying his bills, your giving a shit is irrelevant. That’s what I mean.
You mean, when evaluating suppliers, do I push for those who don't use AI?
Yes.
I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with speaking in aphorisms and glib dismissals. You don't know anyone here, yet you speak in an authoritative tone for others and redefine what "matters" and what is worthy of conversation, as if this is up to you.
> Don’t write a blog post whining about your morals,
why on earth not?
I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?
That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.
This seems rather black and white. Defining the morals probably makes sense, then evaluating whether they can be lived by, or whether we can compromise in the face of other priorities?
I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.
The latest DeepSeek and Kimi open weight models are competitive with GPT-5.
If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.
Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.
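A rough sketch of what that fallback looks like in practice, assuming a self-hosted OpenAI-compatible server (vLLM and llama.cpp both expose one); the host, port, and model name here are placeholders for whatever you deploy:

    // Point the same client code you use today at your own rented GPU box.
    async function chat(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:8000/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "my-open-weight-model", // e.g. whichever open model you serve
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }

Teams already coding against the OpenAI-style schema would mostly be changing a base URL and an API key.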
I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.
However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.
> we don't see the authors getting the recognition.
In that sense AI has been the biggest heist that has ever been perpetrated.
Only in exactly the same sense that portrait painters were robbed of their income by the invention of photography. In the end people adapted and some people still paint. Just not a whole lot of portraits. Because people now take selfies.
Authors still get recognition, if they are decent authors producing original, literary work. But the type of author that fills page five of your local newspaper has not been valued for decades. That was filler content long before AI showed up. Same for the people who do the subtitles on soap operas, and the people who create the commercials that show at 4am on your TV. All fair game for AI.
It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.
People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.
You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.
Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?
But DID the Luddites overreact? They sought to have machines serve people instead of the other way around.
If they had succeeded in winning regulation over the machines, in seeing wealth flow back into the average factory worker's hands, in having artisans integrated into the workforce instead of shut out, would so much of the bloodshed and mayhem required to form unions and win regulations have been needed?
Broadly, it seems to me that most technological change could use some consideration of people.
It's also important to note that most AI-generated content is slop. On this website most people stand against AI-generated writing slop. Also, trust me, you don't want a world where most music is AI generated; it will drive you crazy. So it's not like photography versus painting; it is like comparing good content and shitty content.
It's also not "exactly the same sense". If an AI-generated website is based on a real website, that's not like photography and painting; it is the same craft being compared.
Photography takes pictures of objects, not of paintings. By shifting the frame to "robbed of their income", you completely miss the point of the criticism you're responding to… but I suspect that's deliberate.
I don't think it's a meaningful distinction.
Robbing implies theft. The word heist was used here to imply that some crime is happening. I don't think there is such a crime and disagree with the framing. Which is what this is, and which is also very deliberate. Luddites used a similar kind of framing to justify their actions back in the day. Which is why I'm using it as an analogy. I believe a lot of the anti AI sentiment is rooted in very similar sentiments.
I'm not missing the point but making one. Clearly it's a sensitive topic to a lot of people here.
Totally agree, but I’d state it slightly differently.
This type of business isn’t going to be hit hard by AI; this type of business owner is going to be hit hard by AI.
> Arguing against progress as it is happening is as old as the tech industry. It never works.
I'm still wondering why I'm not doing my banking in Bitcoin. My blockchain database was replaced by Postgres.
So some tech can just be hypeware. The OP has a legitimate standpoint given some technologies' track records.
And the jury is still out on the effects of social media on children -- or why are some countries banning social media for children?
Not everything that comes out of Silicon Valley is automatically good.
I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it til you make it" by asking an LLM every time you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.
But what if the AI generated course was actually good, maybe even better than the human generated course? Which one would you pick then?
When a single such "actually good" AI-generated course actually exists, this question might be worth engaging with.
Sure, and it takes five whole paragraphs to have a nuanced opinion on what is very obvious to everyone :-)
>the type of business that's going to be hit hard by AI [...] will be the ones that integrate AI into their business the most
There. Fixed!
> And the type of businesses that survive will be the ones that integrate AI into their business the most successfully.
I am an AI skeptic and until the hype is supplanted by actual tangible value I will prefer products that don't cram AI everywhere it doesn't belong.
AI is not a tool, it is an oracle.
Prompting isn't a skill, and praying that the next prompt finally spits out something decent is not a business strategy.
Seeing how many successful businesses are a product of pure luck, using an oracle to roll the dice is not significantly different.
Do you remember the times when "cargo cult programming" was something negative? Now we're all writing incantations to the great AI, hoping that it will drop a useful nugget of knowledge in our lap...
"praying that the next prompt finally spits out something decent is not a business strategy."
Well, you've just described what ChatGPT is: one of the fastest-growing user bases in history.
As much as I agree with your statement, the real world doesn't respect it.
> one of the fastest-growing user bases in history
By selling a dollar of compute for 90 cents.
We've been here before, it doesn't end like you think it does.
Hot takes from 2023, great. Working with AIs has changed since then; maybe catch up? Look up how agentic systems work, how to keep them on task, how they can validate their work, etc. Or don't.
What happens if the market is right and this is the "new normal"?
Same with Stack Overflow being down today: it seems like not everyone cares anymore. Back then it would totally have caused a breakdown, because SO is vital.
> What happens if the market is right and this is the "new normal"?
Then there's an oversupply of programmers, salaries will crash, and lots of people will have to switch careers. It's happened before.
It's not as simple as putting all programmers into one category. There can be an oversupply of web developers but at the same time an undersupply of COBOL developers. If you are a very good developer, you will always be in demand.
> If you are a very good developer, you will always be in demand.
"Always", in the same way that five years ago we'd "never" have an AI that can do a code review.
Don't get me wrong: I've watched a decade of promises that "self-driving cars are coming real soon now, honest"; the latest news about Tesla's self-driving is that it can't cope with leaves. I certainly *hope* that a decade from now we will still be having much the same conversation about AI taking senior programmer jobs, but "always" is a long time.
AI can do code review? Do people actually believe this? We have an LLM bot on our MRs, and it is wrong 95% of the time.
I've been taking self-driving cars to get around regularly for a year or more.
Waymo and Tesla already operate in certain areas, but even if the tech is ready, regulation is still very much a thing.
“certain areas” is a very important qualifier, though. Typically areas with very predictable weather. Not discounting the achievement just noting that we’re still far away from ubiquity.
Leaves: https://news.ycombinator.com/item?id=46095867
I'm young; when was that, please, and in what industry?
After the year 2000, when the dot-com bubble burst.
A tech employee posted that he had looked for a job for 6 months, found none, and joined a fast food shop flipping burgers.
That turned tech workers switching to "flipping burgers" into a meme.
I used to watch all of the "Odd Todd" episodes religiously. Does anyone else remember that Adobe Flash-based "TV show" (before YouTube!)?
The defense industry in southern California used to be huge until the 1980s. Lots and lots of ex-defense industry people moved to other industries. Oil and gas has gone through huge economic cycles of massive investment and massive cut-backs.
The .com implosion: tech jobs of all kinds went from "we'll hire anyone who knows how to use a mouse" to the tech jobs section of the classifieds being omitted entirely for 20 months. There have been other bumps in the road since then, but that was a real eye-opener.
Well, same as covid, right? Digital/tech companies overhired because everyone was at home, and at the same time the rise of AI reduced headcount.
Covid overhiring + AI usage = the most massive layoffs we've seen in decades.
It was nothing like covid. The dot com crash lasted years where tech was a dead sector. Equity valuations kept declining year after year. People couldn't find jobs in tech at all.
There are still plenty of tech jobs these days, just less than there were during covid, but tech itself is still in a massive expansionary cycle. We'll see how the AI bubble lasts, and what the fallout of it bursting will be.
The key point is that the going is still exceptionally good. The posts talking about experienced programmers having to flip burgers in the early 2000s is not an exaggeration.
Some people will lose their homes. Some marriages will fail from the stress. Some people will chose to exit life because of it all.
It's happened before and there's no way we could have learned from that and improved things. It has to be just life changing, life ruining, career crippling. Absolutely no other way for a society to function than this.
That's where the post-scarcity society AI will enable comes in! Surely the profits from this technology will allow these displaced programmers to still live comfortable lives, not just be hoarded by a tiny number of already rich and powerful people. /s
I haven’t visited StackOverflow for years.
I don't get these comments. I'm not here to shill for SO, but it is a damn good website, if only for the archive. Can't remember how to iterate over the entries in a JavaScript dictionary (object)? SO can tell you, usually much better than W3Schools can, which attracts so much scorn. (I love that site: so simple for the simple stuff!)
When you search programming-related questions, what sites do you normally read? For me, it is hard to avoid SO because it appears in so many top results from Google. And I swear that Google AI just regurgitates most of SO these days for simple questions.
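(For the record, that's the kind of thing an SO answer nails in a couple of lines -- standard JavaScript, written here as TypeScript:)

    const scores: Record<string, number> = { alice: 3, bob: 5 };

    // Object.entries turns the object into [key, value] pairs you can loop over.
    for (const [name, score] of Object.entries(scores)) {
      console.log(`${name}: ${score}`);
    }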
I think that's OP's point though: AI can do it better now. No searching, no looking. Just drop your question into the AI with your exact data or function, and 10 seconds later you have a working solution. Stack Overflow is great, but AI is just better for most people.
Instead of running a Google query or searching Stack Overflow, you just need ChatGPT, Claude, or your AI of choice open in a browser. Copy and paste.
I stopped using it much even before the AI wave.
I've honestly never intentionally visited it (as in, went to the root page and started following links); it was just where Google sent me when searching for answers to specific technical questions.
It became as annoying as Experts Exchange, the very thing it railed against!
What was annoying about it?
Often the answer to the question was simply wrong, as it answered a different question that nobody asked. A lot of the time you had to follow a maze of links to related questions, which might have an answer or might lead to yet another question. The languages for which it was most useful (due to bad ecosystem documentation) evolved at a rate far faster than SO could update its answers, so most of the answers for those were outdated...
There were more problems. And that's from the point of view of somebody coming from Google to find questions that already existed. Interacting there was another entire can of worms.
That despite their URL's claim, they didn't actually have any sex-change experts.
They SEO'd their way into being a top search result by showing crawlers both questions and answers, but when you visited, the answer would be paywalled.
Stack Overflow's moderation is overbearing and all, but that's nowhere near the same level as Experts Exchange's baiting and switching.
Users flexing authority
The gatekeeping, the gaming of the system, the capricious moderation (e.g. questions flagged as duplicates), and the general attitude made it quite an insufferable part of the internet. There was a meme that the best way to get a response was to answer your own question in an obviously incorrect fashion, because people would rather tell you why you're wrong than actively help.
Why do you think those people behave that way?
you mixed up "is dead" with "is vital" :-)
Buggy whips are having a temporary setback.
I had a "milk-up-the-nose" laughter moment when I read this comment.
In contrast to others, I just want to say that I applaud the decision to take a moral stance against AI, and I wish more people would do that. Saying "well you have to follow the market" is such a cravenly amoral perspective.
> Saying "well you have to follow the market" is such a cravenly amoral perspective.
You only have to follow the market if you want to continue to stay relevant.
Taking a stand and refusing to follow the market is always an option, but it might mean going out of business for ideological reasons.
So practically speaking, the options are follow the market or find a different line of work if you don’t like the way the market is going.
I still don’t blame anyone for trying to chart a different course though. It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
The ideal version of my job would be partnering with all the local businesses around me that I know and love, elevating their online facilities to let all of us thrive. But the money simply isn’t there. Instead their profits and my happiness are funnelled through corporate behemoths. I’ll applaud anyone who is willing to step outside of that.
> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
Isn't that what money is though, a way to get people to stop what they're doing and do what you want them to instead? It's how Rome bent its conquests to its will and we've been doing it ever since.
It's a deeply broken system but I think that acknowledging it as such is the first step towards replacing it with something less broken.
> It’s truly depressing to have to accept that the only way to make a living in a field is to compromise your principles.
Of course. If you want the world to go back to how it was before, you’re going to be very depressed in any business.
That’s why I said your only real options are going with the market or finding a different line of work. Technically there’s a third option where you stay put and watch bank accounts decline until you’re forced to choose one of the first two options, but it’s never as satisfying in retrospect as you imagined that small act of protest would have been.
No, of course you don't have to – but don't torture yourself. If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
If you found it unacceptable to work with companies that used any kind of digital database (because you found centralization of information and the amount of processing and analytics this enables unbecoming) then you should probably look for another venture instead of finding companies that commit to pen and paper.
> If the market is all AI, and you are a service provider that does not want to work with AI at all then get out of the business.
Maybe they will, and I bet they'll be content doing that. I personally don't work with AI and try my best not to train it. I left GitHub and Reddit because of this, and I'm not uploading new photos to Instagram. The jury is still out on how I'm going to share my photography, and not sharing it is on the table as well.
I may even move to a cathedral model or just stop sharing the software I write with the general world, too.
Nobody has to bend and act against their values and conscience just because others are doing it and the system demands that we betray ourselves for its own benefit.
Life is more nuanced than that.
Good on you. Maybe some future innovation will afford everyone the same opportunity.
Maybe one day we will all become people again!
(But only all of us simultaneously, otherwise won't count! ;))))
The number of triggered Stockholm Syndrome patients in this comment section is terminally nauseating.
How large an audience do you want to share it with? Self-host photo album software, on hardware you own, behind a password, for people you trust.
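(A minimal sketch of the "behind a password" part, assuming Node with Express and the express-basic-auth package; the username, password, folder, and port are all illustrative:)

    const express = require("express");
    const basicAuth = require("express-basic-auth");

    const app = express();

    // Browsers will show a password prompt before anything is served
    app.use(basicAuth({
      users: { friend: "a-long-shared-secret" },
      challenge: true,
    }));

    // Serve the resized images from a local folder you control
    app.use("/photos", express.static("/srv/gallery/resized"));

    app.listen(8080);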
Before the AI craze, I liked the idea of having a CC BY-NC-ND[0] public gallery to show what I took. I was not after any likes or anything; if I got professional feedback, that'd be a bonus. I even allowed EXIF-intact, high-resolution versions to be downloaded.
Now, I'll probably install a gallery webapp to my webserver and put it behind authentication. I'm not rushing because I don't crave any interaction from my photography. The images will most probably be optimized and resized to save some storage space, as well.
[0]: https://creativecommons.org/licenses/by-nc-nd/4.0/
Yeah, but the business seems to be education for web front-end. If you are going to shun new tech, you should really return to the printing press, or better, copying scribes. If you are going to do modern tech, you kind of need to stick with the most modern tech.
Printing press and copying scribes is a sarcastic comparison, but these web designers are still actively working, and their industry is hundreds of years from the state of those old technologies. The joke isn't funny enough, nor is the analogy apt enough, to make sense.
No, it is a pretty good comparison. There is absolutely AI slop, but you have to be sticking your head in the sand if you think AI will not continue to shape this industry. If you are selling learning courses and are sticking your head in the sand, well, that's pretty questionable.
I understand this stance, but I'd personally differentiate between taking the moral stand as a consumer, where you actively become part of the growth in demand that fuels further investment, and as a contractor, where you're a temporary cost, especially if you, and the people who depend on you, need the work to survive.
A studio taking on temporary projects isn't investing in AI; they're not getting paid in stock. This is effectively no different from a construction company building an office building, or a bakery baking a cake.
As a more general commentary, I find this type of moral crusade very interesting, because it's very common in the rich western world, and it's always against the players but rarely against the system. I wish more people in the rich world would channel this discomfort as general disdain for the neoliberal free-market of which we're all victims, not just specifically AI, for example.
The problem isn't AI. The problem is a system where new technology means millions fearing poverty. Or one where profits, regardless of industry, matter more than sustainability. Or one where rich players can buy their way around the law (in this case, copyright law, for example). AI is just the latest in a series of products, companies, characters, etc. that will keep abusing an unfair system.
IMO, over-focusing on small moral crusades against specific players like this, and not the game as a whole, is a distraction bound to always bring disappointment, and bound to keep moral players at a disadvantage, constantly second-guessing themselves.
<< over-focusing on small moral crusades against specific players like this and not the game as a whole
Fucking this. What I tend to see is petty 'my guy good, not my guy bad' approach. All I want is even enforcement of existing rules on everyone. As it stands, to your point, only the least moral ship, because they don't even consider hesitating.
It's cravenly amoral until your children are hungry. The market doesn't care about your morals. You either have a product people are willing to pay money for or you don't. If you are financially independent to the point that it doesn't matter to you, then by all means, do what you want. The vast majority of people are not.
I'm not sure I understand this view. Did seamstresses see sewing machines as amoral? Or carpenters with electric and air drills and saws?
AI is another set of tooling. It can be used well or not, but arguing the morality of a tooling type (e.g. drills) vs. maybe a specific company (e.g. Ryobi) seems an odd take to me.
Man, y'all gotta stop copying each other's homework.
It's said often because it's very true. It's telling that you can't even argue against it and just have to attack the people instead.
Nobody is against his moral stance. The problem is that he's playing the "principled stand" game on a budget that cannot sustain it, then externalizing the cost like a victim. If you're a millionaire and can hold whatever moral line you want without ever worrying about rent, food, healthcare, kids, etc., then "selling out" is optional and bad. If you're Joe Schmoe with a mortgage and 5 months of emergency savings, and you refuse the main kind of work people want to pay you for (which is not even that controversial), you're not some noble hero; you're just blowing up your life.
> he’s playing the “principled stand” game on a budget that cannot sustain it, then externalizing the cost like a victim
No. It is the AI companies that are externalizing their costs onto everyone else by stealing the work of others, flooding the zone with garbage, and then weeping about how they'll never survive if there's any regulation or enforcement of copyright law.
The CEO of every one of those AI companies drives an expensive car home to a mansion at the end of the workday. They are set. The average person does not, and they cannot afford to play the principled-stand game. It's not a question of right or wrong for most; it's a question of putting food on the table.
I find what you are saying very generic.
What stance against AI? Image generation is not the same as code generation.
There are so many open-source projects out there; it's a huge difference from taking all the images.
AI is also just ML, so should I not use an image bounding-box algorithm? Am I not allowed to take training data from online, or are only big companies not allowed to?
Being broadly against AI is a strange stance. Should we all turn off swipe to type on our phones? Are we supposed to boycott cancer testing? Are we to forbid people with disabilities reading voicemail transcriptions or using text to speech? Make it make sense.
There's a moral line that every person has to draw about what work they're willing to do. Things aren't always so black and white; we straddle that line. The impression I got reading the article is that they didn't want to work for bubble AI companies generating for the sake of generating, not that they hate anything with a vector DB.
I think when people say AI they mean "LLMs in every consumer-facing product".
Intentionally or not, you are presenting a false equivalency.
I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.
How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know, they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around for quite some time and will continue to be, and that much of it will involve LLMs.
You have reasonably available context here. "This year" seems more than enough on its own.
I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kinda awesome. As it is, they are basically just theft from the public and IP laundering.
One person's unethical AI product is another's accessibility tool. Where the line is drawn isn't as obvious as you're implying.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff
If all of "AI stuff" is a "no" for you, then I think you've just opted out of working in most industries to some important degree going forward.
This is also not to say that service providers should not have any moral standards. I just don't understand the expectation in this particular case. You ignore what the market wants and where a lot/most of new capital turns up. What's the idea? You are a service provider, you are not a market maker. If you refuse service with the market that exists, you don't have a market.
Regardless, I really like their aesthetics (which we need more of in the world) and do hope that they find a way to make it work for themselves.
> If all of "AI stuff" is a "no" for you, then I think you've just opted out of working in most industries to some important degree going forward.
I'm not sure the penetration of AI, especially to a degree where participants must use it, is all that permanent in many of these industries. Already, the industry where it is arguably the most "present" (forced in) is SWE, and it's proving to be quite disappointing... Where I work, the more senior you are, the less AI you use.
> what the market wants
Pretty sure the market doesn't want more AI slop.
Nobody that actually understands the market right now would say that
Pretty sure HN has become completely detached from the market at this point.
Demand for anything AI is incredibly high right now. AI providers are constantly bouncing off capacity limits. AI apps in app stores are pulling incredible download numbers.
There is absolutely AI slop out there. Many companies rushed to add "AI", a glorified chatbot bolted onto their existing product, and have marketed it as such.
There are also absolutely very tasteful products that add value using LLMs and other more recent advancements.
Both can exist at the same time.
I understand it as the market wanting more content about competing in an AI world
Sora's app has a 4.8 rating on the App Store with 142K ratings. It seems to me that the market does not care about slop or not, whether I like it or not.
The market wants a lot more high quality AI slop and that's going to be the case perpetually for the rest of the time that humanity exists. We are not going back.
The only thing that's going to change is the quality of the slop will get better by the year.
I don’t think they’re unique. They’re simply among the first to run into the problems AI creates.
Any white-collar field—high-skill or not—that can be solved logically will eventually face the same pressure. The deeper issue is that society still has no coherent response to a structural problem: skills that take 10+ years to master can now be copied by an AI almost overnight.
People talk about “reskilling” and “personal responsibility,” but those terms hide the fact that surviving the AI era doesn’t just mean learning to use AI tools in your current job. It’s not that simple.
I don’t have a definitive answer either. I’m just trying, every day, to use AI in my work well enough to stay ahead of the wave.
Ceaseless AI drama aside, this blog and the Set Studio website really hit the sweet spot of good-looking, fast, and well-organized for me; it's been a while since I've felt that about a website.
I hope things turn around for them; it seems like they do great work.
I feel like this person might be just a few bad months ahead of me. I am doing great, but the writing is on the wall for my industry.
We should have more posts like this. It should be okay to be worried, to admit that we are having difficulties. It might reach someone else who otherwise feels alone in a sea of successful hustlers. It might also just get someone the help they need or form a community around solving the problem.
I also appreciate their resolve. We rarely hear from people being uncompromising on principles that have a clear price. Some people would rather ride their business into the ground than sell out. I say I would, but I don’t know if I would really have the guts.
It's a global industry shift.
You can either hope that this shift is not happening, or hope that you are one of the people surviving in your niche.
But the industry and the world are shifting; you should start shifting with them.
I would call that being innovative, ahead of the curve, etc.
Sorry for them. After I got laid off in 2023, I had a devil of a time finding work, to the point that my unemployment ran out; and that's with 20 years as a dev, tech lead, and full-stack engineer, including stints as an EM and CTO.
Since then I've pivoted to AI and gen-AI startups. Money is tight and I don't have health insurance, but at least I have a job…
> 20 years as a dev, tech lead, and full-stack engineer, including stints as an EM and CTO
> Since then I've pivoted to AI and gen-AI startups. Money is tight and I don't have health insurance, but at least I have a job…
I hope this doesn't come across as rude, but why? My understanding is American tech pays very well, especially on the executive level. I understand for some odd reason your country is against public healthcare, but surely a year of big tech money is enough to pay for decades of private health insurance?
Not parent commenter, but in the US when someone’s employment doesn’t include health insurance it’s commonly because they’re operating as a contractor for that company.
Generally you’re right, though. Working in tech, especially AI companies, would be expected to provide ample money for buying health insurance on your own. I know some people who choose not to buy their own and prefer to self-pay and hope they never need anything serious, which is obviously a risk.
A side note: The US actually does have public health care but eligibility is limited. Over one quarter of US people are on Medicaid and another 20% are on Medicare (program for older people). Private self-pay insurance is also subsidized on a sliding scale based on your income, with subsidies phasing out around $120K annual income for a family of four.
It’s not equivalent to universal public health care, but it’s also different from what a lot of people (Americans included) have come to think.
Come to Europe. Salaries are (much) lower, but we can use good devs and you'll have vacation days and health care.
The tech sector in the UK/EU is bad, too, and the cost of living in big cities is terrible relative to the salaries.
They are outsourcing just as much as US Big Tech. And never mind the slow-mo economic collapse of the UK, France, and Germany.
Moving to Europe is anything but trivial. Have you looked at y'all's immigration processes recently? It can be a real bear.
Yeah. It is much harder now than it used to be. I know a couple of people who came from the US ~15 to 10 years ago and they had it easy. It was still a nightmare with banks that don’t want to deal with US citizens, though.
As Americans, getting a long-term visa or residency card is not too hard, provided you have a good job. It’s getting the job that’s become more difficult. For other nationalities, it can range from very easy to very hard.
Yeah it depends on which countries you're interested in. Netherlands, Ireland, and the Scandinavian ones are on the easier side as they don't require language fluency to get (dev) jobs, and their languages aren't too hard to learn either.
Do you count Finland? I heard that Finnish is very hard to learn.
Finnish people are probably nice when people try to learn their language. Hahaha. Can't say that about the other places.
If you have a US or Japanese passport and want to try NL: https://expatlaw.nl/dutch-american-friendship-treaty aka https://en.wikipedia.org/wiki/DAFT . It applies to freelancers.
Yeah, I'm in NL, so this is my frame of reference. Also, in many companies English is the main language, so that helps.
I made a career out of understanding this. In Germany it’s quite feasible. The only challenge is finding affordable housing, just like elsewhere. The other challenge is the speed of the process, but some cities are getting better, including Berlin. Language is a bigger issue in the current job market though.
Counter: come to Taiwan! Anyone with a semi-active GitHub can get a Gold Card visa. Six months in, you're eligible for national health insurance (about $30 USD/month). Cost of living is extremely low here.
However, salaries are atrocious and local jobs aren't really available to non-Mandarin speakers. But if you're looking to kick off your remote consulting career or bootstrap some product you want to build, there's not really anywhere on earth that combines quality of life with cost of living like Taiwan does.
+1, Taiwan is a great place
Taking a 75% pay cut for free healthcare that costs $1K a month anyway doesn't math. Not to mention the higher taxes for this privilege. European senior developers routinely get paid less than US junior developers.
I want to sympathize but enforcing a moral blockade on the "vast majority" of inbound inquiries is a self-inflicted wound, not a business failure. This guy is hardly a victim when the bottleneck is explicitly his own refusal to adapt.
Survival is easy if you just sell out.
It's unfair to place all the blame on the individual.
By that metric, everyone in the USA is responsible for the atrocities the USA war industry has inflicted all over the world. Everyone pays taxes funding Israel, previously the war in Iraq, Afghanistan, Vietnam, etc.
But no one believes this because sometimes you just have to do what you have to do, and one of those things is pay your taxes.
Selling out is easy when your children have no food.
If the alternative to "selling out" is making your business unviable and having to beg the internet for handouts (essentially), then yes, you should "sell out" every time.
The guy won’t work with AI, but works with Google…
Thank you. I would imagine the entire Fortune 500 list passes the line of "evil", drawing that line at AI is weird. I assume it's a mask for fear people have of their industry becoming redundant, rather than a real morality argument.
Surely there's AI usage that's not morally reprehensible.
Models that are trained only on public-domain material, used for value-add purposes, not simply marketing or gamification gimmicks...
How many models are only trained on legal[0] data? Adobe's Firefly model is one commercial model I can think of.
[0] I think the data can be licensed, and not just public domain; e.g. if the creators are suitably compensated for their data to be ingested
> How many models are only trained on legal[0] data?
None, since 'legal' for AI training is not yet defined, but OLMo is trained on the Dolma 3 dataset, which is:
1. Common Crawl
2. GitHub
3. Wikipedia, Wikibooks
4. Reddit (pre-2023)
5. Semantic Scholar
6. Project Gutenberg
* https://arxiv.org/pdf/2402.00159
Nice, I hadn't heard of this. For convenience, here are the Dolma dataset and the HuggingFace models trained on it:
https://huggingface.co/datasets/allenai/dolma
https://huggingface.co/models?dataset=dataset:allenai/dolma
I wonder if there is a pivot where they get to keep going but still avoid AI. There must be for a small consultancy.
> "a self-inflicted wound"
"AI products" that are being built today are amoral, even by capitalism's standards, let alone by good business or environmental standards. Accepting a job to build another LLM-selling product would be soul-crushing to me, and I would consider it as participating in propping up a bubble economy.
Taking a stance against it is a perfectly valid thing to do, and by disclosing it plainly, the author is not claiming to be a victim through no doing of their own. By not seeing past that caveat, and missing the whole point of the article, you've successfully averted your eyes from another thing that is unfolding right in front of us: the majority of American GDP growth is AI this or that, and the majority of it has no real substance behind it.
I too think AI is a bubble, and besides the way this recklessness could crash the US economy, there's many other points of criticism to what and how AI is being developed.
But I also understand this is a design and web development company. They're not refusing contracts to build AI that will take people's jobs, or violate copyright, or be used in weapons. They're refusing product marketing contracts; advertising websites, essentially.
This is similar to a bakery next to the OpenAI offices refusing to bake cakes for them. I'll respect the decision, sure, but it very much is an inconsequential self-inflicted wound. It's more amoral to fully pay your federal taxes if you live in the USA, for example, considering a good chunk is ultimately used for war, the CIA, the NSA, etc., but nobody judges an average US resident for paying them.
> Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I started TextQuery[1] with the same moralistic stance. Not with respect to using AI or not, but against the rot most of the software industry suffers from, which places more importance on making money and forcing subscriptions than on making something beautiful and detail-focused. I poured time into optimizing selections, perfecting autocomplete, and wrestling with Monaco's thin documentation. However, I failed to make it a sustainable business. My motivation ran out, and what I thought would be a fun multi-year journey collapsed into burnout and a dead-end project.
I have to say my time would have been better spent building something sustainable, making more money, and optimizing the details once I had that. It was naïve to obsess over subtleties that only a handful of users would ever notice.
There’s nothing wrong with taking pride in your work, but you can’t ignore what the market actually values, because that's what will make you money, and that's what will keep your business and motivation alive.
[1]: https://textquery.app/
That is a beautiful product. How unfortunate!
His business seems to be centered around UI design and front-end development, and unfortunately this is one of the things that AI can do decently well. The end result is worse than a proper design, but in my experience people don't really care about small details in most cases.
I can definitely tell. Some sites just give zero fucks about usability, only that they look pretty. It's a shame.
Andy Bell is absolute top tier when it comes to CSS + HTML, so when even the best are struggling you know it's starting to get hard out there.
I don’t doubt it at all, but CSS and HTML are also about as commodity as it gets when it comes to development. I’ve never encountered a situation where a company is stuck for months on a difficult CSS problem and felt like we needed to call in a CSS expert, unlike most other specialty niches where top tier consulting services can provide a huge helpful push.
HTML + CSS is also one area where LLMs do surprisingly well. Maybe there’s a market for artisanal, hand-crafted, LLM-free CSS and HTML out there only from the finest experts in all the land, but it has to be small.
How do you measure "absolute top tier" in CSS and HTML? Honest question. Can he create code for difficult-to-code designs? Can he solve technical problems few can solve in, say, CSS build pipelines, or rendering-performance issues in complex animations? I never had an HTML/CSS issue that couldn't be addressed by just reading the MDN docs or Can I Use, so maybe I've missed some complexity along the way.
Look at his work? I had a look at the studio portfolio and it's damn solid.
Being absolute top tier at what has become a commodity skillset that can be done “good enough” by AI for pennies for 99.9999% of customers is not a good place to be…
When 99.99% of customers have garbage for a website, the 0.01% will grow much faster and topple the incumbents. Nothing has changed.
> When 99.99% of customers have garbage for a website
When you think 99.99% of company websites are garbage, it might be your rating scale that is broken.
This reminds me of all the people who rage at Amazon’s web design without realizing that it’s been obsessively optimized by armies of people for years to be exactly what converts well and works well for their customers.
A lesson many developers have to learn is that code quality / purity of engineering is not a thing that really moves the needle for 90% of companies.
Having the most well-tested backend and a beautiful frontend that works across all browsers and devices, not just the main three browsers your customers use, isn't what pays the bills.
Amazon has "garbage as a website" and they seem to be doing just fine.
Lots of successful companies have garbage as a website (successful in whatever sense, from Fortune 500 to neighbourhood stores).
Wishing these guys all the best. It's not just about following the market; it's about the ability to just be yourself. When everyone around you is telling you that you simply have to start doing something, and it's not even about the moral side of that thing, you just don't want to do it. Yeah, yeah, it's a cruel world. But this doesn't mean that we all need to victim-blame everyone who doesn't feel comfortable in this trendy stream.
I hope things with AI will settle down soon, there will be applications that actually make sense, and some sort of new balance will be established. Right now it's a nightmare. Everyone wants everything with AI.
> Everyone wants everything with AI.
All the _investors_ want everything with AI. Lots of people, non-tech workers even, just want a product that works and, often, doesn't work differently than it did last year. That goal is often at odds with the AI-everywhere approach du jour.
My post had the privilege of being on the front page for a few minutes. I got some very fair criticism because it wasn't really a solid article; it was written while traveling on a train when I was already tired and hungry. I don't think I was thinking rationally.
I'd much rather see these kind of posts on the front page. They're well thought-out and I appreciate the honesty.
I think that, when you're busy following the market, you lose what works for you. For example, most business communication happens through push-based traffic: you get assigned work and you have X time to solve it all. If you don't, there's an extremely tedious reflection meeting that leads nowhere. Why not do pull-based work, where you get done what you get done?
Is the issue here that customers aren't informed about when a feature will be implemented? Because the alternative is promising date X and delaying it three times because customer B is more important.
Everyone gets to make their own choices and take principled stances of their choosing. I don't find that persuasive as a "buy my course" pitch, though.
I had a discussion yesterday with someone who owns a company creating PowerPoints for customers. As you might understand, that is also a business about to be hit hard by AI. What he does is offer an AI entry-level option, where the questions he asks the customer (via a form) lead to a script for running the AI. With that, he is able to combine his expertise with the market's demand for AI, and profit from it.
> ... we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
I don't use AI tools in my own work (programming and system admin). I won't work for Meta, Palantir, Microsoft, and some others because I have to take a moral stand somewhere.
If a customer wants to use AI or sell AI (whatever that means), I will work with them. But I won't use AI to get the work done, not out of any moral qualm but because I think of AI-generated code as junk and a waste of my time.
At this point I can make more money fixing AI-generated vibe coded crap than I could coaxing Claude to write it. End-user programming creates more opportunity for senior programmers, but will deprive the industry of talented juniors. Short-term thinking will hurt businesses in a few years, but no one counting their stock options today cares about a talent shortage a decade away.
I looked at the sites linked from the article. Nice work. Even so, I think hand-crafted front-end work turned into a commodity some time ago, and now the onslaught of AI slop will kill it off. Those of us in the business of web sites and apps can appreciate mastery of HTML, CSS, and JavaScript, beautiful designs, and user-oriented interfaces. Sadly, most business owners don't care that much and lack the perspective to tell good work from bad. Most users don't care either. My evidence: 90% of public web sites. No one thinks WordPress got the market share it has because of technical excellence or because it enables beautiful designs and UI. Before LLMs could crank out web sites, we had an army of amateur designers and business owners doing it with WordPress, paying $10/hr or less on Upwork and Fiverr.
I simply have a hard time following the refusal to work on anything AI-related. There is AI slop, but also a lot of interesting value-add products and features for existing products. I think it makes sense to be thoughtful about what to work on, but I struggle with the blanket no to AI.
On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
That might well be the current 'market' for SWE labor though. I totally agree it's a silly bubble but I'm not looking forward to the state of things when it pops.
> On this thread what people are calling “the market” is just 6 billionaire guys trying to hype their stuff so they can pass the hot potato to someone else right before the whole house of cards collapses.
Careful now, if they get their way, they’ll be both the market and the government.
It's very funny reading this thread and seeing the exact same arguments I saw five years ago for the NFT market and the metaverse.
All of this money is being funneled and burned away on AI shit that isn't even profitable nor has it found a market niche outside of enabling 10x spammers, which is why companies are literally trying to force it everywhere they can.
Tough crowd here. Though to be expected - I'm sure a lot of people have a fair bit of cash directly or indirectly invested in AI. Or their employer does ;)
We Brits simply don't have the same American attitude towards business. A lot of Americans simply can't understand that chasing riches at any cost is not a particularly European trait. (We understand how things are in the US. It's not a matter of just needing to "get it" and seeing the light)
Corrected title: "we have inflicted a very hard year on ourselves with malice aforethought".
The equivalent of that comic where the cyclist intentionally spoke-jams themselves and then acts surprised when they hit the dirt.
But since the author puts moral high horse jockeying above money, they've gotten what they paid for - an opportunity to pretend they're a victim and morally righteous.
Par for the course
>especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that.
I intentionally ignored the biggest invention of the 21st century out of strange personal beliefs and now my business is going bankrupt
I don't think it's fair to call them "strange" personal beliefs
I personally would call them ignorant beliefs.
It probably depends on your circle. I find those beliefs strange, seems like moral relativism.
Yes, I find this a bit odd. AI is a tool; what specific part of it do you find so objectionable, OP? For me, I know they are never going to put the genie back in the bottle, and we will never get back the electricity spent on it, so I might as well use it. We finally got a pretty good Multivac we can talk to, and for me it usually gives the right answers back. It is a once-in-a-lifetime type of invention we get to enjoy and use. I was king of the AI haters, but around Gemini 2.5 it just became so good that if you are hating on it or criticizing it, you aren't looking at it objectively anymore.
> we won’t work on product marketing for AI stuff, from a moral standpoint
Can someone explain this?
Some folks have moral concerns about AI. They include:
* The environmental cost of inference in aggregate, and of training specifically, is non-negligible
* Training is performed (it is assumed) with material that was not consented to be trained upon. Some consider this to be akin to plagiarism or even theft.
* AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.
* The primary companies who are selling AI products have, at times, controversial pasts or leaders.
* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shoehorn AI in everywhere, cheapening products across a range of industries.
* The social impacts of AI, particularly generative media and shopping in places like YouTube, Amazon, Twitter, Facebook, etc are not well understood and could contribute to increased radicalization and Balkanization.
* AI is enabling an attention Gish gallop in places like search engines, where good results are being crowded out by slop.
Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)
"Please don't downvote me for trying to provide a neutral answer to this person's question"
Please note that there are some accounts that downvote any comment talking about downvoting, on principle.
These points are so broad and multidimensional that one must really wonder whether they were looking for reasons for concern.
Let's put aside the fact that the person you replied to was trying to represent a diversity of views and not attribute them all to one individual, including the author of the article.
Should people not look for reasons to be concerned?
I can show you many instances of people or organisations representing diversity of views. Example: https://wiki.gentoo.org/wiki/Project:Council/AI_policy
I'm not sure it's helpful to accuse "them" of bad faith, when "them" hasn't been defined and the post in question is a summary of reasons many individual people have expressed over time.
I have noticed this pattern too frequently: https://wiki.gentoo.org/wiki/Project:Council/AI_policy
See the diversity of views.
I'm fairly sure the first three points are all true for each new human produced. The environmental cost versus output is probably significantly higher per human, and the population continues to grow.
My experience with large companies (especially American tech) is that they always try to deliver the product as cheaply as possible, are usually evil, and have never cared about social impact. And HN has been steadily complaining about the lowering quality of search results for at least a decade.
I think your points are probably a fair snapshot of people's moral issues, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable-sounding morality point so they don't have to admit that.
Plenty of people have moral concerns with having children too.
And while some might be doing what you say, others might genuinely have a moral threshold they are unwilling to cross. Who am I to tell someone they don't actually have a genuinely held belief?
Explanation: this article is a marketing piece trying to appeal to anti-AI group.
"Landing projects for Set Studio has been extremely difficult, especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that"
The market is literally telling them what it wants, and potential customers are asking them for work, but they are declining it from "a moral standpoint",
and instead blaming "a combination of limping economies, tariffs, even more political instability and a severe cost of living crisis".
This is a failure of leadership at the company. Adapt or die; your bank account doesn't care about your moral red lines.
Maybe they don't need to "create" websites anymore; fixing websites that LLMs generated is the future now.
We said WordPress would kill front-end work, but years later people still employ developers to fix WordPress messes.
The same thing will happen with AI-generated websites.
> The same thing will happen with AI-generated websites
Probably even more so. I've seen the shit these things put out; it's unsustainable garbage. At least WordPress sites have a similar starting point. I think the main issue is that the "fixing AI slop" industry will take a few years to blossom.
Man, I definitely feel this, being in the international trade business operating an export contract manufacturing company from China, with USA based customers. I can’t think of many shittier businesses to be in this year, lol. Actually it’s been pretty difficult for about 8 years now, given trade war stuff actually started in 2017, then we had to survive covid, now trade war two. It’s a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base, suffering with trade wars and tariff pains. Cash is just hard to come by this year. We’ve pivoted to focus more on design engineering services these past eight years, and that’s been enough to keep the lights on, but it’s hard to scale, it is just a bandwidth constrained business, can only take a few projects at a time. Good luck to OP navigating it.
Isn't this a bit of an ad?
This article was posted a few days ago, it was flagged and removed within an hour or two. I don't know what is different this time.
I'm glad I wasn't the only one that thought that!
Completely agree.
> we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Although there’s a ton of hype in “AI” right now (and most products are over-promising and under-delivering), this seems like a strange hill to die on.
imo LLMs are (currently) good at 3 things:
1. Education
2. Structuring unstructured data
3. Turning natural language into code
From this viewpoint, it seems there is a lot of opportunity to both help new clients as well as create more compelling courses for your students.
No need to buy the hype, but no reason to die from it either.
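To make point 2 concrete, here's a minimal sketch of structuring unstructured data with an LLM, assuming the OpenAI Node SDK; the model name and the extracted fields are illustrative, and any JSON-capable chat model would do:

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const text = "Invoice #1042 from Acme Corp, due 2026-03-01, total $1,250.00";

    const completion = await client.chat.completions.create({
      model: "gpt-4o-mini", // illustrative; swap in your preferred model
      response_format: { type: "json_object" },
      messages: [
        { role: "system", content: "Extract invoice_number, vendor, due_date, and total as JSON." },
        { role: "user", content: text },
      ],
    });

    // e.g. { invoice_number: "1042", vendor: "Acme Corp", ... }
    console.log(JSON.parse(completion.choices[0].message.content));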
> imo LLMs are (currently) good at 3 things
Notice the phrase "from a moral standpoint". You can't argue against a moral stance by stating solely what is, because the question for them is what ought to be.
Really depends what the moral objection is. If it's "no machine may speak my glorious tongue", then there's little to be said; if it's "AI is theft", then you can maybe make an argument about hypothetical models trained on public domain text using solar power and reinforced by willing volunteers; if it's "AI is a bubble and I don't want to defraud investors", then you can indeed argue the object-level facts.
Indeed, facts are part of the moral discussion in ways you outlined. My objection was that just listing some facts/opinions about what AI can do right now is not enough for that discussion.
I wanted to make this point here explicitly because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
> because lately I've seen this complete erasure of the moral dimension from AI and tech, and to me that's a very scary development.
But isn't that exactly what the is-ought problem manifests? If morals are "oughts", then oughts are goal-dependent, i.e. they depend on personally defined goals. To you it's scary; to others it's the way it should be.
Get with the program dude. Where we're going, we don't need morals.
I think some people prefer living in reality
Previously: https://news.ycombinator.com/item?id=46070842
Interesting. I agree that this has been a hard year, the hardest in a decade. But the comparison with 2020 is just surprising. I mean, in 2020 crazy amounts of money were just being thrown around left and right, no? For me, it was the easiest year of my career, when I basically did nothing and picked up the money thrown at me.
Why would your company or business suddenly require no effort due to covid?
Too much demand, all of a sudden. Money got printed, and I went from near bankruptcy in mid-Feb 2020 to being awash with money by mid-June.
And it continued growing nonstop all the way through early Sep 2024, and it's been slowing down ever since, by now coming to an almost complete stop. To the point that I even fired all my sales staff; they had been treading water, with no calls let alone deals, for half a year before being dismissed in mid-July this year.
I think it won't return; custom dev is done. The myth of "hiring coders to get rich" is over. No surprise it died, because it never worked; sooner or later people had to realise it. I may check again in 2-3 years to see how the market is doing, but I'm not at all hopeful.
Switched into miltech, where demand is real.
Interesting how someone can clearly be brilliant in one area and totally have their head buried in the sand in another, and not even realize it.
"especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that."
You will continue to lose business if you ignore all the "AI stuff". AI is here to stay, and putting your head in the sand will only leave you further behind.
I've known people over the years who took stands on various things, like JavaScript frameworks becoming popular (they refused to use them), and the end result was less work and eventually being pushed out of the industry.
This thread has some serious dickheads
It’s ironic that Andy calls himself “ruthlessly pragmatic”, but his business is failing because of a principled stand in turning down a high volume of inbound requests. After reading a few of his views on AI, it seems pretty clear to me that his objections are not based in a pragmatic view that AI is ineffective (though he claims this), but rather an ideological view that they should not be used.
Ironically, while ChatGPT isn’t a great writer, I was even more annoyed by the tone of this article and the incredible overuse of italics for emphasis.
Yeah. For all the excesses of the current AI craze there's a lot of real meat to it that will obviously survive the hype cycle.
User education, for example, can be done in ways that don't even feel like gen AI and that can drastically improve activation, e.g. recommending feature X based on activity Y, tailored to the user's use case.
If you won't even lean into things like this, you're just leaving yourself behind.
I agree that this year has been extremely difficult, but as far as I know, a large number of companies and individuals still made a fortune.
Two fundamental laws of nature: the strong prey on the weak, and survival of the fittest.
Therefore, why is it that those who survive are not the strong preying on the weak, but rather the "fittest"?
Next year's development of AI may be even more astonishing, continuing to kill off large companies and small teams unable to adapt to the market. Only by constantly adapting can we survive in this fierce competition.
> especially as we won’t work on product marketing for AI stuff, from a moral standpoint, but the vast majority of enquiries have been for exactly that
Sounds like a self-inflicted wound. No kids, I assume?
All the AI-brained people are acting like the very AIs they celebrate.
That's horrifying.