Every technological advance in this space has caused harm to someone.
The advent of digital systems harmed artists who had developed manual artistic skills.
The availability of cheap, mill-made paper harmed those who crafted paper by hand.
The creation of paper harmed papyrus craftsmen.
And the spread of paper probably also pissed off those who scraped the hair off thin leather to create vellum.
My point is that, in line with the Jevons paradox, a wave of destruction always accompanies technological transformation, but in the medium and long term we almost always end up with more jobs created by the technology.
Devil's advocate here: from the data we have been able to piece together and reverse-engineer, Pro- and Max-tier customers are loss leaders for all the major inference providers. They are effectively a marketing exercise.
The real profitability is in selling tokens to enterprise, and enterprise demand is growing so fast that providers are short on the total number of tokens they can generate per minute. They are prioritising rationally, giving enterprise the better experience instead of optimising for their lowest-paying (and most loss-making) customers.
We are in a hardware crunch right now, but that won't last forever; eventually (likely 2028) prosumer accounts will again get the kind of experience we had in January.
Well, there probably are some in there: data centre designers, comms experts, architects, electricians, etc. Lots of smaller organisations benefit from the work.
I found that the time I spent reviewing and fixing issues, errors, and omissions in Copilot's meeting notes was more than the time it took to clean up my own notes and send them out.
It's time to accept the new way of working: just change your reality to match the Copilot version and boom, you've saved all that time fixing its mistakes!
In fact, why have the meeting at all? Just prompt Copilot to create notes from a fictional conception of the meeting and you've saved everyone a whole hour!
I'm building a scraper in Golang based on Colly to do two things:
* Automatically train the scraper on the structure of the page to acquire the data you want, and
* Clean and structure the data into a format suitable to go into a relational database
I got sick of doing all that manually for some pricing data I wanted to monitor on some suppliers' sites, and I've always wanted to contribute more to open source and give back.
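
To make the extraction half concrete, here's a minimal sketch of the kind of Colly collector the trained scraper would end up producing. The URL and the CSS selectors (`div.product`, `h2.name`, `span.price`) are hypothetical placeholders rather than anything from the actual project; the whole point of the tool is to discover them automatically instead of hard-coding them.

```go
package main

import (
	"encoding/csv"
	"log"
	"os"

	"github.com/gocolly/colly/v2"
)

func main() {
	c := colly.NewCollector()

	// Emit one CSV row per product so the output drops straight into a
	// relational table. Column names are assumptions for this sketch.
	w := csv.NewWriter(os.Stdout)
	defer w.Flush()
	w.Write([]string{"product", "price"})

	// Hypothetical selectors; a trained scraper would infer these from
	// the page structure rather than hard-coding them.
	c.OnHTML("div.product", func(e *colly.HTMLElement) {
		name := e.ChildText("h2.name")
		price := e.ChildText("span.price")
		w.Write([]string{name, price})
	})

	c.OnError(func(r *colly.Response, err error) {
		log.Printf("request to %s failed: %v", r.Request.URL, err)
	})

	// Placeholder URL standing in for a supplier's pricing page.
	if err := c.Visit("https://example.com/pricing"); err != nil {
		log.Fatal(err)
	}
}
```

CSV on stdout keeps the sketch self-contained; piping it into something like Postgres's COPY is one way the cleaned rows could land in a relational database.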