
Fun blog, but I was left hanging as the author never found the issue with the OS-level driver (and instead <spoiler>used a workaround library</spoiler>).

It's easy to just blame the FTDI driver, but FTDI chips are used all over the place in the Arduino community on macOS, so I would have assumed it was working.


I'd trust the FTDI Windows drivers, because almost all industrial embedded work is on Windows: factories, electrical engineering, and the like. I'd mostly trust the Linux drivers, because the open-source community uses them. I wouldn't trust the Mac drivers at all.


How would these be serviced? I would be surprised if conventional tire machines could mount/unmount these from wheels.


It's both parts, there's no separate wheel.


But we're moving the emissions in a "more solvable" direction. By consolidating the emissions from millions of cars to hundreds (thousands?) of plants, we're making it a MUCH easier problem to address.


Maybe I'm missing something, but no matter how efficient a heat pump is, doesn't it have a limit on how much heat it can pump as ∆T increases?

Even here in western NY we occasionally have to deal with -10 F (-23 C) weather for a week at a time, and that's a pretty big temperature delta if you want a comfortable 68 F (20 C).
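
For intuition on that limit: the ideal (Carnot) COP is T_hot / (T_hot - T_cold) in Kelvin, so it shrinks as the delta grows but stays above 1. A quick sketch; the "real units get ~40% of ideal" factor is a rough assumption (and optimistic at small deltas), not a spec:

    def carnot_cop(t_hot_c, t_cold_c):
        # Ideal coefficient of performance; temperatures in Celsius.
        t_hot = t_hot_c + 273.15
        return t_hot / (t_hot - (t_cold_c + 273.15))

    for outdoor_c in (10, 0, -10, -23):
        cop = carnot_cop(20, outdoor_c)
        print(f"{outdoor_c:>4} C outside: ideal COP {cop:.1f}, realistic ~{0.4 * cop:.1f}")
    # At -23 C outside the ideal COP is ~6.8, so roughly 2.7 in practice:
    # still better than resistive heat, but far below mild-weather numbers.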

Combine that with the fact that gas is still quite a bit cheaper per unit of energy than electricity (in my area) and a heat pump just doesn't make any sense.

The ability to have a single unit heat and cool is definitely nice, but without a geothermal-like ambient temperature to work with, I just don't see this working out, even with all the insulation in the world.


Elsewhere in this thread fastbeef says "They're very common in Sweden and can easily warm an entire house even at -20 C [-4 F]." So it can work, though at -10 F you would probably need supplemental heat.

Of course, in Sweden residential natural gas prices are almost 4 times what they are in the US, so it's a very different economic decision. Shows what a carbon tax can do.

https://www.statista.com/statistics/702735/household-natural...


Keep in mind that 1 kW worth of gas gives out 0.9 kW, while 1 kW of electricity entered into a heat pump gives out AT LEAST 3 kW of heat.
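
A back-of-the-envelope way to compare is cost per kWh of delivered heat. The prices below are made-up placeholders, not figures from the thread:

    # Cost per kWh of delivered heat; both prices are illustrative assumptions.
    gas_price = 0.05      # currency units per kWh of gas (assumed)
    elec_price = 0.20     # per kWh of electricity (assumed)

    cost_gas = gas_price / 0.9    # ~90% of the gas energy becomes heat
    cost_hp = elec_price / 3.0    # heat pump COP of at least 3

    print(f"gas boiler: {cost_gas:.3f}/kWh, heat pump: {cost_hp:.3f}/kWh")
    # The heat pump wins whenever electricity costs less than ~3.3x
    # the gas price (COP / 0.9), which is why gas prices matter so much.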

Also, no one uses gas for heating in Sweden. Very old houses downtown use it for cooking only.


Aren't you supposed (at least when you have a single house) to couple it with geothermal energy, so that your source temperature is always the yearly median of around 9 C?


Geothermal or some other relatively stable source. If you live near a lake you can lay a collector loop on the bottom and use that as a source; it's also possible to bury a loop in a field next to your house below the frost line (2.5 m or deeper in the north of Sweden) and use that. In built-up areas with many houses using deep wells for heat extraction, there have been cases of those wells freezing up; this can also happen if the system is undersized.


But that comes at the expense of complicated software. Sure, once you've gone through the effort of procuring all that hardware, configuring it to run Kubernetes, reconfiguring your software to run in containers... you're good to go.

Mainframes allow you to easily scale your software vertically (but at the expense of complicated hardware). That might seem silly in today's world, but a lot of the software running on Z was written decades ago and the risk of rewriting it is extremely high.


How... does it work then?

Was the programming model for mainframes made to be scalable from the beginning?


Coding for a mainframe in the early days was almost regimented in many companies: lots of dotting i's and crossing t's, code sign-off, lots of QA and checks.

One example would be a bit of COBOL I wrote and its associated test program. It all worked and ticked all the boxes, except there was a single spelling mistake in one comment line; actually, I'd missed the full stop off the last sentence. So I had to make that change and retest it all from scratch. How many today would just leave it, or, if they did fix it, not fully retest every instance and associated program and document all tests and results against expected results? Yes, we still have industries that would do that, the airline industry or finance systems, but back then that was the norm for all code.

But then, today's source code control no longer consists of a group of auditors in a separate department physically comparing outputs with previous versions over a lightbox. So there has been much progress, though equally lots of automation in which a single flaw can cascade.

Now, as for scalability: mainframes rode what we now call Moore's law even harder than we do today, and IIRC the likes of IBM effectively promised their customers a solid path for tomorrow, today. And remember, what we now call long-term support (LTS) was and is the mainframe staple, and for them long-term means a lifetime. You pay for that level of service, but you always have. This enables lots of legacy, well-tested and proven code to carry on being used today.

Equally, a relevant anecdote: I was working in the '80s for a large company which had, IIRC, a DPS88T (Honeywell Bull), which had a maximum of 4 CPU clusters available. The company was hitting max usage, and I identified a program I could optimise to recover nearly a whole CPU's worth of processing. I also mentioned that despite only officially having 4 CPU clusters available, it was possible to attach a 5th. My mistake, as the next thing they did was go to the supplier demanding a 5th CPU. For them, money solved the problem, and whilst a cheaper option was available, it would have entailed change; when it comes to changing code that just works as needed, companies are always reluctant when they can carry on just adding more power. And the mainframe suppliers make their money supplying more power, year after year, all in a heavily fault-tolerant hardware design that goes beyond just having ECC memory and calling it a day. Mainframe hardware is well worth looking into: its redundancy, legacy handling, and robustness are at a level many don't appreciate.


The vast majority of mainframe software, especially older software, was batch-processing oriented: often millions of discrete financial transactions, which can easily be scaled horizontally across multiple CPUs or processing engines.
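
In modern terms, the shape of that workload is something like this toy sketch (not actual mainframe code; just independent records fanned out to workers):

    from multiprocessing import Pool

    def process(txn):
        # Stand-in for per-transaction work (validate, post, log, ...).
        return txn * 2

    if __name__ == "__main__":
        transactions = range(1_000_000)   # independent records, no shared state
        with Pool(processes=8) as pool:
            results = pool.map(process, transactions, chunksize=10_000)
        print(len(results))               # 1000000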


Yeah, same here. If Lyme disease is incurable, seems like we should be studying the western fence lizard.


Lyme disease is curable, at least in early stages. Encephalitis is the nasty one you're probably thinking about.

Source: had Lyme twice, caught early, fully cured after a run of antibiotics.


I was wondering the same thing myself. I'm guessing they tested it and found that it hit every coordinate at least once and said "good enough". Meaning, I don't think it's guaranteed, they just picked one that did.

The interesting thing is that you're guaranteed never to repeat a number within a cycle, because the first repeat would restart the cycle. So you just need to find a sequence that's long enough.
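
That check is cheap to brute-force. A sketch with a toy full-period LCG (parameters chosen via the Hull-Dobell theorem for illustration; they're not the article's constants):

    # Does x -> (a*x + c) mod M visit every cell of a 256x256 grid
    # exactly once before the cycle restarts?
    M = 256 * 256
    a, c = 5, 1            # full period mod 2^16: c odd, (a - 1) divisible by 4

    seen = [False] * M
    x = 0
    for _ in range(M):
        seen[x] = True
        x = (a * x + c) % M

    print(all(seen))       # True: one full cycle covers every coordinate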


Egress pricing for Google and AWS (sans Lightsail) continues to be one of the biggest price differences between them and smaller hosts such as Linode and DigitalOcean.

I think Google missed an opportunity here. They should have cut the prices more significantly for standard tier (sacrificing performance) to make this more competitive.

Right now Linode's and DO's smallest $5 plan offers 1TB of transfer, which would cost $85.00 on Google's new standard plan.


The extremely high egress prices for any cloud don't seem to have hurt their popularity much so far. So I suspect they all don't want to give up their cash cow.

Compared to a VPS or renting a dedicated server, the egress costs can be enormous if you come even close to using the traffic allowance you get with many VPS or dedicated hosts.

Just as a comparison, a dedicated server with Hetzner for ~50-70 EUR per month includes 30 TB of traffic, which would be at least 2,400 EUR on the Google Cloud.
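
The multiplier is easy to sanity-check; the per-GB rate below is an assumption in the ballpark of published list prices, not a quote:

    # Dedicated-server flat rate vs. cloud per-GB egress billing.
    included_tb = 30
    server_eur = 60            # EUR/month, midpoint of the 50-70 range
    cloud_rate = 0.08          # EUR per GB egress (assumed list-price ballpark)

    cloud_eur = included_tb * 1000 * cloud_rate
    print(f"server: {server_eur} EUR/mo vs cloud egress alone: {cloud_eur:.0f} EUR/mo")
    # server: 60 EUR/mo vs cloud egress alone: 2400 EUR/mo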


Just put the edges somewhere else, use GCP for compute and push the actual delivery to the edge.


Huh? VPS and dedicated hosts usually charge below a penny per gigabyte over your traffic limit, and if you open a support ticket they will usually work with you towards something in the sub-$4 per TB range.


Are major clouds actually facing any real competition because of bandwidth pricing?

On another note, Softlayer used to have generous multiple TB allocations for their dedicated servers but they took them away. It's likely that anyone needing high bandwidth, especially for static content, will have other edge networks and CDNs in place for that use case.


Of course not. They are not competing with Digital Ocean / Linode at all.


If that's true, why did Amazon come out with Lightsail?

https://amazonlightsail.com


That's just a card to be in the game, it's hardly their mainline product.


My Lightsail gives me 3TB. It's a huge card for me. The only reason I use them at all for hosting.


Sorry to be blunt, but they don't really care that much about you. Sure, they want to take your money, they don't want to leave it on the table.

But most of their money comes from the Netflixes and the Adobes of the world, not from mom-and-pop stores :p


:-) Well Bezos is a genius, perhaps he will figure out that mom and pop businesses are the backbone of the world economy.


Based on how the AWS services look, I somehow doubt it. CloudFormation and RDS Oracle, to name a few things, seem quite enterprise-y :)


> biggest price differences between them and smaller hosts such as Linode and DigitalOcean.

I suspect that the traffic quota on the smaller hosts' plans is sold below their own cost, on the (correct) assumption that the vast majority of customers will not use anywhere close to the limit.

The traffic quota also scales much slower than the price as you move to bigger, more expensive plans.

I guess that it's fine if you reach the limit on a few servers, but if you rented 1000 x $5 droplets from Linode/DigitalOcean, and maxed out all of their traffic quota, you would get kicked out. Has someone tried to use these hosts just for cheap file servers?


> if you rented 1000 x $5 droplets from Linode/DigitalOcean, and maxed out all of their traffic quota, you would get kicked out.

It might happen but it doesn't refute the parent's point:

"Right now Linode's and DO's smallest $5 plan offers 1TB of transfer, which would cost $85.00 on Google's new standard plan."


I don't know about DO Droplets but you can get a reasonably priced dedicated server* at OVH and knock yourself out piping 4-8 TB to the world for ~$40/mo.

* It's hard to push a lot of useful data with a tiny VPS.


OVH isn't an American company, that probably matters to a lot of use cases.


OVH has a datacenter in Canada which is very well connected to North America, they are also working on their USA datacenter.


A US location will be welcome. Might have to start using a real server for a change.


He's trying to make his audio easier to listen to by using various effects provided by Ardour. The most significant of those he listed is the compressor, which is great for spoken podcasts. It will take any sound that is over a certain threshold and start to reduce its volume (i.e., compress it), so if he's talking really quietly and then yells something really loud, the listener isn't subjected to as large a volume range in their ears.
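
For intuition, here's a minimal hard-knee compressor on raw sample amplitudes (a sketch only; real compressors add attack/release smoothing, make-up gain, etc.):

    import numpy as np

    def compress(samples, threshold=0.5, ratio=4.0):
        # Amplitude above `threshold` is scaled down by `ratio`;
        # anything below the threshold passes through untouched.
        out = samples.copy()
        over = np.abs(samples) > threshold
        excess = np.abs(samples[over]) - threshold
        out[over] = np.sign(samples[over]) * (threshold + excess / ratio)
        return out

    quiet_word, sudden_yell = 0.2, 1.0
    print(compress(np.array([quiet_word, sudden_yell])))  # [0.2, 0.625]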

The limiter just limits the maximum volume, and I'm not sure why he's using an expander.


After comp/limit, a little expander can make the audio less dull sounding.


I'm not sure I understand the point of using an expander after a compressor, as one reduces the dynamic range but the other increases it again. What about a little EQ cut/boost instead?


There are all sorts of reasons to use an expander after compression/limiting:

https://www.gearslutz.com/board/so-much-gear-so-little-time/...


Sorry, I still don't get it. That thread explains some uses for expanders, just not after compressors or limiters in the same effects chain. Surely if you need an expander after a compressor then you've got your settings wrong on the compressor, or maybe you need a noise gate before it.


Not sure I follow this logic...

Do you expect everyone that rides in the back of your car to have read your owner's manual? I realize in this case they also happen to be the owners, but I don't think that's the point here.

Hiding a manual release BEHIND a speaker isn't exactly obvious. I have no opinion on whether they deserve $1 million, but I do think Tesla needs to take function over form in this case and make the manual override a lot easier to find.


> Do you expect everyone that rides in the back of your car to have read your owner's manual?

That's a rather misleading statement, given that you acknowledge that the people in the back of the car are actually the owners.

And yes, I do expect the owners of the car to be at least aware of what to do in an emergency. The fact that they didn't bother to educate themselves about emergency procedures doesn't mean they get to blame the car manufacturer.


You're ignoring the fact that anyone could have been in that back seat, and having non-owners or children there is statistically more likely. Blaming the driver for not fully understanding the safety features of the car is just a red herring which makes you come off as confused at best, disingenuous at worst. It's entirely plausible for the driver to become incapacitated in a crash while the passengers in the back are usefully conscious and still able to move.

There's a reason why all emergency actuation mechanisms are painted bright red and made super visible in all forms of transport, from cars to airplanes to buildings. That reason is you cannot reasonably expect all passengers to be intimate with the safety measures, nor can you even expect well-versed passengers to be thinking clearly in an emergency situation.

Tesla can certainly expect to get their asses handed to them by various federal regulators across the globe for hiding the 'get me out of the burning death trap' lever behind a goddamned speaker grill.


AFAICT (it's not in the digital version of the manual) the speaker-grill door release cable isn't even documented as an "emergency procedure", it's in the section of the manual about locks and keyless entry and stuff like that.

