Hacker News | willio58's comments

Just cancelled. I’ll give my money to a company with leaders that have a modicum of backbone.

Thanks for the reminder. Doing the same now.

The little respect I had left for Sam is now wiped. Makes me sick.

Growing up I always thought AI would be this beautiful tool, this thing that opens the gates to a new society where work becomes optional in a way. But I failed to think about human greed.

I remember following OpenAI way back when it was a non-profit explaining how uncontrolled AI could be highly detrimental. Now Sam has not only taken that non-profit and made it for-profit; it seems he's making the most evil decisions he can for a buck.

Cancel your subscription, tell your friends to do the same, and vote to heavily tax these companies and their leaders.


> i'd rather it feel awkward and human than efficient and cold.

So deeply ironic considering he claims he’s doing this because AI can do the jobs these people did.

These billionaires will learn one day that removing humans doesn’t stop at the bottom layer. It’ll continue to happen at layers above until their own position starts to be put into question. They’ll realize those people who are removed due to AI taking their jobs still need to put food on their tables. It’ll take time, but ultimately there are only so many ways that can go. The answer will be extreme taxation on the billionaires.


I do genuinely wonder about the endgame here. Why would the objective winners of the _current_ system, our billionaire class, want to disrupt that system? Do they really believe that they will necessarily be winners in the new world too, are they that arrogant?

They already understand the current system and status quo is going away. They understand, on some level, the consequences of the technocapitalist system they've built and perpetuated.

They're making their own accommodations, rather than trying to change the course: https://www.theguardian.com/news/2022/sep/04/super-rich-prep...


I think assuming human agency (building technocapitalism, correcting course) or the possibility to escape capitalism and its consequences (in bunkers), underestimates what capitalism is.

Astro isn't solving the same surface as Next.js. Astro is great for static sites with some dynamic behavior. The same could be said about Next.js depending on how you write your code, but Next.js can also be used for highly dynamic websites. Using Astro for a highly dynamic website is like jamming a square peg into a round hole.

We use Astro for our internal dev documentation/design system and it’s awesome for that.


But presumably, if you could do this for Next.js, it would be at least as easy for Astro?

Astro is not a static site builder.

Someone at my job just said yesterday “I can’t see a reason to hire any additional devs in the future with AI being the way it is now”

I disagreed vehemently, but it's really gotten me thinking about just how screwed some orgs are, especially those with poor technical leadership. I can try to convince people otherwise, but ultimately they're not going to believe it until they see productivity drop from relying solely on AI.

The other odd part is that once we do truly reach a point where dev jobs are made irrelevant, I believe that level of intelligence will make essentially all white-collar jobs irrelevant, all the way up to the CEO. So it's kind of a race toward none of us having jobs. It's just funny that the higher-ups at some companies are so delusional as to think what they do won't also be replaced.


Based on that list, it seems to boil down to 2 things:

- cost (no longer a problem)

- too much code needed, bloating the data pipelines. Does anyone have any actual evidence of this being the case? Yes, code would be needed, but why is that innately a bad thing? "Bloated data pipelines" feels like another hand-wave; I think if you do it right it's fine, as Waymo has proven.

Really curious whether any Tesla engineers feel this is still the best way forward, or if it's just a matter of having to listen to the big guy, Musk.

I've always felt that relying on vision only would be a detriment, because even humans with good vision get hurt in circumstances with temporary vision hindrances: heavy snow, heavy rain, heavy fog, even just cresting a hill at a certain time of day when the sun flashes you.


Just for the record, though, Musk isn't blindly anti-LIDAR. He has said (and I think this is an objective fact) that all existing roads and driving are designed around vision, which is what all humans use, so vision should technically be sufficient. And SpaceX uses LIDAR for its docking systems.

I would argue that yes, we do use vision but we get that "lidar depth" from our stereo vision. And that used to be why I thought cameras weren't enough.

But then look at all the work with gaussian splatting (where you can take multiple 2d samples and build a 3d world out of it). So you could probably get 80% there with just that.

The ethos of many Musk companies (you'll hear this from many engineers that work there) is simplify, simplify, simplify. If something isn't needed, take it out. Question everything that might be needed.

To me, LIDAR is just one of those things in that general pattern of "if it isn't absolutely needed, take it out" – and the fact that FSD works so well without it proves that it isn't required. It's probably a nice to have, but maybe not required.


Humans aren't using only fixed vision for driving. This is such a tiresome thing to see repeated in every discussion about self driving.

You're listening to the road and the sounds of cars around you. You're feeling vibration from the road. You're feeling feedback through the steering wheel. You're using a combination of monocular and binocular depth perception, and your eyes are not fixed-focal-length "cameras". You're moving your head to change the perspective you see the road from. Your inner ear is telling you about your acceleration and orientation.


And also, even with the suite of sensors that humans have, their vision perception is frequently inadequate and leads to crashes. If vision was good enough, "SMIDSY" wouldn't be such an infamous acronym in vehicle injury cases.

For those of us not aware of Australian cycling jargon, "SMIDSY" means "Sorry, Mate, I Didn't See You".

The issue is clearly attention, not vision, when it comes to humans. If we could actually process 100% of the visual information in our field of view, accidents would probably go down a shit load.

Humans have both issues. There are many human failures which are distinctly a vision issue and not attention related, e.g. misestimation of depth/speed, obscured or obstructed vision, optical focus issues, insufficient contrast or exposure, etc.

But how many of those crashes not caused by inattention could have been avoided with less idiocy and more defensive driving? I mean, yes, we can't see as well in fog, but that's why you should slow down.

Again, I'm still not saying that humans don't make bad decisions. I'm saying that, unequivocally, they also get into accidents while paying attention and being careful, as a result of misinterpretation or failure of their senses. These accidents are also common, for example:

* someone parking carefully, misjudges depth perception, bumps an object

* person driving at night, their eyes failed to perceive a poorly lit feature of the road/markings/obstacles

* person driving and suddenly blinded by bright object (the sun, bright lights at night)

* person pulling out in traffic who misinterprets their depth perception and therefore misjudges the speed of approaching traffic

* people can only focus their eyes at one distance at a time, and it takes time to refocus at a different distance. It is neither unsafe nor unexpected for humans to check their instruments while driving, but it can take the human eye hundreds of milliseconds to focus under normal circumstances. If you look down, focus, look back up, and refocus as quickly as you can at highway speed, you will have travelled quite a long distance.

These types of failures can happen not as a result of poor decision-making, but of poor perception.
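A quick back-of-the-envelope for that last bullet. The 0.3 s per focus shift and the 70 mph speed are illustrative assumptions, not measured values:

```python
# Rough distance covered while the eyes refocus during an instrument check.
# Assumes ~0.3 s per focus shift ("hundreds of milliseconds") and 70 mph.
MPH_TO_MPS = 0.44704

speed_mps = 70 * MPH_TO_MPS     # ~31.3 m/s at highway speed
shifts = 2                      # look down & focus, look back up & refocus
refocus_s = 0.3                 # assumed time per focus shift

blind_distance = speed_mps * shifts * refocus_s
print(round(blind_distance, 1))  # ~18.8 m travelled during the two refocuses
```

Even under these rough assumptions, a single glance at the instruments costs tens of meters of effectively unfocused travel.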


> But how many of those crashes not caused by inattention could have been avoided with less idiocy and more defensive driving?

Most of them.

We can lump together "inattention" and "idiocy" for the purposes of this conversation, because both could be massively alleviated by a good self-driving car without lidar.

If you look at the parallel comments, you'll see that the majority of accidents and fatalities indeed come from these two factors combined (two-thirds coming from distraction, speeding, and impaired driving), and that kube-system is having to resort to ridiculous fallacies to try to dispute the empirical data that is available.


I didn’t claim vision was responsible for the majority of accidents anywhere in this thread.

> There are many human failures which are distinctly a vision issue and not attention related

Which are a tiny minority. The largest causes of crashes in the US are attention/cognition problems, not vision problems. Most traffic systems in western countries (probably in others, too, but I don't have personal experience), and in particular the US, are designed to limit visibility problems and do so very effectively.


That sounds more like a personal opinion, because I don’t think that data is particularly easy to objectively collect.

Regardless, it is irrelevant to the point. Whatever the number may be, lapses in human visual perception are responsible for some crashes.


> That sounds more like a personal opinion, because I don’t think that data is particularly easy to objectively collect.

That sounds like a personal opinion?

Maybe do the bare minimum of research before spouting yours.

DOT says that only 5% of crashes are caused by low visibility during weather events.[1]

In 2023, the combined causes of alcohol, speeding, and distracted driving (all cognitive/attention issues) caused 67% of highway deaths. [2]

I was able to find these in 30 seconds. You did zero research to confirm whether your belief was correct before asserting that my claim was opinion. That's pathetic.

> Regardless it is irrelevant to the point.

And your point is therefore irrelevant to the discussion at hand, because the person you were replying to did not claim that vision had no safety impact, but that it had little safety impact:

> the issue is clearly attention not vision when it comes to humans. if we could actually process 100% of the visual information in our field of view, then accidents would probably go down a shit load.

...and, as we can clearly see, the issue is attention (and some bad decision making), not vision.

[1] https://ops.fhwa.dot.gov/weather/roadimpact.htm

[2] https://www.adirondackdailyenterprise.com/opinion/columns/sa...


None of those things you cited is “human vision or perception”

“Low visibility during weather events” is a small subset of this.

A ridiculously common example of the limitations of human vision is when people hit curbs parallel parking because of the inherent limitations of relying on depth perception to estimate the exact location of the vehicle when it cannot otherwise be directly seen. Go look in a parking lot and see how common curbed wheels are.

Also, NHTSA estimates that they don’t have any information for 60% of incidents, because they go unreported.


> None of those things you cited is “human vision or perception”

> “Low visibility during weather events” is a small subset of this.

You're still refusing to do the most basic research or even read my comment:

> In 2023, the combined causes of alcohol, speeding, and distracted driving (all cognitive/attention issues) caused 67% of highway deaths.

Do the math. 100% - 67% is 33%. Even literally not opening Google, you can already deduce that the maximum fraction of fatalities caused by vision is 33%.

Given that you aren't interested in reading or researching and instead just want to push your opinion as fact, I think your claims can be safely discarded.

Edit: Because you're editing your comment because you realize that you're making an absolute fool of yourself:

> A ridiculously common example of the limitations of human vision is when people hit curbs parallel parking

A completely irrelevant distraction - this causes virtually zero accidents and even fewer fatalities, and you know it.

> Also, NHTSA estimates that they don’t have any information for 60% of incidents, because they go unreported.

Aha, so now you actually did research, and found that all of the available data supports my claims, so you're attempting to undermine it. Nice try. "Estimates" vs. actual numbers isn't really a contest.

Come back when you have actual data - until then, you're just continuing to undermine your own point with your ridiculous fallacies and misdirections - because if you actually had a defensible claim, you'd be able to instantly pull out supporting evidence.


Dude, you're arguing with a straw man.

I'm not arguing about fatalities or relative percentages of contributing factors, nor am I arguing that alcohol/speeding/attention are not all also issues. They are, you're right.

The only thing I argued is that "lapses in human visual perception are responsible for some crashes", which is a fact.


Attention is perhaps the limiting factor, but being able to look in two directions at once would help, and would help greatly if we had more attention capacity. E.g. anytime you change lanes you have to alternate between looking behind, beside, and in front and that greatly reduces reaction time should something unexpected happen in the direction you aren't currently looking...

In theory, a computer should be able to do the same. It could do sensor fusion with even more sense modalities than we have. It could have an array of cameras and potentially out-do our stereo vision, or perhaps even use some lightfield magic to (virtually) analyze the same scene with multiple optical paths.

However, there is also a lot of interaction between our perceptual system and cognition. Just for depth perception, we're doing a lot of temporal analysis. We track moving objects and infer distance from assumptions about scale and object permanence. We don't just repeatedly make depth maps from 2D imagery.

The brute-force approach is something like training visual language models (VLMs). E.g. you could train on lots of movies and be able to predict "what happens next" in the imaging world.

But, compared to LLMs, there is a bigger gap between the model and the application domain with VLMs. It may seem like LLMs are being applied to lots of domains, but most are just tiny variations on the same task of "writing what comes next", which is exactly what they were trained on. Unfortunately, driving is not "painting what comes next" in the same way as all these LLM writing hacks. There is still a big gap between that predictive layer, planning, and executing. Our giant corpus of movies does not really provide the ready-made training data to go after those bigger problems.


Putting your point another way, in order to replicate an average human driver’s competence you would need to make several strong advancements in the state of the art in computer vision _and_ digital optics.

In India (among other places), honking is essential to reducing crashes.

We often greatly underestimate and undervalue the role of our ears relative to vision. As my film-director friend says, 80% of the impact in a movie is in the sound.


The day a Waymo can functionally navigate the streets of Mumbai is the day we will really have achieved L5.

I'm positive that Teslas have gyroscopes and accelerometers in them. Our eyes actually have a fairly small focal length range due to the fixed nature of our cornea and only being able to change focal length by flexing the crystalline lens.

Beyond 20 meters, motion-based depth perception is more accurate than stereoscopic vision. What is lidar helping to solve here?

Waymo claims its system, which uses a combination of LIDAR & vision, resolves objects up to 500 meters away

https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...

This company claims their LIDAR works conservatively at 250m, and up to 750m depending on reflectivity

https://www.cepton.com/driving-lidar/reading-lidar-specs-par...


Most of what you said has nothing to do with lidar vs. cameras.

What I said compares "vision-only systems" (what Musk has claimed will be enough for FSD) with sensor-fusion systems (what everybody else having success in this space uses).

Mentioning gaussian splatting for why we don't need lidar depth is a great example of Musk-esque technobabble; surface level seemingly correct, but nonsense to any practitioner. Because one of the biggest problems of all SfM techniques is that the results are scale ambiguous, so they do not in fact recover that crucial real-world depth measurement you get from lidar.

Now you might say "use a depth model to estimate metric depth" and I think if you spend 5 minutes thinking about why a magic math box that pretends to recover real depth from a single 2D image is a very very sketchy proposition when you need it to be correct for emergency braking versus some TikTok bokeh filter you will see that also doesn't get you far.


This is not really true if you have multiple cameras with a known baseline, or well-known motion characteristics like you get from an accelerometer plus wheel speed.
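The scale ambiguity being argued about here fits in a few lines. The intrinsics, points, and baseline below are made-up numbers (a sketch, not any real pipeline): scaling the whole scene and the camera translation by the same factor leaves every pixel unchanged, so image measurements alone cannot recover metric depth.

```python
import numpy as np

# Pinhole projection with R = I for simplicity; K is an arbitrary intrinsic matrix.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points, t):
    """Project 3D points (N, 3) seen from a camera translated by t."""
    cam = points + t                     # world -> camera frame (R = I)
    px = (K @ cam.T).T                   # homogeneous pixel coordinates
    return px[:, :2] / px[:, 2:3]        # perspective divide

pts = np.array([[0.0, 0.0, 4.0], [1.0, -0.5, 6.0], [-2.0, 1.0, 8.0]])
baseline = np.array([0.5, 0.0, 0.0])     # second camera 0.5 m to the right

s = 3.0  # arbitrary scale factor
# Scaling both the scene and the baseline by s produces identical images:
assert np.allclose(project(pts, baseline), project(s * pts, s * baseline))
```

Conversely, if the baseline length is known (as in a calibrated stereo rig) or the motion is metrically measured, `s` is pinned down, which is the point being made here.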

> So that should technically be sufficient

Sufficient to build something close to human performance. But self driving cars will be held to a much higher standard by society. A standard only achievable by having sensors like LiDAR.


If a self-driving car had the exact vision of humans, it would still be better, because it has better reaction times. Never mind the fact that humans can't actually process all the visual information in our field of view, because we don't have the broad attention to do that. It's very obvious that you can get superhuman performance with just cameras.

Whether that's worth completely throwing away LiDAR is a different question, but your argument is just obviously false.


This reminds me of the time I was distantly following a Waymo car at speed on 101 in Mountain View during rush hour. The Waymo brake lights came on first followed a second or two later by the rest of the traffic.

Better reaction times only matter if the decisions are the same / better in every case. Clearly we are not there on that aspect of it yet.

Deciding to crash faster, or to "tell the human to take over" really fast, is NOT better.


Even if they weren’t going to be held to a higher standard for widespread acceptance, tens of thousands of people a year in the us die due to humans driving badly. Why would we not try to do better than that?

Because that's an acceptable loss and better costs more!

Teslas have at least 3 forward facing cameras giving them plenty of depth vision data.

They also have several cameras all around providing constant 360° vision.


Sufficient if all else were equal. But the human brain and artificial neural networks are clearly not equal. This is setting aside the whole question of whether we hope to equal human performance or exceed it.

That doesn't matter. It's not like we use 100% of our brain capacity for driving.

In fact, that's why radio/music/podcasts thrive. Because we're bored when we drive. We have conversations, etc. We daydream.

As long as the skills relevant to actually driving are on parity with humans, the rest doesn't matter.

In fact, in a recent podcast, Musk mused that you actually may have a limit of how smart you want a vehicle model to be, because what if IT starts to get bored? What will it do? I found that to be an interesting (and amusing) thought exercise.


To do gaussian splatting anywhere near real time, you need good depth data to initialize the gaussian positions. This can of course come from monocular depth, but then you are back to monocular depth vs. lidar.

LIDAR also struggles in heavy rain, snow, fog, and dust; check how Waymo handles such conditions.

It's not only failing; it's causing false positives.


Why is this getting downvoted? It's good faith and probably more accurate than not.

> and the fact that FSD works so well without it proves that it isn't required

The reports that Tesla submits on Austin Robotaxis include several of them hitting fixed objects. This is the same behavior that has been reported on for prior versions of their software of Teslas not seeing objects, including for the incident for which they had a $250M verdict against them reaffirmed this past week. That this is occurring in an extensively mapped environment and with a safety driver on board leads me to the opposite conclusion that you have reached.


If Waymo has proven their model works, why is the silly automaker doing several orders of magnitude more autonomous miles?

They aren't. Tesla has logged some 800k total miles with their robotaxi vehicles, including miles with safety drivers. Waymo has logged 200M driverless miles. That's 0.4% of the mileage, with the most generous possible framing.

My understanding is that there's more data processing required with cameras because you need to estimate distance from stereoscopic vision. And as it happens, the required chips for that have shot up in price because of the AI boom.
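For reference, the stereo distance estimate alluded to here reduces, for a rectified camera pair, to depth = f·B/d. The focal length and baseline below are illustrative, not from any real rig:

```python
# Rectified-stereo depth formula: depth = f * B / d, where f is the focal
# length in pixels, B the camera baseline, and d the measured disparity.
# The numbers are hypothetical, chosen only to illustrate the relationship.
focal_px = 1000.0   # focal length in pixels
baseline_m = 0.3    # 30 cm between the two cameras

def depth_from_disparity(disparity_px):
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(30.0))  # 30 px of disparity -> 10.0 m away
print(depth_from_disparity(3.0))   # only 3 px -> 100.0 m away
```

The heavy processing comes from finding `d` per pixel (stereo matching), and note how distant objects produce tiny disparities, so small matching errors translate into large depth errors at range.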

But I think costs were just part of the reason why Elon decided against Lidar. Apparently, they interfere with each other once the market saturates and you have many such cars on the same streets at the same time. Haven't heard yet how the Lidar proponents are planning to address that.


How does Waymo handle it now? There are many videos of Waymo depots with dozens of cars not running into each other.


Lidar critics like to pretend that anti-collision is not a well-studied branch of Computer Science and telecoms. Wifi, Ethernet and cellphones all work well simultaneously, despite participants all sharing the same physical medium.
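One classic mechanism from that literature is Ethernet's binary exponential backoff, sketched below. This is only an illustration of the shared-medium anti-collision idea; whether lidar vendors use anything like it (vs., say, per-unit pulse coding) is a separate question:

```python
import random

def backoff_slots(collisions, max_exp=10):
    """Ethernet-style binary exponential backoff: after n collisions,
    wait a random number of slot times in [0, 2^min(n, max_exp) - 1]."""
    return random.randint(0, 2 ** min(collisions, max_exp) - 1)

random.seed(42)
# The contention window doubles after each collision, quickly
# de-synchronizing transmitters that collided with each other:
windows = [2 ** min(n, 10) for n in range(1, 6)]
print(windows)  # [2, 4, 8, 16, 32]
print([backoff_slots(n) for n in range(1, 6)])
```

Randomizing retry times is what lets many uncoordinated transmitters share one medium without repeatedly stepping on each other.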

I'm not a Lidar critic. I'm really just curious how they're addressing it, or plan to.

And there's no subscription, right?


iCloud subscription.


The other difference between Steve and Tim is Steve would have never been caught dead giving a gold gift to a sitting president. It comes off as desperate and evil, two things Steve would have hated associating with Apple.


Elon cult members still to this day will tell me that because humans only use vision to drive, all a Tesla needs is simple cameras. Meanwhile, I've been driven by both Waymo and Tesla FSD, and Waymo is by far my pick for safety and comfort. I actually trusted the Waymo I was in, while in the Tesla I rode in we had 2 _very_ scary incidents at high speed in a 1-hour drive.


> humans only use vision to drive

I love this argument because it is so obviously wrong: how could any self aware person seriously argue that hearing, touch, and the inner ear aren't involved in their driving?

As an adult I can actually afford a reliable car, so I will concede that smell is less relevant than it used to be, at least for me personally :)


> hearing, touch, and the inner ear aren't involved

Not to mention possibly the most complex structure in the known universe, the human brain: 86 billion neurons, 100 trillion connections.


Involved? Yes. Necessary? Pretty sure no.

If it makes you happy, you can read "only vision" as "no lidar or radar." Cars already have microphones and IMUs.


1. In the US you can get a driver's license if you're deaf, so as a society we think you can drive without hearing.

2. Since this is in the context of Tesla: Tesla cars do have microphones, and FSD does use them for responding to sirens, etc.


(1) is true, but actually driving is definitely harder without hearing or with diminished hearing. Several US states, including CA, prohibit inhibiting your hearing while driving, e.g., by wearing a headset, earbuds, or earplugs.


The human inner ear is worse than the $3 IMU in your average smartphone in literally every way, and that IMU also has a magnetometer in it.

Beating human sensors hasn't been hard for over a decade now. The problem is that sensors are worthless on their own; self-driving lives and dies by the AI. All the sensors need to be is "good enough".


Human hearing is excellent: good directional perception and sensitivity. Eyesight is the weakest sense: poor color sensitivity, low light sensitivity, a blind spot. The eye's natural design flaws are compensated for by nystagmus and the brain filling in the blanks.


> The problem is that sensors are worthless

Well, in TFA the far more successful manufacturer of self-driving cars is saying you're wrong. I think they're in a much better position to know than you :)


I loved season 1. Season 2 I thought was great, but to me they opened much more new story than they resolved. I worry season 3 will be more of the same, especially if they're saying season 4 is basically a certainty. I think Severance could have been 2-3 seasons.


I was ok with it opening a new story at the end of season 2 because it’s a fascinating dilemma and I’m happy they wanted to explore a deeper question about the morality of having two souls in one body and how that affects their humanity.

However, it should end next season. This idea it’s going to be a long term project is going to ruin everything and now I am sad.


I think they've been making a solid pace for the story including answering more questions than I expected, but then I can't help but compare the pacing favorably to both the problematic Dollhouse and the under-appreciated later seasons of HBO's Westworld.


I think Dollhouse would have been better had it been an HBO series rather than on Fox... Likely 90% of the issues were studio interference.

And totally agreed on Westworld... they just went off the rails at some point.


Knowing what we know of Joss Whedon today, I think at least 70% of the problems with Dollhouse were Whedon's own issues. It came out in an interview that the morally dubious character Topher was close to a self-insert for Whedon, and that tracks and also explains a lot about the show's problems.

I maintain that Seasons 3 and 4 of Westworld were quite good, but not enough people watched them because they got lost or otherwise fell off in Season 2. But you sort of have to understand Season 2 to get a lot of what 3 and 4 did, to the show's peril. (Massive spoilers: I refer to Season 2 as the Futurama Robot Church season, as I think the Season 2 arc was largely about how Dolores was the Robot Devil, like in the Futurama Robot Church stories if they were not played for laughs. A lot of what Dolores does in S2 and S3 maybe doesn't make half as much sense without that context. S2 trying to be an S1-like puzzle box made the meaning of that a lot less clear than it should have been. Not to mention how many people didn't want Dolores to take that dark of a turn, despite it being deeply telegraphed in Season 1 and, per flashbacks, having already happened once before, hence the whole weird Wyatt thing.)

