Hacker News

First reaction: It's official. I am very afraid.

Second reaction: When's the OK Go video released?

Third reaction: They paid Boston Dynamics engineers to do this?



Until the recent sale of Spot robots, YouTube videos were Boston Dynamics' main product.


Of course they did. The DoD puts enormous resources into publicity stunts. They sponsor movies left and right, do tons of advertising and propaganda, and provide early-stage commercialization funding for consumer products to their contractors. They know BD robots look like murderous war machines, so of course they want to spend some money to make the public like them.


I kind of wish there was a reason to be afraid, but I don’t think AI is anywhere near that.


You don’t need true AI to make this thing deadly or oppressive.

You don’t need an AI that ponders the meaning of life for you.

You just need a smart Decision and Control System (DCS).

Pair it with additional technology for facial and gait recognition, obstacle avoidance, and path planning, and you have all the hallmarks of a rudimentary hunter-killer bot.
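Of the capabilities listed, path planning is the most commoditized: a plain breadth-first search over an occupancy grid already produces shortest obstacle-avoiding routes. A toy sketch (everything here is invented for illustration, not anything Boston Dynamics ships):

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid[r][c] == 1 means blocked. Returns the shortest path as a
    list of (row, col) cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # parent pointers for path recovery
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent pointers back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
# → [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Real platforms use costmaps and kinodynamic planners rather than grid BFS, but the point stands: none of this requires anything resembling general AI.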

What Boston Dynamics cracked here is the physical control mechanism that allows these robots to live in our world.

Next, they need to build the brains that allow these machines to move independently and autonomously.



We don't need to have full AI to have "death bots".


Do you have a source for that? I’m genuinely interested what kind of AI can act 100% autonomously on the battlefield.


Why would these things need to be fully autonomous to be scary? The fact that there might be a human operator in the loop would be cold comfort with one of these things hauling ass towards me...


That makes no sense. I would be more worried about it failing the way a self-driving car does. A malfunctioning robot could set your house on fire through an electrical short even if its software is 100% trustworthy and harmless to humans.

Once I am scared of humans wanting to kill me, why would I care if a robot with human-like intelligence wants to kill me? It's the same thing at that point.


It pushes the ultimate decision higher up the chain of command, for one.

If a general orders a platoon of soldiers to commit a war crime, the soldiers still ultimately have to decide whether to pull the trigger. There's conscience and self-preservation at play. The robots don't have second thoughts.


Wouldn't you be able to stop it with a foil blanket in that case? A lack of autonomy would mean any kind of Faraday cage would make it a paperweight.


> Do you have a source for that? I’m genuinely interested what kind of AI can act 100% autonomously on the battlefield.

The various military groups around the world have been using semi-autonomy in warfare for decades; you don't need 100% generalizable autonomy.

Plenty of missiles in the past worked just fine with little more than silhouette matching and 1-bit cameras.


We tend to think that the first widely deployed military drones are going to be like the movies: big bipedal humanoid robots kicking down doors, armed with machine guns and flamethrowers.

In reality, it'll probably be swarms of tiny quadcopter drones, each with a gun and only a few bullets (and/or some sort of self-detonation ability), shooting out the glass windows and flooding into a house.

There's obviously some AI involved in this scenario (navigation, mapping, facial recognition), but you definitely don't need to balance a huge machine on two feet to wage an effective drone attack.


Why limit to the battlefield? NYPD deployed a BD robot in the field 2 months ago: https://nypost.com/2020/10/29/nypd-deploys-robot-dog-after-b...

So far they have only been used to look rather than touch, but I predict that will change quickly, within the next year or two.


The Dallas police used a police robot to blow up a suspect with C4.


I think he meant something along the lines of: exploding drones are deadly and don't require that level of AI. And I guess if one of these bots ran towards you, you wouldn't be reluctant to dance with it while it inconspicuously arms its self-destruct mechanism. The future is here! STILL NO FLYING CARS.



Why would anyone be afraid of these? You should be afraid of stealth drones three miles up with air-to-ground missiles. They are unstoppable without electronic countermeasures.


It's a bit harder to control how much collateral damage occurs with A2G missiles. Put a rifle on a humanoid robot or a small explosive on a drone like in Slaughterbots and that's much more terrifying.


It isn't, really, because you can at least hope to avoid the slaughterbot. You can't escape that which you can't see. A high-altitude UAV is the nightmare, whether it shoots missiles or bullets. We're just used to them (but not being a target of them), so it doesn't feel so scary as imaginary close combat with a robot.

(To this day I remember this one thing about US drone strikes in Pakistan I read in an article many years ago: the people living there became literally afraid of the blue sky - as the drone strikes tended to happen during good weather. That's terror.)


That's a problem. People are not terrified of AGM-carrying stealth drones, but they should be, because they are already here and in use around the world. Nothing can stop them: no tank, aircraft, rocket, or armor. Probably the only realistic defense is an Iron Dome-type system to shoot the missile down. The US deployed a Hellfire AGM with retractable blades instead of explosives; they are that confident they can land it on a single person or vehicle.

A bipedal killer robot is just not a realistic weapon. The drone attack on Maduro in 2018 shows that that technology is not very mature yet.



