Hacker News

If you watch this and your imagination doesn't run in the direction of evil robot army.. well, your imagination doesn't work the same way as mine does.

I see a drone fleet of 100 bulky trucks galloping along contested highways towards a city boiling over with the violence of several simultaneous wars. They're protected by speed and dozens of armed UAVs. 96 make it through. Acceptable losses. The first truck to unload its four legged robotic soldiers loses 30 units to the welcome party, a quarter. By the time the tenth one is unloading, they're not losing any. The last robo-dog is unloaded just 18 minutes after the first truck arrives. By that time, there are over 10,000 dogs in squad-packs seeking targets from a database of 7,000 known enemy combatants and seeking control of strategically important sites.

There's something uncanny and creepy about how robots move once they've been riddled with armor-piercing rounds. A leg stops working or a sensor gets damaged, and it's impossible not to imagine that it's an injured animal in excruciating pain. The single-minded resolve though, that's all machine. If you're shooting off legs, you need to shoot off all four before these things will stop.

Any chance Boston Dynamics will adopt a don't be evil policy?



That is a compelling visual. Another one might be that the truck drops off its dogs carrying first aid supplies, and they disperse throughout the city looking for victims of some disaster. Human instinct is to go with threat first, friend later; I expect that served us well during our cave years.

Humans have had a standoff attack capability for a while, and yet the number of deaths due to "war" has been going down for a while. (I'll admit that I'm not sure what to call the ISIL thing.) So where does that leave us? In a place where we manage outbreaks with the lowest possible loss of life to non-combatants. And if you believe the narrative that the police are shooting people because they 'fear for officer safety', then you can certainly make the argument that an officer who is on site in a teleoperated way has no personal risk and should therefore not shoot anyone or anything except to save the life or lives of innocent civilians. But we all know that is impractical in a non-war situation.


Still, the problem with having occupation-style technology and drones is that it further spreads the power of those with capital at the expense of those without. The current class setup prevents truly depraved systemic evil (beyond a point) from occurring, because officers and soldiers are human and will blatantly refuse to do things that are Evil (perpetually).

There is no such guarantee with drones--in fact, to refuse orders is a design flaw.

I can't help but imagine some descendant of these things, in the mode of one of Bradbury's mechanical dogs, attacks a throng of protestors.


I do not disagree; my hope is that, as with other technologies, especially dual-use ones, we will be able to keep them biased toward the positive uses.

Robotics, like genetic engineering, nano-technology, and data mining can do good things and bad things. It is important to focus on the good things, remain cognizant of the bad things, and maximize the value. I would be sad to see robotic quadrupeds "banned for civilian use" because they can be weaponized, because they can also get to people in need in dangerous and hard to reach places.

In a more current events sort of way, I am in favor of severe punishments for people who weaponize drones and fly them in public places, or people who manufacture and sell such drones, but I am not in favor of banning personal ownership of drones. I am willing to risk that someone will show up where I am with such a device, and the risks to my personal safety if they employ it in a deadly way, in exchange for the freedom to own and experiment with drones in a responsible way.


> Robotics, like genetic engineering, nano-technology, and data mining can do good things and bad things.

How's that going with arguably the most advanced and widely available of those (data mining)?

To expand: I'd offer that capital tends towards amorality. Why? Because amorality is more profitable.


Both parent comment and Chuck's reply appear to contain an important semantic error, using amoral to mean immoral. Corporations tend towards amorality, which is a superset of both morality and immorality, because this is more profitable. I believe Chuck is hoping we can keep morality more profitable, on the whole, than immorality. I am less sanguine.


Fwiw, I specifically used amoral.

I don't consider most things capital does to be immoral. In an optimally run profit-generating business, the course that produces the most profit will be pursued. That's basically a tautology, and cares nothing about morality at all!


I don't buy that argument. Criminal capital (if such a term exists, which would be defined as capital that is amoral in origin) tends toward profit over morality. To the extent that technology is exploitable with a small amount of capital[1], the likelihood of it being so exploited increases.

We've seen drug lords building submarines to transport drugs up the coast, but the economic cost of really effective submarines is still too high relative to the profit such devices provide. And perhaps more importantly, there's the economic risk of 'losing' a submarine: its lifetime value has to exceed its cost to build and deploy.

Things like DNA printers, for example, worry me much more than robot dogs.

[1] The same cost reductions that make fielding a web server $5/month enable large scale data mining for very little investment in cash.


> we will be able to keep them biased toward the positive uses.

Iron Man II (2010). How you intend to use them is irrelevant if all it takes is one guy on the dev team (or a compromised janitor) dropping in some code to take control of the entire army.


I'm less worried about a rogue staffer than I am about the systemic use by large corporations to further their interests (it wasn't that long ago before corporate-employed thugs were breaking the heads of strikers.)


In my mind this sentiment can be summed up as: "The more humane we make war, the more of it we'll have". And the simple reason is ... why not? When _we_ no longer suffer casualties why should we care about sending a pack of mechanical dogs into a village somewhere and just letting them run wild without a care in the world for the casualties we cause to the innocent? And before saying that that won't happen just look at current drone warfare. There isn't a person in this country (US) who truly cares about all the collateral damage of our drones. It seems foolish to think these will be any different.


Except that this statement, "There isn't a person in this country (US) who truly cares about all the collateral damage of our drones," is demonstrably false.

I presume you were employing hyperbole but it is important to note the difference between waging war in Afghanistan using drones versus waging war in Iraq using cluster bombs. The latter case has a much higher non-combatant casualty rate. Further, the more accurate such munitions become the easier it is for non-combatants to avoid areas where they are likely to be killed or injured.

I have yet to meet anyone in the US military who considers warfare "humane", and while I would not be surprised if such people existed, it has not been my experience that they are in positions of authority.


> There isn't a person in this country (US) who truly cares about all the collateral damage of our drones.

I care. As do tens to hundreds of millions of other US citizens. Tell me, what were we supposed to do about it again (that we didn't do)?


March on Washington. Picket the manufacturers. Organise a boycott of related companies. Chain yourself to the gates of drone control bases. Something, anything.

There is no mass movement against drones. Look at protests against nuclear weapons during the cold war. Large, well-organised groups led a groundswell of opposition. The Greenham Common peace camp was continually occupied for eighteen years until the nuclear weapons based there were removed; The Faslane peace camp is still occupied today. In 1982, a million people gathered in New York to oppose nuclear weapons.

You care, as do millions of Americans, but only to the extent that you don't actually have to do anything.


> Still, the problem with having occupation-style technology and drones is that it further spreads the power of those with capital at the expense of those without.

Are we certain that this is a bad thing? I personally like that it requires the resources of a nation-state to field a major military. The whole 'democratization of war' that happened during the twentieth century is arguably the reason why we see constant, brutal civil war in certain parts of the world. If we could undo the invention of the Kalashnikov the world would be so much better.


The point stands that as you diminish the number of people needed to make aggressive decisions those scenarios can become more unstable / risky (as in risk of unethical orders). It's the same case as with nuclear weapons. But it's not an insurmountable problem: simply delegating the decision to more and/or better prepared individuals is the way to go.

The dangers and benefits in fact are strikingly analogous to nuclear weapons technology: the power will eventually overwhelm conventional warfare, and major nations would probably only fight proxy wars with robots, stopping immediately when the robotic resources are depleted to prevent assured destruction. It could create the sort of calm that nuclear weapons bring. And precautions must be assigned the same way (multi-person authorization, strict safety, etc).


What Kalashnikov designed was powered by inevitability. If he hadn't done it someone else would. Just like with the robots, someone somewhere WILL make them. I do fear the ability that these give leaders to wage war with no cost other than money.


If these had existed in the late 1700s, there's no way the US colonists could have declared independence.


I submit that the purpose and function of any/all forms of technology is to enhance the abilities of those with it and not those without. This is not a feature of any given flavor of technology, but rather of technology in general.

In any kind of competitive context, this will implicitly elevate those with technology over those without. Generally. Absent an Arthur C. Clarke "Superiority" sort of situation.


Radio collars for endangered species?


Used to extend the power of people tracking them, usually to the detriment of people who want to do something with that land.


The terror in the first scenario is not a product of the robots themselves, but the existence of a democratically unrestrained power in full command of the machines.

I hate to sound so single minded, but this is just one more reason to oppose gerrymandering, closed primaries, restricted access to the polls, private campaign finance, and the revolving door between government regulators and their charges in private industry. Every one of these acts as a wedge between government power and accountability to the people. Individually, they're bad. When they all start working together, the rot really starts to accelerate.

By the time the RoboCops are announcing that you've got 20 seconds to comply, it's because we've lost any way to dislodge their operators.


...and as the last human alive scrambles up a steep hill, with 100 robo-dogs close behind, he stops and drops to his knees as he finds himself overlooking a steep cliff, with nowhere left to run.

"Why?", he looks up and asks the first robot, as it skids to a stop next to him.

One by one, a dozen other robo-dogs surround the human.

An armored plate drops open on the chest of the first robo-dog, revealing a small LCD screen.

And the above YouTube video begins to play.

The video stops after showing the callous human kicking Spot, the proto-ancestor of all the robo-dogs present on this hill.

The kicking scene begins to repeat on the screen, like an old gif.

And silently (for the robo-dogs were never given a true voice box), the leader lifts one robotic leg and with a single powerful pneumatic kick to his ribs, sends the last human alive flailing over the edge of the cliff.


"Ng Security Industries Semi-Autonomous Guard Unit #A-367 lives in a pleasant black-and-white Metaverse where porterhouse steaks grow on trees, dangling at head level from low branches, and blood-drenched Frisbees fly through the crisp, cool air for no reason at all, until you catch them.

He has a little yard all to himself. It has a fence around it. He knows he can't jump over the fence. He's never actually tried to jump it, because he knows he can't. He doesn't go into the yard unless he has to. It's hot out there.

He has an important job: Protect the yard. Sometimes people come in and out of the yard. Most of the time, they are good people, and he doesn't bother them. He doesn't know why they are good people. He just knows it. Sometimes they are bad people, and he has to do bad things to them to make them go away. This is fitting and proper.

Out in the world beyond his yard, there are other yards with other doggies just like him. These aren't nasty dogs. They are all his friends.

The closest neighbor doggie is far away, farther than he can see. But he can hear this doggie bark sometimes, when a bad person approaches his yard. He can hear other neighbor doggies, too, a whole pack of them stretching off into the distance, in all directions. He belongs to a big pack of nice doggies.

He and the other nice doggies bark whenever a stranger comes into their yard, or even near it. The stranger doesn't hear him, but all the other doggies in the pack do. If they live nearby, they get excited. They wake up and get ready to do bad things to that stranger if he should try to come into their yard.

When a neighbor doggie barks at a stranger, pictures and sounds and smells come into his mind along with the bark. He suddenly knows what that stranger looks like. What he smells like. How he sounds. Then, if that stranger should come anywhere near his yard, he will recognize him. He will help spread the bark along to other nice doggies so that the entire pack can all be prepared to fight the stranger."

--Neal Stephenson, Snow Crash


This is the exact snippet I recalled when I watched the Spot video :)


seriously, please stop kicking the robots


It was a balance test / demo, not just a "kick".


tell that to the robots


Look at it another way. People involved in deadly combat have a kill or be killed mentality. Extreme caution for personal risk means you have to shoot first and ask questions later in a war zone. Not so for robots. Send them in to take prisoners. If they are destroyed, build another. Charge them until the enemy runs out of bullets. Send them close enough to use non-lethal rounds or wound legs and move on. And forget about looting and rape. Forget about collateral damage caused by bombing city blocks from the sky and hoping most inhabitants are bad. Imagine if we could shut down ISIS with $500M worth of material and no lives lost.


Agreed. The real problem is not the existence of this technology (that was inevitable) but the misuse of it, as others pointed out. In a lot of countries (esp USA), civilians are effectively not allowed to put any limitations or guidelines on how the military or intelligence organizations use new technology. That is the actually alarming story.

If a piece of technology allows the military to capture instead of kill a supposed terrorist, will they do so? What is legally binding them to?


I think this is a really important discussion, and there are a lot of different aspects, making a simple solution hard. If we always attempt to capture instead of kill terrorists, and then we have to detain them, does that cause more or fewer terrorists in the future? Or do we capture, put on trial, and possibly execute (and if so, what happens if we pass legislation to ban executions)? Or, as we generally do now, do we just kill known terrorists?

After you step beyond what's humane for the person, the question of what's humane for society looms (and at that point, we have to consider whose society we are talking about). It's obviously more humane for the individual if we capture instead of kill outright, but if that's noticeably worse for society through negative externalities (I'm not trying to assume, just posing the question), then is it better or worse?


> (esp USA), civilians are effectively not allowed to put any limitations or guidelines on how the military or intelligence organizations use new technology

You do, of course, realize that the President is a civilian? And as Commander-in-Chief he absolutely does have the ability to put limitations or guidelines on how those groups use their equipment. And let's not forget congress, which is comprised entirely of civilians and could financially neuter military and intelligence programs if desired.

There are plenty of countries where there is no civilian oversight for the military, but the US is not one of them.

> If a piece of technology allows the military to capture instead of kill a supposed terrorist, will they do so? What is legally binding them to?

That's a good question. I think the preference would always be to capture if there is a possibility of gaining intelligence from the captive. If you were interested in bargaining with adversaries, it would be wise to capture at least some of them (prisoner exchanges, that sort of thing). However, if the intelligence gain would be minimal and you already have a stable of bargaining chips, it might be worth more to have a guarantee that this particular terrorist won't be in the fight any longer.

As to what's legally binding, the 1907 Hague Convention says that it is forbidden "to declare that no quarter will be given." This would suggest that surrenders from any lawful combatant would have to be accepted. To take the current example, I do not think ISIL fighters would be considered "lawful combatants" primarily because they do not respect the international laws.


I was referring to these kinda moves: http://benswann.com/us-moves-to-classify-afghan-military-ove...

As a creator of this kind of technology, handing it over to agencies that are constantly battling all levels of oversight seems sketchy to me. I would understand why some people would want to ban this technology outright as an overreaction. Instead, maybe we should try and enforce controlled civilian oversight.

But yes, I am no expert on the legalities of oversight or the treatment of captured terrorists.


I see what you mean.

I guess nobody's really an expert on that. There's some precedent going back to the golden age of piracy, but then again most of those policies would have predated many instances of international law. I wouldn't be surprised if there's a dozen JAGs working out exactly what the US's policies should be right this instant (if they haven't already).


Unless they are leaders or have some needed knowledge, the lives of 'the bad guys' will be valued at less than the cost of the robot. Ask Joe Taxpayer how much tax money s/he is willing to spend to capture a low-level terrorist alive instead of killing them in combat, especially when we consider that any money spent on this could have been used to improve things at home.

$500M spent to take them down alive or $100M spent to take them down dead.


The other possibility is irresponsible overuse, as with drones. If they aren't too expensive you could just strap a bomb on one of these and run it towards any possible threat. I agree with your assessment and look forward to the onset of more nonlethal warfare. I'm just saying the opposite effect is also possible.


A single AGM-114 Hellfire missile costs over $100k, on top of the costs of arming and launching an aircraft for sortie. I have to think a Spot-derived, four-legged land-missile that can be launched from a truck would be price-competitive.


We already have irresponsible overuse of drones, and so far all we have are ones that can fly around and drop bombs on people. And the government goes to great lengths to make sure people are totally emotionally uninvolved in the decision and execution. Once these end up in the military it'll only be a short while before they also end up in our police forces. You think the cops are bad now? Wait until they've got a small army of drones to do their bidding. It's entirely possible that that never comes to pass, but the US' current direction doesn't instil a ton of confidence.


"You think the cops are bad now?"

I don't really. And the same benefits would apply. When cops stormed the room of Amadou Diallo and thought they saw a gun, their only option was shoot to kill. Send in a robot, and now you don't care.


Nor does the US' past direction. Since the 1950s it has toppled two democracies and thwarted another. The (final) overthrow of its own democracy may become possible through such technology.


I would imagine tons of R&D was spent on creating a robot like this and I'm worried how easily it can be reverse engineered by well funded enemies.


> database of 7,000 known enemy combatants

The database is a more worrying prospect than the robots, in a lot of ways. Who gets to designate "known" and "enemy"? What if it's the Tinder eigenfaces program? Automatable ethnic cleansing?

Then what of the occupation? Robots aren't great for political legitimacy. Do you have them return fire on the kids throwing rocks at them? Unlike human guards I suppose they can sustain IED losses forever. But they're an A1 prime target for hackers...


> What if it's the Tinder eigenfaces program?

On the other hand, dispatching BigDogs and Spots to go on dates as proxies would be an interesting prospect.


Since WWI / WWII, the fate of wars has been decided by the size and complexity of industrial output, instead of the strength and bravery of men.

This just goes further in the same direction.


Besides industrial output, politics is still (and will always be) one of the largest determining factors.

Politics can take a vastly superior force and render their outcomes impotent on a battlefield.


What I'm missing in your vision is dozens of short-range microdrones with needles covered in neurotoxin taking off from each of the dogs' backs.


Those would be chemical weapons. Those can't be used, because they are inhumane.


I find Churchill's words to be fascinating when read closely: https://en.wikipedia.org/wiki/Alleged_British_use_of_chemica...

The wording is callous and shocking to my modern ears but - if you start with the premise that you are already in a violent conflict (I'm not addressing the issue of colonialism here - just the debate about the comparative ethics of bullets and explosives vs chemicals) then this sentence stands out: "It is sheer affectation to lacerate a man with the poisonous fragment of a bursting shell and to boggle at making his eyes water by means of lachrymatory gas."

We have some ridiculous Hollywood-fed beliefs that bullets and explosions create clean deaths and painless injuries.


Not really. Chemical weapons are applied unselectively and have great potential for use against civilians. I don't think many people would mind guided poison arrows.

But I appreciate your irony. :-)


Yeah, there should be an equal amount of parallel effort put into developing technologies that would allow us to destroy these machines and permanently disable their subsystems. Today, politicians still have to convince people to commit industrialized murder - tomorrow, the robots will obey orders without delay.


Google owns Boston Dynamics


Not that I'm on the dystopian future train, but "Don't be Evil" isn't a Google policy.



I don't know if this is such a huge game changer for war. I mean, we had wheels for solid ground, and machines that can fly through the air, and boats for water, and hovercrafts for ambiguous terrain. These cute dogs are ultimately not that revolutionary.

I am much, much more worried about evil applications of multirotor drones.

Anyway, all of the above is an exercise in futility without decent AI. The robot dogs can run - so what? As long as they're pretty dumb they can't do much damage.

Now, an AI running a dog chassis, or flying a quadcopter, that's an "interesting" thought.


Wheels are notoriously poor at navigating rough terrain; that's why we build roads. We could build a vehicle with wheels or treads so huge it effectively flattens anything in its way, and we do -- but for some missions (support for a squad of soldiers on foot, say) it makes sense and is much cheaper to build a small legged vehicle.


Is it possible that these could make roads obsolete? Now that is an interesting concept.


Probably not because wheels are VERY efficient at transport across smooth level terrain, and a wheeled vehicle is less easy to destabilize than a legged vehicle. So when deciding between wheels vs. legs, it will be the usual sort of engineering tradeoff: which one offers the greatest advantage for your application?


Just reading Bing West's A Million Steps, so that obviously is affecting me, but the first thing I thought was that this would be great for saving soldiers from getting blown up by IEDs.


    > Any chance Boston Dynamics will adopt a don't be evil policy?
Actually, Boston Dynamics is a wholly owned subsidiary of Google Inc.


It is creepy as hell to watch them walk, and that makes me wonder what groups like ISIL would think about being hunted by a small pack of them.

Right now dropping bombs on houses kills some terrorists, but the collateral damage has a side-effect of bringing more to their cause. Would attack robots scare the shit out of them and make them stop, or just backfire and end up as another recruiting tool for fundamentalists?


I imagine swarms of quadcopters would be better for that task. Less chance of being blocked by physical barriers and lower cost = more units. They're so cheap you wouldn't even need rounds, just pack on some explosives and self destruct when near.


... for 3 minutes, until the batteries give out and they all need to be charged for 5-6 hours.

(But actually, what is the battery life on these things? You still have the problem of heavier battery = more drain on battery. I wonder where the break-even point is?)
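Out of curiosity, a toy endurance model suggests where the break-even intuition goes (every number below is made up, not an actual Spot spec): if stored energy scales with battery mass but walking power scales with total mass, endurance never peaks and falls off, it just saturates.

```python
# Toy model: stored energy grows with battery mass, but so does
# the power needed to haul the heavier robot around.
# All constants are hypothetical, purely illustrative.

E_DENSITY = 150.0   # Wh per kg of battery (assumed)
COST = 20.0         # W drawn per kg of total mass while walking (assumed)
CHASSIS = 70.0      # kg, robot without battery (assumed)

def endurance_hours(battery_kg):
    """Runtime = stored energy (Wh) / power draw (W)."""
    energy = E_DENSITY * battery_kg            # Wh on board
    power = COST * (CHASSIS + battery_kg)      # W to move the whole robot
    return energy / power

for kg in (5, 20, 50, 200, 1000):
    print(f"{kg:5d} kg battery -> {endurance_hours(kg):5.2f} h")
```

Under these assumptions endurance climbs toward a ceiling of E_DENSITY / COST (7.5 h here) but never reaches it, so there's no sharp break-even point, only diminishing returns per added kilogram.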


They're hybrids. Powered by batteries recharged by small ICE/turbine engines.


Whoa, that's dark.

I was just watching this https://www.youtube.com/watch?v=VXJZVZFRFJc for the nth time. It's hilarious.



I'm generally not a fan of Newsweek, but now I definitely want to read that book. Thanks for posting this.


yeah, i've always kind of scoffed at the people saying boston dynamics robots are scary. They're an amazing technical accomplishment and they make me really excited and happy to see them.

but as soon as i saw that shot of two robot dogs side-by-side, moving in almost synchronicity, something turned in the pit of my stomach. There's something very dystopian sci-fi about that image.


Someone sets off a large EMP and all the dogs suddenly freeze, and fall over.


Or the dogs are encased in rectifying antennas and they say "yumm! recharge time!" The whole point of weaponizing something is to make it resistant to the more obvious attack strategies :-).


...and the next version gets an extra $5 of shielding per dog* and so is made completely immune to even an H-bomb EMP.

* at a $5000 markup, but who's counting?


> Any chance Boston Dynamics will adopt a don't be evil policy?

They are owned by Google now. But let's not pretend Google's don't be evil policy makes a big difference these days.


They won't stalk you, if you check the opt-out field.


Though of course it will be an opt-in field that's pre-checked and surrounded by opaque verbiage.


'don't be evil to me, or else'



