Tuesday, January 29, 2008

Technology in Wartime recap part deux

And, we're back...sorry about the intermission. I've been thinking quite a bit lately about loyalty and affinity as they relate to traveler retention and churn. I might get to that later tonight...

So, as I mentioned, Schneier gave the keynote. Good stuff. The following presentations discussed the role of robots in wartime. Of particular interest was Dr. Ron Arkin's paper, "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture".

Arkin and his students have written extensively on reasoning about and constraining autonomous robots under the rules of engagement and the laws of war. Heck, these are issues I'd never really thought about, but holy cow. I mean, artificial intelligence is tough enough on its own, but I hadn't given much thought to the many facets of incorporating ethics into an autonomous warfighting machine. I've always thought about maximizing survivability for the warfighter, particularly the dismounted one. Further, I'd always thought of lethality as simply a side effect of taking personnel out of the OODA loop, not realizing just how much thought about causality goes into the design of these machines.

I was fortunate to sit down next to Ron at lunch, where I peppered him with questions as a follow-on to his talk. I also pulled down a bunch of papers written by his students for later reading. One thing I brought up that he wasn't aware of was the whole iRobot/Robotic FX saga. Arkin mentioned iRobot in his talk, both in the context of being known for its combat 'bots and for its Roomba vacuums. I've followed the Robotic FX case pretty closely, as the company was based not far from where I grew up. If you have a half-hour, check out the Arkin paper I reference above. If you have another half-hour, check out the excellent Xconomy reports on the whole iRobot/Robotic FX case. Put 'em together and it's a spy novel. If you're into corporate espionage, it's a don't-miss...there's a book in there somewhere.

The next set of talks focused on surveillance. The one that resonated most with me was a talk on Tor, the anonymizing network. I've always thought of Tor as something the bad guys used to cover their tracks, but I hadn't given any thought to its entirely humane use--enabling human rights activists to communicate freely while minimizing their risk of compromise. I've always respected the heck out of the community for its work on Tor, but now that respect is cast in a whole new light. I'll be at Shmoocon in a couple of weeks, where I'm sure Simple Nomad and others will dig further into anonymity. Anonymously, of course...

After lunch, Kevin Poulsen moderated a panel on cyberterrorism; Dr. Herb Lin of the National Academy of Sciences and Dr. Neil Rowe both presented compelling papers, and I'd encourage you to check out their research. Dr. Lin is leading an about-to-wrap-up effort on the legal and policy consequences of offensive information warfare. Dr. Rowe is looking at the design of ethical cyberweapons (which I would've thought was an oxymoron, until his talk). All of this was seriously way cool stuff. Net-net...is China attacking us? Maybe. Is it legal? Can we attack back? Rowe made a convincing case that cyberweapons are analogous to biological attacks on crops, which are prohibited by the 1972 Biological Weapons Convention...meaning computer network attack might have been outlawed 35 years ago. Hmmm...

The next talks were on smart soldiers in battle, primarily referencing work that the U.S. Army has done on Land Warrior and Future Combat Systems. I'm still fairly up to date on these programs, so the talks were interesting but not new to me...although I was dying to ask about battery technology. Wrong guys, I know...call DARPA or DISA.

The day's final session was on the dilemma of socially responsible computer science. Honestly, this panel could've been a day-long topic in and of itself. While each speaker had very interesting things to say, I was particularly interested in the remarks by Dr. Terry Winograd of Stanford. As a founder of Computer Professionals for Social Responsibility, he's been thinking about a lot of these issues for more than a quarter century--longer than some of Saturday's attendees have been alive. Winograd led an extremely interesting discussion about the ethics of taking government money for research, the role of universities in performing classified research, and much more.

I really wish the day could've ended with a debate, as Dr. Arkin (of Georgia Tech, which receives government money for classified research) had a passionate yet civilized give-and-take with Dr. Winograd during the final remarks. It could've turned into one helluva debate on the Aykroyd-vs-Curtin scale. Calm heads prevailed, but the event did end with everyone a little bit on edge, which was great: no matter where each attendee fell on the spectrum of computers and social responsibility, we all came away better educated, with lots of interesting things to think about.

I'm going back next year. Definitely.