
Robots in Milieu 0.

Interesting answer.

IRL some drone aircraft CARRYING the missile have the tech to decide when to fire. But that isn't currently how they are USED.

Good point. In a very real way, this gets to the heart of my question. I believe you are saying we presently have the technology to use A.I. to make battlefield decisions, but we choose not to. We require a human being to pull the trigger.

Would the Third Imperium do the same? Do they choose not to use warbots? The best they do is use souped-up drones? If you see a "warbot" on the battlefield, it's just a drone, with someone miles away in a bunker watching a monitor and controlling it with a joystick? It's not a robot, it's just a remote-controlled car with a gun?

That's certainly a fair answer.
 
1. Skynet represents an artificial intelligence that can decide for itself on policy.

2. A dead man's switch would activate machines that would attempt to complete their missions.

3. Essentially, it's a question of whether machines have the capability for themselves to evaluate and act on their options.

4. Otherwise, drones would be like most other sophont members of a military, trying to efficiently carry out their missions, possibly without fear of harm or non-existence.
 
Great answer!

3. Essentially, it's a question of whether machines have the capability for themselves to evaluate and act on their options.

Sounds great! So in your Traveller games, do machines have that capability at Tech Level-12? And, if so, does the Third Imperium choose to utilize it?

Or not? Does the Imperium suppress that aspect of technology instead, for some reason? For fear of a Robot Apocalypse, for example. (You're the one who brought up SkyNet.) :devil:
 


Probably if it represents a clear and present danger.

The issue is more that we don't know quite what to do with it, in the game, unless you emasculate it.

 
IRL some drone aircraft CARRYING the missile have the tech to decide when to fire. But that isn't currently how they are USED.

I'd like to see some citation of this to clarify what "decide when to fire" means.

The Phalanx CIWS can be an autonomous system, but it's a defensive one. You can put it into an automatic mode so that it can live up to its motto: "If it flies, it dies".

It's just not particularly good with that whole Friend or Foe thing. It's simply a reactive targeting system with logic to not just identify the inbound threat (e.g. a missile), but track it and also "decide" when it's no longer a threat.

That said, it's never used that way. There's a man in the middle with a finger on the FIRE button.
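To make "reactive targeting with a man in the middle" concrete, here's a toy sketch in Python. The thresholds, the `Track` fields, and the `weapons_free` flag are all invented for illustration; this is not how any real CIWS fire-control loop works:

```python
# Toy sketch of a reactive point-defense loop in the Phalanx style.
# All thresholds and track fields are invented; a real system is
# vastly more involved.
from dataclasses import dataclass

@dataclass
class Track:
    range_m: float        # current distance to the contact
    closing_mps: float    # closing speed; positive means inbound

def is_threat(t: Track) -> bool:
    # Purely reactive test: inbound and inside the engagement envelope.
    return t.closing_mps > 50.0 and t.range_m < 5000.0

def defense_loop(tracks: list[Track], weapons_free: bool) -> list[Track]:
    """Return the tracks the mount would engage this cycle.

    `weapons_free` models the man in the middle: while it is False,
    the system tracks and recommends but never fires.
    """
    engaged = []
    for t in tracks:
        if is_threat(t):
            if weapons_free:
                engaged.append(t)   # "If it flies, it dies."
            # else: hold fire, keep tracking, wait for the operator.
        # A receding or distant track is "no longer a threat" and is ignored.
    return engaged

print(defense_loop([Track(3000, 250), Track(8000, 300)], weapons_free=True))
```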

I am not aware of any combat systems that are even remotely autonomous. Nothing able to make any kind of "offensive" decision. There's a lot of electronics and software and logic for defensive systems.

Most AI is not used in planning, but in more reactive mechanisms such as station keeping, flocking, and the like. "Hey drone, get in formation and fly with fighter plane 1A."
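Station keeping of that sort is just feedback control. A minimal sketch, assuming a made-up proportional gain and 2D positions:

```python
# Minimal station-keeping sketch: a proportional controller that nudges a
# drone toward a fixed offset from a lead aircraft. The gain and positions
# are made up; real formation flight adds velocity matching, limits, etc.

def station_keep(drone_pos, lead_pos, offset, gain=0.5):
    """Return a velocity command steering the drone to lead_pos + offset."""
    target = tuple(l + o for l, o in zip(lead_pos, offset))
    error = tuple(t - d for t, d in zip(target, drone_pos))
    return tuple(gain * e for e in error)   # purely reactive: command proportional to error

# "Hey drone, get in formation and fly with fighter plane 1A":
# hold station 50 m behind and 20 m right of the lead.
cmd = station_keep(drone_pos=(0.0, 0.0), lead_pos=(100.0, 0.0), offset=(-50.0, 20.0))
print(cmd)   # a velocity command toward the slot; no "decision" anywhere
```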

I'm sure if you made a bot designed to identify people carrying a gun and shoot them, that would be "straightforward". But you always have the ED-209 scenario to worry about. With simple rules come simpler solutions, but the real world doesn't run on simple rules.
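For illustration, here's what "simple rules, simpler solutions" looks like in code. Every field and label is invented; the point is only that a one-rule policy cannot tell apart the cases that matter:

```python
# Toy illustration of the ED-209 problem: a one-rule engagement policy.
# The sensor sees "carrying a gun" and nothing else.

def should_engage(person: dict) -> bool:
    return person["carrying_gun"]        # the "simple rule"

people = [
    {"name": "insurgent",        "carrying_gun": True},
    {"name": "friendly soldier", "carrying_gun": True},   # oops
    {"name": "kid with toy gun", "carrying_gun": True},   # oops
    {"name": "bystander",        "carrying_gun": False},
]
for p in people:
    print(p["name"], "->", "ENGAGE" if should_engage(p) else "hold")
```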

Modern cruise missile routing is probably much easier today than it was back in the Gulf War, when it took quite a lot of time to get the routes done and programmed. Modern software can probably spit out a good first cut in a few seconds, which the mission planner can then tweak or reject outright. But I wouldn't call this autonomous either.
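As a sketch of "spit out a good first cut for the planner to tweak", a plain breadth-first search over a toy grid will do. The grid, the threat cells, and the coordinates are all invented:

```python
# Hedged sketch of a "first cut" route: breadth-first search over a toy
# grid, avoiding marked threat cells. The human planner reviews the result.
from collections import deque

def first_cut_route(grid, start, goal):
    """Return a start->goal path avoiding cells marked 1, or None."""
    rows, cols = len(grid), len(grid[0])
    prev, queue, seen = {}, deque([start]), {start}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = [cur]
            while cur != start:
                cur = prev[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None   # no route found; back to the human planner

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],    # 1 = defended area to route around
        [0, 0, 0, 0]]
print(first_cut_route(grid, (0, 0), (2, 3)))   # a candidate, not a decision
```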

The biggest issue with autonomous anything is finding someone willing to let the device go out into the real world and operate. A surveillance drone working a patrol pattern and keeping its camera on a moving car isn't something I'd call autonomous. Sophisticated pattern matching, sure. Autonomous? No. There's no real decision making here, and what decision making is done is mostly reactive.
 
A perfect example!

But you always have the ED-209 scenario to worry about.

May I ask... in your Traveller games, does the Third Imperium have the technology to create ED-209s, and would they use them on a large, industrialized scale? Would such a machine be an effective weapon at Tech Level-12, and not suffer from the "glitches" that so defined them in RoboCop?
 
You are correct, which is why I leave my phone at home a lot. I am not worried about missiles, but I dislike having anything track my movements. Nor do I know what use Apple makes of the tracking data, or how much of it they are selling to those who can pay for it.

Apple is far less interested in your location than your local cellphone provider.

The local provider keeps careful track of at least your last month or two, if only for billing and local service area improvements. As in, "Do we have adequate capacity in cell tower 123?"

If your cellular service provider isn't the tower owner, the tower owner has a stake as well.

Apple's data largely gets anonymized. Your local provider's may or may not be.
Apple's goal is to improve the function of the built-in apps and the hardware; distance-to-tower versus signal-strength data is highly useful, and remains useful anonymized, as is time-on versus time-in-app ___.
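A minimal sketch of why that data stays useful anonymized: strip the subscriber, keep the measurement. The records and field names here are invented for illustration:

```python
# Sketch of "distance to tower vs signal" staying useful anonymized:
# drop who, keep where/how-strong.
from collections import defaultdict

raw = [  # (subscriber_id, tower, distance_m, signal_dbm) -- invented records
    ("alice", "tower123", 400, -85),
    ("bob",   "tower123", 900, -101),
    ("alice", "tower124", 250, -78),
]

# Anonymize: discard the subscriber identity entirely.
anon = [(tower, dist, sig) for _, tower, dist, sig in raw]

# Still answers "do we have adequate capacity/coverage at cell tower 123?"
by_tower = defaultdict(list)
for tower, dist, sig in anon:
    by_tower[tower].append((dist, sig))
for tower, samples in by_tower.items():
    avg_sig = sum(s for _, s in samples) / len(samples)
    print(tower, "samples:", len(samples), "avg signal dBm:", round(avg_sig, 1))
```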

Meanwhile, the local provider is very interested in connecting you to local services...

Tracfone has been pretty good about it for me; Google tracks my location pretty thoroughly, but usually gets the business I was there for wrong. (There's a Dutch Bros Coffee next to HWY 34 & 53rd Street in Corvallis... that's right on the main route from Alsea and Philomath to downtown Corvallis... it often asks if I went to Dutch Bros when I was instead stopped at the stoplight.)

I like the utility of the cellular service; I know that Google is willing to give me play-store credit for answering questions about local businesses.

The more interesting situation is that Fr8star sold my data within days to a trucker supply catalogue... I'm not about to buy a rig, nor accessories for one - I just needed my container moved.
 
It is extremely hard to find soldiers who are willing to commit suicide, unless you are Imperial Japan.

History seems to show otherwise...

The Hashashin, the Jewish Zealots, Sparta, many radical religious groups.... Not to mention other suicide cultists past and present.

And let's not forget the leadership in the French, Russian, US, and UK civil wars/revolutions, for whom failure was a death sentence, and that the US revolution had manned suicide subs - had they succeeded in their drilling, they'd have been dragged under with the target.

And there are the individuals doing damned illegal things knowing their families are under threat by various gangs and/or cartels. Or state agencies (which for reasons of board policy shall remain nebulously referenced).

There are ways to make soldiers willing to commit suicide. We generally hope they aren't in use in the Western nations... and fear that other countries may be using them.

Plus, never underestimate the potency of disinformation and indoctrination to subvert learned ethical standards. It's been done for millennia, documented for at least two of them.
 
May I ask... in your Traveller games, does the Third Imperium have the technology to create ED-209s, and would they use them on a large, industrialized scale? Would such a machine be an effective weapon at Tech Level-12, and not suffer from the "glitches" that so defined them in RoboCop?

Do they....
  • Have the tech? Yes.
  • Use the tech? Yes.
  • Admit the use of the tech? Nope.
 
I run MTU with a top TL of 10, although I would tend to cheat with more TL-12ish robots.


The overriding principle for robots IMTU is that a human is responsible for what the robot does. Therefore a ship owner/captain who operates robots as crew is responsible if said bots go crazy or even make an error that costs credits and/or lives.

Given the liability situation, most choose a fully human crew, with robots as an extension/adjunct that is helpful in emergency situations. That way you still have a human expert giving instructions and hopefully avoiding error, while gaining capabilities.

For combat, same thing - if bots do a war crime, the humans commanding the bots are VERY responsible.


I would expect robot manufacturers to be similarly liability-averse, with all sorts of 'do you want me to shoot the bad guy Y/N', 'are you sure Y/N', 'authenticate bad guy target with multifactor descriptors' sorts of 'safeties'.
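A minimal sketch of what such a liability-driven interlock might look like; the prompts, steps, and function names are all my invention, not any real product's interface:

```python
# Hedged sketch of a liability-averse firing interlock: the bot refuses to
# act until a named human passes every confirmation step, so the audit
# trail always points at a person. All prompts and steps are invented.

def confirm(prompt: str) -> bool:
    return input(prompt + " [y/N] ").strip().lower() == "y"

def authorize_engagement(operator: str, descriptors: list[str]) -> bool:
    """Multi-step 'are you sure' gate; any refusal aborts."""
    if not confirm(f"{operator}: do you want me to engage the target?"):
        return False
    if not confirm("Are you sure?"):
        return False
    # "Multifactor descriptors": the operator must re-confirm each one.
    for d in descriptors:
        if not confirm(f"Confirm target descriptor: {d}?"):
            return False
    print(f"Engagement logged under the authority of {operator}.")
    return True

# Interactive usage (hypothetical names):
# authorize_engagement("Capt. Reyes", ["armed", "red jacket", "north door"])
```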

A lot of people might choose to avoid bots just to not get into arguments with them, whether of the product-safety kind or the 'do what I mean, not what I say' type.
 
Suicide troops have nuances; certain death is a tad different from likely to die.

It does depend on whether they've been brainwashed, and/or incentivized.

In Europe, we have the concept of the Forlorn Hope, where the survivors of a successful attack get rewarded. Contrast that with penal battalions, where compulsion is used, and it becomes more a question of certain punishment and possible death.

With drones, they'd have to be self-aware, and be inclined to self-preservation.
 
IRL some drone aircraft CARRYING the missile have the tech to decide when to fire. But that isn't currently how they are USED.

Actually, I want to clarify something here as well, before the pedants come swarming in.

There is a modern "smart" ballistics system for rifles; it uses a very fancy telescopic sight. (And note I may be off on some of the details here, but the fundamentals are sound.)

You eyeball the target through the scope, put a dot on it (internal to the scope), and then squeeze the trigger.

The scope then gives you another dot to move the reticle to. This is the firing solution. A trivial example, at a static long-range target, is to lift the rifle up a little to compensate for bullet drop. The second dot tells you how far to move it up.

When the rifle detects that you have reached the second dot, it fires.

The device takes into account ballistics, weather data, even target motion. It's capable of hitting a moving target. You as the shooter just have to keep up.

The operator provides the intent to fire; the rifle supplies the targeting info and, when matched, performs the actual firing.
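To make that division of labor concrete, a toy sketch; the ballistics stand-in, tolerances, and numbers are all invented:

```python
# Sketch of the tag-track-fire split described above. The operator supplies
# intent (tag + trigger held); the rifle merely releases the shot when the
# reticle crosses the computed solution.

def firing_solution(tagged_dot, drop_mils=2.0):
    # Stand-in for the real ballistics: hold over above the tagged point.
    x, y = tagged_dot
    return (x, y + drop_mils)

def should_release(reticle, solution, trigger_held, tol=0.1):
    """Release only when the operator holds the trigger AND the reticle
    matches the solution. No trigger, no shot: intent stays human."""
    dx = reticle[0] - solution[0]
    dy = reticle[1] - solution[1]
    return trigger_held and (dx * dx + dy * dy) ** 0.5 <= tol

sol = firing_solution(tagged_dot=(0.0, 0.0))
print(should_release((0.0, 1.0), sol, trigger_held=True))    # not on solution yet
print(should_release((0.0, 2.0), sol, trigger_held=True))    # matched -> releases
print(should_release((0.0, 2.0), sol, trigger_held=False))   # operator withdrew intent
```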

At no time did the rifle "decide" to fire. Call it what you want; it's not a decision. A decision is a much deeper concept. The operator decided to fire. The rifle just tracked it.

An air-to-air missile with a proximity warhead does not decide to explode when it's close to the target aircraft. It's a simple stimulus-response detection, electrically and mechanically manifested in the mechanism. The fighter pilot made the decision to fire.

Undoubtedly, there are slippery slopes here about cognition, motivation, sentience, and all sorts of philosophical topics.

But just because a trained neural net posits TRUE for the "detonate now" question, instead of a foot on a switch, a simple pressure sensor, or anything else, doesn't make it any "smarter" or more capable of making a decision.
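A minimal sketch of that point: both triggers reduce to the same yes/no hook, so swapping one for the other adds nothing you could call a decision. The classes here are invented stand-ins, not real models:

```python
# A pressure plate and a "trained net" both reduce to the same boolean
# interface; the fuze neither knows nor cares which one answered.

class PressurePlate:
    def __init__(self, threshold_kg=40.0):
        self.threshold_kg = threshold_kg
    def detonate_now(self, reading) -> bool:
        return reading >= self.threshold_kg       # foot on a switch

class ToyNet:
    def detonate_now(self, reading) -> bool:
        # Pretend this is a trained classifier positing TRUE/FALSE.
        return reading > 0.5

def fuze(trigger, reading):
    return "BOOM" if trigger.detonate_now(reading) else "inert"

print(fuze(PressurePlate(), 55.0))   # same interface,
print(fuze(ToyNet(), 0.9))           # same (lack of) decision
```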
 
More than a matter of degree.

But just because a trained neural net posits TRUE for the "detonate now" question, instead of a foot on a switch, a simple pressure sensor, or anything else, doesn't make it any "smarter" or more capable of making a decision.

I think I am suggesting that robots, as they are often portrayed in science-fiction literature and media, are more sophisticated than what you are describing. In fact, I would suggest that the questions I am curious about are not as relevant today as they might be in the future, due to the limitations of modern-day technology.

Take, for example, Arnold Schwarzenegger's character in the original movie, The Terminator. Now this is a robot of the highest order! It's broadly given a mission to fulfill, but how it goes about fulfilling this mission is completely up to it. To track down its target, it decides to look up its target's name in the phone book, and, finding three similar names, it steals a car, drives to each of their houses in order, and proceeds to systematically murder each and every one of them. (Cruise missiles can't do that.)

When the Terminator learns its target is under police protection, guarded by thirty officers, it isn't told to storm the station; it makes that decision on its own! The events of the movie make it pretty clear that it is sophisticated enough to adapt to completely unforeseen and unanticipated circumstances.

This is obviously far beyond something as simple as a trip wire or pressure plate.

Perhaps a better example is K-2SO from Rogue One. At the end of the movie, in its heroic death scene, K-2SO decides to pull out a gun and start blasting Sophonts. The events of the movie make it quite clear that K-2SO's human "handler" does not order it to do so and is, in fact, not even aware of the decisions the robot makes. This is a decision the robot makes, on its own. It decided when to pull the trigger.

That is a warbot.

Is this something the Third Imperium would unleash upon an enemy knowing full well that something might go astray?

For example, a squad of ED-209s on patrol in a hot zone come under fire. The ED-209s return fire and pursue. The guerrilla fighters fall back to a nearby village, and continue the firefight using the confusion of panicking innocent bystanders as cover. This is a difficult scenario for human troops. Is this something that robots in Traveller, at Tech Level-12, can reliably cope with?

And is this something the average Citizen of the Imperium would be comfortable with knowing that such machines existed?
 
In the last movie of the franchise, the Terminator succeeds in killing John Connor, and then I believe walks into the ocean.

At some point, it decides to get married and start a drapery business.
 
Not quite.

In the last movie of the franchise, the Terminator succeeds in killing John Connor, and then I believe walks into the ocean.

At some point, it decides to get married and start a drapery business.

Forgive me, but I think you're getting your action heroes mixed up. That was Robin Williams in Bicentennial Man. :D
 
Take, for example, Arnold Schwarzenegger's character in the original movie, The Terminator. Now this is a robot of the highest order! It's broadly given a mission to fulfill, but how it goes about fulfilling this mission is completely up to it. To track down its target, it decides to look up its target's name in the phone book, and, finding three similar names, it steals a car, drives to each of their houses in order, and proceeds to systematically murder each and every one of them. (Cruise missiles can't do that.)

Yes, indeed. The other curious thing about the Terminator is that it actually had a touch of "morality" to it. It's amazing the number of people he DIDN'T kill. It was very mission-focused. We can call it morality because it applied to people, but it was probably more fundamental efficiency, on the order of "don't waste time on this" and "this is trouble I don't need".

This is why I cringe when folks talk about how advanced our current stuff is. It really isn't; it's mostly magic tricks, to be honest, and it's very narrowly focused. Machine learning is no panacea. Many of the algorithms and such that we use today are actually quite old; we just happen to have modern hardware that can execute them efficiently in real time to make them useful.
When the Terminator learns its target is under police protection, guarded by thirty officers, it isn't told to storm the station; it makes that decision on its own! The events of the movie make it pretty clear that it is sophisticated enough to adapt to completely unforeseen and unanticipated circumstances.
In the police station, he was simply being systematic in his search. If he had managed to kill Sarah, he would likely have just turned around and left. Then what, who knows.

That is a warbot.
Truly, the earliest real mention of robots in CT is the Zhodani Warbot. I don't know its capabilities.

And is this something the average Citizen of the Imperium would be comfortable with knowing that such machines existed?

There was some hubbub a while back about, and I may be wrong here, the first "robot" to "kill someone" in a civilian setting. I think there was a barricaded suspect, and the police sent in something like a "bomb bot" that rolled up to him and blew up.

Yeah, this is it (WP article): https://www.washingtonpost.com/nati...ee114e-4a84-11e6-bdb9-701687974517_story.html

Using a pair of thumb-controlled joysticks, a Dallas police officer guided a robot loaded with a pound of C-4 plastic explosive toward cop-killer Micah Xavier Johnson and blew him up.

As sophisticated as this device was, it was just a step up from an R/C car with a grenade. Not a "robot" at all. If there's any real question here, it's the use of the explosive (something not particularly selective compared to, say, a marksman) to do the work. But I imagine there was a camera on this thing, and the operator was able to see where he was when he set off the explosive.
 
The Zhodani robots in AHL needed to be supervised by both an officer and a technician.

Unsupervised warbots were limited in the actions they could take.

Unsupervised maintenance bots were unable to take any offensive action.
 