On a different note: in our age of increasingly sophisticated computerization of vehicle controls, and with the job of a tank loader now fulfillable by an autoloader mechanism, what are the current obstacles to a tank design that needs only one person to do the jobs of driver, gunner, and even commander?
Fiat iustitia, et pereat mundus.

Wouldn’t make much sense. I think a 3-man crew is probably as small as you can realistically go for an MBT.
They should have sent a poet.

The biggest obstacle is a functioning AI that can handle the task space.
Marq: As of right now, the biggest limit is functional technology to fill the gaps left by the crew that would be replaced. Right now the crew of a tank serves an important role both in vehicle awareness and in operating secondary weapon systems to defend the tank. The US did some testing a while back, when it was experimenting with where it could go with the Abrams, and found the bare minimum to run the tank was about 3 people. They help maintain the tank, defend the tank, and provide awareness.
For the tank to operate with one man you would need to automate a watchdog system that can operate secondary weapons with capabilities comparable to or better than a crew's; a targeting system that can accurately and smartly cue up threats as well as actively search for them in the surrounding environment; and, finally, maintenance automated and simplified to the point that one person can keep the tank running.
Basically, you need a sci-fi tank.
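To make the "cue up threats" requirement concrete, here is a deliberately trivial sketch. Everything in it — the threat types, the scoring weights, the function names — is invented for illustration; a real system would fuse radar and thermal tracks, IFF, and much more. The idea is just that detections go into a priority queue so the lone crewman is always presented with the most urgent threat first:

```python
import heapq

def threat_score(threat):
    # Closer and more dangerous -> higher score (weights are invented)
    danger = {"atgm_team": 3.0, "tank": 2.5, "ifv": 1.5, "truck": 0.5}
    return danger.get(threat["type"], 1.0) * (1000.0 / max(threat["range_m"], 1))

def cue_queue(detections):
    # heapq is a min-heap, so push negated scores to pop the highest first;
    # the index i breaks ties without comparing the dicts themselves
    heap = [(-threat_score(t), i, t) for i, t in enumerate(detections)]
    heapq.heapify(heap)
    while heap:
        _, _, t = heapq.heappop(heap)
        yield t

detections = [
    {"type": "truck", "range_m": 400},
    {"type": "atgm_team", "range_m": 800},
    {"type": "tank", "range_m": 1500},
]
for t in cue_queue(detections):
    print(t["type"], t["range_m"])
```

Even this toy version shows why the problem is hard: the scoring function is doing the commander's judgment, and getting that judgment right across messy real-world sensor data is exactly the "sci-fi" part.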
Who watches the watchmen?

So I need a sufficiently sophisticated AI to assist the single human in the actual operation (secondary weapons, environmental awareness, etc.) and either self-repairing nanotech or Drone Deployer capability to assist with field maintenance.
Edited by MarqFJA on Feb 1st 2019 at 7:25:59 PM
Fiat iustitia, et pereat mundus.

Marq: Give me a minute and I will dig up some sources for you.
Analysis of the workload of tank crew under the conditions of informatization
Federation of American Scientists link on the concept of going down to a two-man crew. The title is "The Crewing and Configuration of the Future Main Battle Tank"
The concepts covered in both of these are still kicking around, and DARPA has some pretty wild ideas on future tank design, which include increased automation and an increased emphasis on evasion and avoiding detection.
Edited by TuefelHundenIV on Feb 1st 2019 at 10:35:33 AM
Who watches the watchmen?

If you can automate to that degree you’d be better off just cutting the last human out of the mix and going fully autonomous. At that point having a human around is more of a liability than anything else; there’s no reason to even have one.
Personally I don’t think you’d want to go below 3 crew members even if you could, because with only 2, if one is out of commission there’s no redundancy. One of the benefits of having 3 crew is that if the driver or gunner can’t do their job, the commander can cover for them.
Edited by archonspeaks on Feb 1st 2019 at 8:36:51 AM
They should have sent a poet.

The FCS link covers a model of two-crew operation, but with 3 crew aboard so that the 3rd person can rest.
Who watches the watchmen?

Trump has ticked off another retired high-ranking general officer: Martin Dempsey is quietly expressing his disapproval of the POTUS, it seems.
Edited by TheWildWestPyro on Feb 1st 2019 at 9:07:31 AM
On the aircraft carrier thing: while a supercarrier is out of the question, I do remember that a few years back the British MoD auctioned off one of its old carriers; let me see if I can find a link about it.
Edit: Found it, we sold both Invincible and Ark Royal that way.[1]
Edited by Silasw on Feb 1st 2019 at 5:09:59 PM
“And the Bunny nails it!” ~ Gabrael
“If the UN can get through a day without everyone strangling everyone else so can we.” ~ Cyran

From BlueAndOrangeMorality, Real Life:
- a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips. [..] It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. [..] It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips. This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future.
Then don't make the tank autonomous; just turn it into an MBT-sized UGV.
Oh really? When?
There would always be a human in the loop.
Si Vis Pacem, Para Perkele

That isn't an actual experiment; it is a thought experiment....
Actual experimentation has shown that AI is much less straightforward than that.
There is honestly no reason to put a human in the loop if the AI can identify targets reliably, which it would need to do to keep the number of men down.
Edited by Imca on Feb 2nd 2019 at 12:19:44 PM
Yeah, the paperclip maximizer isn’t an actual assessment of how AI would work but more a high-concept discussion of potential issues. Like Schrödinger’s Cat, people take it way too literally.
If you can build an autonomous system flexible enough to approximate a human tank crew, just let it run the tank and put the human command element somewhere else. This way you don’t need to build the tank with crew spaces or comforts and the human is much safer. Having a human inside a tank with that level of automation kind of defeats the whole point of having that level of automation.
Edited by archonspeaks on Feb 2nd 2019 at 1:57:21 AM
They should have sent a poet.

Erik Prince had 'no knowledge' of training agreement in China's Xinjiang: spokesman
ROFLMAO. I hope he isn't expecting anyone to believe that garbage.
Who watches the watchmen?

Yeah, I'm calling bullshit too.
Disgusted, but not surprised

A bit of a showreel from the FDF:
Marq: On the automation: part of why he is saying you may as well go fully automatic is that by automating the most complex tasks of the tank crew, awareness and targeting, you have already done the hardest part of complete automation. Automating driving is comparatively easier, and is something already in the works right now, with demonstrated units operating in field exercises.
The only part really left is automating the broader command and decision making, which, as far as the majority of humans (even those in the US military) are concerned, is not going to happen if they can help it. That keeps a person in the loop, and arguably at the most important points of authority and control.
The paperclip machine is a hyperbolic thought experiment which is on its face overtly ridiculous. It requires two very stupid assumptions to work. The first is that someone would, on purpose, create a device they couldn't command and control, with no fail-safes. The second is that the machine is completely invulnerable to interruption, disruption, and mishaps. It also assumes that a machine built to make paperclips to fill a need would lack any method of understanding what the need actually is. Basically, it requires a complete lack of understanding of automation at multiple levels, plus an assumption of super technology, to even carry out the thought experiment.
Edited by TuefelHundenIV on Feb 2nd 2019 at 8:47:21 AM
Who watches the watchmen?

The paperclip maximizer problem is intended to illustrate how realistically designed AI works (and potentially fails), in contrast to the usual Hollywood "Revolt of the Robots" scenario. It isn't intended to be taken literally, no, because in real life engineers would build in safeguards against that sort of thing. But it does illustrate the hierarchical goal structure that AI generally relies on.
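A minimal sketch of what that utility-driven goal structure looks like (everything here — the state shape, the numbers, the action names — is invented for illustration, not a real AI architecture): the agent scores each available action against a fixed utility function and takes the argmax. That is exactly why "revise your goal" never gets picked; a future self with a different goal scores poorly under the current one.

```python
def expected_paperclips(state, action):
    """Crude utility model over a toy state {'clips': int, 'money': int}."""
    if action == "manufacture":
        return state["clips"] + 10
    if action == "buy" and state["money"] >= 5:
        return state["clips"] + 5
    if action == "revise_goal":
        return 0            # a self that stops caring makes ~no clips
    return state["clips"]   # no-op / unaffordable action

def choose_action(state, actions):
    # Pure argmax over the utility function: the goal itself is fixed.
    return max(actions, key=lambda a: expected_paperclips(state, a))

state = {"clips": 0, "money": 3}
actions = ["manufacture", "buy", "revise_goal", "wait"]
print(choose_action(state, actions))  # prints: manufacture
```

The safeguards engineers actually add amount to changing that utility function or constraining the action set, which is why the literal runaway-maximizer reading misses the point.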
I would say that's fair, but the reality is that when AI fails it is normally because of the way being literal and being efficient overlap. It isn't "literal" in the way a human thinks of it, but when you look at the goal itself it makes more sense.
An example is the AI they use to simulate evolution. For the longest time the problem was that when you tried to make them walk, they would do things like turn the animal into a long stick that falls over, because the goal was defined as moving from point A to point B.....
One of them was designed to try to fly, and it came up with the idea that the best way to do this would be to vibrate really fast so that it clipped into the floor, and the physics engine glitched out and launched it into the sky.
Basically, what I am getting at is that it fails as a scenario because if the AI could either turn the entire system into paperclips to fulfill its goal, or just rules-lawyer its way into saying that everything is already a paperclip.... it is going to do the latter, because it takes less effort.
Which is ironically one of the problems with the Hollywood robot rebellion too..... killing all humans is hard, and extremely inefficient; it's not going to come to that conclusion on its own.
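That "long stick" failure mode is easy to reproduce in miniature. Below is a toy hill-climbing optimizer (all numbers hypothetical, no physics engine): the intended behavior is walking, but the fitness function only measures how far the creature's furthest point ends up from the start line, so growing the body and falling over scores just as well as locomotion, and since growing is the cheaper mutation, it dominates.

```python
import random

def distance_covered(body_length, gait_quality):
    walked = gait_quality * 1.0   # distance from actual locomotion (capped small)
    toppled = body_length         # a stick of length L falls over and its
                                  # tip lands L away, with no walking at all
    return walked + toppled

def evolve(generations=200, seed=0):
    rng = random.Random(seed)
    body_length, gait_quality = 1.0, 0.0
    best = distance_covered(body_length, gait_quality)
    for _ in range(generations):
        # Growing the body is "cheap" (big mutation steps); improving the
        # gait is "hard" (small steps), mirroring how the real simulators
        # stumbled onto degenerate morphologies first
        cand_len = max(0.0, body_length + rng.uniform(-0.5, 0.5))
        cand_gait = min(1.0, max(0.0, gait_quality + rng.uniform(-0.05, 0.05)))
        score = distance_covered(cand_len, cand_gait)
        if score > best:
            best, body_length, gait_quality = score, cand_len, cand_gait
    return body_length, gait_quality, best

length, gait, score = evolve()
print(f"body_length={length:.2f} gait_quality={gait:.2f} fitness={score:.2f}")
```

The optimizer pours nearly all of its improvement into body_length and almost none into the gait: it has "solved" the stated objective while completely missing the intended one, which is the whole specification-gaming problem in one toy.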
Edited by Imca on Feb 2nd 2019 at 7:26:57 AM
Macedonia to sign NATO pact this week
“On 6 February we will write history: #NATO Allies will sign the accession protocol with the future Republic of North Macedonia,” NATO Secretary-General Jens Stoltenberg tweeted.
The move comes after the Balkan state agreed to change its name to North Macedonia to settle a decades-long dispute with Greece.
“The ratification will take some time, it depends on all 29 parliaments,” Stoltenberg told POLITICO in a recent interview. “Last time it took around a year,” he said, referring to Montenegro’s 2017 accession to the alliance.
Be prepared for the usual Russian information offensive, with more urgency after the name issue has been resolved.
Edited by TerminusEst on Feb 3rd 2019 at 5:20:11 AM
Si Vis Pacem, Para Perkele
"...using the reactor to power a crypto mining rig..."
I like that one.