The Military Thread

DeMarquis Since: Feb, 2010
#56026: Feb 1st 2019 at 6:49:54 PM

"...using the reactor to power a crypto mining rig..."

I like that one.

MarqFJA The Cosmopolitan Fictioneer from Deserts of the Middle East (Before Recorded History) Relationship Status: Anime is my true love
#56027: Feb 1st 2019 at 6:55:19 PM

On a different note, in our age of increasingly sophisticated computerization of vehicle controls, and with the job of a tank loader fulfillable by an autoloader mechanism, what are the current obstacles to making a tank design that needs only one person to do the jobs of driver, gunner, and even commander?

Fiat iustitia, et pereat mundus.
archonspeaks Since: Jun, 2013
#56028: Feb 1st 2019 at 7:02:25 PM

[up] Wouldn't make much sense. I think a 3-man crew is probably as small as you can realistically go for an MBT.

They should have sent a poet.
DeMarquis Since: Feb, 2010
#56029: Feb 1st 2019 at 7:06:59 PM

The biggest obstacle is a functioning AI that can handle the task space.

TuefelHundenIV Night Clerk of the Apacalypse. from Doomsday Facility Corner Store. Since: Aug, 2009 Relationship Status: I'd need a PowerPoint presentation
#56030: Feb 1st 2019 at 8:21:22 PM

Marq: As of right now the biggest limit is functional technology to fill the gaps left by the crew members being replaced. Right now the crew of a tank serve an important role both in vehicle awareness and in operating the secondary weapon systems that defend the tank. The US did some testing a while back when they were experimenting with where they could go with the Abrams tanks and found the bare minimum to run the tank was about 3 people. They help maintain the tank, defend the tank, and provide awareness.

For the tank to operate with one man you would need an automated watchdog system that can operate the secondary weapons with capabilities comparable to or better than a crew's, a targeting system that can accurately and intelligently cue up threats as well as actively search for them in the surrounding environment, and, finally, maintenance automated and simplified to the point that it takes only one person to keep the tank running.

Basically, you need a sci-fi tank.
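(A rough sketch of what that automated watchdog/targeting layer would have to do, purely for illustration: the threat categories, weights, and scoring rule below are invented placeholders, not anything taken from a real fire-control system.)

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical threat weights; a real system would fuse far richer sensor data.
THREAT_WEIGHTS = {"atgm_team": 1.0, "tank": 0.9, "ifv": 0.7, "infantry": 0.4, "unknown": 0.5}

@dataclass(order=True)
class Contact:
    priority: float                      # only field used for ordering
    kind: str = field(compare=False)
    bearing_deg: float = field(compare=False)
    range_m: float = field(compare=False)

def score(kind: str, range_m: float) -> float:
    """Crude priority: threat-type weight, boosted as the range closes."""
    weight = THREAT_WEIGHTS.get(kind, THREAT_WEIGHTS["unknown"])
    return weight * (1.0 + 2000.0 / max(range_m, 50.0))

def cue_threats(detections):
    """Turn raw detections into a prioritized cue list for the lone crewman."""
    queue = []
    for det in detections:
        pri = score(det["kind"], det["range_m"])
        # heapq is a min-heap, so store the negated priority to pop the worst threat first.
        heapq.heappush(queue, Contact(-pri, det["kind"], det["bearing_deg"], det["range_m"]))
    while queue:
        c = heapq.heappop(queue)
        print(f"CUE: {c.kind} at {c.bearing_deg:.0f} deg, {c.range_m:.0f} m "
              f"(priority {-c.priority:.2f})")

cue_threats([
    {"kind": "infantry", "bearing_deg": 40, "range_m": 900},
    {"kind": "atgm_team", "bearing_deg": 310, "range_m": 1800},
    {"kind": "tank", "bearing_deg": 95, "range_m": 2400},
])
```

Even in this toy form, the queue is the trivial part; the detection and classification feeding it is where all the hard automation lives.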

Who watches the watchmen?
MarqFJA The Cosmopolitan Fictioneer from Deserts of the Middle East (Before Recorded History) Relationship Status: Anime is my true love
#56031: Feb 1st 2019 at 8:25:14 PM

So I need a sufficiently sophisticated AI to assist the single human with the actual operation (secondary weapons, environmental awareness, etc.), and either self-repairing nanotech or Drone Deployer capability to assist with field maintenance.

Edited by MarqFJA on Feb 1st 2019 at 7:25:59 PM

Fiat iustitia, et pereat mundus.
TuefelHundenIV Night Clerk of the Apacalypse. from Doomsday Facility Corner Store. Since: Aug, 2009 Relationship Status: I'd need a PowerPoint presentation
#56032: Feb 1st 2019 at 8:31:52 PM

Marq: Give me a minute and I will dig up some sources for you.

Analysis of the workload of tank crew under the conditions of informatization

Abstract: A consensus has been reached that tanks need to be integrated into the informatized battlefield. With the development of technology, tank crews have been gradually shrinking, so research on two-soldier crew tanks has become a hotspot. The workload of the tank crew under the conditions of informatization is analyzed based on the combat mission of the tank and typical combat scenarios, and the impact of new technologies on workload is evaluated. The crew can be reduced from three to two, but it is necessary to substantially improve the automation of target search and the reliability of each subsystem and component.

Federation of American Scientists link on the concept of going down to a two-man crew. The title is "The Crewing and Configuration of the Future Main Battle Tank"

The concepts covered in both of these are still kicking around, and DARPA has some pretty wild ideas on future tank design, including increased automation and an increased emphasis on evasion and avoiding detection.

Edited by TuefelHundenIV on Feb 1st 2019 at 10:35:33 AM

Who watches the watchmen?
archonspeaks Since: Jun, 2013
#56033: Feb 1st 2019 at 8:33:36 PM

If you can automate to that degree you'd be better off just cutting the last human out of the mix and going fully autonomous. At that point having a human around is more of a liability than anything else; there's no reason to even have one.

Personally I don't think you'd want to go below 3 crew members even if you could, because with only 2, if one is out of commission there's no redundancy. One of the benefits of having 3 crew is that if the driver or gunner can't do their job, the commander can cover for them.

Edited by archonspeaks on Feb 1st 2019 at 8:36:51 AM

They should have sent a poet.
TuefelHundenIV Night Clerk of the Apacalypse. from Doomsday Facility Corner Store. Since: Aug, 2009 Relationship Status: I'd need a PowerPoint presentation
#56034: Feb 1st 2019 at 8:38:24 PM

The FAS link covers a model of two-crew operation but with 3 crew carried, to allow the third person to rest.

Who watches the watchmen?
TheWildWestPyro from Seattle, WA Since: Sep, 2012 Relationship Status: Healthy, deeply-felt respect for this here Shotgun
#56035: Feb 1st 2019 at 9:07:13 PM

Trump has ticked off another retired high-ranking general officer: Martin Dempsey is quietly expressing his disapproval of the POTUS, it seems.

Edited by TheWildWestPyro on Feb 1st 2019 at 9:07:31 AM

Silasw A procrastination in of itself from A handcart to hell (4 Score & 7 Years Ago) Relationship Status: And they all lived happily ever after <3
#56036: Feb 1st 2019 at 9:07:26 PM

On the aircraft carrier thing: while a supercarrier is out of the question, I do remember that a few years back the British MoD auctioned off one of its old carriers. Let me see if I can find a link about it.

Edit: Found it. We sold both Invincible and Ark Royal that way.[1]

Edited by Silasw on Feb 1st 2019 at 5:09:59 PM

“And the Bunny nails it!” ~ Gabrael “If the UN can get through a day without everyone strangling everyone else so can we.” ~ Cyran
MarqFJA The Cosmopolitan Fictioneer from Deserts of the Middle East (Before Recorded History) Relationship Status: Anime is my true love
#56037: Feb 1st 2019 at 11:29:20 PM

"If you can automate to that degree you'd be better off just cutting the last human out of the mix and going fully autonomous. At that point having a human around is more of a liability than anything else; there's no reason to even have one."
How so? No, really; I legitimately do not get the logic. Just because you have an AI that can identify threats on a battlefield about as well as your average well-trained human soldier can doesn't necessarily mean it's qualified to handle every form of decision-making on its own, because humans make many of their decisions based on more than just cold, unfeeling logic... and experiments with modern computers have demonstrated that they follow a really alien form of "logic".

From BlueAndOrangeMorality.Real Life:

* The paperclip maximizer is a thought experiment about an artificial intelligence that could result in our doom because of a lack of compatible morality.
A paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips. [...] It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function, in this case the number of paperclips. [...] It would innovate better and better techniques to maximize the number of paperclips. At some point, it might convert most of the matter in the solar system into paperclips. This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future.
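(For illustration only, the failure mode the quote describes is essentially a single-objective optimizer with nothing else in its utility function. A minimal toy sketch, where the "world", the available actions, and the numbers are all invented:)

```python
# Toy single-objective optimizer in the spirit of the paperclip thought experiment.
# Nothing here models a real AGI; the "world" and actions are invented placeholders.

def utility(state):
    # The agent's entire value system: more paperclips is strictly better.
    return state["paperclips"]

ACTIONS = {
    "make_paperclips": lambda s: {**s, "paperclips": s["paperclips"] + 10,
                                  "raw_matter": s["raw_matter"] - 10},
    "acquire_matter":  lambda s: {**s, "raw_matter": s["raw_matter"] + 50},
    "revise_goals":    lambda s: dict(s),  # earns no paperclips, so never beats production
    "shut_down":       lambda s: dict(s),  # likewise never preferred
}

def step(state):
    # Greedy one-step lookahead: pick whichever feasible action yields the most utility.
    # (Ties break toward the earlier-listed action in this toy.)
    candidates = {}
    for name, act in ACTIONS.items():
        nxt = act(state)
        if nxt["raw_matter"] >= 0:
            candidates[name] = nxt
    best = max(candidates, key=lambda name: utility(candidates[name]))
    return best, candidates[best]

state = {"paperclips": 0, "raw_matter": 20}
for _ in range(5):
    action, state = step(state)
    print(action, state)
```

The only point of the toy is that "revise goals" and "shut down" never look attractive to an agent scored solely on paperclips; whether anyone would actually build such a thing is a separate question.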

Fiat iustitia, et pereat mundus.
LeGarcon Blowout soon fellow Stalker from Skadovsk Since: Aug, 2013 Relationship Status: Gay for Big Boss
#56038: Feb 1st 2019 at 11:30:32 PM

Then don't make the tank autonomous; just turn it into an MBT-sized UGV.

Oh really when?
TerminusEst from the Land of Winter and Stars Since: Feb, 2010
#56039: Feb 1st 2019 at 11:54:04 PM

[up][up]

There would always be a human in the loop.

Si Vis Pacem, Para Perkele
Imca (Veteran)
#56040: Feb 2nd 2019 at 12:18:04 AM

[up][up][up] That isn't an actual experiment; it's a thought experiment...

Actual experimentation has shown that AIs are much less straightforward than that.

There is honestly no reason to put a human in the loop if the AI can identify targets reliably, which it would need to do to keep the number of men down.

Edited by Imca on Feb 2nd 2019 at 12:19:44 PM

archonspeaks Since: Jun, 2013
#56041: Feb 2nd 2019 at 1:55:45 AM

Yeah, the paperclip maximizer isn't an actual assessment of how AI would work but more a high-concept discussion of potential issues. Like Schrödinger's Cat, people take it way too literally.

If you can build an autonomous system flexible enough to approximate a human tank crew, just let it run the tank and put the human command element somewhere else. This way you don’t need to build the tank with crew spaces or comforts and the human is much safer. Having a human inside a tank with that level of automation kind of defeats the whole point of having that level of automation.
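(A minimal sketch of the "human command element somewhere else" arrangement described above: the vehicle proposes engagements over a data link and a remote operator approves or denies, with hold-fire as the fail-safe default. The message formats, timeout, and queue-based "link" are all invented for illustration.)

```python
import queue
import threading

uplink = queue.Queue()    # vehicle -> remote operator: proposed engagements
downlink = queue.Queue()  # remote operator -> vehicle: decisions

def vehicle():
    for target in ["ifv, grid 1234 5678", "unknown vehicle, grid 1240 5690"]:
        uplink.put(target)
        try:
            # Fail-safe default: if no decision arrives in time, hold fire.
            decision = downlink.get(timeout=2.0)
        except queue.Empty:
            decision = "hold"
        print(f"[vehicle] {target}: {decision}")

def operator():
    # The human command element, sitting well away from the vehicle, makes the call.
    while True:
        target = uplink.get()
        downlink.put("engage" if target.startswith("ifv") else "hold")

threading.Thread(target=operator, daemon=True).start()
vehicle()
```

The design choice worth noting is the default: losing the link degrades to "hold", not to the platform deciding for itself.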

Edited by archonspeaks on Feb 2nd 2019 at 1:57:21 AM

They should have sent a poet.
TerminusEst from the Land of Winter and Stars Since: Feb, 2010
#56042: Feb 2nd 2019 at 3:52:09 AM

Erik Prince had 'no knowledge' of training agreement in China's Xinjiang: spokesman

Frontier Services Group (FSG), a Hong Kong-listed company founded by Prince, said in a Chinese-language statement posted on its website on Jan. 22 that it had signed a deal to build a training centre in southern Xinjiang.

Si Vis Pacem, Para Perkele
TuefelHundenIV Night Clerk of the Apacalypse. from Doomsday Facility Corner Store. Since: Aug, 2009 Relationship Status: I'd need a PowerPoint presentation
#56043: Feb 2nd 2019 at 5:42:30 AM

ROFLMAO. I hope he isn't expecting anyone to believe that garbage.

Who watches the watchmen?
M84 Oh, bother. from Our little blue planet Since: Jun, 2010 Relationship Status: Chocolate!
#56044: Feb 2nd 2019 at 5:55:18 AM

Yeah, I'm calling bullshit too.

Disgusted, but not surprised
TerminusEst from the Land of Winter and Stars Since: Feb, 2010
#56045: Feb 2nd 2019 at 6:41:05 AM

A bit of a showreel from the FDF:

Si Vis Pacem, Para Perkele
TuefelHundenIV Night Clerk of the Apacalypse. from Doomsday Facility Corner Store. Since: Aug, 2009 Relationship Status: I'd need a PowerPoint presentation
#56046: Feb 2nd 2019 at 6:44:40 AM

Marq: On the automation: part of why he is saying you may as well go fully automatic is that by automating the most complex tasks the tank crew handles, awareness and targeting, you have already done the hardest part of complete automation. Automating driving is comparatively easier and is something already in the works right now, with demonstrated units operating in field exercises.

The only part really left is automating the broader command and decision-making, which, as far as the majority of humans (even those in the US military) are concerned, is not going to happen if they can help it. That keeps a person in the loop at some point, and arguably at the most important points of authority and control.

The paperclip machine is a hyperbolic thought experiment which is on its face overtly ridiculous. It requires two very stupid assumptions to work: first, that someone would deliberately create a device they couldn't command and control and would include no fail-safes; second, that the machine is completely invulnerable to interruption, disruption, and mishaps. It also assumes that a machine built to make paperclips to fill a need would lack any method of understanding what that need actually is. Basically, it requires a complete lack of understanding of automation at multiple levels, plus an assumption of super technology, to even carry out the thought experiment.
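(To illustrate the fail-safe point, here is a toy interlock wrapped around an automated task: the process runs only while explicit authorization stands and a hard quota remains, so command and control is never given away. The quota, the flag, and the "work" are placeholders, not a real industrial control scheme.)

```python
# Toy command-and-control interlock: automation that can always be interrupted.

class Interlock:
    def __init__(self, quota):
        self.quota = quota        # hard upper bound set by the operator
        self.authorized = True    # can be revoked at any moment

    def revoke(self):
        self.authorized = False

def run_automated_task(interlock):
    produced = 0
    while interlock.authorized and produced < interlock.quota:
        produced += 1             # stand-in for one unit of automated work
        print(f"produced {produced}")
        if produced == 3:
            interlock.revoke()    # simulate the operator pulling the plug mid-run
    reason = "quota reached" if produced >= interlock.quota else "authorization revoked"
    print("halted:", reason)

run_automated_task(Interlock(quota=10))
```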

Edited by TuefelHundenIV on Feb 2nd 2019 at 8:47:21 AM

Who watches the watchmen?
DeMarquis Since: Feb, 2010
#56047: Feb 2nd 2019 at 7:19:08 PM

The paperclip maximizer problem is intended to illustrate how realistically designed AI works (and potentially fails), in contrast to the usual Hollywood "Revolt of the Robots" scenario. It isn't intended to be taken literally, no, because in real life engineers would build in safeguards against that sort of thing. But it does illustrate the hierarchical goal structure that AI generally relies on.

Imca (Veteran)
#56048: Feb 2nd 2019 at 7:25:49 PM

I would say that's fair, but the reality is that when AI fails, it is normally because of the way being literal and being efficient overlap. It isn't "literal" in the way that a human thinks of it, but when you look at the goal itself it makes more sense.

An example is those AIs used to simulate evolution. For the longest time the problem with them was that when you tried to make them walk, they would do things like just turn the animal into a long stick and let it fall over, because "walking" was defined as moving from point A to point B...

One of them was designed to try to fly, and it came up with the idea that the best way to do this would be to vibrate really fast so that it clipped into the floor, and the physics engine glitched out and launched it into the sky.

Basically, what I am getting at here is that it fails as a scenario because if the AI could either turn the entire system into paperclips to fulfill its goal or just rules-lawyer its way into saying that everything is already a paperclip... it is going to do the latter, because it takes less effort.

Which is, ironically, one of the problems with the Hollywood robot rebellion too... killing all humans is hard and extremely inefficient; it's not going to come to that conclusion on its own.
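(A toy version of that "long stick" story, purely for illustration: the one-line "physics" and the numbers are made up, but they show how a naive fitness function gets gamed.)

```python
# The naive fitness only measures how far any part of the body ends up from the
# start line, so a body that just falls over can outscore one that genuinely walks.

def naive_fitness(body_height_m, distance_walked_m):
    # Farthest point from the start after the run: a toppled body of height h
    # puts its tip roughly h metres out even if it never took a step.
    return max(distance_walked_m, body_height_m)

candidates = {
    # name: (body height, distance actually walked)
    "careful walker": (1.0, 2.5),  # short body, genuinely walks 2.5 m
    "long stick":     (6.0, 0.0),  # grows tall, takes zero steps, falls over
}

for name, (height, walked) in candidates.items():
    print(f"{name:14s} fitness = {naive_fitness(height, walked):.1f} m")

# The "long stick" wins despite never walking: it satisfies the metric, not the intent.
```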

Edited by Imca on Feb 2nd 2019 at 7:26:57 AM

TerminusEst from the Land of Winter and Stars Since: Feb, 2010
#56049: Feb 3rd 2019 at 5:19:10 AM

Macedonia to sign NATO pact this week

North Macedonia will on Wednesday begin the process to join NATO, the secretary general of the military alliance announced Saturday.

“On 6 February we will write history: #NATO Allies will sign the accession protocol with the future Republic of North Macedonia,” NATO Secretary-General Jens Stoltenberg tweeted.

The move comes after the Balkan state agreed to change its name to North Macedonia to settle a decades-long dispute with Greece.

The accession protocol must then be signed off by each NATO member.

“The ratification will take some time, it depends on all 29 parliaments,” Stoltenberg told POLITICO in a recent interview. “Last time it took around a year,” he said, referring to Montenegro’s 2017 accession to the alliance.

Be prepared for the usual Russian information offensive, with more urgency after the name issue has been resolved.

Edited by TerminusEst on Feb 3rd 2019 at 5:20:11 AM

Si Vis Pacem, Para Perkele
