Actuarial Outpost
 
#31 | 11-29-2018, 02:22 PM
IANAE (Member, CAS AAA)
Quote:
Originally Posted by Arroway
Blindly relying on sensors and warnings without understanding them is, IMHO, riskier than not having them.
Relying on them at all is riskier than not having them; at best, they are indicators that alert you to potential problems, which should be confirmed and may require action sooner rather than later.

Apollo 13 is a great topically relevant flick.
#32 | 11-29-2018, 04:18 PM
campbell (Mary Pat Campbell, SOA AAA)
Quote:
Originally Posted by IANAE
Inadequate training, perhaps exaggerated/accelerated by training "drift" toward over-reliance on technology-assisted safety devices, and permitted to occur by regulators.

I recall pilot training involving flight/landing after loss of power; same for auto driver's ed (at least I learned how). When did regulators(?) decide people don't need to know how to safely control/manage their vehicle when something goes wrong?
There's that -- but also look above. Examples are being given where the "automatic safety features" themselves are overriding what the humans would do (like stopping in a left lane). Part of the issue is not even being able to override the machine that's supposedly making a "safe" decision... and it's not.
#33 | 11-29-2018, 05:11 PM
IANAE (Member, CAS AAA)
Quote:
Originally Posted by campbell
There's that -- but also look above. Examples are being given where the "automatic safety features" themselves are overriding what the humans would do (like stopping in a left lane). Part of the issue is not even being able to override the machine that's supposedly making a "safe" decision... and it's not.
Saw that, too... it falls under 'relying on them at all is riskier than not having them'. I think humans need to be able to override fail-safe/fail-secure safety features under certain circumstances, and they need to be properly trained to know when and how to do so, and what to do once they've regained control.