<?xml version="1.0" encoding="UTF-8"?>        <rss version="2.0"
             xmlns:atom="http://www.w3.org/2005/Atom"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
             xmlns:admin="http://webns.net/mvcb/"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <channel>
            <title>Discussion Topics - MasterAirlinePilot.com Forum</title>
            <link>https://www.masterairlinepilot.com/community/discussion-topics/</link>
            <description>MasterAirlinePilot.com Discussion Board</description>
            <language>en-US</language>
            <lastBuildDate>Thu, 23 Apr 2026 13:55:58 +0000</lastBuildDate>
            <generator>wpForo</generator>
            <ttl>60</ttl>
							                    <item>
                        <title>Topic 24-5: Let’s retire “Complacency”</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-5-lets-retire-complacency/</link>
                        <pubDate>Mon, 19 Aug 2024 19:50:01 +0000</pubDate>
                        <description><![CDATA[Applying the label complacency is common across the fields of aviation psychology and human factors. The term distorts our perspective so that we tend to recognize it in any incident when op...]]></description>
                        <content:encoded><![CDATA[<p>      Applying the label <em>complacency </em>is common across the fields of aviation psychology and human factors. The term distorts our perspective so that we tend to recognize it in any incident where operators fail to act effectively to stop a mishap’s trajectory. It feels very useful because it seamlessly connects an undesirable outcome with a cause – that the operator’s complacency led them to miss a warning sign that they should have seen, to miscomprehend a sign that they did see, to miss making a decision they should have made, or to fail to perform actions they should have taken. Complacency also exaggerates the gap between accepted performance standards and the “substandard behaviors” that the operators apparently demonstrated. It fits deductive “if/then” logic. <em>If </em>the crew hadn’t been complacent, <em>then </em>they would have detected the deteriorating conditions, recovered to a more favorable trajectory, and avoided the mishap. Finally, the term provides a sense of completeness that seems to explain the event. It allows us to hold the <em>complacent</em> <em>operators</em> completely and exclusively responsible for the mishap, even though such simplistic, single-cause explanations never hold within complex systems.</p>
<p>      As our use of the label <em>complacency</em> has spread, it has neither advanced our understanding of mishap evolution nor furthered our progress toward aviation safety. This is because it relies on the flawed bad apple theory – that these particular operators (bad apples) are fundamentally flawed and that their removal or rehabilitation will prevent both the cause and the undesired outcome from recurring. This is rarely the case. Also, the complacency label imputes carelessness to the operators’ motivations and decision making. In over 20 years of incident and mishap investigation, I have never encountered a pilot who felt careless about their flying, their profession, or their commitment to safe operations. The label of complacency did not fit their mindsets or actions. In the end, investigators simply apply the term because they find it convenient, familiar, and uncomplicated, not because it accurately describes the cause or evolution of the mishap.</p>
<p>      Assuming that we significantly reduce our use of <em>complacency</em>, we need to apply suitable replacements. As we look deeply into the underlying causes of mishap events that we currently label as arising from complacent behaviors, we find the common factor of <em>attention level.</em> This is especially evident with experienced, proficient operators. I addressed this effect in a past discussion topic (<a href="https://www.masterairlinepilot.com/community/discussion-topics/topic-23-8-the-adverse-effects-created-by-the-comfort-zone/#post-25">Topic 23-8 on the Master Pilot Forum</a>) and in my book, <em>Master Airline Pilot: Applying Human Factors to Achieve Peak Performance and Operational Resilience</em>. As proficient pilots become more comfortable and familiar with performing their tasks, they don’t feel the need to apply as much attention. The task doesn’t feel difficult, so it doesn’t require as much mental focus. Over time, the level of attention devoted to completing familiar tasks drops. Resting snugly in their comfort zone, pilots automate repetitive tasks. Manual tasks are performed through muscle memory. Cognitive tasks apply well-worn, reliable game plans. A mismatch develops between the attention that the operator feels they need to complete a task and the attention level that is appropriate for that task. As an aviation example, imagine a highly proficient pilot who repeatedly gazes out their side window while flying down short final. Imagine a monitoring pilot who does the same while the pilot flying lands the aircraft. While both of these examples indicate low attention focus, they don’t necessarily show complacency. Anecdotal evidence seems to indicate that high proficiency and a long history of successful flying tend to increase the mismatch in attention level. Laxity doesn’t cause failure. Instead, it promotes a latent vulnerability that lingers unseen below the surface. 
Everything works out fine, flight after flight, year after year – until one day, it doesn’t.</p>
<p>      We need terms to classify the mishap events that emerge when an operator applies an inappropriately low level of attention to the task at hand. These new terms shouldn’t imply negative motivations like unconcern, neglect, negligence, carelessness, or sloppiness. They should focus only on the attention gap. I suggest using <em>laxity </em>or <em>laxness </em>because they accurately describe how the operator has relaxed their attention focus below what is appropriate for a particular task or operating environment<em>.</em> Laxity accurately describes how highly experienced, proficient operators can lower their guard and become surprised by unexpected situations. It explains how an operator can fail to notice a deteriorating situation’s warning signs as quickly as they could and how they might experience a prolonged startle effect that inhibits timely recovery from distraction. Once they become confused and startled, they succumb to task overload, tunneled attention, and plan continuation bias. Forcibly ejected from their comfort zones, familiar habits and game plans vanish. Their situational awareness instantly changes from familiar and comfortable to chaotic and unpleasant. These effects emerge when the operator is lax, but do not indicate that they have been complacent.</p>
<p>      Using the realigned perspective that the term <em>laxity</em> provides, we reorient our analysis toward recognizing the mismatch between the attention level that the task requires and the reduced level employed by the mishap operator. Additionally, we structure our teaching to help operators align their attention level with the task phase and environment, not with their perception of the task’s ease or difficulty. This emphasis will help counteract the attention-lowering effects of familiarity and the comfort zone. Even for highly proficient operators, every final approach becomes a high-attention task, regardless of how easy the environment happens to be.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-5-lets-retire-complacency/</guid>
                    </item>
				                    <item>
                        <title>Topic 24-4: Pilot Strategies Across the Range of Scenarios</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-4-pilot-strategies-across-the-range-of-scenarios/</link>
                        <pubDate>Wed, 29 May 2024 04:40:38 +0000</pubDate>
                        <description><![CDATA[Consider the following spectrum depicting the range of situations encountered by pilots across all flights. (This graphic is from my book, Master Airline Pilot: Applying Human Factors to Rea...]]></description>
                        <content:encoded><![CDATA[<p>Consider the following spectrum depicting the range of situations encountered by pilots across all flights. (This graphic is from my book, <em>Master Airline Pilot: Applying Human Factors to Reach Peak Performance and Operational Resilience</em>, Figure 11.3, page 203.) </p>
<p>            The left end of the spectrum depicts mundane, familiar, uncomplicated flights. Envision crews flying the same flights, between the same destinations, in unchallenging conditions, day after day. They would apply familiar, proven game plans that require no modification. Flights would go exactly as planned. In relative frequency, these kinds of flights are fairly plentiful in professional aviation.</p>
<p>            On the right end of the spectrum, we have extremely rare, unanticipated, and untrained events. Crews would have no prior training or procedures, and little guidance for handling them from simulator training, ground training, or flight manuals. To handle these situations, crews would need to diagnose unique problems, innovate solutions, and coordinate specific tasks and roles. In relative frequency, these situations are extremely rare – so rare that most pilots would not encounter even a single event during their careers. These flights, however, tend to be highly consequential and potentially hazardous – for example, US Airways Flight 1549 (2009) – the Hudson River landing following dual engine failure while departing from LGA.</p>
<p>            Virtually all real-world flights fall within the middle range where crews must continuously modify their plans for unplanned and changing conditions. Starting on the left end and moving to the right, crews must adapt to conditions that are increasingly complex and severe. When flight profiles begin to stray from what was planned, we make instinctive and immediate corrections. A simple example is when a bump of turbulent air causes our right wing to rise. We instinctively counter by deflecting the flight controls to restore wings-level, stabilized flight. On the spectrum, this is described as “plans work with some force”. This strategy reliably works because all game plans include safety margins that accommodate minor deviations. </p>
<p>            As we move further to the right, we encounter conditions that cause larger deviations from our planned profile. For example, when a crew finds themselves jammed during an approach, they modify standard procedures by lowering landing gear to bleed off excess energy and reach stabilized approach parameters (note: most configuration profiles start by extending the flaps to an initial/maneuvering setting, then extending the landing gear, then extending the flaps to the landing setting). On the spectrum, this is described as “plans need significant force”. This strategy works because pilot techniques exploit available time and safety margins to mitigate these larger deviations and restore the planned flight profile.</p>
<p>            In the middle of the spectrum, we see a range labeled “Crossover Zone”. On the left half of the model, crews apply increasing levels of force to regain acceptable parameters. At some point, however, force becomes ineffective. The original game plan just won’t work. Crews need to modify their game plan, abandon it for a familiar/briefed backup game plan (like executing a go around), or innovate an unplanned backup game plan (like diverting to an airport with a longer runway to better accommodate a landing gear malfunction). The Crossover Zone illustrates how crew decision making doesn’t simply switch between familiar game plans (the left side of the spectrum) and unique innovation (the right side of the spectrum). It transitions from forcing the original game plan, to increasing the force needed to make it work, to modifying aspects of the original game plan to preserve desired objectives, and finally, to innovating an unplanned/unbriefed game plan to achieve a desirable outcome.</p>
<p>            The Crossover Zone highlights where incident crews make the misjudgments that lead to mishaps. Consider a crew that encounters unexpected conditions while maneuvering for final approach and finds themselves too fast and steep. They initially apply force (like reducing thrust and extending speed brakes). When these corrections don’t resolve their excess energy problem, they choose to apply more force. They lower landing gear, extend flaps, and steepen their flightpath. They soon realize that their corrections aren’t working. From our informed safety perspective, we clearly see the need to go around and reattempt another approach. From their rushed, quickening, tunnel-focused, in-the-moment perspective, crews often choose to continue and land. A host of biases, rationalizations, and compromises arise. They reason: We are behind schedule and a go-around will make us later – We have plenty of runway to accommodate a long rollout – We aren’t really that fast – The corrections are working and we should be effectively stabilized before landing – We’ll apply more braking after touchdown. It is only after they land that they begin to view their approach with hindsight and conclude that they should have gone around.</p>
<p>            When we accurately locate where we fall on the spectrum, we choose the proper blend of force, modification, or innovation to guide our flight toward a successful outcome. When we misjudge and choose to increase force instead of modification or innovation, we succumb to plan continuation bias, tunneled attention, and deteriorating flight profiles. Continuing our unstabilized approach example, after our crew has applied every available correction and technique to force their profile back to stabilized parameters, they seem to resign themselves to the situation. Having done everything they can, they accept the approach failure, land, and attempt to dissipate their excess energy using reverse thrust, wheel braking, and longer runway rollout.</p>
<p>            As we move further to the right on our spectrum, we encounter profiles that become unmanageable and unsalvageable. A crew that fails to detect or accept that their game plan is failing might keep applying more force even though no amount of force or pilot action will solve their flight problems. These severe events require us to abandon our original game plan and switch to a safer backup. Using our fast/steep final approach example, forcing the failing game plan would still result in landing too fast and too long and risk a runway excursion. If we recognize this possibility while on final, we would abandon our original game plan (a planned landing) and execute a familiar/trained backup option (a go around). In the heat of the moment, however, mishap crews don’t recognize this hazard. </p>
<p>            At the far right end of the spectrum are events that are so rare and unpredictable that they exceed our training and procedural guidance. They require crews to recognize the indications (a loud bang followed by aircraft control difficulty), communicate and agree on the problem (aircraft damage within the flap extension mechanism resulting in asymmetric wing lift), make time to deal with the problem (go around, but don’t change the flap settings), form the new game plan (refer to the non-normal checklist), and coordinate unique duties and roles (who flies, who runs the checklist). This severe aircraft damage would adversely affect controllability, so the crew would need to construct a unique game plan and innovate new procedures that compensate for lost or degraded systems while maximizing their chances for a favorable outcome. They might also consult outside experts through their company’s operations center or aircraft manufacturer for advice.</p>
<p>            When mishap crews fail, we conclude that if they had just followed procedures, they never would have failed. As safety professionals, we need to accept that just encouraging pilots to “follow procedures” and “go around when your approach is unstabilized” is not enough. When pilots become especially stressed, overloaded, or time-pressured, they lose reasoned decision making. This is especially important since PFs (pilots flying) can become so task saturated and overloaded that they don’t recognize when they succumb to this ill-fated mindset. This is why it is so important for PMs (pilots monitoring) to intervene and direct switching to a safer option.</p>
<p>            As a profession, we need to promote a culture that encourages self-assessment and personal awareness. As we become aware of how our mindset changes under stressful conditions, we learn to automatically activate a recovery trigger that switches us toward safer backup options. When we recognize that we are using more force, feeling more stressed, and sensing rising workload, we know it is time to switch to a safer backup plan. Pursuing the Master Class path guides pilots to recognize their early indications of bias, rationalization, and compromise. Armed with this awareness, they build personal firewalls to interdict failing trajectories. Through continuous debrief and introspection, we learn to skillfully identify our position on the event spectrum and apply the appropriate level of force, modification, abandonment, or innovation. The key parameter is appropriateness – accurately identifying what our situation requires and applying the appropriate blend of strategies to resolve the problem.</p>
]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-4-pilot-strategies-across-the-range-of-scenarios/</guid>
                    </item>
				                    <item>
                        <title>Topic 24-3: The Difference Between Simulator Training and Real-World Events</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-3-the-difference-between-simulator-training-and-real-world-events/</link>
                        <pubDate>Thu, 25 Apr 2024 21:16:11 +0000</pubDate>
                        <description><![CDATA[We rely on simulators to teach and rehearse the procedures and skills needed to handle serious emergency situations. Unavoidably, this practice sacrifices realism. The interaction of operati...]]></description>
                        <content:encoded><![CDATA[<p>      We rely on simulators to teach and rehearse the procedures and skills needed to handle serious emergency situations. Unavoidably, this practice sacrifices realism. The interaction of operational conditions makes it extremely difficult to recreate real-world complexity. From the perspective of the typical pilot, each line-flying emergency feels unique while the simulator training profiles tend to follow familiar, canned scenarios. This is not a failure of simulator training. Instead, it is a byproduct of our current training, certification, and evaluation system. As an unintended consequence, crews demonstrate less success at handling line-flying non-normal events than they do in simulator scenarios.</p>
<p>      Consider the ubiquitous V1-cut training event (trains the decision to reject or continue the takeoff at the critical decision point – the calculated V1 speed). Typically, it starts with an engine failure, fire, windshear, or a “bang”. The average pilot will experience and practice V1-cut training events well over 100 times during their career. They become quite competent at handling them. Let’s examine the reasons why. First, crews expect these failure events in the simulator, so they are already primed and ready to handle them. This reduces surprise and startle. Second, their simulator training events are scheduled well in advance. Crews review the procedures and often practice them in procedural trainers ahead of time. This elevates the procedural steps to the forefront of their working memory. Third, the onset conditions typically follow a limited number of canned profiles. Year after year, we practice these same V1-cut scenarios. This gives us the skills to complete them smoothly, accurately, and quickly.</p>
<p>      Real-world V1 events prove to be far more nuanced and complex. Consider the following summary of 15 reported V1-related, non-normal events from the NASA ASRS database from a one-year period (November 2019 – November 2020). Note: This list is copied from my book, <em>Master Airline Pilot </em>(pages 438-439):</p>
<ul>
<li>The ATC Tower controller queried the crew during takeoff prior to V1 (crew had initiated takeoff without clearance). The Captain elected to continue the takeoff. The FO was returning from a 7-month absence (Report #1769610).</li>
<li>A highly experienced Captain rejected their takeoff due to a “sudden veer” to the right. The Captain neglected to make a required “Reject” callout. Both the FO and the jumpseater were confused, but monitored and supported the Captain’s actions (Report #1761771).</li>
<li>Both pilots noticed a flock of birds on the runway approaching V1. They elected to continue. They hit about 30 birds which inflicted damage to both engines. They returned for an emergency landing (Report #1759404).</li>
<li>The crew experienced numerous engine anomalies during takeoff roll (second incident for the same aircraft on the same day). With too many indications to analyze, they decided to reject at V1 due to the accumulation of unknowns (Report #1758495).</li>
<li>The pilots became confused during taxi-out and made an intersection takeoff when full length was planned. Approaching V1, they detected minimal runway remaining, but continued their takeoff. They would not have been able to stop if they had rejected at V1 (Report #1751568).</li>
<li>The EECs (Electronic Engine Control computers) reverted to ALTN during taxi-out. The crew coordinated with maintenance, reset the EECs, and were cleared to continue. The autothrottles disengaged during takeoff as the EECs again reverted to ALTN. The crew reported startle effect, but continued their takeoff. When airborne, they experienced airspeed indication problems (Report #1748317).</li>
<li>The crew received a windshear warning at V1. They successfully rejected the takeoff (Report #1746586).</li>
<li>The crew experienced anomalous airspeed discrepancies near V1 and rejected their takeoff. Maintenance discovered mud dauber wasp blockage in pitot static system (Report #1740194).</li>
<li>The Captain/PM became tunnel-focused on possible engine underperformance and missed both the V1 and VR (rotation speed) callouts. The FO/PF made his own callouts and rotated. The Captain, who was heads-down, called for a rejected takeoff. The FO informed, “Negative, we are past V1.” The Captain pushed engines to full thrust and continued the takeoff (Report #1739089).</li>
<li>The crew reported that multiple avionics blanked during takeoff and rejected 50 knots below V1 (Report #1720363).</li>
<li>One turboprop engine rolled back, but didn’t auto-feather after V1. The crew continued their takeoff and returned for an emergency landing (Report #1715079).</li>
<li>The crew experienced multiple anomalies during rotation. While returning for an emergency landing, they had unsafe gear indications and multiple confusing electrical system anomalies (Report #1704467).</li>
<li>A spoiler warning activated at VR. The crew performed a high-speed reject. The same spoiler warning from the previous flight had been signed off (Report #1702333).</li>
<li>The crew struck a large (10′ wingspan) bird approaching V1 causing a very loud bang. They rejected the takeoff. All tires subsequently deflated (the proper functioning of a tire overheat protection feature designed to prevent tire explosion) (Report #1700045).</li>
<li>The FO lost attitude indications during takeoff into a late afternoon sun. They estimated normal pitch until airborne, then transferred control to the Captain, who had normal indications (Report #1699712).</li>
</ul>
<p>As I analyze these fifteen events, only three followed typical indications/progressions as we see them in the simulator. The remaining twelve crews encountered complex/startling/untrained indications that fell outside of anything they had probably ever practiced before. Of these, five crews failed to follow procedures or made significant errors. There might have been additional mistakes that were not documented in these self-reported summaries. While none of these events resulted in accidents, at least a third of them would be classified as failed procedures.</p>
<p>      We conclude that when crews experienced real-world events that were similar to the V1-cuts that they practiced in the simulator, they accurately followed procedures. The training worked. When scenarios strayed from canned profiles, their decisions and actions became less consistent. Despite how often they had practiced V1-cuts in the simulator, too many of these crews made significant procedural errors, experienced startle/surprise, or became confused by the indications.</p>
<p>      Another problem is one of scope. V1-cut training in the simulator centers on the moments immediately before and after reaching V1. Focusing just on engine failures, the reality is that most real-world events occur well outside of the few seconds near V1. Engine failures outside of this time window require procedural modification. For example, an engine failure that occurs during climbout passing 500’ requires that we modify the typical V1-cut takeoff profile. So, while simulator training provides excellent practice, real-world events add more dimensions that challenge our expectations and complicate our decision making.</p>
<p>      Another consideration is the distraction generated by operational complexity. Non-normal events require focused attention and crew discussion to accurately diagnose and solve the problem. This means that we’ll need to hold our train of thought as we diagnose and remedy the problem. In simulator practice, instructors typically reduce distractions or place the simulator on “freeze” to create an ideal environment for crew coordination. While flying, operational tasks and outside disruptions constantly interrupt us. ATC asks us questions or the FAs call up wanting to know what is going on.</p>
<p><strong>What skills do we need to teach pilots for handling real-world emergencies? </strong>The current system of simulator practice unintentionally leads pilots to hold different mindsets between simulator training and line flying. The most obvious split is that we expect to <em>always</em> experience non-normal/emergency events in the simulator, while we <em>almost never</em> experience them in the aircraft. In the simulator, expecting that something will go wrong primes us to mentally prepare. Before the event begins, we visualize the emergency, the indications that we should detect, and the steps we need to follow. When the instructor initiates the event, our mental preparation helps us to quickly make the mental switch from routine, normal flying to exceptional, non-normal event handling. Conversely, everyday flying promotes a mindset that subconsciously assumes everything will go normally. When something does go wrong, we often experience startle, surprise, and debilitating biases.</p>
<p>      So, what can we do to solve this mismatch? What is the Master Class skillset? We start with the realization that we handle emergencies rather well in the simulator. We just need to find a way to carry our simulator world competence into the real world. When that rare emergency happens, we need to quickly switch from our routine, normal-flying mindset to our non-normal, event-handling mindset. This switch is guided by three distinct skills. We start with recognition. We need to detect and acknowledge that something exceptional has occurred. Often, mishap crews spend too much time wondering what happened, questioning how they misdiagnosed the problem, or downplaying its severity. Next, we must accept that the non-normal event will upset our established game plan. This means that we will need to reduce our attachment to our original game plan to prevent succumbing to plan continuation bias. Even if we are established on final approach with the runway in sight, our best course of action might be to go around, sort out the problem, run checklists, and come back around for another approach. Third, we need to reorient our mental processes (detecting indications, recognizing patterns, applying meaning, and decision making) from our everyday flying mode to our emergency event handling mode. Our everyday habits and decision making may not work and may impede reaching a successful outcome. I devote several chapters to this process in my book.</p>
<p>      I theorize that most crews that mishandle line-flying, non-normal events fail to make this switch in their mindset either quickly enough or accurately enough – either individually or as a crew. An additional complication is that aviation does not follow a simple dichotomy between normal flying mindset and serious emergency mindset. It is actually a continuum that requires us to master a range of responses. We will unwrap this concept in the next discussion topic.</p>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-3-the-difference-between-simulator-training-and-real-world-events/</guid>
                    </item>
				                    <item>
                        <title>Topic 24-2: Learning to Manage Distractions More Skillfully</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-2-learning-to-manage-distractions-more-skillfully/</link>
                        <pubDate>Tue, 05 Mar 2024 23:24:07 +0000</pubDate>
                        <description><![CDATA[One notoriously unhelpful mantra within the aviation safety world is, “avoid becoming distracted”. This guidance is useless because it contradicts a core pilot responsibility. We are expecte...]]></description>
                        <content:encoded><![CDATA[<p>      One notoriously unhelpful mantra within the aviation safety world is, “avoid becoming distracted”. This guidance is useless because it contradicts a core pilot responsibility. We are expected to accurately detect and identify the source of every distraction and respond quickly and skillfully while maintaining a smooth flight profile and precise flightpath management. Moreover, some of the most consequential distractions demand our immediate attention. Notable examples are warning lights, fire alarms, and audible directives from systems like the traffic collision avoidance system (TCAS). A more accurate directive would be “accurately identify and skillfully respond to all distractions while maintaining a safe flight profile and preserving situational awareness.” Granted, this doesn’t fit neatly on a safety poster.</p>
<p>      While we are immersed in the operational flow of flying, we encounter each distraction as it happens. After we detect it, we choose either to ignore it or to respond to it. We need to decide whether that event is a trivial blip with minimal effect on our operational flow or whether it is something that has disrupted our operational flow. There isn’t a clear line between these two situations. The more deeply we study distractions, the more we understand how they affect us. In my book, <em>Master Airline Pilot,</em> I present the following range of “bird encounter” distraction events while flying down short final. They range from fleeting distractions that we immediately dismiss to persisting distractions that demand our full attention.</p>
<p> </p>
<p><em>Case 1</em>: We see the bird and watch it as it zips by. We don’t hear an impact, so we conclude that we successfully missed it. Most of us would conclude that this is, at most, a very minor distraction. If we don’t hit the bird and maintain a stabilized approach down final, we would treat it as an inconsequential event. We saw the bird, we missed it, and we maintained our attention on flying the aircraft.</p>
<p><em>Case 2</em>: We only detect the bird at the last instant. It startles us and we reflexively move the aircraft to avoid it. Not hearing an impact, we conclude that we probably missed it. Since we altered our flightpath when startled, we return our attention to flying, restore our flightpath, and continue down final, albeit a bit shaken. We would categorize this as a strong, momentary distraction. It diverts our attention from flying for a few moments, but we maintain our operational flow and continue down final without further disruption.</p>
<p><em>Case 3</em>: We see the bird at the last moment. Before we can react, it impacts somewhere behind us on the fuselage. This becomes a much stronger, startling, distraction event. Recovering from the startle, we check our engine instruments to ensure that they appear normal. We sample the cabin air for the odor of burnt bird. For a longer time period, our attention is diverted from flying. Our monitoring of the operational flow is disrupted. After recovering, we would look for cues that could provide context for where to rejoin the operational flow. We would also weigh whether to continue the approach or to go around to assess any operational consequences from the birdstrike.</p>
<p><em>Case 4</em>: We see the bird at the last moment. It impacts on the radome with a loud bang. Bird parts spread across our windscreen and obscure our forward visibility. One of the wings lodges under our wiper blade and continues flapping wildly. This would undoubtedly qualify as a significantly distracting event. Unless we recover and assess quickly, we would favor going around to regain our composure, analyze the damage, rebuild our SA, and return for another approach.</p>
<p> </p>
<p>      These four cases span the range from inconsequential to very distracting. The most important consideration is how they affect our operational flow – the sequence of planned tasks and events that we expect to occur down final. Consider the particular task of completing the Before Landing Checklist. If the bird encounter happened near the point in the operational flow where we planned to complete the checklist, the distraction might cause us to miss it. In Cases 3 and 4, we should recognize that the birdstrike has probably distracted us and that we might have missed doing something. We would look for cues about possibly missed tasks. We would either clearly recall completing the checklist or feel unsure whether we had completed it. If we have any doubt, the prudent choice is to run the checklist, even if it results in running it twice.</p>
<p>      If we don’t have an awareness while flying in-the-moment that the birdstrike has distracted us, we might just try to rejoin our operational flow based on our current position on final approach. If that position is past our usual checklist completion point, we might miss performing it. The critical difference is recognizing that we were distracted and deliberately investigating what we might have missed during the lost time. Examining this range of distractions, we can summarize their effects along three parameters.</p>
<p> </p>
<ul>
<li><em>Intensity or severity</em>: How much of our attention was diverted by the distraction?</li>
<li><em>Duration</em>: How long did the distraction last?</li>
<li><em>Operational flow disruption</em>: How different is our current position from where we were before the distraction?</li>
</ul>
<p> </p>
<p>       We may need to assess these parameters very quickly. If we decide that the event is fully understood, processed, and recovered from, we can safely continue. If not, we need to assess how much time we have available. If time is short, we should consider making extra time to process the event (go around), settle down (physically recover from the startle), and fly another approach.</p>
<p>      There isn’t a clear distinction between distractions that prove consequential and those that don’t. They feel similar as we experience them while flying in-the-moment. They both feel like flying the aircraft, dealing with events as they come up, and keeping our game plan on track. We only recognize the differences in hindsight. Our challenge is to learn how to translate hindsight clarity into skills that actively improve our present-moment awareness. We refine this skill by analyzing our encounters with distracting events and recognizing our personal biases, strengths, and weaknesses. We begin by constructing two stories comparing how we expected the flight to flow (our game plan) with what actually happened because of the distraction. Holding these two stories side-by-side, we run the timeline back and note the onset of the distraction and our reactions to it. Next, we identify the indications that were present. They may not have registered as important at the time. Maybe they felt unimportant. Maybe they felt like minor anomalies obscured by the background noise of everything else that happens while flying. Maybe we missed them entirely. We assess whether a startle effect drew our attention away from normal flightpath monitoring. We also assess personal and crew factors. Were we too relaxed or tired? Were we engaging in discretionary activities? Were we inappropriately diverting our attention? By comparing the two stories, we discover ways to realign our practice to respond more skillfully in the future. We identify monitoring techniques that would have accurately detected all of the adverse effects. We recognize decisions that would have aided our recovery. Each time we perform this kind of personal debrief analysis, we improve how skillfully we will process future distracting events.</p>
<p>      Compare the differences between three bird encounter events – missing the bird, maneuvering to avoid it, and hitting it. Imagine how each one would feel, where we would focus our attention, and how we would manage our flying. If we see and miss the bird, we might register the event subtly while easily maintaining our flightpath. We wouldn’t lose focus. Our operational flow would remain intact. For the second bird encounter where we maneuver to avoid it, we might need to divert a significant portion of our attention to recovering from startle and maneuvering the aircraft. In this encounter, the bird distracts us, but we would still remain sufficiently connected to the operational flow. For the third encounter, the bird splatting against our windscreen might completely grab our attention. This distraction would divert our attention away from the operational flow of flying the aircraft. The greater the severity of the distracting event, the more our attention becomes diverted away from holding a stabilized final approach path. As we improve our awareness of how we personally respond to each of these encounters, we continue refining our awareness management skills. When a similar event happens to us in the future, we wouldn’t experience as much startle and disruption. We’ll engage it more mindfully.</p>
<p>      Event duration is a measure of how long the distraction diverts our attention away from the operational flow. Imagine a scenario where we pass one bird while on final approach, followed immediately by another, and then another. Each time, we successfully avoid the bird, but the succession of distractions keeps our attention diverted from actively managing the operational flow. With each encounter, we attempt to recover, but another distraction immediately arises. We struggle to fully return our attention back to flying the approach. Now, imagine that during this series of bird distractions, an aircraft intrudes onto our landing runway. At what point would our SA become so deflated that we might miss this new threat? The duration of a distraction or a chain of distractions might affect our ability to restore our attention back to flying. When successive distractions prevent us from fully returning our attention to the operational flow, we should consider exiting the game plan (like going around) and resetting the operational flow.</p>
<p>      As we study our personal experiences with distractions, we may notice trends. Perhaps we discover that we tend to respond differently when we are tired or relaxed. Perhaps we lower our vigilance during the last leg of our pairing back to our home base. Perhaps we discover that events from our home life adversely affect our ability to remain focused. Perhaps we notice that we respond differently when paired with particular types of pilots. For example, I discovered that when I was flying with a particularly competent FO, I would subconsciously lower my level of vigilance. Because they were so good at detecting and mitigating any errors, I allowed myself to relax into my comfort zone. I subconsciously lowered my level of vigilance because they raised theirs. Looking deeper, I recognized that the opposite also occurred. I would raise my level of vigilance when flying with an inexperienced or especially lax FO. Aware of my personal biases, I learned to monitor my attention level more appropriately. The more we learn about ourselves, the more accurately we can improve our resilience against distractions. Sensing that we are tired signals us to increase our vigilance. Noticing that we perform better during daytime encourages us to bid for early schedules. Noticing that we are less alert during early morning flights encourages us to bid for later schedules. We should also monitor how we change as we age. We are constantly changing individuals, so we need to continuously reassess ourselves.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-2-learning-to-manage-distractions-more-skillfully/</guid>
                    </item>
				                    <item>
                        <title>Topic 24-1: The Difference of Workload Priorities Between Taxi-out and Taxi-in</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-1-the-difference-of-workload-priorities-between-taxi-out-and-taxi-in/</link>
                        <pubDate>Sun, 14 Jan 2024 22:51:51 +0000</pubDate>
                        <description><![CDATA[Observing crew behavior, we notice that pilots handle workload priorities differently during taxi-out compared with taxi-in. Since both flight phases involve identical procedures and objecti...]]></description>
                        <content:encoded><![CDATA[<p>Observing crew behavior, we notice that pilots handle workload priorities differently during taxi-out compared with taxi-in. Since both flight phases involve identical procedures and objectives, we would logically assume that they deserve comparable levels of crew attention and diligence. Instead, we notice that while taxi-out behaviors tend to follow trained standards, many crews choose to engage in discretionary tasks during taxi-in. Let’s examine the flow of immediate and future workload to understand the psychology behind this practice.</p>
<p><strong>Workflow and attention focus before taxi-out:</strong> Before taxi-out, we follow a scripted flight preparation process. The procedural sequence includes individually preparing our personal gear, reviewing flight planning products, acquiring clearances, and programming aircraft systems. We then coordinate as a crew to review the ATC clearance, verify flight documentation, complete required checklists, push back from the gate, and start the engines. Analyzing how the workflow changes across this process, notice how our attention focus begins wide and fluid and then steadily narrows until we become fully focused during aircraft taxi. By the time we begin aircraft movement, all necessary tasks are completed and all future tasks are briefed. There is little that we can do to get ahead of future workload demands.</p>
<p><strong>Workflow and attention focus during taxi-in:</strong> In many ways, our workload flow and attention focus during taxi-in are the opposite of taxi-out. During takeoff and departure, our workload and attention focus steadily decrease. During descent, approach, and landing, our workload steadily rises and our attention focus increasingly narrows. Following our highest phase of attention focus (landing), we experience a perceptible letdown in our workload (See the <em>Master Pilot Forum Topic 23-9: The Vulnerabilities of the Psychological Letdown</em> for more on this effect). Nestled within this after-landing letdown, however, airline pilots face a looming spike in their workload that begins after gate arrival. This spike is higher if we need to park the aircraft or hand it over to the next crew. These leaving-the-aircraft tasks are superimposed over the required gate arrival procedures of parking, transferring aircraft power to the APU/Jetway, engine shutdown, shutdown/parking checklists, unique gate arrival tasks, and completing required paperwork. None of these required procedures can be performed in advance, so there is no way to work ahead. The only tasks that we can potentially complete in advance are flightdeck clean-up and gathering personal items. To get ahead of these tasks, many pilots try to complete some of them while taxiing to the gate. Combine this with the psychological post-landing letdown and we see how our environment creates a feeling of “extra time” that we can use to accomplish a clean-up item or two.</p>
<p><strong>How task accomplishment drifts during taxi-in: </strong>Completing a clean-up item or two seems fairly benign, but it is vulnerable to drift over time. Consider a fairly new airline First Officer (FO) and their mindset regarding taxi-in. Following training, they embrace the need to maintain a high level of attention focus during a high Area of Vulnerability (AOV) phase of aircraft movement (See <em>Master Airline Pilot – Chapter 14 – Workload Management Techniques</em> for a detailed explanation of AOVs). They focus their attention on the taxi routing and ground threats as their Captain maneuvers the aircraft. After reaching the gate, they complete all of their required gate arrival procedures. Then, they need to gather up their gear to depart the aircraft. Long before they are finished, their Captains leave the flightdeck. They hurry to finish. Feeling behind feels uncomfortable, so they study their Captains to understand how to work more quickly. They notice that their Captains get a head start on their clean-up tasks while taxiing the aircraft. Maybe they put away their coffee mug. Then they dispose of expired flight planning paperwork, and so on. Their Captains seem to manage aircraft movement easily and arrive safely at the gate. This reinforces an illusion that accomplishing these discretionary tasks is harmless. After finishing the gate checklists, their Captains quickly complete a few remaining clean-up tasks and depart. Modeling their example, our new FO begins to experiment with the same discretionary clean-up tasks while en route to the gate. Nothing seems to go wrong and they get out of the aircraft more quickly, so their discretionary behavior is rewarded. Over time, this process reinforces itself with more time-consuming and attention-absorbing clean-up tasks. With some pilots, it becomes a challenge to see if they can complete all of their clean-up tasks before arriving at the gate.</p>
<p><strong>The forces driving discretionary tasks: </strong>As a training standard, we know that we shouldn’t be engaging in discretionary tasks during aircraft movement. Still, we feel a strong urge to do it. One driver behind this is the desire to stay ahead of future high-workload situations. Indeed, this motivation is baked into our procedures and checklists. Over time, it becomes integrated into our perceptions, pacing, and techniques. In anticipation of the demands of taxi-out, takeoff, and departure, we thoroughly plan and brief while at the gate. In anticipation of the demands of descent, approach, and landing, we thoroughly plan and brief while in cruise before top-of-descent. <em>Backload your workload </em>becomes our mantra and a cornerstone of proficiency. Working ahead to compensate for the high-workload gate arrival flight phase feels the same. Unable to perform any shutdown and checklist tasks until parked, the only tasks that we can accomplish early are flightdeck clean-up and personal equipment gathering tasks. Another driver behind this behavior is our discomfort with feeling behind. We hate feeling the need to hurry or rush to catch up. This becomes so strongly hard-wired into our practice that we constantly search for ways to avoid feeling behind. Combine the desire to get ahead of future workload with the wish to avoid feeling behind and the after-landing letdown and we feel strong motivation to engage in discretionary tasks during taxi-in.</p>
<p><strong>The latent vulnerabilities that emerge:</strong> For most flights, these discretionary actions don’t lead to problems. Virtually every time, we successfully make our way from the landing runway to our gate without mishap. What we don’t detect are the latent vulnerabilities that bubble below the surface and only emerge when particular conditions interact. Most often, these involve distractions or unanticipated changes that occur while our attention focus is diverted by discretionary tasks. These distractions and changes often prolong our startle reactions and increase the recovery time from disruptions or surprises. Anomalous events that we could normally handle successfully while we are fully attentive veer off toward undesirable outcomes while we are distracted. This is because our time to react or compensate is very short while taxiing. Ideally, we should stop the aircraft, set the parking brake, and sort out the disruption. More often, however, we optimistically press forward with the expectation that we can successfully recover from the disruption without stopping. Thanks to our experience and proficiency, we usually succeed. We might even consider ourselves lucky that we dodged that bullet. What we fail to recognize is that it was that discretionary task that set the stage in the first place. In the end, it is best not to rely on luck or perfect performance to recover from a situation that only became serious because our attention was unnecessarily diverted.</p>
<p><strong>Restoring the desired attention level during aircraft movement:</strong> Master Class pilots practice AOV attention discipline and self-evaluation. Sure, we have the skills to successfully taxi the aircraft and quickly complete discretionary clean-up tasks, but we choose not to. Anytime the aircraft is dynamically moving, we apply high levels of attention focus. Even though we believe that we can multitask, we choose not to even attempt it. In reality, there is always plenty of time to perform clean-up tasks after completing all of our required gate arrival tasks. We just need to make a conscious commitment to maintain appropriate attention standards and to model those standards with other crewmembers. To counter drift, we constantly assess our performance and look for signs that our personal practice may be drifting. This begins with seemingly harmless slips and shortcuts that help with our workload. Through introspection, we detect and correct these early deviations to restore our commitment to Master Class standards.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-1-the-difference-of-workload-priorities-between-taxi-out-and-taxi-in/</guid>
                    </item>
				                    <item>
                        <title>Topic 23-9: The Vulnerabilities of the Psychological Letdown</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-23-9-the-vulnerabilities-of-the-psychological-letdow/</link>
                        <pubDate>Mon, 18 Dec 2023 22:27:05 +0000</pubDate>
                        <description><![CDATA[Our minds crave excitement and stimulation, but we can’t sustain high levels of excitement for prolonged periods. Following minutes of high-intensity stimulus, our minds want to relax. As we...]]></description>
                        <content:encoded><![CDATA[<p>      Our minds crave excitement and stimulation, but we can’t sustain high levels of excitement for prolonged periods. Following minutes of high-intensity stimulus, our minds want to relax. As we enter this psychological letdown state, we lower our guard against potentially surprising or startling events. Because of this, the shock effect is stronger. The startle effect is intensified and it takes longer for us to recover from disruptions. Filmmakers exploit this effect in their plot lines. After a high-energy chase scene, they insert a low-intensity lull. As our minds sense that the exciting part is over, we relax our tense shoulders, sink back into our seats, grab another handful of popcorn, and lower our guard. Then, the director hits us with an unexpected zinger. It really shocks us.</p>
<p>      As pilots, this psychological effect can emerge following any high task-loaded flight segment (like shortly after takeoff, after landing, or following an intense inflight event). Let’s examine a typical example of a taxi-in event following a challenging approach and landing. As professional pilots, we are quite skillful at raising our attention level to cope with the challenges of a difficult approach and landing in adverse conditions. Bumping down a turbulent final, concentrating on our instruments, we feel rewarded to see the runway lights emerge from the gloom. We land, slow down, exit the runway, and immediately feel the urge to relax. We did it. We made it in. We unknot our shoulders and sit back in the seat while anticipating a routine taxi-in to the gate.</p>
<p>      The speed at which we can settle into this relaxed state seems to be related to our experience level. The more flight time we have and the more proficient we have become, the easier it is for us to relax following a challenging approach. When we combine this with expectation bias (expecting a non-challenging taxi-in), the stage is set for us to make an error, miss an error, or fail to recover quickly from a disruption. Following is an example event from a B-787 crew reported in the NASA ASRS program (report #1998466).</p>
<p><span style="color: #993300">Captain’s report: After landing Runway XL uneventfully, I exited runway and began taxiing as instructed by ground control. While beginning a slight left turn I noticed the aircraft didn’t respond to my tiller input. I applied brakes and the brakes did not actuate. I looked up at the hydraulics switches and noticed they were turned off. I called out to turn the hydraulic pumps back on and the First Officer complied. Once the pumps were activated, the brakes and tiller began to work again. The First Officer apologized and said he had a “brain fart”. The nose wheel crossed the double yellow line and towing was required. There was no aircraft damage and no taxiway lighting or equipment damage and there were no injuries.</span></p>
<p><span style="color: #993300">First Officer’s report: Ferry flight from ZZZ1 with no passengers – upon turning onto Taxiway X after landing at ZZZ, the First Officer inadvertently turned off the hydraulic pumps prior to the southbound turn. Upon realizing the error, the pumps were turned back on, but not before the nose wheel was outside the double solid line. There were no injuries or damage.</span></p>
<p>While this report doesn’t say, it is likely that the crew used engine anti-icing on final. Exiting the runway and experiencing a natural psychological letdown, the FO probably mistook the hydraulic switches for the engine anti-ice switches (identically shaped and colored switches on an adjoining panel) and turned them off instead. The error should have generated a Master Caution warning light, but their report does not say whether one appeared. If it did, we can add it to the list of indications missed during their psychological letdown. Additionally, the Captain appeared to be a bit slow to recognize the FO’s error as the aircraft veered off of the taxiway centerline. Luckily, they recognized the cause and got the hydraulics on in time to prevent a taxiway excursion. Unfortunately, they still required a tow-in because they felt that they were too close to the taxiway edge to maneuver safely back to the centerline.</p>
<p>      Other errors that can occur during this vulnerable relaxed state are taxiway clearance errors, configuration errors, and distraction events while performing discretionary flightdeck cleanup tasks. We rarely attribute these events to psychological letdown following high task loading flight segments. In the end, our human minds remain susceptible to these psychological vulnerabilities. We should each study this effect in ourselves and within our safety programs. Following your next challenging approach, notice if you have a tendency to relax perhaps too quickly or too much. Within our safety programs, check whether error events rise following intense flight phases. I predict that we will discover that this is a worthy topic for self-reflection and a continuation training module.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-23-9-the-vulnerabilities-of-the-psychological-letdow/</guid>
                    </item>
				                    <item>
                        <title>Topic 23-8: The Adverse Effects Created by the Comfort Zone</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-23-8-the-adverse-effects-created-by-the-comfort-zone/</link>
                        <pubDate>Mon, 06 Nov 2023 17:30:42 +0000</pubDate>
                        <description><![CDATA[Early in our flying careers, we concerned ourselves with acquiring ratings and amassing hours. After landing our final career airline job, our priorities fundamentally changed. For most prof...]]></description>
                        <content:encoded><![CDATA[<p>Early in our flying careers, we concerned ourselves with acquiring ratings and amassing hours. After landing our final career airline job, our priorities fundamentally changed. For most professional pilots, this transition starts shortly after we get hired with a major airline or at our desired career destination. Settled securely on a seniority list, our priorities shift toward managing our flight schedule, crew base, and seat upgrade. Unlike earlier career factors, all of these considerations are dependent upon forces beyond our control (company growth, the national economy, and senior pilot retirements, to name a few). With little that we can do to further our career, we tend to become less motivated toward advancing our aviation professional knowledge.</p>
<p>      Along with relaxing our career advancement goals, performing our job becomes easier. Every day, we fly the same aircraft, use the same procedures, and traverse the same routes. Repetition promotes familiarity, proficiency, and confidence. We quickly become quite competent at our jobs. As part of this process, we become more skillful at combining individual tasks into familiar flows. This achieves quicker job completion, reduced mental effort, and improved relaxation. We find our groove and settle into it. All of these conditions sound beneficial – and in many ways, they are. Unfortunately, they also generate some adverse byproducts which cultivate latent vulnerabilities.</p>
<p> </p>
<p><strong>Reduced attention – habit patterns, laxity, and lower vigilance:</strong> Performing any repetitive task promotes the development of familiar habit patterns and personalized flows. Blending individual tasks together into smooth flow patterns improves efficiency, expands our situational awareness, and aids in error detection. As part of the process, our understanding of task importance deepens. We learn which indications deserve our attention and which we can ignore. This helps us to focus our attention on the most important conditions and factors. Unfortunately, we sometimes draw inaccurate conclusions. Our understanding may become skewed toward task frequency instead of task importance or error probability. Stated another way, we unintentionally bias our priorities toward the tasks that we perform most often because they produce observable results and away from discretionary tasks that only guard against bad things happening.</p>
<p>      Another adverse drift that emerges from repetition is lower levels of task vigilance. Human Factors scientists and industry analysts often characterize this as <em>complacency</em>. Consider that this may be an inaccurate label. Complacency connotes a level of uncaring. Surveying accident/incident investigations, I haven’t encountered uncaring attitudes in mishap pilots. A more accurate term may be <em>laxity</em>, which identifies an inappropriate level of attention focus for the situation.</p>
<p>      Events rarely go astray in everyday line flying. Since bad occurrences almost never happen, paying close attention to reliable, mundane tasks feels unnecessary. Little by little, our attention level can sink as we unconsciously equate the attention needed to complete a task with the level of attention that the task deserves. Simply stated, since a task doesn’t require much attention, we don’t give it much attention. For example, compare the attention level we devote toward an engine start during our recurrent simulator training (where we expect to encounter an engine start malfunction) with a line-flying engine start (where we rarely ever see any malfunctions). Over time, settling into our comfort zone can promote a lowering of attention focus. It becomes the natural path of least resistance – a well-worn riverbed that the water of our attention naturally follows. It takes a conscious commitment on our part to mindfully monitor repetitive, rarely failing tasks. This is a fundamental perspective of the Master Class pilot mindset.</p>
<p> </p>
<p><strong>Task simplification – shortcutting techniques: </strong>As we study how our task accomplishment evolves as we gain proficiency, we typically see a drop in attention to detail. For example, the flow pattern that we use to complete our flightdeck preparation tends to shorten over time. This is often driven by past time-pressured situations. Trying to make up lost time, we sped up our preparation process. Since we knew which preflight tasks were most important, we skipped the less important ones. Nothing went wrong during those events, so our shortcutting was rewarded. These shortcuts felt more efficient. Over time, they became our everyday standard. Comfort zone and a lack of unfavorable outcomes promoted this drift and solidified the changes. It all works great – until it doesn’t. What we miss through shortcutting is that we unintentionally create latent vulnerabilities that only surface under uncommon combinations of conditions. We can’t predict these combinations in advance. Instead, we need to maintain our commitment to completing tasks accurately even when they never produce apparent benefit. It takes discipline and dedication to resist this drift and preserve the quality of our task completion.</p>
<p> </p>
<p><strong>Game plan reduction – few “go-to” plans:</strong> One of the byproducts of comfort zone is that we whittle our game plans down to a short list of favorites. These become our “go-to” profiles that we apply whenever we can. These favored game plans come with familiar sets of decisions and monitoring priorities – ready-made kits that contain all of the necessary parts. When conditions remain predictable and benign, they work very well. Problems only emerge when unexpected conditions arise and interact in ways that make these favored game plans inappropriate. When the mismatch is small, we either apply more force to push them through or accept the inevitable deviations. Most of the time, this strategy works because deviations fall within established safety margins. When conditions become extreme, unique, or complex, our favored game plans break down. Often, this breakdown occurs under time pressure and increased complexity. Mishap pilots succumb to plan continuation bias and continue forcing their failing profiles. Master Class pilots recognize the warning signs of failing game plans, watch for evidence of deteriorating profiles, and employ trigger points to switch to safer backup game plans.</p>
<p> </p>
<p><strong>Reduction of professional challenge: </strong>In highly repetitive line-flying environments, we quickly achieve proficiency. With standardization and repetition, our aviation skills reach an acceptable standard. The amount of effort we need to devote while completing our flights steadily drops. We discover that we can lower our levels of preparation, attention, and contingency planning without experiencing adverse consequences. Line flying becomes quite easy. Problems remain small and we become quite adept at solving them as they arise. This lowers our perceived need to plan for contingencies. Soon, we come to rely on our go-to game plans and solve any deviations using in-the-moment problem solving. Again, this works well when problems remain small. As complexity mounts, problems emerge. Lacking contingency preparation and briefing, we feel especially uncomfortable when scenarios exceed our comfort zone. Unpracticed at contingency thinking and unprepared with briefed backup game plans, we push harder to force deteriorating situations back toward our go-to profiles in ill-fated efforts to restore our comfort zone. Master Class pilots, however, maintain disciplined practices of contingency planning, profile monitoring, and game plan switching. Having mentally practiced thinking about and switching to contingency game plans, they easily make the transition.</p>
<p> </p>
<p><strong>Effect of the airline’s culture: </strong>Unique line cultures form within all professional aviation company environments. If the line culture leans toward lax professional standards, new pilots feel pressure to lower their professionalism to fit. If the line culture leans toward disciplined professional standards, new pilots raise their professionalism to match. Even the most well-intentioned training program and philosophy can fail to alter culture against the daily reinforcement of line-flying norms. Unfortunately, the underlying forces driving line cultures often favor relaxed comfort zones. It is a natural byproduct of our human nature. It takes committed intention and dedicated effort to instill Master Class practices that promote aviation professionalism.</p>
<p> </p>
<p><strong>Recap: </strong>Human existence encourages us to seek comfort. When we reach a point in our career path where our proficiency can allow us to relax, we are understandably drawn toward it. This is not, in itself, a bad thing. We can still choose to increase our professional wisdom while remaining psychologically relaxed and content. We just need to commit to purposeful practice, life-long learning, embracing excellence, and mindful self-reflection.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-23-8-the-adverse-effects-created-by-the-comfort-zone/</guid>
                    </item>
				                    <item>
                        <title>Topic 23-7: Calling out Risky Decisions as the Pilot Monitoring</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-23-7-calling-out-risky-decisions-as-the-pilot-monitoring/</link>
                        <pubDate>Mon, 04 Sep 2023 18:08:49 +0000</pubDate>
                        <description><![CDATA[In previous topics, I discussed scripted deviation callouts and unscripted deviation callouts. The third category is calling out risky decisions – the most challenging of Pilot Monitoring (P...]]></description>
                        <content:encoded><![CDATA[<p>In previous topics, I discussed scripted deviation callouts and unscripted deviation callouts. The third category is calling out risky decisions – the most challenging of Pilot Monitoring (PM) duties. From <em>Master Airline Pilot</em> – pages 371-373:</p>
<p><span style="color: #ff0000">Calling out a risky decision is more difficult because it highlights the differences between the PM’s opinion and the PF’s decision/game plan. It’s no longer just about factual parameters. By questioning the quality of their decisions, we risk appearing confrontational. We need to resolve these issues skillfully. Start with facts. Each pilot can perceive similar facts, yet draw different conclusions. Consider a situation where our flight is navigating around some towering thunderstorms. The PF has selected a course that flies, in our opinion, too close to a menacing buildup. We are uncomfortable and want to say something. How can we proceed?</span></p>
<ul>
<li><span style="color: #ff0000"><em>Assess the conditions:</em> Recognize that our decision is based on our experience, our perception of conditions, and our risk assessment. The PF has selected their course based on their experience, their perception of conditions, and their risk assessment. Something has generated a mismatch between our mental models. If we can uncover the source of this mismatch, we can address it directly. Perhaps the PF lacks experience with maneuvering around towering buildups (inexperience). Perhaps they are overlooking important conditions that would otherwise change their assessment (missing information). Perhaps they perceive the same conditions that we do, but weigh their importance differently (risk assessment). Maybe they are just more risk tolerant than we are (risk management). At first, we really don’t know. To restore our shared mental model, we need to initiate a discussion. A good place to start is to assemble the facts and understand how we each formed our particular mindsets. If the PF is an experienced Captain, the selected course is based on their wealth of experience and past success, so they might be right and we might be wrong. As we investigate deeper, we notice a strong crosswind component blowing toward the closer buildup. Given that additional piece of information, the selected course makes sense. We see that the PF has chosen a course that shaves closer/upwind to one buildup and further/downwind from another buildup. The selected path will probably have the smoothest ride while avoiding any hail that the further/downwind buildup may be generating. Given this information, we change our opinion and accept the PF’s decision. If, however, their chosen course is actually downwind from the close buildup, then a different consideration must be driving the PF’s decision. We notice that their chosen course tracks closer to the magenta-line flight-planned course. We conclude that the PF may be trying to shorten the flight distance to save time, even though it risks a turbulent ride. Their chosen course accepts the increased risk from the buildup to shorten the air miles flown. In our opinion, this decision is riskier, so we decide to speak up and offer a safer alternative.</span></li>
<li><span style="color: #ff0000"><em>Clearly present our reasoning:</em> We clarify the facts as we see them and organize our reasons why a different course might be better. We frame the scenario by stating that we are uncomfortable with flying so close to the buildup because of the downwind hazards. “This course seems too close to that buildup. I’m concerned about turbulence and hail. Also, the flight attendants are still up finishing the passenger service.” Ideally, the PF agrees with our reasoning and alters course. If the PF doesn’t share our concerns, we expand the discussion. We may need to add more facts to convince the PF to alter course. Perhaps their risk management has drifted over time to accept higher levels of risk. Discussions like this recalibrate our thinking and return us to a safer standard. Let’s ratchet up the scenario further. After voicing our concerns, the PF still elects to continue with the risky course. If we are the Captain, we can override the FO’s decision and direct the safer course. If we are the FO, our options are more difficult.</span></li>
<li><span style="color: #ff0000"><em>Offer a safer alternative:</em> Assuming the Captain/PF won’t accept our concerns, suggest a safer alternative. One tactic is to highlight better options. “Flying to the left of that buildup keeps us upwind of the turbulence and any hail that the buildup may be throwing out. However, if you want to stay on this heading, I’d like to have the flight attendants immediately take their seats.” If our airline is using a risk management system like Risk and Resource Management (RRM), we can include a color code. “This course puts me in the Red. About 20 degrees to the left feels safer to me.” Notice in these two statements, we stick to either factual statements or our personal opinions. This is what I feel and this is what I need. Most Captains value a cohesive flightdeck team. Most will bend to our concerns when they understand that the issue is important to us. Avoid accusatory statements and undertones. “You are flying us directly toward the severe turbulence and hail from that buildup.” “You are about to hurt some flight attendants when we hit the turbulence from that buildup.” Statements like these link their decision with bad outcomes. The implied message is, “Bad things are about to happen and they will be your fault.” If necessary, this option is still available to make our point. If the Captain is particularly headstrong and resistant to our input and we have tried persuasion without success, this may be a useful tactic to get them to do the right thing.</span></li>
</ul>
<p>Notice how PM callouts are distributed along a continuum of complexity and difficulty. On one extreme are simple scripted deviation callouts like “Airspeed” and “Sink Rate”. These are factual, clearly modeled in our manuals, and easily understood. Near the middle of our continuum are unscripted deviation callouts like “Come Right. We are drifting toward the edge of the taxiway.” These callouts cover a wide range of events that require us to detect unfavorable trends and highlight emerging hazards. They require judgment, communication skills, and effective CRM. An important aspect is time available. The earlier we identify the trend, the more time we will have to rebuild a shared mental model, agree on a remedy, and modify our path. The most common problem we see in these kinds of mishaps is reluctance to speak up (see the previous Topic 23-6: PM Callouts and Consent by Silence). On the other end of our callout continuum are callouts on risky decisions. Here the “facts” are usually known by both pilots, but they disagree on the significance of the conditions and the probability of hazardous outcomes. As FOs, we are tasked to initiate a process that may lead to direct intervention. In many mishap events, FOs reported discomfort with their PFs’ risky decisions, but lacked the skills to communicate their concerns effectively or waited until too late (while hoping that their Captains would recognize their error and self-correct their unfavorable path). This is why it is important to mentally rehearse these types of scenarios. Rehearsal reduces hesitation and improves callout word choice. This leads to better CRM and event outcomes.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-23-7-calling-out-risky-decisions-as-the-pilot-monitoring/</guid>
                    </item>
				                    <item>
                        <title>Topic 23-6: PM Callouts and Consent by Silence</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-23-6-pm-callouts-and-consent-by-silence/</link>
                        <pubDate>Mon, 07 Aug 2023 17:40:10 +0000</pubDate>
                        <description><![CDATA[A commonly occurring feature we see in event mishap reporting is Pilots Monitoring (PMs) admitting that they were torn between whether to verbalize their concern about the Pilot Flying’s (PF...]]></description>
                        <content:encoded><![CDATA[<p>A commonly occurring feature in mishap event reports is Pilots Monitoring (PMs) admitting that they were torn between verbalizing their concern about the Pilot Flying’s (PF’s) actions (making a callout) and remaining silent. Too often, they choose silence. This occurs much more often with PM/FOs and PF/Captains. The reasons include deference to experience, flightdeck authority gradient, the desire to maintain flightdeck rapport, indecision about what to say, role synchronization, reluctance to say something that would be recorded, and plan continuation bias.</p>
<p><strong>Deference to experience:</strong> With the high retirement rate and strong growth of the airline industry, the typical time that a career pilot serves in the FO seat before upgrading to Captain has significantly shortened. This means that in almost all situations, Captains have far more experience in the aircraft and the airline’s operations than their FOs. As complexity rises and time available shrinks, FOs often suppress their concerns about undesirable trends. “The Captain has been here a lot longer than me. They must know what they are doing.” “The Captain must see something that I am missing or misunderstanding. They don’t seem as concerned as me.”</p>
<p><strong>Authority gradient</strong>: Authority gradient is the perceived hierarchical distance between people in different organizational positions. Think of it as the slope that people perceive separating themselves and others. A new-hire pilot flying with a check pilot perceives a very steep slope. They might be quite reluctant to voice their concerns or make callouts. Consider how much steeper it becomes when the Captain is our chief pilot (with the power to fire us). As FOs gain time and experience, this slope eases. Imagine how gentle the slope would appear to an FO on the cusp of Captain upgrade while flying with a newly upgraded Captain.</p>
<p><strong>The desire to maintain flightdeck rapport:</strong> Flightdeck environments evolve dynamically. Consider how the environment changes from when a crew flies their first flight together compared with how it might feel after three days of flying (and socializing) together. Consider other combinations: two pilots who dislike each other, two who are close friends, or two who are family members. Regardless of the combination, we all want the flightdeck environment to remain cordial and friendly. Making a callout can feel like we are challenging the other pilot’s abilities or professionalism – like we are disrupting (or even violating) that rapport. This reluctance also affects Captains, who worry that speaking up will make them seem too authoritarian or adversely affect crew effectiveness.</p>
<p><strong>Indecision about what to say: </strong>In my book, I discuss the various types of callouts. The easiest category is procedurally scripted callouts. These are the standard callouts that are clearly directed in our manuals. They generally highlight exceedances of parameters with callouts like “AIRSPEED” or “GLIDESLOPE”. The next level is unscripted callouts. These callouts highlight undesirable trends that may not be specifically addressed in our procedures. For example, if the Captain is distracted by an inside-the-flightdeck task during taxi movement, the FO/PM might state “We are drifting, we need to come right.” Notice that the specific wording of this callout is not specified in our manuals. Also, these kinds of callouts may include descriptive or directive wording. In the book, I highlight a third category of <em>risky decisions </em>where we challenge the game plan that the PF has chosen. The reason why these cases are not addressed in our manuals is that they are too numerous and variable. It would prove onerous and unrealistic to compile a comprehensive list. Additionally, the wording is highly dependent on time available. A callout about an unstabilized approach at 200’ might include descriptive wording, while a callout in the flare might just direct “GO AROUND”. The bottom line is that the more variable and time-sensitive the situation, the more reluctant many PMs feel to speak up.</p>
<p><strong>Role synchronization:</strong> I addressed this in Topic 23-4: The Importance of Staying Out-of-Synch. Under stress, many PMs find themselves aligning their perspective with the PF’s. They essentially become non-flying PFs. As they become sympathetic with the PF’s dilemma, they fall into the same “work harder to make the game plan work out” mindset. Since both pilots are thinking the same thoughts, speaking up feels unnecessary.</p>
<p><strong>Reluctance to say something that would be recorded:</strong> We all know that everything we say on the flightdeck is preserved on the CVR (Cockpit Voice Recorder). Mishap pilots have reported that they were either reluctant to highlight the PF’s errors or were unsure about which words to use. Often this manifests as pointing or hinting. Overloaded PFs report never seeing the PM point or hearing their hints.</p>
<p><strong>Plan continuation bias:</strong> A strong psychological bias driving flightdeck silence is plan continuation bias. The more complex, task-saturated, and time-constrained pilots feel, the less likely they are to abandon their game plan and start over. They find themselves working harder to force the progress of their existing game plan. When PMs look over at the PF and see that they are fully engaged and committed to making the game plan work out, they feel reluctant to disrupt that determination. While working harder to make a familiar game plan work is appropriate in many situations, there comes a point where we need to recognize the failing trend, abort the current game plan, and reset the situation. PM silence tends to emerge in event reports with statements like:</p>
<p>      - “I was just about to say something.”</p>
<p>      - “I was so surprised by their actions, I didn’t know what to say.”</p>
<p>      - “The go-around callout was on the tip of my tongue during that whole approach.”</p>
<p>      - “In hindsight, I’m embarrassed that I didn’t speak up.”</p>
<p>All of these contributors to PM silence emerge from flawed assumptions. In interviews with mishap crews, we commonly hear PMs express statements starting with “I thought…”, “I assumed…”, “They already knew…”, and “They must have known…”. Grounded in these flawed assumptions and fueled by flightdeck culture, both PFs and PMs follow the path of least resistance into silence.</p>
<p>This social dynamic becomes especially important because task-overloaded PFs need an especially strong stimulus to break free from their tunneled attention and plan continuation bias. That is why we direct PMs to make strong callouts like “GO AROUND”. Lacking this decisive callout, PFs interpret silence as consent to continue. The irony is that PFs later report that they wished their PMs had been more decisive. While consumed by the many details within the situation’s complexity, PFs lacked the big picture of the game plan’s deterioration. This is why we rely on PMs to maintain the larger perspective by assessing not only the accuracy of the PF’s flying, but also the quality, trajectory, and appropriateness of their game plan.</p>
<p>As PMs, it is better to make the callouts and clearly communicate our concerns as early as possible during a deteriorating situation. This gives the crew the most time to mentally step back, assess the trajectory of the game plan, and choose either to continue or to abort it. This promotes a mechanistic approach to callouts. We should never doubt whether we should make a callout. Any time we detect exceedances or adverse trends, we make the callout. Our discretion lies with timing and wording. The earlier we start our callouts, the more time we both have to assess and discuss the situation. Additionally, starting early allows the PM to escalate the callouts if the PF doesn’t comply. More on that in a later discussion topic.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-23-6-pm-callouts-and-consent-by-silence/</guid>
                    </item>
				                    <item>
                        <title>Topic 23-5: Event Complexity and Recognizing Wrongness</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-23-5-event-complexity-and-recognizing-wrongness/</link>
                        <pubDate>Wed, 12 Jul 2023 19:46:42 +0000</pubDate>
                        <description><![CDATA[As complexity rises, two effects emerge. First, the number of event paths or possible scenarios increase. This means that the probability that the event will follow a familiar, well-practice...]]></description>
                        <content:encoded><![CDATA[<p>As complexity rises, two effects emerge. First, the number of event paths or possible scenarios increases. This means that the probability that the event will follow a familiar, well-practiced trajectory decreases. Second, the probability that we may encounter an unexpectedly extreme or hazardous event trajectory increases. Simply stated, as complexity rises, more potential scenarios can happen, and more of them may lead to a mishap. This widening range of scenarios can exceed our planning, situational awareness, and imagination. One coping skill is to monitor for the feeling of <em>wrongness. </em>From <em>Master Airline Pilot, </em>page 210:</p>
<p> </p>
<p><span style="color: #0000ff"><strong>11.5 IDENTIFYING COMPLEX PROBLEMS</strong></span></p>
<p><span style="color: #0000ff">Now that we have examined some of the cases and strategies where complex problems are inappropriately treated like familiar, simple situations, we will focus on how to identify and solve complex problems. Consider the following list of characteristics that imply complexity.</span></p>
<p> </p>
<ul>
<li><span style="color: #0000ff">Something happens that is new to us.</span></li>
<li><span style="color: #0000ff">It feels important.</span></li>
<li><span style="color: #0000ff">It doesn’t match the current game plan.</span></li>
<li><span style="color: #0000ff">There are significant consequences if the wrong choice is made.</span></li>
<li><span style="color: #0000ff">It falls outside of what we expected to happen.</span></li>
<li><span style="color: #0000ff">It involves coordination with crewmembers or outside agencies.</span></li>
<li><span style="color: #0000ff">It presents unclear or uncertain outcomes.</span></li>
<li><span style="color: #0000ff">It adds more complexity to a situation.</span></li>
</ul>
<p> </p>
<p><span style="color: #0000ff"><strong>11.5.1 Recognizing Wrongness</strong></span></p>
<p><span style="color: #0000ff">While this list seems wide-ranging, all of the points share one common characteristic – they describe events or characteristics that fall outside of our future SA prediction. Since they don’t match what we expect to see, they feel wrong. Remember that we continuously envision a story of how we expect the flight to unfold. This story is like a movie formed from our experience of how our game plan progressed in past flights and how we adjusted it to fit with conditions. If we were to freeze our aircraft like we can in a simulator, we could describe how we would expect the flight to progress from that point on. When we detect events that fall outside of our story, they feel wrong. We can classify wrongness in three categories.</span></p>
<p> </p>
<ol>
<li><span style="color: #0000ff"><em>Small blips: </em>We expect to see many subtle variations. They contain a small amount of wrongness. We either dismiss or correct them.</span></li>
<li><span style="color: #0000ff"><em>Events requiring quick corrections: </em>As we ratchet up the intensity of these variations, events start to clearly feel wrong. They still remain within the safety margin of our game plan, so they shouldn’t generate adverse consequences. We only need to make quick, corrective decisions to preserve our game plan.</span></li>
<li><span style="color: #0000ff"><em>Significant events requiring reasoned decision making: </em>Significant events feel very wrong. These events make us sit up and pay attention. Their complexity requires us to apply well-reasoned decision making. It feels like our current game plan is failing, so we need to modify or replace it.</span></li>
</ol>
<p> </p>
<p>As Master Class pilots, we value our ability to assess the level of wrongness within our situation. This sense arises from our subconscious as a feeling, not as a conscious thought. We know that a rising sense of wrongness often means that our game plan may no longer align with evolving conditions – that unknown conditions or interactions between conditions have degraded our situational awareness. We respond by changing our perspective to view our game plan more skeptically.</p>
<p>The hazard with pilots discounting or misjudging their felt sense of wrongness is that they tend to cling to their original game plan despite rising indications that it is failing (plan continuation bias). They increase the force necessary to push their familiar, but failing, game plan through to its conclusion.</p>
<p>Our Master Class response when detecting wrongness is to increase our vigilance for counterfactuals. This means that we need to assign similar weighting both to signs that our game plan is failing and to signs that our game plan is remaining valid.</p>
<p>Even after detecting wrongness, a common problem we see is pilots devoting too much time diagnosing their unfavorable trend or berating themselves for missing the rise of adverse conditions. This highlights another Master Class skill – assessing time available. If we have ample extra time or can make more time by altering our flight profile, then we can employ a deliberative, CRM-assisted, decision-making process. If not, and if the scenario continues to veer off from our expected trajectory, the safer course of action may be to abort the game plan, diagnose the conditions, and try again.</p>
<p>Our ability to sense wrongness evolves from both experience and personal introspection. This improves our awareness of our feelings and our responses to them. After encountering a difficult event, we debrief ourselves and our crew:</p>
<p>            - What was the first point on our event timeline when we sensed wrongness?</p>
<p>            - What was our first, instinctive reaction to this wrongness?</p>
<p>            - Did we assess time available before reacting or launching into deliberative decision making?</p>
<p>            - Was our assessment of the level of wrongness accurate?</p>
<p>            - Did we choose the best course of action (dismiss the sense of wrongness as trivial, make corrections to our game plan to accommodate the emerging conditions, or abort our game plan for a safer option)?</p>
<p>            - What can we do to refine and calibrate our wrongness detector?</p>
<p>By repeatedly evaluating and refining our experience of wrongness, we develop the skills to manage the most appropriate response to rising complexity.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/discussion-topics/">Discussion Topics</category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-23-5-event-complexity-and-recognizing-wrongness/</guid>
                    </item>
							        </channel>
        </rss>
		