<?xml version="1.0" encoding="UTF-8"?>        <rss version="2.0"
             xmlns:atom="http://www.w3.org/2005/Atom"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
             xmlns:admin="http://webns.net/mvcb/"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <channel>
            <title>
									MasterAirlinePilot.com Forum - Recent Posts				            </title>
            <link>https://www.masterairlinepilot.com/community/</link>
            <description>MasterAirlinePilot.com Discussion Board</description>
            <language>en-US</language>
            <lastBuildDate>Thu, 23 Apr 2026 12:09:44 +0000</lastBuildDate>
            <generator>wpForo</generator>
            <ttl>60</ttl>
							                    <item>
                        <title>Topic 24-5: Let’s retire “Complacency”</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-5-lets-retire-complacency/#post-36</link>
                        <pubDate>Mon, 19 Aug 2024 19:50:01 +0000</pubDate>
                        <description><![CDATA[Applying the label complacency is common across the fields of aviation psychology and human factors. The term distorts our perspective so that we tend to recognize it in any incident when op...]]></description>
<content:encoded><![CDATA[<p>      Applying the label <em>complacency</em> is common across the fields of aviation psychology and human factors. The term distorts our perspective so that we tend to recognize it in any incident where operators fail to act effectively to stop a mishap’s trajectory. It feels very useful because it seamlessly connects an undesirable outcome with a cause – that the operator’s complacency led them to miss a warning sign that they should have seen, to miscomprehend a sign that they did see, to miss making a decision they should have made, or to fail to perform actions they should have taken. Complacency also exaggerates the gap between our accepted performance standards and the “substandard behaviors” that the operators apparently demonstrated. It fits deductive “if/then” logic. <em>If</em> the crew hadn’t been complacent, <em>then</em> they would have detected the deteriorating conditions, recovered to a more favorable trajectory, and avoided the mishap. Finally, the term provides a sense of closure when explaining an event. It allows us to hold the <em>complacent</em> <em>operators</em> completely and exclusively responsible for the mishap, even though such simple cause-and-effect chains never exist within complex systems.</p>
<p>      As our use of the label <em>complacency</em> has spread, it has neither advanced our understanding of mishap evolution nor furthered our progress toward aviation safety. This is because it relies on the flawed bad apple theory – that these particular operators (bad apples) are fundamentally flawed and that their removal or rehabilitation will prevent both the cause and the undesired outcome from recurring. This is rarely the case. Also, the complacency label imputes a level of carelessness to the operators’ motivations and decision making. In over 20 years of incident and mishap investigation, I have never encountered a pilot who felt careless about their flying, their profession, or their commitment to safe operations. The label of complacency did not fit their mindsets or actions. In the end, investigators apply the term simply because they find it convenient, familiar, and uncomplicated, not because it accurately describes the cause or evolution of the mishap.</p>
<p>      Assuming that we significantly reduce our use of <em>complacency</em>, we need to apply suitable replacements. As we look deeply into the underlying causes of mishap events that we currently attribute to complacent behaviors, we find the common factor of <em>attention level</em>. This is especially evident with experienced, proficient operators. I addressed this effect in a past discussion topic (<a href="https://www.masterairlinepilot.com/community/discussion-topics/topic-23-8-the-adverse-effects-created-by-the-comfort-zone/#post-25">Topic 23-8 on the Master Pilot Forum</a>) and in my book, <em>Master Airline Pilot: Applying Human Factors to Achieve Peak Performance and Operational Resilience</em>. As proficient pilots become more comfortable and familiar with performing their tasks, they don’t feel the need to apply as much attention. The task doesn’t feel difficult, so it doesn’t seem to require as much mental focus. Over time, the level of attention devoted to completing familiar tasks drops. Resting snugly in their comfort zone, operators automate repetitive tasks. Manual tasks are performed through muscle memory. Cognitive tasks apply well-worn, reliable game plans. A mismatch develops between the attention that the operator feels they need to complete a task and the attention level that is appropriate for that task. As an aviation example, imagine a highly proficient pilot who repeatedly gazes out their side window while flying down short final, or a monitoring pilot who does the same while the pilot flying lands the aircraft. While both of these examples indicate low attention focus, they don’t necessarily show complacency. Anecdotal evidence suggests that high proficiency and a long history of successful flying tend to widen this attention mismatch. Laxity doesn’t cause failure. Instead, it creates a latent vulnerability that lingers unseen below the surface. Everything works out fine, flight after flight, year after year – until one day, it doesn’t.</p>
<p>      We need terms to classify the mishap events that emerge when an operator applies an inappropriately low level of attention to the task at hand. These new terms shouldn’t imply negative motivations like unconcern, neglect, negligence, carelessness, or sloppiness. They should focus only on the attention gap. I suggest using <em>laxity</em> or <em>laxness</em> because they accurately describe how the operator has relaxed their attention focus below what is appropriate for a particular task or operating environment. Laxity describes how highly experienced, proficient operators can lower their guard and become surprised by unexpected situations. It explains how an operator can fail to notice a deteriorating situation’s warning signs as quickly as they otherwise could, and how they might experience a prolonged startle effect that inhibits timely recovery from distraction. Once confused and startled, they succumb to task overload, tunneled attention, and plan continuation bias. Forcibly ejected from their comfort zones, they lose their familiar habits and game plans. Their situational awareness instantly changes from familiar and comfortable to chaotic and unpleasant. These effects emerge when the operator is lax, but they do not indicate that the operator has been complacent.</p>
<p>      Using the realigned perspective that the term <em>laxity</em> provides, we reorient our analysis toward recognizing the mismatch between the attention level that a task requires and the reduced level that the mishap operator employed. Additionally, we structure our teaching to help operators align their attention level with the task phase and environment, not with their perception of the task’s ease or difficulty. This emphasis helps counteract the attention-dulling effects of familiarity and the comfort zone. Every final approach becomes a high-attention task, regardless of how proficient the pilot is or how easy the environment happens to be.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-5-lets-retire-complacency/#post-36</guid>
                    </item>
				                    <item>
                        <title>Topic 24-4: Pilot Strategies Across the Range of Scenarios</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-4-pilot-strategies-across-the-range-of-scenarios/#post-35</link>
                        <pubDate>Wed, 29 May 2024 04:40:38 +0000</pubDate>
                        <description><![CDATA[Consider the following spectrum depicting the range of situations encountered by pilots across all flights. (This graphic is from my book, Master Airline Pilot: Applying Human Factors to Rea...]]></description>
<content:encoded><![CDATA[<p>Consider the following spectrum depicting the range of situations encountered by pilots across all flights. (This graphic is from my book, <em>Master Airline Pilot: Applying Human Factors to Achieve Peak Performance and Operational Resilience</em>, Figure 11.3, page 203)</p>
<p>            The left end of the spectrum depicts mundane, familiar, uncomplicated flights. Envision crews flying the same flights, between the same destinations, in unchallenging conditions, day after day. They would apply familiar, proven game plans that require no modification. Flights would go exactly as planned. In relative frequency, these kinds of flights are fairly plentiful in professional aviation.</p>
<p>            On the right end of the spectrum, we have extremely rare, unanticipated, and untrained events. Crews would have no prior training or procedures, and little guidance for handling them, whether from simulator training, ground training, or flight manuals. To handle these situations, crews would need to diagnose unique problems, innovate solutions, and coordinate specific tasks and roles. In relative frequency, these situations are extremely rare – so rare that most pilots would not encounter even a single event during their careers. These flights, however, tend to be highly consequential and potentially hazardous – for example, US Airways Flight 1549 (2009), the Hudson River landing following a dual engine failure while departing from LGA.</p>
<p>            Virtually all real-world flights fall within the middle range where crews must continuously modify their plans for unplanned and changing conditions. Starting on the left end and moving to the right, crews must adapt to conditions that are increasingly complex and severe. When flight profiles begin to stray from what was planned, we make instinctive and immediate corrections. A simple example is when a bump of turbulent air causes our right wing to rise. We instinctively counter by deflecting the flight controls to restore wings-level, stabilized flight. On the spectrum, this is described as “plans work with some force”. This strategy reliably works because all game plans include safety margins that accommodate minor deviations. </p>
<p>            As we move further to the right, we encounter conditions that cause larger deviations from our planned profile. For example, when a crew finds themselves high and fast during an approach, they modify standard procedures by lowering the landing gear early to bleed off excess energy and reach stabilized approach parameters (note: most configuration profiles start by extending the flaps to an initial/maneuvering setting, then extending the landing gear, then extending the flaps to the landing setting). On the spectrum, this is described as “plans need significant force”. This strategy works because pilot techniques exploit available time and safety margins to mitigate these larger deviations and restore the planned flight profile.</p>
<p>            In the middle of the spectrum, we see a range labeled “Crossover Zone”. On the left half of the model, crews apply increasing levels of force to regain acceptable parameters. At some point, however, force becomes ineffective. The original game plan just won’t work. Crews need to modify their game plan, abandon it for a familiar/briefed backup game plan (like executing a go around), or innovate an unplanned backup game plan (like diverting to an airport with a longer runway to better accommodate a landing gear malfunction). The Crossover Zone illustrates that crew decision making doesn’t rely exclusively on either familiar game plans (the left side of the spectrum) or unique innovation (the right side of the spectrum). It transitions from forcing the original game plan, to increasing the force needed to make it work, to modifying aspects of the original game plan to preserve desired objectives, and finally, to innovating an unplanned/unbriefed game plan to achieve a desirable outcome.</p>
<p>            What makes the Crossover Zone so important is that it highlights where incident crews make the misjudgments that lead to mishaps. Consider a crew that encounters unexpected conditions while maneuvering for final approach and finds themselves too fast and steep. They initially apply force (like reducing thrust and extending speed brakes). When these corrections don’t resolve their excess energy problem, they choose to apply more force. They lower the landing gear, extend flaps, and steepen their flightpath. They soon realize that their corrections aren’t working. From our informed safety perspective, we clearly see the need to go around and reattempt the approach. From their rushed, quickening, tunnel-focused, in-the-moment perspective, crews often choose to continue and land. A host of biases, rationalizations, and compromises arise. They reason: We are behind schedule and a go-around will make us later – We have plenty of runway to accommodate a long rollout – We aren’t really that fast – The corrections are working and we should be effectively stabilized before landing – We’ll apply more braking after touchdown. It is only after they land that they view their approach with hindsight and conclude that they should have gone around.</p>
<p>            When we accurately locate where we fall on the spectrum, we choose the proper blend of force, modification, or innovation to guide our flight toward a successful outcome. When we misjudge and choose to increase force instead of modification or innovation, we succumb to plan continuation bias, tunneled attention, and deteriorating flight profiles. Continuing our unstabilized approach example, after our crew has applied every available correction and technique to force their profile back to stabilized parameters, they seem to resign themselves to the situation. Having done everything they can, they accept the approach failure, land, and attempt to dissipate their excess energy using reverse thrust, wheel braking, and longer runway rollout.</p>
<p>            As we move further to the right on our spectrum, we encounter profiles that become unmanageable and unsalvageable. A crew that fails to detect or accept that their game plan is failing might keep applying more force even though no amount of force or pilot action will solve their flight problems. These severe events require us to abandon our original game plan and switch to a safer backup. Using our fast/steep final approach example, forcing the failing game plan would still result in landing too fast and too long and risk a runway excursion. If we recognize this possibility while on final, we would abandon our original game plan (a planned landing) and execute a familiar/trained backup option (a go around). In the heat of the moment, however, mishap crews don’t recognize this hazard.</p>
<p>            At the far right end of the spectrum are events that are so rare and unpredictable that they exceed our training and procedural guidance. They require crews to recognize the indications (a loud bang followed by aircraft control difficulty), communicate and agree on the problem (aircraft damage within the flap extension mechanism resulting in asymmetric wing lift), make time to deal with the problem (go around, but don’t change the flap settings), form the new game plan (refer to the non-normal checklist), and coordinate unique duties and roles (who flies, who runs the checklist). This severe aircraft damage would adversely affect controllability, so the crew would need to construct a unique game plan and innovate new procedures that compensate for lost or degraded systems while maximizing their chances for a favorable outcome. They might also consult outside experts through their company’s operations center or the aircraft manufacturer for advice.</p>
<p>            When mishap crews fail, we conclude that if they had just followed procedures, they never would have failed. As safety professionals, we need to accept that simply encouraging pilots to “follow procedures” and “go around when your approach is unstabilized” is not enough. When pilots become especially stressed, overloaded, or time-pressured, they lose reasoned decision making. This is especially important since the PF (pilot flying) can become so task saturated and overloaded that they don’t recognize when they succumb to this ill-fated mindset. This is why it is so important for the PM (pilot monitoring) to intervene and direct switching to a safer option.</p>
<p>            As a profession, we need to promote a culture that encourages self-assessment and personal awareness. As we become aware of how our mindset changes under stressful conditions, we learn to activate a recovery trigger that switches us toward safer backup options. When we notice that we are using more force, feeling more stressed, and sensing rising workload, that trigger should automatically prompt the switch to a safer backup plan. Pursuing the Master Class path guides pilots to recognize their early indications of bias, rationalization, and compromise. Armed with this awareness, they build personal firewalls to interdict failing trajectories. Through continuous debrief and introspection, we learn to skillfully identify our position on the event spectrum and apply the appropriate level of force, modification, abandonment, or innovation. The key parameter is appropriateness – accurately identifying what our situation requires and applying the appropriate blend of strategies to resolve the problem.</p>
]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-4-pilot-strategies-across-the-range-of-scenarios/#post-35</guid>
                    </item>
				                    <item>
                        <title>Situation 24-3: Mismatch Between Simulator Training and Real-World Events</title>
                        <link>https://www.masterairlinepilot.com/community/event-discussion/situation-24-3-mismatch-between-simulator-training-and-real-world-events/#post-34</link>
                        <pubDate>Sat, 04 May 2024 23:09:21 +0000</pubDate>
                        <description><![CDATA[While simulator training provides an invaluable resource for training rare, abnormal, and emergency events, it can unintentionally foster mindsets and biases that interfere with our successf...]]></description>
<content:encoded><![CDATA[<p>While simulator training provides an invaluable resource for training rare, abnormal, and emergency events, it can unintentionally foster mindsets and biases that interfere with our successful handling of real-world situations. Following are two events from the NASA ASRS database that demonstrate this mismatch. The first (ASRS event #1691666) is from a crew that misapplied their windshear response procedures because of the training scenarios used in simulator training.</p>
<p> </p>
<p><span style="color: #993300">Pilot Flying narrative: …The windy conditions had been a subject of discussion and mentioned during the briefing of the arrival and approach. During the descent we had some light chop and the surface reports were easing up as we made the descent on the RNAV Approach. We were fully configured, on speed at Vref +15, with the runway in sight and the autopilot off.</span></p>
<p><span style="color: #993300"><em>At about 1,500 ft. AGL on final, we received a “windshear ahead” warning. Since it was a lot smoother ride than anticipated and I didn’t have any other cues about wind shear with either the airspeed or vertical speed, it took me a couple of seconds to interpret the warning, during which time it went away. Since we remained on speed and profile after the warning went away, I continued the approach and landed.</em></span></p>
<p><span style="color: #993300"><em>In debriefing the approach, we both agreed that a go around should have been initiated, but that a lack of other cues was a factor in a few moments of hesitation about whether initiating a go around was necessary.</em> I felt like we otherwise had good monitoring, cross checking, and communication about the aircraft state throughout the arrival and approach. <em>So often, wind shear training in the simulator is accompanied by the cues that help us identify the onset of wind shear: wind shift, airspeed, vertical speed, etc.</em> I need to brief or at least tell myself to be ready to go around when directed by the predictive computer.</span></p>
<p> </p>
<p>The reporter brings up important points. We use simulator windshear training profiles to teach the full range of indications and procedures. To get the most from our limited simulator time, airlines pack a wide range of experiences and stimuli into a single training scenario. We even instruct pilots to verbally announce when they would initiate their go around from increasing turbulence, but to delay their actual recovery maneuver until they receive the automated windshear warning (when the windshear conditions have progressed to the most challenging state). While this trains us to handle the worst possible windshear events, it unintentionally instills a mindset that windshear events start with turbulence, intensify into flightpath deviations, and finally progress to an automated warning as we enter severe windshear conditions. In this case, the pilots got the automated warning first while experiencing “a lot smoother ride than anticipated”. While they were mentally processing this mismatch from their training, the audible warning ceased. At this point, it felt natural for the pilots to classify the warning as an anomaly or a transient condition, so they continued to land. Procedurally, however, a windshear warning on final approach requires an automatic, non-discretionary go around.</p>
<p>      Another factor is the strong incentive to land from each real-world flight versus our experience in simulator training where going around feels much more appropriate. The vast majority of daily flights end with landing, while a large percentage of simulator flights end with a go around or missed approach. Going around from a real-world flight tends to feel a bit like failure – and pilots hate failing to get their passengers to their destinations. This creates a strong incentive to rationalize landing.</p>
<p> </p>
<p>In this second situation (ASRS event #1876901), a crew discovers a fuel imbalance which they quickly diagnose as a fuel leak.</p>
<p> </p>
<p><span style="color: #993300">Captain’s narrative: I operated Aircraft X ZZZ-ZZZ1 on DATE with a suspected fuel leak that resulted in an engine shutdown and diversion to ZZZ2. I was the Pilot Monitoring and the First Officer was the Pilot Flying on this flight. Around the ZZZ3 area the left center tank low pressure lights flickered and I turned the left center pump off with about 400 pounds remaining in the center tank. I didn’t see a fuel imbalance between the left and right tanks at this time. Several minutes later the center fuel tank was empty and I turned the right center pump switch off. In addition at this time I didn’t notice anything abnormal about the cross feed valve selector or light. At approximately 70 miles SE of ZZZ2, <em>I heard the First Officer say we had a fuel imbalance and looked over to see the fuel IMBAL light illuminated on the right fuel tank. At this point I noted a 1,000 pound fuel imbalance and that in my experience the fuel in the right tank was decreasing at an abnormally fast rate.</em> <em>It had only been around 10 minutes since I turned off the center fuel tanks. At this point the First Officer and myself thought a fuel leak was plausible due to a 1,000 pound fuel imbalance occurring in around 10 minutes and observing an abnormally high rate of fuel decreasing from the right tank. I proceeded to run the fuel leak-engine QRH and contacted the flight attendants to request one of them check for a fuel leak/mist coming from the back of the right (#2) engine.</em> <em>While running the QRH, I felt time pressures to stop the fuel imbalance before it led to adverse control issues. At step 5, I recorded the total fuel and time (I don’t remember what I recorded) and proceeded to the condition statement in step 6. After reading the condition statements and based on the abnormally high rate of fuel decrease in the right tank I proceeded to step 7. 
At this point I thought I confirmed an engine fuel leak because we were now at 1,200 pounds imbalance in 10-15 minutes. Far greater than the 500 pounds in less than 30 minutes that the QRH states. By this time the FA reported not seeing any fuel leaking from behind the right (#2) engine. Knowing that we were going to shut down the #2 engine I requested priority handling and requested a lower altitude.</em> In addition we requested vectors to ZZZ2. By the time we shut down the #2 engine we had a 1,400 pound fuel imbalance. I’d like to add that after shutting down the #2 engine the QRH calls for the cross feed selector to be opened. The cross feed valve opened normally with no abnormal indications and closed normally with no abnormal indications several minutes later when we decided to even out the imbalance by burning fuel from the left tank. .... <em>While rereading the Fuel leak-engine QRH step 6 after the event, I realize that my decision to proceed to step 7 was based on what I read the step 6 condition statement to say of “the fuel quantity is decreasing at an abnormal rate out of the right tank”. Rather than basing it on what the condition statement actually said “or the total fuel quantity is decreasing at an abnormal rate”. This was a mistake.</em> I can say that the fuel imbalance QRH checklist would probably have been more appropriate to call first. However this wasn’t an imbalance that took time to develop. We experienced the fuel IMBAL light and an abnormally high rate of fuel decrease from the right tank around 10 minutes after turning the center pumps off. I didn’t notice any fuel imbalance prior to turning the center tanks off. I don’t think that it was unreasonable to run the fuel leak-engine QRH in these circumstances.</span></p>
<p> </p>
<p><span style="color: #993300">First Officer’s narrative: … While looking at the fuel quantity, I noticed that quantity in the right tank was decreasing at a higher-than-normal rate, and the fuel imbalance was approximately 1,000 pounds. <em>The Captain pulled out the QRH. I also may have verbalized “do we have a fuel leak,” or “is this a fuel leak.”</em> <em>This was probably confirmation bias on my part. I recently completed my annual simulator training, about 2.5 months ago. The scenario that I had on day 3 was depart ZZZ4 for ZZZ5. During that sim session we had a fuel imbalance shortly after takeoff, which was actually a fuel leak. So, the scenario we were experiencing in the airplane seemed similar to a recent training event. Since the Captain and I both thought a fuel leak was possible, he started to run the Fuel Leak checklist in the QRH.</em> The Captain also contacted the flight attendants and asked them to check the right wing and engine for any visible fuel spray. The flight attendants reported back to us that they didn’t see anything. However, based on the fuel imbalance rapidly getting worse, exceeding the 500 pounds within 30 minutes, the Captain and I confirmed a fuel leak. We requested priority handling and a lower altitude, as well as vectors for ZZZ2. Once on a localizer intercept vector, the Captain took the controls and landed the airplane. We taxied clear of the runway, and emergency personnel (crash fire and rescue) visually inspected the airplane to make sure fuel was not leaking from the aircraft. We then taxied towards the gate. <em>Since I was the Pilot Flying, and the first person to notice the imbalance, I should have asked for the imbalance checklist first.</em></span></p>
<p> </p>
<p>While neither pilot reveals the maintenance finding in their report, their tone implies that the cause of the fuel imbalance may have been a malfunctioning crossfeed valve that allowed fuel to transfer from the right wing tank to the left, or perhaps a fuel quantity sensing probe issue. The First Officer reports that their recent simulator training included a fuel leak scenario, which biased them to interpret their imbalance as a leak. The fuel leak checklist directs an engine shutdown, while the fuel imbalance checklist does not. The FO admits that his bias may have influenced the Captain to follow the leak scenario. The crew elected to divert and land, which was the safest call once single-engine, but shutting down an engine may have been unnecessary.</p>
<p>      Another interesting human factors bias is the anchoring effect created by the first conclusion expressed. Once the FO mentioned “fuel leak” and the crew established a fuel leak mindset, they appeared to abandon further analysis and inquiry. Granted, fuel leaks generate a high level of urgency, which encourages prompt action. The Captain even mentions “time pressure” to mitigate the leak. They did take the additional analytical step of asking the cabin crew to check for fuel misting from the suspected engine. Even after the cabin crew reported no misting, however, they didn’t investigate further. Ideally, the crew could have compared expected fuel burn rates from their flight plan against the actual totals to see if the total fuel quantity was dropping significantly. This might have raised their curiosity to expand their inquiry and discover that the imbalance was caused by unintended fuel transfer instead of a fuel leak. Fuel imbalance procedures direct a series of steps to cure the imbalance.</p>
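<p>The total-fuel cross-check described above amounts to simple arithmetic. The following sketch is purely illustrative – the function name, thresholds, and fuel figures are my own assumptions, not values from any QRH or flight manual:</p>

```python
# Illustrative sketch only: distinguishing a fuel leak from an internal
# fuel transfer. A leak removes fuel from the aircraft, so the TOTAL
# quantity drops faster than the planned burn can explain; a crossfeed or
# transfer problem grows the left/right imbalance while the total still
# tracks the flight plan. All names and thresholds are hypothetical.

def diagnose_fuel_anomaly(total_start_lbs, total_now_lbs, planned_burn_lbs,
                          imbalance_lbs, leak_margin_lbs=200,
                          imbalance_limit_lbs=500):
    """Roughly classify the anomaly: 'leak', 'transfer/imbalance', or 'normal'."""
    actual_burn = total_start_lbs - total_now_lbs
    unexplained_loss = actual_burn - planned_burn_lbs
    if unexplained_loss > leak_margin_lbs:
        # Fuel is leaving the aircraft faster than the engines can explain.
        return "leak"
    if imbalance_lbs > imbalance_limit_lbs:
        # Total tracks the plan, but one tank is draining into the other.
        return "transfer/imbalance"
    return "normal"

# Hypothetical numbers resembling this incident: a 1,200 lb imbalance while
# the total burn matches the flight plan points at transfer, not a leak.
print(diagnose_fuel_anomaly(14000, 11500, 2500, 1200))  # -> transfer/imbalance
```

<p>The point of the sketch is the order of the checks: confirm whether the total quantity is actually abnormal before treating a growing imbalance as evidence of a leak.</p>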
<p>      In both of these events, crews followed mindsets learned from simulator training. As an industry, we should take steps to encourage pilots to expand their mindset beyond the lessons learned in the simulator. Simulators teach procedures, but they also need to teach abnormal event processing strategies. Especially with rare events, we want crews to take enough time to examine the range of possibilities rather than simply selecting the first cause they think of. They need to take the extra step of excluding other possible causes. Had the fuel imbalance crew expanded their mindset to weigh <em>fuel leak versus imbalance</em>, they would have taken a closer look at their total fuel burn. Since they landed short of their destination, we can assume that they had adequate fuel to take an extra minute to conduct this analysis.</p>
<p>      Of course, we are analyzing these events from our perfect hindsight perspective. The bottom line is that both crews landed their aircraft safely following stressful and rarely encountered conditions. I submit these events for our collective analysis to better understand the latent problems promoted by simulator versus real-world mindsets. In my book, <em>Master Airline Pilot, </em>I share a range of strategies for countering these biases. I encourage you to post your comments and suggestions to this forum thread.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/event-discussion/situation-24-3-mismatch-between-simulator-training-and-real-world-events/#post-34</guid>
                    </item>
				                    <item>
                        <title>Topic 24-3: The Difference Between Simulator Training and Real-World Events</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-3-the-difference-between-simulator-training-and-real-world-events/#post-33</link>
                        <pubDate>Thu, 25 Apr 2024 21:16:11 +0000</pubDate>
                        <description><![CDATA[We rely on simulators to teach and rehearse the procedures and skills needed to handle serious emergency situations. Unavoidably, this practice sacrifices realism. The interaction of operati...]]></description>
                        <content:encoded><![CDATA[<p>      We rely on simulators to teach and rehearse the procedures and skills needed to handle serious emergency situations. Unavoidably, this practice sacrifices realism. The interaction of operational conditions makes it extremely difficult to recreate real-world complexity. From the perspective of the typical pilot, each line-flying emergency feels unique while simulator training profiles tend to follow familiar, canned scenarios. This is not a failure of simulator training. Instead, it is a byproduct of our current training, certification, and evaluation system. As an unintended consequence, crews demonstrate less success with handling line-flying non-normal events than they do in simulator scenarios.</p>
<p>      Consider the ubiquitous V1-cut training event (it trains the decision to reject or continue the takeoff at the critical decision point – the calculated V1 speed). Typically, it starts with an engine failure, fire, windshear, or a “bang”. The average pilot will experience and practice V1-cut training events well over 100 times during their career. They become quite competent at handling them. Let’s examine the reasons why. First, crews expect these failure events in the simulator, so they are already primed and ready to handle them. This reduces surprise and startle. Second, their simulator training events are scheduled well in advance. Crews review the procedures and often practice them in procedural trainers ahead of time. This elevates the procedural steps to the forefront of their working memory. Third, the onset conditions typically follow a limited number of canned profiles. Year after year, we practice these same V1-cut scenarios. This gives us the skills to complete them smoothly, accurately, and quickly.</p>
<p>      Real-world V1 events prove to be far more nuanced and complex. Consider the following summary of 15 reported V1-related, non-normal events from the NASA ASRS database from a one-year period (November 2019 – November 2020). Note: This list is copied from my book, <em>Master Airline Pilot </em>(pages 438-439):</p>
<p> </p>
<ul>
<li>The ATC Tower controller queried the crew during takeoff prior to V1 (crew had initiated takeoff without clearance). The Captain elected to continue the takeoff. The FO was returning from a 7-month absence (Report #1769610).</li>
<li>A highly experienced Captain rejected their takeoff due to a “sudden veer” to the right. The Captain neglected to make a required “Reject” callout. Both the FO and the jumpseater were confused, but monitored and supported the Captain’s actions (Report #1761771).</li>
<li>Both pilots noticed a flock of birds on the runway approaching V1. They elected to continue. They hit about 30 birds which inflicted damage to both engines. They returned for an emergency landing (Report #1759404).</li>
<li>The crew experienced numerous engine anomalies during takeoff roll (second incident for the same aircraft on the same day). With too many indications to analyze, they decided to reject at V1 due to the accumulation of unknowns (Report # 1758495).</li>
<li>The pilots became confused during taxi-out and made an intersection take-off when full length was planned. Approaching V1, they detected minimal runway remaining, but continued their takeoff. They would not have been able to stop if they had rejected at V1 (Report #1751568).</li>
<li>The EECs (Electronic Engine Control computers) reverted to ALTN during taxi-out. The crew coordinated with maintenance, reset the EECs, and were cleared to continue. The autothrottles disengaged during takeoff as the EECs again reverted to ALTN. The crew reported startle effect, but continued their takeoff. When airborne, they experienced airspeed indication problems (Report # 1748317).</li>
<li>The crew received a windshear warning at V1. They successfully rejected the takeoff (Report #1746586).</li>
<li>The crew experienced anomalous airspeed discrepancies near V1 and rejected their takeoff. Maintenance discovered mud dauber wasp blockage in pitot static system (Report #1740194).</li>
<li>The Captain/PM became tunnel-focused on possible engine underperformance and missed both the V1 and VR (rotation speed) callouts. The FO/PF made his own callouts and rotated. The Captain, who was heads-down, called for a rejected takeoff. The FO informed, “Negative, we are past V1.” The Captain pushed engines to full thrust and continued the takeoff (Report #1739089).</li>
<li>The crew reported that multiple avionics blanked during takeoff and rejected 50 knots below V1 (Report # 1720363).</li>
<li>One turboprop engine rolled back, but didn’t auto-feather after V1. The crew continued their takeoff and returned for an emergency landing (Report # 1715079).</li>
<li>The crew experienced multiple anomalies during rotation. While returning for an emergency landing, they had unsafe gear indications and multiple confusing electrical system anomalies (Report # 1704467).</li>
<li>A spoiler warning activated at VR. The crew performed a high-speed reject. The same spoiler warning from the previous flight had been signed off (Report #1702333).</li>
<li>The crew struck a large (10′ wingspan) bird approaching V1 causing a very loud bang. They rejected the takeoff. All tires subsequently deflated (the proper functioning of a tire overheat protection feature designed to prevent tire explosion) (Report #1700045).</li>
<li>The FO lost attitude indications during takeoff into a late afternoon sun. They estimated normal pitch until airborne, then transferred control to Captain who had normal indications (Report # 1699712).</li>
</ul>
<p> </p>
<p>As I analyze these fifteen events, only three followed the typical indications/progressions that we see in the simulator. The remaining twelve crews encountered complex, startling, or untrained indications that fell outside of anything they had probably ever practiced before. Of these, five crews failed to follow procedures or made significant errors. There might have been additional mistakes that were not documented in these self-reported summaries. While none of these events resulted in accidents, at least a third of them would be classified as procedural failures.</p>
<p>      We conclude that when crews experienced real-world events that were similar to the V1-cuts that they practiced in the simulator, they accurately followed procedures. The training worked. When scenarios strayed from canned profiles, their decisions and actions became less consistent. Despite how often they had practiced V1-cuts in the simulator, too many of these crews made significant procedural errors, experienced startle/surprise, or became confused by the indications.</p>
<p>      Another problem is with scope. V1-cut training in the simulator centers on the moments immediately before and after reaching V1. Focusing just on engine failures, the real-world reality is that most events occur well outside of the few seconds near V1. Engine failures outside of this time window require procedural modification. For example, an engine failure that occurs during climbout passing 500’ requires that we modify the typical V1-cut takeoff profile. So, while simulator training provides excellent practice, real-world events add more dimensions that challenge our expectations and complicate our decision making.</p>
<p>      Another consideration is the distraction generated by operational complexity. Non-normal events require focused attention and crew discussion to accurately diagnose and solve the problem. This means that we’ll need to hold our train of thought as we diagnose and remedy the problem. In simulator practice, instructors typically reduce distractions or place the simulator on “freeze” to create an ideal environment for crew coordination. While flying, operational tasks and outside disruptions constantly interrupt us. ATC asks us questions or the FAs call up wanting to know what is going on.</p>
<p> </p>
<p><strong>What skills do we need to teach pilots for handling real-world emergencies? </strong>The current system of simulator practice unintentionally leads pilots to hold different mindsets between simulator training and line flying. The most obvious split is that we expect to <em>always</em> experience non-normal/emergency events in the simulator, while we <em>almost never</em> experience them in the aircraft. In the simulator, expecting that something will go wrong primes us to mentally prepare. Before the event begins, we visualize the emergency, the indications that we should detect, and the steps we need to follow. When the instructor initiates the event, our mental preparation helps us to quickly make the mental switch from routine, normal flying to exceptional, non-normal event handling. Conversely, everyday flying promotes a mindset that subconsciously assumes that everything will go normally. When something does go wrong, we often experience startle, surprise, and debilitating biases.</p>
<p>      So, what can we do to solve this mismatch? What is the Master Class skillset? We start with the realization that we handle emergencies rather well in the simulator. We just need to find a way to carry our simulator world competence into the real world. When that rare emergency happens, we need to quickly switch from our routine, normal-flying mindset to our non-normal, event-handling mindset. This switch is guided by three distinct skills. We start with recognition. We need to detect and acknowledge that something exceptional has occurred. Often, mishap crews spend too much time wondering what happened, questioning how they misdiagnosed the problem, or downplaying its severity. Next, we must accept that the non-normal event will upset our established game plan. This means that we will need to reduce our attachment to our original game plan to prevent succumbing to plan continuation bias. Even if we are established on final approach with the runway in sight, our best course of action might be to go around, sort out the problem, run checklists, and come back around for another approach. Third, we need to reorient our mental processes (detecting indications, recognizing patterns, applying meaning, and decision making) from our everyday flying mode to our emergency event handling mode. Our everyday habits and decision making may not work and may impede reaching a successful outcome. I devote several chapters to this process in my book.</p>
<p>      I theorize that most crews that mishandle line-flying, non-normal events fail to make this switch in their mindset either quickly enough or accurately enough – either individually or as a crew. An additional complication is that aviation does not follow a simple dichotomy between a normal flying mindset and a serious emergency mindset. It is actually a continuum that requires us to master a range of responses. We will unwrap this concept in the next discussion topic.</p>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-3-the-difference-between-simulator-training-and-real-world-events/#post-33</guid>
                    </item>
				                    <item>
                        <title>Situation 24-2 Radio Altimeter malfunction causes distraction and unstabilized approach</title>
                        <link>https://www.masterairlinepilot.com/community/event-discussion/situation-24-2-radio-altimeter-malfunction-causes-distraction-and-unstabilized-approach/#post-32</link>
                        <pubDate>Sun, 24 Mar 2024 22:01:30 +0000</pubDate>
                        <description><![CDATA[As a companion piece for Discussion Topic 24-2: Learning to Manage Distractions More Skillfully, I offer NASA ASRS report #2051046. In this event, a malfunctioning radio altimeter generated ...]]></description>
                        <content:encoded><![CDATA[<p>As a companion piece for <em>Discussion Topic 24-2: Learning to Manage Distractions More Skillfully</em>, I offer NASA ASRS report #2051046. In this event, a malfunctioning radio altimeter generated spurious low altitude warnings in a CRJ200 during an ILS approach. Distraction, confusion, and unskillful flightpath management resulted in an unstabilized approach, change of aircraft control, and landing from an approach instead of going around.</p>
<p><span style="color: #ff0000">First Officer’s report (flying a Pilot Flying-PF until the Captain takes over, then as Pilot Monitoring-PM): On departure from ZZZ CA (Captain) noticed that the RA  was showing 0 ft . We were on a cruise altitude of 12,000 ft, as soon as we started to descend, the gear horn started to warn that the gear was not down. Disregarded that warning because it was due to the RA. We were cleared for the approach for visual XXL, backed up by ILS. FO (First Officer) turned into green needles 18 miles from ZZZ1  and armed APPR mode. Soon the FMA showed the green ‘GS’ mode captured. Soon Captain and FO noticed that the autopilot was not descending for us. Canceled AP (Autopilot) and GS was already 1.5 dots going down . The airspeed was 180kts with flaps 20, FO called gear down. After the gear, FO tried to put flaps 30 and therefore almost leveled off for the airspeed to reach below 180 (we were nose down to follow the GS so airspeed didn't slow down much). During this attempt, GPWS false alarm started, yelling at us “Terrain, Terrain, Pull Up”, “Too Low, Terrain”, “Glide Slope” and “Sink rate”. Captain called out disregard, it was clear that there was no terrain, <em>but FO got distracted with the aural and started to lose cross-check</em>. <em>The warnings have continued all the way down to the landing, therefore distracting us from normal sequence and ATC calls.</em> After the flaps 30 configuration was established, FO started to follow down the GS, but the lateral side was off the centerline. At some point we had PAPI showing 3~4 red lights , and Captain called out “You’re Low”. Passing ZZZZZ, at 6,800 ft FO called Flaps 45, Before Landing Checklist. FO called “One Thousand” at 6,100 ft. <em>At this time, we were already in an unstable condition but continued.</em> <em>Soon after FO got even more unstable again showing 4 Red lights, Captain called “my controls”.</em> After the Captain took control we were established again with 2 white 2 red. 
Around 300 AFE the aural warning said something other than usual, “Too low, Flaps”. FO noticed that the flaps were at 30 configuration, and called “Flaps 45?”. <em>It was unstable but FO did not call “Go around”, CA put flaps 45 and landed.</em></span></p>
<p><span style="color: #ff0000">Lack of hand-flying skills. Distraction management.  confidence to call go around. Maybe muting the aural warnings might have helped, but also on the other hand, aural did let us know that we were in flaps 30. I think it would’ve been better if I was trained as: If “Too Low, Flaps” is heard, the next callout is “Go around”. Just like an automatic call out like “Go around thrust flaps 8” being automatic if I hear “Missed Approach”. The first thing I heard “Too Low, Flaps”, my instinct thought was “Okay, we’re low and flaps 30... so put flaps 45?”</span></p>
<p><strong>Briefing potential warnings during approach:</strong> The crew knew after takeoff that they had a malfunctioning radio altimeter (indicating zero). Perhaps they didn’t know that this malfunction might generate spurious aural warnings during the approach. The report does not say whether they discussed these possible approach warnings or how they intended to handle them. When the spurious warnings began, the report implies that the Captain immediately knew that they were false, since they instructed the FO to disregard them. It further implies that the FO became significantly distracted, which degraded their ability to fly the approach. Perhaps if the Captain had briefed this possibility while in cruise flight, the FO might have been less vulnerable to distraction.</p>
<p><strong>Getting behind on the approach parameters: </strong>The first warning they received was a landing gear warning horn as they descended out of 12,000’. The next distraction was the failure of the autopilot to capture the glideslope and descend. As the flightpath reached 1.5 dots high on glideslope, the FO disconnected the autopilot and started down. This went poorly. The FO admitted their lack of hand-flying skills. The flightpath became steep and fast. The FO called for landing gear down (to increase drag and reduce airspeed). The combination of their steep descent and high airspeed prevented them from extending the flaps to 30. They leveled until they could decelerate below flap placard speed and extend more flaps. After extending flaps to 30, the FO again increased descent rate to rejoin the glideslope. At this point, the approach appeared to be salvageable.</p>
<p><strong>Escalation of warning notifications: </strong>Around 2,000’ above runway elevation, the FO was falling behind the approach profile and struggling to shed airspeed to get landing flaps extended. Additional warnings began to sound – “Terrain, Terrain, Pull Up”, “Too Low, Terrain”, “Glide Slope” and “Sink rate”. The situation became even more distracting. The FO appeared to become so tunnel-focused while trying to rejoin the glideslope that they began to “lose crosscheck”, resulting in a lateral deviation. Correcting for the lateral deviation, they lost glideslope alignment and flew too low. The Captain made the callout, “You’re Low”. While we don’t know this airline’s procedures, industry conventions generally require scripted callouts for approach deviations. The Captain’s callout, while accurate, probably didn’t adhere to procedures. This transition from informative callouts to procedurally-scripted callouts is an interesting topic that we can address in a later discussion (also covered in detail in my book).</p>
<p><strong>Captain also becomes task saturated and tunnel focused:</strong> The FO called for flaps 45. At this point, the Captain (as PM) should have verified placard speed compliance, announced “Flaps 45”, placed the flap lever to the appropriate position, and monitored the flap gauge for desired extension. The Captain didn’t perform any of these tasks, probably because they became tunnel focused on the unstabilized flightpath as they were approaching 1000’. Understandably, the Captain probably directed their attention to the deteriorating approach parameters and on deciding whether to assume aircraft control. Somewhere below 1000’, the Captain had seen enough and assumed aircraft control.</p>
<p><strong>Procedural breakdown and unstabilized approach landing:</strong> Soon after the Captain assumed aircraft control, the crew heard an “…aural warning [that] said something other than the usual, ‘Too low, Flaps’”. Looking down, the FO (now serving as PM) noticed that the flaps weren’t extended to the planned 45 position. They queried the Captain, “Flaps 45?”. The FO acknowledges that they did not direct a go around. At this point, the Captain apparently reached over, set the flaps to 45 themselves, and landed. This is considered nonstandard in most crew aircraft. First, they should have called for or executed a go around. Second, assuming that they were committed to land, the Captain should have called for flaps 45 to allow the FO to verify parameters, set the flaps, and confirm their extension. Third, while not stated, it is strongly indicated that they failed to complete their Before Landing Checklist.</p>
<p><strong>FO’s analysis: </strong>The FO finishes their report with a brief analysis of what went wrong – “Lack of hand-flying skills. Distraction management.  confidence to call go around.” They then suggested that “Maybe muting the aural warnings might have helped”. I am not sure about the CRJ200’s systems, but generally, many EGPWS (Enhanced Ground Proximity Warning System) warnings are not “mutable” until the out-of-tolerance conditions are corrected. The FO goes on to suggest that with more specific training, they would have felt more confident calling for a go around. While probably valid, this misses the larger point that their unstabilized approach parameters should have triggered a “Go Around” callout even without the EGPWS warnings.</p>
<p><strong>Lack of Captain’s report: </strong>An interesting sidenote is that this record did not include a Captain’s report. Typically, this is because the report was either not submitted or because it lacked useful information. From my experience, it was probably the former. Pilots who choose to deviate from procedures often choose not to highlight their noncompliance by submitting written reports. We have every indication that this Captain failed to brief for expected spurious warnings, make required deviation callouts, direct a go around when the FO’s approach became unstabilized, direct the FO to set flaps to 45, call for the Before Landing Checklist, or go around from their unstabilized approach. It would also be informative to know whether the Captain documented the radio altimeter malfunction in the logbook.</p>
<p><strong>Summary: </strong>This event reflects many of the distraction-related concepts detailed in my book, <em>Master Airline Pilot. </em>We see how a series of escalating distractions disrupted flying, led to flightpath deviations, and inhibited the FO’s ability to restore the intended flightpath. Moreover, we see how both pilots became consumed by task saturation, plan continuation bias, and event quickening. These led them to tunnel their attention while trying to “save” an unstabilized approach. Granted, they landed safely. Unfortunately, landing safely has the effect of minimizing past errors. Hopefully, this crew engaged in a detailed debriefing to analyze their errors and to recommit themselves to maintaining higher standards in the future. From Discussion Topic 24-2, we have the distraction parameters of:</p>
<p>            - <em>Intensity or severity:</em> How much of our attention was diverted by the distraction?</p>
<p>            - <em>Duration:</em> How long did the distraction last?</p>
<p>            - <em>Operational flow disruption:</em> How different is our current position from where we were before the distraction?</p>
<p>The intensity of the distractions increasingly diverted the crew’s attention, and their severity steadily increased, which seemed to undermine the crew’s ability to recover. The distractions demanded increasing levels of attention focus to restore their flightpath and aircraft configuration. Once the chain of distractions started, it continued; apparently, the crew never reached a point where the distractions ceased. Finally, their operational flow remained disrupted all the way down final. When the FO failed to restore stabilized parameters, the Captain assumed aircraft control. This, in effect, became another distraction, as both pilots needed to switch roles, reestablish new flight perspectives, and assume new tasks to complete. This is why most airlines encourage their pilots to go around rather than try to salvage unstabilized approaches.</p>
<p>I welcome your comments on this discussion.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/event-discussion/situation-24-2-radio-altimeter-malfunction-causes-distraction-and-unstabilized-approach/#post-32</guid>
                    </item>
				                    <item>
                        <title>Topic 24-2: Learning to Manage Distractions More Skillfully</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-2-learning-to-manage-distractions-more-skillfully/#post-31</link>
                        <pubDate>Tue, 05 Mar 2024 23:24:07 +0000</pubDate>
                        <description><![CDATA[One notoriously unhelpful mantra within the aviation safety world is, “avoid becoming distracted”. This guidance is useless because it contradicts a core pilot responsibility. We are expecte...]]></description>
                        <content:encoded><![CDATA[<p>      One notoriously unhelpful mantra within the aviation safety world is, “avoid becoming distracted”. This guidance is useless because it contradicts a core pilot responsibility. We are expected to accurately detect and identify the source of every distraction and respond quickly and skillfully while maintaining a smooth flight profile and precise flightpath management. Moreover, some of the most consequential distractions demand our immediate attention. Notable examples are warning lights, fire alarms, and audible directives from systems like the collision avoidance system (TCAS). A more accurate directive would be “accurately identify and skillfully respond to all distractions while maintaining a safe flight profile and preserving situational awareness.” Granted, this doesn’t fit neatly on a safety poster.</p>
<p>      While we are immersed in the operational flow of flying, we encounter each distraction as it happens. After we detect it, we choose either to ignore it or to respond to it. We need to decide whether that event is a trivial blip with minimal effect on our operational flow or whether it is something that has disrupted our operational flow. There isn’t a clear line between these two situations. The more deeply we study distractions, the more we understand how they affect us. In my book, <em>Master Airline Pilot,</em> I present the following range of “bird encounter” distraction events while flying down short final. They range from fleeting distractions that we immediately dismiss to persisting distractions that demand our full attention.</p>
<p> </p>
<p><em>Case 1</em>: We see the bird and watch it as it zips by. We don’t hear an impact, so we conclude that we successfully missed it. Most of us would conclude that this is, at most, a very minor distraction. If we don’t hit the bird and maintain a stabilized approach down final, we would treat it as an inconsequential event. We saw the bird, we missed it, and we maintained our attention on flying the aircraft.</p>
<p><em>Case 2</em>: We only detect the bird at the last instant. It startles us and we reflexively move the aircraft to avoid it. Not hearing an impact, we conclude that we probably missed it. Since we altered our flightpath when startled, we return our attention to flying, restore our flightpath, and continue down final, albeit a bit shaken. We would categorize this as a strong, momentary distraction. It diverts our attention from flying for a few moments, but we maintain our operational flow and continue down final without further disruption.</p>
<p><em>Case 3</em>: We see the bird at the last moment. Before we can react, it impacts somewhere behind us on the fuselage. This becomes a much stronger, startling, distraction event. Recovering from the startle, we check our engine instruments to ensure that they appear normal. We sample the cabin air for the odor of burnt bird. For a longer time period, our attention is diverted from flying. Our monitoring of the operational flow is disrupted. After recovering, we would look for cues that could provide context for where to rejoin the operational flow. We would also weigh whether to continue the approach or to go around to assess any operational consequences from the birdstrike.</p>
<p><em>Case 4</em>: We see the bird at the last moment. It impacts on the radome with a loud bang. Bird parts spread across our windscreen and obscure our forward visibility. One of the wings lodges under our wiper blade and continues flapping wildly. This would undoubtedly qualify as a significantly distracting event. Unless we recover and assess quickly, we would favor going around to regain our composure, analyze the damage, rebuild our SA, and return for another approach.</p>
<p> </p>
<p>      These four cases span the range from inconsequential to very distracting. The most important consideration is how they affect our operational flow – the sequence of planned tasks and events that we expect to occur down final. Consider the particular task of completing the Before Landing Checklist. If the bird encounter happened near the point in the operational flow where we planned to complete the checklist, the distraction might cause us to miss it. In Cases 3 and 4, we should recognize that the birdstrike has probably distracted us and that we might have missed doing something. We would look for cues about possibly missed tasks. We would either clearly recall completing the checklist or feel unsure whether we had completed it. If we have any doubt, the prudent choice is to run the checklist, even if it results in running it twice.</p>
<p>      If we don’t have an awareness while flying in-the-moment that the birdstrike has distracted us, we might just try to rejoin our operational flow based on our current position on final approach. If that position is past our usual checklist completion point, we might miss performing it. The critical difference is recognizing that we were distracted and deliberately investigating what we might have missed during the lost time. Examining this range of distractions, we summarize that they affect three parameters.</p>
<p> </p>
<ul>
<li><em>Intensity or severity</em>: How much of our attention was diverted by the distraction?</li>
<li><em>Duration</em>: How long did the distraction last?</li>
<li><em>Operational flow disruption</em>: How different is our current position from where we were before the distraction?</li>
</ul>
<p> </p>
<p>       We may need to assess these parameters very quickly. If we decide that the event is fully understood, processed, and recovered, we can safely continue. If not, we need to assess how much time we have available. If time is short, we should consider making extra time to process the event (go around), settle down (physically recover from the startle), and fly another approach.</p>
<p>      There isn’t a clear distinction between distractions that prove consequential and those that aren’t. They feel similar as we experience them while flying in-the-moment. They both feel like flying the aircraft, dealing with events as they come up, and keeping our game plan on track. We only recognize the differences in hindsight. Our challenge is to learn how to translate hindsight clarity into skills that actively improve our present-moment awareness. We refine this skill by analyzing our encounters with distracting events and recognizing our personal biases, strengths, and weaknesses. We begin by constructing two stories comparing how we expected the flight to flow (our game plan) with what actually happened because of the distraction. Holding these two stories side-by-side, we run the timeline back and note the onset of the distraction and our reactions to it. Next, we identify the indications that were present. They may not have registered as important at the time. Maybe they felt unimportant. Maybe they felt like minor anomalies obscured by the background noise of everything else that happens while flying. Maybe we missed them entirely. We assess whether a startle effect drew our attention away from normal flightpath monitoring. We also assess personal and crew factors. Were we too relaxed or tired? Were we engaging in discretionary activities? Were we inappropriately diverting our attention? By comparing the two stories, we discover ways to realign our practice to respond more skillfully in the future. We identify monitoring techniques that would have accurately detected all of the adverse effects. We recognize decisions that would have aided our recovery. Each time we perform this kind of personal debrief analysis, we improve how skillfully we will process future distracting events.</p>
<p>      Compare the differences among three bird encounter events – missing the bird, maneuvering to avoid it, and hitting it. Imagine how each one would feel, where we would focus our attention, and how we would manage our flying. If we see and miss the bird, we might register the event subtly while easily maintaining our flightpath. We wouldn’t lose focus. Our operational flow would remain intact. In the second encounter, where we maneuver to avoid the bird, we might need to divert a significant portion of our attention to recovering from startle and maneuvering the aircraft. The bird distracts us, but we would still remain sufficiently connected to the operational flow. In the third encounter, the bird splatting against our windscreen might completely grab our attention, diverting it away from the operational flow of flying the aircraft. The greater the severity of the distracting event, the more our attention is diverted away from holding a stabilized final approach path. As we improve our awareness of how we personally respond to each of these encounters, we continue refining our awareness management skills. When a similar event happens to us in the future, we won’t experience as much startle and disruption, and we’ll engage it more mindfully.</p>
<p>      Event duration is a measure of how long the distraction diverts our attention away from the operational flow. Imagine a scenario where we pass one bird while on final approach, followed immediately by another, and then another. We successfully avoid each bird, but the succession of distractions keeps our attention diverted from actively managing the operational flow. With each encounter, we attempt to recover, but another distraction immediately arises. We struggle to fully return our attention to flying the approach. Now, imagine that during this series of bird distractions, an aircraft intrudes onto our landing runway. At what point would our SA become so deflated that we might miss this new threat? The duration of a distraction or a chain of distractions can affect our ability to restore our attention back to flying. When successive distractions prevent us from fully returning our attention to the operational flow, we should consider exiting the game plan (like going around) and resetting the operational flow.</p>
<p>      As we study our personal experiences with distractions, we may notice trends. Perhaps we discover that we tend to respond differently when we are tired or relaxed. Perhaps we lower our vigilance during the last leg of our pairing back to our home base. Perhaps we discover that events from our home life adversely affect our ability to remain focused. Perhaps we notice that we respond differently when paired with particular types of pilots. For example, I discovered that when I was flying with a particularly competent FO, I would subconsciously lower my level of vigilance. Because they were so good at detecting and mitigating errors, I allowed myself to relax into my comfort zone; I lowered my vigilance because they raised theirs. Looking deeper, I recognized that the opposite also occurred. I would raise my level of vigilance when flying with an inexperienced or especially lax FO. Aware of these personal biases, I learned to monitor my attention level more appropriately. The more we learn about ourselves, the more effectively we can improve our resilience against distractions. Sensing that we are tired signals us to increase our vigilance. Noticing that we perform better during daytime encourages us to bid for early schedules. Noticing that we are less alert during early morning flights encourages us to bid for later schedules. We should also monitor how we change as we age. We are constantly changing individuals, so we need to continuously reassess ourselves.</p>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-2-learning-to-manage-distractions-more-skillfully/#post-31</guid>
                    </item>
				                    <item>
                        <title>Situation 24-1: Crew Misses their Assigned Gate During Taxi-in</title>
                        <link>https://www.masterairlinepilot.com/community/event-discussion/situation-24-1-crew-misses-their-assigned-gate-during-taxi-in/#post-30</link>
                        <pubDate>Mon, 29 Jan 2024 19:48:45 +0000</pubDate>
                        <description><![CDATA[As a companion piece for Discussion Topic 24-1, The Difference of Workload Priorities Between Taxi-out and Taxi-in, consider the following NASA ASRS event where a crew missed their parking g...]]></description>
                        <content:encoded><![CDATA[<p>As a companion piece for <em>Discussion Topic 24-1, The Difference of Workload Priorities Between Taxi-out and Taxi-in</em>, consider the following NASA ASRS event where a crew missed their parking gate during taxi-in. Referring to the discussion topic, notice how preconceptions, biases, distractions, and misplaced attention toward discretionary tasks interacted to result in their error. From NASA ASRS report #1404064 (italics added):</p>
<p><span style="color: #ff0000">On taxi in to the gate in BWI, I taxied past our assigned gate towards the incorrect side of the concourse and required an amended taxi clearance to reverse back to our gate.</span></p>
<p><span style="color: #ff0000">We departed BWI from Gate YY. Now, on the return leg to BWI we were assigned Gate XX. I conducted a routine arrival briefing prior to the top of descent. At the time, I briefed that we would taxi F, T, to Gate XX. <em>Without referencing the chart, I misidentified the location of Gate XX from “memory”.</em> It is on the same side as YY. <em>I believe that I had an expectation bias</em> of the even gates being  of the concourse because we had just operated out of an odd gate (YY) on the . <em>The Pilot Monitoring (PM) did not catch my error in the briefing.</em></span></p>
<p><span style="color: #ff0000">After landing on Runway 33L, we told Ground Control that we were assigned Gate XX. We were cleared to taxi F, T, to the gate. After receiving the clearance, <em>the First Officer (PM) went off frequency to contact Operations. He was distracted receiving aircraft swap information</em> as I taxied past Gate XX.</span></p>
<p><span style="color: #ff0000">Another additive condition was another carrier wide body aircraft was being towed on the parallel taxiway, surrounded by emergency vehicles with lights flashing. As we taxied past the other aircraft, I wanted to make sure we had wingtip clearance as I was not sure if he was off his taxiway (it was difficult to see clearly at night with the emergency vehicle lights).</span></p>
<p><span style="color: #ff0000">When we got to the end of the concourse and I began a turn , I first realized that the gate numbers were odd. I stopped the aircraft and advised Ground Control that I had “screwed up” our taxi and needed clearance to go back to Gate XX. The PM was just finishing communications with Operations and now realized, as I,  Gate XX was. Ground Control approved us for a 180 degree turn and clearance to taxi back to Gate XX. We taxied to Gate XX without further incident and no conflicts.</span></p>
<p><span style="color: #ff0000"><em>The error chain started when I did not do a thorough briefing by referencing the chart. The PM had an opportunity to trap my error, but fell into the same expectation bias.</em> After landing, I should have caught what the taxi clearance was. The bright lights of the emergency vehicles were a distraction as was the extended ground call to Operations for the PM. These were additive conditions that should have been recognized and identified. Normally, at that point I may have caught that we were taxiing past Gate XX and the even gates.</span></p>
<p><span style="color: #ff0000"><em>The briefing and taxi error were my fault due to complacency and lack of thoroughness. We were fortunate that there were no traffic conflicts at this time of the evening. Had the same error occurred during peak operations, it could have caused significant congestion and potential safety conflicts.</em></span></p>
<p><span style="color: #ff0000">To prevent future occurrences, I need to be more thorough with my briefings. Even when I think “I know”, I need to reference and view the ramp charts just as I do the Jeppesen charts. I also should have paid closer attention to our taxi clearance and not assumed. Lastly, I need to do a better job of engaging the PM during briefings and avoid the rote regurgitation of information that leads to PM missing errors on my part.</span></p>
<p> </p>
<p>Points to consider:</p>
<ol>
<li><strong>Operational familiarity: </strong>The report implies that the Captain was quite familiar with operating out of BWI (Baltimore/Washington International – Thurgood Marshall Airport). This familiarity was reflected in briefing their gate arrival without referencing the airport diagram, which allowed the misconception that Gate XX was on the opposite side of the terminal from the previous gate, Gate YY (by assuming that all even-numbered gates were on one side and all odd-numbered gates were on the other). Both pilots made the same assumption. Ideally, the PM needs to capture errors like this. We don’t want PMs agreeing with the PFs; we want them verifying each facet of the game plan to detect and correct errors as early as possible. This reflects a one-sided briefing perspective (only the pilot flying dictating the game plan) instead of the interactive two-way briefing method (where both pilots work together to form the game plan). Their aligned perspectives effectively solidified their expectation bias that the gate would be where they thought it was (on the opposite side of the terminal from Gate YY) versus where it really was (on the same side as Gate YY). This kind of bias strongly influences future conceptions and choices because we treat them as “facts” that don’t require future verification or confirmation. Since they <em>knew </em>where their gate was, they never perceived a need to reconfirm the actual gate location or taxi routing using the ramp diagram.</li>
</ol>
<ol start="2">
<li><strong>Distraction:</strong> On their way to the gate, the crew encountered a wide-body aircraft under tow and escorting emergency vehicles. This would understandably attract most of their attention. Concerned with wingtip clearance, the Captain was highly focused on getting clear. This type of event seems to create a psychological letdown once we are finally clear of the hazard. It can feel like, “okay, that event has ended, now we can relax and get back to normal taxi-in.” We feel a strong motivation to return to our familiar game plan. Solving a big problem creates an impression that we have solved all problems, so some lesser problems slip through undetected or unmitigated. Having passed the wide-body aircraft under tow, we can envision this Captain breathing a sigh of relief at finally being able to taxi normally to the gate.</li>
</ol>
<ol start="3">
<li><strong>Additive Conditions:</strong> The Captain referred to additive conditions. This term reflects the language of Risk and Resource Management (RRM) – covered extensively in <em>Master Airline Pilot: Applying Human Factors to Achieve Peak Performance and Operational Resilience. </em>Additive conditions are complicating factors that urge us to focus more attention on understanding and handling emerging problems. Conditions interact in increasingly complex ways to create competing priorities that allow unpredictable outcomes to emerge. In this case, nighttime conditions and ramp congestion required the crew’s full attention. This intensified their plan continuation bias, increased the intensity and duration of distractions and disruptions, and encouraged continuing their flawed game plan.</li>
</ol>
<ol start="4">
<li><strong>Discretionary actions:</strong> The FO had two opportunities to interdict the Captain’s misconception about the gate location. The first was during the arrival briefing while they were still in cruise flight. Had either pilot consulted the ramp diagram, they could have detected and corrected the misconception. The second was lost when the FO engaged in time-consuming “off frequency” coordination with the station regarding an aircraft swap. While aircraft swaps are not operational concerns that we can do anything about during taxi-in, they represent major concerns with future task load. Especially if this crew routinely engaged in discretionary clean-up tasks during taxi-in, they would immediately feel behind as they now faced a much greater task load of gathering their personal items before scrambling to the swap aircraft (as compared with leaving all of their gear in place for a follow-on flight in the same aircraft). We can imagine that these concerns left the FO highly disengaged from the taxi-in process.</li>
</ol>
<ol start="5">
<li><strong>Low Operational Priority:</strong> The taxi-in and gate arrival flight phases seemed to have low operational priority for both pilots. While they went through the steps to brief the taxi-in before top-of-descent, they admitted that they didn’t give it adequate attention – briefing it completely from memory instead of referencing the ramp/gate diagram. This often occurs among highly proficient pilots operating frequently through familiar airports. Familiarity with a typical flight profile allows pilots to allocate more attention toward disruptions and less toward familiar operational details. In time, these normal features become cognitively automated. Settled within our comfort zone, we relegate these repetitive tasks to habit. While this frees up mental resources to deal with unplanned or exceptional events, habits tend to weaken error detection and mitigation. We counter this by mindfully following procedures, treating familiar and unfamiliar airports equally, even when the process feels unnecessary or redundant.</li>
</ol>
<ol start="6">
<li><strong>Event Insignificance:</strong> In the end, this error proved fairly insignificant. It was quickly sorted out and the crew proceeded safely to their gate. Even so, we should not classify these kinds of errors as unimportant. In practice, many taxi-in errors remain so inconsequential that most pilots choose not to document them through event reporting. Pilots also often misattribute them to conditions like adverse weather, day/night conditions, poor signage, worn taxi guidelines, etc. The importance of this topic is that it encourages us to raise our awareness level and attention focus to match the high AOV of aircraft movement (see Discussion Topic 24-1 and <em>Master Airline Pilot </em>for more on this topic). As Master Class pilots, we study ourselves to detect lapses when we allow discretionary tasks to migrate into inappropriate flight phases. Appropriate attention discipline is a Master Class skill that we continue to refine throughout our entire flying careers.</li>
</ol>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/event-discussion/situation-24-1-crew-misses-their-assigned-gate-during-taxi-in/#post-30</guid>
                    </item>
				                    <item>
                        <title>Topic 24-1: The Difference of Workload Priorities Between Taxi-out and Taxi-in</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-24-1-the-difference-of-workload-priorities-between-taxi-out-and-taxi-in/#post-29</link>
                        <pubDate>Sun, 14 Jan 2024 22:51:51 +0000</pubDate>
                        <description><![CDATA[Observing crew behavior, we notice that pilots handle workload priorities differently during taxi-out compared with taxi-in. Since both flight phases involve identical procedures and objecti...]]></description>
                        <content:encoded><![CDATA[<p>Observing crew behavior, we notice that pilots handle workload priorities differently during taxi-out compared with taxi-in. Since both flight phases involve identical procedures and objectives, we would logically assume that they deserve comparable levels of crew attention and diligence. Instead, we notice that while taxi-out behaviors tend to follow trained standards, many crews choose to engage in discretionary tasks during taxi-in. Let’s examine the flow of immediate and future workload to understand the psychology behind this practice.</p>
<p><strong>Workflow and attention focus before taxi-out:</strong> Before taxi-out, we follow a scripted flight preparation process. The procedural sequence includes individually preparing our personal gear, reviewing flight planning products, acquiring clearances, and programming aircraft systems. We then coordinate as a crew to review the ATC clearance, verify flight documentation, complete required checklists, push back from the gate, and start the engines. Analyzing how the workflow changes across this process, notice how our attention focus begins wide and fluid and then steadily narrows until we become fully focused during aircraft taxi. By the time we begin aircraft movement, all necessary tasks are completed and all future tasks are briefed. There is little that we can do to get ahead of future workload demands.</p>
<p><strong>Workflow and attention focus during taxi-in:</strong> In many ways, our workload flow and attention focus during taxi-in are the opposite of taxi-out. During takeoff and departure, our workload and attention focus steadily decrease. During descent, approach, and landing, our workload steadily rises and our attention focus increasingly narrows. Following our highest phase of attention focus (landing), we experience a perceptible letdown in our workload (see <em>Master Pilot Forum Topic 23-9: The Vulnerabilities of the Psychological Letdown</em> for more on this effect). Nestled within this after-landing letdown, however, airline pilots face a looming spike in their workload that begins after gate arrival. This spike is higher if we need to park the aircraft or hand it over to the next crew. These leaving-the-aircraft tasks are superimposed over the required gate arrival procedures of parking, transferring aircraft power to the APU/jetway, engine shutdown, shutdown/parking checklists, unique gate arrival tasks, and completing required paperwork. None of these required procedures can be performed in advance, so there is no way to work ahead. The only tasks that we can potentially complete in advance are flightdeck clean-up and gathering personal items. To get ahead of these tasks, many pilots try to complete some of them while taxiing to the gate. Combine this with the psychological post-landing letdown, and we see how our environment creates a feeling of “extra time” that we can use to accomplish a clean-up item or two.</p>
<p><strong>How task accomplishment drifts during taxi-in: </strong>Completing a clean-up item or two seems fairly benign, but the practice is vulnerable to drift over time. Consider a fairly new airline First Officer (FO) and their mindset regarding taxi-in. Following training, they embrace the need to maintain a high level of attention focus during a high Area of Vulnerability (AOV) phase of aircraft movement (see <em>Master Airline Pilot – Chapter 14 – Workload Management Techniques</em> for a detailed explanation of AOVs). They focus their attention on the taxi routing and ground threats as their Captain maneuvers the aircraft. After reaching the gate, they complete all of their required gate arrival procedures. Then, they need to gather up their gear to depart the aircraft. Long before they are finished, their Captains leave the flightdeck. They hurry to finish. Feeling behind feels uncomfortable, so they study their Captains to understand how to work more quickly. They notice that their Captains get a head start on their clean-up tasks while taxiing the aircraft. Maybe they put away their coffee mug. Then they dispose of expired flight planning paperwork, and so on. Their Captains seem to easily manage aircraft movement and arrive safely at the gate. This reinforces an illusion that accomplishing these discretionary tasks is harmless. After finishing the gate checklists, their Captains quickly complete a few remaining clean-up tasks and depart. Modeling their example, our new FO begins to experiment with the same discretionary clean-up tasks while en route to the gate. Nothing seems to go wrong and they get out of the aircraft more quickly, so their discretionary behavior is rewarded. Over time, this process reinforces itself with more time-consuming and attention-absorbing clean-up tasks. With some pilots, it becomes a challenge to see if they can complete all of their clean-up tasks before arriving at the gate.</p>
<p><strong>The forces driving discretionary tasks: </strong>As a training standard, we know that we shouldn’t be engaging in discretionary tasks during aircraft movement. Still, we feel a strong urge to do so. One driver behind this is the desire to stay ahead of future high-workload situations. Indeed, this motivation is baked into our procedures and checklists. Over time, it becomes integrated into our perceptions, pacing, and techniques. In anticipation of the demands of taxi-out, takeoff, and departure, we thoroughly plan and brief while at the gate. In anticipation of the demands of descent, approach, and landing, we thoroughly plan and brief while in cruise before top-of-descent. <em>Front-load your workload </em>becomes our mantra and a cornerstone of proficiency. Working ahead to compensate for the high-workload gate arrival flight phase feels the same. Unable to perform any shutdown and checklist tasks until parked, the only tasks that we can accomplish early are flightdeck clean-up and personal equipment gathering. Another driver behind this behavior is our discomfort with feeling behind. We hate feeling the need to hurry or rush to catch up. This becomes so hard-wired into our practice that we constantly search for ways to avoid feeling behind. Combine the desire to get ahead of future workload with the wish to avoid feeling behind and the after-landing letdown, and we feel a strong motivation to engage in discretionary tasks during taxi-in.</p>
<p><strong>The latent vulnerabilities that emerge:</strong> For most flights, these discretionary actions don’t lead to problems. Virtually every time, we successfully make our way from the landing runway to our gate without mishap. What we don’t detect are the latent vulnerabilities that bubble below the surface and only emerge when particular conditions interact. Most often, these involve distractions or unanticipated changes that occur while our attention focus is diverted by discretionary tasks. These distractions and changes often prolong our startle reactions and increase the recovery time from disruptions or surprises. Anomalous events that we could normally handle successfully while we are fully attentive veer off toward undesirable outcomes while we are distracted. This is because our time to react or compensate is very short while taxiing. Ideally, we should stop the aircraft, set the parking brake, and sort out the disruption. More often, however, we optimistically press forward with the expectation that we can successfully recover from the disruption without stopping. Thanks to our experience and proficiency, we usually succeed. We might even consider ourselves lucky that we dodged that bullet. What we fail to recognize is that it was that discretionary task that set the stage in the first place. In the end, it is best not to rely on luck or perfect performance to recover from a situation that only became serious because our attention was unnecessarily diverted.</p>
<p><strong>Restoring the desired attention level during aircraft movement:</strong> Master Class pilots practice AOV attention discipline and self-evaluation. Sure, we have the skills to taxi the aircraft while quickly completing discretionary clean-up tasks, but we choose not to. Anytime the aircraft is dynamically moving, we apply high levels of attention focus. Even though we believe that we can multitask, we choose not to attempt it. In reality, there is always plenty of time to perform clean-up tasks after completing all of our required gate arrival tasks. We just need to make a conscious commitment to maintain appropriate attention standards and to model those standards with other crewmembers. To counter drift, we constantly assess our performance and look for signs that our personal practice may be drifting. This begins with seemingly harmless slips and shortcuts that ease our workload. Through introspection, we detect and correct these early deviations to restore our commitment to Master Class standards.</p>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-24-1-the-difference-of-workload-priorities-between-taxi-out-and-taxi-in/#post-29</guid>
                    </item>
				                    <item>
                        <title>Topic 23-9: The Vulnerabilities of the Psychological Letdown</title>
                        <link>https://www.masterairlinepilot.com/community/discussion-topics/topic-23-9-the-vulnerabilities-of-the-psychological-letdow/#post-28</link>
                        <pubDate>Mon, 18 Dec 2023 22:27:05 +0000</pubDate>
                        <description><![CDATA[Our minds crave excitement and stimulation, but we can’t sustain high levels of excitement for prolonged periods. Following minutes of high-intensity stimulus, our minds want to relax. As we...]]></description>
                        <content:encoded><![CDATA[<p>      Our minds crave excitement and stimulation, but we can’t sustain high levels of excitement for prolonged periods. Following minutes of high-intensity stimulus, our minds want to relax. As we enter this psychological letdown state, we lower our guard against potentially surprising or startling events. Because of this, the shock effect is stronger. The startle effect is intensified and it takes us longer for us to recover from disruptions. Film makers exploit this effect in their plot lines. After a high-energy chase scene, they insert a low-intensity lull. As our minds sense that the exciting part is over, we relax our tense shoulders, sink back into our seats, grab another handful of popcorn, and lower our guard. Then, film director hits us with an unexpected zinger. It really shocks us.</p>
<p>      As pilots, we can experience this psychological effect following any high task-loaded flight segment (like shortly after takeoff, after landing, or following an intense inflight event). Let’s examine a typical example of a taxi-in event following a challenging approach and landing. As professional pilots, we are quite skillful at raising our attention level to cope with the challenges of a difficult approach and landing in adverse conditions. Bumping down a turbulent final, concentrating on our instruments, we feel rewarded to see the runway lights emerge from the gloom. We land, slow down, exit the runway, and immediately feel the urge to relax. We did it. We made it in. We unknot our shoulders and sit back in the seat while anticipating a routine taxi-in to the gate.</p>
<p>      The speed at which we can settle into this relaxed state seems to be related to our experience level. The more flight time we have and the more proficient we have become, the easier it is for us to relax following a challenging approach. When we combine this with expectation bias (expecting a non-challenging taxi-in), the stage is set for us to make an error, miss an error, or fail to recover quickly from a disruption. Following is an example event from a B-787 crew reported in the NASA ASRS program (report #1998466).</p>
<p><span style="color: #993300">Captain’s report: After landing Runway XL uneventfully, I exited runway and began taxiing as instructed by ground control. While beginning a slight left turn I noticed the aircraft didn’t respond to my tiller input. I applied brakes and the brakes did not actuate. I looked up at the hydraulics switches and noticed they were turned off. I called out to turn the hydraulic pumps back on and the First Officer complied. Once the pumps were activated, the brakes and tiller began to work again. The First Officer apologized and said he had a “brain fart”. The nose wheel crossed the double yellow line and towing was required. There was no aircraft damage and no taxiway lighting or equipment damage and there were no injuries.</span></p>
<p><span style="color: #993300">First Officer’s report: Ferry flight from ZZZ1 with no passengers – upon turning onto Taxiway X after landing at ZZZ, the First Officer inadvertently turned off the hydraulic pumps prior to the southbound turn. Upon realizing the error, the pumps were turned back on, but not before the nose wheel was outside the double solid line. There were no injuries or damage.</span></p>
<p>      While this report doesn’t say, it is likely that the crew used engine anti-icing on final. Exiting the runway and experiencing a natural psychological letdown, the FO probably mistook the hydraulic switches for the engine anti-ice switches (identically shaped and colored switches on an adjoining panel) and turned them off instead. The error should have generated a Master Caution light, but their report doesn’t mention one. If it did, we can add that to the list of indications missed during their psychological letdown. Additionally, the Captain appeared to be a bit slow to recognize the FO’s error as the aircraft veered off of the taxiway centerline. Luckily, they recognized the cause and got the hydraulics on in time to prevent a taxiway excursion. Unfortunately, they still required a tow-in because they felt that they were too close to the taxiway edge to maneuver safely back to the centerline.</p>
<p>      Other errors that can occur during this vulnerable relaxed state are taxiway clearance errors, configuration errors, and distraction events while performing discretionary flightdeck clean-up tasks. We rarely attribute these events to the psychological letdown following high task-loading flight segments. In the end, our minds remain vulnerable to these mental lapses. We each should study this effect in ourselves and within our safety programs. Following your next challenging approach, notice whether you have a tendency to relax too quickly or too much. Within our safety programs, we should check whether error events rise following intense flight phases. I predict that we will discover that this is a worthy topic for self-reflection and a continuation training module.</p>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/discussion-topics/topic-23-9-the-vulnerabilities-of-the-psychological-letdow/#post-28</guid>
                    </item>
				                    <item>
                        <title>FAA looks to require black boxes record 25 hours of data</title>
                        <link>https://www.masterairlinepilot.com/community/news-topics-in-aviation/faa-looks-to-require-black-boxes-record-25-hours-of-data/#post-27</link>
                        <pubDate>Thu, 30 Nov 2023 23:50:59 +0000</pubDate>
                        <description><![CDATA[FAA looks to require black boxes record 25 hours of data
By Amanda Maile
November 30, 2023, 12:48 PM

The U.S. will move ...]]></description>
                        <content:encoded><![CDATA[<h1>FAA looks to require black boxes record 25 hours of data</h1>
<p>By <a href="https://abcnews.go.com/author/amanda_maile" target="_blank" rel="noopener">Amanda Maile</a></p>
<p>November 30, 2023, 12:48 PM</p>
<p>The U.S. will move to require new planes to be equipped with cockpit voice recorders, or CVRs, to <a href="https://abcnews.go.com/Politics/faa-requiring-airplane-black-boxes-record-25-hours/story?id=97919562" target="_blank" rel="noopener">capture 25 hours of information</a>. The move will help prevent critical data from being overwritten after an incident in which the plane keeps flying for more than two hours.</p>
<p>The proposed rule, announced by the Federal Aviation Administration on Thursday, comes after a slew of close calls earlier this year involving commercial flights.</p>
<p>Current regulations require CVRs, commonly referred to as black boxes, to record for at least two hours at a time, after which new data begins to overwrite the previous recording.</p>
<p>The new rule, if enacted, would require certain newly manufactured aircraft -- including commercial planes -- to have CVRs that record 25 hours of information.</p>
<p>"This rule will give us substantially more data to identify the causes of incidents and help prevent them in the future," FAA Administrator Mike Whitaker said.</p>
<p>CVR data is not available in at least six of the close calls involving commercial planes in the U.S. being investigated by the FAA and National Transportation Safety Board.</p>
<p>The public will have 60 days to comment on the rule after it's entered into the Federal Register. If enacted, the requirement would go into effect one year after the final rule publishes.</p>
<p>The NTSB has been pushing for this requirement since 2018.</p>
<p>https://abcnews.go.com/US/faa-require-black-boxes-record-25-hours-data/story?id=105281775#:~:text=The%20move%20comes%20after%20a%20slew%20of%20close%20calls%20involving%20commercial%20flights.&amp;text=The%20U.S.%20will%20move%20to,capture%2025%20hours%20of%20information.</p>]]></content:encoded>
						                            <category domain="https://www.masterairlinepilot.com/community/"></category>                        <dc:creator>Steve Swauger</dc:creator>
                        <guid isPermaLink="true">https://www.masterairlinepilot.com/community/news-topics-in-aviation/faa-looks-to-require-black-boxes-record-25-hours-of-data/#post-27</guid>
                    </item>
							        </channel>
        </rss>
		