Gary Brown

  • Principal Safety Engineer & AI Referent
  • Airbus Commercial SAS

Gary is a Chartered Engineer with a Master’s Degree with Distinction in Safety Critical Engineering from the University of York. He performed the role of Aircraft Safety Director for 5 years on Airbus’s own BelugaXL development. In addition, he was the Aircraft Safety Manager for 4 years on the military A400M, and was heavily involved in the Certification of the new A321neo derivative, the eXtra Long Range (XLR), to which an additional Rear Centre Tank (RCT) is added.
Gary is currently engaged on the A350 Ultra Long Range, again featuring an RCT, as well as supporting the addition of a Freighter variant to the A350 family. He covers all Airbus commercial aircraft as the safety approver for all systems at Airbus Filton, UK and Getafe, Spain.
Gary regularly teaches at Cranfield University on AI, OEM and UERF PRA, and speaks at events on AI safety with CURe. He is the technical secretary of the SAE G-34 AI committee, as well as a voting member of AE-7F (Hydrogen and Fuel Cells), WG-117 (software) and WG-63 (safety). Gary is also a member of the IDCA (Independent Data Consortium for Aviation).

Presentations outside of Airbus and Cranfield University:
1. Colloque Intelligence Artificielle de l’AAE – DGAC Paris, 13th Nov 24 – How to safely integrate an AI/ML use function within an aviation CFR/CS25 platform that can be approved
2. RAeS Toulouse branch – 16th Oct 24 – A Theoretical Use Case: Object Ground Taxi Detection with CURe
3. Safety Critical Systems Club (SCSC) UK – Certification Use Reliance (CURe): an Aircraft Level view – at Developing Safe AI, BCS Copthall Avenue, London, 27th June 24
4. IDCA (virtual) – 3rd April 24 – End-to-End Built-in Safety of an ML product system integration
5. AeroTalks, Charlotte, US – 14th March 24 – End-to-End Built-in Safety of an ML product system integration
6. FAA AI Tech Talks (virtual) – Jan 24 – AI Safety Assessment approach
7. FAA AI Tech Talks (virtual) – 20th Sept 23 – CURe: an Aircraft Level view of an AI/ML Part 21 integration
8. SafeComp23, Toulouse – Sept 23 – CURe

Sessions

  • AI and ML in Testing

    AI and ML offer great opportunities to improve the speed and quality of testing when large amounts of data require analysis, and they have highlighted the importance of a robust testing strategy for AI systems. While AI can effectively analyze and digest large datasets, ensuring the quality of the AI algorithm and the accuracy of its model is paramount. Factors such as avoiding biases, balancing accuracy with computational costs, and considering on-board processing limitations must be carefully addressed. How can AI assist when an aircraft is in flight and no longer ‘connected’? Qualifying an AI tool involves defining parameters, setting targets, and identifying patterns within the data. How can accuracy be achieved with several AI algorithms running concurrently?

  • The role AI/ML and big data play in creating an operations-to-design/engineering safety loop